# logs
*.log
logs/
runs/
# build / install artifacts
rocblas-install/
build/
install/
# data
Data/
*.mat
# cache
__pycache__/
.cache/
MIT License
Copyright (c) 2018 Maziar Raissi
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
# https://github.com/maziarraissi/HFM
## Paper
Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations
- https://arxiv.org/abs/1808.04327
## Model Overview
HFM (Hidden Fluid Mechanics) is a physics-informed deep learning framework capable of encoding an important class of physical laws governing fluid motion, namely the Navier-Stokes equations. Specifically, it leverages the underlying conservation laws (for mass, momentum, and energy) to infer hidden quantities of interest, such as velocity and pressure fields, merely from spatio-temporal visualizations of a passive scalar (e.g., dye or smoke) transported in arbitrarily complex domains (e.g., human arteries or brain aneurysms). The approach to this data assimilation problem is distinctive in that the algorithm is agnostic to the geometry and to the initial and boundary conditions. This makes HFM highly flexible in choosing the spatio-temporal domain of interest for data acquisition, as well as for subsequent training and prediction. Consequently, HFM can make predictions in cases that neither a pure machine learning strategy nor a conventional scientific computing approach can reproduce. For several benchmark problems motivated by real-world applications, the algorithm accurately predicts the pressure and velocity fields in both two- and three-dimensional flows. These results demonstrate that this relatively simple methodology can be applied to physical and biomedical problems to extract valuable quantitative information (e.g., lift and drag forces, or wall shear stresses in arteries) for which direct measurements may not be possible.
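The description above boils down to a single training objective: fit the observed passive scalar while penalizing the residuals of the governing equations. A minimal sketch of that objective (names and shapes here are illustrative, not the repository's actual API; the real loss is built in `Source/`):

```python
import numpy as np

def hfm_loss(c_pred, c_obs, pde_residuals):
    """Illustrative HFM objective: mean-squared data misfit on the passive
    scalar c, plus the mean-squared residual of each encoded PDE."""
    data_loss = np.mean((c_pred - c_obs) ** 2)
    physics_loss = sum(np.mean(r ** 2) for r in pde_residuals)
    return data_loss + physics_loss

# Toy numbers: a perfect scalar fit with small leftover PDE residuals.
c = np.zeros(4)
residuals = [np.full(4, 0.1), np.full(4, 0.2)]
loss = hfm_loss(c, c, residuals)
print(loss)
```

Note there are no explicit boundary- or initial-condition terms: that is what makes the method agnostic to geometry and conditions.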
## Environment Setup
### Docker (Option 1)
```bash
docker pull image.sourcefind.cn:5000/dcu/admin/base/tensorflow:2.13.1-py3.10-dtk24.04.3-ubuntu20.04
docker run --shm-size 50g --network=host --name=binary_distinguish_GRB_by_DL --privileged --device=/dev/kfd --device=/dev/dri --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -v <absolute path to the project>:/home/ -v /opt/hyhal:/opt/hyhal:ro -it <your IMAGE ID> bash
pip install scipy -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
```
### Dockerfile (Option 2)
```bash
docker build -t <IMAGE_NAME>:<TAG> .
docker run --shm-size 50g --network=host --name=binary_distinguish_GRB_by_DL --privileged --device=/dev/kfd --device=/dev/dri --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -v <absolute path to the project>:/home/ -v /opt/hyhal:/opt/hyhal:ro -it <your IMAGE ID> bash
pip install scipy -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
```
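With either setup path, it may be worth confirming that the extra dependency imports cleanly inside the container before launching a long training run:

```shell
# Sanity check: scipy (installed above) and numpy should both import.
python -c "import scipy, numpy; print(scipy.__version__, numpy.__version__)"
```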
## Dataset
[Data download link](https://o365coloradoedu-my.sharepoint.com/:f:/g/personal/mara4513_colorado_edu/EnMZcTjA6BVBjovGVJT3hk4BIemGxYF9lO6ry0SyizIjoQ?e=fuHKi2)
```
Data
├── Aneurysm3D.mat
├── Cylinder2D.mat
├── Cylinder2D_flower.mat
├── Cylinder2D_Re200Pec200_Dirichlet_Streaks_Forces.mat
├── Cylinder2D_Re200Pec200_Neumann_Streaks_Forces.mat
├── Cylinder2D_Re200Pec2000_Dirichlet_Streaks.mat
└── ......
```
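Each `.mat` file stores time stamps (`t_star`, T x 1), point coordinates (`x_star`/`y_star`, N x 1), and field snapshots (`U_star`, `C_star`, ..., N x T). The training scripts in `Source/` first tile these into matching N x T grids; a runnable sketch of that rearrangement with toy sizes:

```python
import numpy as np

# Mimic the "Rearrange Data" step from Source/Aneurysm3D.py with toy sizes.
T, N = 3, 4
t_star = np.arange(T, dtype=float).reshape(T, 1)  # T x 1 time stamps
x_star = np.arange(N, dtype=float).reshape(N, 1)  # N x 1 coordinates

T_star = np.tile(t_star, (1, N)).T  # N x T: each row holds all time stamps
X_star = np.tile(x_star, (1, T))    # N x T: each column holds all coordinates
print(T_star.shape, X_star.shape)
```

Flattening matching slices of these grids yields the (t, x, ...) column vectors that the networks consume.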
## Training
- DCU: K100 AI
```bash
cd Source
python Cylinder2D.py > Cylinder2D_stdout.txt
python Stenosis2D.py > Stenosis2D_stdout.txt
python Aneurysm3D.py > Aneurysm3D_stdout.txt
python Cylinder2D_Pec_Re.py > Cylinder2D_Pec_Re_stdout.txt
python Stenosis2D_Pec_Re.py > Stenosis2D_Pec_Re_stdout.txt
python Davinci.py > Davinci_stdout.txt
python Cylinder3D.py > Cylinder3D_stdout.txt
python Cylinder2D_flower.py > Cylinder2D_flower_stdout.txt
python Cylinder2D_No_Slip.py > Cylinder2D_No_Slip_stdout.txt
python Cylinder2D_flower_convergence_plot.py > Cylinder2D_flower_convergence_plot_stdout.txt
python Aneurysm3D_Wall_Stresses.py > Aneurysm3D_Wall_Stresses_stdout.txt
```
Note: for multi-GPU and other related training runs, see the Scripts directory.
## Inference
Not available yet
## Application Scenarios
### Algorithm Category
`AI for science`
### Key Application Industries
`Research, Education`
## References
* https://github.com/maziarraissi/HFM
[![DOI](https://zenodo.org/badge/152933799.svg)](https://zenodo.org/badge/latestdoi/152933799)
# [Hidden Fluid Mechanics](https://maziarraissi.github.io/HFM/)
We present [hidden fluid mechanics](https://science.sciencemag.org/content/367/6481/1026.abstract) (HFM), a physics informed deep learning framework capable of encoding an important class of physical laws governing fluid motions, namely the Navier-Stokes equations. In particular, we seek to leverage the underlying conservation laws (i.e., for mass, momentum, and energy) to infer hidden quantities of interest such as velocity and pressure fields merely from spatio-temporal visualizations of a passive scalar (e.g., dye or smoke), transported in arbitrarily complex domains (e.g., in human arteries or brain aneurysms). Our approach towards solving the aforementioned data assimilation problem is unique as we design an algorithm that is agnostic to the geometry or the initial and boundary conditions. This makes HFM highly flexible in choosing the spatio-temporal domain of interest for data acquisition as well as subsequent training and predictions. Consequently, HFM can make predictions in cases that a pure machine learning strategy or a mere scientific computing approach simply cannot reproduce. The proposed algorithm achieves accurate predictions of the pressure and velocity fields in both two- and three-dimensional flows for several benchmark problems motivated by real-world applications. Our results demonstrate that this relatively simple methodology can be used in physical and biomedical problems to extract valuable quantitative information (e.g., lift and drag forces or wall shear stresses in arteries) for which direct measurements may not be possible.
For more information, please refer to [the project page](https://maziarraissi.github.io/HFM/) and the following publications:
- Raissi, Maziar, Alireza Yazdani, and George Em Karniadakis. "[Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations](https://science.sciencemag.org/content/367/6481/1026.abstract)." Science 367.6481 (2020): 1026-1030.
- Raissi, Maziar, Alireza Yazdani, and George Em Karniadakis. "[Hidden Fluid Mechanics: A Navier-Stokes Informed Deep Learning Framework for Assimilating Flow Visualization Data](https://arxiv.org/abs/1808.04327)." arXiv preprint arXiv:1808.04327 (2018).
## Note
The required data (to be copied in the Data directory) and some Matlab scripts (to be copied in the Figures directory) for plotting purposes are provided in the following link:
- [Data and Figures](https://o365coloradoedu-my.sharepoint.com/:f:/g/personal/mara4513_colorado_edu/EnMZcTjA6BVBjovGVJT3hk4BIemGxYF9lO6ry0SyizIjoQ?e=fuHKi2)
In addition to the Data and Figures directories, the Results folder is currently empty and will be automatically populated after running the corresponding examples provided in the Source and Scripts directories.
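For reference, the Source scripts save their predictions with `scipy.io.savemat` under field names such as `C_pred` and `U_pred` (see e.g. `Aneurysm3D.py`). A self-contained round-trip of that result format, writing to a temporary file instead of the Results directory:

```python
import os
import tempfile

import numpy as np
import scipy.io

# Dummy predictions with the field names used by the Source scripts.
preds = {k: np.zeros((5, 2)) for k in ("C_pred", "U_pred", "V_pred", "W_pred", "P_pred")}

path = os.path.join(tempfile.mkdtemp(), "Aneurysm3D_results_demo.mat")
scipy.io.savemat(path, preds)

loaded = scipy.io.loadmat(path)
keys = sorted(k for k in loaded if not k.startswith("__"))
print(keys)
```

The same pattern applies to reading any file the examples drop into Results for post-processing in Matlab or Python.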
## Citation
```bibtex
@article{raissi2020hidden,
  title={Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations},
  author={Raissi, Maziar and Yazdani, Alireza and Karniadakis, George Em},
  journal={Science},
  volume={367},
  number={6481},
  pages={1026--1030},
  year={2020},
  publisher={American Association for the Advancement of Science}
}

@article{raissi2018hidden,
  title={Hidden Fluid Mechanics: A Navier-Stokes Informed Deep Learning Framework for Assimilating Flow Visualization Data},
  author={Raissi, Maziar and Yazdani, Alireza and Karniadakis, George Em},
  journal={arXiv preprint arXiv:1808.04327},
  year={2018}
}
```
#!/bin/bash
FAIL=0
HIP_VISIBLE_DEVICES=0 python Cylinder2D_flower_systematic.py 201 15000 > Cylinder2D_flower_201_15000_stdout.txt &
HIP_VISIBLE_DEVICES=1 python Cylinder2D_flower_systematic.py 101 15000 > Cylinder2D_flower_101_15000_stdout.txt &
HIP_VISIBLE_DEVICES=2 python Cylinder2D_flower_systematic.py 51 15000 > Cylinder2D_flower_51_15000_stdout.txt &
HIP_VISIBLE_DEVICES=3 python Cylinder2D_flower_systematic.py 26 15000 > Cylinder2D_flower_26_15000_stdout.txt &
HIP_VISIBLE_DEVICES=4 python Cylinder2D_flower_systematic.py 201 10000 > Cylinder2D_flower_201_10000_stdout.txt &
HIP_VISIBLE_DEVICES=5 python Cylinder2D_flower_systematic.py 101 10000 > Cylinder2D_flower_101_10000_stdout.txt &
HIP_VISIBLE_DEVICES=6 python Cylinder2D_flower_systematic.py 51 10000 > Cylinder2D_flower_51_10000_stdout.txt &
HIP_VISIBLE_DEVICES=7 python Cylinder2D_flower_systematic.py 26 10000 > Cylinder2D_flower_26_10000_stdout.txt &
HIP_VISIBLE_DEVICES=8 python Cylinder2D_flower_systematic.py 201 5000 > Cylinder2D_flower_201_5000_stdout.txt &
HIP_VISIBLE_DEVICES=9 python Cylinder2D_flower_systematic.py 101 5000 > Cylinder2D_flower_101_5000_stdout.txt &
HIP_VISIBLE_DEVICES=10 python Cylinder2D_flower_systematic.py 51 5000 > Cylinder2D_flower_51_5000_stdout.txt &
HIP_VISIBLE_DEVICES=11 python Cylinder2D_flower_systematic.py 26 5000 > Cylinder2D_flower_26_5000_stdout.txt &
HIP_VISIBLE_DEVICES=12 python Cylinder2D_flower_systematic.py 201 2500 > Cylinder2D_flower_201_2500_stdout.txt &
HIP_VISIBLE_DEVICES=13 python Cylinder2D_flower_systematic.py 101 2500 > Cylinder2D_flower_101_2500_stdout.txt &
HIP_VISIBLE_DEVICES=14 python Cylinder2D_flower_systematic.py 51 2500 > Cylinder2D_flower_51_2500_stdout.txt &
HIP_VISIBLE_DEVICES=15 python Cylinder2D_flower_systematic.py 26 2500 > Cylinder2D_flower_26_2500_stdout.txt &
for job in `jobs -p`
do
echo $job
wait $job || let "FAIL+=1"
done
echo $FAIL
if [ "$FAIL" == "0" ];
then
echo "YAY!"
else
echo "FAIL! ($FAIL)"
fi
#!/bin/bash
FAIL=0
HIP_VISIBLE_DEVICES=0 python Cylinder2D_flower_systematic.py 26 2500 > Cylinder2D_flower_26_2500_stdout.txt &
HIP_VISIBLE_DEVICES=1 python Cylinder2D_flower_systematic.py 13 2500 > Cylinder2D_flower_13_2500_stdout.txt &
HIP_VISIBLE_DEVICES=2 python Cylinder2D_flower_systematic.py 7 2500 > Cylinder2D_flower_7_2500_stdout.txt &
HIP_VISIBLE_DEVICES=3 python Cylinder2D_flower_systematic.py 3 2500 > Cylinder2D_flower_3_2500_stdout.txt &
HIP_VISIBLE_DEVICES=4 python Cylinder2D_flower_systematic.py 26 1500 > Cylinder2D_flower_26_1500_stdout.txt &
HIP_VISIBLE_DEVICES=5 python Cylinder2D_flower_systematic.py 13 1500 > Cylinder2D_flower_13_1500_stdout.txt &
HIP_VISIBLE_DEVICES=6 python Cylinder2D_flower_systematic.py 7 1500 > Cylinder2D_flower_7_1500_stdout.txt &
HIP_VISIBLE_DEVICES=7 python Cylinder2D_flower_systematic.py 3 1500 > Cylinder2D_flower_3_1500_stdout.txt &
HIP_VISIBLE_DEVICES=8 python Cylinder2D_flower_systematic.py 26 500 > Cylinder2D_flower_26_500_stdout.txt &
HIP_VISIBLE_DEVICES=9 python Cylinder2D_flower_systematic.py 13 500 > Cylinder2D_flower_13_500_stdout.txt &
HIP_VISIBLE_DEVICES=10 python Cylinder2D_flower_systematic.py 7 500 > Cylinder2D_flower_7_500_stdout.txt &
HIP_VISIBLE_DEVICES=11 python Cylinder2D_flower_systematic.py 3 500 > Cylinder2D_flower_3_500_stdout.txt &
HIP_VISIBLE_DEVICES=12 python Cylinder2D_flower_systematic.py 26 250 > Cylinder2D_flower_26_250_stdout.txt &
HIP_VISIBLE_DEVICES=13 python Cylinder2D_flower_systematic.py 13 250 > Cylinder2D_flower_13_250_stdout.txt &
HIP_VISIBLE_DEVICES=14 python Cylinder2D_flower_systematic.py 7 250 > Cylinder2D_flower_7_250_stdout.txt &
HIP_VISIBLE_DEVICES=15 python Cylinder2D_flower_systematic.py 3 250 > Cylinder2D_flower_3_250_stdout.txt &
for job in `jobs -p`
do
echo $job
wait $job || let "FAIL+=1"
done
echo $FAIL
if [ "$FAIL" == "0" ];
then
echo "YAY!"
else
echo "FAIL! ($FAIL)"
fi
#!/bin/bash
FAIL=0
HIP_VISIBLE_DEVICES=0 python Cylinder2D_flower_systematic_noise.py 0.01 > Cylinder2D_flower_noise_01_stdout.txt &
HIP_VISIBLE_DEVICES=1 python Cylinder2D_flower_systematic_noise.py 0.02 > Cylinder2D_flower_noise_02_stdout.txt &
HIP_VISIBLE_DEVICES=2 python Cylinder2D_flower_systematic_noise.py 0.04 > Cylinder2D_flower_noise_04_stdout.txt &
HIP_VISIBLE_DEVICES=3 python Cylinder2D_flower_systematic_noise.py 0.06 > Cylinder2D_flower_noise_06_stdout.txt &
HIP_VISIBLE_DEVICES=4 python Cylinder2D_flower_systematic_noise.py 0.08 > Cylinder2D_flower_noise_08_stdout.txt &
HIP_VISIBLE_DEVICES=5 python Cylinder2D_flower_systematic_noise.py 0.10 > Cylinder2D_flower_noise_10_stdout.txt &
HIP_VISIBLE_DEVICES=6 python Cylinder2D_flower_systematic_noise.py 0.12 > Cylinder2D_flower_noise_12_stdout.txt &
HIP_VISIBLE_DEVICES=7 python Cylinder2D_flower_systematic_noise.py 0.14 > Cylinder2D_flower_noise_14_stdout.txt &
HIP_VISIBLE_DEVICES=8 python Cylinder2D_flower_systematic_noise.py 0.16 > Cylinder2D_flower_noise_16_stdout.txt &
HIP_VISIBLE_DEVICES=9 python Cylinder2D_flower_systematic_noise.py 0.18 > Cylinder2D_flower_noise_18_stdout.txt &
HIP_VISIBLE_DEVICES=10 python Cylinder2D_flower_systematic_noise.py 0.20 > Cylinder2D_flower_noise_20_stdout.txt &
HIP_VISIBLE_DEVICES=11 python Cylinder2D_flower_systematic_noise.py 0.22 > Cylinder2D_flower_noise_22_stdout.txt &
HIP_VISIBLE_DEVICES=12 python Cylinder2D_flower_systematic_noise.py 0.24 > Cylinder2D_flower_noise_24_stdout.txt &
HIP_VISIBLE_DEVICES=13 python Cylinder2D_flower_systematic_noise.py 0.26 > Cylinder2D_flower_noise_26_stdout.txt &
HIP_VISIBLE_DEVICES=14 python Cylinder2D_flower_systematic_noise.py 0.28 > Cylinder2D_flower_noise_28_stdout.txt &
HIP_VISIBLE_DEVICES=15 python Cylinder2D_flower_systematic_noise.py 0.30 > Cylinder2D_flower_noise_30_stdout.txt &
for job in `jobs -p`
do
echo $job
wait $job || let "FAIL+=1"
done
echo $FAIL
if [ "$FAIL" == "0" ];
then
echo "YAY!"
else
echo "FAIL! ($FAIL)"
fi
#!/bin/bash
FAIL=0
HIP_VISIBLE_DEVICES=0 python Cylinder2D_flower_systematic_noise.py 0.32 > Cylinder2D_flower_noise_32_stdout.txt &
HIP_VISIBLE_DEVICES=1 python Cylinder2D_flower_systematic_noise.py 0.34 > Cylinder2D_flower_noise_34_stdout.txt &
HIP_VISIBLE_DEVICES=2 python Cylinder2D_flower_systematic_noise.py 0.36 > Cylinder2D_flower_noise_36_stdout.txt &
HIP_VISIBLE_DEVICES=3 python Cylinder2D_flower_systematic_noise.py 0.38 > Cylinder2D_flower_noise_38_stdout.txt &
HIP_VISIBLE_DEVICES=4 python Cylinder2D_flower_systematic_noise.py 0.40 > Cylinder2D_flower_noise_40_stdout.txt &
HIP_VISIBLE_DEVICES=5 python Cylinder2D_flower_systematic_noise.py 0.42 > Cylinder2D_flower_noise_42_stdout.txt &
HIP_VISIBLE_DEVICES=6 python Cylinder2D_flower_systematic_noise.py 0.44 > Cylinder2D_flower_noise_44_stdout.txt &
HIP_VISIBLE_DEVICES=7 python Cylinder2D_flower_systematic_noise.py 0.46 > Cylinder2D_flower_noise_46_stdout.txt &
HIP_VISIBLE_DEVICES=8 python Cylinder2D_flower_systematic_noise.py 0.48 > Cylinder2D_flower_noise_48_stdout.txt &
HIP_VISIBLE_DEVICES=9 python Cylinder2D_flower_systematic_noise.py 0.50 > Cylinder2D_flower_noise_50_stdout.txt &
HIP_VISIBLE_DEVICES=10 python Cylinder2D_flower_systematic_noise.py 0.52 > Cylinder2D_flower_noise_52_stdout.txt &
HIP_VISIBLE_DEVICES=11 python Cylinder2D_flower_systematic_noise.py 0.54 > Cylinder2D_flower_noise_54_stdout.txt &
HIP_VISIBLE_DEVICES=12 python Cylinder2D_flower_systematic_noise.py 0.56 > Cylinder2D_flower_noise_56_stdout.txt &
HIP_VISIBLE_DEVICES=13 python Cylinder2D_flower_systematic_noise.py 0.58 > Cylinder2D_flower_noise_58_stdout.txt &
HIP_VISIBLE_DEVICES=14 python Cylinder2D_flower_systematic_noise.py 0.60 > Cylinder2D_flower_noise_60_stdout.txt &
HIP_VISIBLE_DEVICES=15 python Cylinder2D_flower_systematic_noise.py 0.62 > Cylinder2D_flower_noise_62_stdout.txt &
for job in `jobs -p`
do
echo $job
wait $job || let "FAIL+=1"
done
echo $FAIL
if [ "$FAIL" == "0" ];
then
echo "YAY!"
else
echo "FAIL! ($FAIL)"
fi
#!/bin/bash
FAIL=0
HIP_VISIBLE_DEVICES=0 python Cylinder2D_flower_systematic_noise.py 0.64 > Cylinder2D_flower_noise_64_stdout.txt &
HIP_VISIBLE_DEVICES=1 python Cylinder2D_flower_systematic_noise.py 0.66 > Cylinder2D_flower_noise_66_stdout.txt &
HIP_VISIBLE_DEVICES=2 python Cylinder2D_flower_systematic_noise.py 0.68 > Cylinder2D_flower_noise_68_stdout.txt &
HIP_VISIBLE_DEVICES=3 python Cylinder2D_flower_systematic_noise.py 0.70 > Cylinder2D_flower_noise_70_stdout.txt &
HIP_VISIBLE_DEVICES=4 python Cylinder2D_flower_systematic_noise.py 0.72 > Cylinder2D_flower_noise_72_stdout.txt &
HIP_VISIBLE_DEVICES=5 python Cylinder2D_flower_systematic_noise.py 0.74 > Cylinder2D_flower_noise_74_stdout.txt &
HIP_VISIBLE_DEVICES=6 python Cylinder2D_flower_systematic_noise.py 0.76 > Cylinder2D_flower_noise_76_stdout.txt &
HIP_VISIBLE_DEVICES=7 python Cylinder2D_flower_systematic_noise.py 0.78 > Cylinder2D_flower_noise_78_stdout.txt &
HIP_VISIBLE_DEVICES=8 python Cylinder2D_flower_systematic_noise.py 0.80 > Cylinder2D_flower_noise_80_stdout.txt &
HIP_VISIBLE_DEVICES=9 python Cylinder2D_flower_systematic_noise.py 0.82 > Cylinder2D_flower_noise_82_stdout.txt &
HIP_VISIBLE_DEVICES=10 python Cylinder2D_flower_systematic_noise.py 0.84 > Cylinder2D_flower_noise_84_stdout.txt &
HIP_VISIBLE_DEVICES=11 python Cylinder2D_flower_systematic_noise.py 0.86 > Cylinder2D_flower_noise_86_stdout.txt &
HIP_VISIBLE_DEVICES=12 python Cylinder2D_flower_systematic_noise.py 0.88 > Cylinder2D_flower_noise_88_stdout.txt &
HIP_VISIBLE_DEVICES=13 python Cylinder2D_flower_systematic_noise.py 0.90 > Cylinder2D_flower_noise_90_stdout.txt &
HIP_VISIBLE_DEVICES=14 python Cylinder2D_flower_systematic_noise.py 0.92 > Cylinder2D_flower_noise_92_stdout.txt &
HIP_VISIBLE_DEVICES=15 python Cylinder2D_flower_systematic_noise.py 0.94 > Cylinder2D_flower_noise_94_stdout.txt &
for job in `jobs -p`
do
echo $job
wait $job || let "FAIL+=1"
done
echo $FAIL
if [ "$FAIL" == "0" ];
then
echo "YAY!"
else
echo "FAIL! ($FAIL)"
fi
#!/bin/bash
FAIL=0
HIP_VISIBLE_DEVICES=0 python Cylinder2D_flower_systematic_noise.py 0.98 > Cylinder2D_flower_noise_98_stdout.txt &
HIP_VISIBLE_DEVICES=1 python Cylinder2D_flower_systematic_noise.py 1.00 > Cylinder2D_flower_noise_100_stdout.txt &
HIP_VISIBLE_DEVICES=2 python Cylinder2D_flower_systematic_noise.py 1.02 > Cylinder2D_flower_noise_102_stdout.txt &
HIP_VISIBLE_DEVICES=3 python Cylinder2D_flower_systematic_noise.py 1.04 > Cylinder2D_flower_noise_104_stdout.txt &
HIP_VISIBLE_DEVICES=4 python Cylinder2D_flower_systematic_noise.py 1.06 > Cylinder2D_flower_noise_106_stdout.txt &
HIP_VISIBLE_DEVICES=5 python Cylinder2D_flower_systematic_noise.py 1.08 > Cylinder2D_flower_noise_108_stdout.txt &
HIP_VISIBLE_DEVICES=6 python Cylinder2D_flower_systematic_noise.py 1.10 > Cylinder2D_flower_noise_110_stdout.txt &
HIP_VISIBLE_DEVICES=7 python Cylinder2D_flower_systematic_noise.py 1.12 > Cylinder2D_flower_noise_112_stdout.txt &
HIP_VISIBLE_DEVICES=8 python Cylinder2D_flower_systematic_noise.py 1.14 > Cylinder2D_flower_noise_114_stdout.txt &
HIP_VISIBLE_DEVICES=9 python Cylinder2D_flower_systematic_noise.py 1.16 > Cylinder2D_flower_noise_116_stdout.txt &
HIP_VISIBLE_DEVICES=10 python Cylinder2D_flower_systematic_noise.py 1.18 > Cylinder2D_flower_noise_118_stdout.txt &
HIP_VISIBLE_DEVICES=11 python Cylinder2D_flower_systematic_noise.py 1.20 > Cylinder2D_flower_noise_120_stdout.txt &
HIP_VISIBLE_DEVICES=12 python Cylinder2D_flower_systematic_noise.py 1.22 > Cylinder2D_flower_noise_122_stdout.txt &
HIP_VISIBLE_DEVICES=13 python Cylinder2D_flower_systematic_noise.py 1.24 > Cylinder2D_flower_noise_124_stdout.txt &
HIP_VISIBLE_DEVICES=14 python Cylinder2D_flower_systematic_noise.py 1.26 > Cylinder2D_flower_noise_126_stdout.txt &
HIP_VISIBLE_DEVICES=15 python Cylinder2D_flower_systematic_noise.py 1.28 > Cylinder2D_flower_noise_128_stdout.txt &
for job in `jobs -p`
do
echo $job
wait $job || let "FAIL+=1"
done
echo $FAIL
if [ "$FAIL" == "0" ];
then
echo "YAY!"
else
echo "FAIL! ($FAIL)"
fi
#!/bin/bash
FAIL=0
HIP_VISIBLE_DEVICES=0 python Cylinder2D_flower_systematic_noise.py 1.30 > Cylinder2D_flower_noise_130_stdout.txt &
HIP_VISIBLE_DEVICES=1 python Cylinder2D_flower_systematic_noise.py 1.32 > Cylinder2D_flower_noise_132_stdout.txt &
HIP_VISIBLE_DEVICES=2 python Cylinder2D_flower_systematic_noise.py 1.34 > Cylinder2D_flower_noise_134_stdout.txt &
HIP_VISIBLE_DEVICES=3 python Cylinder2D_flower_systematic_noise.py 1.36 > Cylinder2D_flower_noise_136_stdout.txt &
HIP_VISIBLE_DEVICES=4 python Cylinder2D_flower_systematic_noise.py 1.38 > Cylinder2D_flower_noise_138_stdout.txt &
HIP_VISIBLE_DEVICES=5 python Cylinder2D_flower_systematic_noise.py 1.40 > Cylinder2D_flower_noise_140_stdout.txt &
HIP_VISIBLE_DEVICES=6 python Cylinder2D_flower_systematic_noise.py 1.42 > Cylinder2D_flower_noise_142_stdout.txt &
HIP_VISIBLE_DEVICES=7 python Cylinder2D_flower_systematic_noise.py 1.44 > Cylinder2D_flower_noise_144_stdout.txt &
HIP_VISIBLE_DEVICES=8 python Cylinder2D_flower_systematic_noise.py 1.46 > Cylinder2D_flower_noise_146_stdout.txt &
HIP_VISIBLE_DEVICES=9 python Cylinder2D_flower_systematic_noise.py 1.48 > Cylinder2D_flower_noise_148_stdout.txt &
HIP_VISIBLE_DEVICES=10 python Cylinder2D_flower_systematic_noise.py 1.50 > Cylinder2D_flower_noise_150_stdout.txt &
HIP_VISIBLE_DEVICES=11 python Cylinder2D_flower_systematic_noise.py 1.52 > Cylinder2D_flower_noise_152_stdout.txt &
HIP_VISIBLE_DEVICES=12 python Cylinder2D_flower_systematic_noise.py 1.54 > Cylinder2D_flower_noise_154_stdout.txt &
HIP_VISIBLE_DEVICES=13 python Cylinder2D_flower_systematic_noise.py 1.56 > Cylinder2D_flower_noise_156_stdout.txt &
HIP_VISIBLE_DEVICES=14 python Cylinder2D_flower_systematic_noise.py 1.58 > Cylinder2D_flower_noise_158_stdout.txt &
HIP_VISIBLE_DEVICES=15 python Cylinder2D_flower_systematic_noise.py 1.60 > Cylinder2D_flower_noise_160_stdout.txt &
for job in `jobs -p`
do
echo $job
wait $job || let "FAIL+=1"
done
echo $FAIL
if [ "$FAIL" == "0" ];
then
echo "YAY!"
else
echo "FAIL! ($FAIL)"
fi
#!/bin/bash
FAIL=0
HIP_VISIBLE_DEVICES=0 python Stenosis2D.py > Stenosis2D_stdout.txt &
HIP_VISIBLE_DEVICES=1 python Cylinder2D.py > Cylinder2D_stdout.txt &
HIP_VISIBLE_DEVICES=2 python Aneurysm3D.py > Aneurysm3D_stdout.txt &
HIP_VISIBLE_DEVICES=3 python Cylinder2D_Pec_Re.py > Cylinder2D_Pec_Re_stdout.txt &
HIP_VISIBLE_DEVICES=4 python Stenosis2D_Pec_Re.py > Stenosis2D_Pec_Re_stdout.txt &
HIP_VISIBLE_DEVICES=5 python Davinci.py > Davinci_stdout.txt &
HIP_VISIBLE_DEVICES=6 python Cylinder3D.py > Cylinder3D_stdout.txt &
HIP_VISIBLE_DEVICES=7 python Cylinder2D_flower.py > Cylinder2D_flower_stdout.txt &
HIP_VISIBLE_DEVICES=8 python Cylinder2D_No_Slip.py > Cylinder2D_No_Slip_stdout.txt &
HIP_VISIBLE_DEVICES=9 python Cylinder2D_flower_convergence_plot.py > Cylinder2D_flower_convergence_plot_stdout.txt &
HIP_VISIBLE_DEVICES=10 python Aneurysm3D_Wall_Stresses.py > Aneurysm3D_Wall_Stresses_stdout.txt &
for job in `jobs -p`
do
echo $job
wait $job || let "FAIL+=1"
done
echo $FAIL
if [ "$FAIL" == "0" ];
then
echo "YAY!"
else
echo "FAIL! ($FAIL)"
fi
"""
@author: Maziar Raissi
"""
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import numpy as np
import scipy.io
import time
import sys
from utilities import neural_net, Navier_Stokes_3D, \
tf_session, mean_squared_error, relative_error
class HFM(object):
# notational conventions
# _tf: placeholders for input/output data and points used to regress the equations
# _pred: output of neural network
# _eqns: points used to regress the equations
# _data: input-output data
# _star: predictions
def __init__(self, t_data, x_data, y_data, z_data, c_data,
t_eqns, x_eqns, y_eqns, z_eqns,
layers, batch_size,
Pec, Rey):
# specs
self.layers = layers
self.batch_size = batch_size
# flow properties
self.Pec = Pec
self.Rey = Rey
# data
[self.t_data, self.x_data, self.y_data, self.z_data, self.c_data] = [t_data, x_data, y_data, z_data, c_data]
[self.t_eqns, self.x_eqns, self.y_eqns, self.z_eqns] = [t_eqns, x_eqns, y_eqns, z_eqns]
# placeholders
[self.t_data_tf, self.x_data_tf, self.y_data_tf, self.z_data_tf, self.c_data_tf] = [tf.placeholder(tf.float32, shape=[None, 1]) for _ in range(5)]
[self.t_eqns_tf, self.x_eqns_tf, self.y_eqns_tf, self.z_eqns_tf] = [tf.placeholder(tf.float32, shape=[None, 1]) for _ in range(4)]
# physics "uninformed" neural networks
self.net_cuvwp = neural_net(self.t_data, self.x_data, self.y_data, self.z_data, layers = self.layers)
[self.c_data_pred,
self.u_data_pred,
self.v_data_pred,
self.w_data_pred,
self.p_data_pred] = self.net_cuvwp(self.t_data_tf,
self.x_data_tf,
self.y_data_tf,
self.z_data_tf)
# physics "informed" neural networks
[self.c_eqns_pred,
self.u_eqns_pred,
self.v_eqns_pred,
self.w_eqns_pred,
self.p_eqns_pred] = self.net_cuvwp(self.t_eqns_tf,
self.x_eqns_tf,
self.y_eqns_tf,
self.z_eqns_tf)
[self.e1_eqns_pred,
self.e2_eqns_pred,
self.e3_eqns_pred,
self.e4_eqns_pred,
self.e5_eqns_pred] = Navier_Stokes_3D(self.c_eqns_pred,
self.u_eqns_pred,
self.v_eqns_pred,
self.w_eqns_pred,
self.p_eqns_pred,
self.t_eqns_tf,
self.x_eqns_tf,
self.y_eqns_tf,
self.z_eqns_tf,
self.Pec,
self.Rey)
# loss
self.loss = mean_squared_error(self.c_data_pred, self.c_data_tf) + \
mean_squared_error(self.e1_eqns_pred, 0.0) + \
mean_squared_error(self.e2_eqns_pred, 0.0) + \
mean_squared_error(self.e3_eqns_pred, 0.0) + \
mean_squared_error(self.e4_eqns_pred, 0.0) + \
mean_squared_error(self.e5_eqns_pred, 0.0)
# optimizers
self.learning_rate = tf.placeholder(tf.float32, shape=[])
self.optimizer = tf.train.AdamOptimizer(learning_rate = self.learning_rate)
self.train_op = self.optimizer.minimize(self.loss)
self.sess = tf_session()
def train(self, total_time, learning_rate):
N_data = self.t_data.shape[0]
N_eqns = self.t_eqns.shape[0]
start_time = time.time()
running_time = 0
it = 0
while running_time < total_time:
idx_data = np.random.choice(N_data, self.batch_size)
idx_eqns = np.random.choice(N_eqns, self.batch_size)
(t_data_batch,
x_data_batch,
y_data_batch,
z_data_batch,
c_data_batch) = (self.t_data[idx_data,:],
self.x_data[idx_data,:],
self.y_data[idx_data,:],
self.z_data[idx_data,:],
self.c_data[idx_data,:])
(t_eqns_batch,
x_eqns_batch,
y_eqns_batch,
z_eqns_batch) = (self.t_eqns[idx_eqns,:],
self.x_eqns[idx_eqns,:],
self.y_eqns[idx_eqns,:],
self.z_eqns[idx_eqns,:])
tf_dict = {self.t_data_tf: t_data_batch,
self.x_data_tf: x_data_batch,
self.y_data_tf: y_data_batch,
self.z_data_tf: z_data_batch,
self.c_data_tf: c_data_batch,
self.t_eqns_tf: t_eqns_batch,
self.x_eqns_tf: x_eqns_batch,
self.y_eqns_tf: y_eqns_batch,
self.z_eqns_tf: z_eqns_batch,
self.learning_rate: learning_rate}
self.sess.run([self.train_op], tf_dict)
# Print
if it % 10 == 0:
elapsed = time.time() - start_time
running_time += elapsed/3600.0
[loss_value,
learning_rate_value] = self.sess.run([self.loss,
self.learning_rate], tf_dict)
print('It: %d, Loss: %.3e, Time: %.2fs, Running Time: %.2fh, Learning Rate: %.1e'
%(it, loss_value, elapsed, running_time, learning_rate_value))
sys.stdout.flush()
start_time = time.time()
it += 1
def predict(self, t_star, x_star, y_star, z_star):
tf_dict = {self.t_data_tf: t_star, self.x_data_tf: x_star, self.y_data_tf: y_star, self.z_data_tf: z_star}
c_star = self.sess.run(self.c_data_pred, tf_dict)
u_star = self.sess.run(self.u_data_pred, tf_dict)
v_star = self.sess.run(self.v_data_pred, tf_dict)
w_star = self.sess.run(self.w_data_pred, tf_dict)
p_star = self.sess.run(self.p_data_pred, tf_dict)
return c_star, u_star, v_star, w_star, p_star
if __name__ == "__main__":
batch_size = 10000
layers = [4] + 10*[5*50] + [5]
# Load Data
data = scipy.io.loadmat('../Data/Aneurysm3D.mat')
t_star = data['t_star'] # T x 1
x_star = data['x_star'] # N x 1
y_star = data['y_star'] # N x 1
z_star = data['z_star'] # N x 1
T = t_star.shape[0]
N = x_star.shape[0]
U_star = data['U_star'] # N x T
V_star = data['V_star'] # N x T
W_star = data['W_star'] # N x T
P_star = data['P_star'] # N x T
C_star = data['C_star'] # N x T
# Rearrange Data
T_star = np.tile(t_star, (1,N)).T # N x T
X_star = np.tile(x_star, (1,T)) # N x T
Y_star = np.tile(y_star, (1,T)) # N x T
Z_star = np.tile(z_star, (1,T)) # N x T
######################################################################
######################## Noiseless Data ##############################
######################################################################
T_data = T
N_data = N
idx_t = np.concatenate([np.array([0]), np.random.choice(T-2, T_data-2, replace=False)+1, np.array([T-1])] )
idx_x = np.random.choice(N, N_data, replace=False)
t_data = T_star[:, idx_t][idx_x,:].flatten()[:,None]
x_data = X_star[:, idx_t][idx_x,:].flatten()[:,None]
y_data = Y_star[:, idx_t][idx_x,:].flatten()[:,None]
z_data = Z_star[:, idx_t][idx_x,:].flatten()[:,None]
c_data = C_star[:, idx_t][idx_x,:].flatten()[:,None]
T_eqns = T
N_eqns = N
idx_t = np.concatenate([np.array([0]), np.random.choice(T-2, T_eqns-2, replace=False)+1, np.array([T-1])] )
idx_x = np.random.choice(N, N_eqns, replace=False)
t_eqns = T_star[:, idx_t][idx_x,:].flatten()[:,None]
x_eqns = X_star[:, idx_t][idx_x,:].flatten()[:,None]
y_eqns = Y_star[:, idx_t][idx_x,:].flatten()[:,None]
z_eqns = Z_star[:, idx_t][idx_x,:].flatten()[:,None]
# Training
model = HFM(t_data, x_data, y_data, z_data, c_data,
t_eqns, x_eqns, y_eqns, z_eqns,
layers, batch_size,
Pec = 1.0/0.0101822, Rey = 1.0/0.0101822)
model.train(total_time = 40, learning_rate=1e-3)
# Test Data
snap = np.array([100])
t_test = T_star[:,snap]
x_test = X_star[:,snap]
y_test = Y_star[:,snap]
z_test = Z_star[:,snap]
c_test = C_star[:,snap]
u_test = U_star[:,snap]
v_test = V_star[:,snap]
w_test = W_star[:,snap]
p_test = P_star[:,snap]
# Prediction
c_pred, u_pred, v_pred, w_pred, p_pred = model.predict(t_test, x_test, y_test, z_test)
# Error
error_c = relative_error(c_pred, c_test)
error_u = relative_error(u_pred, u_test)
error_v = relative_error(v_pred, v_test)
error_w = relative_error(w_pred, w_test)
error_p = relative_error(p_pred - np.mean(p_pred), p_test - np.mean(p_test))
print('Error c: %e' % (error_c))
print('Error u: %e' % (error_u))
print('Error v: %e' % (error_v))
print('Error w: %e' % (error_w))
print('Error p: %e' % (error_p))
################# Save Data ###########################
C_pred = 0*C_star
U_pred = 0*U_star
V_pred = 0*V_star
W_pred = 0*W_star
P_pred = 0*P_star
for snap in range(0,t_star.shape[0]):
t_test = T_star[:,snap:snap+1]
x_test = X_star[:,snap:snap+1]
y_test = Y_star[:,snap:snap+1]
z_test = Z_star[:,snap:snap+1]
c_test = C_star[:,snap:snap+1]
u_test = U_star[:,snap:snap+1]
v_test = V_star[:,snap:snap+1]
w_test = W_star[:,snap:snap+1]
p_test = P_star[:,snap:snap+1]
# Prediction
c_pred, u_pred, v_pred, w_pred, p_pred = model.predict(t_test, x_test, y_test, z_test)
C_pred[:,snap:snap+1] = c_pred
U_pred[:,snap:snap+1] = u_pred
V_pred[:,snap:snap+1] = v_pred
W_pred[:,snap:snap+1] = w_pred
P_pred[:,snap:snap+1] = p_pred
# Error
error_c = relative_error(c_pred, c_test)
error_u = relative_error(u_pred, u_test)
error_v = relative_error(v_pred, v_test)
error_w = relative_error(w_pred, w_test)
error_p = relative_error(p_pred - np.mean(p_pred), p_test - np.mean(p_test))
print('Error c: %e' % (error_c))
print('Error u: %e' % (error_u))
print('Error v: %e' % (error_v))
print('Error w: %e' % (error_w))
print('Error p: %e' % (error_p))
scipy.io.savemat('../Results/Aneurysm3D_results_%s.mat' %(time.strftime('%d_%m_%Y')),
{'C_pred':C_pred, 'U_pred':U_pred, 'V_pred':V_pred, 'W_pred':W_pred, 'P_pred':P_pred})
"""
@author: Maziar Raissi
"""
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import numpy as np
import scipy.io
import time
import sys
from utilities import neural_net, Navier_Stokes_3D, Shear_Stress_3D, \
tf_session, mean_squared_error, relative_error
class HFM(object):
# notational conventions
# _tf: placeholders for input/output data and points used to regress the equations
# _pred: output of neural network
# _eqns: points used to regress the equations
# _data: input-output data
# _star: predictions
def __init__(self, t_data, x_data, y_data, z_data, c_data,
t_eqns, x_eqns, y_eqns, z_eqns,
layers, batch_size,
Pec, Rey):
# specs
self.layers = layers
self.batch_size = batch_size
# flow properties
self.Pec = Pec
self.Rey = Rey
# data
[self.t_data, self.x_data, self.y_data, self.z_data, self.c_data] = [t_data, x_data, y_data, z_data, c_data]
[self.t_eqns, self.x_eqns, self.y_eqns, self.z_eqns] = [t_eqns, x_eqns, y_eqns, z_eqns]
# placeholders
[self.t_data_tf, self.x_data_tf, self.y_data_tf, self.z_data_tf, self.c_data_tf] = [tf.placeholder(tf.float32, shape=[None, 1]) for _ in range(5)]
[self.t_eqns_tf, self.x_eqns_tf, self.y_eqns_tf, self.z_eqns_tf] = [tf.placeholder(tf.float32, shape=[None, 1]) for _ in range(4)]
[self.nx_eqns_tf, self.ny_eqns_tf, self.nz_eqns_tf] = [tf.placeholder(tf.float32, shape=[None, 1]) for _ in range(3)]
# physics "uninformed" neural networks
self.net_cuvwp = neural_net(self.t_data, self.x_data, self.y_data, self.z_data, layers = self.layers)
[self.c_data_pred,
self.u_data_pred,
self.v_data_pred,
self.w_data_pred,
self.p_data_pred] = self.net_cuvwp(self.t_data_tf,
self.x_data_tf,
self.y_data_tf,
self.z_data_tf)
# physics "informed" neural networks
[self.c_eqns_pred,
self.u_eqns_pred,
self.v_eqns_pred,
self.w_eqns_pred,
self.p_eqns_pred] = self.net_cuvwp(self.t_eqns_tf,
self.x_eqns_tf,
self.y_eqns_tf,
self.z_eqns_tf)
[self.e1_eqns_pred,
self.e2_eqns_pred,
self.e3_eqns_pred,
self.e4_eqns_pred,
self.e5_eqns_pred] = Navier_Stokes_3D(self.c_eqns_pred,
self.u_eqns_pred,
self.v_eqns_pred,
self.w_eqns_pred,
self.p_eqns_pred,
self.t_eqns_tf,
self.x_eqns_tf,
self.y_eqns_tf,
self.z_eqns_tf,
self.Pec,
self.Rey)
[self.sx_eqns_pred,
self.sy_eqns_pred,
self.sz_eqns_pred] = Shear_Stress_3D(self.u_eqns_pred,
self.v_eqns_pred,
self.w_eqns_pred,
self.x_eqns_tf,
self.y_eqns_tf,
self.z_eqns_tf,
self.nx_eqns_tf,
self.ny_eqns_tf,
self.nz_eqns_tf,
self.Rey)
# loss
self.loss = mean_squared_error(self.c_data_pred, self.c_data_tf) + \
mean_squared_error(self.e1_eqns_pred, 0.0) + \
mean_squared_error(self.e2_eqns_pred, 0.0) + \
mean_squared_error(self.e3_eqns_pred, 0.0) + \
mean_squared_error(self.e4_eqns_pred, 0.0) + \
mean_squared_error(self.e5_eqns_pred, 0.0)
# optimizers
self.learning_rate = tf.placeholder(tf.float32, shape=[])
self.optimizer = tf.train.AdamOptimizer(learning_rate = self.learning_rate)
self.train_op = self.optimizer.minimize(self.loss)
self.sess = tf_session()
def train(self, total_time, learning_rate):
N_data = self.t_data.shape[0]
N_eqns = self.t_eqns.shape[0]
start_time = time.time()
running_time = 0
it = 0
while running_time < total_time:
idx_data = np.random.choice(N_data, self.batch_size)
idx_eqns = np.random.choice(N_eqns, self.batch_size)
(t_data_batch,
x_data_batch,
y_data_batch,
z_data_batch,
c_data_batch) = (self.t_data[idx_data,:],
self.x_data[idx_data,:],
self.y_data[idx_data,:],
self.z_data[idx_data,:],
self.c_data[idx_data,:])
(t_eqns_batch,
x_eqns_batch,
y_eqns_batch,
z_eqns_batch) = (self.t_eqns[idx_eqns,:],
self.x_eqns[idx_eqns,:],
self.y_eqns[idx_eqns,:],
self.z_eqns[idx_eqns,:])
tf_dict = {self.t_data_tf: t_data_batch,
self.x_data_tf: x_data_batch,
self.y_data_tf: y_data_batch,
self.z_data_tf: z_data_batch,
self.c_data_tf: c_data_batch,
self.t_eqns_tf: t_eqns_batch,
self.x_eqns_tf: x_eqns_batch,
self.y_eqns_tf: y_eqns_batch,
self.z_eqns_tf: z_eqns_batch,
self.learning_rate: learning_rate}
self.sess.run([self.train_op], tf_dict)
# Print
if it % 10 == 0:
elapsed = time.time() - start_time
running_time += elapsed/3600.0
[loss_value,
learning_rate_value] = self.sess.run([self.loss,
self.learning_rate], tf_dict)
print('It: %d, Loss: %.3e, Time: %.2fs, Running Time: %.2fh, Learning Rate: %.1e'
%(it, loss_value, elapsed, running_time, learning_rate_value))
sys.stdout.flush()
start_time = time.time()
it += 1
def predict(self, t_star, x_star, y_star, z_star):
tf_dict = {self.t_data_tf: t_star, self.x_data_tf: x_star, self.y_data_tf: y_star, self.z_data_tf: z_star}
c_star, u_star, v_star, w_star, p_star = self.sess.run(
[self.c_data_pred, self.u_data_pred, self.v_data_pred,
self.w_data_pred, self.p_data_pred], tf_dict)
return c_star, u_star, v_star, w_star, p_star
def predict_shear(self, t_star, x_star, y_star, z_star, nx_star, ny_star, nz_star):
tf_dict = {self.t_eqns_tf: t_star, self.x_eqns_tf: x_star, self.y_eqns_tf: y_star, self.z_eqns_tf: z_star,
self.nx_eqns_tf: nx_star, self.ny_eqns_tf: ny_star, self.nz_eqns_tf: nz_star}
sx_star, sy_star, sz_star = self.sess.run(
[self.sx_eqns_pred, self.sy_eqns_pred, self.sz_eqns_pred], tf_dict)
return sx_star, sy_star, sz_star
if __name__ == "__main__":
batch_size = 10000
layers = [4] + 10*[5*50] + [5]
# Load Shear Data
data_shear = scipy.io.loadmat('../Data/real_aneurysm_shear.mat')
xb_star = data_shear['xb_star']
yb_star = data_shear['yb_star']
zb_star = data_shear['zb_star']
nx_star = data_shear['nx_star']
ny_star = data_shear['ny_star']
nz_star = data_shear['nz_star']
Sx_star = data_shear['Sx_star']
Sy_star = data_shear['Sy_star']
Sz_star = data_shear['Sz_star']
# Load Data
data = scipy.io.loadmat('../Data/real_aneurysm.mat')
t_star = data['t_star'] # T x 1
x_star = data['x_star'] # N x 1
y_star = data['y_star'] # N x 1
z_star = data['z_star'] # N x 1
T = t_star.shape[0]
N = x_star.shape[0]
U_star = data['U_star'] # N x T
V_star = data['V_star'] # N x T
W_star = data['W_star'] # N x T
P_star = data['P_star'] # N x T
C_star = data['C_star'] # N x T
# Rearrange Data
T_star = np.tile(t_star, (1,N)).T # N x T
X_star = np.tile(x_star, (1,T)) # N x T
Y_star = np.tile(y_star, (1,T)) # N x T
Z_star = np.tile(z_star, (1,T)) # N x T
######################################################################
######################## Training Data ###############################
######################################################################
T_data = T
N_data = N
idx_t = np.concatenate([np.array([0]), np.random.choice(T-2, T_data-2, replace=False)+1, np.array([T-1])] )
idx_x = np.random.choice(N, N_data, replace=False)
t_data = T_star[:, idx_t][idx_x,:].flatten()[:,None]
x_data = X_star[:, idx_t][idx_x,:].flatten()[:,None]
y_data = Y_star[:, idx_t][idx_x,:].flatten()[:,None]
z_data = Z_star[:, idx_t][idx_x,:].flatten()[:,None]
c_data = C_star[:, idx_t][idx_x,:].flatten()[:,None]
T_eqns = T
N_eqns = N
idx_t = np.concatenate([np.array([0]), np.random.choice(T-2, T_eqns-2, replace=False)+1, np.array([T-1])] )
idx_x = np.random.choice(N, N_eqns, replace=False)
t_eqns = T_star[:, idx_t][idx_x,:].flatten()[:,None]
x_eqns = X_star[:, idx_t][idx_x,:].flatten()[:,None]
y_eqns = Y_star[:, idx_t][idx_x,:].flatten()[:,None]
z_eqns = Z_star[:, idx_t][idx_x,:].flatten()[:,None]
# Training
model = HFM(t_data, x_data, y_data, z_data, c_data,
t_eqns, x_eqns, y_eqns, z_eqns,
layers, batch_size,
Pec = 1.0/0.01, Rey = 1.0/0.01)
model.train(total_time = 40, learning_rate=1e-3)
# Test Data
snap = np.array([150])
t_test = T_star[:,snap]
x_test = X_star[:,snap]
y_test = Y_star[:,snap]
z_test = Z_star[:,snap]
c_test = C_star[:,snap]
u_test = U_star[:,snap]
v_test = V_star[:,snap]
w_test = W_star[:,snap]
p_test = P_star[:,snap]
sx_test = Sx_star[:,snap]
sy_test = Sy_star[:,snap]
sz_test = Sz_star[:,snap]
# Prediction
c_pred, u_pred, v_pred, w_pred, p_pred = model.predict(t_test, x_test, y_test, z_test)
# Shear
# broadcast the snapshot time over all wall points: t_test[0] + 0*xb_star has the shape of xb_star
sx_pred, sy_pred, sz_pred = model.predict_shear(t_test[0] + 0.0*xb_star,
xb_star, yb_star, zb_star,
nx_star, ny_star, nz_star)
# Error
error_c = relative_error(c_pred, c_test)
error_u = relative_error(u_pred, u_test)
error_v = relative_error(v_pred, v_test)
error_w = relative_error(w_pred, w_test)
error_p = relative_error(p_pred - np.mean(p_pred), p_test - np.mean(p_test))
print('Error c: %e' % (error_c))
print('Error u: %e' % (error_u))
print('Error v: %e' % (error_v))
print('Error w: %e' % (error_w))
print('Error p: %e' % (error_p))
sys.stdout.flush()
# Error
error_sx = relative_error(sx_pred, sx_test)
error_sy = relative_error(sy_pred, sy_test)
error_sz = relative_error(sz_pred, sz_test)
print('Error sx: %e' % (error_sx))
print('Error sy: %e' % (error_sy))
print('Error sz: %e' % (error_sz))
sys.stdout.flush()
################# Save Data ###########################
C_pred = 0*C_star
U_pred = 0*U_star
V_pred = 0*V_star
W_pred = 0*W_star
P_pred = 0*P_star
Sx_pred = 0*Sx_star
Sy_pred = 0*Sy_star
Sz_pred = 0*Sz_star
for snap in range(0,t_star.shape[0]):
t_test = T_star[:,snap:snap+1]
x_test = X_star[:,snap:snap+1]
y_test = Y_star[:,snap:snap+1]
z_test = Z_star[:,snap:snap+1]
c_test = C_star[:,snap:snap+1]
u_test = U_star[:,snap:snap+1]
v_test = V_star[:,snap:snap+1]
w_test = W_star[:,snap:snap+1]
p_test = P_star[:,snap:snap+1]
sx_test = Sx_star[:,snap:snap+1]
sy_test = Sy_star[:,snap:snap+1]
sz_test = Sz_star[:,snap:snap+1]
# Prediction
c_pred, u_pred, v_pred, w_pred, p_pred = model.predict(t_test, x_test, y_test, z_test)
# Shear
# broadcast the snapshot time over all wall points: t_test[0] + 0*xb_star has the shape of xb_star
sx_pred, sy_pred, sz_pred = model.predict_shear(t_test[0] + 0.0*xb_star,
xb_star, yb_star, zb_star,
nx_star, ny_star, nz_star)
C_pred[:,snap:snap+1] = c_pred
U_pred[:,snap:snap+1] = u_pred
V_pred[:,snap:snap+1] = v_pred
W_pred[:,snap:snap+1] = w_pred
P_pred[:,snap:snap+1] = p_pred
Sx_pred[:,snap:snap+1] = sx_pred
Sy_pred[:,snap:snap+1] = sy_pred
Sz_pred[:,snap:snap+1] = sz_pred
# Error
error_c = relative_error(c_pred, c_test)
error_u = relative_error(u_pred, u_test)
error_v = relative_error(v_pred, v_test)
error_w = relative_error(w_pred, w_test)
error_p = relative_error(p_pred - np.mean(p_pred), p_test - np.mean(p_test))
print('Error c: %e' % (error_c))
print('Error u: %e' % (error_u))
print('Error v: %e' % (error_v))
print('Error w: %e' % (error_w))
print('Error p: %e' % (error_p))
sys.stdout.flush()
# Error
error_sx = relative_error(sx_pred, sx_test)
error_sy = relative_error(sy_pred, sy_test)
error_sz = relative_error(sz_pred, sz_test)
print('Error sx: %e' % (error_sx))
print('Error sy: %e' % (error_sy))
print('Error sz: %e' % (error_sz))
sys.stdout.flush()
scipy.io.savemat('../Results/Aneurysm3D_Wall_Stresses_results_%s.mat' %(time.strftime('%d_%m_%Y')),
{'C_pred':C_pred, 'U_pred':U_pred, 'V_pred':V_pred, 'W_pred':W_pred, 'P_pred':P_pred,
'Sx_pred':Sx_pred, 'Sy_pred':Sy_pred, 'Sz_pred':Sz_pred})
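The "Rearrange Data" step in these scripts turns the T x 1 time vector and the N x 1 coordinate vectors into matching N x T grids with `np.tile`, so that any (row, column) pair gives one space-time point. A toy sketch of that layout, with small stand-in sizes, is:

```python
import numpy as np

# toy sizes standing in for the N spatial points and T snapshots
N, T = 3, 4
t_star = np.arange(T, dtype=float)[:, None]         # T x 1
x_star = 10.0 * np.arange(N, dtype=float)[:, None]  # N x 1

T_grid = np.tile(t_star, (1, N)).T  # N x T: each row repeats all T times
X_grid = np.tile(x_star, (1, T))    # N x T: each column repeats all N coordinates
```

Column `j` of these grids is exactly the snapshot slicing `T_star[:, j:j+1]`, `X_star[:, j:j+1]` used in the prediction loops above.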
"""
@author: Maziar Raissi
"""
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import numpy as np
import scipy.io
import time
import sys
from utilities import neural_net, Navier_Stokes_2D, Gradient_Velocity_2D, \
tf_session, mean_squared_error, relative_error
class HFM(object):
# notational conventions
# _tf: placeholders for input/output data and points used to regress the equations
# _pred: output of neural network
# _eqns: points used to regress the equations
# _data: input-output data
# _inlet: input-output data at the inlet
# _star: predictions
def __init__(self, t_data, x_data, y_data, c_data,
t_eqns, x_eqns, y_eqns,
t_inlet, x_inlet, y_inlet, u_inlet, v_inlet,
layers, batch_size,
Pec, Rey):
# specs
self.layers = layers
self.batch_size = batch_size
# flow properties
self.Pec = Pec
self.Rey = Rey
# data
[self.t_data, self.x_data, self.y_data, self.c_data] = [t_data, x_data, y_data, c_data]
[self.t_eqns, self.x_eqns, self.y_eqns] = [t_eqns, x_eqns, y_eqns]
[self.t_inlet, self.x_inlet, self.y_inlet, self.u_inlet, self.v_inlet] = [t_inlet, x_inlet, y_inlet, u_inlet, v_inlet]
# placeholders
[self.t_data_tf, self.x_data_tf, self.y_data_tf, self.c_data_tf] = [tf.placeholder(tf.float32, shape=[None, 1]) for _ in range(4)]
[self.t_eqns_tf, self.x_eqns_tf, self.y_eqns_tf] = [tf.placeholder(tf.float32, shape=[None, 1]) for _ in range(3)]
[self.t_inlet_tf, self.x_inlet_tf, self.y_inlet_tf, self.u_inlet_tf, self.v_inlet_tf] = [tf.placeholder(tf.float32, shape=[None, 1]) for _ in range(5)]
# physics "uninformed" neural networks
self.net_cuvp = neural_net(self.t_data, self.x_data, self.y_data, layers = self.layers)
[self.c_data_pred,
self.u_data_pred,
self.v_data_pred,
self.p_data_pred] = self.net_cuvp(self.t_data_tf,
self.x_data_tf,
self.y_data_tf)
# physics "uninformed" neural networks (data at the inlet)
[_,
self.u_inlet_pred,
self.v_inlet_pred,
_] = self.net_cuvp(self.t_inlet_tf,
self.x_inlet_tf,
self.y_inlet_tf)
# physics "informed" neural networks
[self.c_eqns_pred,
self.u_eqns_pred,
self.v_eqns_pred,
self.p_eqns_pred] = self.net_cuvp(self.t_eqns_tf,
self.x_eqns_tf,
self.y_eqns_tf)
[self.e1_eqns_pred,
self.e2_eqns_pred,
self.e3_eqns_pred,
self.e4_eqns_pred] = Navier_Stokes_2D(self.c_eqns_pred,
self.u_eqns_pred,
self.v_eqns_pred,
self.p_eqns_pred,
self.t_eqns_tf,
self.x_eqns_tf,
self.y_eqns_tf,
self.Pec,
self.Rey)
# gradients required for the lift and drag forces
[self.u_x_eqns_pred,
self.v_x_eqns_pred,
self.u_y_eqns_pred,
self.v_y_eqns_pred] = Gradient_Velocity_2D(self.u_eqns_pred,
self.v_eqns_pred,
self.x_eqns_tf,
self.y_eqns_tf)
# loss
self.loss = mean_squared_error(self.c_data_pred, self.c_data_tf) + \
mean_squared_error(self.u_inlet_pred, self.u_inlet_tf) + \
mean_squared_error(self.v_inlet_pred, self.v_inlet_tf) + \
mean_squared_error(self.e1_eqns_pred, 0.0) + \
mean_squared_error(self.e2_eqns_pred, 0.0) + \
mean_squared_error(self.e3_eqns_pred, 0.0) + \
mean_squared_error(self.e4_eqns_pred, 0.0)
# optimizers
self.learning_rate = tf.placeholder(tf.float32, shape=[])
self.optimizer = tf.train.AdamOptimizer(learning_rate = self.learning_rate)
self.train_op = self.optimizer.minimize(self.loss)
self.sess = tf_session()
def train(self, total_time, learning_rate):
N_data = self.t_data.shape[0]
N_eqns = self.t_eqns.shape[0]
start_time = time.time()
running_time = 0
it = 0
while running_time < total_time:
idx_data = np.random.choice(N_data, min(self.batch_size, N_data))
idx_eqns = np.random.choice(N_eqns, self.batch_size)
(t_data_batch,
x_data_batch,
y_data_batch,
c_data_batch) = (self.t_data[idx_data,:],
self.x_data[idx_data,:],
self.y_data[idx_data,:],
self.c_data[idx_data,:])
(t_eqns_batch,
x_eqns_batch,
y_eqns_batch) = (self.t_eqns[idx_eqns,:],
self.x_eqns[idx_eqns,:],
self.y_eqns[idx_eqns,:])
tf_dict = {self.t_data_tf: t_data_batch,
self.x_data_tf: x_data_batch,
self.y_data_tf: y_data_batch,
self.c_data_tf: c_data_batch,
self.t_eqns_tf: t_eqns_batch,
self.x_eqns_tf: x_eqns_batch,
self.y_eqns_tf: y_eqns_batch,
self.t_inlet_tf: self.t_inlet,
self.x_inlet_tf: self.x_inlet,
self.y_inlet_tf: self.y_inlet,
self.u_inlet_tf: self.u_inlet,
self.v_inlet_tf: self.v_inlet,
self.learning_rate: learning_rate}
self.sess.run([self.train_op], tf_dict)
# Print
if it % 10 == 0:
elapsed = time.time() - start_time
running_time += elapsed/3600.0
[loss_value,
learning_rate_value] = self.sess.run([self.loss,
self.learning_rate], tf_dict)
print('It: %d, Loss: %.3e, Time: %.2fs, Running Time: %.2fh, Learning Rate: %.1e'
%(it, loss_value, elapsed, running_time, learning_rate_value))
sys.stdout.flush()
start_time = time.time()
it += 1
def predict(self, t_star, x_star, y_star):
tf_dict = {self.t_data_tf: t_star, self.x_data_tf: x_star, self.y_data_tf: y_star}
c_star, u_star, v_star, p_star = self.sess.run(
[self.c_data_pred, self.u_data_pred, self.v_data_pred, self.p_data_pred],
tf_dict
)
return c_star, u_star, v_star, p_star
def predict_drag_lift(self, t_cyl):
viscosity = (1.0/self.Rey)
theta = np.linspace(0.0,2*np.pi,200)[:,None] # N x 1
d_theta = theta[1,0] - theta[0,0]
x_cyl = 0.5*np.cos(theta) # N x 1
y_cyl = 0.5*np.sin(theta) # N x 1
N = x_cyl.shape[0]
T = t_cyl.shape[0]
T_star = np.tile(t_cyl, (1,N)).T # N x T
X_star = np.tile(x_cyl, (1,T)) # N x T
Y_star = np.tile(y_cyl, (1,T)) # N x T
t_star = np.reshape(T_star,[-1,1]) # NT x 1
x_star = np.reshape(X_star,[-1,1]) # NT x 1
y_star = np.reshape(Y_star,[-1,1]) # NT x 1
tf_dict = {self.t_eqns_tf: t_star, self.x_eqns_tf: x_star, self.y_eqns_tf: y_star}
[p_star,
u_x_star,
u_y_star,
v_x_star,
v_y_star] = self.sess.run([self.p_eqns_pred,
self.u_x_eqns_pred,
self.u_y_eqns_pred,
self.v_x_eqns_pred,
self.v_y_eqns_pred], tf_dict)
P_star = np.reshape(p_star, [N,T]) # N x T
P_star = P_star - np.mean(P_star, axis=0)
U_x_star = np.reshape(u_x_star, [N,T]) # N x T
U_y_star = np.reshape(u_y_star, [N,T]) # N x T
V_x_star = np.reshape(v_x_star, [N,T]) # N x T
V_y_star = np.reshape(v_y_star, [N,T]) # N x T
INT0 = (-P_star[0:-1,:] + 2*viscosity*U_x_star[0:-1,:])*X_star[0:-1,:] + viscosity*(U_y_star[0:-1,:] + V_x_star[0:-1,:])*Y_star[0:-1,:]
INT1 = (-P_star[1: , :] + 2*viscosity*U_x_star[1: , :])*X_star[1: , :] + viscosity*(U_y_star[1: , :] + V_x_star[1: , :])*Y_star[1: , :]
F_D = 0.5*np.sum(INT0.T+INT1.T, axis = 1)*d_theta # T
INT0 = (-P_star[0:-1,:] + 2*viscosity*V_y_star[0:-1,:])*Y_star[0:-1,:] + viscosity*(U_y_star[0:-1,:] + V_x_star[0:-1,:])*X_star[0:-1,:]
INT1 = (-P_star[1: , :] + 2*viscosity*V_y_star[1: , :])*Y_star[1: , :] + viscosity*(U_y_star[1: , :] + V_x_star[1: , :])*X_star[1: , :]
F_L = 0.5*np.sum(INT0.T+INT1.T, axis = 1)*d_theta # T
return F_D, F_L
if __name__ == "__main__":
batch_size = 10000
layers = [3] + 10*[4*50] + [4]
# Load Data
data = scipy.io.loadmat('../Data/Cylinder2D.mat')
t_star = data['t_star'] # T x 1
x_star = data['x_star'] # N x 1
y_star = data['y_star'] # N x 1
T = t_star.shape[0]
N = x_star.shape[0]
U_star = data['U_star'] # N x T
V_star = data['V_star'] # N x T
P_star = data['P_star'] # N x T
C_star = data['C_star'] # N x T
# Rearrange Data
T_star = np.tile(t_star, (1,N)).T # N x T
X_star = np.tile(x_star, (1,T)) # N x T
Y_star = np.tile(y_star, (1,T)) # N x T
t = T_star.flatten()[:,None] # NT x 1
x = X_star.flatten()[:,None] # NT x 1
y = Y_star.flatten()[:,None] # NT x 1
u = U_star.flatten()[:,None] # NT x 1
v = V_star.flatten()[:,None] # NT x 1
p = P_star.flatten()[:,None] # NT x 1
c = C_star.flatten()[:,None] # NT x 1
######################################################################
######################## Training Data ###############################
######################################################################
T_data = T # int(sys.argv[1])
N_data = N # int(sys.argv[2])
idx_t = np.concatenate([np.array([0]), np.random.choice(T-2, T_data-2, replace=False)+1, np.array([T-1])] )
idx_x = np.random.choice(N, N_data, replace=False)
t_data = T_star[:, idx_t][idx_x,:].flatten()[:,None]
x_data = X_star[:, idx_t][idx_x,:].flatten()[:,None]
y_data = Y_star[:, idx_t][idx_x,:].flatten()[:,None]
c_data = C_star[:, idx_t][idx_x,:].flatten()[:,None]
T_eqns = T
N_eqns = N
idx_t = np.concatenate([np.array([0]), np.random.choice(T-2, T_eqns-2, replace=False)+1, np.array([T-1])] )
idx_x = np.random.choice(N, N_eqns, replace=False)
t_eqns = T_star[:, idx_t][idx_x,:].flatten()[:,None]
x_eqns = X_star[:, idx_t][idx_x,:].flatten()[:,None]
y_eqns = Y_star[:, idx_t][idx_x,:].flatten()[:,None]
# Training Data on velocity (inlet)
t_inlet = t[x == x.min()][:,None]
x_inlet = x[x == x.min()][:,None]
y_inlet = y[x == x.min()][:,None]
u_inlet = u[x == x.min()][:,None]
v_inlet = v[x == x.min()][:,None]
# Training
model = HFM(t_data, x_data, y_data, c_data,
t_eqns, x_eqns, y_eqns,
t_inlet, x_inlet, y_inlet, u_inlet, v_inlet,
layers, batch_size,
Pec = 100, Rey = 100)
model.train(total_time = 0.1, learning_rate=1e-3)
F_D, F_L = model.predict_drag_lift(t_star)
# Test Data
snap = np.array([100])
t_test = T_star[:,snap]
x_test = X_star[:,snap]
y_test = Y_star[:,snap]
c_test = C_star[:,snap]
u_test = U_star[:,snap]
v_test = V_star[:,snap]
p_test = P_star[:,snap]
# Prediction
start_time = time.time()
c_pred, u_pred, v_pred, p_pred = model.predict(t_test, x_test, y_test)
pred_time = time.time() - start_time
print("Single prediction time: %.4fs" % pred_time)
# Error
error_c = relative_error(c_pred, c_test)
error_u = relative_error(u_pred, u_test)
error_v = relative_error(v_pred, v_test)
error_p = relative_error(p_pred - np.mean(p_pred), p_test - np.mean(p_test))
print('Error c: %e' % (error_c))
print('Error u: %e' % (error_u))
print('Error v: %e' % (error_v))
print('Error p: %e' % (error_p))
################# Save Data ###########################
C_pred = 0*C_star
U_pred = 0*U_star
V_pred = 0*V_star
P_pred = 0*P_star
total_pred_time = 0.0
for snap in range(0,t_star.shape[0]):
t_test = T_star[:,snap:snap+1]
x_test = X_star[:,snap:snap+1]
y_test = Y_star[:,snap:snap+1]
c_test = C_star[:,snap:snap+1]
u_test = U_star[:,snap:snap+1]
v_test = V_star[:,snap:snap+1]
p_test = P_star[:,snap:snap+1]
# Prediction
start_time = time.time()
c_pred, u_pred, v_pred, p_pred = model.predict(t_test, x_test, y_test)
pred_time = time.time() - start_time
total_pred_time += pred_time
C_pred[:,snap:snap+1] = c_pred
U_pred[:,snap:snap+1] = u_pred
V_pred[:,snap:snap+1] = v_pred
P_pred[:,snap:snap+1] = p_pred
# Error
error_c = relative_error(c_pred, c_test)
error_u = relative_error(u_pred, u_test)
error_v = relative_error(v_pred, v_test)
error_p = relative_error(p_pred - np.mean(p_pred), p_test - np.mean(p_test))
print('Snap: %d, Time: %.4fs' % (snap, pred_time))
print('Error c: %e' % (error_c))
print('Error u: %e' % (error_u))
print('Error v: %e' % (error_v))
print('Error p: %e' % (error_p))
print('Total prediction time for all frames: %.2fs' % total_pred_time)
print('Average time per frame: %.4fs' % (total_pred_time / t_star.shape[0]))
import os
os.makedirs('../Results', exist_ok=True)
scipy.io.savemat('../Results/Cylinder2D_results_%s.mat' %(time.strftime('%d_%m_%Y')),
{'C_pred':C_pred, 'U_pred':U_pred, 'V_pred':V_pred, 'P_pred':P_pred, 'F_L':F_L, 'F_D':F_D})
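`predict_drag_lift` above integrates the predicted traction around the cylinder with a trapezoidal rule in theta (the `0.5*(INT0 + INT1)*d_theta` pattern). A self-contained sketch of that quadrature on a closed contour, checked against an integrand with a known integral, is:

```python
import numpy as np

# trapezoidal rule over a closed contour sampled at 200 angles,
# mirroring the 0.5*np.sum(INT0 + INT1)*d_theta pattern in predict_drag_lift
theta = np.linspace(0.0, 2 * np.pi, 200)[:, None]
d_theta = theta[1, 0] - theta[0, 0]
f = np.cos(theta) ** 2  # integral over [0, 2*pi] is pi

# sum of segment averages times the uniform angular step
integral = 0.5 * np.sum(f[0:-1, 0] + f[1:, 0]) * d_theta
```

Because the contour is closed and the integrand periodic, the trapezoidal rule here is very accurate despite its low order on general intervals.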
"""
@author: Maziar Raissi
"""
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import numpy as np
import scipy.io
import time
import sys
from utilities import neural_net, Navier_Stokes_2D, Gradient_Velocity_2D, \
tf_session, mean_squared_error, relative_error
class HFM(object):
# notational conventions
# _tf: placeholders for input/output data and points used to regress the equations
# _pred: output of neural network
# _eqns: points used to regress the equations
# _data: input-output data
# _inlet: input-output data at the inlet
# _star: predictions
def __init__(self, t_data, x_data, y_data, c_data,
t_eqns, x_eqns, y_eqns,
t_inlet, x_inlet, y_inlet, u_inlet, v_inlet,
layers, batch_size,
Pec, Rey):
# specs
self.layers = layers
self.batch_size = batch_size
# flow properties
self.Pec = Pec
self.Rey = Rey
# data
[self.t_data, self.x_data, self.y_data, self.c_data] = [t_data, x_data, y_data, c_data]
[self.t_eqns, self.x_eqns, self.y_eqns] = [t_eqns, x_eqns, y_eqns]
[self.t_inlet, self.x_inlet, self.y_inlet, self.u_inlet, self.v_inlet] = [t_inlet, x_inlet, y_inlet, u_inlet, v_inlet]
# placeholders
[self.t_data_tf, self.x_data_tf, self.y_data_tf, self.c_data_tf] = [tf.placeholder(tf.float32, shape=[None, 1]) for _ in range(4)]
[self.t_eqns_tf, self.x_eqns_tf, self.y_eqns_tf] = [tf.placeholder(tf.float32, shape=[None, 1]) for _ in range(3)]
[self.t_inlet_tf, self.x_inlet_tf, self.y_inlet_tf, self.u_inlet_tf, self.v_inlet_tf] = [tf.placeholder(tf.float32, shape=[None, 1]) for _ in range(5)]
# physics "uninformed" neural networks
self.net_cuvp = neural_net(self.t_data, self.x_data, self.y_data, layers = self.layers)
[self.c_data_pred,
self.u_data_pred,
self.v_data_pred,
self.p_data_pred] = self.net_cuvp(self.t_data_tf,
self.x_data_tf,
self.y_data_tf)
# physics "uninformed" neural networks (data at the inlet)
[_,
self.u_inlet_pred,
self.v_inlet_pred,
_] = self.net_cuvp(self.t_inlet_tf,
self.x_inlet_tf,
self.y_inlet_tf)
# physics "informed" neural networks
[self.c_eqns_pred,
self.u_eqns_pred,
self.v_eqns_pred,
self.p_eqns_pred] = self.net_cuvp(self.t_eqns_tf,
self.x_eqns_tf,
self.y_eqns_tf)
[self.e1_eqns_pred,
self.e2_eqns_pred,
self.e3_eqns_pred,
self.e4_eqns_pred] = Navier_Stokes_2D(self.c_eqns_pred,
self.u_eqns_pred,
self.v_eqns_pred,
self.p_eqns_pred,
self.t_eqns_tf,
self.x_eqns_tf,
self.y_eqns_tf,
self.Pec,
self.Rey)
# gradients required for the lift and drag forces
[self.u_x_eqns_pred,
self.v_x_eqns_pred,
self.u_y_eqns_pred,
self.v_y_eqns_pred] = Gradient_Velocity_2D(self.u_eqns_pred,
self.v_eqns_pred,
self.x_eqns_tf,
self.y_eqns_tf)
# loss
self.loss = mean_squared_error(self.c_data_pred, self.c_data_tf) + \
mean_squared_error(self.u_inlet_pred, self.u_inlet_tf) + \
mean_squared_error(self.v_inlet_pred, self.v_inlet_tf) + \
mean_squared_error(self.e1_eqns_pred, 0.0) + \
mean_squared_error(self.e2_eqns_pred, 0.0) + \
mean_squared_error(self.e3_eqns_pred, 0.0) + \
mean_squared_error(self.e4_eqns_pred, 0.0)
# optimizers
self.learning_rate = tf.placeholder(tf.float32, shape=[])
self.optimizer = tf.train.AdamOptimizer(learning_rate = self.learning_rate)
self.train_op = self.optimizer.minimize(self.loss)
self.sess = tf_session()
def train(self, total_time, learning_rate):
N_data = self.t_data.shape[0]
N_eqns = self.t_eqns.shape[0]
start_time = time.time()
running_time = 0
it = 0
while running_time < total_time:
idx_data = np.random.choice(N_data, min(self.batch_size, N_data))
idx_eqns = np.random.choice(N_eqns, self.batch_size)
(t_data_batch,
x_data_batch,
y_data_batch,
c_data_batch) = (self.t_data[idx_data,:],
self.x_data[idx_data,:],
self.y_data[idx_data,:],
self.c_data[idx_data,:])
(t_eqns_batch,
x_eqns_batch,
y_eqns_batch) = (self.t_eqns[idx_eqns,:],
self.x_eqns[idx_eqns,:],
self.y_eqns[idx_eqns,:])
tf_dict = {self.t_data_tf: t_data_batch,
self.x_data_tf: x_data_batch,
self.y_data_tf: y_data_batch,
self.c_data_tf: c_data_batch,
self.t_eqns_tf: t_eqns_batch,
self.x_eqns_tf: x_eqns_batch,
self.y_eqns_tf: y_eqns_batch,
self.t_inlet_tf: self.t_inlet,
self.x_inlet_tf: self.x_inlet,
self.y_inlet_tf: self.y_inlet,
self.u_inlet_tf: self.u_inlet,
self.v_inlet_tf: self.v_inlet,
self.learning_rate: learning_rate}
self.sess.run([self.train_op], tf_dict)
# Print
if it % 10 == 0:
elapsed = time.time() - start_time
running_time += elapsed/3600.0
[loss_value,
learning_rate_value] = self.sess.run([self.loss,
self.learning_rate], tf_dict)
print('It: %d, Loss: %.3e, Time: %.2fs, Running Time: %.2fh, Learning Rate: %.1e'
%(it, loss_value, elapsed, running_time, learning_rate_value))
sys.stdout.flush()
start_time = time.time()
it += 1
def predict(self, t_star, x_star, y_star):
tf_dict = {self.t_data_tf: t_star, self.x_data_tf: x_star, self.y_data_tf: y_star}
c_star, u_star, v_star, p_star = self.sess.run(
[self.c_data_pred, self.u_data_pred, self.v_data_pred,
self.p_data_pred], tf_dict)
return c_star, u_star, v_star, p_star
def predict_drag_lift(self, t_cyl):
viscosity = (1.0/self.Rey)
theta = np.linspace(0.0,2*np.pi,200)[:,None] # N x 1
d_theta = theta[1,0] - theta[0,0]
x_cyl = 0.5*np.cos(theta) # N x 1
y_cyl = 0.5*np.sin(theta) # N x 1
N = x_cyl.shape[0]
T = t_cyl.shape[0]
T_star = np.tile(t_cyl, (1,N)).T # N x T
X_star = np.tile(x_cyl, (1,T)) # N x T
Y_star = np.tile(y_cyl, (1,T)) # N x T
t_star = np.reshape(T_star,[-1,1]) # NT x 1
x_star = np.reshape(X_star,[-1,1]) # NT x 1
y_star = np.reshape(Y_star,[-1,1]) # NT x 1
tf_dict = {self.t_eqns_tf: t_star, self.x_eqns_tf: x_star, self.y_eqns_tf: y_star}
[p_star,
u_x_star,
u_y_star,
v_x_star,
v_y_star] = self.sess.run([self.p_eqns_pred,
self.u_x_eqns_pred,
self.u_y_eqns_pred,
self.v_x_eqns_pred,
self.v_y_eqns_pred], tf_dict)
P_star = np.reshape(p_star, [N,T]) # N x T
P_star = P_star - np.mean(P_star, axis=0)
U_x_star = np.reshape(u_x_star, [N,T]) # N x T
U_y_star = np.reshape(u_y_star, [N,T]) # N x T
V_x_star = np.reshape(v_x_star, [N,T]) # N x T
V_y_star = np.reshape(v_y_star, [N,T]) # N x T
INT0 = (-P_star[0:-1,:] + 2*viscosity*U_x_star[0:-1,:])*X_star[0:-1,:] + viscosity*(U_y_star[0:-1,:] + V_x_star[0:-1,:])*Y_star[0:-1,:]
INT1 = (-P_star[1: , :] + 2*viscosity*U_x_star[1: , :])*X_star[1: , :] + viscosity*(U_y_star[1: , :] + V_x_star[1: , :])*Y_star[1: , :]
F_D = 0.5*np.sum(INT0.T+INT1.T, axis = 1)*d_theta # T
INT0 = (-P_star[0:-1,:] + 2*viscosity*V_y_star[0:-1,:])*Y_star[0:-1,:] + viscosity*(U_y_star[0:-1,:] + V_x_star[0:-1,:])*X_star[0:-1,:]
INT1 = (-P_star[1: , :] + 2*viscosity*V_y_star[1: , :])*Y_star[1: , :] + viscosity*(U_y_star[1: , :] + V_x_star[1: , :])*X_star[1: , :]
F_L = 0.5*np.sum(INT0.T+INT1.T, axis = 1)*d_theta # T
return F_D, F_L
if __name__ == "__main__":
batch_size = 10000
layers = [3] + 10*[4*50] + [4]
# Load Data
data = scipy.io.loadmat('../Data/Cylinder2D_Re200Pec2000_Dirichlet_Streaks.mat')
t_star = data['t_data'] # T x 1
x_star = data['x_data'] # N x 1
y_star = data['y_data'] # N x 1
T = t_star.shape[0]
N = x_star.shape[0]
U_star = data['U_data'] # N x T
V_star = data['V_data'] # N x T
P_star = data['P_data'] # N x T
C_star = data['C_data'] # N x T
# Rearrange Data
T_star = np.tile(t_star, (1,N)).T # N x T
X_star = np.tile(x_star, (1,T)) # N x T
Y_star = np.tile(y_star, (1,T)) # N x T
t = T_star.flatten()[:,None] # NT x 1
x = X_star.flatten()[:,None] # NT x 1
y = Y_star.flatten()[:,None] # NT x 1
u = U_star.flatten()[:,None] # NT x 1
v = V_star.flatten()[:,None] # NT x 1
p = P_star.flatten()[:,None] # NT x 1
c = C_star.flatten()[:,None] # NT x 1
######################################################################
######################## Training Data ###############################
######################################################################
T_data = T # int(sys.argv[1])
N_data = N # int(sys.argv[2])
idx_t = np.concatenate([np.array([0]), np.random.choice(T-2, T_data-2, replace=False)+1, np.array([T-1])] )
idx_x = np.random.choice(N, N_data, replace=False)
t_data = T_star[:, idx_t][idx_x,:].flatten()[:,None]
x_data = X_star[:, idx_t][idx_x,:].flatten()[:,None]
y_data = Y_star[:, idx_t][idx_x,:].flatten()[:,None]
c_data = C_star[:, idx_t][idx_x,:].flatten()[:,None]
T_eqns = T
N_eqns = N
idx_t = np.concatenate([np.array([0]), np.random.choice(T-2, T_eqns-2, replace=False)+1, np.array([T-1])] )
idx_x = np.random.choice(N, N_eqns, replace=False)
t_eqns = T_star[:, idx_t][idx_x,:].flatten()[:,None]
x_eqns = X_star[:, idx_t][idx_x,:].flatten()[:,None]
y_eqns = Y_star[:, idx_t][idx_x,:].flatten()[:,None]
# Training Data on velocity (inlet)
t_inlet = t[x == x.min()][:,None]
x_inlet = x[x == x.min()][:,None]
y_inlet = y[x == x.min()][:,None]
u_inlet = u[x == x.min()][:,None]
v_inlet = v[x == x.min()][:,None]
# Training
model = HFM(t_data, x_data, y_data, c_data,
t_eqns, x_eqns, y_eqns,
t_inlet, x_inlet, y_inlet, u_inlet, v_inlet,
layers, batch_size,
Pec = 2000, Rey = 200)
model.train(total_time = 40, learning_rate=1e-3)
F_D, F_L = model.predict_drag_lift(t_star)
# Test Data
snap = np.array([100])
t_test = T_star[:,snap]
x_test = X_star[:,snap]
y_test = Y_star[:,snap]
c_test = C_star[:,snap]
u_test = U_star[:,snap]
v_test = V_star[:,snap]
p_test = P_star[:,snap]
# Prediction
c_pred, u_pred, v_pred, p_pred = model.predict(t_test, x_test, y_test)
# Error
error_c = relative_error(c_pred, c_test)
error_u = relative_error(u_pred, u_test)
error_v = relative_error(v_pred, v_test)
error_p = relative_error(p_pred - np.mean(p_pred), p_test - np.mean(p_test))
print('Error c: %e' % (error_c))
print('Error u: %e' % (error_u))
print('Error v: %e' % (error_v))
print('Error p: %e' % (error_p))
################# Save Data ###########################
C_pred = 0*C_star
U_pred = 0*U_star
V_pred = 0*V_star
P_pred = 0*P_star
for snap in range(0,t_star.shape[0]):
t_test = T_star[:,snap:snap+1]
x_test = X_star[:,snap:snap+1]
y_test = Y_star[:,snap:snap+1]
c_test = C_star[:,snap:snap+1]
u_test = U_star[:,snap:snap+1]
v_test = V_star[:,snap:snap+1]
p_test = P_star[:,snap:snap+1]
# Prediction
c_pred, u_pred, v_pred, p_pred = model.predict(t_test, x_test, y_test)
C_pred[:,snap:snap+1] = c_pred
U_pred[:,snap:snap+1] = u_pred
V_pred[:,snap:snap+1] = v_pred
P_pred[:,snap:snap+1] = p_pred
# Error
error_c = relative_error(c_pred, c_test)
error_u = relative_error(u_pred, u_test)
error_v = relative_error(v_pred, v_test)
error_p = relative_error(p_pred - np.mean(p_pred), p_test - np.mean(p_test))
print('Error c: %e' % (error_c))
print('Error u: %e' % (error_u))
print('Error v: %e' % (error_v))
print('Error p: %e' % (error_p))
scipy.io.savemat('../Results/Cylinder2D_Dirichlet_Streaks_results_%s.mat' %(time.strftime('%d_%m_%Y')),
{'C_pred':C_pred, 'U_pred':U_pred, 'V_pred':V_pred, 'P_pred':P_pred, 'F_L':F_L, 'F_D':F_D})
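The inlet training points in these scripts are selected by boolean masking on the flattened NT x 1 arrays (`u[x == x.min()][:,None]`). A toy sketch of how that selection behaves with NumPy's boolean indexing is:

```python
import numpy as np

# flattened space-time columns, as in the scripts (NT x 1)
x = np.array([[0.0], [1.0], [0.0], [2.0]])
u = np.array([[5.0], [6.0], [7.0], [8.0]])

# a same-shaped boolean mask keeps the rows where x equals the inlet
# coordinate x.min(); indexing with it returns a flat 1-D array,
# so [:, None] restores the column shape the placeholders expect
mask = (x == x.min())
u_inlet = u[mask][:, None]
```

This is why the `[:, None]` suffix appears on every inlet array: boolean indexing drops the second axis, and the `tf.placeholder(..., shape=[None, 1])` feeds require it back.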
"""
@author: Maziar Raissi
"""
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import numpy as np
import scipy.io
import time
import sys
from utilities import neural_net, Navier_Stokes_2D, Gradient_Velocity_2D, \
tf_session, mean_squared_error, relative_error
class HFM(object):
# notational conventions
# _tf: placeholders for input/output data and points used to regress the equations
# _pred: output of neural network
# _eqns: points used to regress the equations
# _data: input-output data
# _inlet: input-output data at the inlet
# _star: predictions
def __init__(self, t_data, x_data, y_data, c_data,
t_eqns, x_eqns, y_eqns,
t_inlet, x_inlet, y_inlet, u_inlet, v_inlet,
layers, batch_size,
Pec, Rey):
# specs
self.layers = layers
self.batch_size = batch_size
# flow properties
self.Pec = Pec
self.Rey = Rey
# data
[self.t_data, self.x_data, self.y_data, self.c_data] = [t_data, x_data, y_data, c_data]
[self.t_eqns, self.x_eqns, self.y_eqns] = [t_eqns, x_eqns, y_eqns]
[self.t_inlet, self.x_inlet, self.y_inlet, self.u_inlet, self.v_inlet] = [t_inlet, x_inlet, y_inlet, u_inlet, v_inlet]
# placeholders
[self.t_data_tf, self.x_data_tf, self.y_data_tf, self.c_data_tf] = [tf.placeholder(tf.float32, shape=[None, 1]) for _ in range(4)]
[self.t_eqns_tf, self.x_eqns_tf, self.y_eqns_tf] = [tf.placeholder(tf.float32, shape=[None, 1]) for _ in range(3)]
[self.t_inlet_tf, self.x_inlet_tf, self.y_inlet_tf, self.u_inlet_tf, self.v_inlet_tf] = [tf.placeholder(tf.float32, shape=[None, 1]) for _ in range(5)]
# physics "uninformed" neural networks
self.net_cuvp = neural_net(self.t_data, self.x_data, self.y_data, layers = self.layers)
[self.c_data_pred,
self.u_data_pred,
self.v_data_pred,
self.p_data_pred] = self.net_cuvp(self.t_data_tf,
self.x_data_tf,
self.y_data_tf)
# physics "uninformed" neural networks (data at the inlet)
[_,
self.u_inlet_pred,
self.v_inlet_pred,
_] = self.net_cuvp(self.t_inlet_tf,
self.x_inlet_tf,
self.y_inlet_tf)
# physics "informed" neural networks
[self.c_eqns_pred,
self.u_eqns_pred,
self.v_eqns_pred,
self.p_eqns_pred] = self.net_cuvp(self.t_eqns_tf,
self.x_eqns_tf,
self.y_eqns_tf)
[self.e1_eqns_pred,
self.e2_eqns_pred,
self.e3_eqns_pred,
self.e4_eqns_pred] = Navier_Stokes_2D(self.c_eqns_pred,
self.u_eqns_pred,
self.v_eqns_pred,
self.p_eqns_pred,
self.t_eqns_tf,
self.x_eqns_tf,
self.y_eqns_tf,
self.Pec,
self.Rey)
# gradients required for the lift and drag forces
[self.u_x_eqns_pred,
self.v_x_eqns_pred,
self.u_y_eqns_pred,
self.v_y_eqns_pred] = Gradient_Velocity_2D(self.u_eqns_pred,
self.v_eqns_pred,
self.x_eqns_tf,
self.y_eqns_tf)
# loss
self.loss = mean_squared_error(self.c_data_pred, self.c_data_tf) + \
mean_squared_error(self.u_inlet_pred, self.u_inlet_tf) + \
mean_squared_error(self.v_inlet_pred, self.v_inlet_tf) + \
mean_squared_error(self.e1_eqns_pred, 0.0) + \
mean_squared_error(self.e2_eqns_pred, 0.0) + \
mean_squared_error(self.e3_eqns_pred, 0.0) + \
mean_squared_error(self.e4_eqns_pred, 0.0)
# optimizers
self.learning_rate = tf.placeholder(tf.float32, shape=[])
self.optimizer = tf.train.AdamOptimizer(learning_rate = self.learning_rate)
self.train_op = self.optimizer.minimize(self.loss)
self.sess = tf_session()
def train(self, total_time, learning_rate):
N_data = self.t_data.shape[0]
N_eqns = self.t_eqns.shape[0]
start_time = time.time()
running_time = 0
it = 0
while running_time < total_time:
idx_data = np.random.choice(N_data, min(self.batch_size, N_data))
idx_eqns = np.random.choice(N_eqns, self.batch_size)
(t_data_batch,
x_data_batch,
y_data_batch,
c_data_batch) = (self.t_data[idx_data,:],
self.x_data[idx_data,:],
self.y_data[idx_data,:],
self.c_data[idx_data,:])
(t_eqns_batch,
x_eqns_batch,
y_eqns_batch) = (self.t_eqns[idx_eqns,:],
self.x_eqns[idx_eqns,:],
self.y_eqns[idx_eqns,:])
tf_dict = {self.t_data_tf: t_data_batch,
self.x_data_tf: x_data_batch,
self.y_data_tf: y_data_batch,
self.c_data_tf: c_data_batch,
self.t_eqns_tf: t_eqns_batch,
self.x_eqns_tf: x_eqns_batch,
self.y_eqns_tf: y_eqns_batch,
self.t_inlet_tf: self.t_inlet,
self.x_inlet_tf: self.x_inlet,
self.y_inlet_tf: self.y_inlet,
self.u_inlet_tf: self.u_inlet,
self.v_inlet_tf: self.v_inlet,
self.learning_rate: learning_rate}
self.sess.run([self.train_op], tf_dict)
# Print
if it % 10 == 0:
elapsed = time.time() - start_time
running_time += elapsed/3600.0
[loss_value,
learning_rate_value] = self.sess.run([self.loss,
self.learning_rate], tf_dict)
print('It: %d, Loss: %.3e, Time: %.2fs, Running Time: %.2fh, Learning Rate: %.1e'
%(it, loss_value, elapsed, running_time, learning_rate_value))
sys.stdout.flush()
start_time = time.time()
it += 1
def predict(self, t_star, x_star, y_star):
tf_dict = {self.t_data_tf: t_star, self.x_data_tf: x_star, self.y_data_tf: y_star}
c_star = self.sess.run(self.c_data_pred, tf_dict)
u_star = self.sess.run(self.u_data_pred, tf_dict)
v_star = self.sess.run(self.v_data_pred, tf_dict)
p_star = self.sess.run(self.p_data_pred, tf_dict)
return c_star, u_star, v_star, p_star
def predict_drag_lift(self, t_cyl):
viscosity = (1.0/self.Rey)
theta = np.linspace(0.0,2*np.pi,200)[:,None] # N x 1
d_theta = theta[1,0] - theta[0,0]
x_cyl = 0.5*np.cos(theta) # N x 1
y_cyl = 0.5*np.sin(theta) # N x 1
N = x_cyl.shape[0]
T = t_cyl.shape[0]
T_star = np.tile(t_cyl, (1,N)).T # N x T
X_star = np.tile(x_cyl, (1,T)) # N x T
Y_star = np.tile(y_cyl, (1,T)) # N x T
t_star = np.reshape(T_star,[-1,1]) # NT x 1
x_star = np.reshape(X_star,[-1,1]) # NT x 1
y_star = np.reshape(Y_star,[-1,1]) # NT x 1
tf_dict = {self.t_eqns_tf: t_star, self.x_eqns_tf: x_star, self.y_eqns_tf: y_star}
[p_star,
u_x_star,
u_y_star,
v_x_star,
v_y_star] = self.sess.run([self.p_eqns_pred,
self.u_x_eqns_pred,
self.u_y_eqns_pred,
self.v_x_eqns_pred,
self.v_y_eqns_pred], tf_dict)
P_star = np.reshape(p_star, [N,T]) # N x T
P_star = P_star - np.mean(P_star, axis=0)
U_x_star = np.reshape(u_x_star, [N,T]) # N x T
U_y_star = np.reshape(u_y_star, [N,T]) # N x T
V_x_star = np.reshape(v_x_star, [N,T]) # N x T
V_y_star = np.reshape(v_y_star, [N,T]) # N x T
INT0 = (-P_star[0:-1,:] + 2*viscosity*U_x_star[0:-1,:])*X_star[0:-1,:] + viscosity*(U_y_star[0:-1,:] + V_x_star[0:-1,:])*Y_star[0:-1,:]
INT1 = (-P_star[1: , :] + 2*viscosity*U_x_star[1: , :])*X_star[1: , :] + viscosity*(U_y_star[1: , :] + V_x_star[1: , :])*Y_star[1: , :]
F_D = 0.5*np.sum(INT0.T+INT1.T, axis = 1)*d_theta # (T,)
INT0 = (-P_star[0:-1,:] + 2*viscosity*V_y_star[0:-1,:])*Y_star[0:-1,:] + viscosity*(U_y_star[0:-1,:] + V_x_star[0:-1,:])*X_star[0:-1,:]
INT1 = (-P_star[1: , :] + 2*viscosity*V_y_star[1: , :])*Y_star[1: , :] + viscosity*(U_y_star[1: , :] + V_x_star[1: , :])*X_star[1: , :]
F_L = 0.5*np.sum(INT0.T+INT1.T, axis = 1)*d_theta # (T,)
return F_D, F_L
if __name__ == "__main__":
batch_size = 10000
layers = [3] + 10*[4*50] + [4]
# Load Data
data = scipy.io.loadmat('../Data/Cylinder2D_Re200Pec2000_Neumann_Streaks.mat')
t_star = data['t_data'] # T x 1
x_star = data['x_data'] # N x 1
y_star = data['y_data'] # N x 1
T = t_star.shape[0]
N = x_star.shape[0]
U_star = data['U_data'] # N x T
V_star = data['V_data'] # N x T
P_star = data['P_data'] # N x T
C_star = data['C_data'] # N x T
# Rearrange Data
T_star = np.tile(t_star, (1,N)).T # N x T
X_star = np.tile(x_star, (1,T)) # N x T
Y_star = np.tile(y_star, (1,T)) # N x T
t = T_star.flatten()[:,None] # NT x 1
x = X_star.flatten()[:,None] # NT x 1
y = Y_star.flatten()[:,None] # NT x 1
u = U_star.flatten()[:,None] # NT x 1
v = V_star.flatten()[:,None] # NT x 1
p = P_star.flatten()[:,None] # NT x 1
c = C_star.flatten()[:,None] # NT x 1
######################################################################
######################## Training Data ###############################
######################################################################
T_data = T # int(sys.argv[1])
N_data = N # int(sys.argv[2])
idx_t = np.concatenate([np.array([0]), np.random.choice(T-2, T_data-2, replace=False)+1, np.array([T-1])] )
idx_x = np.random.choice(N, N_data, replace=False)
t_data = T_star[:, idx_t][idx_x,:].flatten()[:,None]
x_data = X_star[:, idx_t][idx_x,:].flatten()[:,None]
y_data = Y_star[:, idx_t][idx_x,:].flatten()[:,None]
c_data = C_star[:, idx_t][idx_x,:].flatten()[:,None]
T_eqns = T
N_eqns = N
idx_t = np.concatenate([np.array([0]), np.random.choice(T-2, T_eqns-2, replace=False)+1, np.array([T-1])] )
idx_x = np.random.choice(N, N_eqns, replace=False)
t_eqns = T_star[:, idx_t][idx_x,:].flatten()[:,None]
x_eqns = X_star[:, idx_t][idx_x,:].flatten()[:,None]
y_eqns = Y_star[:, idx_t][idx_x,:].flatten()[:,None]
# Training Data on velocity (inlet)
t_inlet = t[x == x.min()][:,None]
x_inlet = x[x == x.min()][:,None]
y_inlet = y[x == x.min()][:,None]
u_inlet = u[x == x.min()][:,None]
v_inlet = v[x == x.min()][:,None]
# Training
model = HFM(t_data, x_data, y_data, c_data,
t_eqns, x_eqns, y_eqns,
t_inlet, x_inlet, y_inlet, u_inlet, v_inlet,
layers, batch_size,
Pec = 2000, Rey = 200)
model.train(total_time = 40, learning_rate=1e-3) # total_time is in hours (see running_time in train)
F_D, F_L = model.predict_drag_lift(t_star)
# Test Data
snap = np.array([100])
t_test = T_star[:,snap]
x_test = X_star[:,snap]
y_test = Y_star[:,snap]
c_test = C_star[:,snap]
u_test = U_star[:,snap]
v_test = V_star[:,snap]
p_test = P_star[:,snap]
# Prediction
c_pred, u_pred, v_pred, p_pred = model.predict(t_test, x_test, y_test)
# Error
error_c = relative_error(c_pred, c_test)
error_u = relative_error(u_pred, u_test)
error_v = relative_error(v_pred, v_test)
error_p = relative_error(p_pred - np.mean(p_pred), p_test - np.mean(p_test))
print('Error c: %e' % (error_c))
print('Error u: %e' % (error_u))
print('Error v: %e' % (error_v))
print('Error p: %e' % (error_p))
################# Save Data ###########################
C_pred = 0*C_star
U_pred = 0*U_star
V_pred = 0*V_star
P_pred = 0*P_star
for snap in range(0,t_star.shape[0]):
t_test = T_star[:,snap:snap+1]
x_test = X_star[:,snap:snap+1]
y_test = Y_star[:,snap:snap+1]
c_test = C_star[:,snap:snap+1]
u_test = U_star[:,snap:snap+1]
v_test = V_star[:,snap:snap+1]
p_test = P_star[:,snap:snap+1]
# Prediction
c_pred, u_pred, v_pred, p_pred = model.predict(t_test, x_test, y_test)
C_pred[:,snap:snap+1] = c_pred
U_pred[:,snap:snap+1] = u_pred
V_pred[:,snap:snap+1] = v_pred
P_pred[:,snap:snap+1] = p_pred
# Error
error_c = relative_error(c_pred, c_test)
error_u = relative_error(u_pred, u_test)
error_v = relative_error(v_pred, v_test)
error_p = relative_error(p_pred - np.mean(p_pred), p_test - np.mean(p_test))
print('Error c: %e' % (error_c))
print('Error u: %e' % (error_u))
print('Error v: %e' % (error_v))
print('Error p: %e' % (error_p))
scipy.io.savemat('../Results/Cylinder2D_Neumann_Streaks_results_%s.mat' %(time.strftime('%d_%m_%Y')),
{'C_pred':C_pred, 'U_pred':U_pred, 'V_pred':V_pred, 'P_pred':P_pred, 'F_L':F_L, 'F_D':F_D})
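The `INT0`/`INT1` lines in `predict_drag_lift` implement a trapezoidal rule around the cylinder surface: consecutive samples in `theta` are paired and averaged, then scaled by `d_theta`. A minimal check of that same quadrature pattern on a known integrand (the integrand here is made up purely for illustration):

```python
import numpy as np

# Same sampling as predict_drag_lift: 200 theta values around the circle.
theta = np.linspace(0.0, 2*np.pi, 200)[:, None]
d_theta = theta[1, 0] - theta[0, 0]

# Known test integrand: the integral of cos^2 over one full period is pi.
f = np.cos(theta)**2

# Trapezoidal rule, written in the INT0/INT1 style: average each pair of
# consecutive samples, sum, and multiply by the step size.
integral = 0.5*np.sum(f[:-1] + f[1:])*d_theta
print(integral)  # close to pi
```

For smooth periodic integrands the trapezoidal rule on equispaced nodes is extremely accurate, which is why this simple scheme suffices for the surface traction integrals.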
"""
@author: Maziar Raissi
"""
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import numpy as np
import scipy.io
import time
import sys
from utilities import neural_net, Navier_Stokes_2D, Gradient_Velocity_2D, \
tf_session, mean_squared_error, relative_error
class HFM(object):
# notational conventions
# _tf: placeholders for input/output data and points used to regress the equations
# _pred: output of neural network
# _eqns: points used to regress the equations
# _data: input-output data
# _inlet: input-output data at the inlet
# _cyl: points used to regress the no slip boundary condition
# _star: predictions
def __init__(self, t_data, x_data, y_data, c_data,
t_eqns, x_eqns, y_eqns,
t_inlet, x_inlet, y_inlet, u_inlet, v_inlet,
t_cyl, x_cyl, y_cyl,
layers, batch_size,
Pec, Rey):
# specs
self.layers = layers
self.batch_size = batch_size
# flow properties
self.Pec = Pec
self.Rey = Rey
# data
[self.t_data, self.x_data, self.y_data, self.c_data] = [t_data, x_data, y_data, c_data]
[self.t_eqns, self.x_eqns, self.y_eqns] = [t_eqns, x_eqns, y_eqns]
[self.t_inlet, self.x_inlet, self.y_inlet, self.u_inlet, self.v_inlet] = [t_inlet, x_inlet, y_inlet, u_inlet, v_inlet]
[self.t_cyl, self.x_cyl, self.y_cyl] = [t_cyl, x_cyl, y_cyl]
# placeholders
[self.t_data_tf, self.x_data_tf, self.y_data_tf, self.c_data_tf] = [tf.placeholder(tf.float32, shape=[None, 1]) for _ in range(4)]
[self.t_eqns_tf, self.x_eqns_tf, self.y_eqns_tf] = [tf.placeholder(tf.float32, shape=[None, 1]) for _ in range(3)]
[self.t_inlet_tf, self.x_inlet_tf, self.y_inlet_tf, self.u_inlet_tf, self.v_inlet_tf] = [tf.placeholder(tf.float32, shape=[None, 1]) for _ in range(5)]
[self.t_cyl_tf, self.x_cyl_tf, self.y_cyl_tf] = [tf.placeholder(tf.float32, shape=[None, 1]) for _ in range(3)]
# physics "uninformed" neural networks
self.net_cuvp = neural_net(self.t_data, self.x_data, self.y_data, layers = self.layers)
[self.c_data_pred,
self.u_data_pred,
self.v_data_pred,
self.p_data_pred] = self.net_cuvp(self.t_data_tf,
self.x_data_tf,
self.y_data_tf)
# physics "uninformed" neural networks (data at the inlet)
[_,
self.u_inlet_pred,
self.v_inlet_pred,
_] = self.net_cuvp(self.t_inlet_tf,
self.x_inlet_tf,
self.y_inlet_tf)
# physics "uninformed" neural networks (data on the cylinder)
[_,
self.u_cyl_pred,
self.v_cyl_pred,
_] = self.net_cuvp(self.t_cyl_tf,
self.x_cyl_tf,
self.y_cyl_tf)
# physics "informed" neural networks
[self.c_eqns_pred,
self.u_eqns_pred,
self.v_eqns_pred,
self.p_eqns_pred] = self.net_cuvp(self.t_eqns_tf,
self.x_eqns_tf,
self.y_eqns_tf)
[self.e1_eqns_pred,
self.e2_eqns_pred,
self.e3_eqns_pred,
self.e4_eqns_pred] = Navier_Stokes_2D(self.c_eqns_pred,
self.u_eqns_pred,
self.v_eqns_pred,
self.p_eqns_pred,
self.t_eqns_tf,
self.x_eqns_tf,
self.y_eqns_tf,
self.Pec,
self.Rey)
# gradients required for the lift and drag forces
[self.u_x_eqns_pred,
self.v_x_eqns_pred,
self.u_y_eqns_pred,
self.v_y_eqns_pred] = Gradient_Velocity_2D(self.u_eqns_pred,
self.v_eqns_pred,
self.x_eqns_tf,
self.y_eqns_tf)
# loss
self.loss = mean_squared_error(self.c_data_pred, self.c_data_tf) + \
mean_squared_error(self.u_inlet_pred, self.u_inlet_tf) + \
mean_squared_error(self.v_inlet_pred, self.v_inlet_tf) + \
mean_squared_error(self.u_cyl_pred, 0.0) + \
mean_squared_error(self.v_cyl_pred, 0.0) + \
mean_squared_error(self.e1_eqns_pred, 0.0) + \
mean_squared_error(self.e2_eqns_pred, 0.0) + \
mean_squared_error(self.e3_eqns_pred, 0.0) + \
mean_squared_error(self.e4_eqns_pred, 0.0)
# optimizers
self.learning_rate = tf.placeholder(tf.float32, shape=[])
self.optimizer = tf.train.AdamOptimizer(learning_rate = self.learning_rate)
self.train_op = self.optimizer.minimize(self.loss)
self.sess = tf_session()
def train(self, total_time, learning_rate):
N_data = self.t_data.shape[0]
N_eqns = self.t_eqns.shape[0]
start_time = time.time()
running_time = 0
it = 0
while running_time < total_time:
idx_data = np.random.choice(N_data, min(self.batch_size, N_data))
idx_eqns = np.random.choice(N_eqns, self.batch_size)
(t_data_batch,
x_data_batch,
y_data_batch,
c_data_batch) = (self.t_data[idx_data,:],
self.x_data[idx_data,:],
self.y_data[idx_data,:],
self.c_data[idx_data,:])
(t_eqns_batch,
x_eqns_batch,
y_eqns_batch) = (self.t_eqns[idx_eqns,:],
self.x_eqns[idx_eqns,:],
self.y_eqns[idx_eqns,:])
tf_dict = {self.t_data_tf: t_data_batch,
self.x_data_tf: x_data_batch,
self.y_data_tf: y_data_batch,
self.c_data_tf: c_data_batch,
self.t_eqns_tf: t_eqns_batch,
self.x_eqns_tf: x_eqns_batch,
self.y_eqns_tf: y_eqns_batch,
self.t_inlet_tf: self.t_inlet,
self.x_inlet_tf: self.x_inlet,
self.y_inlet_tf: self.y_inlet,
self.u_inlet_tf: self.u_inlet,
self.v_inlet_tf: self.v_inlet,
self.t_cyl_tf: self.t_cyl,
self.x_cyl_tf: self.x_cyl,
self.y_cyl_tf: self.y_cyl,
self.learning_rate: learning_rate}
self.sess.run([self.train_op], tf_dict)
# Print
if it % 10 == 0:
elapsed = time.time() - start_time
running_time += elapsed/3600.0
[loss_value,
learning_rate_value] = self.sess.run([self.loss,
self.learning_rate], tf_dict)
print('It: %d, Loss: %.3e, Time: %.2fs, Running Time: %.2fh, Learning Rate: %.1e'
%(it, loss_value, elapsed, running_time, learning_rate_value))
sys.stdout.flush()
start_time = time.time()
it += 1
def predict(self, t_star, x_star, y_star):
tf_dict = {self.t_data_tf: t_star, self.x_data_tf: x_star, self.y_data_tf: y_star}
c_star = self.sess.run(self.c_data_pred, tf_dict)
u_star = self.sess.run(self.u_data_pred, tf_dict)
v_star = self.sess.run(self.v_data_pred, tf_dict)
p_star = self.sess.run(self.p_data_pred, tf_dict)
return c_star, u_star, v_star, p_star
def predict_drag_lift(self, t_cyl):
viscosity = (1.0/self.Rey)
theta = np.linspace(0.0,2*np.pi,200)[:,None] # N x 1
d_theta = theta[1,0] - theta[0,0]
x_cyl = 0.5*np.cos(theta) # N x 1
y_cyl = 0.5*np.sin(theta) # N x 1
N = x_cyl.shape[0]
T = t_cyl.shape[0]
T_star = np.tile(t_cyl, (1,N)).T # N x T
X_star = np.tile(x_cyl, (1,T)) # N x T
Y_star = np.tile(y_cyl, (1,T)) # N x T
t_star = np.reshape(T_star,[-1,1]) # NT x 1
x_star = np.reshape(X_star,[-1,1]) # NT x 1
y_star = np.reshape(Y_star,[-1,1]) # NT x 1
tf_dict = {self.t_eqns_tf: t_star, self.x_eqns_tf: x_star, self.y_eqns_tf: y_star}
[p_star,
u_x_star,
u_y_star,
v_x_star,
v_y_star] = self.sess.run([self.p_eqns_pred,
self.u_x_eqns_pred,
self.u_y_eqns_pred,
self.v_x_eqns_pred,
self.v_y_eqns_pred], tf_dict)
P_star = np.reshape(p_star, [N,T]) # N x T
P_star = P_star - np.mean(P_star, axis=0)
U_x_star = np.reshape(u_x_star, [N,T]) # N x T
U_y_star = np.reshape(u_y_star, [N,T]) # N x T
V_x_star = np.reshape(v_x_star, [N,T]) # N x T
V_y_star = np.reshape(v_y_star, [N,T]) # N x T
INT0 = (-P_star[0:-1,:] + 2*viscosity*U_x_star[0:-1,:])*X_star[0:-1,:] + viscosity*(U_y_star[0:-1,:] + V_x_star[0:-1,:])*Y_star[0:-1,:]
INT1 = (-P_star[1: , :] + 2*viscosity*U_x_star[1: , :])*X_star[1: , :] + viscosity*(U_y_star[1: , :] + V_x_star[1: , :])*Y_star[1: , :]
F_D = 0.5*np.sum(INT0.T+INT1.T, axis = 1)*d_theta # (T,)
INT0 = (-P_star[0:-1,:] + 2*viscosity*V_y_star[0:-1,:])*Y_star[0:-1,:] + viscosity*(U_y_star[0:-1,:] + V_x_star[0:-1,:])*X_star[0:-1,:]
INT1 = (-P_star[1: , :] + 2*viscosity*V_y_star[1: , :])*Y_star[1: , :] + viscosity*(U_y_star[1: , :] + V_x_star[1: , :])*X_star[1: , :]
F_L = 0.5*np.sum(INT0.T+INT1.T, axis = 1)*d_theta # (T,)
return F_D, F_L
if __name__ == "__main__":
batch_size = 10000
layers = [3] + 10*[4*50] + [4]
# Load Data
data = scipy.io.loadmat('../Data/Cylinder2D.mat')
t_star = data['t_star'] # T x 1
x_star = data['x_star'] # N x 1
y_star = data['y_star'] # N x 1
T = t_star.shape[0]
N = x_star.shape[0]
U_star = data['U_star'] # N x T
V_star = data['V_star'] # N x T
P_star = data['P_star'] # N x T
C_star = data['C_star'] # N x T
# Rearrange Data
T_star = np.tile(t_star, (1,N)).T # N x T
X_star = np.tile(x_star, (1,T)) # N x T
Y_star = np.tile(y_star, (1,T)) # N x T
t = T_star.flatten()[:,None] # NT x 1
x = X_star.flatten()[:,None] # NT x 1
y = Y_star.flatten()[:,None] # NT x 1
u = U_star.flatten()[:,None] # NT x 1
v = V_star.flatten()[:,None] # NT x 1
p = P_star.flatten()[:,None] # NT x 1
c = C_star.flatten()[:,None] # NT x 1
######################################################################
######################## Training Data ###############################
######################################################################
T_data = T # int(sys.argv[1])
N_data = N # int(sys.argv[2])
idx_t = np.concatenate([np.array([0]), np.random.choice(T-2, T_data-2, replace=False)+1, np.array([T-1])] )
idx_x = np.random.choice(N, N_data, replace=False)
t_data = T_star[:, idx_t][idx_x,:].flatten()[:,None]
x_data = X_star[:, idx_t][idx_x,:].flatten()[:,None]
y_data = Y_star[:, idx_t][idx_x,:].flatten()[:,None]
c_data = C_star[:, idx_t][idx_x,:].flatten()[:,None]
T_eqns = T
N_eqns = N
idx_t = np.concatenate([np.array([0]), np.random.choice(T-2, T_eqns-2, replace=False)+1, np.array([T-1])] )
idx_x = np.random.choice(N, N_eqns, replace=False)
t_eqns = T_star[:, idx_t][idx_x,:].flatten()[:,None]
x_eqns = X_star[:, idx_t][idx_x,:].flatten()[:,None]
y_eqns = Y_star[:, idx_t][idx_x,:].flatten()[:,None]
# Training Data on velocity (inlet)
t_inlet = t[x == x.min()][:,None]
x_inlet = x[x == x.min()][:,None]
y_inlet = y[x == x.min()][:,None]
u_inlet = u[x == x.min()][:,None]
v_inlet = v[x == x.min()][:,None]
# Training Data on velocity (no-slip condition on the cylinder surface)
# Note: exact floating-point equality assumes the mesh stores points lying
# exactly on the radius-0.5 cylinder; otherwise np.isclose would be safer.
t_cyl = t[x**2 + y**2 == 0.25][:,None]
x_cyl = x[x**2 + y**2 == 0.25][:,None]
y_cyl = y[x**2 + y**2 == 0.25][:,None]
# Training
model = HFM(t_data, x_data, y_data, c_data,
t_eqns, x_eqns, y_eqns,
t_inlet, x_inlet, y_inlet, u_inlet, v_inlet,
t_cyl, x_cyl, y_cyl,
layers, batch_size,
Pec = 100, Rey = 100)
model.train(total_time = 40, learning_rate=1e-3) # total_time is in hours (see running_time in train)
F_D, F_L = model.predict_drag_lift(t_star)
# Test Data
snap = np.array([100])
t_test = T_star[:,snap]
x_test = X_star[:,snap]
y_test = Y_star[:,snap]
c_test = C_star[:,snap]
u_test = U_star[:,snap]
v_test = V_star[:,snap]
p_test = P_star[:,snap]
# Prediction
c_pred, u_pred, v_pred, p_pred = model.predict(t_test, x_test, y_test)
# Error
error_c = relative_error(c_pred, c_test)
error_u = relative_error(u_pred, u_test)
error_v = relative_error(v_pred, v_test)
error_p = relative_error(p_pred - np.mean(p_pred), p_test - np.mean(p_test))
print('Error c: %e' % (error_c))
print('Error u: %e' % (error_u))
print('Error v: %e' % (error_v))
print('Error p: %e' % (error_p))
################# Save Data ###########################
C_pred = 0*C_star
U_pred = 0*U_star
V_pred = 0*V_star
P_pred = 0*P_star
for snap in range(0,t_star.shape[0]):
t_test = T_star[:,snap:snap+1]
x_test = X_star[:,snap:snap+1]
y_test = Y_star[:,snap:snap+1]
c_test = C_star[:,snap:snap+1]
u_test = U_star[:,snap:snap+1]
v_test = V_star[:,snap:snap+1]
p_test = P_star[:,snap:snap+1]
# Prediction
c_pred, u_pred, v_pred, p_pred = model.predict(t_test, x_test, y_test)
C_pred[:,snap:snap+1] = c_pred
U_pred[:,snap:snap+1] = u_pred
V_pred[:,snap:snap+1] = v_pred
P_pred[:,snap:snap+1] = p_pred
# Error
error_c = relative_error(c_pred, c_test)
error_u = relative_error(u_pred, u_test)
error_v = relative_error(v_pred, v_test)
error_p = relative_error(p_pred - np.mean(p_pred), p_test - np.mean(p_test))
print('Error c: %e' % (error_c))
print('Error u: %e' % (error_u))
print('Error v: %e' % (error_v))
print('Error p: %e' % (error_p))
scipy.io.savemat('../Results/Cylinder2D_No_Slip_results_%s.mat' %(time.strftime('%d_%m_%Y')),
{'C_pred':C_pred, 'U_pred':U_pred, 'V_pred':V_pred, 'P_pred':P_pred, 'F_L':F_L, 'F_D':F_D})
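Every script in this dump repeats the same "Rearrange Data" step: `t_star` (T x 1) and the spatial coordinates (N x 1) are tiled into matching N x T grids and then flattened into NT x 1 columns, so each row of the flattened arrays pairs one spatial point with one time. A miniature version with made-up values, to show the layout:

```python
import numpy as np

# Tiny stand-in data (values are illustrative only).
t_star = np.array([[0.0], [0.1], [0.2]])   # T = 3 snapshots
x_star = np.array([[1.0], [2.0]])          # N = 2 spatial points
T, N = t_star.shape[0], x_star.shape[0]

T_star = np.tile(t_star, (1, N)).T         # N x T: row i holds all times
X_star = np.tile(x_star, (1, T))           # N x T: column j holds all points

t = T_star.flatten()[:, None]              # NT x 1
x = X_star.flatten()[:, None]              # NT x 1
```

Because `flatten` is row-major, consecutive rows of `t` and `x` walk through all times for one spatial point before moving to the next point, which is exactly the pairing the network's (t, x, y) inputs expect.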
"""
@author: Maziar Raissi
"""
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import numpy as np
import scipy.io
import time
import sys
from utilities import neural_net, Navier_Stokes_2D, Gradient_Velocity_2D, \
tf_session, mean_squared_error, relative_error
class HFM(object):
# notational conventions
# _tf: placeholders for input/output data and points used to regress the equations
# _pred: output of neural network
# _eqns: points used to regress the equations
# _data: input-output data
# _inlet: input-output data at the inlet
# _cyl: points used to regress the no slip boundary condition
# _star: predictions
def __init__(self, t_data, x_data, y_data, c_data,
t_eqns, x_eqns, y_eqns,
t_inlet, x_inlet, y_inlet, u_inlet, v_inlet,
t_cyl, x_cyl, y_cyl,
layers, batch_size,
Pec, Rey):
# specs
self.layers = layers
self.batch_size = batch_size
# flow properties
self.Pec = Pec
self.Rey = Rey
# data
[self.t_data, self.x_data, self.y_data, self.c_data] = [t_data, x_data, y_data, c_data]
[self.t_eqns, self.x_eqns, self.y_eqns] = [t_eqns, x_eqns, y_eqns]
[self.t_inlet, self.x_inlet, self.y_inlet, self.u_inlet, self.v_inlet] = [t_inlet, x_inlet, y_inlet, u_inlet, v_inlet]
[self.t_cyl, self.x_cyl, self.y_cyl] = [t_cyl, x_cyl, y_cyl]
# placeholders
[self.t_data_tf, self.x_data_tf, self.y_data_tf, self.c_data_tf] = [tf.placeholder(tf.float32, shape=[None, 1]) for _ in range(4)]
[self.t_eqns_tf, self.x_eqns_tf, self.y_eqns_tf] = [tf.placeholder(tf.float32, shape=[None, 1]) for _ in range(3)]
[self.t_inlet_tf, self.x_inlet_tf, self.y_inlet_tf, self.u_inlet_tf, self.v_inlet_tf] = [tf.placeholder(tf.float32, shape=[None, 1]) for _ in range(5)]
[self.t_cyl_tf, self.x_cyl_tf, self.y_cyl_tf] = [tf.placeholder(tf.float32, shape=[None, 1]) for _ in range(3)]
# physics "uninformed" neural networks
self.net_cuvp = neural_net(self.t_data, self.x_data, self.y_data, layers = self.layers)
[self.c_data_pred,
self.u_data_pred,
self.v_data_pred,
self.p_data_pred] = self.net_cuvp(self.t_data_tf,
self.x_data_tf,
self.y_data_tf)
# physics "uninformed" neural networks (data at the inlet)
[_,
self.u_inlet_pred,
self.v_inlet_pred,
_] = self.net_cuvp(self.t_inlet_tf,
self.x_inlet_tf,
self.y_inlet_tf)
# physics "uninformed" neural networks (data on the cylinder)
[_,
self.u_cyl_pred,
self.v_cyl_pred,
_] = self.net_cuvp(self.t_cyl_tf,
self.x_cyl_tf,
self.y_cyl_tf)
# physics "informed" neural networks
[self.c_eqns_pred,
self.u_eqns_pred,
self.v_eqns_pred,
self.p_eqns_pred] = self.net_cuvp(self.t_eqns_tf,
self.x_eqns_tf,
self.y_eqns_tf)
[self.e1_eqns_pred,
self.e2_eqns_pred,
self.e3_eqns_pred,
self.e4_eqns_pred] = Navier_Stokes_2D(self.c_eqns_pred,
self.u_eqns_pred,
self.v_eqns_pred,
self.p_eqns_pred,
self.t_eqns_tf,
self.x_eqns_tf,
self.y_eqns_tf,
self.Pec,
self.Rey)
# gradients required for the lift and drag forces
[self.u_x_eqns_pred,
self.v_x_eqns_pred,
self.u_y_eqns_pred,
self.v_y_eqns_pred] = Gradient_Velocity_2D(self.u_eqns_pred,
self.v_eqns_pred,
self.x_eqns_tf,
self.y_eqns_tf)
# loss
self.loss = mean_squared_error(self.c_data_pred, self.c_data_tf) + \
mean_squared_error(self.u_inlet_pred, self.u_inlet_tf) + \
mean_squared_error(self.v_inlet_pred, self.v_inlet_tf) + \
mean_squared_error(self.u_cyl_pred, 0.0) + \
mean_squared_error(self.v_cyl_pred, 0.0) + \
mean_squared_error(self.e1_eqns_pred, 0.0) + \
mean_squared_error(self.e2_eqns_pred, 0.0) + \
mean_squared_error(self.e3_eqns_pred, 0.0) + \
mean_squared_error(self.e4_eqns_pred, 0.0)
# optimizers
self.learning_rate = tf.placeholder(tf.float32, shape=[])
self.optimizer = tf.train.AdamOptimizer(learning_rate = self.learning_rate)
self.train_op = self.optimizer.minimize(self.loss)
self.sess = tf_session()
def train(self, total_time, learning_rate):
N_data = self.t_data.shape[0]
N_eqns = self.t_eqns.shape[0]
start_time = time.time()
running_time = 0
it = 0
while running_time < total_time:
idx_data = np.random.choice(N_data, min(self.batch_size, N_data))
idx_eqns = np.random.choice(N_eqns, self.batch_size)
(t_data_batch,
x_data_batch,
y_data_batch,
c_data_batch) = (self.t_data[idx_data,:],
self.x_data[idx_data,:],
self.y_data[idx_data,:],
self.c_data[idx_data,:])
(t_eqns_batch,
x_eqns_batch,
y_eqns_batch) = (self.t_eqns[idx_eqns,:],
self.x_eqns[idx_eqns,:],
self.y_eqns[idx_eqns,:])
tf_dict = {self.t_data_tf: t_data_batch,
self.x_data_tf: x_data_batch,
self.y_data_tf: y_data_batch,
self.c_data_tf: c_data_batch,
self.t_eqns_tf: t_eqns_batch,
self.x_eqns_tf: x_eqns_batch,
self.y_eqns_tf: y_eqns_batch,
self.t_inlet_tf: self.t_inlet,
self.x_inlet_tf: self.x_inlet,
self.y_inlet_tf: self.y_inlet,
self.u_inlet_tf: self.u_inlet,
self.v_inlet_tf: self.v_inlet,
self.t_cyl_tf: self.t_cyl,
self.x_cyl_tf: self.x_cyl,
self.y_cyl_tf: self.y_cyl,
self.learning_rate: learning_rate}
self.sess.run([self.train_op], tf_dict)
# Print
if it % 10 == 0:
elapsed = time.time() - start_time
running_time += elapsed/3600.0
[loss_value,
learning_rate_value] = self.sess.run([self.loss,
self.learning_rate], tf_dict)
print('It: %d, Loss: %.3e, Time: %.2fs, Running Time: %.2fh, Learning Rate: %.1e'
%(it, loss_value, elapsed, running_time, learning_rate_value))
sys.stdout.flush()
start_time = time.time()
it += 1
def predict(self, t_star, x_star, y_star):
tf_dict = {self.t_data_tf: t_star, self.x_data_tf: x_star, self.y_data_tf: y_star}
c_star = self.sess.run(self.c_data_pred, tf_dict)
u_star = self.sess.run(self.u_data_pred, tf_dict)
v_star = self.sess.run(self.v_data_pred, tf_dict)
p_star = self.sess.run(self.p_data_pred, tf_dict)
return c_star, u_star, v_star, p_star
def predict_drag_lift(self, t_cyl):
viscosity = (1.0/self.Rey)
theta = np.linspace(0.0,2*np.pi,200)[:,None] # N x 1
d_theta = theta[1,0] - theta[0,0]
x_cyl = 0.5*np.cos(theta) # N x 1
y_cyl = 0.5*np.sin(theta) # N x 1
N = x_cyl.shape[0]
T = t_cyl.shape[0]
T_star = np.tile(t_cyl, (1,N)).T # N x T
X_star = np.tile(x_cyl, (1,T)) # N x T
Y_star = np.tile(y_cyl, (1,T)) # N x T
t_star = np.reshape(T_star,[-1,1]) # NT x 1
x_star = np.reshape(X_star,[-1,1]) # NT x 1
y_star = np.reshape(Y_star,[-1,1]) # NT x 1
tf_dict = {self.t_eqns_tf: t_star, self.x_eqns_tf: x_star, self.y_eqns_tf: y_star}
[p_star,
u_x_star,
u_y_star,
v_x_star,
v_y_star] = self.sess.run([self.p_eqns_pred,
self.u_x_eqns_pred,
self.u_y_eqns_pred,
self.v_x_eqns_pred,
self.v_y_eqns_pred], tf_dict)
P_star = np.reshape(p_star, [N,T]) # N x T
P_star = P_star - np.mean(P_star, axis=0)
U_x_star = np.reshape(u_x_star, [N,T]) # N x T
U_y_star = np.reshape(u_y_star, [N,T]) # N x T
V_x_star = np.reshape(v_x_star, [N,T]) # N x T
V_y_star = np.reshape(v_y_star, [N,T]) # N x T
INT0 = (-P_star[0:-1,:] + 2*viscosity*U_x_star[0:-1,:])*X_star[0:-1,:] + viscosity*(U_y_star[0:-1,:] + V_x_star[0:-1,:])*Y_star[0:-1,:]
INT1 = (-P_star[1: , :] + 2*viscosity*U_x_star[1: , :])*X_star[1: , :] + viscosity*(U_y_star[1: , :] + V_x_star[1: , :])*Y_star[1: , :]
F_D = 0.5*np.sum(INT0.T+INT1.T, axis = 1)*d_theta # (T,)
INT0 = (-P_star[0:-1,:] + 2*viscosity*V_y_star[0:-1,:])*Y_star[0:-1,:] + viscosity*(U_y_star[0:-1,:] + V_x_star[0:-1,:])*X_star[0:-1,:]
INT1 = (-P_star[1: , :] + 2*viscosity*V_y_star[1: , :])*Y_star[1: , :] + viscosity*(U_y_star[1: , :] + V_x_star[1: , :])*X_star[1: , :]
F_L = 0.5*np.sum(INT0.T+INT1.T, axis = 1)*d_theta # (T,)
return F_D, F_L
if __name__ == "__main__":
batch_size = 10000
layers = [3] + 10*[4*50] + [4]
# Load Data
data = scipy.io.loadmat('../Data/Cylinder2D_Re200Pec2000_Dirichlet_Streaks.mat')
t_star = data['t_data'] # T x 1
x_star = data['x_data'] # N x 1
y_star = data['y_data'] # N x 1
T = t_star.shape[0]
N = x_star.shape[0]
U_star = data['U_data'] # N x T
V_star = data['V_data'] # N x T
P_star = data['P_data'] # N x T
C_star = data['C_data'] # N x T
# Rearrange Data
T_star = np.tile(t_star, (1,N)).T # N x T
X_star = np.tile(x_star, (1,T)) # N x T
Y_star = np.tile(y_star, (1,T)) # N x T
t = T_star.flatten()[:,None] # NT x 1
x = X_star.flatten()[:,None] # NT x 1
y = Y_star.flatten()[:,None] # NT x 1
u = U_star.flatten()[:,None] # NT x 1
v = V_star.flatten()[:,None] # NT x 1
p = P_star.flatten()[:,None] # NT x 1
c = C_star.flatten()[:,None] # NT x 1
######################################################################
######################## Training Data ###############################
######################################################################
T_data = T # int(sys.argv[1])
N_data = N # int(sys.argv[2])
idx_t = np.concatenate([np.array([0]), np.random.choice(T-2, T_data-2, replace=False)+1, np.array([T-1])] )
idx_x = np.random.choice(N, N_data, replace=False)
t_data = T_star[:, idx_t][idx_x,:].flatten()[:,None]
x_data = X_star[:, idx_t][idx_x,:].flatten()[:,None]
y_data = Y_star[:, idx_t][idx_x,:].flatten()[:,None]
c_data = C_star[:, idx_t][idx_x,:].flatten()[:,None]
T_eqns = T
N_eqns = N
idx_t = np.concatenate([np.array([0]), np.random.choice(T-2, T_eqns-2, replace=False)+1, np.array([T-1])] )
idx_x = np.random.choice(N, N_eqns, replace=False)
t_eqns = T_star[:, idx_t][idx_x,:].flatten()[:,None]
x_eqns = X_star[:, idx_t][idx_x,:].flatten()[:,None]
y_eqns = Y_star[:, idx_t][idx_x,:].flatten()[:,None]
# Training Data on velocity (inlet)
t_inlet = t[x == x.min()][:,None]
x_inlet = x[x == x.min()][:,None]
y_inlet = y[x == x.min()][:,None]
u_inlet = u[x == x.min()][:,None]
v_inlet = v[x == x.min()][:,None]
# Training Data on velocity (no-slip condition on the cylinder surface)
# Note: exact floating-point equality assumes the mesh stores points lying
# exactly on the radius-0.5 cylinder; otherwise np.isclose would be safer.
t_cyl = t[x**2 + y**2 == 0.25][:,None]
x_cyl = x[x**2 + y**2 == 0.25][:,None]
y_cyl = y[x**2 + y**2 == 0.25][:,None]
# Training
model = HFM(t_data, x_data, y_data, c_data,
t_eqns, x_eqns, y_eqns,
t_inlet, x_inlet, y_inlet, u_inlet, v_inlet,
t_cyl, x_cyl, y_cyl,
layers, batch_size,
Pec = 2000, Rey = 200)
model.train(total_time = 40, learning_rate=1e-3) # total_time is in hours (see running_time in train)
F_D, F_L = model.predict_drag_lift(t_star)
# Test Data
snap = np.array([100])
t_test = T_star[:,snap]
x_test = X_star[:,snap]
y_test = Y_star[:,snap]
c_test = C_star[:,snap]
u_test = U_star[:,snap]
v_test = V_star[:,snap]
p_test = P_star[:,snap]
# Prediction
c_pred, u_pred, v_pred, p_pred = model.predict(t_test, x_test, y_test)
# Error
error_c = relative_error(c_pred, c_test)
error_u = relative_error(u_pred, u_test)
error_v = relative_error(v_pred, v_test)
error_p = relative_error(p_pred - np.mean(p_pred), p_test - np.mean(p_test))
print('Error c: %e' % (error_c))
print('Error u: %e' % (error_u))
print('Error v: %e' % (error_v))
print('Error p: %e' % (error_p))
################# Save Data ###########################
C_pred = 0*C_star
U_pred = 0*U_star
V_pred = 0*V_star
P_pred = 0*P_star
for snap in range(0,t_star.shape[0]):
t_test = T_star[:,snap:snap+1]
x_test = X_star[:,snap:snap+1]
y_test = Y_star[:,snap:snap+1]
c_test = C_star[:,snap:snap+1]
u_test = U_star[:,snap:snap+1]
v_test = V_star[:,snap:snap+1]
p_test = P_star[:,snap:snap+1]
# Prediction
c_pred, u_pred, v_pred, p_pred = model.predict(t_test, x_test, y_test)
C_pred[:,snap:snap+1] = c_pred
U_pred[:,snap:snap+1] = u_pred
V_pred[:,snap:snap+1] = v_pred
P_pred[:,snap:snap+1] = p_pred
# Error
error_c = relative_error(c_pred, c_test)
error_u = relative_error(u_pred, u_test)
error_v = relative_error(v_pred, v_test)
error_p = relative_error(p_pred - np.mean(p_pred), p_test - np.mean(p_test))
print('Error c: %e' % (error_c))
print('Error u: %e' % (error_u))
print('Error v: %e' % (error_v))
print('Error p: %e' % (error_p))
scipy.io.savemat('../Results/Cylinder2D_No_Slip_Dirichlet_Streaks_results_%s.mat' %(time.strftime('%d_%m_%Y')),
{'C_pred':C_pred, 'U_pred':U_pred, 'V_pred':V_pred, 'P_pred':P_pred, 'F_L':F_L, 'F_D':F_D})
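# A note on the pressure error computed above: for incompressible Navier-Stokes the
# pressure is only identifiable up to an additive constant, which is why both the
# prediction and the reference are mean-centred before the relative error is taken.
# A minimal self-contained sketch of that effect; the `relative_error` definition
# below is an assumption matching the usual relative L2 norm from utilities.py:

```python
import numpy as np

def relative_error(pred, exact):
    # assumed definition: relative L2 error of pred against exact
    return np.sqrt(np.mean(np.square(pred - exact)) / np.mean(np.square(exact)))

# two pressure fields that differ only by a constant gauge offset
p_exact = np.sin(np.linspace(0.0, 2.0 * np.pi, 100))[:, None]
p_pred = p_exact + 3.0  # same field, shifted by an arbitrary constant

raw = relative_error(p_pred, p_exact)                 # large: the offset dominates
centred = relative_error(p_pred - np.mean(p_pred),
                         p_exact - np.mean(p_exact))  # tiny: gauge removed
```

# Mean-centring therefore measures agreement of the pressure *field*, not of the
# arbitrary constant the network happens to learn.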
"""
@author: Maziar Raissi
"""
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import numpy as np
import scipy.io
import time
import sys
from utilities import neural_net, Navier_Stokes_2D, Gradient_Velocity_2D, \
tf_session, mean_squared_error, relative_error
class HFM(object):
# notational conventions
# _tf: placeholders for input/output data and points used to regress the equations
# _pred: output of neural network
# _eqns: points used to regress the equations
# _data: input-output data
# _inlet: input-output data at the inlet
# _cyl: points used to regress the no-slip boundary condition
# _star: predictions
def __init__(self, t_data, x_data, y_data, c_data,
t_eqns, x_eqns, y_eqns,
t_inlet, x_inlet, y_inlet, u_inlet, v_inlet,
t_cyl, x_cyl, y_cyl,
layers, batch_size,
Pec, Rey):
# specs
self.layers = layers
self.batch_size = batch_size
# flow properties
self.Pec = Pec
self.Rey = Rey
# data
[self.t_data, self.x_data, self.y_data, self.c_data] = [t_data, x_data, y_data, c_data]
[self.t_eqns, self.x_eqns, self.y_eqns] = [t_eqns, x_eqns, y_eqns]
[self.t_inlet, self.x_inlet, self.y_inlet, self.u_inlet, self.v_inlet] = [t_inlet, x_inlet, y_inlet, u_inlet, v_inlet]
[self.t_cyl, self.x_cyl, self.y_cyl] = [t_cyl, x_cyl, y_cyl]
# placeholders
[self.t_data_tf, self.x_data_tf, self.y_data_tf, self.c_data_tf] = [tf.placeholder(tf.float32, shape=[None, 1]) for _ in range(4)]
[self.t_eqns_tf, self.x_eqns_tf, self.y_eqns_tf] = [tf.placeholder(tf.float32, shape=[None, 1]) for _ in range(3)]
[self.t_inlet_tf, self.x_inlet_tf, self.y_inlet_tf, self.u_inlet_tf, self.v_inlet_tf] = [tf.placeholder(tf.float32, shape=[None, 1]) for _ in range(5)]
[self.t_cyl_tf, self.x_cyl_tf, self.y_cyl_tf] = [tf.placeholder(tf.float32, shape=[None, 1]) for _ in range(3)]
# physics "uninformed" neural networks
# (the data arrays are passed to the constructor so the network can normalize its inputs)
self.net_cuvp = neural_net(self.t_data, self.x_data, self.y_data, layers = self.layers)
[self.c_data_pred,
self.u_data_pred,
self.v_data_pred,
self.p_data_pred] = self.net_cuvp(self.t_data_tf,
self.x_data_tf,
self.y_data_tf)
# physics "uninformed" neural networks (data at the inlet)
[_,
self.u_inlet_pred,
self.v_inlet_pred,
_] = self.net_cuvp(self.t_inlet_tf,
self.x_inlet_tf,
self.y_inlet_tf)
# physics "uninformed" neural networks (data on the cylinder)
[_,
self.u_cyl_pred,
self.v_cyl_pred,
_] = self.net_cuvp(self.t_cyl_tf,
self.x_cyl_tf,
self.y_cyl_tf)
# physics "informed" neural networks
[self.c_eqns_pred,
self.u_eqns_pred,
self.v_eqns_pred,
self.p_eqns_pred] = self.net_cuvp(self.t_eqns_tf,
self.x_eqns_tf,
self.y_eqns_tf)
[self.e1_eqns_pred,
self.e2_eqns_pred,
self.e3_eqns_pred,
self.e4_eqns_pred] = Navier_Stokes_2D(self.c_eqns_pred,
self.u_eqns_pred,
self.v_eqns_pred,
self.p_eqns_pred,
self.t_eqns_tf,
self.x_eqns_tf,
self.y_eqns_tf,
self.Pec,
self.Rey)
# gradients required for the lift and drag forces
[self.u_x_eqns_pred,
self.v_x_eqns_pred,
self.u_y_eqns_pred,
self.v_y_eqns_pred] = Gradient_Velocity_2D(self.u_eqns_pred,
self.v_eqns_pred,
self.x_eqns_tf,
self.y_eqns_tf)
# loss
self.loss = mean_squared_error(self.c_data_pred, self.c_data_tf) + \
mean_squared_error(self.u_inlet_pred, self.u_inlet_tf) + \
mean_squared_error(self.v_inlet_pred, self.v_inlet_tf) + \
mean_squared_error(self.u_cyl_pred, 0.0) + \
mean_squared_error(self.v_cyl_pred, 0.0) + \
mean_squared_error(self.e1_eqns_pred, 0.0) + \
mean_squared_error(self.e2_eqns_pred, 0.0) + \
mean_squared_error(self.e3_eqns_pred, 0.0) + \
mean_squared_error(self.e4_eqns_pred, 0.0)
# optimizers
self.learning_rate = tf.placeholder(tf.float32, shape=[])
self.optimizer = tf.train.AdamOptimizer(learning_rate = self.learning_rate)
self.train_op = self.optimizer.minimize(self.loss)
self.sess = tf_session()
def train(self, total_time, learning_rate):
N_data = self.t_data.shape[0]
N_eqns = self.t_eqns.shape[0]
start_time = time.time()
running_time = 0
it = 0
while running_time < total_time:
idx_data = np.random.choice(N_data, min(self.batch_size, N_data))
idx_eqns = np.random.choice(N_eqns, self.batch_size)
(t_data_batch,
x_data_batch,
y_data_batch,
c_data_batch) = (self.t_data[idx_data,:],
self.x_data[idx_data,:],
self.y_data[idx_data,:],
self.c_data[idx_data,:])
(t_eqns_batch,
x_eqns_batch,
y_eqns_batch) = (self.t_eqns[idx_eqns,:],
self.x_eqns[idx_eqns,:],
self.y_eqns[idx_eqns,:])
tf_dict = {self.t_data_tf: t_data_batch,
self.x_data_tf: x_data_batch,
self.y_data_tf: y_data_batch,
self.c_data_tf: c_data_batch,
self.t_eqns_tf: t_eqns_batch,
self.x_eqns_tf: x_eqns_batch,
self.y_eqns_tf: y_eqns_batch,
self.t_inlet_tf: self.t_inlet,
self.x_inlet_tf: self.x_inlet,
self.y_inlet_tf: self.y_inlet,
self.u_inlet_tf: self.u_inlet,
self.v_inlet_tf: self.v_inlet,
self.t_cyl_tf: self.t_cyl,
self.x_cyl_tf: self.x_cyl,
self.y_cyl_tf: self.y_cyl,
self.learning_rate: learning_rate}
self.sess.run([self.train_op], tf_dict)
# Print
if it % 10 == 0:
elapsed = time.time() - start_time
running_time += elapsed/3600.0
[loss_value,
learning_rate_value] = self.sess.run([self.loss,
self.learning_rate], tf_dict)
print('It: %d, Loss: %.3e, Time: %.2fs, Running Time: %.2fh, Learning Rate: %.1e'
%(it, loss_value, elapsed, running_time, learning_rate_value))
sys.stdout.flush()
start_time = time.time()
it += 1
def predict(self, t_star, x_star, y_star):
tf_dict = {self.t_data_tf: t_star, self.x_data_tf: x_star, self.y_data_tf: y_star}
c_star = self.sess.run(self.c_data_pred, tf_dict)
u_star = self.sess.run(self.u_data_pred, tf_dict)
v_star = self.sess.run(self.v_data_pred, tf_dict)
p_star = self.sess.run(self.p_data_pred, tf_dict)
return c_star, u_star, v_star, p_star
def predict_drag_lift(self, t_cyl):
viscosity = (1.0/self.Rey)
theta = np.linspace(0.0,2*np.pi,200)[:,None] # N x 1
d_theta = theta[1,0] - theta[0,0]
x_cyl = 0.5*np.cos(theta) # N x 1
y_cyl = 0.5*np.sin(theta) # N x 1
N = x_cyl.shape[0]
T = t_cyl.shape[0]
T_star = np.tile(t_cyl, (1,N)).T # N x T
X_star = np.tile(x_cyl, (1,T)) # N x T
Y_star = np.tile(y_cyl, (1,T)) # N x T
t_star = np.reshape(T_star,[-1,1]) # NT x 1
x_star = np.reshape(X_star,[-1,1]) # NT x 1
y_star = np.reshape(Y_star,[-1,1]) # NT x 1
tf_dict = {self.t_eqns_tf: t_star, self.x_eqns_tf: x_star, self.y_eqns_tf: y_star}
[p_star,
u_x_star,
u_y_star,
v_x_star,
v_y_star] = self.sess.run([self.p_eqns_pred,
self.u_x_eqns_pred,
self.u_y_eqns_pred,
self.v_x_eqns_pred,
self.v_y_eqns_pred], tf_dict)
P_star = np.reshape(p_star, [N,T]) # N x T
P_star = P_star - np.mean(P_star, axis=0)
U_x_star = np.reshape(u_x_star, [N,T]) # N x T
U_y_star = np.reshape(u_y_star, [N,T]) # N x T
V_x_star = np.reshape(v_x_star, [N,T]) # N x T
V_y_star = np.reshape(v_y_star, [N,T]) # N x T
INT0 = (-P_star[0:-1,:] + 2*viscosity*U_x_star[0:-1,:])*X_star[0:-1,:] + viscosity*(U_y_star[0:-1,:] + V_x_star[0:-1,:])*Y_star[0:-1,:]
INT1 = (-P_star[1: , :] + 2*viscosity*U_x_star[1: , :])*X_star[1: , :] + viscosity*(U_y_star[1: , :] + V_x_star[1: , :])*Y_star[1: , :]
F_D = 0.5*np.sum(INT0.T+INT1.T, axis = 1)*d_theta # T x 1
INT0 = (-P_star[0:-1,:] + 2*viscosity*V_y_star[0:-1,:])*Y_star[0:-1,:] + viscosity*(U_y_star[0:-1,:] + V_x_star[0:-1,:])*X_star[0:-1,:]
INT1 = (-P_star[1: , :] + 2*viscosity*V_y_star[1: , :])*Y_star[1: , :] + viscosity*(U_y_star[1: , :] + V_x_star[1: , :])*X_star[1: , :]
F_L = 0.5*np.sum(INT0.T+INT1.T, axis = 1)*d_theta # T x 1
return F_D, F_L
if __name__ == "__main__":
batch_size = 10000
layers = [3] + 10*[4*50] + [4]
# Load Data
data = scipy.io.loadmat('../Data/Cylinder2D_Re200Pec2000_Neumann_Streaks.mat')
t_star = data['t_data'] # T x 1
x_star = data['x_data'] # N x 1
y_star = data['y_data'] # N x 1
T = t_star.shape[0]
N = x_star.shape[0]
U_star = data['U_data'] # N x T
V_star = data['V_data'] # N x T
P_star = data['P_data'] # N x T
C_star = data['C_data'] # N x T
# Rearrange Data
T_star = np.tile(t_star, (1,N)).T # N x T
X_star = np.tile(x_star, (1,T)) # N x T
Y_star = np.tile(y_star, (1,T)) # N x T
t = T_star.flatten()[:,None] # NT x 1
x = X_star.flatten()[:,None] # NT x 1
y = Y_star.flatten()[:,None] # NT x 1
u = U_star.flatten()[:,None] # NT x 1
v = V_star.flatten()[:,None] # NT x 1
p = P_star.flatten()[:,None] # NT x 1
c = C_star.flatten()[:,None] # NT x 1
######################################################################
######################## Training Data ###############################
######################################################################
T_data = T # int(sys.argv[1])
N_data = N # int(sys.argv[2])
idx_t = np.concatenate([np.array([0]), np.random.choice(T-2, T_data-2, replace=False)+1, np.array([T-1])] )
idx_x = np.random.choice(N, N_data, replace=False)
t_data = T_star[:, idx_t][idx_x,:].flatten()[:,None]
x_data = X_star[:, idx_t][idx_x,:].flatten()[:,None]
y_data = Y_star[:, idx_t][idx_x,:].flatten()[:,None]
c_data = C_star[:, idx_t][idx_x,:].flatten()[:,None]
T_eqns = T
N_eqns = N
idx_t = np.concatenate([np.array([0]), np.random.choice(T-2, T_eqns-2, replace=False)+1, np.array([T-1])] )
idx_x = np.random.choice(N, N_eqns, replace=False)
t_eqns = T_star[:, idx_t][idx_x,:].flatten()[:,None]
x_eqns = X_star[:, idx_t][idx_x,:].flatten()[:,None]
y_eqns = Y_star[:, idx_t][idx_x,:].flatten()[:,None]
# Training Data on velocity (inlet)
t_inlet = t[x == x.min()][:,None]
x_inlet = x[x == x.min()][:,None]
y_inlet = y[x == x.min()][:,None]
u_inlet = u[x == x.min()][:,None]
v_inlet = v[x == x.min()][:,None]
# Training Data on velocity (cylinder surface): the exact floating-point equality
# works only because the mesh contains points lying exactly on the radius-0.5 cylinder
t_cyl = t[x**2 + y**2 == 0.25][:,None]
x_cyl = x[x**2 + y**2 == 0.25][:,None]
y_cyl = y[x**2 + y**2 == 0.25][:,None]
# Training
model = HFM(t_data, x_data, y_data, c_data,
t_eqns, x_eqns, y_eqns,
t_inlet, x_inlet, y_inlet, u_inlet, v_inlet,
t_cyl, x_cyl, y_cyl,
layers, batch_size,
Pec = 2000, Rey = 200)
model.train(total_time = 40, learning_rate=1e-3)
F_D, F_L = model.predict_drag_lift(t_star)
# Test Data
snap = np.array([100])
t_test = T_star[:,snap]
x_test = X_star[:,snap]
y_test = Y_star[:,snap]
c_test = C_star[:,snap]
u_test = U_star[:,snap]
v_test = V_star[:,snap]
p_test = P_star[:,snap]
# Prediction
c_pred, u_pred, v_pred, p_pred = model.predict(t_test, x_test, y_test)
# Error
error_c = relative_error(c_pred, c_test)
error_u = relative_error(u_pred, u_test)
error_v = relative_error(v_pred, v_test)
error_p = relative_error(p_pred - np.mean(p_pred), p_test - np.mean(p_test))
print('Error c: %e' % (error_c))
print('Error u: %e' % (error_u))
print('Error v: %e' % (error_v))
print('Error p: %e' % (error_p))
################# Save Data ###########################
C_pred = 0*C_star
U_pred = 0*U_star
V_pred = 0*V_star
P_pred = 0*P_star
for snap in range(0,t_star.shape[0]):
t_test = T_star[:,snap:snap+1]
x_test = X_star[:,snap:snap+1]
y_test = Y_star[:,snap:snap+1]
c_test = C_star[:,snap:snap+1]
u_test = U_star[:,snap:snap+1]
v_test = V_star[:,snap:snap+1]
p_test = P_star[:,snap:snap+1]
# Prediction
c_pred, u_pred, v_pred, p_pred = model.predict(t_test, x_test, y_test)
C_pred[:,snap:snap+1] = c_pred
U_pred[:,snap:snap+1] = u_pred
V_pred[:,snap:snap+1] = v_pred
P_pred[:,snap:snap+1] = p_pred
# Error
error_c = relative_error(c_pred, c_test)
error_u = relative_error(u_pred, u_test)
error_v = relative_error(v_pred, v_test)
error_p = relative_error(p_pred - np.mean(p_pred), p_test - np.mean(p_test))
print('Error c: %e' % (error_c))
print('Error u: %e' % (error_u))
print('Error v: %e' % (error_v))
print('Error p: %e' % (error_p))
scipy.io.savemat('../Results/Cylinder2D_No_Slip_Neumann_Streaks_results_%s.mat' %(time.strftime('%d_%m_%Y')),
{'C_pred':C_pred, 'U_pred':U_pred, 'V_pred':V_pred, 'P_pred':P_pred, 'F_L':F_L, 'F_D':F_D})
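# The np.tile / flatten rearrangement in the "Rearrange Data" step above turns a
# T x 1 time vector and N x 1 coordinate vectors into aligned NT x 1 columns, one
# row per space-time sample. A small self-contained sketch of that idiom with toy
# sizes (not the actual dataset):

```python
import numpy as np

N, T = 3, 4                                         # toy grid: 3 points, 4 snapshots
t_star = np.arange(T, dtype=float)[:, None]         # T x 1 time stamps
x_star = 10.0 * np.arange(N, dtype=float)[:, None]  # N x 1 coordinates

T_big = np.tile(t_star, (1, N)).T   # N x T: each row repeats all time stamps
X_big = np.tile(x_star, (1, T))     # N x T: each column repeats all coordinates

t = T_big.flatten()[:, None]        # NT x 1, row-major: spatial index varies slowest
x = X_big.flatten()[:, None]        # NT x 1, aligned with t row by row
```

# Row k of (t, x) is one space-time sample, so the flattened columns can be fed
# directly to the network placeholders, exactly as the script does for t, x, y,
# u, v, p, c.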