Repository: AlexeyPechnikov/pygmtsar
Branch: pygmtsar2
Commit: 1a6f59b66174
Files: 112
Total size: 74.5 MB
Directory structure:
gitextract_wmdc1yew/
├── .github/
│ ├── FUNDING.yml
│ ├── ISSUE_TEMPLATE/
│ │ ├── bug_report.md
│ │ ├── feature_request.md
│ │ └── general-help.md
│ └── workflows/
│ ├── dockerhub-cache-cleanup.yml
│ ├── macos.yml
│ ├── ubuntu.yml
│ └── ubuntu_docker.yml
├── .gitignore
├── CITATION.cff
├── CODE_OF_CONDUCT.md
├── LICENSE.TXT
├── README.md
├── docker/
│ ├── README.md
│ ├── pygmtsar.Dockerfile
│ ├── pygmtsar.ubuntu2204.Dockerfile
│ ├── requirements.json
│ └── requirements.sh
├── notebooks/
│ └── dload.sh
├── pubs/
│ └── README.md
├── pygmtsar/
│ ├── LICENSE.txt
│ ├── MANIFEST.in
│ ├── pygmtsar/
│ │ ├── ASF.py
│ │ ├── AWS.py
│ │ ├── GMT.py
│ │ ├── IO.py
│ │ ├── MultiInstanceManager.py
│ │ ├── NCubeVTK.py
│ │ ├── PRM.py
│ │ ├── PRM_gmtsar.py
│ │ ├── S1.py
│ │ ├── Stack.py
│ │ ├── Stack_align.py
│ │ ├── Stack_base.py
│ │ ├── Stack_dem.py
│ │ ├── Stack_detrend.py
│ │ ├── Stack_export.py
│ │ ├── Stack_geocode.py
│ │ ├── Stack_incidence.py
│ │ ├── Stack_landmask.py
│ │ ├── Stack_lstsq.py
│ │ ├── Stack_multilooking.py
│ │ ├── Stack_orbits.py
│ │ ├── Stack_phasediff.py
│ │ ├── Stack_prm.py
│ │ ├── Stack_ps.py
│ │ ├── Stack_reframe.py
│ │ ├── Stack_reframe_gmtsar.py
│ │ ├── Stack_sbas.py
│ │ ├── Stack_stl.py
│ │ ├── Stack_tidal.py
│ │ ├── Stack_topo.py
│ │ ├── Stack_trans.py
│ │ ├── Stack_trans_inv.py
│ │ ├── Stack_unwrap.py
│ │ ├── Stack_unwrap_snaphu.py
│ │ ├── Tiles.py
│ │ ├── XYZTiles.py
│ │ ├── __init__.py
│ │ ├── data/
│ │ │ ├── geoid_egm96_icgem.grd
│ │ │ └── google_colab.sh
│ │ ├── datagrid.py
│ │ ├── tqdm_dask.py
│ │ ├── tqdm_joblib.py
│ │ └── utils.py
│ └── setup.py
├── tests/
│ ├── goldenvalley.py
│ ├── imperial_valley_2015.py
│ ├── iran–iraq_earthquake_2017.py
│ ├── kalkarindji_flooding_2024.py
│ ├── la_cumbre_volcano_eruption_2020.py
│ ├── lakesarez_landslides_2017.py
│ ├── pico_do_fogo_volcano_eruption_2014.py
│ ├── türkiye_earthquakes_2023.py
│ └── türkiye_elevation_2019.py
├── todo/
│ ├── PRM.robust_trend2d.ipynb
│ ├── baseline/
│ │ ├── S1_20201222_ALL_F2.LED
│ │ ├── S1_20201222_ALL_F2.PRM
│ │ ├── S1_20210103_ALL_F2.LED
│ │ ├── S1_20210103_ALL_F2.PRM
│ │ ├── SAT_baseline.sh
│ │ └── baseline.v7.ipynb
│ ├── branch_cut.ipynb
│ ├── bursts_extraction_remote.ipynb
│ ├── gmt_fft_2d.md
│ ├── interp.ipynb
│ ├── orbit/
│ │ ├── S1_20150521_ALL_F1.LED
│ │ ├── S1_20150521_ALL_F1.extend.LED
│ │ ├── hermite_c.c
│ │ ├── read_orb_hermite.ipynb
│ │ └── test_hermite_c.c
│ ├── phasediff.md
│ ├── phasediff_calc_drho.md
│ ├── phasediff_spline_,evals_.md
│ ├── phasefilt.ipynb
│ ├── phasefilt_Goldstein.c
│ ├── phasefilt_Goldstein.md
│ ├── phasefilt_calc_corr.md
│ ├── phasefilt_lazy.ipynb
│ ├── phasefilt_make_wgt.md
│ ├── pixel_decimator.py
│ ├── test_gmt_fft_2d.c
│ ├── test_gmt_fft_2d.py
│ ├── test_gmt_ifft_2d.c
│ ├── test_gmt_ifft_2d.py
│ ├── test_hermite.py
│ ├── test_hermite_interpolation.py
│ ├── test_make_wgt.c
│ ├── test_phasediff.c
│ └── trans_dat.ipynb
└── ui/
├── Imperial_Valley_2015.html
└── Imperial_Valley_2015_ipyleaflet.html
================================================
FILE CONTENTS
================================================
================================================
FILE: .github/FUNDING.yml
================================================
# These are supported funding model platforms
github: # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
patreon: pechnikov
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']
================================================
FILE: .github/ISSUE_TEMPLATE/bug_report.md
================================================
---
name: Bug report
about: Create a report to help us improve
title: "[Bug]: "
labels: ''
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
**Screenshots**
If applicable, add screenshots to help explain your problem.
**System and software version:**
- OS: [e.g. iOS]
- PyGMTSAR version:
**Processing Log**
If applicable, attach a log file from a notebook cell or terminal output, or include relevant error messages
[Please note you can use the ChatGPT4-based InSAR AI Assistant for guidance or troubleshooting. Visit https://insar.dev/ai for more information.]
================================================
FILE: .github/ISSUE_TEMPLATE/feature_request.md
================================================
---
name: Feature request
about: Suggest an idea for PyGMTSAR
title: "[Feature]: "
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
Provide a clear and concise description of the problem. Include details about the dataset or specific scenario where this feature could be beneficial.
**Describe the solution you'd like**
Offer a clear and concise description of what you want to happen. How do you envision this feature working within PyGMTSAR?
**Describe alternatives you've considered**
Share a clear and concise description of any alternative solutions or features you've considered. Why might these alternatives not meet your needs fully?
**Additional context**
Add any other context, examples, or screenshots about the feature request here. The more information you provide, the easier it is for us to understand and evaluate the request.
**Note**
Please be aware that feature requests often involve developing new code or examples, which means they may be addressed over the long term. We will keep the issue open until it is resolved in some form. Your patience and contributions towards implementing this feature are welcome.
[For guidance or to discuss potential features and their implementation, the ChatGPT4-based InSAR AI Assistant is available at https://insar.dev/ai. It can provide insights and help articulate feature requests more clearly.]
================================================
FILE: .github/ISSUE_TEMPLATE/general-help.md
================================================
---
name: General help
about: Help with problems users encounter during software installation, data processing, etc.
title: "[Help]: "
labels: ''
assignees: ''
---
**Describe the problem you encountered**
Provide a brief description of the issue you're facing.
**OS and software version**
Please specify your operating system and the version of PyGMTSAR you are using, if applicable.
- OS:
- PyGMTSAR version:
**Steps to reproduce the problem**
If possible, describe how the problem can be reproduced from scratch. Include any commands or steps you took leading up to the issue.
**Log file**
If you have a log file or terminal output related to the problem, please attach it here.
**Screenshot**
If applicable, attach a screenshot to help us better understand your situation.
**Note**
- If you have a general question not directly related to PyGMTSAR, consider using the discussions page.
[For additional support, consider using the ChatGPT4-based InSAR AI Assistant at https://insar.dev/ai. It can provide guidance and help with understanding PyGMTSAR's capabilities and troubleshooting issues.]
================================================
FILE: .github/workflows/dockerhub-cache-cleanup.yml
================================================
name: Cleanup Docker Hub Cache
on:
schedule:
- cron: '0 0 * * 0'
jobs:
delete-cache:
runs-on: ubuntu-latest
steps:
- name: Get Docker Hub JWT Token
run: |
TOKEN=$(curl -s -X POST "https://hub.docker.com/v2/users/login/" \
-H "Content-Type: application/json" \
-d '{"username": "'"${{ secrets.DOCKER_USERNAME }}"'", "password": "'"${{ secrets.DOCKER_PASSWORD }}"'"}' | jq -r .token)
echo "TOKEN=$TOKEN" >> $GITHUB_ENV
- name: Delete cache tags from Docker Hub
run: |
curl -X DELETE -H "Authorization: JWT $TOKEN" \
"https://hub.docker.com/v2/repositories/pechnikov/pygmtsar/tags/cache-arm64/"
curl -X DELETE -H "Authorization: JWT $TOKEN" \
"https://hub.docker.com/v2/repositories/pechnikov/pygmtsar/tags/cache-amd64/"
================================================
FILE: .github/workflows/macos.yml
================================================
# This workflow will install Python dependencies, run tests and lint with a single version of Python
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
name: MacOS tests
on:
push:
branches: [ "pygmtsar2" ]
pull_request:
branches: [ "pygmtsar2" ]
permissions:
contents: read
jobs:
Imperial_Valley_2015:
strategy:
fail-fast: false
matrix:
os: ["macos-13"]
python-version: ["3.13"]
runs-on: ${{ matrix.os }}
steps:
- name: Print runner details
run: uname -a
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
- name: Install system dependencies
run: |
uname -a
# prepare system
brew install wget libtiff hdf5 gmt ghostscript autoconf
- name: Compile GMTSAR
run: |
git config --global advice.detachedHead false
git clone --branch master https://github.com/gmtsar/gmtsar GMTSAR
cd GMTSAR
git checkout e98ebc0f4164939a4780b1534bac186924d7c998
autoconf
./configure --with-orbits-dir=/tmp
make
make install
- name: Install PyGMTSAR
run: |
pip3 install pyvista panel ipyleaflet
pip3 install -e ./pygmtsar/
- name: Run test
working-directory: tests
run: |
SCRIPT=imperial_valley_2015.py
export PATH=$PATH:${{ github.workspace }}/GMTSAR/bin
ulimit -n 10000
# remove Google Colab specific commands and add __main__ section
cat "$SCRIPT" \
| sed '/if \x27google\.colab\x27 in sys\.modules:/,/^$/d' \
| sed 's/^[[:blank:]]*!.*$//' \
| awk '/username = \x27GoogleColab2023\x27/ {print "if __name__ == \x27__main__\x27:"; indent=1} {if(indent) sub(/^/, " "); print}' \
> "$SCRIPT.fixed.py"
python3 "$SCRIPT.fixed.py"
- name: Archive test results
uses: actions/upload-artifact@v4
with:
name: Plots (${{ matrix.os }}, ${{ matrix.python-version }})
path: tests/*.jpg
if-no-files-found: error
================================================
FILE: .github/workflows/ubuntu.yml
================================================
# This workflow will install Python dependencies, run tests and lint with a single version of Python
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
name: Ubuntu tests
on:
push:
branches: [ "pygmtsar2" ]
pull_request:
branches: [ "pygmtsar2" ]
permissions:
contents: read
jobs:
Imperial_Valley_2015:
strategy:
fail-fast: false
matrix:
os: ["ubuntu-24.04"]
python-version: ["3.13"]
runs-on: ${{ matrix.os }}
steps:
- name: Print runner details
run: uname -a
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
- name: Install system dependencies
run: |
# prepare system
sudo apt update
# https://github.com/gmtsar/gmtsar/wiki/GMTSAR-Wiki-Page
sudo apt install -y csh subversion autoconf libtiff5-dev libhdf5-dev wget
sudo apt install -y liblapack-dev
sudo apt install -y gfortran
sudo apt install -y g++
sudo apt install -y libgmt-dev
# gmt-gshhg-full should be installed automatically (it is required to use GMTSAR landmask)
sudo apt install -y gmt
# add missing package
sudo apt install -y make
# vtk rendering
sudo apt install -y xvfb
- name: Compile GMTSAR
run: |
git config --global advice.detachedHead false
git clone --branch master https://github.com/gmtsar/gmtsar GMTSAR
cd GMTSAR
git checkout e98ebc0f4164939a4780b1534bac186924d7c998
autoconf
./configure --with-orbits-dir=/tmp CFLAGS='-z muldefs' LDFLAGS='-z muldefs'
make CFLAGS='-z muldefs' LDFLAGS='-z muldefs'
make install
- name: Install PyGMTSAR
run: |
pip3 install pyvista panel ipyleaflet
pip3 install -e ./pygmtsar/
- name: Run test
working-directory: tests
run: |
SCRIPT=imperial_valley_2015.py
export PATH=$PATH:${{ github.workspace }}/GMTSAR/bin
Xvfb :99 -screen 0 800x600x24 > /dev/null 2>&1 &
export DISPLAY=:99
export XVFB_PID=$!
# remove Google Colab specific commands and add __main__ section
cat "$SCRIPT" \
| sed '/if \x27google\.colab\x27 in sys\.modules:/,/^$/d' \
| sed 's/^[[:blank:]]*!.*$//' \
| awk '/username = \x27GoogleColab2023\x27/ {print "if __name__ == \x27__main__\x27:"; indent=1} {if(indent) sub(/^/, " "); print}' \
> "$SCRIPT.fixed.py"
python3 "$SCRIPT.fixed.py"
- name: Archive test results
uses: actions/upload-artifact@v4
with:
name: Plots (${{ matrix.os }}, ${{ matrix.python-version }})
path: tests/*.jpg
if-no-files-found: error
================================================
FILE: .github/workflows/ubuntu_docker.yml
================================================
name: Docker Build (Ubuntu amd64/arm64)
#on:
# workflow_dispatch:
on:
push:
branches: [ "pygmtsar2" ]
pull_request:
branches: [ "pygmtsar2" ]
jobs:
build:
strategy:
fail-fast: false
matrix:
arch: [amd64, arm64]
#runs-on: ${{ matrix.arch == 'amd64' && 'ubuntu-24.04' || 'ubuntu-24.04-arm' }}
runs-on: ${{ fromJSON('{"amd64":"ubuntu-24.04", "arm64":"ubuntu-24.04-arm"}')[matrix.arch] }}
steps:
- name: Print runner details
run: uname -a
- name: Checkout repository
uses: actions/checkout@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Log in to Docker Hub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Get Git commit SHA
run: echo "COMMIT_SHA=$(echo ${GITHUB_SHA} | cut -c1-7)" >> $GITHUB_ENV
- name: Build and Push Docker Image
run: |
test -d docker && ls -R docker || echo "Error: 'docker/' directory not found."
docker buildx build . -f docker/pygmtsar.Dockerfile \
--tag pechnikov/pygmtsar:latest-dev-${{ matrix.arch }} \
--tag pechnikov/pygmtsar:${COMMIT_SHA}-dev-${{ matrix.arch }} \
--push \
--cache-from type=registry,ref=pechnikov/pygmtsar:cache-${{ matrix.arch }} \
--cache-to type=registry,ref=pechnikov/pygmtsar:cache-${{ matrix.arch }},mode=max
================================================
FILE: .gitignore
================================================
autom4te.cache/
bin/
config.log
config.mk
config.status
configure
gmtsar/SAT_baseline
gmtsar/SAT_baseline.o
gmtsar/SAT_llt2rat
gmtsar/SAT_llt2rat.o
gmtsar/SAT_llt2rat_sub.o
gmtsar/SAT_look
gmtsar/SAT_look.o
gmtsar/aastretch.o
gmtsar/acpatch.o
gmtsar/bperp
gmtsar/bperp.o
gmtsar/calc_dop.o
gmtsar/calc_dop_orb
gmtsar/calc_dop_orb.o
gmtsar/conv
gmtsar/conv.o
gmtsar/conv2d.o
gmtsar/csh/gmtsar_sharedir.csh
gmtsar/cut_slc
gmtsar/cut_slc.o
gmtsar/do_freq_xcorr.o
gmtsar/do_time_int_xcorr.o
gmtsar/esarp
gmtsar/esarp.o
gmtsar/extend_orbit
gmtsar/extend_orbit.o
gmtsar/fft_bins.o
gmtsar/fft_interpolate_routines.o
gmtsar/file_stuff.o
gmtsar/fitoffset
gmtsar/fitoffset.o
gmtsar/geoxyz.o
gmtsar/get_PRM
gmtsar/get_PRM.o
gmtsar/get_locations.o
gmtsar/get_params.o
gmtsar/hermite_c.o
gmtsar/highres_corr.o
gmtsar/interpolate_orbit.o
gmtsar/intp_coef.o
gmtsar/ldr_orbit.o
gmtsar/lib_strfuncs.o
gmtsar/libgmtsar.a
gmtsar/make_gaussian_filter
gmtsar/make_gaussian_filter.o
gmtsar/nearest_grid
gmtsar/nearest_grid.o
gmtsar/offset_topo
gmtsar/offset_topo.o
gmtsar/p_scatter
gmtsar/p_scatter.o
gmtsar/parse_xcorr_input.o
gmtsar/phase2topo
gmtsar/phase2topo.o
gmtsar/phasediff
gmtsar/phasediff.o
gmtsar/phasediff_get_topo_phase
gmtsar/phasediff_get_topo_phase.o
gmtsar/phasefilt
gmtsar/phasefilt.o
gmtsar/plxyz.o
gmtsar/polyfit.o
gmtsar/print_results.o
gmtsar/radopp.o
gmtsar/read_orb.o
gmtsar/read_xcorr_data.o
gmtsar/resamp
gmtsar/resamp.o
gmtsar/rmpatch.o
gmtsar/rng_cmp.o
gmtsar/rng_filter.o
gmtsar/rng_ref.o
gmtsar/sbas
gmtsar/sbas.o
gmtsar/sbas_utils.o
gmtsar/set_prm_defaults.o
gmtsar/shift.o
gmtsar/sio_struct.o
gmtsar/siocomplex.o
gmtsar/solid_tide
gmtsar/solid_tide.o
gmtsar/spline.o
gmtsar/split_aperture
gmtsar/split_aperture.o
gmtsar/split_spectrum
gmtsar/split_spectrum.o
gmtsar/stringutils.o
gmtsar/trans_col.o
gmtsar/update_PRM
gmtsar/update_PRM.o
gmtsar/update_PRM_sub.o
gmtsar/utils.o
gmtsar/utils_complex.o
gmtsar/write_orb.o
gmtsar/xcorr
gmtsar/xcorr.o
gmtsar/geocode_slc
gmtsar/geocode_slc.o
preproc/ALOS_preproc/ALOS_baseline/ALOS_baseline.o
preproc/ALOS_preproc/ALOS_baseline/ALOS_llt2rat_sub.o
preproc/ALOS_preproc/ALOS_fbd2fbs/ALOS_fbd2fbs
preproc/ALOS_preproc/ALOS_fbd2fbs/ALOS_fbd2fbs.o
preproc/ALOS_preproc/ALOS_fbd2fbs_SLC/ALOS_fbd2fbs_SLC
preproc/ALOS_preproc/ALOS_fbd2fbs_SLC/ALOS_fbd2fbs_SLC.o
preproc/ALOS_preproc/ALOS_fbd2ss/ALOS_fbd2ss
preproc/ALOS_preproc/ALOS_fbd2ss/ALOS_fbd2ss.o
preproc/ALOS_preproc/ALOS_look/ALOS_look
preproc/ALOS_preproc/ALOS_look/ALOS_look.o
preproc/ALOS_preproc/ALOS_merge/ALOS_merge
preproc/ALOS_preproc/ALOS_merge/ALOS_merge.o
preproc/ALOS_preproc/ALOS_mosaic_ss/ALOS_mosaic_ss
preproc/ALOS_preproc/ALOS_mosaic_ss/ALOS_mosaic_ss.o
preproc/ALOS_preproc/ALOS_mosaic_ss_2frames/ALOS_mosaic_ss_2frames
preproc/ALOS_preproc/ALOS_mosaic_ss_2frames/ALOS_mosaic_ss_2frames.o
preproc/ALOS_preproc/ALOS_pre_process/ALOS_pre_process
preproc/ALOS_preproc/ALOS_pre_process/ALOS_pre_process.o
preproc/ALOS_preproc/ALOS_pre_process/parse_ALOS_commands.o
preproc/ALOS_preproc/ALOS_pre_process/read_ALOSE_data.o
preproc/ALOS_preproc/ALOS_pre_process/read_ALOS_data.o
preproc/ALOS_preproc/ALOS_pre_process/roi_utils.o
preproc/ALOS_preproc/ALOS_pre_process/swap_ALOS_data_info.o
preproc/ALOS_preproc/ALOS_pre_process_SLC/ALOS_pre_process_SLC
preproc/ALOS_preproc/ALOS_pre_process_SLC/ALOS_pre_process_SLC.o
preproc/ALOS_preproc/ALOS_pre_process_SLC/parse_ALOS_commands.o
preproc/ALOS_preproc/ALOS_pre_process_SLC/read_ALOS_data_SLC.o
preproc/ALOS_preproc/ALOS_pre_process_SLC/swap_ALOS_data_info.o
preproc/ALOS_preproc/ALOS_pre_process_SS/ALOS_pre_process_SS
preproc/ALOS_preproc/ALOS_pre_process_SS/ALOS_pre_process_SS.o
preproc/ALOS_preproc/ALOS_pre_process_SS/parse_ALOS_commands.o
preproc/ALOS_preproc/ALOS_pre_process_SS/read_ALOS_data_SS.o
preproc/ALOS_preproc/ALOS_pre_process_SS/swap_ALOS_data_info.o
preproc/ALOS_preproc/lib/
preproc/ALOS_preproc/lib_src/ALOSE_orbits_utils.o
preproc/ALOS_preproc/lib_src/ALOS_ldr_orbit.o
preproc/ALOS_preproc/lib_src/calc_dop.o
preproc/ALOS_preproc/lib_src/cfft1d.o
preproc/ALOS_preproc/lib_src/fftpack.o
preproc/ALOS_preproc/lib_src/find_fft_length.o
preproc/ALOS_preproc/lib_src/get_sio_struct.o
preproc/ALOS_preproc/lib_src/hermite_c.o
preproc/ALOS_preproc/lib_src/interpolate_ALOS_orbit.o
preproc/ALOS_preproc/lib_src/libALOS.a
preproc/ALOS_preproc/lib_src/null_sio_struct.o
preproc/ALOS_preproc/lib_src/plh2xyz.o
preproc/ALOS_preproc/lib_src/polyfit.o
preproc/ALOS_preproc/lib_src/put_sio_struct.o
preproc/ALOS_preproc/lib_src/read_ALOS_sarleader.o
preproc/ALOS_preproc/lib_src/rng_compress.o
preproc/ALOS_preproc/lib_src/rng_expand.o
preproc/ALOS_preproc/lib_src/rng_filter.o
preproc/ALOS_preproc/lib_src/set_ALOS_defaults.o
preproc/ALOS_preproc/lib_src/siocomplex.o
preproc/ALOS_preproc/lib_src/swap16.o
preproc/ALOS_preproc/lib_src/swap32.o
preproc/ALOS_preproc/lib_src/utils.o
preproc/ALOS_preproc/lib_src/write_ALOS_LED.o
preproc/ALOS_preproc/lib_src/write_ALOS_prm.o
preproc/ALOS_preproc/lib_src/write_orb.o
preproc/ALOS_preproc/lib_src/xyz2plh.o
preproc/CSK_preproc/src_raw/make_raw_csk
preproc/CSK_preproc/src_raw/make_raw_csk.o
preproc/CSK_preproc/src_slc/make_slc_csk
preproc/CSK_preproc/src_slc/make_slc_csk.o
preproc/ENVI_preproc/ASA_CAT/asa_cat
preproc/ENVI_preproc/ASA_CAT/asa_cat.o
preproc/ENVI_preproc/Dop_orbit/calc_dop_orb_envi
preproc/ENVI_preproc/Dop_orbit/calc_dop_orb_envi.o
preproc/ENVI_preproc/ENVI_SLC_decode/envi_slc_decode
preproc/ENVI_preproc/ENVI_SLC_decode/envisat_dump_data
preproc/ENVI_preproc/ENVI_SLC_decode/envisat_dump_header
preproc/ENVI_preproc/ENVI_SLC_decode/epr_api-2.3/src/epr_api.o
preproc/ENVI_preproc/ENVI_SLC_decode/epr_api-2.3/src/epr_band.o
preproc/ENVI_preproc/ENVI_SLC_decode/epr_api-2.3/src/epr_bitmask.o
preproc/ENVI_preproc/ENVI_SLC_decode/epr_api-2.3/src/epr_core.o
preproc/ENVI_preproc/ENVI_SLC_decode/epr_api-2.3/src/epr_dataset.o
preproc/ENVI_preproc/ENVI_SLC_decode/epr_api-2.3/src/epr_dddb.o
preproc/ENVI_preproc/ENVI_SLC_decode/epr_api-2.3/src/epr_dsd.o
preproc/ENVI_preproc/ENVI_SLC_decode/epr_api-2.3/src/epr_dump.o
preproc/ENVI_preproc/ENVI_SLC_decode/epr_api-2.3/src/epr_field.o
preproc/ENVI_preproc/ENVI_SLC_decode/epr_api-2.3/src/epr_msph.o
preproc/ENVI_preproc/ENVI_SLC_decode/epr_api-2.3/src/epr_param.o
preproc/ENVI_preproc/ENVI_SLC_decode/epr_api-2.3/src/epr_product.o
preproc/ENVI_preproc/ENVI_SLC_decode/epr_api-2.3/src/epr_ptrarray.o
preproc/ENVI_preproc/ENVI_SLC_decode/epr_api-2.3/src/epr_record.o
preproc/ENVI_preproc/ENVI_SLC_decode/epr_api-2.3/src/epr_string.o
preproc/ENVI_preproc/ENVI_SLC_decode/epr_api-2.3/src/epr_swap.o
preproc/ENVI_preproc/ENVI_SLC_decode/epr_api-2.3/src/epr_typconv.o
preproc/ENVI_preproc/ENVI_decode/asa_im_decode
preproc/ENVI_preproc/ENVI_decode/asa_im_decode.o
preproc/ENVI_preproc/lib/
preproc/ENVI_preproc/lib_src/ENVI_ldr_orbit.o
preproc/ENVI_preproc/lib_src/libENVI.a
preproc/ENVI_preproc/lib_src/read_ENVI_orb.o
preproc/ENVI_preproc/scripts/ENVI_SLC_pre_process
preproc/ENVI_preproc/scripts/ENVI_pre_process
preproc/ERS_preproc/ers_line_fixer/ers_line_fixer
preproc/ERS_preproc/ers_line_fixer/ers_line_fixer.o
preproc/ERS_preproc/read_data_file_ccrs/read_data_file_ccrs
preproc/ERS_preproc/read_data_file_ccrs/read_data_file_ccrs.o
preproc/ERS_preproc/read_data_file_dpaf/read_data_file_dpaf
preproc/ERS_preproc/read_data_file_dpaf/read_data_file_dpaf.o
preproc/ERS_preproc/read_sarleader_dpaf/make_prm_dpaf.o
preproc/ERS_preproc/read_sarleader_dpaf/read_sarleader_dpaf
preproc/ERS_preproc/read_sarleader_dpaf/read_sarleader_dpaf.o
preproc/ERS_preproc/scripts/ERS_pre_process
preproc/RS2_preproc/src/make_slc_rs2
preproc/RS2_preproc/src/make_slc_rs2.o
preproc/S1A_preproc/lib/lib
preproc/S1A_preproc/lib/libxmlC.a
preproc/S1A_preproc/lib/xml.o
preproc/S1A_preproc/src_assembly/assemble_tops
preproc/S1A_preproc/src_assembly/assemble_tops.o
preproc/S1A_preproc/src_orbit/ext_orb_s1a
preproc/S1A_preproc/src_orbit/ext_orb_s1a.o
preproc/S1A_preproc/src_spec_div/spectral_diversity
preproc/S1A_preproc/src_spec_div/spectral_diversity.o
preproc/S1A_preproc/src_stitch/merge_swath
preproc/S1A_preproc/src_stitch/merge_swath.o
preproc/S1A_preproc/src_stitch/stitch_tops
preproc/S1A_preproc/src_stitch/stitch_tops.o
preproc/S1A_preproc/src_swath/make_slc_s1a
preproc/S1A_preproc/src_swath/make_slc_s1a.o
preproc/S1A_preproc/src_tops/make_s1a_tops
preproc/S1A_preproc/src_tops/make_s1a_tops.o
preproc/S1A_preproc/src_tops/make_s1a_tops_6par
preproc/S1A_preproc/src_tops/make_s1a_tops_6par.o
preproc/TSX_preproc/src/make_slc_tsx
preproc/TSX_preproc/src/make_slc_tsx.o
share/
snaphu/src/snaphu
snaphu/src/snaphu.o
snaphu/src/snaphu_cost.o
snaphu/src/snaphu_cs2.o
snaphu/src/snaphu_io.o
snaphu/src/snaphu_solver.o
snaphu/src/snaphu_tile.o
snaphu/src/snaphu_util.o
================================================
FILE: CITATION.cff
================================================
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
- family-names: "Pechnikov"
given-names: "Alexey (Aleksei)"
orcid: "https://orcid.org/0000-0001-9626-8615"
title: "PyGMTSAR (Python InSAR)"
version: 2024.2.8
doi: 10.5281/zenodo.7725131
date-released: 2024-02-08
url: "https://insar.dev"
================================================
FILE: CODE_OF_CONDUCT.md
================================================
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
pechnikov@mobigroup.ru.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.
================================================
FILE: LICENSE.TXT
================================================
BSD 3-Clause License
Copyright (c) 2023, Alexey Pechnikov, https://orcid.org/0000-0001-9626-8615 (ORCID)
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
================================================
FILE: README.md
================================================
[](https://github.com/AlexeyPechnikov/pygmtsar)
[](https://pypi.python.org/pypi/pygmtsar/)
[](https://hub.docker.com/r/pechnikov/pygmtsar)
[](https://zenodo.org/badge/latestdoi/398018212)
[](https://www.patreon.com/pechnikov)
## Announcement: InSAR.dev—A Federated Python Ecosystem for InSAR
InSAR.dev is the next evolution of PyGMTSAR and is under active development. Whereas PyGMTSAR processes single-polarization scenes and bursts along one orbital path, InSAR.dev separates the workflow into two phases: preparing SLC data (including geocoding, flat-earth and topographic correction, and packaging into cloud-ready bursts) and then performing the core interferometric analysis on those datasets. This design scales to hundreds or thousands of bursts across multiple polarizations and orbital paths.
For example, the following interactive notebooks demonstrate processing of 8 Sentinel-1 scenes (~1000 bursts) across 2 orbital paths and 2 polarizations (VH, VV):
[](https://colab.research.google.com/drive/1KsHRDz1XVtDWAkJMXK0gdpMiEfHNvXB3?usp=sharing) InSAR.dev Sentinel-1 SLC Burst Preprocessing.
[](https://colab.research.google.com/drive/156Gvd-0C7DrXWDe1JCSxJyE-s1FyZeDl?usp=sharing) InSAR.dev Sentinel‑1 Multi‑Polarization and Multi‑Path Interferograms (adapted for the slow 2-vCPU free Google Colab).
[](https://colab.research.google.com/drive/1g_mZK6YZ614t45D1zKCO-U_X5E6hfknz?usp=sharing) InSAR.dev Sentinel‑1 Multi‑Polarization and Multi‑Path Interferograms on Google Colab Pro.
The InSAR.dev ecosystem comprises three Python packages. **insardev_toolkit** provides utility functions and helpers; **insardev_pygmtsar** handles Sentinel‑1 SLC preprocessing (requires GMTSAR binaries); and **insardev** performs core interferometric processing and analysis with no external dependencies. Both **insardev_toolkit** and **insardev_pygmtsar** are BSD‑licensed, while **insardev** may require a subscription for certain use cases.
InSAR.dev documentation, use cases and project updates are available on [Patreon](https://www.patreon.com/pechnikov).
## PyGMTSAR (Python InSAR): Powerful, Accessible Satellite Interferometry
<img src="assets/logo.jpg" width="15%" />
PyGMTSAR (Python InSAR) is designed for both occasional users and experts working with Sentinel-1 satellite interferometry. It supports a wide range of features, including SBAS, PSI, PSI-SBAS, and more. In addition to the examples below, you’ll find more Jupyter notebook use cases on [Patreon](https://www.patreon.com/pechnikov) and updates on [LinkedIn](https://www.linkedin.com/in/alexey-pechnikov/).
## About PyGMTSAR
PyGMTSAR offers reproducible, high-performance Sentinel-1 interferometry accessible to everyone—whether you prefer Google Colab, cloud servers, or local processing. It automatically retrieves Sentinel-1 SLC scenes and bursts, DEMs, and orbits; computes interferograms and correlations; performs time-series analysis; and provides 3D visualization. This single library enables users to build a fully integrated InSAR project with minimal hassle. Whether you need a single interferogram or a multi-year analysis involving thousands of datasets, PyGMTSAR can handle the task efficiently, even on standard commodity hardware.
## PyGMTSAR Live Examples on Google Colab
Google Colab is a free service that lets you run interactive notebooks directly in your browser—no powerful computer, extensive disk space, or special installations needed. You can even do InSAR processing from a smartphone. These notebooks automate every step: installing the PyGMTSAR library and its dependencies on a Colab host (Ubuntu 22, Python 3.10), downloading Sentinel-1 SLCs, orbit files, SRTM DEM data (automatically converted to ellipsoidal heights via EGM96), and land mask data, and then performing complete interferometry with final mapping. You can also modify scene or burst names to analyze your own area of interest, and each notebook includes instant interactive 3D maps.
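The automated flow looks roughly like the sketch below. The class names correspond to modules in this repository (ASF.py, Tiles.py, S1.py, Stack.py), but the method arguments, placeholders, and most method names are assumptions paraphrased from the example notebooks, not a verbatim excerpt:

```
# a hedged sketch of the notebook flow, not a verbatim excerpt; signatures and
# most method names are assumptions based on the repository modules
from pygmtsar import ASF, Tiles, S1, Stack

DATADIR, WORKDIR = 'data', 'raw'
SCENES = ['S1A_IW_SLC__1SDV_...']  # placeholder scene identifier
AOI = None                         # placeholder for the area-of-interest geometry

asf = ASF(username='...', password='...')     # ASF credentials (assumed signature)
asf.download(DATADIR, SCENES)                 # SLC scenes and orbits (assumed arguments)
Tiles().download_dem(AOI, filename='dem.nc')  # SRTM DEM, converted to ellipsoidal heights

sbas = Stack(WORKDIR).set_scenes(S1.scan_slc(DATADIR))
sbas.load_dem('dem.nc', AOI)
sbas.compute_reframe(AOI)    # crop bursts to the area of interest
sbas.compute_align()         # coregister to the reference scene
sbas.compute_geocode()       # build radar <-> geographic coordinate transforms
```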
[](https://colab.research.google.com/drive/1TARVTB7z8goZyEVDRWyTAKJpyuqZxzW2?usp=sharing) **Central Türkiye Earthquakes (2023).** The area is large, covering two consecutive Sentinel-1 scenes or a total of 56 bursts.
<img src="assets/turkie_2023a.jpg" width="40%" /><img src="assets/turkie_2023b.jpg" width="40%" />
[](https://colab.research.google.com/drive/1dDFG8BoF4WfB6tOF5sAi5mjdBKRbhxHo?usp=sharing) **Pico do Fogo Volcano Eruption, Fogo Island, Cape Verde (2014).** The interferogram for this event is compared to the study *The 2014–2015 eruption of Fogo volcano: Geodetic modeling of Sentinel-1 TOPS interferometry* (*Geophysical Research Letters*, DOI: [10.1002/2015GL066003](https://doi.org/10.1002/2015GL066003)).
<img src="assets/pico_2014a.jpg" width="40%" /><img src="assets/pico_2014b.jpg" width="40%" />
[](https://colab.research.google.com/drive/1d9RcqBmWIKQDEwJYo8Dh6M4tMjJtvseC?usp=sharing) **La Cumbre Volcano Eruption, Ecuador (2020).** The results are compared with the report from Instituto Geofísico, Escuela Politécnica Nacional (IG-EPN) (InSAR software unspecified).
<img src="assets/la_cumbre_2020a.jpg" width="40%" /><img src="assets/la_cumbre_2020b.jpg" width="40%" />
[](https://colab.research.google.com/drive/1shNGvUlUiXeyV7IcTmDbWaEM6XrB0014?usp=sharing) **Iran–Iraq Earthquake (2017).** The event has been well investigated, and the results are compared to outputs from GMTSAR, SNAP, and GAMMA software.
<img src="assets/iran_iraq_2017a.jpg" width="40%" /><img src="assets/iran_iraq_2017b.jpg" width="40%" />
[](https://colab.research.google.com/drive/1h4XxJZwFfm7EC8NUzl34cCkOVUG2uJr4?usp=sharing) **Imperial Valley Subsidence, CA USA (2015).** This example is provided in the [GMTSAR project](https://topex.ucsd.edu/gmtsar/downloads/) in the archive file [S1A_Stack_CPGF_T173.tar.gz](http://topex.ucsd.edu/gmtsar/tar/S1A_Stack_CPGF_T173.tar.gz), titled 'Sentinel-1 TOPS Time Series'.
The resulting InSAR velocity map is available as a self-contained web page at: [Imperial_Valley_2015.html](https://insar.dev/ui/Imperial_Valley_2015.html)
<img src="assets/imperial_valley_2015a.jpg" width="40%" /> <img src="assets/imperial_valley_2015b.jpg" width="40%" />
[](https://colab.research.google.com/drive/1aqAr9KWKzGx9XpVie1M000C3vUxzNDxu?usp=sharing) **Kalkarindji Flooding, NT Australia (2024).** Correlation loss serves to identify flooded areas.
<img src="assets/kalkarindji_2024.jpg" width="80%" />
[](https://colab.research.google.com/drive/1ipiQGbvUF8duzjZER8v-_R48DSpSmgvQ?usp=sharing) **Golden Valley Subsidence, CA USA (2021).** This example demonstrates the case study 'Antelope Valley Freeway in Santa Clarita, CA,' as detailed in [SAR Technical Series Part 4 Sentinel-1 global velocity layer: Using global InSAR at scale](https://blog.descarteslabs.com/using-global-insar-at-scale) and [Sentinel-1 Technical Series Part 5 Targeted Analysis](https://blog.descarteslabs.com/sentinel-1-targeted-analysis) with a significant subsidence rate 'exceeding 5cm/year in places'.
<img src="assets/golden_valley_2021.jpg" width="80%" />
[](https://colab.research.google.com/drive/1O3aZtZsTrQIldvCqlVRel13wJRLhmTJt?usp=sharing) **Lake Sarez Landslides, Tajikistan (2017).** The example reproduces the findings shared in the following paper: [Integration of satellite SAR and optical acquisitions for the characterization of the Lake Sarez landslides in Tajikistan](https://www.researchgate.net/publication/378176884_Integration_of_satellite_SAR_and_optical_acquisitions_for_the_characterization_of_the_Lake_Sarez_landslides_in_Tajikistan).
<img src="assets/lake_sarez_2017.jpg" width="80%" />
[](https://colab.research.google.com/drive/19PLuebOZ4gaYX5ym1H7SwUbJKfl23qPr?usp=sharing) **Erzincan Elevation, Türkiye (2019).** This example reproduces the 29-page ESA document [DEM generation with Sentinel-1 IW](https://step.esa.int/docs/tutorials/S1TBX%20DEM%20generation%20with%20Sentinel-1%20IW%20Tutorial.pdf).
<img src="assets/erzincan_2019.jpg" width="80%" />
## More PyGMTSAR Live Examples on Google Colab
[](https://colab.research.google.com/drive/1yuuA1ES2ly4QG3hyPg8YYT0nnpGDiQDw?usp=sharing) **Mexico City Subsidence, Mexico (2016).** This example replicates the 29-page ESA manual [TRAINING KIT – HAZA03. LAND SUBSIDENCE WITH SENTINEL-1 using SNAP](https://eo4society.esa.int/wp-content/uploads/2022/01/HAZA03_Land-Subsidence_Mexico-city.pdf).
## PyGMTSAR Live Examples on Google Colab Pro
I share additional InSAR projects on Google Colab Pro through my [Patreon page](https://www.patreon.com/pechnikov). These are ideal for InSAR learners, researchers, and industry professionals tackling challenging projects with large areas, big stacks of interferograms, low-coherence regions, or significant atmospheric delays. You can run these privately shared notebooks online with Colab Pro or locally/on remote servers.
## Projects and Publications Using PyGMTSAR
See the [Projects and Publications](/pubs/README.md) page for real-world projects and academic research applying PyGMTSAR. This is not an exhaustive list—contact me if you’d like your project or publication included.
## Resources
**PyGMTSAR projects and e-books**
Available on [Patreon](https://www.patreon.com/c/pechnikov/shop). Preview versions can be found in this GitHub repo:
- [PyGMTSAR Introduction Preview](https://github.com/AlexeyPechnikov/pygmtsar/blob/pygmtsar2/book/PyGMTSAR_preview.pdf)
- [PyGMTSAR Gaussian Filtering Preview](https://github.com/AlexeyPechnikov/pygmtsar/blob/pygmtsar2/book/Gaussian_preview.pdf)
<img src="assets/listing.jpg" width="40%" />
**Video Lessons and Notebooks**
Find PyGMTSAR (Python InSAR) video lessons and educational notebooks on [Patreon](https://www.patreon.com/collection/12458) and [YouTube](https://www.youtube.com/channel/UCSEeXKAn9f_bDiTjT6l87Lg).
**PyGMTSAR AI Assistant**
The [PyGMTSAR AI Assistant](https://insar.dev/ai), powered by OpenAI ChatGPT, can explain InSAR theory, guide you through examples, help build an InSAR processing pipeline, and troubleshoot.
<img width="40%" alt="PyGMTSAR AI Assistant" src="assets/ai.jpg" />
**PyGMTSAR on DockerHub**
Run InSAR processing on macOS, Linux, or Windows via [Docker images](https://hub.docker.com/r/pechnikov/pygmtsar).
**PyGMTSAR on PyPI**
Install the library from [PyPI](https://pypi.python.org/pypi/pygmtsar).
**PyGMTSAR Previous Versions**
2023 releases are still on GitHub, PyPI, DockerHub, and Google Colab. Compare PyGMTSAR InSAR with other software by checking out the [PyGMTSAR 2023 Repository](https://github.com/AlexeyPechnikov/pygmtsar/tree/pygmtsar).
© Alexey Pechnikov, 2025
================================================
FILE: docker/README.md
================================================
[](https://github.com/AlexeyPechnikov/pygmtsar)
[](https://pypi.python.org/pypi/pygmtsar/)
[](https://zenodo.org/badge/latestdoi/398018212)
[](https://www.patreon.com/pechnikov)
[](https://insar.dev/ai)
## PyGMTSAR (Python InSAR) - Powerful and Accessible Satellite Interferometry
**Note**: Previous builds available at https://hub.docker.com/r/mobigroup/pygmtsar
PyGMTSAR (Python InSAR) is designed to meet the needs of both occasional users and experts in Sentinel-1 Satellite Interferometry. It offers a wide range of features, including SBAS, PSI, PSI-SBAS, and more. In addition to the examples provided below, I also share Jupyter notebook examples on [Patreon](https://www.patreon.com/pechnikov) and provide updates on its progress through my [LinkedIn profile](https://www.linkedin.com/in/alexey-pechnikov/).
<img src="https://github.com/AlexeyPechnikov/pygmtsar/assets/7342379/c157c3a6-ed06-4b6d-82ae-c0aefb286d47" width="40%" />
## About PyGMTSAR
PyGMTSAR provides accessible, reproducible, and powerful Sentinel-1 interferometry that is available to everyone, regardless of their location. It encompasses a variety of interferometry approaches, including SBAS, PSI, PSI-SBAS, and time series and trend analysis, all integrated into a single Python package. Whether you're utilizing Google Colab, DockerHub, or any other platform, PyGMTSAR is ready to meet your needs.
One of the most in-demand features in PyGMTSAR (Python InSAR) is the combined analysis of Persistent Scatterers (PS or PSI) and the Small Baseline Subset (SBAS). Each method has its own unique advantages and drawbacks — SBAS typically performs better in rural areas, while PS is more suited to urban environments. My vision is to merge the benefits of both methods while mitigating their shortcomings through a unified PS-SBAS process. Additionally, PyGMTSAR offers weighted interferogram processing using an amplitude stability matrix, which emphasizes stable pixels. This approach enhances phase and coherence, improving the accuracy of results by maintaining high coherence, even in rural areas.
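To illustrate the amplitude stability idea in isolation: a pixel whose amplitude varies little across the stack of acquisitions receives a weight close to one. The sketch below is a conceptual illustration based on the standard amplitude dispersion index, not PyGMTSAR's actual implementation:

```
# conceptual sketch of amplitude stability weighting, not PyGMTSAR's implementation
import numpy as np

def amplitude_stability(amplitudes):
    """amplitudes: stack of shape (dates, height, width); returns weights in (0, 1]."""
    mean = amplitudes.mean(axis=0)
    std = amplitudes.std(axis=0)
    dispersion = std / np.maximum(mean, 1e-9)  # amplitude dispersion index
    return 1.0 / (1.0 + dispersion)            # stable pixels get weights near 1
```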
## PyGMTSAR Live Examples in Docker Image
Configure your Docker runtime (Preferences -> Resources tab for Docker Desktop) using the example configurations in the table below, or adjust it to fit your needs. The disk usage for the examples is shown in the table. To process all the examples, you will need a 120 GB Docker virtual disk limit (114 GB Docker container size in my tests, though I recommend allowing slightly more to be safe). You can check the Docker container’s disk usage with the following command:
```
docker ps -s
```
The examples are coded with different objectives in mind. Some notebooks are designed to provide easy-to-read educational content, while others are optimized for maximum execution efficiency. This variation in coding approaches is why an annual SBAS+PSI example can run on a system with 2GB RAM, whereas an SBAS analysis of a few interferograms may require 8GB RAM.
Some examples use complete Sentinel-1 SLC scenes, while others utilize Sentinel-1 bursts. PyGMTSAR can effectively process both, allowing you to choose the type of source data that best suits your needs.
**Table: Processing Times for InSAR Analysis Example Notebooks on iMac 2021 (Apple M1, 8 CPU cores, 16 GB RAM, 2 TB SSD) Using Various Docker Configurations and Natively Without Docker**
| Analysis | Notebook | **Scenes** | **Bursts** | **Subswaths** | **Interferograms** | Time (Native, no Docker) | Time (4 CPUs, 8 GB RAM) | Time (2 CPUs, 4 GB RAM) | Time (1 CPU, 2 GB RAM) | Disk Usage, GB |
| ------------------------ | ----------------------------- | ---------- | ---------- | ------------- | ------------------ | ------------------------ | ----------------------- | ----------------------- | ---------------------- | -------------- |
| SBAS and PSI Analyses | Lake Sarez Landslides | 19 | 38 | 1 | 76 | 11 min | 17 min | 24 min | 38 min* | 22.2 |
| Co-Seismic Interferogram | Central Türkiye Earthquake | 4 | 112 | 3 | 1 | 15 min | 24 min | 33 min | 62 min* | 50.3 |
| SBAS and PSI Analyses | Golden Valley Subsidence | 30 | 30 | 1 | 57 | 5 min | 7 min | 10 min | 18 min | 12.8 |
| SBAS Analysis | Imperial Valley Groundwater | 5 | | 1 | 9 | 6 min | 9 min | 12 min | 21 min | 5.5 |
| Co-Seismic Interferogram | Iran–Iraq Earthquake | 3 | | 1 | 1 | 1 min | 2 min | 2 min | 3 min* | 6.4 |
| Co-Seismic Interferogram | La Cumbre Volcano Eruption | 2 | 8 | 2 | 1 | 1 min | 1 min | 1 min | 2 min | 2.4 |
| Co-Seismic Interferogram | Pico do Fogo Volcano Eruption | 2 | | 1 | 1 | 1 min | 1 min | 1 min | 2 min* | 2.1 |
| Flooding Map | Kalkarindji | 3 | | 1 | 2 | 1 min | 1 min | 1 min | 2 min* | 5.3 |
| Elevation Map | Erzincan, Türkiye | 2 | | 1 | 1 | 5 min | 7 min | 9 min | 16 min* | 5.3 |
**Note**: Download time is excluded. The reported times reflect the complete notebook run time when all data has already been downloaded in a previous run. For stable downloading of scenes and bursts with 2GB or 4GB RAM configurations, set the parameter n_jobs=1 for the ASF.download() function call. The default Docker Desktop 1GB swap is used, except for some examples on a 2GB RAM configuration that require a 2GB swap, indicated by an asterisk (*).
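For instance, a download call limited to a single parallel job might look like the hedged snippet below; the `n_jobs` parameter of `ASF.download()` comes from the note above, while the credentials, scene list, and other arguments are placeholders, not the verified API:

```
# hedged snippet: serialize downloads for 2-4 GB RAM containers; besides n_jobs,
# the arguments shown are placeholders rather than the exact ASF.download() API
from pygmtsar import ASF

asf = ASF(username='...', password='...')
asf.download('data', ['S1A_IW_SLC__1SDV_...'], n_jobs=1)
```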
Download the Docker image (or build it yourself using the [Dockerfile](https://github.com/AlexeyPechnikov/pygmtsar/blob/pygmtsar2/docker/pygmtsar.Dockerfile) in the repository), and run the container while forwarding port 8888 to JupyterLab using these commands inside your command line terminal window:
```
docker pull pechnikov/pygmtsar
docker run -dp 8888:8888 --name pygmtsar docker.io/pechnikov/pygmtsar
docker logs pygmtsar
```
Check the output for the JupyterLab link, then copy and paste it into your web browser's address bar. Alternatively, the downloaded Docker image can be started from the Docker Desktop app: press the "RUN" button, define the container name and the port in the dialog that opens (see "Optional settings" for the port number input field), then click the newly created container to launch it and view the output log with the clickable link.
System operations are available via the password-less “sudo” command. To upgrade the PyGMTSAR Python library, execute the command below in a notebook cell and restart the notebook:
```
import sys
!sudo {sys.executable} -m pip install -U pygmtsar
```
Alternatively, run the following command in the Docker container's terminal:
```
sudo --preserve-env=PATH sh -c "pip3 install -U pygmtsar"
```
## Build Docker images
The commands below build multi-architecture images using the [Dockerfile](https://github.com/AlexeyPechnikov/pygmtsar/blob/pygmtsar2/docker/pygmtsar.Dockerfile) from the project’s GitHub repository and push them to DockerHub in the ‘pechnikov’ repository with the tags ‘latest’ and the build date (e.g., ‘2024-01-21’):
```
docker buildx create --name pygmtsar
docker buildx use pygmtsar
docker buildx inspect --bootstrap
docker buildx build . -f pygmtsar.Dockerfile \
--platform linux/amd64,linux/arm64 \
--tag pechnikov/pygmtsar:$(date "+%Y-%m-%d") \
--tag pechnikov/pygmtsar:latest \
--pull --push --no-cache
docker buildx rm pygmtsar
```
For a local build, use the following command:
```
docker build . -f pygmtsar.Dockerfile -t pygmtsar:latest --no-cache
```
The only requirement for these builds is the [Dockerfile](https://github.com/AlexeyPechnikov/pygmtsar/blob/pygmtsar2/docker/pygmtsar.Dockerfile); there is no need to download the entire PyGMTSAR GitHub repository, as the build process will automatically retrieve all necessary files.
## Learn more
- Source code and bug tracker on [Github](https://github.com/AlexeyPechnikov/pygmtsar)
- Python library on [PyPI](https://pypi.python.org/pypi/pygmtsar/)
- Interactive examples online on Google Colab at [InSAR.dev](https://insar.dev)
- Advanced online examples on Google Colab Pro and articles at [pechnikov.dev](https://pechnikov.dev)
- AI InSAR Assistant powered by ChatGPT4 at [InSAR.dev/ai](https://insar.dev/ai)
================================================
FILE: docker/pygmtsar.Dockerfile
================================================
# https://jupyter-docker-stacks.readthedocs.io/en/latest/using/selecting.html
# host platform compilation:
# docker build . -f pygmtsar.Dockerfile -t pechnikov/pygmtsar:latest --no-cache
# cross-compilation:
# docker buildx build . -f pygmtsar.Dockerfile -t pechnikov/pygmtsar:latest-amd64 --no-cache --platform linux/amd64 --load
# multiple hosts compilation:
# amd64 image:
# docker buildx build . -f docker/pygmtsar.Dockerfile \
# --platform linux/amd64 \
# --tag pechnikov/pygmtsar:$(date "+%Y-%m-%d")-amd64 \
# --tag pechnikov/pygmtsar:latest-amd64 \
# --pull --push --no-cache
# arm64 image:
# docker buildx build . -f docker/pygmtsar.Dockerfile \
# --platform linux/arm64 \
# --tag pechnikov/pygmtsar:$(date "+%Y-%m-%d")-arm64 \
# --tag pechnikov/pygmtsar:latest-arm64 \
# --pull --push --no-cache
# create a multi-arch manifest:
# docker buildx imagetools create \
# --tag pechnikov/pygmtsar:latest \
# pechnikov/pygmtsar:latest-amd64 \
# pechnikov/pygmtsar:latest-arm64
FROM quay.io/jupyter/scipy-notebook:2025-01-28
##########################################################################################
# Start initialization
##########################################################################################
USER root
# grant passwordless sudo rights
RUN echo "${NB_USER} ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
# install command-line tools
RUN apt-get -y update && apt-get -y upgrade && apt-get -y install \
git subversion curl jq csh zip htop mc netcdf-bin \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
##########################################################################################
# Install GMTSAR
##########################################################################################
# install dependencies
RUN apt-get -y update && apt-get -y install \
autoconf make gfortran \
gdal-bin libgdal-dev \
libtiff5-dev \
libhdf5-dev \
liblapack-dev \
gmt libgmt-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# define installation and binaries search paths
ARG GMTSAR=/usr/local/GMTSAR
ARG ORBITS=/usr/local/orbits
ENV PATH=${GMTSAR}/bin:$PATH
# install GMTSAR from git
RUN cd $(dirname ${GMTSAR}) \
&& git config --global advice.detachedHead false \
&& git clone --branch master https://github.com/gmtsar/gmtsar GMTSAR \
&& cd ${GMTSAR} \
&& git checkout e98ebc0f4164939a4780b1534bac186924d7c998 \
&& autoconf \
&& ./configure --with-orbits-dir=${ORBITS} CFLAGS='-z muldefs' LDFLAGS='-z muldefs' \
&& make -j$(nproc) \
&& make install \
&& make clean
# system cleanup
RUN apt-get -y remove --purge \
libgdal-dev autoconf make gfortran \
libtiff5-dev libhdf5-dev liblapack-dev libgmt-dev \
&& apt-get autoremove -y --purge
##########################################################################################
# Install VTK
##########################################################################################
# install dependencies
RUN apt-get -y update && apt-get -y install \
libopengl0 mesa-utils \
libgl1-mesa-dev libegl1-mesa-dev libgles2-mesa-dev \
cmake ninja-build python3-dev build-essential \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# install python vtk library
RUN git clone --depth=1 --branch v9.3.1 https://gitlab.kitware.com/vtk/vtk.git \
&& cd vtk \
&& mkdir build && cd build \
&& cmake ../ \
-GNinja \
-DCMAKE_BUILD_TYPE=Release \
-DVTK_DEFAULT_RENDER_WINDOW_OFFSCREEN=ON \
-DVTK_OPENGL_HAS_EGL=ON \
-DVTK_OPENGL_HAS_OSMESA=OFF \
-DVTK_USE_X=OFF \
-DCMAKE_INSTALL_PREFIX=/opt/conda \
-DVTK_WRAP_PYTHON=ON \
-DVTK_PYTHON_VERSION=3 \
&& ninja -j$(nproc) \
&& ninja install \
&& cd ../.. \
&& rm -fr vtk
# system cleanup
RUN apt-get -y remove --purge \
libgl1-mesa-dev libegl1-mesa-dev libgles2-mesa-dev \
cmake ninja-build python3-dev build-essential \
&& apt-get autoremove -y --purge
##########################################################################################
# Install Python Libraries
##########################################################################################
# install dependencies to build rasterio
RUN apt-get -y update && apt-get -y install \
libhdf5-dev pkg-config \
libgdal-dev build-essential \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# install PyGMTSAR and visualization libraries
# install the rasterio PyGMTSAR dependency explicitly because it requires compilation on ARM64
RUN /opt/conda/bin/pip3 install --no-cache-dir \
xvfbwrapper \
ipywidgets \
jupyter_bokeh \
panel \
ipyleaflet \
rasterio
# install recent pyvista despite its invalid dependency specification: install its dependencies first, then pyvista itself with --no-deps
RUN /opt/conda/bin/pip3 install --no-cache-dir matplotlib pillow pooch scooby typing-extensions \
&& /opt/conda/bin/pip3 install --no-cache-dir --no-deps pyvista
# system cleanup
RUN apt-get -y remove --purge \
libhdf5-dev pkg-config \
libgdal-dev build-essential \
&& apt-get autoremove -y --purge
##########################################################################################
# Install Virtual Frame Buffer for PyVista
##########################################################################################
# install dependencies to compile
RUN apt-get -y update && apt-get -y install \
xvfb \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# modify start-notebook.py to start Xvfb
RUN sed -i '/import sys/a \
# Start Xvfb\n\
import xvfbwrapper\n\
display = xvfbwrapper.Xvfb(width=1280, height=1024)\n\
display.start()' /usr/local/bin/start-notebook.py
##########################################################################################
# Add PyGMTSAR examples
##########################################################################################
# download Google Colab notebooks from Google Drive
RUN wget -q https://raw.githubusercontent.com/AlexeyPechnikov/pygmtsar/refs/heads/pygmtsar2/notebooks/dload.sh \
&& chmod +x dload.sh \
&& ./dload.sh colab_notebooks \
&& rm -f dload.sh \
&& mv colab_notebooks ${HOME}/notebooks \
&& chown -R ${NB_UID}:${NB_GID} ${HOME}/notebooks
##########################################################################################
# Install PyGMTSAR
##########################################################################################
ADD "https://api.github.com/repos/AlexeyPechnikov/pygmtsar/commits?per_page=1" latest_commit
RUN /opt/conda/bin/pip3 install --no-cache-dir git+https://github.com/AlexeyPechnikov/gmtsar.git@pygmtsar2#subdirectory=pygmtsar \
&& rm -f latest_commit
##########################################################################################
# End initialization
##########################################################################################
# switch user
USER ${NB_UID}
WORKDIR "${HOME}"
RUN rm -rf work
================================================
FILE: docker/pygmtsar.ubuntu2204.Dockerfile
================================================
# https://jupyter-docker-stacks.readthedocs.io/en/latest/using/selecting.html
# https://github.com/jupyter/docker-stacks/blob/main/base-notebook/Dockerfile
# https://hub.docker.com/repository/docker/mobigroup/pygmtsar
# host platform compilation:
# docker build . -f pygmtsar.Dockerfile -t mobigroup/pygmtsar:latest --no-cache
# cross-compilation:
# docker buildx build . -f pygmtsar.Dockerfile -t mobigroup/pygmtsar:latest --no-cache --platform linux/amd64 --load
FROM jupyter/scipy-notebook:ubuntu-22.04
USER root
# install GMTSAR dependencies and some helpful command-line tools
RUN apt-get -y update \
&& apt-get -y install git gdal-bin libgdal-dev subversion curl jq \
&& apt-get -y install csh autoconf make gfortran \
&& apt-get -y install libtiff5-dev libhdf5-dev liblapack-dev libgmt-dev gmt \
&& apt-get -y install zip htop mc netcdf-bin \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
# define installation and binaries search paths
ARG GMTSAR=/usr/local/GMTSAR
ARG ORBITS=/usr/local/orbits
ENV PATH=${GMTSAR}/bin:$PATH
# install GMTSAR from git
RUN cd $(dirname ${GMTSAR}) \
&& git config --global advice.detachedHead false \
&& git clone --branch master https://github.com/gmtsar/gmtsar GMTSAR \
&& cd ${GMTSAR} \
&& git checkout e98ebc0f4164939a4780b1534bac186924d7c998 \
&& autoconf \
&& ./configure --with-orbits-dir=${ORBITS} CFLAGS='-z muldefs' LDFLAGS='-z muldefs' \
&& make \
&& make install
# install PyGMTSAR and additional libraries
RUN apt-get -y update \
&& apt-get -y install xvfb libegl1-mesa libgdal-dev gdal-bin \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
RUN conda install -y -c conda-forge vtk panel xvfbwrapper pyvista
# magic fix for the h5py library: remove the preinstalled copy so the pinned version below installs cleanly
RUN pip3 uninstall -y h5py
# use requirements.sh to build the installation command
RUN pip3 install \
asf_search==7.0.4 \
h5netcdf==1.3.0 \
h5py==3.10.0 \
rasterio==1.3.11 \
ipywidgets==8.1.1 \
ipyleaflet==0.19.1 \
remotezip==0.12.2 \
jupyter_bokeh \
pygmtsar \
&& jupyter lab build \
&& jupyter labextension install jupyter-leaflet \
&& jupyter labextension list
# modify start-notebook.py to start Xvfb
RUN sed -i '/import sys/a \
# Start Xvfb\n\
import xvfbwrapper\n\
display = xvfbwrapper.Xvfb(width=1280, height=1024)\n\
display.start()' /usr/local/bin/start-notebook.py
# grant passwordless sudo rights
RUN echo "${NB_USER} ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
# switch user
USER ${NB_UID}
WORKDIR "${HOME}"
# Clone only the pygmtsar2 branch
RUN git config --global http.postBuffer 524288000 \
&& git clone --branch pygmtsar2 --single-branch https://github.com/AlexeyPechnikov/pygmtsar.git \
&& mv pygmtsar/notebooks ./notebooks \
&& mv pygmtsar/README.md ./ \
&& rm -rf pygmtsar work
================================================
FILE: docker/requirements.json
================================================
[{"name": "adjustText", "version": "1.0.4"}, {"name": "affine", "version": "2.4.0"}, {"name": "aiohttp", "version": "3.9.3"}, {"name": "aiosignal", "version": "1.3.1"}, {"name": "alembic", "version": "1.12.0"}, {"name": "altair", "version": "5.1.2"}, {"name": "anyio", "version": "4.0.0"}, {"name": "argon2-cffi", "version": "23.1.0"}, {"name": "argon2-cffi-bindings", "version": "21.2.0"}, {"name": "arrow", "version": "1.3.0"}, {"name": "asf_search", "version": "7.0.4"}, {"name": "asttokens", "version": "2.4.0"}, {"name": "async-generator", "version": "1.10"}, {"name": "async-lru", "version": "2.0.4"}, {"name": "attrs", "version": "23.1.0"}, {"name": "Babel", "version": "2.13.0"}, {"name": "backcall", "version": "0.2.0"}, {"name": "backports.functools-lru-cache", "version": "1.6.5"}, {"name": "beautifulsoup4", "version": "4.12.2"}, {"name": "bleach", "version": "6.1.0"}, {"name": "blinker", "version": "1.6.3"}, {"name": "bokeh", "version": "3.3.0"}, {"name": "boltons", "version": "23.0.0"}, {"name": "Bottleneck", "version": "1.3.7"}, {"name": "Brotli", "version": "1.1.0"}, {"name": "cached-property", "version": "1.5.2"}, {"name": "certifi", "version": "2024.2.2"}, {"name": "certipy", "version": "0.1.3"}, {"name": "cffi", "version": "1.16.0"}, {"name": "cftime", "version": "1.6.3"}, {"name": "charset-normalizer", "version": "3.3.0"}, {"name": "click", "version": "8.1.7"}, {"name": "click-plugins", "version": "1.1.1"}, {"name": "cligj", "version": "0.7.2"}, {"name": "cloudpickle", "version": "3.0.0"}, {"name": "colorama", "version": "0.4.6"}, {"name": "comm", "version": "0.1.4"}, {"name": "conda", "version": "23.9.0"}, {"name": "conda-package-handling", "version": "2.2.0"}, {"name": "conda_package_streaming", "version": "0.9.0"}, {"name": "contourpy", "version": "1.1.1"}, {"name": "cryptography", "version": "41.0.4"}, {"name": "cycler", "version": "0.12.1"}, {"name": "Cython", "version": "3.0.4"}, {"name": "cytoolz", "version": "0.12.2"}, {"name": "dask", "version": "2024.1.1"}, {"name": "dask-image", "version": "2023.8.1"}, {"name": "dateparser", "version": "1.2.0"}, {"name": "debugpy", "version": "1.8.0"}, {"name": "decorator", "version": "5.1.1"}, {"name": "defusedxml", "version": "0.7.1"}, {"name": "dill", "version": "0.3.7"}, {"name": "distributed", "version": "2024.1.1"}, {"name": "entrypoints", "version": "0.4"}, {"name": "et-xmlfile", "version": "1.1.0"}, {"name": "exceptiongroup", "version": "1.1.3"}, {"name": "executing", "version": "1.2.0"}, {"name": "fastjsonschema", "version": "2.18.1"}, {"name": "fiona", "version": "1.9.5"}, {"name": "fonttools", "version": "4.43.1"}, {"name": "fqdn", "version": "1.5.1"}, {"name": "frozenlist", "version": "1.4.1"}, {"name": "fsspec", "version": "2023.9.2"}, {"name": "geopandas", "version": "0.14.3"}, {"name": "gitdb", "version": "4.0.10"}, {"name": "GitPython", "version": "3.1.40"}, {"name": "gmpy2", "version": "2.1.2"}, {"name": "greenlet", "version": "3.0.0"}, {"name": "h5netcdf", "version": "1.3.0"}, {"name": "h5py", "version": "3.10.0"}, {"name": "idna", "version": "3.4"}, {"name": "imagecodecs", "version": "2023.9.18"}, {"name": "imageio", "version": "2.31.5"}, {"name": "importlib-metadata", "version": "6.8.0"}, {"name": "importlib-resources", "version": "6.1.0"}, {"name": "ipykernel", "version": "6.25.2"}, {"name": "ipympl", "version": "0.9.3"}, {"name": "ipython", "version": "8.16.1"}, {"name": "ipython-genutils", "version": "0.2.0"}, {"name": "ipywidgets", "version": "8.1.1"}, {"name": "isoduration", "version": "20.11.0"}, {"name": 
"jedi", "version": "0.19.1"}, {"name": "Jinja2", "version": "3.1.2"}, {"name": "joblib", "version": "1.3.2"}, {"name": "json5", "version": "0.9.14"}, {"name": "jsonpatch", "version": "1.33"}, {"name": "jsonpointer", "version": "2.4"}, {"name": "jsonschema", "version": "4.19.1"}, {"name": "jsonschema-specifications", "version": "2023.7.1"}, {"name": "jupyter_client", "version": "8.4.0"}, {"name": "jupyter_core", "version": "5.4.0"}, {"name": "jupyter-events", "version": "0.8.0"}, {"name": "jupyter-lsp", "version": "2.2.0"}, {"name": "jupyter_server", "version": "2.8.0"}, {"name": "jupyter-server-mathjax", "version": "0.2.6"}, {"name": "jupyter_server_terminals", "version": "0.4.4"}, {"name": "jupyter-telemetry", "version": "0.1.0"}, {"name": "jupyterhub", "version": "4.0.2"}, {"name": "jupyterlab", "version": "4.0.12"}, {"name": "jupyterlab-git", "version": "0.41.0"}, {"name": "jupyterlab-pygments", "version": "0.2.2"}, {"name": "jupyterlab_server", "version": "2.25.0"}, {"name": "jupyterlab-widgets", "version": "3.0.9"}, {"name": "kiwisolver", "version": "1.4.5"}, {"name": "lazy_loader", "version": "0.3"}, {"name": "libmambapy", "version": "1.5.2"}, {"name": "linkify-it-py", "version": "2.0.3"}, {"name": "llvmlite", "version": "0.40.1"}, {"name": "locket", "version": "1.0.0"}, {"name": "loguru", "version": "0.7.2"}, {"name": "lz4", "version": "4.3.2"}, {"name": "Mako", "version": "1.2.4"}, {"name": "mamba", "version": "1.5.2"}, {"name": "Markdown", "version": "3.5.2"}, {"name": "markdown-it-py", "version": "3.0.0"}, {"name": "MarkupSafe", "version": "2.1.3"}, {"name": "matplotlib", "version": "3.8.0"}, {"name": "matplotlib-inline", "version": "0.1.6"}, {"name": "mdit-py-plugins", "version": "0.4.0"}, {"name": "mdurl", "version": "0.1.2"}, {"name": "mistune", "version": "3.0.1"}, {"name": "mpmath", "version": "1.3.0"}, {"name": "msgpack", "version": "1.0.6"}, {"name": "multidict", "version": "6.0.5"}, {"name": "munkres", "version": "1.1.4"}, {"name": "nbclassic", "version": "1.0.0"}, {"name": "nbclient", "version": "0.8.0"}, {"name": "nbconvert", "version": "7.9.2"}, {"name": "nbdime", "version": "3.2.1"}, {"name": "nbformat", "version": "5.9.2"}, {"name": "nc-time-axis", "version": "1.4.1"}, {"name": "nest-asyncio", "version": "1.5.8"}, {"name": "networkx", "version": "3.2"}, {"name": "notebook", "version": "7.0.6"}, {"name": "notebook_shim", "version": "0.2.3"}, {"name": "numba", "version": "0.57.1"}, {"name": "numexpr", "version": "2.8.7"}, {"name": "numpy", "version": "1.24.4"}, {"name": "oauthlib", "version": "3.2.2"}, {"name": "openpyxl", "version": "3.1.2"}, {"name": "overrides", "version": "7.4.0"}, {"name": "packaging", "version": "23.2"}, {"name": "pamela", "version": "1.1.0"}, {"name": "pandas", "version": "2.2.1"}, {"name": "pandocfilters", "version": "1.5.0"}, {"name": "panel", "version": "1.3.8"}, {"name": "param", "version": "2.0.2"}, {"name": "parso", "version": "0.8.3"}, {"name": "partd", "version": "1.4.1"}, {"name": "patsy", "version": "0.5.3"}, {"name": "pexpect", "version": "4.8.0"}, {"name": "pickleshare", "version": "0.7.5"}, {"name": "Pillow", "version": "10.1.0"}, {"name": "PIMS", "version": "0.6.1"}, {"name": "pip", "version": "23.3"}, {"name": "pkgutil_resolve_name", "version": "1.3.10"}, {"name": "platformdirs", "version": "3.11.0"}, {"name": "pluggy", "version": "1.3.0"}, {"name": "pooch", "version": "1.8.1"}, {"name": "prometheus-client", "version": "0.17.1"}, {"name": "prompt-toolkit", "version": "3.0.39"}, {"name": "protobuf", "version": "4.24.3"}, {"name": 
"psutil", "version": "5.9.5"}, {"name": "ptyprocess", "version": "0.7.0"}, {"name": "pure-eval", "version": "0.2.2"}, {"name": "py-cpuinfo", "version": "9.0.0"}, {"name": "pyarrow", "version": "13.0.0"}, {"name": "pyarrow-hotfix", "version": "0.6"}, {"name": "pycosat", "version": "0.6.6"}, {"name": "pycparser", "version": "2.21"}, {"name": "pycurl", "version": "7.45.1"}, {"name": "Pygments", "version": "2.16.1"}, {"name": "pygmtsar", "version": "2024.2.21.post3"}, {"name": "PyJWT", "version": "2.8.0"}, {"name": "pyOpenSSL", "version": "23.2.0"}, {"name": "pyparsing", "version": "3.1.1"}, {"name": "pyproj", "version": "3.6.1"}, {"name": "PySocks", "version": "1.7.1"}, {"name": "python-dateutil", "version": "2.8.2"}, {"name": "python-json-logger", "version": "2.0.7"}, {"name": "pytz", "version": "2023.3.post1"}, {"name": "pyvista", "version": "0.43.3"}, {"name": "pyviz_comms", "version": "3.0.0"}, {"name": "PyWavelets", "version": "1.4.1"}, {"name": "PyYAML", "version": "6.0.1"}, {"name": "pyzmq", "version": "25.1.1"}, {"name": "rasterio", "version": "1.3.9"}, {"name": "referencing", "version": "0.30.2"}, {"name": "regex", "version": "2023.12.25"}, {"name": "remotezip", "version": "0.12.2"}, {"name": "requests", "version": "2.31.0"}, {"name": "rfc3339-validator", "version": "0.1.4"}, {"name": "rfc3986-validator", "version": "0.1.1"}, {"name": "rioxarray", "version": "0.15.1"}, {"name": "rpds-py", "version": "0.10.6"}, {"name": "ruamel.yaml", "version": "0.17.39"}, {"name": "ruamel.yaml.clib", "version": "0.2.7"}, {"name": "scikit-image", "version": "0.22.0"}, {"name": "scikit-learn", "version": "1.3.1"}, {"name": "scipy", "version": "1.11.4"}, {"name": "scooby", "version": "0.9.2"}, {"name": "seaborn", "version": "0.13.0"}, {"name": "Send2Trash", "version": "1.8.2"}, {"name": "setuptools", "version": "68.2.2"}, {"name": "shapely", "version": "2.0.3"}, {"name": "six", "version": "1.16.0"}, {"name": "slicerator", "version": "1.1.0"}, {"name": "smmap", "version": "3.0.5"}, {"name": "sniffio", "version": "1.3.0"}, {"name": "snuggs", "version": "1.4.7"}, {"name": "sortedcontainers", "version": "2.4.0"}, {"name": "soupsieve", "version": "2.5"}, {"name": "SQLAlchemy", "version": "2.0.22"}, {"name": "stack-data", "version": "0.6.2"}, {"name": "statsmodels", "version": "0.14.0"}, {"name": "sympy", "version": "1.12"}, {"name": "tables", "version": "3.9.1"}, {"name": "tabulate", "version": "0.9.0"}, {"name": "tblib", "version": "2.0.0"}, {"name": "tenacity", "version": "8.2.2"}, {"name": "terminado", "version": "0.17.1"}, {"name": "threadpoolctl", "version": "3.2.0"}, {"name": "tifffile", "version": "2023.9.26"}, {"name": "tinycss2", "version": "1.2.1"}, {"name": "tomli", "version": "2.0.1"}, {"name": "toolz", "version": "0.12.0"}, {"name": "tornado", "version": "6.3.3"}, {"name": "tqdm", "version": "4.66.1"}, {"name": "traitlets", "version": "5.11.2"}, {"name": "truststore", "version": "0.8.0"}, {"name": "types-python-dateutil", "version": "2.8.19.14"}, {"name": "typing_extensions", "version": "4.8.0"}, {"name": "typing-utils", "version": "0.1.0"}, {"name": "tzdata", "version": "2023.3"}, {"name": "tzlocal", "version": "5.2"}, {"name": "uc-micro-py", "version": "1.0.3"}, {"name": "uri-template", "version": "1.3.0"}, {"name": "urllib3", "version": "2.0.7"}, {"name": "vtk", "version": "9.2.6"}, {"name": "wcwidth", "version": "0.2.8"}, {"name": "webcolors", "version": "1.13"}, {"name": "webencodings", "version": "0.5.1"}, {"name": "websocket-client", "version": "1.6.4"}, {"name": "wheel", "version": 
"0.41.2"}, {"name": "widgetsnbextension", "version": "4.0.9"}, {"name": "wslink", "version": "1.12.4"}, {"name": "xarray", "version": "2024.2.0"}, {"name": "xlrd", "version": "2.0.1"}, {"name": "xmltodict", "version": "0.13.0"}, {"name": "xvfbwrapper", "version": "0.2.9"}, {"name": "xyzservices", "version": "2023.10.0"}, {"name": "yarl", "version": "1.9.4"}, {"name": "zict", "version": "3.0.0"}, {"name": "zipp", "version": "3.17.0"}, {"name": "zstandard", "version": "0.21.0"}]
================================================
FILE: docker/requirements.sh
================================================
#!/bin/bash
# Get the list of all installed packages with their versions inside Docker container
#pip list --format=json > requirements.json
# Get the list of dependencies for pygmtsar
dependencies=$(pip3 show pygmtsar 2>/dev/null | grep Requires | cut -d: -f2 | tr -d ' ')
echo "RUN pip3 install \\"
# Loop through the dependencies to find their versions
for dep in ${dependencies//,/ }; do
# Use jq to parse the JSON and find the version of the dependency
version=$(jq --arg dep "$dep" -r '.[] | select(.name==$dep) | .version' requirements.json)
if [[ ! -z "$version" ]]; then
# Emit the dependency pinned to its version
echo " $dep==$version \\"
fi
done
echo " pygmtsar"
================================================
FILE: notebooks/dload.sh
================================================
#!/bin/sh
DIR="notebooks"
if [ $# -gt 0 ]; then
DIR="$1"
fi
rm -fr README.md "$DIR"
mkdir -p "$DIR"
wget -c 'https://raw.githubusercontent.com/AlexeyPechnikov/pygmtsar/refs/heads/pygmtsar2/README.md'
grep -o 'https://colab.research.google.com/drive/[0-9A-Za-z_-]*' README.md \
| sed 's#https://colab.research.google.com/drive/##' \
> "$DIR/notebooks.txt"
while IFS= read -r ID; do
if [ -n "$ID" ]; then
echo "Downloading notebook: $ID"
wget --no-check-certificate "https://drive.google.com/uc?export=download&id=$ID" -O "$DIR/$ID.ipynb"
FILENAME=$(jq -r '
first(
.cells[]
| select(.cell_type == "markdown")
| .source[0]
| capture("^## (?<title>.+)").title
| split(": ") | last
| gsub(" |,|\\(|\\)"; "_")
| gsub("_+"; "_")
| gsub("_$"; "")
)
' "$DIR/$ID.ipynb")
mv "$DIR/$ID.ipynb" "$DIR/$FILENAME.ipynb"
fi
done < "$DIR/notebooks.txt"
rm -f "$DIR/notebooks.txt"
echo "All notebooks downloaded to: $DIR"
================================================
FILE: pubs/README.md
================================================
## Publications that Cite PyGMTSAR
Have you published a paper or released a publicly accessible project that **explicitly cites PyGMTSAR**? I welcome peer-reviewed articles, conference papers, theses, preprints, and industry case studies.
Celik, F., Sanli, F.B., Celik, K., & Celik, A. (2025). Kurtun Dam oscillate characterization with landslide possible effect detection using InSAR observations. Natural Hazards, 121(14), 16747–16763. https://doi.org/10.1007/s11069-025-07447-1 https://link.springer.com/content/pdf/10.1007/s11069-025-07447-1.pdf [Direct PDF]
Dewan, A., Jain, H., Hossain, M.A., Adnan, M.S.G., et al. (2025). Estimating vertical land motion-adjusted sea level rise in a data-sparse and vulnerable coastal region. Geomatics, Natural Hazards and Risk, 16(1). https://doi.org/10.1080/19475705.2025.2545375 https://www.tandfonline.com/doi/full/10.1080/19475705.2025.2545375 [Full text (HTML)]
Hussain, S., Pan, B., Hussain, W., Ali, M., Sajjad, M.M., Afzal, Z., & Tariq, A. (2025). Analyzing coseismic displacement of the M7.7 Myanmar earthquake on March 28, 2025, using Sentinel-1 InSAR data. Structures, 80, 109718. https://doi.org/10.1016/j.istruc.2025.109718 https://www.researchgate.net/publication/393698762_Analyzing_coseismic_displacement_of_the_M_77_Myanmar_earthquake_on_march_28_2025_using_Sentinel-1_InSAR_data [Author PDF]
Tung, S., Shirzaei, M., et al. (2025). Redistributed Seismicity at the Coso Geothermal Field Investigated Through Stress Changes from Fluid Production and Migration. *Proceedings of the 50th Stanford Geothermal Workshop*. https://pangea.stanford.edu/ERE/db/GeoConf/papers/SGW/2025/Tung.pdf [Direct PDF]
Brar, A. (2024). Urban Subsidence Monitoring in Mexico City Using SBAS and PS-InSAR: Analysing Ground Deformation with Sentinel-1 SAR Data (Master’s Major Research Project, Toronto Metropolitan University). https://remneetbrar.com/Project%20Documents/MRP%20Final%20Copy.pdf [Direct PDF]
Jacobs, B., et al. (2025). Safeguarding Cultural Heritage: Integrative Analysis of Gravitational Mass Movements at the Mortuary Temple of Hatshepsut, Luxor, Egypt. EGUsphere preprint. https://doi.org/10.5194/egusphere-2025-2007 https://egusphere.copernicus.org/preprints/2025/egusphere-2025-2007/egusphere-2025-2007.pdf [Direct PDF]
Alfayyadh, M.R.N. (2024). *Pemanfaatan Citra Sentinel-1 Berbasis Python untuk Kajian Deformasi Permukaan Akibat Gempa Bumi di Cianjur, Jawa Barat* (Undergraduate thesis). Universitas Pendidikan Indonesia. https://repository.upi.edu/115771/ [Full text (repository)]
International Symposium on Geoinformatics for Spatial-Infrastructure Development in Earth & Allied Sciences (GIS-IDEAS 2024). (2024). E-Proceedings. University of Phayao, Chiang Rai, Thailand, 11–13 Dec 2024. https://gis-ideas.org/2024/GIS_IDEAS_e_proceedings.pdf [Direct PDF]
• “Processing and analysing multi-source, multi-resolution geospatial data on cloud platforms for some environmental and disaster applications. Landslide monitoring in Van Yen, Yen Bai, Vietnam by PyGMTSAR on Google Colab” (pp. 140–146).
• “Investigating land subsidence by processing multi-temporal SAR time series on Google Colab: case study in Ca Mau City, Vietnam” (pp. 69–78).
================================================
FILE: pygmtsar/LICENSE.txt
================================================
BSD 3-Clause License
Copyright (c) 2023, Alexey Pechnikov, https://orcid.org/0000-0001-9626-8615 (ORCID)
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
================================================
FILE: pygmtsar/MANIFEST.in
================================================
include pygmtsar/data/geoid_egm96_icgem.grd
include pygmtsar/data/google_colab.sh
================================================
FILE: pygmtsar/pygmtsar/ASF.py
================================================
# ----------------------------------------------------------------------------
# PyGMTSAR
#
# This file is part of the PyGMTSAR project: https://github.com/mobigroup/gmtsar
#
# Copyright (c) 2023, Alexey Pechnikov
#
# Licensed under the BSD 3-Clause License (see LICENSE for details)
# ----------------------------------------------------------------------------
from .tqdm_joblib import tqdm_joblib
from .S1 import S1
class ASF(tqdm_joblib):
import pandas as pd
from datetime import timedelta
template_url = 'https://datapool.asf.alaska.edu/SLC/S{satellite}/{scene}.zip'
template_safe = '*.SAFE/**/*{mission}-iw{subswath}-slc-{polarization}-*'
# check for downloaded scene files
template_scene = 'S1?_IW_SLC__1S??_{start}_*.SAFE/*/s1?-iw{subswath_lowercase}-slc-{polarization_lowercase}-{start_lowercase}-*'
template_scene_safe = 'S1?_IW_SLC__1S??_{start}_*.SAFE/**/*s1?-iw{subswath_lowercase}-slc-{polarization_lowercase}-{start_lowercase}-*'
def __init__(self, username=None, password=None):
import asf_search
import getpass
if username is None:
username = getpass.getpass('Please enter your ASF username and press Enter key:')
if password is None:
password = getpass.getpass('Please enter your ASF password and press Enter key:')
self.username = username
self.password = password
def _get_asf_session(self):
import asf_search
return asf_search.ASFSession().auth_with_creds(self.username, self.password)
def download(self, basedir, scenes_or_bursts, subswaths=None, polarization='VV', **kwargs):
"""
Downloads the specified subswaths or bursts extracted from Sentinel-1 SLC scenes.
Parameters
----------
basedir : str
The directory where the downloaded scenes will be saved.
scenes_or_bursts : list of str
List of scene and burst identifiers to download.
subswaths : int or str, optional
Digits representing the subswaths to download for each scene (e.g., 1 or 123). Ignored for burst identifiers.
polarization : str, optional
The polarization to download ('VV' by default). Ignored for burst identifiers.
session : asf_search.ASFSession, optional
The session object for authentication. If None, a new session is created.
n_jobs : int, optional
The number of concurrent download jobs. Default is 4 for scenes and 8 for bursts.
joblib_backend : str, optional
The backend for parallel processing. Default is 'loky'.
skip_exist : bool, optional
If True, skips downloading scenes that already exist. Default is True.
debug : bool, optional
If True, prints debugging information. Default is False.
Returns
-------
pandas.DataFrame
A DataFrame containing the list of downloaded scenes and bursts.
"""
import pandas as pd
bursts = [item for item in scenes_or_bursts if item.endswith('-BURST')]
scenes = [item[:-4] if item.endswith('-SLC') else item for item in scenes_or_bursts if item not in bursts]
results = []
if len(bursts):
result = self.download_bursts(basedir, bursts, **kwargs)
if result is not None:
results.append(result.rename({'burst': 'burst_or_scene'}, axis=1))
if len(scenes):
result = self.download_scenes(basedir, scenes, subswaths=subswaths, polarization=polarization, **kwargs)
if result is not None:
results.append(result.rename({'scene': 'burst_or_scene'}, axis=1))
if len(results):
return pd.concat(results)
def download_scenes(self, basedir, scenes, subswaths, polarization='VV', session=None,
n_jobs=4, joblib_backend='loky', skip_exist=True,
retries=30, timeout_second=3, debug=False):
"""
Downloads the specified subswaths extracted from Sentinel-1 SLC scenes.
Parameters
----------
basedir : str
The directory where the downloaded scenes will be saved.
scenes : list of str
List of scene identifiers to download.
subswaths : int or str
Digits representing the subswaths to download for each scene (e.g., 1 or 123).
polarization : str, optional
The polarization to download ('VV' by default).
session : asf_search.ASFSession, optional
The session object for authentication. If None, a new session is created.
n_jobs : int, optional
The number of concurrent download jobs. Default is 4.
joblib_backend : str, optional
The backend for parallel processing. Default is 'loky'.
skip_exist : bool, optional
If True, skips downloading scenes that already exist. Default is True.
debug : bool, optional
If True, prints debugging information. Default is False.
Returns
-------
pandas.DataFrame
A DataFrame containing the list of downloaded scenes.
"""
import pandas as pd
import numpy as np
import asf_search
import fnmatch
import joblib
from tqdm.auto import tqdm
import os
import re
import glob
from datetime import datetime, timedelta
import time
import warnings
# suppress asf_search 'UserWarning: File already exists, skipping download'
warnings.filterwarnings("ignore", category=UserWarning)
# create the directory if needed
os.makedirs(basedir, exist_ok=True)
# collect all the existing files once
files = os.listdir(basedir)
#print ('files', len(files))
# skip existing scenes
if skip_exist:
# collect all the existing files once
files = glob.glob('**', root_dir=basedir, recursive=True)
#print ('files', len(files))
# check scenes folders only
#scenes_missed = np.unique([scene for scene in scenes if f'{scene}.SAFE' not in files])
scenes_missed = []
for scene in scenes:
scene_path = '/'.join([scene + '.SAFE'] + self.template_scene_safe.split('/')[1:])
#print ('scene_path', scene_path)
for subswath in str(subswaths):
pattern = scene_path.format(
subswath_lowercase = subswath.lower(),
polarization_lowercase = polarization.lower(),
start_lowercase = '*')
#print ('pattern', pattern)
matching = [filename for filename in files if fnmatch.fnmatch(filename, pattern)]
exts = [os.path.splitext(fname)[1] for fname in matching]
if '.tiff' in exts and '.xml' in exts and len(matching)>=4:
pass
else:
scenes_missed.append(scene)
else:
# process all the defined scenes
scenes_missed = scenes
#print ('scenes_missed', len(scenes_missed))
# work offline, without an internet connection, when all the scenes are already available
if len(scenes_missed) == 0:
return
def get_url(scene):
return self.template_url.format(satellite=scene[2:3], scene=scene)
def get_patterns(subswaths, polarization, mission):
#print (f'get_patterns: {subswaths}, {polarization}, {mission}')
assert len(mission) == 3 and mission[:2]=='S1', \
f'ERROR: mission name is invalid: {mission}. Expected names like "S1A", "S1B", etc.'
outs = []
for subswath in str(subswaths):
pattern = self.template_safe.format(mission=mission.lower(),
subswath=subswath,
polarization=polarization.lower())
outs.append(pattern)
return outs
def download_scene(scene, subswaths, polarization, basedir, session):
# define scene zip url
url = get_url(scene)
#print (f'download_scene: {url}, {patterns}, {basedir}, {session}')
patterns = get_patterns(subswaths, polarization, mission=scene[:3])
#print ('patterns', patterns)
with asf_search.remotezip(url, session) as remotezip:
filenames = remotezip.namelist()
#print ('filenames', filenames)
for pattern in patterns:
#print ('pattern', pattern)
matching = [filename for filename in filenames if fnmatch.fnmatch(filename, pattern)]
for filename in matching:
filesize = remotezip.getinfo(filename).file_size
#print (filename, filesize)
fullname = os.path.join(basedir, filename)
if os.path.exists(fullname) and os.path.getsize(fullname) == filesize:
#print (f'pass {fullname}')
pass
else:
#print (f'download {fullname}')
# create the directory if needed
try:
os.makedirs(os.path.dirname(fullname), exist_ok=True)
if os.path.exists(fullname + '.tmp'):
os.remove(fullname + '.tmp')
with open(fullname + '.tmp', 'wb') as file:
file.write(remotezip.read(filename))
assert os.path.getsize(fullname + '.tmp') == filesize, \
'ERROR: downloaded scene content is incomplete'
os.rename(fullname + '.tmp', fullname)
except Exception as e:
print(e)
raise
finally:
if os.path.exists(fullname + '.tmp'):
os.remove(fullname + '.tmp')
#remotezip.extract(filename, basedir)
# prepare authorized connection
if session is None:
session = self._get_asf_session()
if n_jobs is None or debug == True:
print ('Note: sequential joblib processing is applied when "n_jobs" is None or "debug" is True.')
joblib_backend = 'sequential'
def download_scene_with_retry(scene, subswaths, polarization, basedir, session, retries, timeout_second):
for retry in range(retries):
try:
download_scene(scene, subswaths, polarization, basedir, session)
return True
except Exception as e:
print(f'ERROR: download attempt {retry+1} failed for {scene}: {e}')
if retry + 1 == retries:
return False
time.sleep(timeout_second)
# download scenes
with self.tqdm_joblib(tqdm(desc='ASF Downloading Sentinel-1 SLC Scenes:', total=len(scenes_missed))) as progress_bar:
statuses = joblib.Parallel(n_jobs=n_jobs, backend=joblib_backend)(joblib.delayed(download_scene_with_retry)\
(scene, subswaths, polarization, basedir, session,
retries=retries, timeout_second=timeout_second) for scene in scenes_missed)
failed_count = statuses.count(False)
if failed_count > 0:
raise Exception(f'Scenes downloading failed for {failed_count} items.')
# parse processed scenes and convert to dataframe
#print ('scenes', len(scenes))
scenes_downloaded = pd.DataFrame(scenes_missed, columns=['scene'])
# return the results in a user-friendly dataframe
#scenes_downloaded['scene'] = scenes_downloaded.scene\
# .apply(lambda name: self.template_url.format(satellite=name[2:3], scene=name))
return scenes_downloaded
# https://asf.alaska.edu/datasets/data-sets/derived-data-sets/sentinel-1-bursts/
def download_bursts(self, basedir, bursts, session=None, n_jobs=8, joblib_backend='loky', skip_exist=True,
retries=30, timeout_second=3, debug=False):
"""
Downloads the specified bursts extracted from Sentinel-1 SLC scenes.
Parameters
----------
basedir : str
The directory where the downloaded bursts will be saved.
bursts : list of str
List of burst identifiers to download.
session : asf_search.ASFSession, optional
The session object for authentication. If None, a new session is created.
n_jobs : int, optional
The number of concurrent download jobs. Default is 8.
joblib_backend : str, optional
The backend for parallel processing. Default is 'loky'.
skip_exist : bool, optional
If True, skips downloading bursts that already exist. Default is True.
debug : bool, optional
If True, prints debugging information. Default is False.
Returns
-------
pandas.DataFrame
A DataFrame containing the list of downloaded bursts.
"""
import rioxarray as rio
from tifffile import TiffFile
import xmltodict
from xml.etree import ElementTree
import pandas as pd
import asf_search
import joblib
from tqdm.auto import tqdm
import os
import glob
from datetime import datetime, timedelta
import time
import warnings
# suppress asf_search 'UserWarning: File already exists, skipping download'
warnings.filterwarnings("ignore", category=UserWarning)
def filter_azimuth_time(items, start_utc_dt, stop_utc_dt, delta=3):
return [item for item in items if
datetime.strptime(item['azimuthTime'], '%Y-%m-%dT%H:%M:%S.%f') >= start_utc_dt - timedelta(seconds=delta) and
datetime.strptime(item['azimuthTime'], '%Y-%m-%dT%H:%M:%S.%f') <= stop_utc_dt + timedelta(seconds=delta)]
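# illustrative: with the default delta=3, an annotation item timed at 13:07:45
# passes for a burst spanning 13:07:44-13:07:47, tolerating small timing offsets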
# create the directory if needed
os.makedirs(basedir, exist_ok=True)
# skip existing bursts
if skip_exist:
bursts_missed = []
for burst in bursts:
#print (burst)
burst_parts = burst.split('_')
subswath = burst_parts[2]
polarization = burst_parts[4]
start = burst_parts[3]
#print ('start', start, 'subswath', subswath, 'polarization', polarization)
template = self.template_scene.format(start=start,
subswath_lowercase = subswath.lower(),
polarization_lowercase = polarization.lower(),
start_lowercase = start.lower())
#print ('template', template)
files = glob.glob(template, root_dir=basedir)
exts =[ os.path.splitext(file)[-1] for file in files]
if sorted(exts) == ['.tiff', '.xml']:
#print ('pass')
pass
else:
bursts_missed.append(burst)
else:
# process all the defined bursts
bursts_missed = bursts
#print ('bursts_missed', len(bursts_missed))
# work offline, without an internet connection, when all the bursts are already available
if len(bursts_missed) == 0:
return
def download_burst(result, basedir, session):
properties = result.geojson()['properties']
#print (properties)
burstIndex = properties['burst']['burstIndex']
platform = properties['platform'][-2:]
polarization = properties['polarization']
#print ('polarization', polarization)
subswath = properties['burst']['subswath']
#print ('subswath', subswath)
# fake image number, unique per polarization
# (the real image number is sequential across all polarizations);
# this trick allows detecting scene files without downloading the manifest
#imageNumber = '00' + subswath[-1:]
#print ('imageNumber', imageNumber)
# define new scene name
scene = properties['url'].split('/')[3]
#print ('scene', scene)
# use fake start and stop times to follow the burst naming
start_time = properties['sceneName'].split('_')[3]
start_time_dt = datetime.strptime(start_time, '%Y%m%dT%H%M%S')
stop_time_dt = (start_time_dt + timedelta(seconds=3))
stop_time = stop_time_dt.strftime('%Y%m%dT%H%M%S')
#print ('start_time, stop_time', start_time, stop_time)
scene_parts = scene.split('_')
#scene_parts[5] = startTime.replace('-','').replace(':','')[:len(scene_parts[5])]
scene_parts[5] = start_time
#scene_parts[6] = stopTime.replace('-','').replace(':','')[:len(scene_parts[6])]
scene_parts[6] = stop_time
scene = '_'.join(scene_parts)
#print ('scene', scene)
scene_dir = os.path.join(basedir, scene + '.SAFE')
# define burst name
burst = '-'.join([f's{platform.lower()}-{subswath.lower()}-slc-{polarization.lower()}'] + scene_parts[5:-1] + ['001']).lower()
# s1a-iw2-slc-vv-20240314t130744-20240314t130747-052978-0669c6-001
#print ('burst', burst)
# create the directories if needed
tif_dir = os.path.join(scene_dir, 'measurement')
xml_dir = os.path.join(scene_dir, 'annotation')
xml_calib_dir = os.path.join(xml_dir, 'calibration')
# save annotation using the burst and scene names
xml_file = os.path.join(xml_dir, burst + '.xml')
xml_noise_file = os.path.join(xml_calib_dir, 'noise-' + burst + '.xml')
xml_calib_file = os.path.join(xml_calib_dir, 'calibration-' + burst + '.xml')
#print ('xml_file', xml_file)
tif_file = os.path.join(tif_dir, burst + '.tiff')
#print ('tif_file', tif_file)
for dirname in [scene_dir, tif_dir, xml_dir, xml_calib_dir]:
os.makedirs(dirname, exist_ok=True)
# download tif
# properties['bytes'] is not the exact file size; it is typically about 40 kB smaller than the downloaded file
if os.path.exists(tif_file) and os.path.getsize(tif_file) >= int(properties['bytes']):
#print (f'pass {tif_file}')
pass
else:
#print ('YYY', os.path.getsize(tif_file), properties['bytes'])
# remove potentially incomplete file if needed
if os.path.exists(tif_file):
os.remove(tif_file)
# check if we can open the downloaded file without errors
tmp_file = os.path.join(scene_dir, os.path.basename(tif_file))
# remove potentially incomplete data file if needed
if os.path.exists(tmp_file):
os.remove(tmp_file)
result.download(scene_dir, filename=os.path.basename(tif_file), session=session)
if not os.path.exists(tmp_file):
raise Exception(f'ERROR: TIFF file was not downloaded: {tmp_file}')
if os.path.getsize(tmp_file) == 0:
raise Exception(f'ERROR: TIFF file is empty: {tmp_file}')
# check TIFF file validity by opening it
with TiffFile(tmp_file) as tif:
# get TIFF file information
page = tif.pages[0]
tags = page.tags
data = page.asarray()
# attention: rasterio can crash the interpreter on a corrupted TIFF file
# perform this check as the final step
with rio.open_rasterio(tmp_file) as raster:
raster.load()
# TIFF file loaded successfully
if not os.path.exists(tmp_file):
raise Exception(f'ERROR: TIFF file is missing: {tmp_file}')
# move to persistent name
if os.path.exists(tmp_file):
os.rename(tmp_file, tif_file)
# download xml
if os.path.exists(xml_file) and os.path.getsize(xml_file) > 0 \
and os.path.exists(xml_noise_file) and os.path.getsize(xml_noise_file) > 0 \
and os.path.exists(xml_calib_file) and os.path.getsize(xml_calib_file) > 0:
#print (f'pass {xml_file}')
pass
else:
# get TIFF file information
with TiffFile(tif_file) as tif:
page = tif.pages[0]
offset = page.dataoffsets[0]
#print ('offset', offset)
# get the file name
basename = os.path.basename(properties['additionalUrls'][0])
#print ('basename', '=>', basename)
manifest_file = os.path.join(scene_dir, basename)
# remove potentially incomplete manifest file if needed
if os.path.exists(manifest_file):
os.remove(manifest_file)
asf_search.download_urls(urls=properties['additionalUrls'], path=scene_dir, session=session)
if not os.path.exists(manifest_file):
raise Exception(f'ERROR: manifest file was not downloaded: {manifest_file}')
if os.path.getsize(manifest_file) == 0:
raise Exception(f'ERROR: manifest file is empty: {manifest_file}')
# check XML file validity by parsing it
with open(manifest_file, 'r') as file:
xml_content = file.read()
_ = ElementTree.fromstring(xml_content)
# XML file parsed successfully
if not os.path.exists(manifest_file):
raise Exception(f'ERROR: manifest file is missing: {manifest_file}')
# parse xml
with open(manifest_file, 'r') as file:
xml_content = file.read()
# remove manifest file
if os.path.exists(manifest_file):
os.remove(manifest_file)
subswathidx = int(subswath[-1:]) - 1
content = xmltodict.parse(xml_content)['burst']['metadata']['product'][subswathidx]
assert polarization == content['polarisation'], 'ERROR: XML polarization differs from burst polarization'
annotation = content['content']
annotation_burst = annotation['swathTiming']['burstList']['burst'][burstIndex]
start_utc = annotation_burst['azimuthTime']
start_utc_dt = datetime.strptime(start_utc, '%Y-%m-%dT%H:%M:%S.%f')
#print ('start_utc', start_utc, start_utc_dt)
length = int(annotation['swathTiming']['linesPerBurst'])
#print (f'length={length}, burstIndex={burstIndex}')
azimuth_time_interval = annotation['imageAnnotation']['imageInformation']['azimuthTimeInterval']
burst_time_interval = timedelta(seconds=(length - 1) * float(azimuth_time_interval))
stop_utc_dt = start_utc_dt + burst_time_interval
stop_utc = stop_utc_dt.strftime('%Y-%m-%dT%H:%M:%S.%f')
#print ('stop_utc', stop_utc, stop_utc_dt)
# output xml
product = {}
adsHeader = annotation['adsHeader']
adsHeader['startTime'] = start_utc
adsHeader['stopTime'] = stop_utc
adsHeader['imageNumber'] = '001'
product = product | {'adsHeader': adsHeader}
qualityInformation = {'productQualityIndex': annotation['qualityInformation']['productQualityIndex']} |\
{'qualityDataList': annotation['qualityInformation']['qualityDataList']}
product = product | {'qualityInformation': qualityInformation}
generalAnnotation = annotation['generalAnnotation']
# filter annotation['generalAnnotation']['replicaInformationList'] by azimuthTime
product = product | {'generalAnnotation': generalAnnotation}
imageAnnotation = annotation['imageAnnotation']
imageAnnotation['imageInformation']['productFirstLineUtcTime'] = start_utc
imageAnnotation['imageInformation']['productLastLineUtcTime'] = stop_utc
imageAnnotation['imageInformation']['productComposition'] = 'Assembled'
imageAnnotation['imageInformation']['sliceNumber'] = '0'
imageAnnotation['imageInformation']['sliceList'] = {'@count': '0'}
imageAnnotation['imageInformation']['numberOfLines'] = str(length)
# imageStatistics and inputDimensionsList are not updated
product = product | {'imageAnnotation': imageAnnotation}
dopplerCentroid = annotation['dopplerCentroid']
items = filter_azimuth_time(dopplerCentroid['dcEstimateList']['dcEstimate'], start_utc_dt, stop_utc_dt)
dopplerCentroid['dcEstimateList'] = {'@count': len(items), 'dcEstimate': items}
product = product | {'dopplerCentroid': dopplerCentroid}
antennaPattern = annotation['antennaPattern']
items = filter_azimuth_time(antennaPattern['antennaPatternList']['antennaPattern'], start_utc_dt, stop_utc_dt)
antennaPattern['antennaPatternList'] = {'@count': len(items), 'antennaPattern': items}
product = product | {'antennaPattern': antennaPattern}
swathTiming = annotation['swathTiming']
items = filter_azimuth_time(swathTiming['burstList']['burst'], start_utc_dt, start_utc_dt, 1)
assert len(items) == 1, 'ERROR: unexpected bursts count, should be 1'
# add TIFF file information
items[0]['byteOffset'] = offset
swathTiming['burstList'] = {'@count': len(items), 'burst': items}
product = product | {'swathTiming': swathTiming}
geolocationGrid = annotation['geolocationGrid']
items = filter_azimuth_time(geolocationGrid['geolocationGridPointList']['geolocationGridPoint'], start_utc_dt, stop_utc_dt, 1)
# renumber the line numbers for the burst
for item in items: item['line'] = str(int(item['line']) - (length * burstIndex))
geolocationGrid['geolocationGridPointList'] = {'@count': len(items), 'geolocationGridPoint': items}
product = product | {'geolocationGrid': geolocationGrid}
product = product | {'coordinateConversion': annotation['coordinateConversion']}
product = product | {'swathMerging': annotation['swathMerging']}
with open(xml_file, 'w') as file:
file.write(xmltodict.unparse({'product': product}, pretty=True, indent=' '))
# output noise xml
content = xmltodict.parse(xml_content)['burst']['metadata']['noise'][subswathidx]
assert polarization == content['polarisation'], 'ERROR: XML polarization differs from burst polarization'
annotation = content['content']
noise = {}
adsHeader = annotation['adsHeader']
adsHeader['startTime'] = start_utc
adsHeader['stopTime'] = stop_utc
adsHeader['imageNumber'] = '001'
noise = noise | {'adsHeader': adsHeader}
if 'noiseVectorList' in annotation:
noiseRangeVector = annotation['noiseVectorList']
items = filter_azimuth_time(noiseRangeVector['noiseVector'], start_utc_dt, stop_utc_dt)
# renumber the line numbers for the burst
for item in items: item['line'] = str(int(item['line']) - (length * burstIndex))
noiseRangeVector = {'@count': len(items), 'noiseVector': items}
noise = noise | {'noiseVectorList': noiseRangeVector}
if 'noiseRangeVectorList' in annotation:
noiseRangeVector = annotation['noiseRangeVectorList']
items = filter_azimuth_time(noiseRangeVector['noiseRangeVector'], start_utc_dt, stop_utc_dt)
# renumber the line numbers for the burst
for item in items: item['line'] = str(int(item['line']) - (length * burstIndex))
noiseRangeVector = {'@count': len(items), 'noiseRangeVector': items}
noise = noise | {'noiseRangeVectorList': noiseRangeVector}
if 'noiseAzimuthVectorList' in annotation:
noiseAzimuthVector = annotation['noiseAzimuthVectorList']
items = noiseAzimuthVector['noiseAzimuthVector']['line']['#text'].split(' ')
items = [int(item) for item in items]
# fall back to a one-element list so the [-1]/[0] indexing below keeps working
lowers = [item for item in items if item <= burstIndex * length] or [items[0]]
uppers = [item for item in items if item >= (burstIndex + 1) * length - 1] or [items[-1]]
mask = [True if item>=lowers[-1] and item<=uppers[0] else False for item in items]
items = [item - burstIndex * length for item, m in zip(items, mask) if m]
noiseAzimuthVector['noiseAzimuthVector']['firstAzimuthLine'] = lowers[-1] - burstIndex * length
noiseAzimuthVector['noiseAzimuthVector']['lastAzimuthLine'] = uppers[0] - burstIndex * length
noiseAzimuthVector['noiseAzimuthVector']['line'] = {'@count': len(items), '#text': ' '.join([str(item) for item in items])}
items = noiseAzimuthVector['noiseAzimuthVector']['noiseAzimuthLut']['#text'].split(' ')
items = [item for item, m in zip(items, mask) if m]
noiseAzimuthVector['noiseAzimuthVector']['noiseAzimuthLut'] = {'@count': len(items), '#text': ' '.join(items)}
noise = noise | {'noiseAzimuthVectorList': noiseAzimuthVector}
with open(xml_noise_file, 'w') as file:
file.write(xmltodict.unparse({'noise': noise}, pretty=True, indent=' '))
# output calibration xml
content = xmltodict.parse(xml_content)['burst']['metadata']['calibration'][subswathidx]
assert polarization == content['polarisation'], 'ERROR: XML polarization differs from burst polarization'
annotation = content['content']
calibration = {}
adsHeader = annotation['adsHeader']
adsHeader['startTime'] = start_utc
adsHeader['stopTime'] = stop_utc
adsHeader['imageNumber'] = '001'
calibration = calibration | {'adsHeader': adsHeader}
calibration = calibration | {'calibrationInformation': annotation['calibrationInformation']}
calibrationVector = annotation['calibrationVectorList']
items = filter_azimuth_time(calibrationVector['calibrationVector'], start_utc_dt, stop_utc_dt)
# renumber the line numbers for the burst
for item in items: item['line'] = str(int(item['line']) - (length * burstIndex))
calibrationVector = {'@count': len(items), 'calibrationVector': items}
calibration = calibration | {'calibrationVectorList': calibrationVector}
with open(xml_calib_file, 'w') as file:
file.write(xmltodict.unparse({'calibration': calibration}, pretty=True, indent=' '))
# prepare authorized connection
if session is None:
session = self._get_asf_session()
with tqdm(desc='ASF Downloading Bursts Catalog', total=1) as pbar:
results = asf_search.granule_search(bursts_missed)
pbar.update(1)
if n_jobs is None or debug == True:
print ('Note: sequential joblib processing is applied when "n_jobs" is None or "debug" is True.')
joblib_backend = 'sequential'
def download_burst_with_retry(result, basedir, session, retries, timeout_second):
for retry in range(retries):
try:
download_burst(result, basedir, session)
return True
except Exception as e:
print(f'ERROR: download attempt {retry+1} failed for {result}: {e}')
if retry + 1 == retries:
return False
time.sleep(timeout_second)
# download bursts
with self.tqdm_joblib(tqdm(desc='ASF Downloading Sentinel-1 SLC Bursts', total=len(bursts_missed))) as progress_bar:
statuses = joblib.Parallel(n_jobs=n_jobs, backend=joblib_backend)(joblib.delayed(download_burst_with_retry)\
(result, basedir, session, retries=retries, timeout_second=timeout_second) for result in results)
failed_count = statuses.count(False)
if failed_count > 0:
raise Exception(f'Bursts downloading failed for {failed_count} items.')
# parse processed bursts and convert to dataframe
bursts_downloaded = pd.DataFrame(bursts_missed, columns=['burst'])
# return the results in a user-friendly dataframe
return bursts_downloaded
@staticmethod
def search(geometry, startTime=None, stopTime=None, flightDirection=None,
platform='SENTINEL-1', processingLevel='auto', polarization='VV', beamMode='IW'):
import geopandas as gpd
import asf_search
import shapely
# cover the defined time interval; expand date-only strings to full timestamps
if startTime is not None and len(startTime)==10:
startTime=f'{startTime} 00:00:01'
if stopTime is not None and len(stopTime)==10:
stopTime=f'{stopTime} 23:59:59'
if flightDirection == 'D':
flightDirection = 'DESCENDING'
elif flightDirection == 'A':
flightDirection = 'ASCENDING'
# convert to a single geometry
if isinstance(geometry, (gpd.GeoDataFrame, gpd.GeoSeries)):
geometry = geometry.geometry.union_all()
# convert closed linestring to polygon
if geometry.type == 'LineString' and geometry.coords[0] == geometry.coords[-1]:
geometry = shapely.geometry.Polygon(geometry.coords)
if geometry.type == 'Polygon':
# force counterclockwise orientation.
geometry = shapely.geometry.polygon.orient(geometry, sign=1.0)
#print ('wkt', geometry.wkt)
if isinstance(processingLevel, str) and processingLevel=='auto' and platform == 'SENTINEL-1':
processingLevel = asf_search.PRODUCT_TYPE.BURST
# search bursts
results = asf_search.search(
start=startTime,
end=stopTime,
flightDirection=flightDirection,
intersectsWith=geometry.wkt,
platform=platform,
processingLevel=processingLevel,
polarization=polarization,
beamMode=beamMode,
)
return gpd.GeoDataFrame.from_features([product.geojson() for product in results], crs="EPSG:4326")
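# Usage sketch (illustrative AOI and dates; returns a GeoDataFrame of burst footprints):
# import shapely
# bursts = ASF.search(shapely.geometry.Point(-115.5, 32.9).buffer(0.1),
#                     startTime='2015-01-01', stopTime='2015-03-01', flightDirection='D')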
@staticmethod
def plot(bursts):
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
bursts['date'] = pd.to_datetime(bursts['startTime']).dt.strftime('%Y-%m-%d')
bursts['label'] = bursts.apply(lambda rec: f"{rec['flightDirection'].replace('E','')[:3]} {rec['date']} [{rec['pathNumber']}]", axis=1)
unique_labels = sorted(bursts['label'].unique())
unique_paths = sorted(bursts['pathNumber'].astype(str).unique())
colors = {label[-4:-1]: 'orange' if label[0] == 'A' else 'cyan' for i, label in enumerate(unique_labels)}
fig, ax = plt.subplots(figsize=(10, 8))
for label, group in bursts.groupby('label'):
group.plot(ax=ax, edgecolor=colors[label[-4:-1]], facecolor='none', linewidth=1, alpha=1, label=label)
burst_handles = [matplotlib.lines.Line2D([0], [0], color=colors[label[-4:-1]], lw=1, label=label) for label in unique_labels]
aoi_handle = matplotlib.lines.Line2D([0], [0], color='red', lw=1, label='AOI')
handles = burst_handles + [aoi_handle]
ax.legend(handles=handles, loc='upper right')
ax.set_title('Sentinel-1 Burst Footprints')
ax.set_xlabel('Longitude')
ax.set_ylabel('Latitude')
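# Usage sketch (illustrative): plot the footprints found by search()
# ASF.plot(ASF.search(AOI, startTime='2015-01-01', stopTime='2015-03-01'))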
================================================
FILE: pygmtsar/pygmtsar/AWS.py
================================================
# ----------------------------------------------------------------------------
# PyGMTSAR
#
# This file is part of the PyGMTSAR project: https://github.com/mobigroup/gmtsar
#
# Copyright (c) 2024, Alexey Pechnikov
#
# Licensed under the BSD 3-Clause License (see LICENSE for details)
# ----------------------------------------------------------------------------
class AWS():
def download_dem(self, geometry, filename=None, product='1s', provider='GLO', n_jobs=4, joblib_backend='loky', skip_exist=True):
"""
Download Copernicus GLO-30/GLO-90 Digital Elevation Model or NASA SRTM Digital Elevation Model from open AWS storage.
Examples
--------
from pygmtsar import AWS
dem = AWS().download_dem(AOI)
dem.plot.imshow()
AWS().download_dem(S1.scan_slc(DATADIR), 'dem.nc')
"""
print('NOTE: AWS module is removed. Download DEM using Tiles().download_dem(AOI, filename=...).')
================================================
FILE: pygmtsar/pygmtsar/GMT.py
================================================
# ----------------------------------------------------------------------------
# PyGMTSAR
#
# This file is part of the PyGMTSAR project: https://github.com/mobigroup/gmtsar
#
# Copyright (c) 2024, Alexey Pechnikov
#
# Licensed under the BSD 3-Clause License (see LICENSE for details)
# ----------------------------------------------------------------------------
class GMT():
def download_dem(self, geometry, filename=None, product='1s', skip_exist=True):
"""
Download and preprocess SRTM digital elevation model (DEM) data using GMT library.
Parameters
----------
product : str, optional
Product type of the DEM data. Available options are '1s' or 'SRTM1' (1 arcsec ~= 30m, default)
and '3s' or 'SRTM3' (3 arcsec ~= 90m).
Returns
-------
None or Xarray Dataarray
Examples
--------
Download default SRTM1 DEM (~30 meters):
GMT().download_dem()
Download SRTM3 DEM (~90 meters):
GMT().download_dem(product='SRTM3')
Download default SRTM DEM to cover the selected area AOI:
GMT().download_dem(AOI)
Download default SRTM DEM to cover all the scenes:
GMT().download_dem(S1.scan_slc(DATADIR))
Notes
--------
https://docs.generic-mapping-tools.org/6.0/datasets/earth_relief.html
"""
print('NOTE: GMT module is removed. Download DEM using Tiles().download_dem(AOI, filename=...).')
def download_landmask(self, geometry, filename=None, product='1s', resolution='f', skip_exist=True):
"""
Download the landmask and save as NetCDF file.
Parameters
----------
product : str, optional
Available options are '1s' (1 arcsec ~= 30m, default) and '3s' (3 arcsec ~= 90m).
Examples
--------
from pygmtsar import GMT
landmask = GMT().download_landmask(S1.scan_slc(DATADIR))
Notes
-----
This method downloads the landmask using GMT's local data or server.
"""
print('NOTE: GMT module is removed. Download landmask using Tiles().download_landmask(AOI, filename=...).')
================================================
FILE: pygmtsar/pygmtsar/IO.py
================================================
# ----------------------------------------------------------------------------
# PyGMTSAR
#
# This file is part of the PyGMTSAR project: https://github.com/mobigroup/gmtsar
#
# Copyright (c) 2023, Alexey Pechnikov
#
# Licensed under the BSD 3-Clause License (see LICENSE for details)
# ----------------------------------------------------------------------------
from .datagrid import datagrid
from .tqdm_dask import tqdm_dask
class IO(datagrid):
# processing directory
basedir = '.'
def _glob_re(self, pathname):
import os
import re
filenames = filter(re.compile(pathname).match, os.listdir(self.basedir))
return sorted([os.path.join(self.basedir, filename) for filename in filenames])
def dump(self, to_path=None):
"""
Dump Stack object state to a pickle file (stack.pickle in the processing directory by default).
Parameters
----------
to_path : str, optional
Path to the output dump file. If not provided, the default dump file in the processing directory is used.
Returns
-------
None
Examples
--------
Dump the current state to the default dump file in the processing directory:
stack.dump()
Notes
-----
This method serializes the state of the Stack object and saves it to a pickle file. The pickle file can be used to
restore the Stack object with its processed data and configuration. By default, the dump file is named "stack.pickle"
and is saved in the processing directory. An alternative file path can be provided using the `to_path` parameter.
"""
import pickle
import os
if to_path is None:
stack_pickle = os.path.join(self.basedir, 'stack.pickle')
else:
if os.path.isdir(to_path):
stack_pickle = os.path.join(to_path, 'stack.pickle')
else:
stack_pickle = to_path
print (f'NOTE: save state to file {stack_pickle}')
pickle.dump(self, open(stack_pickle, 'wb'))
return
@staticmethod
def restore(from_path):
"""
Restore Stack object state from a pickle file (stack.pickle in the processing directory by default).
Parameters
----------
from_path : str
Path to the input dump file.
Returns
-------
Stack
The restored Stack object.
Examples
--------
Restore the state from the default dump file in the processing directory:
stack = Stack.restore('.')
Notes
-----
This static method restores the state of a Stack object from a pickle file. The pickle file should contain the
serialized state of the Stack object, including its processed data and configuration. By default, the method assumes
the input file is named "stack.pickle" and is located in the processing directory. An alternative file path can be
provided using the `from_path` parameter. The method returns the restored Stack object.
"""
import pickle
import os
if os.path.isdir(from_path):
stack_pickle = os.path.join(from_path, 'stack.pickle')
else:
stack_pickle = from_path
print (f'NOTE: load state from file {stack_pickle}')
with open(stack_pickle, 'rb') as f:
return pickle.load(f)
def backup(self, backup_dir, copy=False, debug=False):
"""
Backup framed Stack scenes, orbits, DEM, and landmask files to build a minimal reproducible dataset.
Parameters
----------
backup_dir : str
The backup directory where the files will be copied.
copy : bool, optional
If True, copy the scene and orbit files, keeping the originals in the work directory.
If False, the files are moved to the backup directory. Default is False.
debug : bool, optional
Flag indicating whether to print debug information. Default is False.
Returns
-------
None
Examples
--------
Backup the files to the specified directory:
stack.backup('backup')
Open the backup for the reproducible run by defining it as a new data directory:
stack = Stack('backup', 'backup/DEM_WGS84.nc', 'raw')
Notes
-----
This method backs up the framed Stack scenes, orbits, DEM, and landmask files to a specified backup directory.
It provides a way to create a minimal reproducible dataset by preserving the necessary files for processing.
The method creates the backup directory if it does not exist. By default, the method moves the scene and orbit files
to the backup directory, effectively removing them from the work directory. The DEM and landmask files are always
copied to the backup directory. If the `copy` parameter is set to True, the scene and orbit files will be copied
instead of moved. Use caution when setting `copy` to True as it can result in duplicated files and consume
additional storage space. The method also updates the Stack object's dataframe to mark the removed files as empty.
"""
import os
import shutil
os.makedirs(backup_dir, exist_ok=True)
# save the geometry
filename = os.path.join(backup_dir, 'stack.geojson')
self.df[['datetime', 'orbit','mission','polarization', 'subswath','geometry']].to_file(filename, driver='GeoJSON')
# this optional file holds the dumped state; copy it if it exists
# an auto-generated file cannot be a symlink, but a user-defined symlink target should be copied
filename = os.path.join(self.basedir, 'stack.pickle')
if os.path.exists(filename):
if debug:
print ('DEBUG: copy', filename, backup_dir)
shutil.copy2(filename, backup_dir, follow_symlinks=True)
# these files are required to continue the processing; copy them only, do not remove
filenames = [self.dem_filename, self.landmask_filename]
for filename in filenames:
# DEM and landmask may be undefined
if filename is None:
continue
if debug:
print ('DEBUG: copy', filename, backup_dir)
shutil.copy2(filename, backup_dir, follow_symlinks=True)
# these files are large and are not required to continue the processing
filenames = []
for record in self.df.itertuples():
for filename in [record.datapath, record.metapath, record.orbitpath]:
filenames += filename if isinstance(filename, list) else [filename]
for filename in filenames:
# orbit files may be undefined
if filename is None:
continue
# copy and delete the original later to prevent cross-device links issues
if debug:
print ('DEBUG: copy', filename, backup_dir)
shutil.copy2(filename, backup_dir, follow_symlinks=True)
if not copy and self.basedir == os.path.dirname(filename):
# when copy is not needed then delete files in work directory only
if debug:
print ('DEBUG: remove', filename)
os.remove(filename)
if not copy:
# mark as empty all the removed files
for col in ['datapath','metapath','orbitpath']:
self.df[col] = None
return
def get_filename(self, name, add_subswath=False):
import os
if add_subswath:
subswath = self.get_subswath()
prefix = f'F{subswath}_'
else:
prefix = ''
filename = os.path.join(self.basedir, f'{prefix}{name}.grd')
return filename
def get_filenames(self, pairs, name, add_subswath=False):
"""
Get the filenames of the data grids. The filenames are determined by the pairs and name parameters,
with an optional subswath prefix.
Parameters
----------
pairs : np.ndarray or pd.DataFrame
An array of dates, an array of date pairs, or a DataFrame of pairs used to build the filenames.
name : str
The name of the grid to be opened.
add_subswath : bool, optional
Whether to add the subswath as a prefix to the filename. Default is False.
Returns
-------
str or list of str
The filename or a list of filenames of the grids.
"""
import pandas as pd
import numpy as np
import os
if add_subswath:
subswath = self.get_subswath()
prefix = f'F{subswath}_'
else:
prefix = ''
if isinstance(pairs, pd.DataFrame):
# convert to standalone DataFrame first
pairs = self.get_pairs(pairs)[['ref', 'rep']].astype(str).values
else:
pairs = np.asarray(pairs)
filenames = []
if len(pairs.shape) == 1:
# read all the grids from files
for date in sorted(pairs):
filename = os.path.join(self.basedir, f'{prefix}{name}_{date}.grd'.replace('-',''))
filenames.append(filename)
elif len(pairs.shape) == 2:
# read all the grids from files
for pair in pairs:
filename = os.path.join(self.basedir, f'{prefix}{name}_{pair[0]}_{pair[1]}.grd'.replace('-',''))
filenames.append(filename)
return filenames
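# Illustrative sketch of the naming scheme above (the 'phase' and 'intf' grid
# names, the dates, and the 'raw' basedir are assumptions): a 1D array of dates
# maps to per-date grids, a 2D array of pairs maps to per-pair grids, and
# dashes are stripped from the dates:
# stack.get_filenames(['2018-03-23'], 'phase')
# # -> ['raw/phase_20180323.grd']
# stack.get_filenames([['2018-02-21', '2018-03-11']], 'intf')
# # -> ['raw/intf_20180221_20180311.grd']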
# # 2.5e-07 is Sentinel-1 scale factor
# def open_data(self, dates=None, scale=2.5e-07, debug=False):
# import xarray as xr
# import pandas as pd
# import numpy as np
# import os
#
# if debug:
# print ('DEBUG: open_data: apply scale:', scale)
#
# if dates is None:
# dates = self.df.index.values
# #print ('dates', dates)
#
# filenames = [self.PRM(date).filename[:-4] + '.grd' for date in dates]
# #print ('filenames', filenames)
# ds = xr.open_mfdataset(
# filenames,
# engine=self.netcdf_engine,
# chunks=self.chunksize,
# parallel=True,
# concat_dim='date',
# combine='nested'
# ).assign(date=pd.to_datetime(dates)).rename({'a': 'y', 'r': 'x'})
# if scale is None:
# # there is no complex int16 datatype, so return two variables for real and imag parts
# return ds
# # scale and return as complex values
# ds_scaled = (scale*(ds.re.astype(np.float32) + 1j*ds.im.astype(np.float32))).assign_attrs(ds.attrs).rename('data')
# del ds
# # zero in np.int16 type means NODATA
# return ds_scaled.where(ds_scaled != 0)
# 2.5e-07 is Sentinel-1 scale factor
# use original PRM files to get binary subswath file locations
def open_data(self, dates=None, subswath=None, scale=2.5e-07, debug=False):
import xarray as xr
import pandas as pd
import numpy as np
from scipy import constants
import os
if debug:
print ('DEBUG: open_data: apply scale:', scale)
if dates is None:
#dates = self.df.index.values
dates = np.unique(self.df.index.values)
#print ('dates', dates)
subswaths = self.get_subswaths()
if not isinstance(subswaths, (str, int)):
subswaths = ''.join(map(str, subswaths))
#print ('subswaths', subswaths)
#offsets = {'bottoms': bottoms, 'lefts': lefts, 'rights': rights, 'bottom': minh, 'extent': [maxy, maxx], 'ylims': ylims, 'xlims': xlims}
offsets = self.prm_offsets(debug=debug)
#print ('offsets', offsets)
if len(subswaths) == 1:
# stack single subswath
stack = []
shape = offsets['extent']
for date in dates:
prm = self.PRM(date, subswath=int(subswaths))
# read SLC applying the requested scale
slc = prm.read_SLC_int(scale=scale, shape=shape)
stack.append(slc.assign_coords(date=date))
if shape is None:
shape = (slc.y.size, slc.x.size)
del slc, prm
else:
maxy, maxx = offsets['extent']
minh = offsets['bottom']
# merge subswaths
stack = []
for date in dates:
slcs = []
prms = []
# for single subswath accumulate offset in scene coordinates
offsetx = 0
for sw, bottom, left, right, ylim, xlim in zip(subswaths,
offsets['bottoms'], offsets['lefts'], offsets['rights'], offsets['ylims'], offsets['xlims']):
#print (date, subswath)
if subswath is not None and not str(subswath) == sw:
offsetx += right - left
continue
prm = self.PRM(date, subswath=int(sw))
# read SLC applying the requested scale
slc = prm.read_SLC_int(scale=scale, shape=(ylim, xlim))
slc = slc.isel(x=slice(left if subswath is None else None, right if subswath is None else None))\
.assign_coords(y=slc.y + bottom)
if subswath is not None:
slc = slc.assign_coords(x=slc.x + offsetx - left)
slcs.append(slc)
prms.append(prm)
if subswath is None:
# check and merge SLCs, use zero fill for np.int16 datatype
slc = xr.concat(slcs, dim='x', fill_value=0).assign_coords(x=0.5 + np.arange(maxx))
if debug:
print ('assert slc.y.size == maxy', slc.y.size, maxy)
assert slc.y.size == maxy, 'Incorrect output grid azimuth dimension size'
if debug:
print ('assert slc.x.size == maxx', slc.x.size, maxx)
assert slc.x.size == maxx, 'Incorrect output grid range dimension size'
del slcs
stack.append(slc.assign_coords(date=date))
del slc
# DEM extent in radar coordinates, merged reference PRM required
#print ('minx, miny, maxx, maxy', minx, miny, maxx, maxy)
extent_ra = np.round(self.get_extent_ra().bounds).astype(int)
#print ('extent_ra', extent_ra)
# minx, miny, maxx, maxy = extent_ra
ds = xr.concat(stack, dim='date').assign(date=pd.to_datetime(dates))\
.sel(y=slice(extent_ra[1], extent_ra[3]), x=slice(extent_ra[0], extent_ra[2])) \
.chunk({'y': self.chunksize, 'x': self.chunksize})
del stack
if scale is None:
# there is no complex int16 datatype, so return two variables for real and imag parts
return ds
# scale and return as complex values
ds_complex = (ds.re.astype(np.float32) + 1j*ds.im.astype(np.float32))
del ds
# zero in np.int16 type means NODATA
return ds_complex.where(ds_complex != 0).rename('data')
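# Hedged usage sketch (dates are illustrative): open the complex SLC stack and
# derive intensity; with scale=None the raw int16 're' and 'im' variables are
# returned instead of a single complex-valued 'data' array:
# import numpy as np
# data = stack.open_data(['2022-06-16', '2022-06-28'])
# intensity = np.abs(data)**2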
# def open_geotif(self, dates=None, subswath=None, intensity=False, chunksize=None):
# """
# tiffs = stack.open_data_geotif(['2022-06-16', '2022-06-28'], intensity=True)
# """
# import xarray as xr
# import rioxarray as rio
# import pandas as pd
# import numpy as np
# # from GMTSAR code
# DFACT = 2.5e-07
#
# if chunksize is None:
# chunksize = self.chunksize
#
# if subswath is None:
# subswaths = self.get_subswaths()
# else:
# subswaths = [subswath]
#
# stack = []
# for swath in self.get_subswaths():
# if dates is None:
# dates = self.df[self.df['subswath']==swath].index.values
# #print ('dates', dates)
# tiffs = self.df[(self.df['subswath']==swath)&(self.df.index.isin(dates))].datapath.values
# tiffs = [rio.open_rasterio(tif, chunks=chunksize)[0] for tif in tiffs]
# # build stack
# tiffs = xr.concat(tiffs, dim='date')
# tiffs['date'] = pd.to_datetime(dates)
# # 2 and 4 multipliers to have the same values as in SLC
# if intensity:
# stack.append((2*DFACT*np.abs(tiffs))**2)
# else:
# stack.append(2*DFACT*tiffs)
#
# return stacks if subswath is None else stacks[0]
def open_cube(self, name):
"""
Opens an xarray 2D/3D Dataset or DataArray from a NetCDF file.
This function takes the name of the grid to be opened, reads the NetCDF file, and re-chunks
the dataset using the default chunk size configured on the object.
The 'date' and 'pair' dimensions are always chunked with a size of 1.
Parameters
----------
name : str
The name of the model file to be opened.
Returns
-------
xarray.Dataset
Xarray Dataset read from the specified NetCDF file.
"""
import xarray as xr
import pandas as pd
import numpy as np
import os
filename = self.get_filename(name)
assert os.path.exists(filename), f'ERROR: The NetCDF file is missing: {filename}'
# Workaround: open the dataset without chunking
data = xr.open_dataset(filename, engine=self.netcdf_engine)
if 'stack' in data.dims:
if 'y' in data.coords and 'x' in data.coords:
multi_index_names = ['y', 'x']
elif 'lat' in data.coords and 'lon' in data.coords:
multi_index_names = ['lat', 'lon']
multi_index = pd.MultiIndex.from_arrays([data[dim].values for dim in multi_index_names], names=multi_index_names)
data = data.assign_coords(stack=multi_index).set_index({'stack': multi_index_names})
chunksize = self.chunksize1d
else:
chunksize = self.chunksize
# set the proper chunk sizes
chunks = {dim: 1 if dim in ['pair', 'date'] else chunksize for dim in data.dims}
data = data.chunk(chunks)
# attributes are empty when a dataarray is presented as a dataset
# revert dataarray converted to dataset
data_vars = list(data.data_vars)
if len(data_vars) == 1 and 'dataarray' in data.attrs:
assert data.attrs['dataarray'] == data_vars[0]
data = data[data_vars[0]]
# convert string dates to dates
for dim in ['date', 'ref', 'rep']:
if dim in data.dims:
data[dim] = pd.to_datetime(data[dim])
# restore missed coordinates
for dim in ['y', 'x', 'lat', 'lon']:
if dim not in data.coords \
and f'start_{dim}' in data.attrs \
and f'step_{dim}' in data.attrs \
and f'stop_{dim}' in data.attrs \
and f'size_{dim}' in data.attrs:
start = data.attrs[f'start_{dim}']
step = data.attrs[f'step_{dim}']
stop = data.attrs[f'stop_{dim}']
size = data.attrs[f'size_{dim}']
coords = np.arange(start, stop+step/2, step)
assert coords.size == size, 'Restored dimension has invalid size'
#print (dim, start, step, stop, coords)
data = data.assign_coords({dim: coords})
# remove the system attributes
del data.attrs[f'start_{dim}']
del data.attrs[f'step_{dim}']
del data.attrs[f'stop_{dim}']
del data.attrs[f'size_{dim}']
return data
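# Hedged usage sketch (the 'intf90m' grid name is an assumption): reopen a cube
# saved by save_cube(); string dates are parsed back to timestamps and
# coordinates dropped at save time are rebuilt from the start_*/step_*/stop_*/size_* attributes:
# intf90m = stack.open_cube('intf90m')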
def sync_cube(self, data, name=None, caption='Syncing NetCDF 2D/3D Dataset'):
import xarray as xr
if name is None and isinstance(data, xr.DataArray):
assert data.name is not None, 'Define data name or use "name" argument for the NetCDF filename'
name = data.name
elif name is None:
raise ValueError('Specify name for the output NetCDF file')
self.save_cube(data, name, caption)
return self.open_cube(name)
def save_cube(self, data, name=None, caption='Saving NetCDF 2D/3D Dataset'):
"""
Save a lazy and not lazy 2D/3D xarray Dataset or DataArray to a NetCDF file.
The 'date' or 'pair' dimension is always chunked with a size of 1.
Parameters
----------
data : xarray.Dataset or xarray.DataArray
The model to be saved.
name : str
The text name for the output NetCDF file.
caption: str
The text caption for the saving progress bar.
Returns
-------
None
Examples
-------
stack.save_cube(intf90m, 'intf90m') # save lazy 3d dataset
stack.save_cube(intf90m.phase, 'intf90m') # save lazy 3d dataarray
stack.save_cube(intf90m.isel(pair=0), 'intf90m') # save lazy 2d dataset
stack.save_cube(intf90m.isel(pair=0).phase, 'intf90m') # save lazy 2d dataarray
stack.save_cube(intf90m.compute(), 'intf90m') # save 3d dataset
stack.save_cube(intf90m.phase.compute(), 'intf90m') # save 3d dataarray
stack.save_cube(intf90m.isel(pair=0).compute(), 'intf90m') # save 2d dataset
stack.save_cube(intf90m.isel(pair=0).phase.compute(), 'intf90m') # save 2d dataarray
"""
import xarray as xr
import pandas as pd
import dask
import os
import warnings
# suppress Dask warning "RuntimeWarning: invalid value encountered in divide"
warnings.filterwarnings('ignore')
warnings.filterwarnings('ignore', module='dask')
warnings.filterwarnings('ignore', module='dask.core')
import logging
# suppress Dask distributed "Restarting worker" and similar nanny messages
logging.getLogger('distributed.nanny').setLevel(logging.ERROR)
# disable "distributed.utils_perf - WARNING - full garbage collections ..."
try:
from dask.distributed import utils_perf
utils_perf.disable_gc_diagnosis()
except ImportError:
from distributed.gc import disable_gc_diagnosis
disable_gc_diagnosis()
if name is None and isinstance(data, xr.DataArray):
assert data.name is not None, 'Define data name or use "name" argument for the NetCDF filename'
name = data.name
elif name is None:
raise ValueError('Specify name for the output NetCDF file')
chunksize = None
if 'stack' in data.dims and isinstance(data.coords['stack'].to_index(), pd.MultiIndex):
# replace multiindex by sequential numbers 0,1,...
data = data.reset_index('stack')
# single-dimensional data compression required
chunksize = self.netcdf_chunksize1d
for dim in ['y', 'x', 'lat', 'lon']:
if dim in data.dims:
# use attributes to hold grid spacing to prevent xarray Dask saving issues
data = data.drop(dim, dim=None).assign_attrs({
f'start_{dim}': data[dim].values[0],
f'step_{dim}': data[dim].diff(dim).values[0],
f'stop_{dim}': data[dim].values[-1],
f'size_{dim}': data[dim].size
})
if isinstance(data, xr.DataArray):
if data.name is None:
data = data.rename(name)
data = data.to_dataset().assign_attrs({'dataarray': data.name})
is_dask = isinstance(data[list(data.data_vars)[0]].data, dask.array.Array)
encoding = {varname: self._compression(data[varname].shape, chunksize=chunksize) for varname in data.data_vars}
#print ('save_cube encoding', encoding)
#print ('is_dask', is_dask, 'encoding', encoding)
# save to NetCDF file
filename = self.get_filename(name)
if os.path.exists(filename):
os.remove(filename)
delayed = data.to_netcdf(filename,
engine=self.netcdf_engine,
encoding=encoding,
compute=not is_dask)
if is_dask:
tqdm_dask(result := dask.persist(delayed), desc=caption)
# cleanup - sometimes NetCDF write handlers are not closed immediately and block read access
del delayed, result
import gc; gc.collect()
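# Minimal sketch of the coordinate round-trip used above (synthetic values):
# an axis like y = 10.0, 10.5, ..., 20.0 is stored as four scalar attributes
# and rebuilt with np.arange(start, stop + step/2, step), so the reconstructed
# axis has exactly size_y points:
# import numpy as np
# attrs = {'start_y': 10.0, 'step_y': 0.5, 'stop_y': 20.0, 'size_y': 21}
# y = np.arange(attrs['start_y'], attrs['stop_y'] + attrs['step_y']/2, attrs['step_y'])
# assert y.size == attrs['size_y']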
def delete_cube(self, name):
import os
filename = self.get_filename(name)
#print ('filename', filename)
if os.path.exists(filename):
os.remove(filename)
def sync_stack(self, data, name=None, caption='Saving 2D Stack', queue=None, timeout=300):
import xarray as xr
if name is None and isinstance(data, xr.DataArray):
assert data.name is not None, 'Define data name or use "name" argument for the NetCDF filenames'
name = data.name
elif name is None:
raise ValueError('Specify name for the output NetCDF files')
self.delete_stack(name)
self.save_stack(data, name, caption, queue, timeout)
return self.open_stack(name)
def open_stack(self, name, stack=None):
"""
Examples:
stack.open_stack('data')
stack.open_stack('data', ['2018-03-23'])
stack.open_stack('data', ['2018-03-23', '2018-03-11'])
stack.open_stack('phase15m')
stack.open_stack('intf90m',[['2018-02-21','2018-03-11']])
stack.open_stack('intf90m', stack.get_pairs([['2018-02-21','2018-03-11']]))
"""
import xarray as xr
import pandas as pd
import numpy as np
import glob
if stack is None:
# look for all stack files
#filenames = self.get_filenames(['*'], name)[0]
#filenames = self.get_filename(f'{name}_????????_????????')
# like data_20180323.grd or intf60m_20230114_20230219.grd
filenames = self._glob_re(name + '_[0-9]{8}(_[0-9]{8})*.grd')
elif isinstance(stack, (list, tuple, np.ndarray)) and len(np.asarray(stack).shape) == 1:
# dates
filenames = self.get_filenames(np.asarray(stack), name)
else:
# pairs
filenames = self.get_filenames(stack, name)
#print ('filenames', filenames)
data = xr.open_mfdataset(
filenames,
engine=self.netcdf_engine,
parallel=True,
concat_dim='stackvar',
combine='nested'
)
if 'stack' in data.dims:
if 'y' in data.coords and 'x' in data.coords:
multi_index_names = ['y', 'x']
elif 'lat' in data.coords and 'lon' in data.coords:
multi_index_names = ['lat', 'lon']
multi_index = pd.MultiIndex.from_arrays([data[dim].values for dim in multi_index_names], names=multi_index_names)
data = data.assign_coords(stack=multi_index).set_index({'stack': multi_index_names}).chunk({'stack': self.chunksize1d})
else:
dims = list(data.dims)
data = data.chunk({dims[0]: 1, dims[1]: self.chunksize, dims[2]: self.chunksize})
# revert dataarray converted to dataset
data_vars = list(data.data_vars)
if len(data_vars) == 1 and 'dataarray' in data.attrs:
assert data.attrs['dataarray'] == data_vars[0]
data = data[data_vars[0]]
# attributes are empty when a dataarray is presented as a dataset, convert it first
# restore missed coordinates
for dim in ['y', 'x', 'lat', 'lon']:
if dim not in data.coords \
and f'start_{dim}' in data.attrs \
and f'step_{dim}' in data.attrs \
and f'stop_{dim}' in data.attrs \
and f'size_{dim}' in data.attrs:
start = data.attrs[f'start_{dim}']
step = data.attrs[f'step_{dim}']
stop = data.attrs[f'stop_{dim}']
size = data.attrs[f'size_{dim}']
coords = np.arange(start, stop+step/2, step)
assert coords.size == size, 'Restored dimension has invalid size'
#print (dim, start, step, stop, coords)
data = data.assign_coords({dim: coords})
# remove the system attributes
del data.attrs[f'start_{dim}']
del data.attrs[f'step_{dim}']
del data.attrs[f'stop_{dim}']
del data.attrs[f'size_{dim}']
for dim in ['pair', 'date']:
if dim in data.coords:
if data[dim].shape == () or 'stack' in data.dims:
if data[dim].shape == ():
data = data.assign_coords({dim: ('stackvar', [data[dim].values])})
data = data.rename({'stackvar': dim}).set_index({dim: dim})
else:
data = data.swap_dims({'stackvar': dim})
# convert string (or already timestamp) dates to dates
for dim in ['date', 'ref', 'rep']:
if dim in data.dims:
if not data[dim].shape == ():
data[dim] = pd.to_datetime(data[dim])
else:
data[dim].values = pd.to_datetime(data[dim].values)
return data
# # simple sequential realization is not suitable for a large stack
# def save_stack(self, data, name, caption='Saving 2D grids stack'):
# import xarray as xr
# import dask
# import os
# import warnings
# # suppress Dask warning "RuntimeWarning: invalid value encountered in divide"
# warnings.filterwarnings('ignore')
# warnings.filterwarnings('ignore', module='dask')
# warnings.filterwarnings('ignore', module='dask.core')
#
# if isinstance(data, xr.Dataset):
# stackvar = data[list(data.data_vars)[0]].dims[0]
# is_dask = isinstance(data[list(data.data_vars)[0]].data, dask.array.Array)
# elif isinstance(data, xr.DataArray):
# stackvar = data.dims[0]
# is_dask = isinstance(data.data, dask.array.Array)
# else:
# raise Exception('Argument grid is not xr.Dataset or xr.DataArray object')
# #print ('is_dask', is_dask, 'stackvar', stackvar)
# stacksize = data[stackvar].size
#
# for dim in ['y', 'x', 'lat', 'lon']:
# if dim in data.dims:
# # use attributes to hold grid spacing to prevent xarray Dask saving issues
# data = data.drop(dim, dim=None).assign_attrs({
# f'start_{dim}': data[dim].values[0],
# f'step_{dim}': data[dim].diff(dim).values[0],
# f'stop_{dim}': data[dim].values[-1],
# f'size_{dim}': data[dim].size
# })
#
# delayeds = []
# digits = len(str(stacksize))
# for ind in range(stacksize):
# data_slice = data.isel({stackvar: ind})
# if stackvar == 'date':
# stackval = str(data_slice[stackvar].dt.date.values)
# else:
# stackval = data_slice[stackvar].item().split(' ')
# #print ('stackval', stackval)
# # save to NetCDF file
# filename = self.get_filenames([stackval], name)[0]
# #print ('filename', filename)
# if os.path.exists(filename):
# os.remove(filename)
# if isinstance(data, xr.Dataset):
# encoding = {varname: self._compression(data_slice[varname].shape) for varname in data.data_vars}
# elif isinstance(data, xr.DataArray):
# encoding = {data.name: self._compression(data_slice.shape)}
# else:
# raise Exception('Argument grid is not xr.Dataset or xr.DataArray object')
#
# delayed = data_slice.to_netcdf(filename,
# encoding=encoding,
# engine=self.netcdf_engine,
# compute=not is_dask)
# if is_dask:
# delayeds.append(delayed)
# del delayed
# del data_slice
#
# if is_dask:
# tqdm_dask(result := dask.persist(*delayeds), desc=caption)
# # cleanup - sometimes writing NetCDF handlers are not closed immediately and block reading access
# del delayeds, result
# import gc; gc.collect()
# use save_mfdataset
def save_stack(self, data, name, caption='Saving 2D Stack', queue=None, timeout=None):
import numpy as np
import xarray as xr
import pandas as pd
import dask
import os
from dask.distributed import get_client
import warnings
# suppress Dask warning "RuntimeWarning: invalid value encountered in divide"
warnings.filterwarnings('ignore')
warnings.filterwarnings('ignore', module='dask')
warnings.filterwarnings('ignore', module='dask.core')
# Filter out Dask "Restarting worker" warnings
warnings.filterwarnings("ignore", module="distributed.nanny")
import logging
# Suppress Dask "Restarting worker" warnings
logging.getLogger('distributed.nanny').setLevel(logging.ERROR)
# disable "distributed.utils_perf - WARNING - full garbage collections ..."
try:
from dask.distributed import utils_perf
utils_perf.disable_gc_diagnosis()
except ImportError:
from distributed.gc import disable_gc_diagnosis
disable_gc_diagnosis()
# Dask cluster client
client = get_client()
if isinstance(data, xr.Dataset):
stackvar = data[list(data.data_vars)[0]].dims[0]
is_dask = isinstance(data[list(data.data_vars)[0]].data, dask.array.Array)
elif isinstance(data, xr.DataArray):
stackvar = data.dims[0]
is_dask = isinstance(data.data, dask.array.Array)
else:
raise Exception('Argument grid is not xr.Dataset or xr.DataArray object')
#print ('is_dask', is_dask, 'stackvar', stackvar)
stacksize = data[stackvar].size
if queue is None:
queue = self.netcdf_queue
if queue is None:
# process all the stack items in a single operation
queue = stacksize
if 'stack' in data.dims and isinstance(data.coords['stack'].to_index(), pd.MultiIndex):
# replace multiindex by sequential numbers 0,1,...
data = data.reset_index('stack')
for dim in ['y', 'x', 'lat', 'lon']:
if dim in data.dims:
# use attributes to hold grid spacing to prevent xarray Dask saving issues
data = data.drop(dim, dim=None).assign_attrs({
f'start_{dim}': data[dim].values[0],
f'step_{dim}': data[dim].diff(dim).values[0],
f'stop_{dim}': data[dim].values[-1],
f'size_{dim}': data[dim].size
})
if isinstance(data, xr.DataArray):
data = data.to_dataset().assign_attrs({'dataarray': data.name})
encoding = {varname: self._compression(data[varname].shape[1:]) for varname in data.data_vars}
#print ('save_stack encoding', encoding)
# Applying iterative processing to prevent Dask scheduler deadlocks.
counter = 0
digits = len(str(stacksize))
# Splitting all the pairs into chunks, each containing approximately queue pairs.
n_chunks = stacksize // queue if stacksize > queue else 1
for chunk in np.array_split(range(stacksize), n_chunks):
dss = [data.isel({stackvar: ind}) for ind in chunk]
if stackvar == 'date':
stackvals = [ds[stackvar].dt.date.values for ds in dss]
else:
stackvals = [ds[stackvar].item().split(' ') for ds in dss]
# save to NetCDF file
filenames = self.get_filenames(stackvals, name)
#[os.remove(filename) for filename in filenames if os.path.exists(filename)]
delayeds = xr.save_mfdataset(dss,
filenames,
encoding=encoding,
engine=self.netcdf_engine,
compute=not is_dask)
# process lazy chunk
if is_dask:
if n_chunks > 1:
chunk_caption = f'{caption}: {(counter+1):0{digits}}...{(counter+len(chunk)):0{digits}} from {stacksize}'
else:
chunk_caption = caption
tqdm_dask(result := dask.persist(delayeds), desc=chunk_caption)
del delayeds, result
# cleanup - sometimes NetCDF write handlers are not closed immediately and block read access
import gc; gc.collect()
# cleanup - release all workers memory; call the garbage collector first to prevent heartbeat errors
if timeout is not None:
client.restart(timeout=timeout, wait_for_workers=True)
# # more granular control
# n_workers = len(client.nthreads())
# client.restart(wait_for_workers=False)
# client.wait_for_workers(n_workers, timeout=timeout)
# update chunks counter
counter += len(chunk)
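# Hedged usage sketch (the `intf` dataset and the 'intf90m' name are
# assumptions): save a lazy stack in queued chunks to avoid Dask scheduler
# deadlocks, restarting workers between chunks to release their memory:
# stack.save_stack(intf, 'intf90m', queue=16, timeout=300)
# intf = stack.open_stack('intf90m')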
# # alternative realization
# def save_stack(self, data, name, caption='Saving 2D Stack', queue=50):
# import numpy as np
# import xarray as xr
# import dask
# import os
# from dask.distributed import get_client
# import warnings
# # suppress Dask warning "RuntimeWarning: invalid value encountered in divide"
# warnings.filterwarnings('ignore')
# warnings.filterwarnings('ignore', module='dask')
# warnings.filterwarnings('ignore', module='dask.core')
# # Filter out Dask "Restarting worker" warnings
# warnings.filterwarnings("ignore", category=UserWarning, module="distributed.nanny")
# import logging
# # Suppress Dask "Restarting worker" warnings
# logging.getLogger('distributed.nanny').setLevel(logging.ERROR)
# # disable "distributed.utils_perf - WARNING - full garbage collections ..."
# from dask.distributed import utils_perf
# utils_perf.disable_gc_diagnosis()
#
# if isinstance(data, xr.Dataset):
# stackvar = data[list(data.data_vars)[0]].dims[0]
# is_dask = isinstance(data[list(data.data_vars)[0]].data, dask.array.Array)
# elif isinstance(data, xr.DataArray):
# stackvar = data.dims[0]
# is_dask = isinstance(data.data, dask.array.Array)
# else:
# raise Exception('Argument grid is not xr.Dataset or xr.DataArray object')
# #print ('is_dask', is_dask, 'stackvar', stackvar)
# stacksize = data[stackvar].size
#
# if queue is None:
# # process all the stack items in a single operation
# queue = stacksize
#
# for dim in ['y', 'x', 'lat', 'lon']:
# if dim in data.dims:
# # use attributes to hold grid spacing to prevent xarray Dask saving issues
# data = data.drop(dim, dim=None).assign_attrs({
# f'start_{dim}': data[dim].values[0],
# f'step_{dim}': data[dim].diff(dim).values[0],
# f'stop_{dim}': data[dim].values[-1],
# f'size_{dim}': data[dim].size
# })
#
# if isinstance(data, xr.Dataset):
# encoding = {varname: self._compression(data[varname].shape[1:]) for varname in data.data_vars}
# elif isinstance(data, xr.DataArray):
# encoding = {data.name: self._compression(data.shape[1:])}
# else:
# raise Exception('Argument data is not 3D xr.Dataset or xr.DataArray object')
# #print ('encoding', encoding)
#
# # Applying iterative processing to prevent Dask scheduler deadlocks.
# counter = 0
# digits = len(str(stacksize))
# # Splitting all the pairs into chunks, each containing approximately queue pairs.
# n_chunks = stacksize // queue if stacksize >= queue else 1
# for chunk in np.array_split(range(stacksize), n_chunks):
# stack = [data.isel({stackvar: ind}) for ind in chunk]
# assert len(stack) == len(chunk), 'ERROR: incorrect queue size'
# if stackvar == 'date':
# stackvals = [ds[stackvar].dt.date.values for ds in stack]
# else:
# stackvals = [ds[stackvar].item().split(' ') for ds in stack]
# # NetCDF files
# filenames = self.get_filenames(stackvals, name)
#
# delayeds = []
# # prepare lazy chunk or process not lazy chunk
# for data_slice, filename in zip(stack, filenames):
# #print ('filename', filename)
# if os.path.exists(filename):
# os.remove(filename)
# delayed = data_slice.to_netcdf(filename,
# encoding=encoding,
# engine=self.netcdf_engine,
# compute=not is_dask)
# delayeds.append(delayed)
# del delayed, data_slice, filename
# # process lazy chunk
# if is_dask:
# chunk_caption = f'{caption}: {(counter+1):0{digits}}...{(counter+len(chunk)):0{digits}} from {stacksize}'
# tqdm_dask(dask.persist(delayeds), desc=chunk_caption)
# # update chunks counter
# counter += len(chunk)
# del delayeds, stack
# # cleanup - release all memory
# get_client().restart()
# # cleanup - sometimes writing NetCDF handlers are not closed immediately and block reading access
# import gc; gc.collect()
def delete_stack(self, name):
import os
filenames = self._glob_re(name + '_[0-9]{8}(_[0-9]{8})*.grd')
#print ('filenames', filenames)
for filename in filenames:
if os.path.exists(filename):
os.remove(filename)
================================================
FILE: pygmtsar/pygmtsar/MultiInstanceManager.py
================================================
# ----------------------------------------------------------------------------
# PyGMTSAR
#
# This file is part of the PyGMTSAR project: https://github.com/mobigroup/gmtsar
#
# Copyright (c) 2024, Alexey Pechnikov
#
# Licensed under the BSD 3-Clause License (see LICENSE for details)
# ----------------------------------------------------------------------------
class MultiInstanceManager:
def __init__(self, *instances):
self.instances = instances
self.context_params = {}
def __getattr__(self, name):
def method_wrapper(*args, **kwargs):
results = []
for instance in self.instances:
# Adjust arguments with context-specific parameters if applicable
instance_kwargs = {**kwargs}
for key, values in self.context_params.items():
if len(values) == len(self.instances):
instance_kwargs[key] = values[self.instances.index(instance)]
instance_method = getattr(instance, name)
results.append(instance_method(*args, **instance_kwargs))
return results
return method_wrapper
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.context_params = {}
def set_context(self, **kwargs):
"""Prepare specific attributes or arguments for each instance."""
# import collections.abc to check for iterable
import collections.abc
for key, value in kwargs.items():
if isinstance(value, collections.abc.Iterable) and not isinstance(value, (str, bytes)):
value_list = list(value)
if len(value_list) != len(self.instances):
raise ValueError(f"Expected a list of {len(self.instances)} values for '{key}', but got {len(value_list)}")
self.context_params[key] = value_list
else:
raise ValueError(f"Provided value for '{key}' is not an appropriate iterable")
return self
def run_callable(self, func):
"""
Execute a lambda function or any callable across all managed instances,
passing each instance and its context-specific arguments to the callable.
This method allows for flexible execution of any function that requires instance-level context.
It is particularly useful for operations that need to dynamically interact with instance attributes
or methods during execution.
Args:
func (callable): A function or lambda to execute. The function should
accept the instance as its first argument, followed by any
number of keyword arguments.
Returns:
list: The results from executing the function across all instances.
Example:
# Assuming 'func' is a function defined to operate on an instance 'inst' with additional arguments
with sbas.set_context(arg1=value1, arg2=value2):
results = sbas.run_callable(lambda inst, arg1, arg2: inst.custom_method(arg1, arg2))
Note:
The function passed to this method should be capable of handling the specific attributes
or the state of the instances as passed. Misalignment between the expected instance state
and the function's requirements can lead to runtime errors.
"""
results = []
for instance in self.instances:
instance_args = {k: v[self.instances.index(instance)] for k, v in self.context_params.items()}
results.append(func(instance, **instance_args))
return results
def run_method(self, method_name, **method_args):
"""
Execute a specified method on all managed instances with given arguments,
considering any context-specific adjustments.
Args:
method_name (str): The name of the method to execute.
**method_args: Arbitrary keyword arguments to pass to the method.
Returns:
list: The results from each instance's method execution.
Example:
with sbas.set_context(da=psf):
psf = sbas.run_method('conncomp_main', data=xr.where(np.isfinite(da), 1, 0).chunk(-1))
Raises:
AttributeError: If the specified method is not found on an instance.
"""
results = []
for instance in self.instances:
if hasattr(instance, method_name):
method = getattr(instance, method_name)
instance_args = {**{k: v[self.instances.index(instance)] for k, v in self.context_params.items()}, **method_args}
results.append(method(**instance_args))
else:
raise AttributeError(f"{instance} does not have a method named {method_name}")
return results
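# Hedged usage sketch (the two Stack instances and the compute_interferogram()
# method name are assumptions): fan a method call out to several managed
# instances, supplying one per-instance argument via set_context():
# manager = MultiInstanceManager(stack_asc, stack_des)
# with manager.set_context(wavelength=[400, 200]):
#     results = manager.compute_interferogram(pairs)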
================================================
FILE: pygmtsar/pygmtsar/NCubeVTK.py
================================================
# ----------------------------------------------------------------------------
# PyGMTSAR
#
# This file is part of the PyGMTSAR project: https://github.com/mobigroup/gmtsar
#
# Copyright (c) 2023, Alexey Pechnikov
#
# Licensed under the BSD 3-Clause License (see LICENSE for details)
# ----------------------------------------------------------------------------
# the code adopted from my project https://github.com/mobigroup/ParaView-plugins
class NCubeVTK:
"""
This class provides a static method for mapping a 2D image onto a 3D topography.
Example usage:
```python
from pygmtsar import NCubeVTK
import vtk
vtk_ugrid = NCubeVTK.ImageOnTopography(sbas.get_dem().to_dataset().rename({'lat': 'y', 'lon': 'x'}))
writer = vtk.vtkUnstructuredGridWriter()
writer.SetFileName("DEM_WGS84.vtk")
writer.SetInputData(vtk_ugrid)
writer.Write()
vtk_ugrid
```
```python
from pygmtsar import NCubeVTK
import pyvista as pv
vtk_ugrid = NCubeVTK.ImageOnTopography(sbas.get_dem().to_dataset().rename({'lat': 'y', 'lon': 'x'}))
vtk_ugrid = pv.UnstructuredGrid(vtk_ugrid)
vtk_ugrid.save('DEM_WGS84.vtk')
vtk_ugrid
```
"""
@staticmethod
def ImageOnTopography(dataset, band_mask=None, use_sealevel=False):
"""
Map a 2D image onto a 3D topography.
Parameters:
dataset: An Xarray Dataset to use for mapping the image.
band_mask (optional): A string representing a variable in the dataset to use as a mask. Default is None.
use_sealevel (optional): A boolean flag indicating whether to replace negative topography by sea level (z=0). Default is False.
Returns:
A vtkUnstructuredGrid representing the image mapped onto the topography.
Raises:
Error if the band_mask variable is not found in the dataset, or if the dataset is not an Xarray Dataset.
Note: NODATA values are filled with NaN for float variables.
"""
from vtk import vtkPoints, vtkStructuredGrid, vtkThreshold, vtkDataObject, \
VTK_FLOAT, VTK_UNSIGNED_CHAR, vtkStringArray, vtkFloatArray, vtkIntArray
from vtk.util import numpy_support as vn
import numpy as np
import xarray as xr
# https://corteva.github.io/rioxarray/stable/getting_started/getting_started.html
#import rioxarray as rio
print ('NOTE: Function is deprecated. Convert Xarray dataset to VTK using Stack.as_vtk()')
if band_mask is not None:
if band_mask not in dataset.data_vars:
print (f'ERROR: {band_mask} variable not found')
return
#if 'band' not in dataset[band_mask].dims:
# print (f'ERROR: {band_mask} variable has no band dimension')
# return
if not isinstance(dataset, xr.Dataset):
print ('ERROR: Unsupported dataset argument, should be an Xarray Dataset')
return
# fill NODATA by NAN
for data_var in dataset.data_vars:
#if dataset[data_var].values.dtype in [np.dtype('float16'),np.dtype('float32'),np.dtype('float64')]:
if np.issubdtype(dataset[data_var].dtype, np.floating):
dataset[data_var].values = dataset[data_var].values.astype('float32')
if '_FillValue' in dataset[data_var].attrs and not np.isnan(dataset[data_var]._FillValue):
dataset[data_var].values[dataset[data_var].values == dataset[data_var]._FillValue] = np.nan
xs = dataset.x.values
ys = dataset.y.values
(yy,xx) = np.meshgrid(ys, xs)
if 'z' in dataset:
zs = dataset.z.values
znanmask = np.isnan(dataset.z)
# replace negative topography by sea level (z=0)
if use_sealevel:
zs[zs <= 0] = 0
else:
# set z coordinate to zero
zs = np.zeros_like(xx)
znanmask = 0
# create raster mask by geometry and for NaNs
vtk_points = vtkPoints()
points = np.column_stack((xx.ravel('F'), yy.ravel('F'), zs.ravel('C')))
vtk_points.SetData(vn.numpy_to_vtk(points, deep=True))
sgrid = vtkStructuredGrid()
sgrid.SetDimensions(len(xs), len(ys), 1)
sgrid.SetPoints(vtk_points)
for data_var in dataset.data_vars:
if 'z' == data_var:
# variable is already included into coordinates
continue
da = dataset[data_var]
dims = da.dims
#print (data_var, dims, da.dtype)
if 'band' in dims:
bands = da.band.shape[0]
#print ('bands', bands)
if bands in [3,4]:
# RGB or RGBA, select 3 bands only
array = vn.numpy_to_vtk(da[:3].values.reshape(3,-1).T, deep=True, array_type=VTK_UNSIGNED_CHAR)
array.SetName(da.name)
sgrid.GetPointData().AddArray(array)
elif bands == 1:
array = vn.numpy_to_vtk(da.values.reshape(1,-1).T, deep=True, array_type=VTK_FLOAT)
array.SetName(da.name)
sgrid.GetPointData().AddArray(array)
else:
print (f'ERROR: Unsupported bands count (should be 1,3 or 4) {bands} for variable {data_var}')
return
else:
array = vn.numpy_to_vtk(da.values.ravel(), deep=True, array_type=VTK_FLOAT)
array.SetName(da.name)
sgrid.GetPointData().AddArray(array)
if band_mask is not None and 'band' in dataset[band_mask].dims:
# mask NaNs for band variable
nanmask = np.any(np.isnan(dataset[band_mask]),axis=0)
# mask zero areas for color bands
# a bit of magic to enhance image (mask single channel zeroes)
# that's more correct way, only totally black pixels ignored
#zeromask = (~np.all(image.values==0,axis=0)).astype(float)
# that's a magic for better borders
zeromask = np.any(dataset[band_mask].values==0,axis=0)
mask = nanmask | zeromask | znanmask
elif band_mask is not None:
# mask NaNs
nanmask = np.isnan(dataset[band_mask])
mask = nanmask | znanmask
#else:
# mask = znanmask
if band_mask is not None:
array = vn.numpy_to_vtk(mask.values.ravel(), deep=True, array_type=VTK_UNSIGNED_CHAR)
array.SetName('band_mask')
sgrid.GetPointData().AddArray(array)
for coord in dataset.coords:
#print (coord, dataset[coord].dims, len(dataset[coord].dims))
if len(dataset[coord].dims) > 0:
# real coordinate, ignore it
continue
#print (coord, dataset[coord].dtype)
#print ('np.datetime64', np.issubdtype(dataset[coord].dtype, np.datetime64))
#print ('np.int64', np.issubdtype(dataset[coord].dtype, np.int64))
#print ('np.float64', np.issubdtype(dataset[coord].dtype, np.float64))
if np.issubdtype(dataset[coord].dtype, np.datetime64):
data_array = vtkStringArray()
data_value = str(dataset[coord].dt.date.values)
elif np.issubdtype(dataset[coord].dtype, str):
data_array = vtkStringArray()
data_value = str(dataset[coord].values)
elif np.issubdtype(dataset[coord].dtype, np.int64):
data_array = vtkIntArray()
data_value = dataset[coord].values.astype(np.int64)
elif np.issubdtype(dataset[coord].dtype, np.float64):
data_array = vtkFloatArray()
data_value = dataset[coord].values.astype(np.float64)
else:
print(f'NOTE: unsupported attribute {coord} datatype {dataset[coord].dtype}, skipping it')
continue
data_array.SetName(coord)
data_array.InsertNextValue(data_value)
sgrid.GetFieldData().AddArray(data_array)
# required for 3D interactive rendering on Google Colab
if band_mask is None:
return sgrid
thresh = vtkThreshold()
thresh.SetInputData(sgrid)
thresh.SetInputArrayToProcess(0, 0, 0, vtkDataObject.FIELD_ASSOCIATION_POINTS, 'band_mask')
# drop float NANs
#thresh.SetLowerThreshold(-1e30)
#thresh.SetUpperThreshold(1e30)
thresh.SetUpperThreshold(0)
thresh.Update()
ugrid = thresh.GetOutput()
# remove internal mask variable from output
ugrid.GetPointData().RemoveArray('band_mask')
return ugrid
#vtk_ugrid = NCubeVTK.ImageOnTopography(ds)
#pv.UnstructuredGrid(vtk_ugrid)
================================================
FILE: pygmtsar/pygmtsar/PRM.py
================================================
# ----------------------------------------------------------------------------
# PyGMTSAR
#
# This file is part of the PyGMTSAR project: https://github.com/mobigroup/gmtsar
#
# Copyright (c) 2021, Alexey Pechnikov
#
# Licensed under the BSD 3-Clause License (see LICENSE for details)
# ----------------------------------------------------------------------------
from .datagrid import datagrid
from .PRM_gmtsar import PRM_gmtsar
class PRM(datagrid, PRM_gmtsar):
int_types = ['num_valid_az', 'num_rng_bins', 'num_patches', 'bytes_per_line', 'good_bytes_per_line', 'num_lines','SC_identity']
@staticmethod
def SC_timestamp(SC_clock):
from datetime import datetime, timedelta
# extract year and Julian day with fractional part
year = int(SC_clock // 1000)
julian_day = SC_clock % 1000 # Keep fractional part of the day
# split integer Julian day and fractional day
integer_julian_day = int(julian_day)
fractional_day = julian_day - integer_julian_day
# convert integer and fraction parts of Julian day to datetime
timestamp = datetime(year, 1, 1) + timedelta(days=integer_julian_day) + timedelta(days=fractional_day)
return timestamp
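# Illustrative conversion (synthetic value): SC_clock packs the year and the
# fractional Julian day as year*1000 + day, so 2021005.5 splits into year 2021
# and day 5.5, which the arithmetic above maps to noon on 2021-01-06:
# PRM.SC_timestamp(2021005.5)   # -> datetime.datetime(2021, 1, 6, 12, 0)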
@staticmethod
def to_numeric_or_original(val):
if isinstance(val, str):
try:
float_val = float(val)
return float_val
except ValueError:
return val
return val
# my replacement function for GMT based robust 2D trend coefficient calculations:
# gmt trend2d r.xyz -Fxyzmw -N1r -V
# gmt trend2d r.xyz -Fxyzmw -N2r -V
# gmt trend2d r.xyz -Fxyzmw -N3r -V
# https://github.com/GenericMappingTools/gmt/blob/master/src/trend2d.c#L719-L744
# 3 model parameters
# rank = 3 => nu = size-3
@staticmethod
def robust_trend2d(data, rank):
"""
Perform robust linear regression to estimate the trend in 2D data.
Parameters
----------
data : numpy.ndarray
Array containing the input data. The shape of the array should be (N, 3), where N is the number of data points.
The first column represents the x-coordinates, the second column represents the y-coordinates (if rank is 3),
and the third column represents the z-values.
rank : int
Number of model parameters to fit. Should be 1, 2, or 3. If rank is 1, the function fits a constant trend.
If rank is 2, it fits a linear trend. If rank is 3, it fits a planar trend.
Returns
-------
numpy.ndarray
Array containing the estimated trend coefficients. The length of the array depends on the specified rank.
For rank 1, the array will contain a single value (intercept).
For rank 2, the array will contain two values (intercept and slope).
For rank 3, the array will contain three values (intercept, slope_x, slope_y).
Raises
------
Exception
If the specified rank is not 1, 2, or 3.
Notes
-----
The function performs robust linear regression using the M-estimator technique. It iteratively fits a linear model
and updates the weights based on the residuals until convergence. The weights are adjusted using Tukey's bisquare
weights to downweight outliers.
References
----------
- Rousseeuw, P. J. (1984). Least median of squares regression. Journal of the American statistical Association, 79(388), 871-880.
- Huber, P. J. (1973). Robust regression: asymptotics, conjectures and Monte Carlo. The Annals of Statistics, 1(5), 799-821.
"""
import numpy as np
from sklearn.linear_model import LinearRegression
# scale factor for normally distributed data is 1.4826
# https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.median_abs_deviation.html
MAD_NORMALIZE = 1.4826
# significance value
sig_threshold = 0.51
if rank not in [1,2,3]:
raise Exception('Number of model parameters "rank" should be 1, 2, or 3')
#see gmt_stat.c
def gmtstat_f_q (chisq1, nu1, chisq2, nu2):
import scipy.special as sc
if chisq1 == 0.0:
return 1
if chisq2 == 0.0:
return 0
return sc.betainc(0.5*nu2, 0.5*nu1, chisq2/(chisq2+chisq1))
if rank in [2,3]:
x = data[:,0]
x = np.interp(x, (x.min(), x.max()), (-1, +1))
if rank == 3:
y = data[:,1]
y = np.interp(y, (y.min(), y.max()), (-1, +1))
z = data[:,2]
w = np.ones(z.shape)
if rank == 1:
xy = np.expand_dims(np.zeros(z.shape),1)
elif rank == 2:
xy = np.expand_dims(x,1)
elif rank == 3:
xy = np.stack([x,y]).transpose()
# create linear regression object
mlr = LinearRegression()
chisqs = []
coeffs = []
while True:
# fit linear regression
mlr.fit(xy, z, sample_weight=w)
r = np.abs(z - mlr.predict(xy))
chisq = np.sum((r**2*w))/(z.size-3)
chisqs.append(chisq)
k = 1.5 * MAD_NORMALIZE * np.median(r)
w = np.where(r <= k, 1, (2*k/r) - (k * k/(r**2)))
sig = 1 if len(chisqs)==1 else gmtstat_f_q(chisqs[-1], z.size-3, chisqs[-2], z.size-3)
# Go back to previous model only if previous chisq < current chisq
if len(chisqs)==1 or chisqs[-2] > chisqs[-1]:
coeffs = [mlr.intercept_, *mlr.coef_]
#print ('chisq', chisq, 'significant', sig)
if sig < sig_threshold:
break
# get the slope and intercept of the line best fit
return (coeffs[:rank])
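# Hedged sketch with synthetic data: recover a planar trend z = 3 + 2*x' + y'
# defined in the normalized [-1, 1] coordinates used above from noisy samples:
# import numpy as np
# generator = np.random.default_rng(0)
# x = generator.uniform(0, 100, 200)
# y = generator.uniform(0, 100, 200)
# xn = np.interp(x, (x.min(), x.max()), (-1, +1))
# yn = np.interp(y, (y.min(), y.max()), (-1, +1))
# z = 3 + 2*xn + yn + generator.normal(0, 0.1, 200)
# coeffs = PRM.robust_trend2d(np.column_stack([x, y, z]), 3)
# # coeffs ~ [3, 2, 1] (intercept, slope_x, slope_y)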
# # the standalone function is compatible but too slow, though that should not be a problem
# @staticmethod
# def robust_trend2d(data, rank):
# import numpy as np
# import statsmodels.api as sm
#
# if rank not in [1, 2, 3]:
# raise Exception('Number of model parameters "rank" should be 1, 2, or 3')
#
# if rank in [2, 3]:
# x = data[:, 0]
# x = np.interp(x, (x.min(), x.max()), (-1, +1))
# if rank == 3:
# y = data[:, 1]
# y = np.interp(y, (y.min(), y.max()), (-1, +1))
# z = data[:, 2]
#
# if rank == 1:
# X = sm.add_constant(np.ones(z.shape))
# elif rank == 2:
# X = sm.add_constant(x)
# elif rank == 3:
# X = sm.add_constant(np.column_stack((x, y)))
#
# # Hampel weighting function looks the most accurate for noisy data
# rlm_model = sm.RLM(z, X, M=sm.robust.norms.Hampel())
# rlm_results = rlm_model.fit()
#
# return rlm_results.params[:rank]
# # calculate MSE (optional)
# @staticmethod
# def robust_trend2d_mse(data, coeffs, rank):
# import numpy as np
#
# x = data[:, 0]
# y = data[:, 1]
# z_actual = data[:, 2]
#
# if rank == 1:
# z_predicted = coeffs[0]
# elif rank == 2:
# z_predicted = coeffs[0] + coeffs[1] * x
# elif rank == 3:
# z_predicted = coeffs[0] + coeffs[1] * x + coeffs[2] * y
#
# mse = np.mean((z_actual - z_predicted) ** 2)
# return mse
@staticmethod
def from_list(prm_list):
"""
Convert a list of parameter and value pairs to a PRM object.
Parameters
----------
prm_list : list
A list of PRM strings.
Returns
-------
PRM
A PRM object.
"""
from io import StringIO
prm = StringIO('\n'.join(prm_list))
return PRM._from_io(prm)
@staticmethod
def from_str(prm_string):
"""
Convert a string of parameter and value pairs to a PRM object.
Parameters
----------
prm_string : str
A PRM string.
Returns
-------
PRM
A PRM object.
"""
from io import StringIO
if isinstance(prm_string, bytes):
# for cases like
# return PRM.from_str(os.read(pipe2[0], int(10e6)))
prm_string = prm_string.decode('utf-8')
# for already decoded cases like
# return PRM.from_str(os.read(pipe2[0], int(10e6)).decode('utf8'))
prm = StringIO(prm_string)
return PRM._from_io(prm)
@staticmethod
def from_file(prm_filename):
"""
Convert a PRM file of parameter and value pairs to a PRM object.
Parameters
----------
prm_filename : str
The filename of the PRM file.
Returns
-------
PRM
A PRM object.
"""
prm = PRM._from_io(prm_filename)
prm.filename = prm_filename
return prm
@staticmethod
def _from_io(prm):
"""
Read parameter and value pairs from IO stream to a PRM object.
Parameters
----------
prm : IO stream
The IO stream.
Returns
-------
PRM
A PRM object.
"""
import pandas as pd
df = pd.read_csv(prm, sep=r'\s+=\s+', header=None, names=['name', 'value'], engine='python').set_index('name')
df['value'] = df['value'].map(PRM.to_numeric_or_original)
return PRM(df)
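# Hedged parsing sketch (the parameter names and values are synthetic): the
# readers above accept 'name = value' pairs, one per line, with numeric values
# coerced to float:
# prm = PRM.from_str('input_file = test.raw\nPRF = 1686.0674')
# prm.get('PRF')   # -> 1686.0674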
def __init__(self, prm=None):
"""
Initialize a PRM object.
Parameters
----------
prm : PRM or pd.DataFrame, optional
The PRM object or DataFrame to initialize from. Default is None.
Returns
-------
None
"""
import pandas as pd
# Initialize an empty DataFrame if prm is None
if prm is None:
_prm = pd.DataFrame(columns=['name', 'value'])
elif isinstance(prm, pd.DataFrame):
_prm = prm.reset_index()
else:
_prm = prm.df.reset_index()
# Convert values to numeric where possible, keep original value otherwise
_prm['value'] = _prm['value'].map(PRM.to_numeric_or_original)
# Set the DataFrame for the PRM object
self.df = _prm[['name', 'value']].drop_duplicates(keep='last').set_index('name')
self.filename = None
def __eq__(self, other):
"""
Compare two PRM objects for equality.
Parameters
----------
other : PRM
The other PRM object to compare with.
Returns
-------
bool
True if the PRM objects are equal, False otherwise.
"""
return isinstance(other, PRM) and self.df.equals(other.df)
def __str__(self):
"""
Return a string representation of the PRM object.
Returns
-------
str
The string representation of the PRM object.
"""
return self.to_str()
def __repr__(self):
"""
Return a string representation of the PRM object for debugging.
Returns
-------
str
The string representation of the PRM object. If the PRM object was created from a file,
the filename and the number of items in the DataFrame representation of the PRM object
are included in the string.
"""
if self.filename:
return 'Object %s (%s) %d items\n%r' % (self.__class__.__name__, self.filename, len(self.df), self.df)
else:
return 'Object %s %d items\n%r' % (self.__class__.__name__, len(self.df), self.df)
# use 'g' format for Python and numpy float values
def set(self, prm=None, **kwargs):
"""
Set PRM values.
Parameters
----------
prm : PRM, optional
The PRM object to set values from. Default is None.
**kwargs
Additional keyword arguments for setting individual values.
Returns
-------
PRM
The updated PRM object.
"""
import numpy as np
if isinstance(prm, PRM):
for (key, value) in prm.df.itertuples():
self.df.loc[key] = value
elif prm is not None:
raise Exception('Argument is not a PRM object')
for key, value in kwargs.items():
self.df.loc[key] = value
return self
def to_dataframe(self):
"""
Convert the PRM object to a DataFrame.
Returns
-------
pd.DataFrame
The DataFrame representation of the PRM object.
"""
return self.df
def to_file(self, prm):
"""
Save the PRM object to a PRM file.
Parameters
----------
prm : str
The filename of the PRM file to save to.
Returns
-------
PRM
The PRM object.
"""
self._to_io(prm)
# update internal filename after saving with the new filename
self.filename = prm
return self
def update(self, debug=False):
"""
Save PRM file to disk.
Parameters
----------
debug : bool, optional
Whether to enable debug mode. Default is False.
Returns
-------
PRM
The updated PRM object.
"""
if self.filename is None:
raise Exception('PRM is not created from file, use to_file() method instead')
if debug:
print ('DEBUG:', self)
return self.to_file(self.filename)
def to_str(self):
"""
Convert the PRM object to a string.
Returns
-------
str
The PRM string.
"""
return self._to_io()
def _to_io(self, output=None):
"""
Convert the PRM object to an IO stream.
Parameters
----------
output : IO stream, optional
The IO stream to write the PRM string to. Default is None.
Returns
-------
str
The PRM string.
"""
return self.df.reset_index().astype(str).apply(lambda row: (' = ').join(row), axis=1)\
.to_csv(output, header=None, index=None)
def sel(self, *args):
"""
Select specific PRM attributes and create a new PRM object.
Parameters
----------
*args : str
The attribute names to select.
Returns
-------
PRM
The new PRM object with selected attributes.
"""
return PRM(self.df.loc[[*args]])
def __add__(self, other):
"""
Add two PRM objects or a PRM object and a scalar.
Parameters
----------
other : PRM or scalar
The PRM object or scalar to add.
Returns
-------
PRM
The resulting PRM object after addition.
"""
import pandas as pd
if isinstance(other, PRM):
prm = pd.concat([self.df, other.df])
# drop duplicates
prm = prm.groupby(prm.index).last()
else:
prm = self.df + other
return PRM(prm)
def __sub__(self, other):
"""
Subtract two PRM objects or a PRM object and a scalar.
Parameters
----------
other : PRM or scalar
The PRM object or scalar to subtract.
Returns
-------
PRM
The resulting PRM object after subtraction.
"""
import pandas as pd
if isinstance(other, PRM):
prm = pd.concat([self.df, other.df])
# drop duplicates
prm = prm.groupby(prm.index).last()
else:
prm = self.df - other
return PRM(prm)
def get(self, *args):
"""
Get the values of specific PRM attributes.
Parameters
----------
*args : str
The attribute names to get values for.
Returns
-------
Union[Any, List[Any]]
The values of the specified attributes. If only one attribute is requested,
return its value directly. If multiple attributes are requested, return a list of values.
"""
vals = [self.df.loc[[key]].iloc[0].values[0] for key in args]
out = [int(v) if k in self.int_types else v for (k, v) in zip(args,vals)]
if len(out) == 1:
return out[0]
return out
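# Hedged sketch (synthetic values): set() accepts keyword parameters and get()
# returns a single value or a list; names listed in int_types come back as int:
# prm = PRM().set(num_lines=3000, PRF=1686.0674)
# prm.get('num_lines')          # -> 3000 (int, per int_types)
# prm.get('num_lines', 'PRF')   # -> [3000, 1686.0674]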
def shift_atime(self, lines, inplace=False):
"""
Shift time in azimuth by a number of lines.
Parameters
----------
lines : float
The number of lines to shift by.
inplace : bool, optional
Whether to modify the PRM object in-place. Default is False.
Returns
-------
PRM
The shifted PRM object or DataFrame. If 'inplace' is True, returns modified PRM object,
otherwise, returns a new PRM with shifted times.
"""
prm = self.sel('clock_start','clock_stop','SC_clock_start','SC_clock_stop') + lines/self.get('PRF')/86400.0
if inplace:
return self.set(prm)
else:
return prm
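# Hedged sketch (values are synthetic): shifting by N lines adds
# N / PRF / 86400 days to the four clock parameters; with PRF = 1686.0674,
# a 1000-line shift adds 1000/1686.0674/86400 ~= 6.864e-6 days:
# prm.shift_atime(1000)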
# fitoffset.csh 3 3 freq_xcorr.dat
# PRM.fitoffset(3, 3, offset_dat)
# PRM.fitoffset(3, 3, matrix_fromfile='raw/offset.dat')
@staticmethod
def fitoffset(rank_rng, rank_azi, matrix=None, matrix_fromfile=None, SNR=20):
"""
Estimates range and azimuth offsets for InSAR (Interferometric Synthetic Aperture Radar) data.
Parameters
----------
rank_rng : int
Number of parameters to fit in the range direction.
rank_azi : int
Number of parameters to fit in the azimuth direction.
matrix : numpy.ndarray, optional
Array of range and azimuth offset estimates. Default is None.
matrix_fromfile : str, optional
Path to a file containing range and azimuth offset estimates. Default is None.
SNR : int, optional
Signal-to-noise ratio cutoff. Data points with SNR below this threshold are discarded.
Default is 20.
Returns
-------
prm : PRM object
An instance of the PRM class with the calculated parameters.
Raises
------
Exception
If both 'matrix' and 'matrix_fromfile' arguments are provided or if neither is provided.
Exception
If there are not enough data points to estimate the parameters.
Usage
-----
The function estimates range and azimuth offsets for InSAR data based on the provided input.
It performs robust fitting to obtain range and azimuth coefficients, calculates scale coefficients,
and determines the range and azimuth shifts. The resulting parameters are then stored in a PRM object.
Example
-------
fitoffset(3, 3, matrix_fromfile='raw/offset.dat')
"""
import numpy as np
import math
if (matrix is None and matrix_fromfile is None) or (matrix is not None and matrix_fromfile is not None):
raise Exception('One and only one argument matrix or matrix_fromfile should be defined')
if matrix_fromfile is not None:
matrix = np.genfromtxt(matrix_fromfile)
# first extract the range and azimuth data
rng = matrix[np.where(matrix[:,4]>SNR)][:,[0,2,1]]
azi = matrix[np.where(matrix[:,4]>SNR)][:,[0,2,3]]
# make sure there are enough points remaining
if rng.shape[0] < 8:
raise Exception(f'FAILED - not enough points to estimate parameters, try lower SNR ({rng.shape[0]} < 8)')
rng_coef = PRM.robust_trend2d(rng, rank_rng)
azi_coef = PRM.robust_trend2d(azi, rank_azi)
# print MSE (optional)
#rng_mse = PRM.robust_trend2d_mse(rng, rng_coef, rank_rng)
#azi_mse = PRM.robust_trend2d_mse(azi, azi_coef, rank_azi)
#print ('rng_mse_norm', rng_mse/len(rng), 'azi_mse_norm', azi_mse/len(azi))
# range and azimuth data ranges
scale_coef = [np.min(rng[:,0]), np.max(rng[:,0]), np.min(rng[:,1]), np.max(rng[:,1])]
#print ('rng_coef', rng_coef)
#print ('azi_coef', azi_coef)
# now convert to range coefficients
rshift = rng_coef[0] - rng_coef[1]*(scale_coef[1]+scale_coef[0])/(scale_coef[1]-scale_coef[0]) \
- rng_coef[2]*(scale_coef[3]+scale_coef[2])/(scale_coef[3]-scale_coef[2])
# now convert to azimuth coefficients
ashift = azi_coef[0] - azi_coef[1]*(scale_coef[1]+scale_coef[0])/(scale_coef[1]-scale_coef[0]) \
- azi_coef[2]*(scale_coef[3]+scale_coef[2])/(scale_coef[3]-scale_coef[2])
#print ('rshift', rshift, 'ashift', ashift)
# note: Python's x % y expression and numpy's results differ from C; use the math module function
# use 'g' format for float values as in the original GMTSAR code to easily compare results
prm = PRM().set(rshift =int(rshift) if rshift>=0 else int(rshift)-1,
sub_int_r =math.fmod(rshift, 1) if rshift>=0 else math.fmod(rshift, 1) + 1,
stretch_r =rng_coef[1]*2/(scale_coef[1]-scale_coef[0]),
a_stretch_r=rng_coef[2]*2/(scale_coef[3]-scale_coef[2]),
ashift =int(ashift) if ashift>=0 else int(ashift)-1,
sub_int_a =math.fmod(ashift, 1) if ashift>=0 else math.fmod(ashift, 1) + 1,
stretch_a =azi_coef[1]*2/(scale_coef[1]-scale_coef[0]),
a_stretch_a=azi_coef[2]*2/(scale_coef[3]-scale_coef[2]),
)
return prm
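    # usage sketch (synthetic matrix, illustrative only): one row per
    # cross-correlation patch; rows with SNR <= 20 are dropped and at least
    # 8 must remain.
    # import numpy as np
    # r, a = [g.ravel() for g in np.meshgrid(np.linspace(0, 20000, 10),
    #                                        np.linspace(0, 10000, 10))]
    # matrix = np.column_stack([r, np.full(100, 5.25), a,
    #                           np.full(100, -2.75), np.full(100, 100.0)])
    # prm = PRM.fitoffset(3, 3, matrix=matrix)
    # prm.get('rshift', 'sub_int_r')  # 5 and 0.25 recombine to the 5.25 px offset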
def d
SYMBOL INDEX (349 symbols across 61 files)
FILE: pygmtsar/pygmtsar/ASF.py
class ASF (line 13) | class ASF(tqdm_joblib):
method __init__ (line 23) | def __init__(self, username=None, password=None):
method _get_asf_session (line 33) | def _get_asf_session(self):
method download (line 37) | def download(self, basedir, scenes_or_bursts, subswaths=None, polariza...
method download_scenes (line 84) | def download_scenes(self, basedir, scenes, subswaths, polarization='VV...
method download_bursts (line 262) | def download_bursts(self, basedir, bursts, session=None, n_jobs=8, job...
method search (line 654) | def search(geometry, startTime=None, stopTime=None, flightDirection=None,
method plot (line 699) | def plot(bursts):
FILE: pygmtsar/pygmtsar/AWS.py
class AWS (line 10) | class AWS():
method download_dem (line 12) | def download_dem(self, geometry, filename=None, product='1s', provider...
FILE: pygmtsar/pygmtsar/GMT.py
class GMT (line 10) | class GMT():
method download_dem (line 11) | def download_dem(self, geometry, filename=None, product='1s', skip_exi...
method download_landmask (line 45) | def download_landmask(self, geometry, filename=None, product='1s', res...
FILE: pygmtsar/pygmtsar/IO.py
class IO (line 13) | class IO(datagrid):
method _glob_re (line 18) | def _glob_re(self, pathname):
method dump (line 24) | def dump(self, to_path=None):
method restore (line 65) | def restore(from_path):
method backup (line 103) | def backup(self, backup_dir, copy=False, debug=False):
method get_filename (line 191) | def get_filename(self, name, add_subswath=False):
method get_filenames (line 203) | def get_filenames(self, pairs, name, add_subswath=False):
method open_data (line 287) | def open_data(self, dates=None, subswath=None, scale=2.5e-07, debug=Fa...
method open_cube (line 422) | def open_cube(self, name):
method sync_cube (line 502) | def sync_cube(self, data, name=None, caption='Syncing NetCDF 2D/3D Dat...
method save_cube (line 512) | def save_cube(self, data, name=None, caption='Saving NetCDF 2D/3D Data...
method delete_cube (line 609) | def delete_cube(self, name):
method sync_stack (line 617) | def sync_stack(self, data, name=None, caption='Saving 2D Stack', queue...
method open_stack (line 628) | def open_stack(self, name, stack=None):
method save_stack (line 792) | def save_stack(self, data, name, caption='Saving 2D Stack', queue=None...
method delete_stack (line 988) | def delete_stack(self, name):
FILE: pygmtsar/pygmtsar/MultiInstanceManager.py
class MultiInstanceManager (line 10) | class MultiInstanceManager:
method __init__ (line 12) | def __init__(self, *instances):
method __getattr__ (line 16) | def __getattr__(self, name):
method __enter__ (line 31) | def __enter__(self):
method __exit__ (line 34) | def __exit__(self, exc_type, exc_value, traceback):
method set_context (line 37) | def set_context(self, **kwargs):
method run_callable (line 51) | def run_callable(self, func):
method run_method (line 84) | def run_method(self, method_name, **method_args):
FILE: pygmtsar/pygmtsar/NCubeVTK.py
class NCubeVTK (line 12) | class NCubeVTK:
method ImageOnTopography (line 41) | def ImageOnTopography(dataset, band_mask=None, use_sealevel=False):
FILE: pygmtsar/pygmtsar/PRM.py
class PRM (line 13) | class PRM(datagrid, PRM_gmtsar):
method SC_timestamp (line 18) | def SC_timestamp(SC_clock):
method to_numeric_or_original (line 35) | def to_numeric_or_original(val):
method robust_trend2d (line 52) | def robust_trend2d(data, rank):
method from_list (line 204) | def from_list(prm_list):
method from_str (line 223) | def from_str(prm_string):
method from_file (line 248) | def from_file(prm_filename):
method _from_io (line 268) | def _from_io(prm):
method __init__ (line 289) | def __init__(self, prm=None):
method __eq__ (line 319) | def __eq__(self, other):
method __str__ (line 335) | def __str__(self):
method __repr__ (line 346) | def __repr__(self):
method set (line 363) | def set(self, prm=None, **kwargs):
method to_dataframe (line 390) | def to_dataframe(self):
method to_file (line 401) | def to_file(self, prm):
method update (line 420) | def update(self, debug=False):
method to_str (line 442) | def to_str(self):
method _to_io (line 453) | def _to_io(self, output=None):
method sel (line 470) | def sel(self, *args):
method __add__ (line 486) | def __add__(self, other):
method __sub__ (line 509) | def __sub__(self, other):
method get (line 532) | def get(self, *args):
method shift_atime (line 553) | def shift_atime(self, lines, inplace=False):
method fitoffset (line 580) | def fitoffset(rank_rng, rank_azi, matrix=None, matrix_fromfile=None, S...
method diff (line 672) | def diff(self, other, gformat=True):
method fix_merged (line 704) | def fix_merged(self, maxy, maxx, minh):
method fix_aligned (line 716) | def fix_aligned(self):
method read_SLC_int (line 763) | def read_SLC_int(self, scale=2.5e-07, shape=None):
method read_LED (line 897) | def read_LED(self):
method get_seconds (line 948) | def get_seconds(self):
method get_height (line 954) | def get_height(self, x, y, z):
method get_baseline_projections (line 964) | def get_baseline_projections(self, other, baseline, alpha):
method get_components (line 986) | def get_components(self, baseline, ref_pos, rep_pos):
method get_spacing (line 1267) | def get_spacing(self, grid=1):
FILE: pygmtsar/pygmtsar/PRM_gmtsar.py
class PRM_gmtsar (line 11) | class PRM_gmtsar:
method calc_dop_orb (line 13) | def calc_dop_orb(self, earth_radius=0, doppler_centroid=0, inplace=Fal...
method SAT_baseline (line 53) | def SAT_baseline(self, other, tail=None, debug=False):
method SAT_llt2rat (line 120) | def SAT_llt2rat(self, coords=None, fromfile=None, tofile=None, precise...
method SAT_look (line 209) | def SAT_look(self, coords=None, fromfile=None, tofile=None, binary=Fal...
FILE: pygmtsar/pygmtsar/S1.py
class S1 (line 12) | class S1(tqdm_joblib):
method download_orbits (line 31) | def download_orbits(basedir: str, scenes: list | pd.DataFrame,
method scan_slc (line 190) | def scan_slc(datadir, orbit=None, mission=None, subswath=None, polariz...
method geoloc2bursts (line 407) | def geoloc2bursts(metapath):
method read_annotation (line 431) | def read_annotation(filename):
method get_geoloc (line 455) | def get_geoloc(annotation):
FILE: pygmtsar/pygmtsar/Stack.py
class Stack (line 14) | class Stack(Stack_export):
method __init__ (line 22) | def __init__(self, basedir, drop_if_exists=False):
method set_scenes (line 57) | def set_scenes(self, scenes):
method plot_AOI (line 81) | def plot_AOI(geometry=None, ax='auto', **kwargs):
method plot_POI (line 99) | def plot_POI(geometry=None, ax='auto', **kwargs):
method plot_scenes (line 119) | def plot_scenes(self, dem='auto', image=None, alpha=None, caption='Est...
method plots_AOI (line 148) | def plots_AOI(fg, geometry=None, **kwargs):
method plots_POI (line 173) | def plots_POI(fg, geometry=None, **kwargs):
FILE: pygmtsar/pygmtsar/Stack_align.py
class Stack_align (line 13) | class Stack_align(Stack_dem):
method _offset2shift (line 15) | def _offset2shift(self, xyz, rmax, amax, method='linear'):
method _get_topo_llt (line 51) | def _get_topo_llt(self, subswath, degrees, debug=False):
method _align_ref_subswath (line 92) | def _align_ref_subswath(self, subswath, debug=False):
method _align_rep_subswath (line 129) | def _align_rep_subswath(self, subswath, date=None, degrees=12.0/3600, ...
method baseline_table (line 275) | def baseline_table(self, n_jobs=-1, debug=False):
method compute_align (line 338) | def compute_align(self, geometry='auto', dates=None, n_jobs=-1, degree...
FILE: pygmtsar/pygmtsar/Stack_base.py
class Stack_base (line 14) | class Stack_base(tqdm_joblib, IO):
method __repr__ (line 16) | def __repr__(self):
method to_dataframe (line 19) | def to_dataframe(self):
method get_subswath_prefix (line 34) | def get_subswath_prefix(self, subswath, date=None):
method set_reference (line 48) | def set_reference(self, reference):
method get_reference (line 95) | def get_reference(self, subswath=None):
method get_repeat (line 122) | def get_repeat(self, subswath=None, date=None):
method get_subswaths (line 148) | def get_subswaths(self):
method get_subswath (line 184) | def get_subswath(self, subswath=None):
method get_pairs (line 206) | def get_pairs(self, pairs, dates=False):
method get_pairs_matrix (line 262) | def get_pairs_matrix(self, pairs):
method phase_to_positive_range (line 297) | def phase_to_positive_range(phase):
method phase_to_symmetric_range (line 320) | def phase_to_symmetric_range(phase):
FILE: pygmtsar/pygmtsar/Stack_dem.py
class Stack_dem (line 15) | class Stack_dem(Stack_reframe):
method get_extent_ra (line 19) | def get_extent_ra(self):
method get_geoid (line 42) | def get_geoid(self, grid=None):
method set_dem (line 73) | def set_dem(self, dem_filename):
method get_dem (line 113) | def get_dem(self):
method load_dem (line 172) | def load_dem(self, data, geometry='auto', buffer_degrees=None):
method download_dem (line 265) | def download_dem(self, geometry='auto', product='1s'):
FILE: pygmtsar/pygmtsar/Stack_detrend.py
class Stack_detrend (line 12) | class Stack_detrend(Stack_unwrap):
method regression (line 148) | def regression(data, variables, weight=None, algorithm='linear', degre...
method regression_linear (line 303) | def regression_linear(self, data, variables, weight=None, degree=1, wr...
method regression_sgd (line 326) | def regression_sgd(self, data, variables, weight=None, degree=1, wrap=...
method regression_xgboost (line 380) | def regression_xgboost(self, data, variables, weight=None, wrap=False,...
method polyfit (line 427) | def polyfit(self, data, weight=None, degree=0, days=None, count=None, ...
method regression_pairs (line 431) | def regression_pairs(self, data, weight=None, degree=0, days=None, cou...
method gaussian (line 590) | def gaussian(self, data, wavelength, truncate=3.0, resolution=60, debu...
method turbulence (line 1047) | def turbulence(self, phase, weight=None):
method velocity (line 1084) | def velocity(self, data):
method plot_velocity (line 1112) | def plot_velocity(self, data, caption='Velocity, [rad/year]',
method plot_velocity_los_mm (line 1144) | def plot_velocity_los_mm(self, data, caption='Velocity, [mm/year]',
method trend (line 1150) | def trend(self, data, dim='auto', degree=1):
method regression1d (line 1154) | def regression1d(self, data, dim='auto', degree=1, wrap=False):
FILE: pygmtsar/pygmtsar/Stack_export.py
class Stack_export (line 13) | class Stack_export(Stack_ps):
method as_geo (line 15) | def as_geo(self, da):
method as_vtk (line 58) | def as_vtk(dataset):
method export_geotiff (line 170) | def export_geotiff(self, data, name, caption='Exporting WGS84 GeoTIFF(...
method export_geojson (line 227) | def export_geojson(self, data, name, caption='Exporting WGS84 GeoJSON'...
method export_csv (line 340) | def export_csv(self, data, name, caption='Exporting WGS84 CSV', delimi...
method export_netcdf (line 456) | def export_netcdf(self, data, name, caption='Exporting WGS84 NetCDF', ...
method export_vtk (line 516) | def export_vtk(self, data, name, caption='Exporting WGS84 VTK(s)', top...
FILE: pygmtsar/pygmtsar/Stack_geocode.py
class Stack_geocode (line 13) | class Stack_geocode(Stack_sbas):
method compute_geocode (line 15) | def compute_geocode(self, coarsen=60.):
method geocode (line 59) | def geocode(self, geometry, z_offset=None):
method ra2ll (line 152) | def ra2ll(self, data, autoscale=True):
method ll2ra (line 325) | def ll2ra(self, data, autoscale=True):
FILE: pygmtsar/pygmtsar/Stack_incidence.py
class Stack_incidence (line 13) | class Stack_incidence(Stack_geocode):
method los_projection (line 15) | def los_projection(self, data):
method get_satellite_look_vector (line 242) | def get_satellite_look_vector(self):
method los_displacement_mm (line 264) | def los_displacement_mm(self, data):
method incidence_angle (line 308) | def incidence_angle(self):
method plot_incidence_angle (line 336) | def plot_incidence_angle(self, data='auto', caption='Incidence Angle i...
method vertical_displacement_mm (line 353) | def vertical_displacement_mm(self, data):
method eastwest_displacement_mm (line 379) | def eastwest_displacement_mm(self, data):
method elevation_m (line 412) | def elevation_m(self, data, baseline=1):
method compute_satellite_look_vector (line 457) | def compute_satellite_look_vector(self, interactive=False):
FILE: pygmtsar/pygmtsar/Stack_landmask.py
class Stack_landmask (line 12) | class Stack_landmask(Stack_multilooking):
method set_landmask (line 14) | def set_landmask(self, landmask_filename):
method get_landmask (line 41) | def get_landmask(self):
method download_landmask (line 83) | def download_landmask(self, product='1s', debug=False):
method load_landmask (line 87) | def load_landmask(self, data, geometry='auto'):
method plot_landmask (line 155) | def plot_landmask(self, landmask='auto', caption='Land Mask', cmap='bi...
FILE: pygmtsar/pygmtsar/Stack_lstsq.py
class Stack_lstsq (line 13) | class Stack_lstsq(Stack_tidal):
method lstsq1d (line 16) | def lstsq1d(x, w, matrix, cumsum=True):
method lstsq_matrix (line 87) | def lstsq_matrix(self, pairs):
method lstsq_matrix_edge (line 106) | def lstsq_matrix_edge(self, pairs):
method lstsq (line 126) | def lstsq(self, data, weight=None, matrix='auto', cumsum=True, debug=F...
method rmse (line 310) | def rmse(self, data, solution, weight=None):
method plot_displacement (line 359) | def plot_displacement(self, data, caption='Cumulative LOS Displacement...
method plot_displacement_los_mm (line 388) | def plot_displacement_los_mm(self, data, caption='Cumulative LOS Displ...
method plot_displacements (line 395) | def plot_displacements(self, data, caption='Cumulative LOS Displacemen...
method plot_displacements_los_mm (line 430) | def plot_displacements_los_mm(self, data, caption='Cumulative LOS Disp...
method plot_rmse (line 436) | def plot_rmse(self, data, caption='RMSE, [rad]', cmap='turbo',
method plot_rmse_los_mm (line 468) | def plot_rmse_los_mm(self, data, caption='RMSE, [mm]', cmap='turbo',
FILE: pygmtsar/pygmtsar/Stack_multilooking.py
class Stack_multilooking (line 13) | class Stack_multilooking(Stack_phasediff):
method decimator (line 16) | def decimator(self, resolution=60, grid=(1, 4), func='mean', debug=Fal...
method multilooking (line 239) | def multilooking(self, data, weight=None, wavelength=None, coarsen=Non...
FILE: pygmtsar/pygmtsar/Stack_orbits.py
class Stack_orbits (line 12) | class Stack_orbits(Stack_prm):
method download_orbits (line 15) | def download_orbits(self, strict=False):
FILE: pygmtsar/pygmtsar/Stack_phasediff.py
class Stack_phasediff (line 15) | class Stack_phasediff(Stack_topo):
method compute_interferogram (line 17) | def compute_interferogram(self, pairs, name, subswath=None, weight=Non...
method compute_interferogram_singlelook (line 135) | def compute_interferogram_singlelook(self, pairs, name, subswath=None,...
method compute_interferogram_multilook (line 146) | def compute_interferogram_multilook(self, pairs, name, subswath=None, ...
method interferogram (line 156) | def interferogram(phase, debug=False):
method correlation (line 180) | def correlation(self, phase, intensity, debug=False):
method phasediff (line 465) | def phasediff(self, pairs, data='auto', topo='auto', phase=None, metho...
method goldstein (line 538) | def goldstein(self, phase, corr, psize=32, debug=False):
method plot_phase (line 655) | def plot_phase(self, data, caption='Phase, [rad]',
method plot_phases (line 685) | def plot_phases(self, data, caption='Phase, [rad]', cols=4, size=4, nb...
method plot_interferogram (line 720) | def plot_interferogram(self, data, caption='Phase, [rad]', cmap='gist_...
method plot_interferograms (line 737) | def plot_interferograms(self, data, caption='Phase, [rad]', cols=4, si...
method plot_correlation (line 760) | def plot_correlation(self, data, caption='Correlation', cmap='gray', a...
method plot_correlations (line 775) | def plot_correlations(self, data, caption='Correlation', cmap='auto', ...
method plot_correlation_stack (line 803) | def plot_correlation_stack(self, data, threshold=None, caption='Correl...
FILE: pygmtsar/pygmtsar/Stack_prm.py
class Stack_prm (line 13) | class Stack_prm(Stack_base):
method PRM (line 15) | def PRM(self, date=None, subswath=None):
method PRM_merged (line 46) | def PRM_merged(self, date=None, offsets='auto'):
method prm_offsets (line 55) | def prm_offsets(self, debug=False):
FILE: pygmtsar/pygmtsar/Stack_ps.py
class Stack_ps (line 13) | class Stack_ps(Stack_stl):
method get_ps (line 15) | def get_ps(self, name='ps'):
method compute_ps (line 28) | def compute_ps(self, geometry=None, dates=None, data='auto', name='ps'...
method psfunction (line 69) | def psfunction(self, ps='auto', name='ps'):
method plot_psfunction (line 76) | def plot_psfunction(self, data='auto', caption='PS Function', cmap='gr...
method plot_amplitudes (line 163) | def plot_amplitudes(self, dates=None, data='auto', norm='auto', func=N...
FILE: pygmtsar/pygmtsar/Stack_reframe.py
class Stack_reframe (line 14) | class Stack_reframe(Stack_reframe_gmtsar):
method _reframe_subswath (line 16) | def _reframe_subswath(self, subswath, date, geometry, debug=False):
method compute_reframe (line 274) | def compute_reframe(self, geometry=None, n_jobs=-1, queue=16, caption=...
FILE: pygmtsar/pygmtsar/Stack_reframe_gmtsar.py
class Stack_reframe_gmtsar (line 13) | class Stack_reframe_gmtsar(Stack_orbits):
method _ext_orb_s1a (line 15) | def _ext_orb_s1a(self, subswath, date=None, debug=False):
method _make_s1a_tops (line 62) | def _make_s1a_tops(self, subswath, date=None, mode=0, rshift_fromfile=...
method _assemble_tops (line 131) | def _assemble_tops(self, subswath, date, azi_1, azi_2, debug=False):
FILE: pygmtsar/pygmtsar/Stack_sbas.py
class Stack_sbas (line 14) | class Stack_sbas(Stack_detrend):
method baseline_pairs (line 16) | def baseline_pairs(self, days=100, meters=None, invert=False, **kwargs):
method sbas_pairs_filter_dates (line 20) | def sbas_pairs_filter_dates(self, pairs, dates):
method sbas_pairs_limit (line 23) | def sbas_pairs_limit(self, pairs, limit=2, iterations=1):
method sbas_pairs (line 51) | def sbas_pairs(self, days=None, meters=None, invert=False, dates=None):
method sbas_pairs_extend (line 134) | def sbas_pairs_extend(self, pairs):
method sbas_pairs_covering (line 157) | def sbas_pairs_covering(pairs, column, count, func='min'):
method sbas_pairs_covering_deviation (line 183) | def sbas_pairs_covering_deviation(self, pairs, count, column='stddev'):
method sbas_pairs_covering_correlation (line 186) | def sbas_pairs_covering_correlation(self, pairs, count, column='corr'):
method sbas_pairs_extend (line 189) | def sbas_pairs_extend(self, pairs):
method correlation_extend (line 213) | def correlation_extend(self, corr, pairs):
method interferogram_extend (line 233) | def interferogram_extend(self, phase, pairs, wrap=True):
method sbas_pairs_fill (line 255) | def sbas_pairs_fill(self, data):
method interferogram_fill (line 283) | def interferogram_fill(self, phase):
method correlation_fill (line 301) | def correlation_fill(self, corr):
method baseline_plot (line 319) | def baseline_plot(self, pairs, caption='Baseline'):
method plot_baseline (line 323) | def plot_baseline(self, pairs, caption='Baseline'):
method plot_baseline_duration (line 360) | def plot_baseline_duration(self, pairs, interval_days=6, caption='Dura...
method plot_baseline_attribute (line 422) | def plot_baseline_attribute(self, pairs, pairs_best=None, column=None,...
method plot_baseline_correlation (line 456) | def plot_baseline_correlation(self, pairs, pairs_best=None, column='co...
method plot_baseline_deviation (line 460) | def plot_baseline_deviation(self, pairs, pairs_best=None, column='stdd...
method plot_baseline_displacement (line 464) | def plot_baseline_displacement(self, phase, corr=None, caption='LSQ an...
method plot_baseline_displacement_los_mm (line 620) | def plot_baseline_displacement_los_mm(self, phase, corr=None, caption=...
FILE: pygmtsar/pygmtsar/Stack_stl.py
class Stack_stl (line 13) | class Stack_stl(Stack_lstsq):
method stl1d (line 16) | def stl1d(ts, dt, dt_periodic, periods=52, robust=False):
method stl_periodic (line 71) | def stl_periodic(dates, freq='W'):
method stl (line 88) | def stl(self, data, freq='W', periods=52, robust=False):
FILE: pygmtsar/pygmtsar/Stack_tidal.py
class Stack_tidal (line 12) | class Stack_tidal(Stack_incidence):
method tidal_los_rad (line 84) | def tidal_los_rad(self, stack):
method tidal_los (line 173) | def tidal_los(self, stack):
method tidal_los_rad (line 246) | def tidal_los_rad(self, stack):
method tidal_correction_wrap (line 252) | def tidal_correction_wrap(self, stack):
method get_tidal (line 258) | def get_tidal(self):
method _tidal (line 261) | def _tidal(self, date, grid):
method compute_tidal (line 292) | def compute_tidal(self, dates=None, coarsen=32, n_jobs=-1, interactive...
method plot_tidal (line 533) | def plot_tidal(self, data=None, caption='Tidal Displacement Amplitude,...
method plot_tidal_los_rad (line 577) | def plot_tidal_los_rad(self, data=None, caption='Tidal Phase, [rad]', ...
FILE: pygmtsar/pygmtsar/Stack_topo.py
class Stack_topo (line 13) | class Stack_topo(Stack_trans_inv):
method get_topo (line 15) | def get_topo(self):
method plot_topo (line 35) | def plot_topo(self, data='auto', caption='Topography on WGS84 ellipsoi...
method topo_phase (line 67) | def topo_phase(self, dates, topo='auto', grid=None, method='nearest', ...
method topo_slope (line 238) | def topo_slope(self, topo='auto', edge_order=1):
method topo_variation (line 256) | def topo_variation(self):
FILE: pygmtsar/pygmtsar/Stack_trans.py
class Stack_trans (line 13) | class Stack_trans(Stack_align):
method define_trans_grid (line 15) | def define_trans_grid(self, coarsen):
method get_trans (line 42) | def get_trans(self):
method compute_trans (line 63) | def compute_trans(self, coarsen, dem='auto', interactive=False):
FILE: pygmtsar/pygmtsar/Stack_trans_inv.py
class Stack_trans_inv (line 13) | class Stack_trans_inv(Stack_trans):
method get_trans_inv (line 15) | def get_trans_inv(self):
method compute_trans_inv (line 36) | def compute_trans_inv(self, coarsen, trans='auto', interactive=False):
FILE: pygmtsar/pygmtsar/Stack_unwrap.py
class Stack_unwrap (line 16) | class Stack_unwrap(Stack_unwrap_snaphu):
method wrap (line 19) | def wrap(data_pairs):
method unwrap_pairs (line 212) | def unwrap_pairs(data : np.ndarray,
method unwrap_matrix (line 390) | def unwrap_matrix(self, pairs):
method unwrap1d (line 411) | def unwrap1d(self, data, weight=None, tolerance=np.pi/2):
method unwrap_snaphu (line 464) | def unwrap_snaphu(self, phase, weight=None, conf=None, conncomp=False):
method interpolate_nearest (line 521) | def interpolate_nearest(self, data, search_radius_pixels=None):
method conncomp_main (line 554) | def conncomp_main(data, start=0):
method plot_conncomps (line 577) | def plot_conncomps(self, data, caption='Connected Components', cols=4,...
FILE: pygmtsar/pygmtsar/Stack_unwrap_snaphu.py
class Stack_unwrap_snaphu (line 12) | class Stack_unwrap_snaphu(Stack_landmask):
method snaphu (line 17) | def snaphu(self, phase, corr=None, conf=None, conncomp=False, debug=Fa...
method snaphu_config (line 155) | def snaphu_config(self, defomax=0, **kwargs):
FILE: pygmtsar/pygmtsar/Tiles.py
class Tiles (line 13) | class Tiles(datagrid, tqdm_joblib):
method _download_tile (line 30) | def _download_tile(self, base_url, path_id, tile_id, file_id, archive,...
method download (line 170) | def download(self, base_url, path_id, tile_id, archive, filetype,
method download_landmask (line 237) | def download_landmask(self, geometry, filename=None, product='1s', ski...
method download_dem_glo (line 264) | def download_dem_glo(self, geometry, filename=None, product='1s', skip...
method download_dem_srtm (line 291) | def download_dem_srtm(self, geometry, filename=None, product='1s', ski...
method download_dem_alos (line 318) | def download_dem_alos(self, geometry, filename=None, product='1s', ski...
method download_dem (line 341) | def download_dem(self, geometry, filename=None, product='1s', provider...
FILE: pygmtsar/pygmtsar/XYZTiles.py
class XYZTiles (line 13) | class XYZTiles(datagrid, tqdm_joblib):
method download_googlemaps (line 29) | def download_googlemaps(self, geometry, zoom, filename=None, **kwargs):
method download_googlesatellite (line 33) | def download_googlesatellite(self, geometry, zoom, filename=None, **kw...
method download_googlesatellitehybrid (line 37) | def download_googlesatellitehybrid(self, geometry, zoom, filename=None...
method download_openstreetmap (line 41) | def download_openstreetmap(self, geometry, zoom, filename=None, **kwar...
method download_openrailwaymap (line 45) | def download_openrailwaymap(self, geometry, zoom, filename=None, backg...
method download (line 59) | def download(self, geometry, zoom, filename=None, url='https://mt1.goo...
FILE: pygmtsar/pygmtsar/datagrid.py
class datagrid (line 11) | class datagrid:
method _compression (line 53) | def _compression(self, shape=None, chunksize=None):
method is_ra (line 118) | def is_ra(grid):
method is_geo (line 141) | def is_geo(grid):
method cropna (line 165) | def cropna(das, index=-1):
method gaussian_kernel (line 212) | def gaussian_kernel(size=(5,5), std=(1,1)):
method get_bounds (line 244) | def get_bounds(geometry):
method nearest_grid (line 332) | def nearest_grid(self, in_grid, search_radius_pixels=None):
method get_spacing (line 434) | def get_spacing(self, grid=1):
method get_coarsen (line 469) | def get_coarsen(self, factor):
method calculate_coarsen_start (line 498) | def calculate_coarsen_start(da, name, spacing, grid_factor=1):
FILE: pygmtsar/pygmtsar/tqdm_dask.py
class TqdmDaskProgress (line 15) | class TqdmDaskProgress(ProgressBar):
method __init__ (line 23) | def __init__(
method loop (line 61) | def loop(self):
method loop (line 80) | def loop(self, value):
method _draw_bar (line 94) | def _draw_bar(self, remaining, all, **kwargs):
method _draw_stop (line 112) | def _draw_stop(self, **kwargs):
function tqdm_dask (line 121) | def tqdm_dask(futures, **kwargs):
FILE: pygmtsar/pygmtsar/tqdm_joblib.py
class tqdm_joblib (line 11) | class tqdm_joblib:
method tqdm_joblib (line 22) | def tqdm_joblib(tqdm_object):
FILE: pygmtsar/pygmtsar/utils.py
class utils (line 10) | class utils():
method interp2d_like (line 39) | def interp2d_like(data, grid, method='cubic', **kwargs):
method nanconvolve2d_gaussian (line 196) | def nanconvolve2d_gaussian(data,
method histogram (line 267) | def histogram(data, bins, range):
method corrcoef (line 278) | def corrcoef(data, mask_diagonal=False):
method binary_erosion (line 322) | def binary_erosion(data, *args, **kwargs):
method binary_dilation (line 332) | def binary_dilation(data, *args, **kwargs):
method binary_opening (line 342) | def binary_opening(data, *args, **kwargs):
method binary_closing (line 355) | def binary_closing(data, *args, **kwargs):
FILE: pygmtsar/setup.py
function get_version (line 15) | def get_version():
FILE: tests/goldenvalley.py
function mpl_settings (line 86) | def mpl_settings(settings):
FILE: tests/imperial_valley_2015.py
function mpl_settings (line 91) | def mpl_settings(settings):
function load_mesh (line 336) | def load_mesh(plotter, index, offset=None):
function load_mesh (line 359) | def load_mesh(plotter, index):
function point_to_rectangle (line 399) | def point_to_rectangle(row):
function velocity_to_color (line 474) | def velocity_to_color(velocity, limits=[-60, 60]):
FILE: tests/iran–iraq_earthquake_2017.py
function mpl_settings (line 84) | def mpl_settings(settings):
FILE: tests/kalkarindji_flooding_2024.py
function mpl_settings (line 84) | def mpl_settings(settings):
function read_mesh (line 254) | def read_mesh(name):
FILE: tests/la_cumbre_volcano_eruption_2020.py
function mpl_settings (line 84) | def mpl_settings(settings):
FILE: tests/lakesarez_landslides_2017.py
function mpl_settings (line 88) | def mpl_settings(settings):
FILE: tests/pico_do_fogo_volcano_eruption_2014.py
function mpl_settings (line 84) | def mpl_settings(settings):
FILE: tests/türkiye_earthquakes_2023.py
function mpl_settings (line 86) | def mpl_settings(settings):
FILE: tests/türkiye_elevation_2019.py
function mpl_settings (line 86) | def mpl_settings(settings):
FILE: todo/orbit/hermite_c.c
function hermite_c (line 20) | void hermite_c(double *x, double *y, double *z, int nmax, int nval, doub...
FILE: todo/orbit/test_hermite_c.c
function test_hermite_c (line 9) | void test_hermite_c() {
function main (line 59) | int main() {
FILE: todo/phasefilt_Goldstein.c
function die (line 8) | void die(char *s1, char *s2) {
function apply_pspec (line 27) | int apply_pspec(int m, int n, float alpha, float complex *in, float comp...
function main (line 43) | int main() {
FILE: todo/pixel_decimator.py
function pixel_decimator (line 4) | def pixel_decimator(self, resolution_meters=60, grid=(1, 4), debug=False):
FILE: todo/test_gmt_fft_2d.c
function main (line 7) | int main() {
FILE: todo/test_gmt_fft_2d.py
function main (line 3) | def main():
FILE: todo/test_gmt_ifft_2d.c
function main (line 7) | int main() {
FILE: todo/test_gmt_ifft_2d.py
function main (line 3) | def main():
FILE: todo/test_hermite.py
function test_hermite (line 4) | def test_hermite():
FILE: todo/test_make_wgt.c
function make_wgt (line 7) | int make_wgt(float *wgt, int nxp, int nyp) {
function main (line 26) | int main() {
FILE: todo/test_phasediff.c
function die (line 6) | void die(char *s1, char *s2) {
function calc_drho (line 10) | void calc_drho(int xdim, double *range, double *topo, double avet, doubl...
function main (line 53) | int main() {
Condensed preview — 112 files, each showing path, character count, and a content snippet (24,216K chars of structured content in total).
[
{
"path": ".github/FUNDING.yml",
"chars": 772,
"preview": "# These are supported funding model platforms\n\ngithub: # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [u"
},
{
"path": ".github/ISSUE_TEMPLATE/bug_report.md",
"chars": 652,
"preview": "---\nname: Bug report\nabout: Create a report to help us improve\ntitle: \"[Bug]: \"\nlabels: ''\nassignees: ''\n\n---\n\n**Describ"
},
{
"path": ".github/ISSUE_TEMPLATE/feature_request.md",
"chars": 1412,
"preview": "---\nname: Feature request\nabout: Suggest an idea for PyGMTSAR\ntitle: \"[Feature]: \"\nlabels: ''\nassignees: ''\n\n---\n\n**Is y"
},
{
"path": ".github/ISSUE_TEMPLATE/general-help.md",
"chars": 1099,
"preview": "---\nname: General help\nabout: Help with problems users met during software installation, data processing, etc.\ntitle: \"["
},
{
"path": ".github/workflows/dockerhub-cache-cleanup.yml",
"chars": 853,
"preview": "name: Cleanup Docker Hub Cache\n\non:\n schedule:\n - cron: '0 0 * * 0'\n\njobs:\n delete-cache:\n runs-on: ubuntu-lates"
},
{
"path": ".github/workflows/macos.yml",
"chars": 2242,
"preview": "# This workflow will install Python dependencies, run tests and lint with a single version of Python\n# For more informat"
},
{
"path": ".github/workflows/ubuntu.yml",
"chars": 2906,
"preview": "# This workflow will install Python dependencies, run tests and lint with a single version of Python\n# For more informat"
},
{
"path": ".github/workflows/ubuntu_docker.yml",
"chars": 1520,
"preview": "name: Docker Build (Ubuntu amd64/arm64)\n\n#on:\n# workflow_dispatch:\n\non:\n push:\n branches: [ \"pygmtsar2\" ]\n pull_re"
},
{
"path": ".gitignore",
"chars": 8737,
"preview": "autom4te.cache/\nbin/\nconfig.log\nconfig.mk\nconfig.status\nconfigure\ngmtsar/SAT_baseline\ngmtsar/SAT_baseline.o\ngmtsar/SAT_l"
},
{
"path": "CITATION.cff",
"chars": 331,
"preview": "cff-version: 1.2.0\nmessage: \"If you use this software, please cite it as below.\"\nauthors:\n- family-names: \"Pechnikov\"\n "
},
{
"path": "CODE_OF_CONDUCT.md",
"chars": 5224,
"preview": "# Contributor Covenant Code of Conduct\n\n## Our Pledge\n\nWe as members, contributors, and leaders pledge to make participa"
},
{
"path": "LICENSE.TXT",
"chars": 1563,
"preview": "BSD 3-Clause License\n\nCopyright (c) 2023, Alexey Pechnikov, https://orcid.org/0000-0001-9626-8615 (ORCID)\nAll rights res"
},
{
"path": "README.md",
"chars": 11908,
"preview": "[](https://github.com/AlexeyPechnikov/pygm"
},
{
"path": "docker/README.md",
"chars": 9584,
"preview": "[](https://github.com/AlexeyPechnikov/pygm"
},
{
"path": "docker/pygmtsar.Dockerfile",
"chars": 6948,
"preview": "# https://jupyter-docker-stacks.readthedocs.io/en/latest/using/selecting.html\n# host platform compilation:\n# docker buil"
},
{
"path": "docker/pygmtsar.ubuntu2204.Dockerfile",
"chars": 2747,
"preview": "# https://jupyter-docker-stacks.readthedocs.io/en/latest/using/selecting.html\n# https://github.com/jupyter/docker-stacks"
},
{
"path": "docker/requirements.json",
"chars": 11133,
"preview": "[{\"name\": \"adjustText\", \"version\": \"1.0.4\"}, {\"name\": \"affine\", \"version\": \"2.4.0\"}, {\"name\": \"aiohttp\", \"version\": \"3.9"
},
{
"path": "docker/requirements.sh",
"chars": 717,
"preview": "#!/bin/bash\n\n# Get the list of all installed packages with their versions inside Docker container\n#pip list --format=jso"
},
{
"path": "notebooks/dload.sh",
"chars": 1083,
"preview": "#!/bin/sh\n\nDIR=\"notebooks\"\nif [ $# -gt 0 ]; then\n DIR=\"$1\"\nfi\n\nrm -fr README.md \"$DIR\"\nmkdir -p \"$DIR\"\nwget -c 'https"
},
{
"path": "pubs/README.md",
"chars": 3210,
"preview": "## Publications that Cite PyGMTSAR\n\nHave you published a paper or released a publicly accessible project that **explicit"
},
{
"path": "pygmtsar/LICENSE.txt",
"chars": 1563,
"preview": "BSD 3-Clause License\n\nCopyright (c) 2023, Alexey Pechnikov, https://orcid.org/0000-0001-9626-8615 (ORCID)\nAll rights res"
},
{
"path": "pygmtsar/MANIFEST.in",
"chars": 82,
"preview": "include pygmtsar/data/geoid_egm96_icgem.grd\ninclude pygmtsar/data/google_colab.sh\n"
},
{
"path": "pygmtsar/pygmtsar/ASF.py",
"chars": 37579,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/AWS.py",
"chars": 927,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/GMT.py",
"chars": 2219,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/IO.py",
"chars": 43340,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n#.\n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/MultiInstanceManager.py",
"chars": 4934,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/NCubeVTK.py",
"chars": 8891,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/PRM.py",
"chars": 54253,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/PRM_gmtsar.py",
"chars": 13046,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/S1.py",
"chars": 23085,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/Stack.py",
"chars": 8363,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/Stack_align.py",
"chars": 16593,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/Stack_base.py",
"chars": 11678,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/Stack_dem.py",
"chars": 10317,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/Stack_detrend.py",
"chars": 58758,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/Stack_export.py",
"chars": 29326,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n#.\n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/Stack_geocode.py",
"chars": 20514,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/Stack_incidence.py",
"chars": 19994,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/Stack_landmask.py",
"chars": 6278,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/Stack_lstsq.py",
"chars": 21766,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/Stack_multilooking.py",
"chars": 16316,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/Stack_orbits.py",
"chars": 2516,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/Stack_phasediff.py",
"chars": 40449,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/Stack_prm.py",
"chars": 5665,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/Stack_ps.py",
"chars": 9915,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/Stack_reframe.py",
"chars": 16479,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/Stack_reframe_gmtsar.py",
"chars": 7572,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/Stack_sbas.py",
"chars": 29325,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/Stack_stl.py",
"chars": 10525,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/Stack_tidal.py",
"chars": 26282,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/Stack_topo.py",
"chars": 10851,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/Stack_trans.py",
"chars": 11681,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/Stack_trans_inv.py",
"chars": 10430,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/Stack_unwrap.py",
"chars": 29559,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/Stack_unwrap_snaphu.py",
"chars": 8812,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/Tiles.py",
"chars": 19813,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/XYZTiles.py",
"chars": 10902,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/__init__.py",
"chars": 1134,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/data/google_colab.sh",
"chars": 1643,
"preview": "#!/bin/sh\n\n# Install GMTSAR if needed\ncount=$(ls /usr/local | grep -c GMTSAR)\nif [ \"$count\" -eq 0 ]; then\n export DEB"
},
{
"path": "pygmtsar/pygmtsar/datagrid.py",
"chars": 27793,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/tqdm_dask.py",
"chars": 4073,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/tqdm_joblib.py",
"chars": 2073,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/pygmtsar/utils.py",
"chars": 15538,
"preview": "# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# This file is part of the "
},
{
"path": "pygmtsar/setup.py",
"chars": 3415,
"preview": "#!/usr/bin/env python\n# ----------------------------------------------------------------------------\n# PyGMTSAR\n# \n# Thi"
},
{
"path": "tests/goldenvalley.py",
"chars": 27974,
"preview": "# -*- coding: utf-8 -*-\n\"\"\"GoldenValley.ipynb\n\nAutomatically generated by Colab.\n\nOriginal file is located at\n https:"
},
{
"path": "tests/imperial_valley_2015.py",
"chars": 20966,
"preview": "# -*- coding: utf-8 -*-\n\"\"\"Imperial_Valley_2015.ipynb\n\nAutomatically generated by Colab.\n\nOriginal file is located at\n "
},
{
"path": "tests/iran–iraq_earthquake_2017.py",
"chars": 14612,
"preview": "# -*- coding: utf-8 -*-\n\"\"\"Iran–Iraq_Earthquake_2017.ipynb\n\nAutomatically generated by Colab.\n\nOriginal file is located "
},
{
"path": "tests/kalkarindji_flooding_2024.py",
"chars": 11337,
"preview": "# -*- coding: utf-8 -*-\n\"\"\"Kalkarindji_Flooding_2024.ipynb\n\nAutomatically generated by Colab.\n\nOriginal file is located "
},
{
"path": "tests/la_cumbre_volcano_eruption_2020.py",
"chars": 16207,
"preview": "# -*- coding: utf-8 -*-\n\"\"\"La_Cumbre_volcano_eruption_2020.ipynb\n\nAutomatically generated by Colab.\n\nOriginal file is lo"
},
{
"path": "tests/lakesarez_landslides_2017.py",
"chars": 26128,
"preview": "# -*- coding: utf-8 -*-\n\"\"\"LakeSarez_Landslides_2017.ipynb\n\nAutomatically generated by Colab.\n\nOriginal file is located "
},
{
"path": "tests/pico_do_fogo_volcano_eruption_2014.py",
"chars": 14843,
"preview": "# -*- coding: utf-8 -*-\n\"\"\"Pico_do_Fogo_Volcano_Eruption_2014.ipynb\n\nAutomatically generated by Colab.\n\nOriginal file is"
},
{
"path": "tests/türkiye_earthquakes_2023.py",
"chars": 26124,
"preview": "# -*- coding: utf-8 -*-\n\"\"\"Türkiye_Earthquakes_2023.ipynb\n\nAutomatically generated by Colab.\n\nOriginal file is located "
},
{
"path": "tests/türkiye_elevation_2019.py",
"chars": 15427,
"preview": "# -*- coding: utf-8 -*-\n\"\"\"Türkiye_Elevation_2019.ipynb\n\nAutomatically generated by Colab.\n\nOriginal file is located at\n"
},
{
"path": "todo/PRM.robust_trend2d.ipynb",
"chars": 305812,
"preview": "{\n \"cells\": [\n {\n \"cell_type\": \"markdown\",\n \"id\": \"64c24af2\",\n \"metadata\": {},\n \"source\": [\n \"# My realizati"
},
{
"path": "todo/baseline/S1_20201222_ALL_F2.LED",
"chars": 31377,
"preview": "281 2020 356 51252.000000 10.000 \n2020 356 51252.000000 2126632.873229 4552169.567944 -4989856.255835 3884.01937700 3895"
},
{
"path": "todo/baseline/S1_20201222_ALL_F2.PRM",
"chars": 1230,
"preview": "num_valid_az = 1456\nnrows = 1456\nfirst_line = 1\ndeskew = n\ncaltone = 0.0\nst_rng_bin = 1\nFlip_iq = n\noffset_video = n\naz_"
},
{
"path": "todo/baseline/S1_20210103_ALL_F2.LED",
"chars": 30815,
"preview": "281 2021 2 51252.000000 10.000 \n2021 2 51252.000000 2128971.046230 4554424.773679 -4986805.208206 3882.94245500 3892.555"
},
{
"path": "todo/baseline/S1_20210103_ALL_F2.PRM",
"chars": 1228,
"preview": "num_valid_az = 1456\nnrows = 1456\nfirst_line = 1\ndeskew = n\ncaltone = 0.0\nst_rng_bin = 1\nFlip_iq = n\noffset_video = n\naz_"
},
{
"path": "todo/baseline/SAT_baseline.sh",
"chars": 301,
"preview": "#!/bin/sh\n\nrm -f SAT_baseline\ngcc \\\n -I/usr/local/GMTSAR/gmtsar -L/usr/local/GMTSAR/gmtsar -lgmtsar \\\n -I/opt/home"
},
{
"path": "todo/baseline/baseline.v7.ipynb",
"chars": 52007,
"preview": "{\n \"cells\": [\n {\n \"cell_type\": \"markdown\",\n \"id\": \"ce0f962c\",\n \"metadata\": {},\n \"source\": [\n \"# Test orbit f"
},
{
"path": "todo/branch_cut.ipynb",
"chars": 3396050,
"preview": "{\n \"cells\": [\n {\n \"cell_type\": \"code\",\n \"execution_count\": 1,\n \"id\": \"4d39e6c0-eacd-4c1d-8373-7213d03e97b5\",\n \""
},
{
"path": "todo/bursts_extraction_remote.ipynb",
"chars": 1803937,
"preview": "{\n \"cells\": [\n {\n \"cell_type\": \"markdown\",\n \"id\": \"08ed44a8-9a05-417e-ab40-d8d8adba7f0d\",\n \"metadata\": {},\n \"so"
},
{
"path": "todo/gmt_fft_2d.md",
"chars": 6437,
"preview": "## Check GMTSAR Utility phasefilt Function gmt_fft_2d\n\n### Make Test Direct Transform \n\n```c\n#include <stdio.h>\n#include"
},
{
"path": "todo/interp.ipynb",
"chars": 1424791,
"preview": "{\n \"cells\": [\n {\n \"cell_type\": \"code\",\n \"execution_count\": 50,\n \"id\": \"c16e2c40-212c-4731-b4e6-00986d608069\",\n "
},
{
"path": "todo/orbit/S1_20150521_ALL_F1.LED",
"chars": 31885,
"preview": "281 2015 140 48064.000000 10.000 \n2015 140 48064.000000 1941649.304103 2690324.257829 6239973.534877 -1656.75984900 -659"
},
{
"path": "todo/orbit/S1_20150521_ALL_F1.extend.LED",
"chars": 45056,
"preview": "421 2015 140 47364.000000 10.000000 \n2015 140 47364.000000 2155311.749264 8995848.779606 -711390.418660 1526.195265 -886"
},
{
"path": "todo/orbit/hermite_c.c",
"chars": 3015,
"preview": "/*******************************************************************************\n * Hermite orbit interpolator based on "
},
{
"path": "todo/orbit/read_orb_hermite.ipynb",
"chars": 93974,
"preview": "{\n \"cells\": [\n {\n \"cell_type\": \"markdown\",\n \"id\": \"ce0f962c\",\n \"metadata\": {},\n \"source\": [\n \"# Test orbit f"
},
{
"path": "todo/orbit/test_hermite_c.c",
"chars": 3374,
"preview": "#include <stdio.h>\n#include <assert.h>\n#include <math.h>\n#include \"gmtsar.h\"\n#include \"lib_functions.h\"\n\nvoid hermite_c("
},
{
"path": "todo/phasediff.md",
"chars": 828,
"preview": "## Check GMTSAR Utility phasediff\n\n```c\npshif = Cexp(pha);\n\niptr2[k] = Conjg(iptr2[k]); \n\nintfp[k] = Cmul(intfp[k], iptr"
},
{
"path": "todo/phasediff_calc_drho.md",
"chars": 4393,
"preview": "## Check GMTSAR Utility phasediff Function calc_drho\n\n### Make Test\n\n```c\n#include <stdio.h>\n#include <math.h>\n#include "
},
{
"path": "todo/phasediff_spline_,evals_.md",
"chars": 5511,
"preview": "## Check GMTSAR Functions evals\\_, evals\\_\n\n### Make Test\n\n```c\n#include <stdio.h>\n\nvoid spline_(int *istart, int *nn, d"
},
{
"path": "todo/phasefilt.ipynb",
"chars": 2475678,
"preview": "{\n \"cells\": [\n {\n \"cell_type\": \"markdown\",\n \"id\": \"c195cc87\",\n \"metadata\": {},\n \"source\": [\n \"# Pure Python "
},
{
"path": "todo/phasefilt_Goldstein.c",
"chars": 1929,
"preview": "#include <stdio.h>\n#include <stdlib.h>\n#include <math.h>\n#include <complex.h>\n\n// code from GMTSAR\n// https://github.com"
},
{
"path": "todo/phasefilt_Goldstein.md",
"chars": 5272,
"preview": "## Check GMTSAR Utility phasefilt Function apply_pspec (Goldstein filter)\n\n### Make Test\n\n```c\n#include <stdio.h>\n#inclu"
},
{
"path": "todo/phasefilt_calc_corr.md",
"chars": 4564,
"preview": "## Check GMTSAR Utility phasefilt Functions calc_corr, phasefilt_read_data\n\n### Notes\n\n```\n/* if amp files are defined s"
},
{
"path": "todo/phasefilt_lazy.ipynb",
"chars": 7774353,
"preview": "{\n \"cells\": [\n {\n \"cell_type\": \"markdown\",\n \"id\": \"fe8609be\",\n \"metadata\": {},\n \"source\": [\n \"# Pure Python "
},
{
"path": "todo/phasefilt_make_wgt.md",
"chars": 3057,
"preview": "## Check GMTSAR Utility phasefilt Function make_wgt\n\n### Notes\n\nMatrix should be 2n*2m size.\n\n```\n\t/* create weights for"
},
{
"path": "todo/pixel_decimator.py",
"chars": 5012,
"preview": "# use weighted decimation\n# amplify by weights and multi-look and divide by weights multi-look\n#decimator = lambda da: d"
},
{
"path": "todo/test_gmt_fft_2d.c",
"chars": 1656,
"preview": "#include <stdio.h>\n#include <stdlib.h>\n#include <math.h>\n#include <complex.h>\n#include \"gmt.h\"\n\nint main() {\n void *A"
},
{
"path": "todo/test_gmt_fft_2d.py",
"chars": 640,
"preview": "import numpy as np\n\ndef main():\n n_columns = 4\n n_rows = 4\n \n data = np.array([\n [1 + 0j, 2 + 0j, 3 +"
},
{
"path": "todo/test_gmt_ifft_2d.c",
"chars": 1300,
"preview": "#include <stdio.h>\n#include <stdlib.h>\n#include <math.h>\n#include <complex.h>\n#include \"gmt.h\"\n\nint main() {\n void *A"
},
{
"path": "todo/test_gmt_ifft_2d.py",
"chars": 719,
"preview": "import numpy as np\n\ndef main():\n n_columns = 4\n n_rows = 4\n \n forward_transformed_data = np.array([\n "
},
{
"path": "todo/test_hermite.py",
"chars": 1370,
"preview": "import numpy as np\nfrom numpy.polynomial.hermite import hermval\n\ndef test_hermite():\n x = np.array([\n 48064.00"
},
{
"path": "todo/test_hermite_interpolation.py",
"chars": 583,
"preview": "import numpy as np\nfrom numpy.polynomial.hermite import hermfit, hermval\n\n# Define the data points\nx = np.array([0, 1, 2"
},
{
"path": "todo/test_make_wgt.c",
"chars": 1119,
"preview": "#include <stdio.h>\n#include <stdlib.h>\n#include <math.h>\n\nint debug = 0;\n\nint make_wgt(float *wgt, int nxp, int nyp) {\n "
},
{
"path": "todo/test_phasediff.c",
"chars": 2271,
"preview": "#include <stdio.h>\n#include <math.h>\n#include <stdlib.h>\n\n// from file https://github.com/gmtsar/gmtsar/blob/master/gmts"
},
{
"path": "todo/trans_dat.ipynb",
"chars": 1187486,
"preview": "{\n \"cells\": [\n {\n \"cell_type\": \"markdown\",\n \"metadata\": {\n \"id\": \"3ja1oNR7WLzm\"\n },\n \"source\": [\n \"## Loa"
},
{
"path": "ui/Imperial_Valley_2015_ipyleaflet.html",
"chars": 4135729,
"preview": "<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n <meta charset=\"UTF-8\">\n <title>InSAR Velocity Map</title>\n</head>\n<body>\n"
}
]
// ... and 2 more files
About this extraction
This page contains the full source code of the AlexeyPechnikov/pygmtsar GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 112 files (74.5 MB), approximately 6.0M tokens, and a symbol index with 349 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input.
Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.