Repository: TIERS/wildnav
Branch: main
Commit: b921515b935d
Files: 15
Total size: 46.4 KB

Directory structure:
wildnav/
├── .github/
│   └── workflows/
│       ├── pylint.yml
│       ├── python-app.yml
│       └── python-publish.yml
├── .gitignore
├── .gitmodules
├── LICENSE
├── README.md
├── assets/
│   ├── map/
│   │   └── map.csv
│   └── query/
│       └── photo_metadata.csv
├── requirements.txt
└── src/
    ├── build_map.py
    ├── extract_image_meta_exif.py
    ├── plot_data.py
    ├── superglue_utils.py
    └── wildnav.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .github/workflows/pylint.yml
================================================
name: Pylint

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.8", "3.9", "3.10"]
    steps:
    - uses: actions/checkout@v3
    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@v3
      with:
        python-version: ${{ matrix.python-version }}
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install pylint
        pip3 install -r requirements.txt
    - name: Analysing the code with pylint
      run: |
        pylint $(git ls-files '*.py')


================================================
FILE: .github/workflows/python-app.yml
================================================
# This workflow will install Python dependencies, run tests and lint with a single version of Python
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions

name: Python application

on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]

permissions:
  contents: read

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v3
    - name: Set up Python 3.10
      uses: actions/setup-python@v3
      with:
        python-version: "3.10"
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install flake8 pytest
        if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
    - name: Lint with flake8
      run: |
        # stop the build if there are Python syntax errors or undefined names
        flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
        # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
        flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
#     - name: Test with pytest
#       run: |
#         pytest src/extract_image_meta_exif.py


================================================
FILE: .github/workflows/python-publish.yml
================================================
# This workflow will upload a Python Package using Twine when a release is created
# For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries

# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.

name: Upload Python Package

on:
  release:
    types: [published]

permissions:
  contents: read

jobs:
  deploy:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v3
    - name: Set up Python
      uses: actions/setup-python@v3
      with:
        python-version: '3.10'
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install build
    - name: Build package
      run: python -m build
    - name: Publish package
      uses: pypa/gh-action-pypi-publish@27b31702a0e7fc50959f5ad993c78deac1bdfc29
      with:
        user: __token__
        password: ${{ secrets.PYPI_API_TOKEN }}


================================================
FILE: .gitignore
================================================
devel/
logs/
build/
bin/
lib/
msg_gen/
srv_gen/
msg/*Action.msg
msg/*ActionFeedback.msg
msg/*ActionGoal.msg
msg/*ActionResult.msg
msg/*Feedback.msg
msg/*Goal.msg
msg/*Result.msg
msg/_*.py
build_isolated/
devel_isolated/

# Generated by dynamic reconfigure
*.cfgc
/cfg/cpp/
/cfg/*.py

# Ignore generated docs
*.dox
*.wikidoc

# eclipse stuff
.project
.cproject

# qcreator stuff
CMakeLists.txt.user

srv/_*.py
*.pcd
*.pyc
qtcreator-*
*.user

/planning/cfg
/planning/docs
/planning/src

*~

# Emacs
.#*

# Catkin custom files
CATKIN_IGNORE
git_stuff.txt


================================================
FILE: .gitmodules
================================================
[submodule "src/superglue_lib"]
	path = src/superglue_lib
	url = https://github.com/magicleap/SuperGluePretrainedNetwork.git


================================================
FILE: LICENSE
================================================
BSD 3-Clause License

Copyright (c) 2022, TIERS
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this
   list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its
   contributors may be used to endorse or promote products derived from
   this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


================================================
FILE: README.md
================================================
![ubuntu label](https://img.shields.io/badge/Ubuntu-20.04-brightgreen)
[![build passing label](https://img.shields.io/badge/build-passing-brightgreen)](https://github.com/TIERS/wildnav/actions/workflows/python-app.yml)
# WildNav: GNSS-Free Localization in the Wild

Check out our paper "Vision-Based GNSS-Free Localization for UAVs in the Wild" here [DOI](https://doi.org/10.1109/ICMERR56497.2022.10097798) or [UTU](https://research.utu.fi/converis/portal/detail/Publication/179738211?lang=fi_FI). 
<!-- Access a sample [dataset here](https://utufi.sharepoint.com/:f:/s/msteams_0ed7e9/EsXaX0CKlpxIpOzVnUmn8-sB4yvmsxUohqh1d8nWKD9-BA?e=gPca2s).-->

###  Abstract

Considering the accelerated development of Unmanned Aerial Vehicles (UAVs) applications in both industrial and research scenarios, there is an increasing need for localizing these aerial systems in non-urban environments, using GNSS-Free, vision-based methods. Our paper proposes a vision-based localization algorithm that utilizes deep features to compute geographical coordinates of a UAV flying in the wild. The method is based on matching salient features of RGB photographs captured by the drone camera and sections of a pre-built map consisting of georeferenced open-source satellite images. Experimental results prove that vision-based localization has comparable accuracy with traditional GNSS-based methods, which serve as ground truth. Compared to state-of-the-art Visual Odometry (VO) approaches, our solution is designed for long-distance, high-altitude UAV flights.

### Overview

The main advantage of WildNav is its capability of matching drone images (left) with georeferenced satellite images (right), even for drones flying in non-urban areas with sparse features.

<div align=center>
<img src="assets/overview/project_overview.png" width="800px">
<p align="center">Vision-based localization algorithm overview </p>
</div>

<div align=center>
<img src="assets/overview/good_match_1.png" width="800px">
<p align="center">Robust against rotation </p>
</div>

<div align=center>
<img src="assets/overview/good_match_2.png" width="800px">
<p align="center"> Able to visually identify the drone pose even in environments with sparse feature </p>
</div>

<div align=center>
<img src="assets/overview/good_match_3.png" width="800px">
<p align="center"> The drone image (left) can be taken from a very different perspective, compared to the matched satellite image(right) </p>
</div>

<div align=center>
<img src="assets/overview/good_match_4.png" width="800px">
<p align="center"> Successful visual-based localization in non-urban areas </p>
</div>



## Run it yourself

The algorithm was tested on Ubuntu 20.04 with Python 3.10. Nevertheless, it should work with other versions as well.


   0. **(Highly recommended)** Create a new python3 virtual environment
      ```
      python3 -m venv env 
      ```
      Activate it
      ```
      source env/bin/activate
      ```
   1. Clone the repo
      ```
      git clone git@github.com:TIERS/wildnav.git
      ```
   2. Install SuperGlue dependencies:
      ```
      cd wildnav
      git submodule update --init --recursive
      ```
   3. Install python dependencies
      ```
      pip3 install -r requirements.txt
      ```
   4. Run the localization script (see below to add a dataset, but you can try with a few samples already in the repo)
      ```
      cd src
      python3 wildnav.py
      ```


## Add your own drone image dataset

Before running the code, you need a dataset of drone photos and a set of reference satellite images (the map):

   1. Add your drone photos to ```assets/query```. Feel free to use our dataset from [here](https://utufi.sharepoint.com/:f:/s/msteams_0ed7e9/EsXaX0CKlpxIpOzVnUmn8-sB4yvmsxUohqh1d8nWKD9-BA?e=gPca2s).

   2. Add your satellite map images to ```assets/map``` together with a csv file containing geodata for the images (see ```assets/map/map.csv```) 

   3. Run the Python script (requires [exiftool](https://exiftool.org/)) to generate a csv file containing photo metadata with GNSS coordinates
      ```
      python3 extract_image_meta_exif.py
      ```
   4. Run the WildNav algorithm
      ```
      python3 wildnav.py
      ```
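
The geodata csv (```assets/map/map.csv```) describes each satellite patch by the latitude/longitude of its top-left and bottom-right corners. A minimal sketch of reading that file, assuming only the column layout shown in the repo (the helper name is illustrative, not part of WildNav):

```python
import csv

def load_map_centers(path="assets/map/map.csv"):
    """Hypothetical helper: return (filename, center_lat, center_lon)
    for each georeferenced satellite patch listed in map.csv."""
    centers = []
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f, skipinitialspace=True)
        next(reader)  # skip header: Filename, Top_left_lat, Top_left_lon, ...
        for row in reader:
            if not row:
                continue
            name = row[0]
            tl_lat, tl_lon, br_lat, br_lon = map(float, row[1:5])
            # the patch center is the midpoint of the two corners
            centers.append((name, (tl_lat + br_lat) / 2, (tl_lon + br_lon) / 2))
    return centers
```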
   
## Common problems and fixes

1. Runtime error due to an incompatible version of ```torch``` being installed.
Error message: "NVIDIA GeForce RTX 3070 with CUDA capability sm_86 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.If you want to use the NVIDIA GeForce RTX 3070 GPU with PyTorch."

**Fix:**

Uninstall current torch installation:
```
pip3 uninstall torch
```
Follow instructions on the official [pytorch](https://pytorch.org/get-started/locally/) website to install the right version of torch for your system (it depends on your graphics card and CUDA version).

2. No dedicated GPU available on your system.

**Fix:**


The algorithm can run, albeit much slower, on the CPU. Simply set the ```force_cpu``` flag in ```src/superglue_utils.py``` to ```True```.

**NOTE**: If you encounter any problems not listed here, please open a new issue in this repository. We will try to fix them as soon as possible.

## Datasets

Photographs used for experimental validation of the algorithm can be found [here](https://utufi.sharepoint.com/:f:/s/msteams_0ed7e9/EsXaX0CKlpxIpOzVnUmn8-sB4yvmsxUohqh1d8nWKD9-BA?e=gPca2s).

<div align=center>
<img src="assets/overview/experiments_flight_area.png" width="800px">
<p align="center">Satellite view of the flight zone (highlighted rectangle). The yellow pin is
located at 60.403091° latitude and 22.461824° longitude </p>
</div>


## Results

<div align=center>

|           	| Total 	| Localized 	| MAE (m) 	|
|:---------:	|:-----:	|:---------:	|:-------:	|
| Dataset 1 	|  124  	|  77 (62%) 	|  15.82  	|
| Dataset 2 	|   78  	|  44 (56%) 	|  26.58  	|

</div>
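
The MAE figures above are ground distances between the GNSS ground truth and the computed coordinates. The repo lists the ```haversine``` package in ```requirements.txt``` for this kind of distance; a plain-stdlib sketch of the same computation (function names here are illustrative, not from the repo):

```python
import math

def haversine_m(p1, p2):
    """Great-circle distance in meters between two (lat, lon) pairs,
    equivalent to what the `haversine` package computes."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371000 * math.asin(math.sqrt(a))  # mean Earth radius in meters

def mean_abs_error_m(truth, predicted):
    """MAE in meters over matched ground-truth / estimated coordinate pairs."""
    return sum(haversine_m(t, p) for t, p in zip(truth, predicted)) / len(truth)
```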



<div align=center>
<img src="assets/overview/dataset_1_abs_coord.png" width="800px">
<p align="center">Dataset 1 absolute coordinates of localized photographs </p>
</div>

<div align=center>
<img src="assets/overview/dataset_1_error.png" width="800px">
<p align="center">Dataset 1 localization error </p>
</div>

<div align=center>
<img src="assets/overview/dataset_2_abs_coord.png" width="800px">
<p align="center">Dataset 2 absolute coordinates of localized photographs</p>
</div>

<div align=center>
<img src="assets/overview/dataset_2_error.png" width="800px">
<p align="center">Dataset 2 localization error </p>
</div>

<div align=center>
<img src="assets/overview/error_comparison.png" width="800px">
<p align="center">Error comparison </p>
</div>

## Contact
Feel free to send me an email at mmgurg@utu.fi if you have any questions about the project.


================================================
FILE: assets/map/map.csv
================================================
Filename, Top_left_lat,Top_left_lon,Bottom_right_lat,Bottom_right_long
"sat_map_00.png",   60.403962,    22.460441,   60.402409,    22.464059
"sat_map_01.png",  60.403963,  22.464054,  60.402409,  22.467672
"sat_map_02.png",  60.402410,  22.460440,  60.400857,  22.464058
"sat_map_03.png",  60.402412,  22.464056,  60.400859,  22.467674
"sat_map_04.png",  60.402412,  22.467673,  60.400859,  22.471291
"sat_map_05.png",  60.403962,  22.467672,  60.402408,  22.471290
"sat_map_06.png",  60.405516,  22.460443,  60.403962,  22.464061
"sat_map_07.png",  60.405514,  22.464053,  60.403961,  22.467671
"sat_map_08.png",  60.405512,  22.467667,  60.403959,  22.471285
"sat_map_09.png",  60.407068,  22.460446,  60.405515,  22.464064
"sat_map_10.png",  60.407067,  22.464064,  60.405514,  22.467682
"sat_map_11.png",  60.407064,  22.467667,  60.405511,  22.471285
"sat_map_12.png",  60.408617,  22.460445,  60.407064,  22.464063
"sat_map_13.png",  60.408619,  22.464056,  60.407066,  22.467674
"sat_map_14.png",  60.408616,  22.467670,  60.407063,  22.471288

================================================
FILE: assets/query/photo_metadata.csv
================================================
Filename,Latitude,Longitude,Altitude,Gimball_Roll,Gimball_Yaw,Gimball_Pitch,Flight_Roll,Flight_Yaw,Flight_Pitch
drone_image_1.jpg,60.403241666666666,22.462133333333334,+119.98,+0.00,-4.70,-90.30,-8.50,+1.60,-11.00
drone_image_2.jpg,60.40225833333333,22.465155555555555,+120.01,+0.00,-92.60,-90.30,-1.00,-86.80,-10.90

================================================
FILE: requirements.txt
================================================
haversine==2.5.1
matplotlib>=3.6.1
numpy>=1.23.3
opencv_python==4.5.5.64
pandas==1.4.2
requests>=2.27.1
scikit_learn==1.1.2
seaborn>=0.11.2
torch==1.12.1


================================================
FILE: src/build_map.py
================================================
"""Module used for automatically building satellite maps with Google Maps API"""
import csv
import math
import shutil
import requests



############################################################################################
# Satellite map building script
# Requires access to Google Static Maps API,
#   see https://developers.google.com/maps/documentation/maps-static/overview
# It will not work with current API_KEY, you need to get your own
############################################################################################

# Path to the folder where the map will be saved
MAP_PATH = "../assets/maps/map_2/"
class FlightZone:
    """A rectangle shaped flight area defined by 2 points (latitudes and longitudes)"""
    def __init__(self, top_left_lat, top_left_lon,\
         bottom_right_lat, bottom_right_lon, filename = "default"):
        self.filename = filename
        self.top_left_lat = top_left_lat
        self.top_left_lon = top_left_lon
        self.bottom_right_lat = bottom_right_lat
        self.bottom_right_lon = bottom_right_lon

    def __str__(self):
        return f"{self.__class__.__name__}; \n{self.top_left_lat}: %f \nt{self.top_left_lon}: \
            \n{self.bottom_right_lat}: %f \n{self.bottom_right_lon} %f" 
class PatchSize:
    """Size of a map patch in decimal latitude and longitude"""
    def __init__(self, lat, lon):
        self.lat = lat
        self.lon = lon

    def __str__(self):
        return f"{self.__class__.__name__}; \n{self.lat}: %f \n{self.lon}"

def csv_write_image_location():
    """Writes a csv file with the geographical location of the downloaded satellite images"""
    header = ["Filename, Top_left_lat,Top_left_lon,Bottom_right_lat,Bottom_right_long"]
    file_path = MAP_PATH + 'map.csv'
    with open(file_path, 'w', encoding='UTF8') as file:
        writer = csv.writer(file)
        writer.writerow(header)
        for photo in photo_list:
            line = [photo.filename, str(photo.top_left_lat), str(photo.top_left_lon), \
                   str(photo.bottom_right_lat), str(photo.bottom_right_lon)]
            writer.writerow(line)

# These 2 variables determine the number of images that form the map

#define as a pair of coordinates determining a rectangle in which the satellite photos will be taken
flight_zone = FlightZone(60.408615, 22.460445, 60.400855, 22.471289)

#define as height (latitude) and width (longitude) of the patch to be taken
patch_size = PatchSize(0.001676, 0.00341)

# Number of satellite patches needed to build the map
width = math.floor((flight_zone.bottom_right_lon -  flight_zone.top_left_lon) / patch_size.lon)
height = math.floor((flight_zone.top_left_lat - flight_zone.bottom_right_lat) / patch_size.lat)
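# Worked example with the values above: the flight zone spans
#   22.471289 - 22.460445 = 0.010844 degrees of longitude and
#   60.408615 - 60.400855 = 0.007760 degrees of latitude, so
#   width = floor(0.010844 / 0.00341) = 3 and height = floor(0.007760 / 0.001676) = 4,
# i.e. the script will request 3 * 4 = 12 satellite patches.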

total = width * height
print(f"The script will download : {total} images. Do you want to continue? [Y/N]")
answer = input()

while answer not in ('Y', 'N', 'y', 'n'):
    print("Please answer with Y or N")
    answer = input()

if answer in ('Y', 'y'):
    print("Downloading images...")
elif answer in ('N', 'n'):
    print("Exiting...")
    raise SystemExit(0)

current_center = PatchSize(flight_zone.top_left_lat - patch_size.lat / 2, \
     flight_zone.top_left_lon + patch_size.lon / 2)
current_top_left = PatchSize(flight_zone.top_left_lat, flight_zone.top_left_lon)
current_bottom_right = PatchSize(current_top_left.lat - patch_size.lat, \
     current_top_left.lon + patch_size.lon)

center = str(current_center.lat) + "," + str(current_center.lon)
zoom = "18" # optimized for the perspective of a wide angle camera at 120 m altitude
size = "640x640"  # maximum allowed size
maptype = "satellite"
scale = "2" # maximum allowed scale
#restricted by IP address, so you will have to generate your own
API_KEY = "AIzaSyAclBCbWo0rwQIaezGcXM6X3S_Otv-hHOQ"

URL = "https://maps.googleapis.com/maps/api/staticmap?" # base URL for the Google Maps API

#defining a params dict for the parameters to be sent to the API
PARAMS = {'center':center,
          'zoom':zoom,
          'size':size,
          'scale':scale,
          'maptype':maptype,
          'key':API_KEY
        }

photo_list = [] # list of all the photos that will be downloaded
index = 0 # index of the current image

# build map by downloading and stitching together satellite images
for i in range(0, height):
    for j in range(0, width):
        photo_name = 'sat_patch' + '_' + f"{index:04d}"+ '.png'

        #send GET request to the API which returns a satellite image upon success
        r = requests.get(url = URL, params = PARAMS, stream=True, timeout=10)


        # if request successful, write image to file
        if r.status_code == 200:
            with open(MAP_PATH + photo_name, 'wb') as out_file:
                shutil.copyfileobj(r.raw, out_file)
                print("image "+ str(index) + " downloaded")
        else:
            print("Error " + r.status_code + " downloading image " + str(index))
        
        current_patch = FlightZone(current_top_left.lat, current_top_left.lon, \
             current_bottom_right.lat, current_bottom_right.lon, photo_name)
        photo_list.append(current_patch)
        index += 1
        current_center.lon = current_center.lon + patch_size.lon
        current_top_left.lon += patch_size.lon
        current_bottom_right.lon += patch_size.lon
        PARAMS['center'] = str(current_center.lat) + "," + str(current_center.lon)
    current_top_left.lat -= patch_size.lat
    current_top_left.lon = flight_zone.top_left_lon
    current_bottom_right.lat -= patch_size.lat
    current_bottom_right.lon = flight_zone.top_left_lon + patch_size.lon
    current_center.lat = current_center.lat - patch_size.lat
    current_center.lon = flight_zone.top_left_lon + patch_size.lon / 2
    PARAMS['center'] = str(current_center.lat) + "," + str(current_center.lon)

print("Map images downloaded, writing csv file")
csv_write_image_location()
print("Map built successfully, check the map folder: " + MAP_PATH)


================================================
FILE: src/extract_image_meta_exif.py
================================================
"""Script that reads the EXIF data from the drone images and extracts the GNSS coordinates to a csv file"""
import subprocess
import re
import os

############################################################################################################
# Requires exiftool to be installed: https://exiftool.org/
# You might have to modify the scripts to match the EXIF metadata of your drone photos 
# Use https://www.metadata2go.com/ to easily check the metadata of your images
############################################################################################################

csv_filename = 'photo_metadata.csv' # csv file with drone image metadata containing GNSS location
photo_folder = '../assets/query/' # folder with drone images
 
csv_filename = photo_folder + csv_filename

def convert_gnss_coord(lat_or_lon):
    """
    Convert GNSS coordinate to decimal degrees instead of minutes and seconds
    """
    deg, deg_string, minutes, discard_1, seconds, discard_2, direction = re.split(r'[\s°\'"]', lat_or_lon)
    converted = (float(deg) + float(minutes)/60 + float(seconds)/(60*60)) * (-1 if direction in ['W', 'S'] else 1)
    return str(converted)
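
# Example, assuming the degrees-minutes-seconds EXIF format this regex expects:
#   convert_gnss_coord('60° 24\' 11.67" N') -> 60 + 24/60 + 11.67/3600 ≈ '60.403242'
#   (the decimal latitude of drone_image_1.jpg in photo_metadata.csv)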

def load_images_from_folder(folder):
    images_list = []
    print("Loading images. Please wait...")
    for filename in os.listdir(folder):
        if filename.endswith((".JPG", ".jpg", ".png")):
            images_list.append(filename)
    print("Done loading " +  str(len(images_list)) + " images.")

    images_list.sort()
    return images_list

images_list = load_images_from_folder(photo_folder)

f = open(csv_filename, "a")

filesize = os.path.getsize(csv_filename)
if filesize == 0:
    f.write("Filename,Latitude,Longitude,Altitude,Gimball_Roll,Gimball_Yaw,Gimball_Pitch,Flight_Roll,Flight_Yaw,Flight_Pitch")

print("Reading metadata. Please wait...")
for image in images_list:
    infoDict = {} #Creating the dict to get the metadata tags
    exifToolPath = 'exiftool'
    imgPath = photo_folder + image
    process = subprocess.Popen([exifToolPath,imgPath],stdout=subprocess.PIPE, stderr=subprocess.STDOUT,universal_newlines=True) 
    #get the tags in dict 
    for tag in process.stdout:
        line = tag.strip().split(':')
        infoDict[line[0].strip()] = line[-1].strip()

    #Default values
    altitude = "NaN"
    gimball_roll = "NaN"
    gimball_yaw = "NaN"
    gimball_pitch = "NaN"
    flight_roll = "NaN"
    flight_yaw = "NaN"
    flight_pitch = "NaN"

    if 'File Name' in infoDict:
        filename = infoDict['File Name']
    if 'GPS Latitude' in infoDict:
        lat = infoDict['GPS Latitude']
    if 'GPS Longitude' in infoDict:
        lon = infoDict['GPS Longitude']
    if 'Relative Altitude' in infoDict:
        altitude = infoDict['Relative Altitude']
    elif 'GPS Altitude' in infoDict:
        altitude = infoDict['GPS Altitude']
        altitude =  re.search(r'\d+', altitude).group() #keep only the number value, discard text

    if 'Gimbal Roll Degree' in infoDict:
        gimball_roll = infoDict['Gimbal Roll Degree']
    if 'Gimbal Yaw Degree' in infoDict:
        gimball_yaw = infoDict['Gimbal Yaw Degree']
    if 'Gimbal Pitch Degree' in infoDict:
        gimball_pitch = infoDict['Gimbal Pitch Degree']
    if 'Flight Roll Degree' in infoDict:
        flight_roll = infoDict['Flight Roll Degree']
    if 'Flight Yaw Degree' in infoDict:
        flight_yaw = infoDict['Flight Yaw Degree']
    if 'Flight Pitch Degree' in infoDict:    
        flight_pitch = infoDict['Flight Pitch Degree']

    # Skip images without GNSS coordinates, otherwise lat/lon would be undefined
    if 'GPS Latitude' not in infoDict or 'GPS Longitude' not in infoDict:
        print("Skipping " + image + ": no GNSS coordinates found in EXIF data")
        continue

    lat = convert_gnss_coord(lat)
    lon = convert_gnss_coord(lon)

    f.write('\n' + filename + ',' + lat + ',' + lon + ',' + altitude + ',' + gimball_roll + ',' + gimball_yaw + ',' + gimball_pitch + ',' + flight_roll + ','  + flight_yaw + ','+ flight_pitch)

print("Done reading metadata. Metadata saved to " + csv_filename)

f.close()




================================================
FILE: src/plot_data.py
================================================
"""Plots data output from wildnav"""
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.metrics import mean_squared_error 
from sklearn.metrics import mean_absolute_error

############################################################################
# Script that plots ground truth coordinates and calculated coordinates 
############################################################################

def mse(actual, predicted):
    return np.square(np.subtract(np.array(actual), np.array(predicted))).mean()

def dif(actual, predicted):
    return np.subtract(np.array(actual), np.array(predicted))


sns.set_style("whitegrid")
sns.set(font_scale = 2)
plt.rcParams['font.sans-serif'] = ['Arial']
plt.rcParams['axes.edgecolor'] = 'white'
plt.rcParams['axes.unicode_minus'] = False
filename = "calculated_coordinates_real_data_1.csv"



data = pd.read_csv(filename)
total_images = data.shape[0]
data = data.loc[data['Calculated_Latitude'] != -1]
data = data.loc[data['Meters_Error'] < 50]

located_images = data.shape[0]

dataset_list = []
for i in range(0,data.shape[0]):
    dataset_list.append("Dataset 1")

data['Dataset'] = dataset_list

pd_zeros = pd.DataFrame(np.zeros(data.shape[0]))
 
print("Total images ", total_images) 
print("Located images ", located_images)
print("Located images %", located_images *100 / total_images)

# Using DataFrame.mean() method to get column average
df2 = data["Meters_Error"].mean()
print(data['Meters_Error'])
print("Mean meter error:", df2)
print("RMS meter error:", mse(pd_zeros, data['Meters_Error']))
mse_sci_kit = mean_squared_error(pd_zeros, data['Meters_Error'], squared=False)
mae_sci_kit = mean_absolute_error(pd_zeros, data['Meters_Error'], )
print("RMS meter error Sci_kit: ", mse_sci_kit)
print("MAE meter error Sci_kit: ", mae_sci_kit)

print("Filename:", filename)
print("RMS Latitude:", mse(data['Latitude'], data['Calculated_Latitude']))

print("RMS Longitude:", mse(data['Longitude'], data['Calculated_Longitude']))


print(data)


fig=plt.figure(figsize=(15,8))
ax=plt.gca()
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.patch.set_facecolor('lightgray')
ax.patch.set_alpha(0.4)
sns.lineplot(data = data, x = data.index , y = "Meters_Error", marker="o", color='r')
ax.set_xlabel("Photograph Index")
ax.set_ylabel("Localization Error [m]")

plt.savefig("mean_error_dataset_1.png")
plt.savefig("mean_error_dataset_1.pdf")


plt.show()



print("RMS Latitude:", mse(data['Latitude'], data['Calculated_Latitude']))

print("RMS Longitude:", mse(data['Longitude'], data['Calculated_Longitude']))

error = dif(data['Latitude'], data['Calculated_Latitude'])
print(error)






filename = "calculated_coordinates_real_data_2.csv"



data_2 = pd.read_csv(filename)
total_images = data_2.shape[0]
data_2 = data_2.loc[data_2['Calculated_Latitude'] != -1]
data_2 = data_2.loc[data_2['Meters_Error'] < 50]

located_images = data_2.shape[0]

dataset_list = []
for i in range(0,data_2.shape[0]):
    dataset_list.append("Dataset 2")

data_2['Dataset'] = dataset_list

concatenated = pd.concat([data, data_2])

fig=plt.figure(figsize=(15,8))
ax=plt.gca()
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.patch.set_facecolor('lightgray')
ax.patch.set_alpha(0.4)
sns.boxplot(x = "Dataset",y="Meters_Error", data=concatenated)
#ax.set_xlabel("Photograph Index")
#ax.set_ylabel("Localization Error [m]")

plt.savefig("boxplot_error_dataset_1.png")
plt.savefig("boxplot_error_dataset_1.pdf")


plt.show()










================================================
FILE: src/superglue_utils.py
================================================
from pathlib import Path
import cv2
import matplotlib.cm as cm
import torch
import numpy as np


from superglue_lib.models.matching import Matching
from superglue_lib.models.utils import (AverageTimer, VideoStreamer,
                          make_matching_plot_fast, frame2tensor)

torch.set_grad_enabled(False)


def match_image(): 
    """
    Wrapper function for matching two images, provides an interface to superglue model
    """
    center = None
    input = '../assets/map/'
    output_dir = "../results"
    image_glob = ['*.png', '*.jpg', '*.jpeg', '*.JPG']
    skip = 1
    max_length = 1000000


    # Important parameters to modify if you wish to improve the feature matching performance. 
    resize = [800] # Resize the image to this size before processing. Set to None to disable resizing.
    superglue = 'outdoor' # The SuperGlue model to use. Either 'indoor' or 'outdoor'.
    max_keypoints = -1 # -1 keeps all keypoints  
    keypoint_threshold = 0.01 # Remove keypoints with low confidence. Set to -1 to keep all keypoints.
    nms_radius = 4 # Non-maxima suppression: keypoints with similar responses in a small neighborhood are removed.
    sinkhorn_iterations = 20 # Number of Sinkhorn iterations for matching.
    match_threshold = 0.5 # Remove matches with low confidence. Set to -1 to keep all matches.
    show_keypoints = True # Show the detected keypoints.
    no_display = False
    force_cpu = False # Force CPU mode. It is significantly slower, but allows the model to run on systems without a dedicated GPU.
    
   
    if len(resize) == 2 and resize[1] == -1:
        resize = resize[0:1]
    if len(resize) == 2:
        print('Will resize to {}x{} (WxH)'.format(
            resize[0], resize[1]))
    elif len(resize) == 1 and resize[0] > 0:
        print('Will resize max dimension to {}'.format(resize[0]))
    elif len(resize) == 1:
        print('Will not resize images')
    else:
        raise ValueError('Cannot specify more than two integers for --resize')

    
    device = 'cuda' if torch.cuda.is_available() and not force_cpu else 'cpu'
    print('Running inference on device \"{}\"'.format(device))
    config = {
        'superpoint': {
            'nms_radius': nms_radius,
            'keypoint_threshold': keypoint_threshold,
            'max_keypoints': max_keypoints
        },
        'superglue': {
            'weights': superglue,
            'sinkhorn_iterations': sinkhorn_iterations,
            'match_threshold': match_threshold,
        }
    }
    matching = Matching(config).eval().to(device)
    keys = ['keypoints', 'scores', 'descriptors']

    vs = VideoStreamer(input, resize, skip,
                       image_glob, max_length)
    frame, ret = vs.next_frame()
    assert ret, 'Error when reading the first frame (try different --input?)'

    frame_tensor = frame2tensor(frame, device)
    last_data = matching.superpoint({'image': frame_tensor})
    last_data = {k+'0': last_data[k] for k in keys}
    last_data['image0'] = frame_tensor
    last_frame = frame
    last_image_id = 0

    if output_dir is not None:
        print('==> Will write outputs to {}'.format(output_dir))
        Path(output_dir).mkdir(exist_ok=True)

    # Create a window to display the demo.
    if not no_display:
        cv2.namedWindow('SuperGlue matches', cv2.WINDOW_NORMAL)
        cv2.resizeWindow('SuperGlue matches', 640*2*2, 480*2)
    else:
        print('Skipping visualization, will not show a GUI.')

    timer = AverageTimer()

    satellite_map_index = None # index of the satellite photo in the map where the best match was found
    index = 0 # index of the satellite photo currently being matched
    max_matches = -1 # max number of matches, used to keep track of the best match
    MATCHED = False # flag to indicate if a match was found
    located_image = None # the image of the satellite photo where the best match was found
    features_mean = [0, 0] # mean values of feature pixel coordinates
    center = None # relative (x, y) center of the matched area; stays None if no match is found

    while True:
        
        # current satellite image to be matched
        frame, ret = vs.next_frame()
        if not ret:
            print('Finished demo_superglue.py')
            break
        timer.update('data')
        stem0, stem1 = last_image_id, vs.i - 1


        

        frame_tensor = frame2tensor(frame, device)
        pred = matching({**last_data, 'image1': frame_tensor})
        kpts0 = last_data['keypoints0'][0].cpu().numpy()
        kpts1 = pred['keypoints1'][0].cpu().numpy()
        matches = pred['matches0'][0].cpu().numpy()
        confidence = pred['matching_scores0'][0].cpu().numpy()
        timer.update('forward')

        valid = matches > -1
        mkpts0 = kpts0[valid]
        mkpts1 = kpts1[matches[valid]]

        """
        Find image in sattelite map with findHomography        
        """
        #At least 4 matched features are needed to compute homography
        MATCHED = False
        print("Matches found:", len(mkpts1))
        if len(mkpts1) >= 4:
            perspective_transform_error = False
            M, mask = cv2.findHomography(mkpts0, mkpts1, cv2.RANSAC, 5.0)
            h, w = last_frame.shape
            pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
            try:
                dst = cv2.perspectiveTransform(pts, M) # raises cv2.error if M is empty or degenerate
            except cv2.error:
                print("Perspective transform error. Abort matching.")
                perspective_transform_error = True

            if (len(mkpts1) > max_matches) and not perspective_transform_error:
                frame = cv2.polylines(frame,[np.int32(dst)],True,255,3, cv2.LINE_AA) 
                moments = cv2.moments(dst)
                cX = int(moments["m10"] / moments["m00"])
                cY = int(moments["m01"] / moments["m00"])
                center = (cX  ,cY) #shape[0] is Y coord, shape[1] is X coord
                #use ratio here instead of pixels because image is reshaped in superglue
                features_mean = np.mean(mkpts0, axis = 0)

                #Draw the center of the area which has been matched
                cv2.circle(frame, center, radius = 10, color = (255, 0, 255), thickness = 5)
                cv2.circle(last_frame, (int(features_mean[0]), int(features_mean[1])), radius = 10, color = (255, 0, 0), thickness = 2)
                center = (cX / frame.shape[1] ,cY /frame.shape[0] )
                satellite_map_index = index
                max_matches = len(mkpts1)
                MATCHED = True

        else:
            print("Photos were NOT matched")
      
        color = cm.jet(confidence[valid])
        k_thresh = matching.superpoint.config['keypoint_threshold']
        m_thresh = matching.superglue.config['match_threshold']
        small_text = [
            'Keypoint Threshold: {:.4f}'.format(k_thresh),
            'Match Threshold: {:.2f}'.format(m_thresh),
            'Image Pair: {:06}:{:06}'.format(stem0, stem1),
        ]
     
        out = make_matching_plot_fast(
            last_frame, frame, kpts0, kpts1, mkpts0, mkpts1, color, text='',
            path=None, show_keypoints=show_keypoints, small_text='')

        if MATCHED:
            located_image = out
        

        if not no_display:
            cv2.imshow('SuperGlue matches', out)
            key = chr(cv2.waitKey(1) & 0xFF)
            if key == 'q':
                vs.cleanup()
                print('Exiting (via q) demo_superglue.py')
                break
            elif key == 'n':  # set the current frame as anchor
                last_data = {k+'0': pred[k+'1'] for k in keys}
                last_data['image0'] = frame_tensor
                last_frame = frame
                last_image_id = (vs.i - 1)
            elif key in ['e', 'r']:
                # Increase/decrease keypoint threshold by 10% each keypress.
                d = 0.1 * (-1 if key == 'e' else 1)
                matching.superpoint.config['keypoint_threshold'] = min(max(
                    0.0001, matching.superpoint.config['keypoint_threshold']*(1+d)), 1)
                print('\nChanged the keypoint threshold to {:.4f}'.format(
                    matching.superpoint.config['keypoint_threshold']))
            elif key in ['d', 'f']:
                # Increase/decrease match threshold by 0.05 each keypress.
                d = 0.05 * (-1 if key == 'd' else 1)
                matching.superglue.config['match_threshold'] = min(max(
                    0.05, matching.superglue.config['match_threshold']+d), .95)
                print('\nChanged the match threshold to {:.2f}'.format(
                    matching.superglue.config['match_threshold']))
            elif key == 'k':
                show_keypoints = not show_keypoints

        timer.update('viz')
        timer.print()  
        index += 1  

        if output_dir is not None:
            #stem = 'matches_{:06}_{:06}'.format(last_image_id, vs.i-1)
            stem = 'matches_{:06}_{:06}'.format(stem0, stem1)
            out_file = str(Path(output_dir, stem + '.png'))
            print('\nWriting image to {}'.format(out_file))
            cv2.imwrite(out_file, out)


    cv2.destroyAllWindows()
    vs.cleanup()
    
    return satellite_map_index, center, located_image, features_mean, last_frame, max_matches
    



================================================
FILE: src/wildnav.py
================================================
"""Core module. Contains the main functions for the project."""
import csv
import cv2
import haversine as hs
from haversine import Unit
import superglue_utils

############################################################################################################
# Important variables
############################################################################################################

map_path = "../assets/map/"
map_filename = "../assets/map/map.csv" #  csv file with the satellite geo tagged images
drone_photos_filename = "../assets/query/photo_metadata.csv" # csv file with the geo tagged drone images;
                                                             # the geo coordinates are only used to compare
                                                             # the calculated coordinates with the real ones
                                                             # after the feature matching

############################################################################################################
# Class definitions
############################################################################################################
class GeoPhotoDrone:
    """Stores a drone photo together with GNSS location
    and camera rotation parameters
    """
    def __init__(self,filename, photo=0, latitude=0, longitude = 0 ,\
         altitude = 0 ,gimball_roll = 0, gimball_yaw = 0, gimball_pitch = 0, flight_roll = 0, flight_yaw = 0, flight_pitch = 0):
        self.filename = filename
        self.photo = photo
        self.latitude = latitude
        self.longitude = longitude
        self.latitude_calculated = -1
        self.longitude_calculated = -1
        self.altitude = altitude
        self.gimball_roll = gimball_roll
        self.gimball_yaw = gimball_yaw
        self.gimball_pitch = gimball_pitch
        self.flight_roll = flight_roll
        self.flight_yaw = flight_yaw
        self.flight_pitch = flight_pitch
        self.corrected = False
        self.matched = False

    def __str__(self):
        return "%s; \nlatitude: %f \nlongitude: %f \naltitude: %f \ngimball_roll: %f \ngimball_yaw: %f \ngimball_pitch: %f \nflight_roll: %f \nflight_yaw: %f \nflight_pitch: %f" % (self.filename, self.latitude, self.longitude, self.altitude, self.gimball_roll, self.gimball_yaw, self.gimball_pitch, self.flight_roll, self.flight_yaw, self.flight_pitch )
        
class GeoPhoto:
    """Stores a satellite photo together with (latitude, longitude) for top_left and bottom_right_corner
    """
    def __init__(self, filename, photo, geo_top_left, geo_bottom_right):
        self.filename = filename
        self.photo = photo
        self.top_left_coord = geo_top_left
        self.bottom_right_coord = geo_bottom_right

    def __lt__(self, other):
        return self.filename < other.filename

    def __str__(self):
        return "%s; \n\ttop_left_latitude: %f \n\ttop_left_lon: %f \n\tbottom_right_lat: %f \n\tbottom_right_lon %f " % (self.filename, self.top_left_coord[0], self.top_left_coord[1], self.bottom_right_coord[0], self.bottom_right_coord[1])


############################################################################################################
# Functions for data writing and reading csv files
############################################################################################################
def csv_read_drone_images(filename):
    """Builds a list with drone geo tagged photos by reading a csv file with this format:
    Filename, Top_left_lat,Top_left_lon,Bottom_right_lat,Bottom_right_long
    "photo_name.png",60.506787,22.311631,60.501037,22.324467
    """
    geo_list_drone = []
    photo_path = "../assets/query/"
    with open(filename) as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        line_count = 0
        for row in csv_reader:
            if line_count == 0:
                print(f'Column names are {", ".join(row)}')
                line_count += 1
            else:                
                #img = cv2.imread(photo_path + row[0],0)
                geo_photo = GeoPhotoDrone(photo_path + row[0], 0, float(row[1]), float(row[2]), float(row[3]), float(row[4]), float(row[5]), float(row[6]), float(row[7]), float(row[8]), float(row[9]))
                geo_list_drone.append(geo_photo)
                line_count += 1

        print(f'Processed {line_count} lines.')
        return geo_list_drone

def csv_read_sat_map(filename):
    """Builds a list with satellite geo tagged photos by reading a csv file with this format:
    Filename, Top_left_lat,Top_left_lon,Bottom_right_lat,Bottom_right_long
    "photo_name.png",60.506787,22.311631,60.501037,22.324467
    """
    geo_list = []
    photo_path = "../assets/map/"
    print("opening: ",filename)
    with open(filename) as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        line_count = 0
        for row in csv_reader:
            if line_count == 0:
                print(f'Column names are {", ".join(row)}')
                line_count += 1
            else:            
                img = cv2.imread(photo_path + row[0],0)
                geo_photo = GeoPhoto(photo_path + row[0],img,(float(row[1]),float(row[2])), (float(row[3]), float(row[4])))
                geo_list.append(geo_photo)
                line_count += 1

        print(f'Processed {line_count} lines.')
        geo_list.sort() # sort alphabetically by filename so that the feature matcher returns the right index of the matched sat image
        return geo_list
    
def csv_write_image_location(photo):
    header = ['Filename', 'Latitude', 'Longitude', 'Calculated_Latitude', 'Calculated_Longitude', 'Latitude_Error', 'Longitude_Error', 'Meters_Error', 'Corrected', 'Matched', 'Total_Yaw']
    with open('../results/calculated_coordinates.csv', 'a', encoding='UTF8') as f:
        writer = csv.writer(f)
        if f.tell() == 0: # new or empty file: write the column header first
            writer.writerow(header)
        photo_name = photo.filename.split("/")[-1]
        loc1 = (photo.latitude, photo.longitude)
        loc2 = (photo.latitude_calculated, photo.longitude_calculated)
        dist_error = hs.haversine(loc1, loc2, unit=Unit.METERS)
        lat_error = photo.latitude - photo.latitude_calculated
        lon_error = photo.longitude - photo.longitude_calculated
        line = [photo_name, str(photo.latitude), str(photo.longitude), str(photo.latitude_calculated), str(photo.longitude_calculated), str(lat_error), str(lon_error), str(dist_error), str(photo.corrected), str(photo.matched), str(photo.gimball_yaw + photo.flight_yaw - 15)]
        writer.writerow(line)


def calculate_geo_pose(geo_photo, center, features_mean, shape):
    """
    Calculates the geographical location of the drone image.
    Input: satellite geotagged image; relative center of the matched area in the satellite image
    (center = (0.5, 0.5) means the located features are in the middle of the satellite image);
    mean pixel coordinates (horizontal and vertical) of the matched features in the satellite image;
    shape of the satellite image.
    """
    #use ratio here instead of pixels because image is reshaped in superglue    
    latitude = geo_photo.top_left_coord[0] + abs( center[1])  * ( geo_photo.bottom_right_coord[0] - geo_photo.top_left_coord[0])
    longitude = geo_photo.top_left_coord[1] + abs(center[0])  * ( geo_photo.bottom_right_coord[1] - geo_photo.top_left_coord[1])
    
    return latitude, longitude



#######################################
# MAIN
#######################################

# Read all the geo tagged images that make up the satellite map used for reference
geo_images_list = csv_read_sat_map(map_filename)

# Read all the geo tagged drone images that will be located in the map
drone_images_list = csv_read_drone_images(drone_photos_filename)

latitude_truth = []
longitude_truth = []
latitude_calculated = []
longitude_calculated = []

print(str(len(drone_images_list)) + " drone photos were loaded.")


# Iterate through all the drone images
for drone_image in drone_images_list:
    latitude_truth.append(drone_image.latitude) # ground truth from drone image metadata for later comparison
    longitude_truth.append(drone_image.longitude) # ground truth for later comparison
    photo =  cv2.imread(drone_image.filename) # read the drone image
    

    max_features = 0 # keep track of the best match, more features = better match
    located = False # flag to indicate if the drone image was located in the map
    center = None # center of the drone image in the map


    rotations = [20] # list of rotations to try
                     # keep in mind GNSS metadata could have wrong rotation angle
                     # so we try to match the image with different (manually established) rotations

    # Iterate through all the rotations, in this case only one rotation
    for rot in rotations:
        
        # Write the query photo to the map folder
        cv2.imwrite(map_path + "1_query_image.png", photo)

        #Call superglue wrapper function to match the query image to the map
        satellite_map_index_new, center_new, located_image_new, features_mean_new, query_image_new, feature_number = superglue_utils.match_image()
        
        # If the drone image was located in the map and the number of features is greater than the previous best match, then update the best match.
        # Sometimes the pixel center returned by the perspective transform exceeds 1; discard the result in that case.
        if (center_new is not None and feature_number > max_features and center_new[0] < 1 and center_new[1] < 1):
            satellite_map_index = satellite_map_index_new
            center = center_new
            located_image = located_image_new
            features_mean = features_mean_new
            query_image = query_image_new
            max_features = feature_number
            located = True
    photo_name = drone_image.filename.split("/")[-1]

    # If the drone image was located in the map, calculate the geographical location of the drone image
    if center is not None and located:
        current_location = calculate_geo_pose(geo_images_list[satellite_map_index], center, features_mean, query_image.shape )
        
        # Write the results to the image result file with the best match
        cv2.putText(located_image, "Calculated: " + str(current_location), org = (10,625),fontFace =  cv2.FONT_HERSHEY_DUPLEX, fontScale = 0.8,  color = (0, 0, 0))
        cv2.putText(located_image, "Ground truth: " + str(drone_image.latitude) + ", " + str(drone_image.longitude), org = (10,655),fontFace =  cv2.FONT_HERSHEY_DUPLEX, fontScale = 0.8,  color = (0, 0, 0))
        cv2.imwrite("../results/" + photo_name + "_located.png", located_image)
        
        print("Image " + str(photo_name) + " was successfully located in the map")
        print("Calculated location: ", str(current_location[0:2]))
        print("Ground Truth: ", drone_image.latitude, drone_image.longitude)   
        
        # Save the calculated location for later comparison with the ground truth
        drone_image.matched = True
        drone_image.latitude_calculated = current_location[0]
        drone_image.longitude_calculated = current_location[1]
        
        latitude_calculated.append(drone_image.latitude_calculated)
        longitude_calculated.append(drone_image.longitude_calculated)

    else:
        print("NOT MATCHED:", photo_name)

    # Write the results to the csv file    
    csv_write_image_location(drone_image)

SYMBOL INDEX (23 symbols across 5 files)

FILE: src/build_map.py
  class FlightZone (line 18) | class FlightZone:
    method __init__ (line 20) | def __init__(self, top_left_lat, top_left_lon,\
    method __str__ (line 28) | def __str__(self):
  class PatchSize (line 31) | class PatchSize:
    method __init__ (line 33) | def __init__(self, lat, lon):
    method __str__ (line 37) | def __str__(self):
  function csv_write_image_location (line 40) | def csv_write_image_location():

FILE: src/extract_image_meta_exif.py
  function convert_gnss_coord (line 17) | def convert_gnss_coord(lat_or_lon):
  function load_images_from_folder (line 25) | def load_images_from_folder(folder):

FILE: src/plot_data.py
  function mse (line 14) | def mse(actual, predicted):
  function dif (line 17) | def dif(actual, predicted):

FILE: src/superglue_utils.py
  function match_image (line 15) | def match_image():

FILE: src/wildnav.py
  class GeoPhotoDrone (line 22) | class GeoPhotoDrone:
    method __init__ (line 26) | def __init__(self,filename, photo=0, latitude=0, longitude = 0 ,\
    method __str__ (line 44) | def __str__(self):
  class GeoPhoto (line 47) | class GeoPhoto:
    method __init__ (line 50) | def __init__(self, filename, photo, geo_top_left, geo_bottom_right):
    method __lt__ (line 56) | def __lt__(self, other):
    method __str__ (line 59) | def __str__(self):
  function csv_read_drone_images (line 66) | def csv_read_drone_images(filename):
  function csv_read_sat_map (line 89) | def csv_read_sat_map(filename):
  function csv_write_image_location (line 114) | def csv_write_image_location(photo):
  function calculate_geo_pose (line 128) | def calculate_geo_pose(geo_photo, center, features_mean,  shape):