master 0d2d91888337 cached
7 files
47.9 KB
12.1k tokens
47 symbols
1 requests
Download .txt
Repository: andymatuschak/Bear-Markdown-Export
Branch: master
Commit: 0d2d91888337
Files: 7
Total size: 47.9 KB

Directory structure:
gitextract_jcda48_b/

├── .gitattributes
├── .gitignore
├── Bear Import.md
├── LICENSE
├── README.md
├── bear_export_sync.py
└── bear_import.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitattributes
================================================
# Auto detect text files and perform LF normalization
* text=auto


================================================
FILE: .gitignore
================================================
.DS_Store


================================================
FILE: Bear Import.md
================================================
## Bear Markdown and textbundle import – with tags from file and folder.

***bear_import.py***  
*Version 1.0.0 - 2018-02-10 at 17:37 EST*

*See also:* **[bear_export_sync.py](https://github.com/rovest/Bear-Markdown-Export/blob/master/README.md)** *for export with sync-back.*


### Features 

* Imports markdown or textbundles from nested folders under a `BearImport/input/' folder
* Foldernames are converted to Bear tags
* Also imports MacOS file tags as Bear tags
* Imported notes are also tagged with `#.imported/yyyy-MM-dd` for convenience.
* Import-files are then cleared to a `BearImport/done/' folder
* Use for email input to Bear with Zapier's "Gmail to Dropbox" zap.
* Or for import of nested groups and sheets from Ulysses, images and keywords included.


### Trigger script with Automator Folder Action

1. New Automator file as `Folder Action` 
2. Set `Folder action receives files and folders added to`: `{user}/Dropbox/BearImport/Input`
3. Add action: `Run Shell Script` choose `bin/bash`
4. Insert one line with full paths to python and script (Use "" if spaces in paths!):  
`/Library/Frameworks/Python.framework/Versions/3.6/bin/python3.6 "/Users/username/scripts/bear_import.py"`
5. Save as `Bear Import` or whatever you choose.

Or skip all this and run it manually :)


### Get mail to Bear with "Zapier Gmail to Dropbox" action

1. Create a free zapier.com account.
2. Use a dedicated gmail account or setup a filter assigning a label used by zapier. 
3. Make a Zapier zap. See: [Add new Gmail emails to Dropbox as text files](https://zapier.com/apps/dropbox/integrations/gmail/10323/add-new-gmail-emails-to-dropbox-as-text-files)
	1. Set zap to monitor inbox with label (assigned by filter in step 2.)
	2. Set zap Dropbox output to `{user}/Dropbox/BearImport/Input` 

- Zap will now check for new email (with matching gmail label) every 15 minutes and script above will import to Bear.
- Alternately on iOS: use this workflow (import to Bear from same Dropbox folder): [Gmail-DB zap to Bear](https://workflow.is/workflows/827b9b2518d5476ca0158a67d5b492fa)

### Import from Ulysses’ external folders on Mac

1. Add `{user}/Dropbox/BearImport/Input` as external folder
2. Edit folder settings to `.textbundle` and `Inline Links`!
3. Drag any library group to this folder in Ulysses' sidebar.
4. Voilà – Imports to Bear with images and tags (both from group names and keywords).




================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2018 rovest

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: README.md
================================================
## Markdown export and sync of Bear notes

***bear_export_sync.py***   
*Version 1.4, 2020-01-11*

Python script for export and roundtrip sync of Bear's notes to OneDrive, Dropbox, etc. and edit online with [StackEdit](https://stackedit.io/app), or use a markdown editor like *Typora* on Windows or a suitable app on Android. Remote edits and new notes get synced back into Bear with this script.

**See also: [Bear Markdown and textbundle import – with tags from file and folder](https://github.com/rovest/Bear-Markdown-Export/blob/master/Bear%20Import.md)**

Set up seamless syncing with Ulysses’ external folders on Mac, with images included!  
Write and add photos in Bear, then reorder, glue, and publish, export, or print with styles in Ulysses—  
bears and butterflies are best friends ;)  
(PS. The manual order you set for notes in Ulysses' external folder, is maintained during syncs, unless title is changed.) 

Suitable for use with https://github.com/andymatuschak/note-link-janitor.

BEAR IN MIND! This is a free to use version, and please improve or modify as needed. But do be careful! both `rsync` and `shutil.rmtree` used here, are powerful commands that can wipe clean a whole folder tree or even your complete HD if paths are set incorrectly! Also, be safe, take a fresh backup of both Bear and your Mac before first run.

*See also: [Bear Power Pack](https://github.com/rovest/Bear-Power-Pack/blob/master/README.md)*

## Usage

```
python bear_export_sync.py --out ~/Notes/Bear --backup ~/Notes/Backup
```

See `--help` for more.

## Features

* Bear notes exported as plain Markdown or Textbundles with images.
* Syncs external edits back to Bear with original image links intact. 
* New external `.md` files or `.textbundles` are added.  
(Tags created from sub folder name)
* Export option: Make nested folders from tags.   
For first tag only, or all tags (duplicates notes)
* Export option: Include or exclude export of notes with specific tags.
* Export option: Export as `.textbundles` with images included. 
* Or as: `.md` with links to common image repository 
* Export option: Hide tags in HTML comments like: `<!-- #mytag -->` if `hide_tags_in_comment_block = True`
* **NEW** Hybrid export: `.textbundles` of notes with images, otherwise regular `.md` (Makes it easier to browse and edit on other platforms.)
* **NEW** Writes log to `bear_export_sync_log.txt` in `BearSyncBackup` folder.

Edit your Bear notes online in browser on [OneDrive.com](https://onedrive.live.com). It has a ok editor for plain text/markdown. Or with [StackEdit](https://stackedit.io/app), an amazing online markdown editor that can sync with *Dropbox* or *Google Drive*

Read and edit your Bear notes on *Windows* or *Android* with any markdown editor of choice. Remote edits or new notes will be synced back into Bear again. *Typora* works great on Windows, and displays images of text bundles as well.

NOTE! If syncing with Ulysses’ external folders on Mac, remember to edit that folder settings to `.textbundle` and `Inline Links`!

Run script manually or add it to a cron job for automatic syncing (every 5 – 15 minutes, or whatever you prefer).  
([LaunchD Task Scheduler](https://itunes.apple.com/us/app/launchd-task-scheduler/id620249105?mt=12) Is easy to configure and works very well for this) 


### Syncs external edits back into Bear
Script first checks for external edits in Markdown files or textbundles (previously exported from Bear as described below):

* It replaces text in original note with `bear://x-callback-url/add-text?mode=replace` command   
(That way keeping original note ID and creation date)  
If any changes to title, new title will be added just below original title.  
(`mode=replace` does not replace title)
* Original note in `sqlite` database and external edit are both backed up as markdown-files to BearSyncBackup folder before import to bear.
* If a sync conflict, both original and new version will be in Bear (the new one with a sync conflict message and link to original).
* New notes created online, are just added to Bear  
(with the `bear://x-callback-url/create` command)
* If a textbundle gets new images from an external app, it will be opened and imported as a new note in Bear, with message and link to original note.  
(The `subprocess.call(['open', '-a', '/applications/bear.app', bundle])` command is used for this)


### Markdown export to Dropbox, OneDrive, or other:
Then exports all notes from Bear's database.sqlite as plain markdown files:

* Checks modified timestamp on database.sqlite, so exports only when needed.
* Sets Bear note's modification date on exported markdown files.
* Appends Bear note's creation date to filename to avoid “title-filename-collisions”
* Note IDs are included at bottom of markdown files to match original note on sync back:  
	{BearID:730A5BD2-0245-4EF7-BE16-A5217468DF0E-33519-0000429ADFD9221A}  
(these ID's are striped off again when synced back into Bear)
* Uses rsync for copying (from a temp folder), so only changed notes will be synced to Dropbox (or other sync services)
* rsync also takes care of deleting trashed notes
* "Hides” tags from being displayed as H1 in other markdown apps by adding `period+space` in front of first tag on a line:   
`. #bear #idea #python`   
* Or hide tags in HTML comment blocks like: `<!-- #mytag -->` if `hide_tags_in_comment_block = True`   
(these are striped off again when synced back into Bear)
* Makes subfolders named with first tag in note if `make_tag_folders = True`
* Files can now be copied to multiple tag-folders if `multi_tags = True`
* Export can now be restricted to a list of spesific tags: `limit_export_to_tags = ['bear/github', 'writings']`  
or leave list empty for all notes: `limit_export_to_tags = []`
* Can export and link to images in common image repository
* Or export as textbundles with images included 


You have Bear on Mac but also want your notes on your Android phone, on Linux or Windows machine at your office. Or you want them available online in a browser from any desktop computer. Here is a solution (or call it workaround) for now, until Bear comes with an online, Windows, or Android solution ;)

Happy syncing! ;)


================================================
FILE: bear_export_sync.py
================================================
# encoding=utf-8
# python3.6
# bear_export_sync.py
# Developed with Visual Studio Code with MS Python Extension.

import shlex
import objc
from AppKit import NSWorkspace, NSWorkspaceOpenConfiguration, NSURL

'''
# Markdown export from Bear sqlite database 
Version 1.4, 2020-01-11
modified by: github/andymatuschak, andy_matuschak@twitter
original author: github/rovest, rorves@twitter

See also: bear_import.py for auto import to bear script.

## Sync external updates:
First checks for changes in external Markdown files (previously exported from Bear)
* Replacing text in original note with callback-url replace command   
  (Keeping original creation date)
  If changes in title it will be added just below original title
* New notes are added to Bear (with x-callback-url command)
* New notes get tags from sub folder names, or `#.inbox` if root
* Backing up original note as file to BearSyncBackup folder  
  (unless a sync conflict, then both notes will be there)

## Export:
Then exporting Markdown from Bear sqlite db.
* check_if_modified() on database.sqlite to see if export is needed
* Uses rsync for copying, so only markdown files of changed sheets will be updated  
  and synced by Dropbox (or other sync services)
* "Hides" tags with `period+space` on beginning of line: `. #tag` not appear as H1 in other apps.   
  (This is removed if sync-back above)
* Or instead hide tags in HTML comment blocks like: `<!-- #mytag -->` if `hide_tags_in_comment_block = True`
* Makes subfolders named with first tag in note if `make_tag_folders = True`
* Files can now be copied to multiple tag-folders if `multi_tags = True`
* Export can now be restricted to a list of spesific tags: `limit_export_to_tags = ['bear/github', 'writings']`  
or leave list empty for all notes: `limit_export_to_tags = []`
* Can export and link to images in common image repository
* Or export as textbundles with images included 
'''

make_tag_folders = False  # Exports to folders using first tag only, if `multi_tag_folders = False`
multi_tag_folders = True  # Copies notes to all 'tag-paths' found in note!
                          # Only active if `make_tag_folders = True`
hide_tags_in_comment_block = False  # Hide tags in HTML comments: `<!-- #mytag -->`

# The following two lists are more or less mutually exclusive, so use only one of them.
# (You can use both if you have some nested tags where that makes sense)
# Also, they only work if `make_tag_folders = True`.
only_export_these_tags = []  # Leave this list empty for all notes! See below for sample
# only_export_these_tags = ['bear/github', 'writings'] 

export_as_textbundles = False  # Exports as Textbundles with images included
export_as_hybrids = True  # Exports as .textbundle only if images included, otherwise as .md
                          # Only used if `export_as_textbundles = True`
export_image_repository = True  # Export all notes as md but link images to 
                                 # a common repository exported to: `assets_path` 
                                 # Only used if `export_as_textbundles = False`

import os
HOME = os.getenv('HOME', '')
default_out_folder = os.path.join(HOME, "Work", "BearNotes")
default_backup_folder = os.path.join(HOME, "Work", "BearSyncBackup")

# NOTE! Your user 'HOME' path and '/BearNotes' is added below!
# NOTE! So do not change anything below here!!!

import sqlite3
import datetime
import re
import subprocess
import urllib.parse
import time
import shutil
import fnmatch
import json
import argparse

parser = argparse.ArgumentParser(description="Sync Bear notes")
parser.add_argument("--out", default=default_out_folder, help="Path where Bear notes will be synced")
parser.add_argument("--backup", default=default_backup_folder, help="Path where conflicts will be backed up (must be outside of --out)")
parser.add_argument("--images", default=None, help="Path where images will be stored")
parser.add_argument("--skipImport", action="store_const", const=True, default=False, help="When present, the script only exports from Bear to Markdown; it skips the import step.")
parser.add_argument("--excludeTag", action="append", default=[], help="Don't export notes with this tag. Can be used multiple times.")
parser.add_argument("--hideTags", action="store_const", const=True, default=False, help="Wrap tags in <!-- -->")

parsed_args = vars(parser.parse_args())


set_logging_on = True

# NOTE! if 'BearNotes' is left blank, all other files in my_sync_service will be deleted!! 
export_path = parsed_args.get("out")
no_export_tags = parsed_args.get("excludeTag")  # If a tag in note matches one in this list, it will not be exported.
hide_tags_in_comment_block = parsed_args.get("hideTags");

# NOTE! "export_path" is used for sync-back to Bear, so don't change this variable name!
multi_export = [(export_path, True)]  # only one folder output here. 
# Use if you want export to severa places like: Dropbox and OneDrive, etc. See below
# Sample for multi folder export:
# export_path_aux1 = os.path.join(HOME, 'OneDrive', 'BearNotes')
# export_path_aux2 = os.path.join(HOME, 'Box', 'BearNotes')

# NOTE! All files in export path not in Bear will be deleted if delete flag is "True"!
# Set this flag fo False only for folders to keep old deleted versions of notes
# multi_export = [(export_path, True), (export_path_aux1, False), (export_path_aux2, True)]

temp_path = os.path.join(HOME, 'Temp', 'BearExportTemp')  # NOTE! Do not change the "BearExportTemp" folder name!!!
bear_db = os.path.join(HOME, 'Library/Group Containers/9K33E3U3T4.net.shinyfrog.bear/Application Data/database.sqlite')
sync_backup = parsed_args.get("backup") # Backup of original note before sync to Bear.
log_file = os.path.join(sync_backup, 'bear_export_sync_log.txt')

# Paths used in image exports:
bear_image_path = os.path.join(HOME,
    'Library/Group Containers/9K33E3U3T4.net.shinyfrog.bear/Application Data/Local Files/Note Images')
assets_path = parsed_args.get("images") if parsed_args.get("images") else os.path.join(export_path, 'BearImages')

sync_ts = '.sync-time.log'
export_ts = '.export-time.log'

sync_ts_file = os.path.join(export_path, sync_ts)
sync_ts_file_temp = os.path.join(temp_path, sync_ts)
export_ts_file_exp = os.path.join(export_path, export_ts)
export_ts_file = os.path.join(temp_path, export_ts)

gettag_sh = os.path.join(HOME, 'temp/gettag.sh')
gettag_txt = os.path.join(HOME, 'temp/gettag.txt')


def main():
    init_gettag_script()
    if not parsed_args.get("skipImport"):
        sync_md_updates()
    if check_db_modified():
        delete_old_temp_files()
        note_count = export_markdown()
        write_time_stamp()
        rsync_files_from_temp()
        if export_image_repository and not export_as_textbundles:
            copy_bear_images()
        # notify('Export completed')
        write_log(str(note_count) + ' notes exported to: ' + export_path)
        exit(1)
    else:
        print('*** No notes needed exports')
        exit(0)


def write_log(message):
    if set_logging_on == True:
        if not os.path.exists(sync_backup):
            os.makedirs(sync_backup)
        time_stamp = datetime.datetime.now().strftime("%Y-%m-%d at %H:%M:%S")
        message = message.replace(export_path + '/', '')
        with open(log_file, 'a', encoding='utf-8') as f:
            f.write(time_stamp + ': ' + message + '\n')


def check_db_modified():
    if not os.path.exists(sync_ts_file):
        return True
    db_ts = get_file_date(bear_db)
    last_export_ts = get_file_date(export_ts_file_exp)
    return db_ts > last_export_ts


def export_markdown():
    with sqlite3.connect(bear_db) as conn:
        conn.row_factory = sqlite3.Row
        query = "SELECT * FROM `ZSFNOTE` WHERE `ZTRASHED` LIKE '0' AND `ZARCHIVED` LIKE '0'"
        c = conn.execute(query)
        note_count = 0
        for row in c:
            title = row['ZTITLE']
            md_text = row['ZTEXT'].rstrip()
            creation_date = row['ZCREATIONDATE']
            modified = row['ZMODIFICATIONDATE']
            uuid = row['ZUNIQUEIDENTIFIER']
            pk = row['Z_PK']
            filename = clean_title(title)
            file_list = []
            if make_tag_folders:
                file_list = sub_path_from_tag(temp_path, filename, md_text)
            else:
                is_excluded = False
                for no_tag in no_export_tags:
                    if ("#" + no_tag) in md_text:
                        is_excluded = True
                        break
                if not is_excluded:
                    file_list.append(os.path.join(temp_path, filename))
            if file_list:
                mod_dt = dt_conv(modified)
                md_text = hide_tags(md_text)
                md_text += '\n\n<!-- {BearID:' + uuid + '} -->\n'
                for filepath in file_list:
                    note_count += 1
                    # print(filepath)
                    if export_as_textbundles:
                        if check_image_hybrid(md_text):
                            make_text_bundle(md_text, filepath, mod_dt)                        
                        else:
                            write_file(filepath + '.md', md_text, mod_dt, creation_date)
                    elif export_image_repository:
                        md_proc_text = process_image_links(md_text, filepath, conn, pk)
                        write_file(filepath + '.md', md_proc_text, mod_dt, creation_date)
                    else:
                        write_file(filepath + '.md', md_text, mod_dt, creation_date)
    return note_count


def check_image_hybrid(md_text):
    if export_as_hybrids:
        if re.search(r'\[image:(.+?)\]', md_text):
            return True
        else:
            return False
    else:
        return True


def make_text_bundle(md_text, filepath, mod_dt):
    '''
    Exports as Textbundles with images included 
    '''
    bundle_path = filepath + '.textbundle'
    assets_path = os.path.join(bundle_path, 'assets')    
    if not os.path.exists(bundle_path):
        os.makedirs(bundle_path)
        os.makedirs(assets_path)
        
    info = '''{
    "transient" : true,
    "type" : "net.daringfireball.markdown",
    "creatorIdentifier" : "net.shinyfrog.bear",
    "version" : 2
    }'''
    matches = re.findall(r'\[image:(.+?)\]', md_text)
    for match in matches:
        image_name = match
        new_name = image_name.replace('/', '_')
        source = os.path.join(bear_image_path, image_name)
        target = os.path.join(assets_path, new_name)
        shutil.copy2(source, target)

    md_text = re.sub(r'\[image:(.+?)/(.+?)\]', r'![](assets/\1_\2)', md_text)
    write_file(bundle_path + '/text.md', md_text, mod_dt, 0)
    write_file(bundle_path + '/info.json', info, mod_dt, 0)
    os.utime(bundle_path, (-1, mod_dt))


def sub_path_from_tag(temp_path, filename, md_text):
    # Get tags in note:
    pattern1 = r'(?<!\S)\#([.\w\/\-]+)[ \n]?(?!([\/ \w]+\w[#]))'
    pattern2 = r'(?<![\S])\#([^ \d][.\w\/ ]+?)\#([ \n]|$)'
    if multi_tag_folders:
        # Files copied to all tag-folders found in note
        tags = []
        for matches in re.findall(pattern1, md_text):
            tag = matches[0]
            tags.append(tag)
        for matches2 in re.findall(pattern2, md_text):
            tag2 = matches2[0]
            tags.append(tag2)
        if len(tags) == 0:
            # No tags found, copy to root level only
            return [os.path.join(temp_path, filename)]
    else:
        # Only folder for first tag
        match1 =  re.search(pattern1, md_text)
        match2 =  re.search(pattern2, md_text)
        if match1 and match2:
            if match1.start(1) < match2.start(1):
                tag = match1.group(1)
            else:
                tag = match2.group(1)
        elif match1:
            tag = match1.group(1)
        elif match2:
            tag = match2.group(1)
        else:
            # No tags found, copy to root level only
            return [os.path.join(temp_path, filename)]
        tags = [tag]
    paths = [os.path.join(temp_path, filename)]
    for tag in tags:
        if tag == '/':
            continue
        if only_export_these_tags:
            export = False
            for export_tag in only_export_these_tags:
                if tag.lower().startswith(export_tag.lower()):
                    export = True
                    break
            if not export:
                continue
        for no_tag in no_export_tags:
            if tag.lower().startswith(no_tag.lower()):
                return []
        if tag.startswith('.'):
            # Avoid hidden path if it starts with a '.'
            sub_path = '_' + tag[1:]     
        else:
            sub_path = tag    
        tag_path = os.path.join(temp_path, sub_path)
        if not os.path.exists(tag_path):
            os.makedirs(tag_path)
        paths.append(os.path.join(tag_path, filename))      
    return paths


def process_image_links(md_text, filepath, conn, pk):
    image_map = None
    remaining_images = set()
    def replace_image_link(match):
        # We're only processing local assets.
        if match.group(2).startswith("http"):
            return match.group(0)

        nonlocal image_map
        if image_map is None:
            image_map = {}
            files = conn.execute("SELECT * FROM `ZSFNOTEFILE` WHERE ZNOTE = ?", (pk,))
            for row in files:
                filename = row["ZFILENAME"]
                uuid = row["ZUNIQUEIDENTIFIER"]
                out_file_path = os.path.relpath(assets_path, export_path) + f"/{uuid}/{filename}"
                image_map[filename] = out_file_path
                remaining_images.add(filename)

        # Markdown image URLs are percent-encoded, but the Bear database is not.
        image_filename = urllib.parse.unquote(match.group(2))
        out_file_path = image_map.get(image_filename)
        if out_file_path is None:
            print(f"WARNING: Note {filepath} has image {image_filename} which was not found in database. Skipping.")
            return match.group(0)
        remaining_images.remove(image_filename)
        encoded_out_file_path = urllib.parse.quote(out_file_path)
        return f"![{match.group(1)}]({encoded_out_file_path})"

    if remaining_images:
        print(f"WARNING: Note {filepath} has images in the database which weren't matched in the note: {remaining_images}")
    out_text = re.sub(r'!\[(.*?)\]\((.+?)\)', replace_image_link, md_text)
    return out_text


def restore_image_links(md_text):
    # TODO: add new external images to Bear when necessary
    if export_as_textbundles:
        return re.sub(r'!\[(.*?)\]\(assets/(.+?)_(.+?)( ".+?")?\) ?', r'[image:\2/\3]\4 \1', md_text)
    elif export_image_repository:
        relative_asset_path = os.path.relpath(assets_path, export_path)
        return re.sub(r'!\[(.*?)\]\(' + re.escape(relative_asset_path) + r'/(.+?)/(.+?)\)', r'![\1](\3)', md_text)
        


def copy_bear_images():
    # Image files copied to a common image repository
    subprocess.call(['rsync', '-r', '-t', '--delete', 
                    bear_image_path + "/", assets_path])


def write_time_stamp():
    # write to time-stamp.txt file (used during sync)
    write_file(export_ts_file, "Markdown from Bear written at: " +
               datetime.datetime.now().strftime("%Y-%m-%d at %H:%M:%S"), 0, 0)
    write_file(sync_ts_file_temp, "Markdown from Bear written at: " +
               datetime.datetime.now().strftime("%Y-%m-%d at %H:%M:%S"), 0, 0)


def hide_tags(md_text):
    # Hide tags from being seen as H1, by placing `period+space` at start of line:
    if hide_tags_in_comment_block:
        md_text =  re.sub(r'(\n)[ \t]*(\#[^\s#].*)', r'\1<!-- \2 -->', md_text)
    return md_text


def restore_tags(md_text):
    # Tags back to normal Bear tags, stripping the `period+space` at start of line:
    if hide_tags_in_comment_block:
        md_text =  re.sub(r'(\n)<!--[ \t]*(\#[^\s#].*?) -->', r'\1\2', md_text)
    return md_text


def clean_title(title):
    title = title[:225].strip()
    if title == "":
        title = "Untitled"
    title = re.sub(r'[\/\\:]', r'-', title)
    title = re.sub(r'-$', r'', title)    
    return title.strip()


def write_file(filename, file_content, modified, created):
    with open(filename, "w", encoding='utf-8') as f:
        f.write(file_content)
    if modified > 0:
        os.utime(filename, (-1, modified))
    if created > 0:
        newnum = dt_conv(created)
        dtdate = datetime.datetime.fromtimestamp(newnum)
        datestring = dtdate.strftime("%m/%d/%Y %H:%M:%S")
        command = 'SetFile -d "' + datestring + '" ' + shlex.quote(filename)
        subprocess.call(command, shell=True)


def read_file(file_name):
    with open(file_name, "r", encoding='utf-8') as f:
        file_content = f.read()
    return file_content


def get_file_date(filename):
    try:
        t = os.path.getmtime(filename)
        return t
    except:
        return 0


def dt_conv(dtnum):
    # Formula for date offset based on trial and error:
    hour = 3600 # seconds
    year = 365.25 * 24 * hour
    offset = year * 31 + hour * 6
    return dtnum + offset


def date_time_conv(dtnum):
    newnum = dt_conv(dtnum) 
    dtdate = datetime.datetime.fromtimestamp(newnum)
    #print(newnum, dtdate)
    return dtdate.strftime(' - %Y-%m-%d_%H%M')


def time_stamp_ts(ts):
    dtdate = datetime.datetime.fromtimestamp(ts)
    return dtdate.strftime('%Y-%m-%d at %H:%M') 


def date_conv(dtnum):
    dtdate = datetime.datetime.fromtimestamp(dtnum)
    return dtdate.strftime('%Y-%m-%d')


def delete_old_temp_files():
    # Deletes all files in temp folder before new export using "shutil.rmtree()":
    # NOTE! CAUTION! Do not change this function unless you really know shutil.rmtree() well!
    if os.path.exists(temp_path) and "BearExportTemp" in temp_path:
        # *** NOTE! Double checking that temp_path folder actually contains "BearExportTemp"
        # *** Because if temp_path is accidentally empty or root,
        # *** shutil.rmtree() will delete all files on your complete Hard Drive ;(
        shutil.rmtree(temp_path)
        # *** NOTE: USE rmtree() WITH EXTREME CAUTION!
    os.makedirs(temp_path)


def rsync_files_from_temp():
    # Moves markdown files to new folder using rsync:
    # This is a very important step! 
    # By first exporting all Bear notes to an emptied temp folder,
    # rsync will only update destination if modified or size have changed.
    # So only changed notes will be synced by Dropbox or OneDrive destinations.
    # Rsync will also delete notes on destination if deleted in Bear.
    # So doing it this way saves a lot of otherwise very complex programing.
    # Thank you very much, Rsync! ;)
    for (dest_path, delete) in multi_export:
        if not os.path.exists(dest_path):
            os.makedirs(dest_path)
        if delete:
            subprocess.call(['rsync', '-r', '-t', '--crtimes', '-E', '--delete',
                             '--exclude', 'BearImages/',
                             '--exclude', '.obsidian/',
                             '--exclude', '.Ulysses*',
                             '--exclude', '*.Ulysses_Public_Filter',
                             temp_path + "/", dest_path])
        else:
            subprocess.call(['rsync', '-r', '-t', '-E',
                            temp_path + "/", dest_path])


def sync_md_updates():
    updates_found = False
    if not os.path.exists(sync_ts_file) or not os.path.exists(export_ts_file):
        return False
    ts_last_sync = os.path.getmtime(sync_ts_file)
    ts_last_export = os.path.getmtime(export_ts_file)
    # Update synced timestamp file:
    update_sync_time_file(0)
    file_types = ('*.md', '*.txt', '*.markdown')
    for (root, dirnames, filenames) in os.walk(export_path):
        if '.obsidian' in dirnames:
            dirnames.remove('.obsidian')
        '''
        This step walks down into all sub folders, if any.
        '''
        for pattern in file_types:
            for filename in fnmatch.filter(filenames, pattern):
                md_file = os.path.join(root, filename)
                ts = os.path.getmtime(md_file)
                if ts > ts_last_sync:
                    if not updates_found:  # Yet
                        # Wait 5 sec at first for external files to finish downloading from dropbox.
                        # Otherwise images in textbundles might be missing in import:
                        time.sleep(5)
                    updates_found = True
                    md_text = read_file(md_file)
                    backup_ext_note(md_file)
                    if check_if_image_added(md_text, md_file):
                        textbundle_to_bear(md_text, md_file, ts)
                        write_log('Imported to Bear: ' + md_file)
                    else:
                        update_bear_note(md_text, md_file, ts, ts_last_export)
                        write_log('Bear Note Updated: ' + md_file)
    if updates_found:
        # Give Bear time to process updates:
        time.sleep(3)
        # Check again, just in case new updates synced from remote (OneDrive/Dropbox) 
        # during this process!
        # The logic is not 100% fool proof, but should be close to 99.99%
        sync_md_updates() # Recursive call
    return updates_found


def check_if_image_added(md_text, md_file):
    if not '.textbundle/' in md_file:
        return False
    matches = re.findall(r'!\[.*?\]\(assets/(.+?_).+?\)', md_text)
    for image_match in matches:
        'F89CDA3D-3FCC-4E92-88C1-CC4AF46FA733-10097-00002BBE9F7FF804_IMG_2280.JPG'
        if not re.match(r'[0-9A-F]{8}-([0-9A-F]{4}-){3}[0-9A-F]{12}-[0-9A-F]{3,5}-[0-9A-F]{16}_', image_match):
            return True
    return False        


def textbundle_to_bear(md_text, md_file, mod_dt):
    md_text = restore_tags(md_text)
    bundle = os.path.split(md_file)[0]
    match = re.search(r'\{BearID:(.+?)\}', md_text)
    if match:
        uuid = match.group(1)
        # Remove old BearID: from new note
        md_text = re.sub(r'\<\!-- ?\{BearID\:' + uuid + r'\} ?--\>', '', md_text).rstrip() + '\n'
        md_text = insert_link_top_note(md_text, 'Images added! Link to original note: ', uuid)
    else:
        # New textbundle (with images), add path as tag: 
        md_text = get_tag_from_path(md_text, bundle, export_path)
    write_file(md_file, md_text, mod_dt, 0)
    os.utime(bundle, (-1, mod_dt))
    subprocess.call(['open', '-a', '/applications/bear.app', bundle])
    time.sleep(0.5)


def backup_ext_note(md_file):
    if '.textbundle' in md_file:
        bundle_path = os.path.split(md_file)[0]
        bundle_name = os.path.split(bundle_path)[1]
        target = os.path.join(sync_backup, bundle_name)
        bundle_raw = os.path.splitext(target)[0]
        count = 2
        while os.path.exists(target):
            # Adding sequence number to identical filenames, preventing overwrite:
            target = bundle_raw + " - " + str(count).zfill(2) + ".textbundle"
            count += 1
        shutil.copytree(bundle_path, target)
    else:
        # Overwrite former bacups of incoming changes, only keeps last one:
        shutil.copy2(md_file, sync_backup + '/')


def update_sync_time_file(ts):
    write_file(sync_ts_file,
        "Checked for Markdown updates to sync at: " +
        datetime.datetime.now().strftime("%Y-%m-%d at %H:%M:%S"), ts, 0)


def update_bear_note(md_text, md_file, ts, ts_last_export):
    md_text = restore_tags(md_text)
    md_text = restore_image_links(md_text)
    uuid = ''
    match = re.search(r'\{BearID:(.+?)\}', md_text)
    sync_conflict = False
    if match:
        uuid = match.group(1)
        # Remove old BearID: from new note
        md_text = re.sub(r'\<\!-- ?\{BearID\:' + uuid + r'\} ?--\>', '', md_text).rstrip() + '\n'

        sync_conflict = check_sync_conflict(uuid, ts_last_export)
        if sync_conflict:
            link_original = 'bear://x-callback-url/open-note?id=' + uuid
            message = '::Sync conflict! External update: ' + time_stamp_ts(ts) + '::'
            message += '\n[Click here to see original Bear note](' + link_original + ')'
            x_create = 'bear://x-callback-url/create?show_window=no&open_note=no' 
            bear_x_callback(x_create, md_text, message, '')   
        else:
            # Regular external update
            orig_title = backup_bear_note(uuid)
            # message = '::External update: ' + time_stamp_ts(ts) + '::'   
            x_replace = 'bear://x-callback-url/add-text?show_window=no&open_note=no&mode=replace&id=' + uuid
            bear_x_callback(x_replace, md_text, '', orig_title)
            # # Trash old original note:
            # x_trash = 'bear://x-callback-url/trash?show_window=no&id=' + uuid
            # subprocess.call(["open", x_trash])
            # time.sleep(.2)
    else:
        # New external md Note, since no Bear uuid found in text: 
        # message = '::New external Note - ' + time_stamp_ts(ts) + '::' 
        md_text = get_tag_from_path(md_text, md_file, export_path)
        x_create = 'bear://x-callback-url/create?show_window=no' 
        bear_x_callback(x_create, md_text, '', '')
    return


def get_tag_from_path(md_text, md_file, root_path, inbox_for_root=False, extra_tag=''):
    # extra_tag should be passed as '#tag' or '#space tag#'
    path = md_file.replace(root_path, '')[1:]
    sub_path = os.path.split(path)[0]
    tags = []
    if '.textbundle' in sub_path:
        sub_path = os.path.split(sub_path)[0]
    if sub_path == '': 
        if inbox_for_root:
            tag = '#.inbox'
        else:
            tag = ''
    elif sub_path.startswith('_'):
        tag = '#.' + sub_path[1:].strip()
    else:
        tag = '#' + sub_path.strip()
    if ' ' in tag: 
        tag += "#"               
    if tag != '': 
        tags.append(tag)
    if extra_tag != '':
        tags.append(extra_tag)
    for tag in get_file_tags(md_file):
        tag = '#' + tag.strip()
        if ' ' in tag: tag += "#"                   
        tags.append(tag)
    return md_text.strip() + '\n\n' + ' '.join(tags) + '\n'


def get_file_tags(md_file):
    try:
        subprocess.call([gettag_sh, md_file, gettag_txt])
        text = re.sub(r'\\n\d{1,2}', r'', read_file(gettag_txt))
        tag_list = json.loads(text)
        return tag_list
    except:
        return []


open_config = NSWorkspaceOpenConfiguration.alloc().init()
open_config.setActivates_(False)

def bear_x_callback(x_command, md_text, message, orig_title):
    if message != '':
        lines = md_text.splitlines()
        lines.insert(1, message)
        md_text = '\n'.join(lines)
    if orig_title != '':
        lines = md_text.splitlines()
        title = re.sub(r'^#+ ', r'', lines[0])
        if title != orig_title:
            md_text = '\n'.join(lines)
        else:
            md_text = '\n'.join(lines[1:])        
    x_command_text = x_command + '&text=' + urllib.parse.quote(md_text)
    url = NSURL.URLWithString_(x_command_text)
    NSWorkspace.sharedWorkspace().openURL_configuration_completionHandler_(url, open_config, None)
    time.sleep(.2)


def check_sync_conflict(uuid, ts_last_export):
    conflict = False
    # Check modified date of original note in Bear sqlite db!
    with sqlite3.connect(bear_db) as conn:
        conn.row_factory = sqlite3.Row
        query = "SELECT * FROM `ZSFNOTE` WHERE `ZTRASHED` LIKE '0' AND `ZUNIQUEIDENTIFIER` LIKE '" + uuid + "'"
        c = conn.execute(query)
    for row in c:
        modified = row['ZMODIFICATIONDATE']
        uuid = row['ZUNIQUEIDENTIFIER']
        mod_dt = dt_conv(modified)
        conflict = mod_dt > ts_last_export
    return conflict


def backup_bear_note(uuid):
    # Get single note from Bear sqlite db!
    with sqlite3.connect(bear_db) as conn:
        conn.row_factory = sqlite3.Row
        query = "SELECT * FROM `ZSFNOTE` WHERE `ZUNIQUEIDENTIFIER` LIKE '" + uuid + "'"
        c = conn.execute(query)
    title = ''
    for row in c:  # Will only get one row if uuid is found!
        title = row['ZTITLE']
        md_text = row['ZTEXT'].rstrip()
        modified = row['ZMODIFICATIONDATE']
        mod_dt = dt_conv(modified)
        created = row['ZCREATIONDATE']
        cre_dt = dt_conv(created)
        md_text = insert_link_top_note(md_text, 'Link to updated note: ', uuid)
        dtdate = datetime.datetime.fromtimestamp(cre_dt)
        filename = clean_title(title) + dtdate.strftime(' - %Y-%m-%d_%H%M')
        if not os.path.exists(sync_backup):
            os.makedirs(sync_backup)
        file_part = os.path.join(sync_backup, filename) 
        # This is a Bear text file, not exactly markdown.
        backup_file = file_part + ".txt"
        count = 2
        while os.path.exists(backup_file):
            # Adding sequence number to identical filenames, preventing overwrite:
            backup_file = file_part + " - " + str(count).zfill(2) + ".txt"
            count += 1
        write_file(backup_file, md_text, mod_dt, created)
        filename2 = os.path.split(backup_file)[1]
        write_log('Original to sync_backup: ' + filename2)
    return title


def insert_link_top_note(md_text, message, uuid):
    lines = md_text.split('\n')
    title = re.sub(r'^#{1,6} ', r'', lines[0])
    link = '::' + message + '[' + title + '](bear://x-callback-url/open-note?id=' + uuid + ')::'        
    lines.insert(1, link) 
    return '\n'.join(lines)


def init_gettag_script():
    gettag_script = \
    '''#!/bin/bash
    if [[ ! -e $1 ]] ; then
    echo 'file missing or not specified'
    exit 0
    fi
    JSON="$(xattr -p com.apple.metadata:_kMDItemUserTags "$1" | xxd -r -p | plutil -convert json - -o -)"
    echo $JSON > "$2"
    '''
    temp = os.path.join(HOME, 'temp')
    if not os.path.exists(temp):
        os.makedirs(temp)
    write_file(gettag_sh, gettag_script, 0, 0)
    subprocess.call(['chmod', '777', gettag_sh])
    

def notify(message):
    title = "ul_sync_md.py"
    try:
        # Uses "terminal-notifier", download at:
        # https://github.com/julienXX/terminal-notifier/releases/download/2.0.0/terminal-notifier-2.0.0.zip
        # Only works with MacOS 10.11+
        subprocess.call(['/Applications/terminal-notifier.app/Contents/MacOS/terminal-notifier',
                         '-message', message, "-title", title, '-sound', 'default'])
    except:
        write_log('"terminal-notifier.app" is missing!')        
    return


if __name__ == '__main__':
    main()


================================================
FILE: bear_import.py
================================================
# encoding=utf-8
# python3.6
# bear_import.py
# Developed with Visual Studio Code with MS Python Extension.

'''
# Markdown import to Bear from folder  
Version 1.0.0 - 2018-02-10 at 17:37 EST
github/rovest, rorves@twitter

## NEW import function: 
* Imports markdown or textbundles from nested folders under a `BearImport/input/' folder
* Foldernames are converted to Bear tags
* Also imports MacOS file tags as Bear tags
* Imported notes are also tagged with `#.imported/yyyy-MM-dd` for convenience.
* Import-files are then cleared to a `BearImport/done/' folder
* Use for email input to Bear with Zapier's "Gmail to Dropbox" zap.
* Or for import of nested groups and sheets from Ulysses, images and keywords included.
'''

my_sync_service = 'Dropbox'  # Change 'Dropbox' to 'Box', 'Onedrive',
    # or whatever folder of sync service you need.
    # Your user "Home" folder is added below.

use_filename_as_title = False  # Set to `True` if importing Simplenotes synced with nvALT.
set_logging_on = True

# This tag is added for convenience (easy deletion of imported notes they are not wanted.)
# (Easier to delete one tag, than finding a bunch of tagless imported notes.)

import datetime
import re
import subprocess
import urllib.parse
import os
import time
import shutil
import fnmatch
import json

import_tag = '#.imported/' + datetime.datetime.now().strftime('%Y-%m-%d')
# import_tag = ''  # Blank if not needed

HOME = os.getenv('HOME', '')

# Import folder for files from other apps, 
# or incoming emails via "Gmail to Dropbox" Zapier zap or IFTTT
bear_import = os.path.join(HOME, my_sync_service, 'BearImport')
import_path = os.path.join(bear_import, 'input')
import_done = os.path.join(bear_import, 'done')

gettag_sh = os.path.join(HOME, 'temp/gettag.sh')
gettag_txt = os.path.join(HOME, 'temp/gettag.txt')


def main():
    if not os.path.exists(import_path):
        os.makedirs(import_path)
        print('New path, use it for import to Bear:', import_path)
        return False
    if not os.path.exists(import_done):
        os.makedirs(import_done)
    init_gettag_script()
    count = import_external_files()
    print(str(count), 'files imported.  Job done!')


def import_external_files():
    files_found = False
    file_types = ('*.md', '*.txt', '*.markdown')
    count = 0
    time.sleep(3)  # Wait a little bit after being triggered by Automator Folder Action
    for (root, dirnames, filenames) in os.walk(import_path):
        '''
        This step walks down into all sub folders, if any.
        '''
        for pattern in file_types:
            for filename in fnmatch.filter(filenames, pattern):
                if not files_found:  # Yet
                    # Wait 5 sec at first for external files to finish downloading from dropbox.
                    # Otherwise images in textbundles might be missing in import:
                    time.sleep(5)
                files_found = True
                md_file = os.path.join(root, filename)
                mod_dt = os.path.getmtime(md_file)
                md_text = read_file(md_file)
                if pattern == '*.txt':
                    # Replace rich text bullets to markdown:
                    # (When using with IFTTT or Zapier and Gmail to Dropbox zap.)
                    md_text = md_text.replace('\n• ', '\n- ')
                    md_text = md_text.replace('\n    • ', '\n\t- ')
                    md_text = md_text.replace('\n        • ', '\n\t\t- ')
                if re.search(r'!\[.*?\]\(assets/.+?\)', md_text) \
                    and '.textbundle/' in md_file:
                    # New textbundle with images:
                    bundle = os.path.split(md_file)[0]
                    md_text = get_tag_from_path(md_text, bundle, import_path, False)
                    write_file(md_file, md_text, mod_dt)
                    os.utime(bundle, (-1, mod_dt))
                    subprocess.call(['open', '-a', '/applications/bear.app', bundle])
                    time.sleep(0.5)
                    move_import_to_done(bundle, import_path, import_done)
                else:
                    title = ''
                    # No images, import markdown only even if textbundle:
                    if '.textbundle/' in md_file:
                        file_bundle = os.path.split(md_file)[0]
                    else:
                        file_bundle = md_file
                        if use_filename_as_title:
                            title = os.path.splitext(os.path.split(md_file)[1])[0]
                    md_text = get_tag_from_path(md_text, file_bundle, import_path, False)                    
                    x_create = 'bear://x-callback-url/create?show_window=no' 
                    bear_x_callback(x_create, md_text, title)
                    move_import_to_done(file_bundle, import_path, import_done)
                write_log('Imported to Bear: ', file_bundle)
                count += 1
    if files_found:
        # cleanup empty input sub folders here ??? 
        # But quite tricky since new files may appear. Bette to do that manually when needed.
        # Recursive call to look for leftovers/newly downloaded files: 
        count += import_external_files()
    return count


def move_import_to_done(file_bundle, import_path, import_done):
    file_path = file_bundle.replace(import_path + '/', '')
    sub_path = os.path.split(file_path)[0]
    dest_path = os.path.join(import_done, sub_path)
    if not os.path.exists(dest_path):
        os.makedirs(dest_path)
    count = 2
    file_name = os.path.split(file_bundle)[1]
    dest_file = os.path.join(dest_path, file_name)
    (file_raw, ext) = os.path.splitext(file_name)
    while os.path.exists(dest_file):
        # Adding sequence number to identical filenames, preventing overwrite:
        dest_file = os.path.join(dest_path, file_raw + " - " + str(count).zfill(2) + ext)
        count += 1
    # dest_path = os.path.split(dest_file)[0]
    shutil.move(file_bundle, dest_file)


def get_tag_from_path(md_text, file_bundle, root_path, inbox_for_root=True):
    path = file_bundle.replace(root_path, '')[1:]
    sub_path = os.path.split(path)[0]
    tags = []
    if sub_path == '': 
        if inbox_for_root:
            tag = '#.inbox'
        else:
            tag = ''
    elif sub_path.startswith('_'):
        tag = '#.' + sub_path[1:].strip()
    else:
        tag = '#' + sub_path.strip()
    if ' ' in tag: 
        tag += "#"               
    if tag != '': 
        tags.append(tag)
    if import_tag != '':
        tags.append(import_tag)
    for tag in get_file_tags(file_bundle):
        tag = '#' + tag.strip()
        if ' ' in tag: tag += "#"                   
        tags.append(tag)
    return md_text.strip() + '\n\n' + ' '.join(tags) + '\n'


def get_file_tags(file_bundle):
    try:
        subprocess.call([gettag_sh, file_bundle, gettag_txt])
        tags_raw = read_file(gettag_txt)
        tags_text = re.sub(r'\\n\d{1,2}', r'', tags_raw)
        tag_list = json.loads(tags_text)
        return tag_list
    except:
        return []


def bear_x_callback(x_command, md_text, title):
    if title != '' and not title.startswith("#"):
        md_text = '# ' + title + '\n' + md_text
    x_command_text = x_command + '&text=' + urllib.parse.quote(md_text)
    subprocess.call(["open", x_command_text])
    time.sleep(.2)


def init_gettag_script():
    gettag_script = \
    '''#!/bin/bash
    if [[ ! -e $1 ]] ; then
    echo 'file missing or not specified'
    exit 0
    fi
    JSON="$(xattr -p com.apple.metadata:_kMDItemUserTags "$1" | xxd -r -p | plutil -convert json - -o -)"
    echo $JSON > "$2"
    '''
    temp = os.path.join(HOME, 'temp')
    if not os.path.exists(temp):
        os.makedirs(temp)
    write_file(gettag_sh, gettag_script, 0)
    subprocess.call(['chmod', '777', gettag_sh])


def write_log(message, file_bundle):
    if set_logging_on == True:
        log_file = os.path.join(import_done, 'bear_import_log.txt')
        time_stamp = datetime.datetime.now().strftime("%Y-%m-%d at %H:%M:%S")
        # file_name = os.path.split(file_path)[1]
        file_path = file_bundle.replace(import_path + '/', '')
        with open(log_file, 'a', encoding='utf-8') as f:
            f.write(time_stamp + ': ' + message + file_path +'\n')


def write_file(filename, file_content, modified):
    with open(filename, "w", encoding='utf-8') as f:
        f.write(file_content)
    if modified > 0:
        os.utime(filename, (-1, modified))


def read_file(file_name):
    with open(file_name, "r", encoding='utf-8') as f:
        file_content = f.read()
    return file_content


if __name__ == '__main__':
    main()
Download .txt
gitextract_jcda48_b/

├── .gitattributes
├── .gitignore
├── Bear Import.md
├── LICENSE
├── README.md
├── bear_export_sync.py
└── bear_import.py
Download .txt
SYMBOL INDEX (47 symbols across 2 files)

FILE: bear_export_sync.py
  function main (line 132) | def main():
  function write_log (line 151) | def write_log(message):
  function check_db_modified (line 161) | def check_db_modified():
  function export_markdown (line 169) | def export_markdown():
  function check_image_hybrid (line 214) | def check_image_hybrid(md_text):
  function make_text_bundle (line 224) | def make_text_bundle(md_text, filepath, mod_dt):
  function sub_path_from_tag (line 254) | def sub_path_from_tag(temp_path, filename, md_text):
  function process_image_links (line 314) | def process_image_links(md_text, filepath, conn, pk):
  function restore_image_links (line 349) | def restore_image_links(md_text):
  function copy_bear_images (line 359) | def copy_bear_images():
  function write_time_stamp (line 365) | def write_time_stamp():
  function hide_tags (line 373) | def hide_tags(md_text):
  function restore_tags (line 380) | def restore_tags(md_text):
  function clean_title (line 387) | def clean_title(title):
  function write_file (line 396) | def write_file(filename, file_content, modified, created):
  function read_file (line 409) | def read_file(file_name):
  function get_file_date (line 415) | def get_file_date(filename):
  function dt_conv (line 423) | def dt_conv(dtnum):
  function date_time_conv (line 431) | def date_time_conv(dtnum):
  function time_stamp_ts (line 438) | def time_stamp_ts(ts):
  function date_conv (line 443) | def date_conv(dtnum):
  function delete_old_temp_files (line 448) | def delete_old_temp_files():
  function rsync_files_from_temp (line 460) | def rsync_files_from_temp():
  function sync_md_updates (line 484) | def sync_md_updates():
  function check_if_image_added (line 527) | def check_if_image_added(md_text, md_file):
  function textbundle_to_bear (line 538) | def textbundle_to_bear(md_text, md_file, mod_dt):
  function backup_ext_note (line 556) | def backup_ext_note(md_file):
  function update_sync_time_file (line 573) | def update_sync_time_file(ts):
  function update_bear_note (line 579) | def update_bear_note(md_text, md_file, ts, ts_last_export):
  function get_tag_from_path (line 616) | def get_tag_from_path(md_text, md_file, root_path, inbox_for_root=False,...
  function get_file_tags (line 645) | def get_file_tags(md_file):
  function bear_x_callback (line 658) | def bear_x_callback(x_command, md_text, message, orig_title):
  function check_sync_conflict (line 676) | def check_sync_conflict(uuid, ts_last_export):
  function backup_bear_note (line 691) | def backup_bear_note(uuid):
  function insert_link_top_note (line 724) | def insert_link_top_note(md_text, message, uuid):
  function init_gettag_script (line 732) | def init_gettag_script():
  function notify (line 749) | def notify(message):

FILE: bear_import.py
  function main (line 56) | def main():
  function import_external_files (line 68) | def import_external_files():
  function move_import_to_done (line 126) | def move_import_to_done(file_bundle, import_path, import_done):
  function get_tag_from_path (line 144) | def get_tag_from_path(md_text, file_bundle, root_path, inbox_for_root=Tr...
  function get_file_tags (line 170) | def get_file_tags(file_bundle):
  function bear_x_callback (line 181) | def bear_x_callback(x_command, md_text, title):
  function init_gettag_script (line 189) | def init_gettag_script():
  function write_log (line 206) | def write_log(message, file_bundle):
  function write_file (line 216) | def write_file(filename, file_content, modified):
  function read_file (line 223) | def read_file(file_name):
Condensed preview — 7 files, each showing path, character count, and a content snippet. Download the .json file or copy for the full structured content (51K chars).
[
  {
    "path": ".gitattributes",
    "chars": 66,
    "preview": "# Auto detect text files and perform LF normalization\n* text=auto\n"
  },
  {
    "path": ".gitignore",
    "chars": 10,
    "preview": ".DS_Store\n"
  },
  {
    "path": "Bear Import.md",
    "chars": 2402,
    "preview": "## Bear Markdown and textbundle import – with tags from file and folder.\n\n***bear_import.py***  \n*Version 1.0.0 - 2018-0"
  },
  {
    "path": "LICENSE",
    "chars": 1063,
    "preview": "MIT License\n\nCopyright (c) 2018 rovest\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof "
  },
  {
    "path": "README.md",
    "chars": 6206,
    "preview": "## Markdown export and sync of Bear notes\n\n***bear_export_sync.py***   \n*Version 1.4, 2020-01-11*\n\nPython script for exp"
  },
  {
    "path": "bear_export_sync.py",
    "chars": 30583,
    "preview": "# encoding=utf-8\n# python3.6\n# bear_export_sync.py\n# Developed with Visual Studio Code with MS Python Extension.\n\nimport"
  },
  {
    "path": "bear_import.py",
    "chars": 8674,
    "preview": "# encoding=utf-8\n# python3.6\n# bear_import.py\n# Developed with Visual Studio Code with MS Python Extension.\n\n'''\n# Markd"
  }
]

About this extraction

This page contains the full source code of the andymatuschak/Bear-Markdown-Export GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 7 files (47.9 KB), approximately 12.1k tokens, and a symbol index with 47 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.

Copied to clipboard!