[
  {
    "path": ".gitattributes",
    "content": "# Auto detect text files and perform LF normalization\n* text=auto\n"
  },
  {
    "path": ".gitignore",
    "content": ".DS_Store\n"
  },
  {
    "path": "Bear Import.md",
    "content": "## Bear Markdown and textbundle import – with tags from files and folders\n\n***bear_import.py***  \n*Version 1.0.0 - 2018-02-10 at 17:37 EST*\n\n*See also:* **[bear_export_sync.py](https://github.com/rovest/Bear-Markdown-Export/blob/master/README.md)** *for export with sync-back.*\n\n\n### Features \n\n* Imports markdown files or textbundles from nested folders under a `BearImport/Input/` folder\n* Folder names are converted to Bear tags\n* Also imports macOS file tags as Bear tags\n* Imported notes are also tagged with `#.imported/yyyy-MM-dd` for convenience.\n* Imported files are then moved to a `BearImport/done/` folder\n* Use it for email input to Bear with Zapier's \"Gmail to Dropbox\" zap.\n* Or for importing nested groups and sheets from Ulysses, images and keywords included.\n\n\n### Trigger script with Automator Folder Action\n\n1. Create a new Automator file as a `Folder Action` \n2. Set `Folder action receives files and folders added to`: `{user}/Dropbox/BearImport/Input`\n3. Add the action `Run Shell Script` and choose `/bin/bash`\n4. Insert one line with the full paths to python and the script (quote any path that contains spaces!):  \n`/Library/Frameworks/Python.framework/Versions/3.6/bin/python3.6 \"/Users/username/scripts/bear_import.py\"`\n5. Save it as `Bear Import` or whatever you choose.\n\nOr skip all this and run it manually :)\n\n\n### Get mail to Bear with \"Zapier Gmail to Dropbox\" action\n\n1. Create a free zapier.com account.\n2. Use a dedicated Gmail account, or set up a filter assigning a label used by Zapier. \n3. Make a Zapier zap. See: [Add new Gmail emails to Dropbox as text files](https://zapier.com/apps/dropbox/integrations/gmail/10323/add-new-gmail-emails-to-dropbox-as-text-files)\n\t1. Set the zap to monitor the inbox with the label (assigned by the filter in step 2)\n\t2. 
Set the zap's Dropbox output to `{user}/Dropbox/BearImport/Input` \n\n- The zap will now check for new email (with a matching Gmail label) every 15 minutes, and the script above will import it to Bear.\n- Alternatively, on iOS, use this workflow (imports to Bear from the same Dropbox folder): [Gmail-DB zap to Bear](https://workflow.is/workflows/827b9b2518d5476ca0158a67d5b492fa)\n\n### Import from Ulysses’ external folders on Mac\n\n1. Add `{user}/Dropbox/BearImport/Input` as an external folder\n2. Edit the folder's settings to `.textbundle` and `Inline Links`!\n3. Drag any library group to this folder in Ulysses' sidebar.\n4. Voilà – imports to Bear with images and tags (both from group names and keywords).\n\n\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2018 rovest\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "## Markdown export and sync of Bear notes\n\n***bear_export_sync.py***   \n*Version 1.4, 2020-01-11*\n\nA Python script for export and round-trip sync of Bear's notes to OneDrive, Dropbox, etc. Edit the notes online with [StackEdit](https://stackedit.io/app), or use a markdown editor like *Typora* on Windows or a suitable app on Android. Remote edits and new notes get synced back into Bear with this script.\n\n**See also: [Bear Markdown and textbundle import – with tags from file and folder](https://github.com/rovest/Bear-Markdown-Export/blob/master/Bear%20Import.md)**\n\nSet up seamless syncing with Ulysses’ external folders on Mac, with images included!  \nWrite and add photos in Bear, then reorder, glue, and publish, export, or print with styles in Ulysses—  \nbears and butterflies are best friends ;)  \n(PS. The manual order you set for notes in Ulysses' external folder is maintained during syncs, unless the title is changed.) \n\nSuitable for use with https://github.com/andymatuschak/note-link-janitor.\n\nBEAR IN MIND! This is a free-to-use version, so please improve or modify it as needed. But do be careful! Both `rsync` and `shutil.rmtree`, used here, are powerful commands that can wipe out a whole folder tree, or even your entire hard drive, if paths are set incorrectly! To be safe, take a fresh backup of both Bear and your Mac before the first run.\n\n*See also: [Bear Power Pack](https://github.com/rovest/Bear-Power-Pack/blob/master/README.md)*\n\n## Usage\n\n```\npython bear_export_sync.py --out ~/Notes/Bear --backup ~/Notes/Backup\n```\n\nSee `--help` for more.\n\n## Features\n\n* Bear notes exported as plain Markdown or Textbundles with images.\n* Syncs external edits back to Bear with original image links intact. \n* New external `.md` files or `.textbundles` are added.  \n(Tags are created from subfolder names)\n* Export option: Make nested folders from tags.   
\nFor the first tag only, or for all tags (duplicates notes)\n* Export option: Include or exclude export of notes with specific tags.\n* Export option: Export as `.textbundles` with images included. \n* Or as `.md` with links to a common image repository. \n* Export option: Hide tags in HTML comments like `<!-- #mytag -->` if `hide_tags_in_comment_block = True`\n* **NEW** Hybrid export: `.textbundles` for notes with images, otherwise regular `.md` (Makes it easier to browse and edit on other platforms.)\n* **NEW** Writes a log to `bear_export_sync_log.txt` in the `BearSyncBackup` folder.\n\nEdit your Bear notes online in a browser on [OneDrive.com](https://onedrive.live.com). It has an okay editor for plain text/markdown. Or use [StackEdit](https://stackedit.io/app), an amazing online markdown editor that can sync with *Dropbox* or *Google Drive*.\n\nRead and edit your Bear notes on *Windows* or *Android* with any markdown editor of your choice. Remote edits or new notes will be synced back into Bear again. *Typora* works great on Windows, and displays the images of textbundles as well.\n\nNOTE! If syncing with Ulysses’ external folders on Mac, remember to edit that folder's settings to `.textbundle` and `Inline Links`!\n\nRun the script manually, or add it to a cron job for automatic syncing (every 5 – 15 minutes, or whatever you prefer).  \n([LaunchD Task Scheduler](https://itunes.apple.com/us/app/launchd-task-scheduler/id620249105?mt=12) is easy to configure and works very well for this) \n\n\n### Syncs external edits back into Bear\nThe script first checks for external edits in Markdown files or textbundles (previously exported from Bear as described below):\n\n* It replaces the text in the original note with the `bear://x-callback-url/add-text?mode=replace` command   \n(keeping the original note ID and creation date)  \nIf the title has changed, the new title is added just below the original title.  
\n(`mode=replace` does not replace the title)\n* The original note in the `sqlite` database and the external edit are both backed up as markdown files to the BearSyncBackup folder before import to Bear.\n* If there is a sync conflict, both the original and the new version will be in Bear (the new one with a sync-conflict message and a link to the original).\n* New notes created online are simply added to Bear  \n(with the `bear://x-callback-url/create` command)\n* If a textbundle gets new images from an external app, it will be opened and imported as a new note in Bear, with a message and a link to the original note.  \n(The `subprocess.call(['open', '-a', '/applications/bear.app', bundle])` command is used for this)\n\n\n### Markdown export to Dropbox, OneDrive, or other services\nThe script then exports all notes from Bear's database.sqlite as plain markdown files:\n\n* Checks the modification timestamp on database.sqlite, so it exports only when needed.\n* Sets the Bear note's modification date on exported markdown files.\n* Appends the Bear note's creation date to the filename to avoid “title-filename collisions”\n* Note IDs are included at the bottom of the markdown files to match the original note on sync-back:  \n\t{BearID:730A5BD2-0245-4EF7-BE16-A5217468DF0E-33519-0000429ADFD9221A}  \n(these IDs are stripped off again when synced back into Bear)\n* Uses rsync for copying (from a temp folder), so only changed notes will be synced to Dropbox (or other sync services)\n* rsync also takes care of deleting trashed notes\n* \"Hides\" tags from being displayed as H1 in other markdown apps by adding `period+space` in front of the first tag on a line:   \n`. 
#bear #idea #python`   \n* Or hide tags in HTML comment blocks like `<!-- #mytag -->` if `hide_tags_in_comment_block = True`   \n(these are stripped off again when synced back into Bear)\n* Makes subfolders named after the first tag in a note if `make_tag_folders = True`\n* Files can now be copied to multiple tag folders if `multi_tag_folders = True`\n* Export can now be restricted to a list of specific tags: `only_export_these_tags = ['bear/github', 'writings']`  \nor leave the list empty for all notes: `only_export_these_tags = []`\n* Can export and link to images in a common image repository\n* Or export as textbundles with images included \n\n\nYou have Bear on your Mac but also want your notes on your Android phone, or on a Linux or Windows machine at your office. Or you want them available online in a browser from any desktop computer. Here is a solution (call it a workaround) for now, until Bear comes with an online, Windows, or Android solution ;)\n\nHappy syncing! ;)\n"
  },
  {
    "path": "bear_export_sync.py",
    "content": "# encoding=utf-8\n# python3.6\n# bear_export_sync.py\n# Developed with Visual Studio Code with MS Python Extension.\n\nimport shlex\nimport objc\nfrom AppKit import NSWorkspace, NSWorkspaceOpenConfiguration, NSURL\n\n'''\n# Markdown export from Bear sqlite database \nVersion 1.4, 2020-01-11\nmodified by: github/andymatuschak, andy_matuschak@twitter\noriginal author: github/rovest, rorves@twitter\n\nSee also: bear_import.py, the auto-import-to-Bear script.\n\n## Sync external updates:\nFirst checks for changes in external Markdown files (previously exported from Bear)\n* Replaces the text in the original note with the callback-url replace command   \n  (keeping the original creation date)\n  If the title changed, the new title is added just below the original title\n* New notes are added to Bear (with the x-callback-url create command)\n* New notes get tags from sub folder names, or `#.inbox` if at root\n* Backs up the original note as a file to the BearSyncBackup folder  \n  (unless there is a sync conflict; then both notes will be there)\n\n## Export:\nThen exports Markdown from the Bear sqlite db.\n* check_db_modified() on database.sqlite decides if an export is needed\n* Uses rsync for copying, so only markdown files of changed sheets will be updated  \n  and synced by Dropbox (or other sync services)\n* \"Hides\" tags with `period+space` at the beginning of a line, so `. #tag` does not appear as H1 in other apps.   
\n  (This is removed again on sync-back, see above)\n* Or instead hide tags in HTML comment blocks like `<!-- #mytag -->` if `hide_tags_in_comment_block = True`\n* Makes subfolders named after the first tag in a note if `make_tag_folders = True`\n* Files can now be copied to multiple tag folders if `multi_tag_folders = True`\n* Export can now be restricted to a list of specific tags: `only_export_these_tags = ['bear/github', 'writings']`  \nor leave the list empty for all notes: `only_export_these_tags = []`\n* Can export and link to images in a common image repository\n* Or export as textbundles with images included \n'''\n\nmake_tag_folders = False  # Exports to folders using the first tag only, if `multi_tag_folders = False`\nmulti_tag_folders = True  # Copies notes to all 'tag-paths' found in the note!\n                          # Only active if `make_tag_folders = True`\nhide_tags_in_comment_block = False  # Hide tags in HTML comments: `<!-- #mytag -->`\n\n# This list and the `--excludeTag` option are more or less mutually exclusive, so use only one of them.\n# (You can use both if you have some nested tags where that makes sense.)\n# Also, they only work if `make_tag_folders = True`.\nonly_export_these_tags = []  # Leave this list empty for all notes! See below for a sample.\n# only_export_these_tags = ['bear/github', 'writings'] \n\nexport_as_textbundles = False  # Exports as Textbundles with images included\nexport_as_hybrids = True  # Exports as .textbundle only if images are included, otherwise as .md\n                          # Only used if `export_as_textbundles = True`\nexport_image_repository = True  # Export all notes as md but link images to \n                                 # a common repository exported to: `assets_path` \n                                 # Only used if `export_as_textbundles = False`\n\nimport os\nHOME = os.getenv('HOME', '')\ndefault_out_folder = os.path.join(HOME, \"Work\", \"BearNotes\")\ndefault_backup_folder = os.path.join(HOME, \"Work\", \"BearSyncBackup\")\n\n# NOTE! 
Your user 'HOME' path and '/BearNotes' are added below!\n# NOTE! So do not change anything below here!!!\n\nimport sqlite3\nimport datetime\nimport re\nimport subprocess\nimport urllib.parse\nimport time\nimport shutil\nimport fnmatch\nimport json\nimport argparse\n\nparser = argparse.ArgumentParser(description=\"Sync Bear notes\")\nparser.add_argument(\"--out\", default=default_out_folder, help=\"Path where Bear notes will be synced\")\nparser.add_argument(\"--backup\", default=default_backup_folder, help=\"Path where conflicts will be backed up (must be outside of --out)\")\nparser.add_argument(\"--images\", default=None, help=\"Path where images will be stored\")\nparser.add_argument(\"--skipImport\", action=\"store_const\", const=True, default=False, help=\"When present, the script only exports from Bear to Markdown; it skips the import step.\")\nparser.add_argument(\"--excludeTag\", action=\"append\", default=[], help=\"Don't export notes with this tag. Can be used multiple times.\")\nparser.add_argument(\"--hideTags\", action=\"store_const\", const=True, default=False, help=\"Wrap tags in <!-- -->\")\n\nparsed_args = vars(parser.parse_args())\n\n\nset_logging_on = True\n\n# NOTE! If the export path is left blank, all other files in the sync folder will be deleted!! \nexport_path = parsed_args.get(\"out\")\nno_export_tags = parsed_args.get(\"excludeTag\")  # If a tag in a note matches one in this list, it will not be exported.\nhide_tags_in_comment_block = parsed_args.get(\"hideTags\")\n\n# NOTE! \"export_path\" is used for sync-back to Bear, so don't change this variable name!\nmulti_export = [(export_path, True)]  # Only one output folder here. \n# Use this if you want to export to several places, like Dropbox and OneDrive, etc. See below.\n# Sample for multi-folder export:\n# export_path_aux1 = os.path.join(HOME, 'OneDrive', 'BearNotes')\n# export_path_aux2 = os.path.join(HOME, 'Box', 'BearNotes')\n\n# NOTE! 
All files in the export path that are not in Bear will be deleted if the delete flag is \"True\"!\n# Set this flag to False only for folders where you want to keep old, deleted versions of notes.\n# multi_export = [(export_path, True), (export_path_aux1, False), (export_path_aux2, True)]\n\ntemp_path = os.path.join(HOME, 'Temp', 'BearExportTemp')  # NOTE! Do not change the \"BearExportTemp\" folder name!!!\nbear_db = os.path.join(HOME, 'Library/Group Containers/9K33E3U3T4.net.shinyfrog.bear/Application Data/database.sqlite')\nsync_backup = parsed_args.get(\"backup\")  # Backup of the original note before sync to Bear.\nlog_file = os.path.join(sync_backup, 'bear_export_sync_log.txt')\n\n# Paths used in image exports:\nbear_image_path = os.path.join(HOME,\n    'Library/Group Containers/9K33E3U3T4.net.shinyfrog.bear/Application Data/Local Files/Note Images')\nassets_path = parsed_args.get(\"images\") if parsed_args.get(\"images\") else os.path.join(export_path, 'BearImages')\n\nsync_ts = '.sync-time.log'\nexport_ts = '.export-time.log'\n\nsync_ts_file = os.path.join(export_path, sync_ts)\nsync_ts_file_temp = os.path.join(temp_path, sync_ts)\nexport_ts_file_exp = os.path.join(export_path, export_ts)\nexport_ts_file = os.path.join(temp_path, export_ts)\n\ngettag_sh = os.path.join(HOME, 'temp/gettag.sh')\ngettag_txt = os.path.join(HOME, 'temp/gettag.txt')\n\n\ndef main():\n    init_gettag_script()\n    if not parsed_args.get(\"skipImport\"):\n        sync_md_updates()\n    if check_db_modified():\n        delete_old_temp_files()\n        note_count = export_markdown()\n        write_time_stamp()\n        rsync_files_from_temp()\n        if export_image_repository and not export_as_textbundles:\n            copy_bear_images()\n        # notify('Export completed')\n        write_log(str(note_count) + ' notes exported to: ' + export_path)\n        exit(1)\n    else:\n        print('*** No notes needed export')\n        exit(0)\n\n\ndef write_log(message):\n    if set_logging_on:\n        if not 
os.path.exists(sync_backup):\n            os.makedirs(sync_backup)\n        time_stamp = datetime.datetime.now().strftime(\"%Y-%m-%d at %H:%M:%S\")\n        message = message.replace(export_path + '/', '')\n        with open(log_file, 'a', encoding='utf-8') as f:\n            f.write(time_stamp + ': ' + message + '\\n')\n\n\ndef check_db_modified():\n    if not os.path.exists(sync_ts_file):\n        return True\n    db_ts = get_file_date(bear_db)\n    last_export_ts = get_file_date(export_ts_file_exp)\n    return db_ts > last_export_ts\n\n\ndef export_markdown():\n    with sqlite3.connect(bear_db) as conn:\n        conn.row_factory = sqlite3.Row\n        query = \"SELECT * FROM `ZSFNOTE` WHERE `ZTRASHED` LIKE '0' AND `ZARCHIVED` LIKE '0'\"\n        c = conn.execute(query)\n        note_count = 0\n        for row in c:\n            title = row['ZTITLE']\n            md_text = row['ZTEXT'].rstrip()\n            creation_date = row['ZCREATIONDATE']\n            modified = row['ZMODIFICATIONDATE']\n            uuid = row['ZUNIQUEIDENTIFIER']\n            pk = row['Z_PK']\n            filename = clean_title(title)\n            file_list = []\n            if make_tag_folders:\n                file_list = sub_path_from_tag(temp_path, filename, md_text)\n            else:\n                is_excluded = False\n                for no_tag in no_export_tags:\n                    if (\"#\" + no_tag) in md_text:\n                        is_excluded = True\n                        break\n                if not is_excluded:\n                    file_list.append(os.path.join(temp_path, filename))\n            if file_list:\n                mod_dt = dt_conv(modified)\n                md_text = hide_tags(md_text)\n                md_text += '\\n\\n<!-- {BearID:' + uuid + '} -->\\n'\n                for filepath in file_list:\n                    note_count += 1\n                    # print(filepath)\n                    if export_as_textbundles:\n                        if 
check_image_hybrid(md_text):\n                            make_text_bundle(md_text, filepath, mod_dt)                        \n                        else:\n                            write_file(filepath + '.md', md_text, mod_dt, creation_date)\n                    elif export_image_repository:\n                        md_proc_text = process_image_links(md_text, filepath, conn, pk)\n                        write_file(filepath + '.md', md_proc_text, mod_dt, creation_date)\n                    else:\n                        write_file(filepath + '.md', md_text, mod_dt, creation_date)\n    return note_count\n\n\ndef check_image_hybrid(md_text):\n    if export_as_hybrids:\n        if re.search(r'\\[image:(.+?)\\]', md_text):\n            return True\n        else:\n            return False\n    else:\n        return True\n\n\ndef make_text_bundle(md_text, filepath, mod_dt):\n    '''\n    Exports as Textbundles with images included \n    '''\n    bundle_path = filepath + '.textbundle'\n    assets_path = os.path.join(bundle_path, 'assets')    \n    if not os.path.exists(bundle_path):\n        os.makedirs(bundle_path)\n        os.makedirs(assets_path)\n        \n    info = '''{\n    \"transient\" : true,\n    \"type\" : \"net.daringfireball.markdown\",\n    \"creatorIdentifier\" : \"net.shinyfrog.bear\",\n    \"version\" : 2\n    }'''\n    matches = re.findall(r'\\[image:(.+?)\\]', md_text)\n    for match in matches:\n        image_name = match\n        new_name = image_name.replace('/', '_')\n        source = os.path.join(bear_image_path, image_name)\n        target = os.path.join(assets_path, new_name)\n        shutil.copy2(source, target)\n\n    md_text = re.sub(r'\\[image:(.+?)/(.+?)\\]', r'![](assets/\\1_\\2)', md_text)\n    write_file(bundle_path + '/text.md', md_text, mod_dt, 0)\n    write_file(bundle_path + '/info.json', info, mod_dt, 0)\n    os.utime(bundle_path, (-1, mod_dt))\n\n\ndef sub_path_from_tag(temp_path, filename, md_text):\n    # Get tags in note:\n  
  pattern1 = r'(?<!\\S)\\#([.\\w\\/\\-]+)[ \\n]?(?!([\\/ \\w]+\\w[#]))'\n    pattern2 = r'(?<![\\S])\\#([^ \\d][.\\w\\/ ]+?)\\#([ \\n]|$)'\n    if multi_tag_folders:\n        # Files copied to all tag-folders found in note\n        tags = []\n        for matches in re.findall(pattern1, md_text):\n            tag = matches[0]\n            tags.append(tag)\n        for matches2 in re.findall(pattern2, md_text):\n            tag2 = matches2[0]\n            tags.append(tag2)\n        if len(tags) == 0:\n            # No tags found, copy to root level only\n            return [os.path.join(temp_path, filename)]\n    else:\n        # Only folder for first tag\n        match1 =  re.search(pattern1, md_text)\n        match2 =  re.search(pattern2, md_text)\n        if match1 and match2:\n            if match1.start(1) < match2.start(1):\n                tag = match1.group(1)\n            else:\n                tag = match2.group(1)\n        elif match1:\n            tag = match1.group(1)\n        elif match2:\n            tag = match2.group(1)\n        else:\n            # No tags found, copy to root level only\n            return [os.path.join(temp_path, filename)]\n        tags = [tag]\n    paths = [os.path.join(temp_path, filename)]\n    for tag in tags:\n        if tag == '/':\n            continue\n        if only_export_these_tags:\n            export = False\n            for export_tag in only_export_these_tags:\n                if tag.lower().startswith(export_tag.lower()):\n                    export = True\n                    break\n            if not export:\n                continue\n        for no_tag in no_export_tags:\n            if tag.lower().startswith(no_tag.lower()):\n                return []\n        if tag.startswith('.'):\n            # Avoid hidden path if it starts with a '.'\n            sub_path = '_' + tag[1:]     \n        else:\n            sub_path = tag    \n        tag_path = os.path.join(temp_path, sub_path)\n        if not 
os.path.exists(tag_path):\n            os.makedirs(tag_path)\n        paths.append(os.path.join(tag_path, filename))\n    return paths\n\n\ndef process_image_links(md_text, filepath, conn, pk):\n    image_map = None\n    remaining_images = set()\n    def replace_image_link(match):\n        # We're only processing local assets.\n        if match.group(2).startswith(\"http\"):\n            return match.group(0)\n\n        nonlocal image_map\n        if image_map is None:\n            image_map = {}\n            files = conn.execute(\"SELECT * FROM `ZSFNOTEFILE` WHERE ZNOTE = ?\", (pk,))\n            for row in files:\n                filename = row[\"ZFILENAME\"]\n                uuid = row[\"ZUNIQUEIDENTIFIER\"]\n                out_file_path = os.path.relpath(assets_path, export_path) + f\"/{uuid}/{filename}\"\n                image_map[filename] = out_file_path\n                remaining_images.add(filename)\n\n        # Markdown image URLs are percent-encoded, but the Bear database is not.\n        image_filename = urllib.parse.unquote(match.group(2))\n        out_file_path = image_map.get(image_filename)\n        if out_file_path is None:\n            print(f\"WARNING: Note {filepath} has image {image_filename} which was not found in database. 
Skipping.\")\n            return match.group(0)\n        remaining_images.remove(image_filename)\n        encoded_out_file_path = urllib.parse.quote(out_file_path)\n        return f\"![{match.group(1)}]({encoded_out_file_path})\"\n\n    out_text = re.sub(r'!\\[(.*?)\\]\\((.+?)\\)', replace_image_link, md_text)\n    # Check only after the substitution: `remaining_images` is populated lazily\n    # inside replace_image_link(), so it is always empty before re.sub() runs.\n    if remaining_images:\n        print(f\"WARNING: Note {filepath} has images in the database which weren't matched in the note: {remaining_images}\")\n    return out_text\n\n\ndef restore_image_links(md_text):\n    # TODO: add new external images to Bear when necessary\n    if export_as_textbundles:\n        return re.sub(r'!\\[(.*?)\\]\\(assets/(.+?)_(.+?)( \".+?\")?\\) ?', r'[image:\\2/\\3]\\4 \\1', md_text)\n    elif export_image_repository:\n        relative_asset_path = os.path.relpath(assets_path, export_path)\n        return re.sub(r'!\\[(.*?)\\]\\(' + re.escape(relative_asset_path) + r'/(.+?)/(.+?)\\)', r'![\\1](\\3)', md_text)\n    return md_text\n\n\ndef copy_bear_images():\n    # Image files copied to a common image repository:\n    subprocess.call(['rsync', '-r', '-t', '--delete', \n                    bear_image_path + \"/\", assets_path])\n\n\ndef write_time_stamp():\n    # Write to the time-stamp file (used during sync):\n    write_file(export_ts_file, \"Markdown from Bear written at: \" +\n               datetime.datetime.now().strftime(\"%Y-%m-%d at %H:%M:%S\"), 0, 0)\n    write_file(sync_ts_file_temp, \"Markdown from Bear written at: \" +\n               datetime.datetime.now().strftime(\"%Y-%m-%d at %H:%M:%S\"), 0, 0)\n\n\ndef hide_tags(md_text):\n    # Hide tags in HTML comment blocks so they are not rendered as H1 in other apps:\n    if hide_tags_in_comment_block:\n        md_text = re.sub(r'(\\n)[ \\t]*(\\#[^\\s#].*)', r'\\1<!-- \\2 -->', md_text)\n    return md_text\n\n\ndef restore_tags(md_text):\n    # Back to normal Bear tags, unwrapping the HTML comment blocks:\n    if hide_tags_in_comment_block:\n        md_text = 
re.sub(r'(\\n)<!--[ \\t]*(\\#[^\\s#].*?) -->', r'\\1\\2', md_text)\n    return md_text\n\n\ndef clean_title(title):\n    title = title[:225].strip()\n    if title == \"\":\n        title = \"Untitled\"\n    title = re.sub(r'[\\/\\\\:]', r'-', title)\n    title = re.sub(r'-$', r'', title)\n    return title.strip()\n\n\ndef write_file(filename, file_content, modified, created):\n    with open(filename, \"w\", encoding='utf-8') as f:\n        f.write(file_content)\n    if modified > 0:\n        os.utime(filename, (-1, modified))\n    if created > 0:\n        newnum = dt_conv(created)\n        dtdate = datetime.datetime.fromtimestamp(newnum)\n        datestring = dtdate.strftime(\"%m/%d/%Y %H:%M:%S\")\n        # 'SetFile' is part of the Xcode command line tools:\n        command = 'SetFile -d \"' + datestring + '\" ' + shlex.quote(filename)\n        subprocess.call(command, shell=True)\n\n\ndef read_file(file_name):\n    with open(file_name, \"r\", encoding='utf-8') as f:\n        file_content = f.read()\n    return file_content\n\n\ndef get_file_date(filename):\n    try:\n        return os.path.getmtime(filename)\n    except OSError:\n        return 0\n\n\ndef dt_conv(dtnum):\n    # Convert a Core Data timestamp (seconds since 2001-01-01) to the Unix epoch:\n    # 31 Julian years (365.25 days each) + 6 hours = 978307200 seconds,\n    # exactly the span from 1970-01-01 to 2001-01-01.\n    hour = 3600  # seconds\n    year = 365.25 * 24 * hour\n    offset = year * 31 + hour * 6\n    return dtnum + offset\n\n\ndef date_time_conv(dtnum):\n    newnum = dt_conv(dtnum) \n    dtdate = datetime.datetime.fromtimestamp(newnum)\n    #print(newnum, dtdate)\n    return dtdate.strftime(' - %Y-%m-%d_%H%M')\n\n\ndef time_stamp_ts(ts):\n    dtdate = datetime.datetime.fromtimestamp(ts)\n    return dtdate.strftime('%Y-%m-%d at %H:%M') \n\n\ndef date_conv(dtnum):\n    dtdate = datetime.datetime.fromtimestamp(dtnum)\n    return dtdate.strftime('%Y-%m-%d')\n\n\ndef delete_old_temp_files():\n    # Deletes all files in the temp folder before a new export, using \"shutil.rmtree()\":\n    # NOTE! CAUTION! 
Do not change this function unless you really know shutil.rmtree() well!\n    if os.path.exists(temp_path) and \"BearExportTemp\" in temp_path:\n        # *** NOTE! Double checking that temp_path actually contains \"BearExportTemp\",\n        # *** because if temp_path is accidentally empty or root,\n        # *** shutil.rmtree() will delete all files on your entire hard drive ;(\n        shutil.rmtree(temp_path)\n        # *** NOTE: USE rmtree() WITH EXTREME CAUTION!\n    os.makedirs(temp_path)\n\n\ndef rsync_files_from_temp():\n    # Moves markdown files to the destination folders using rsync:\n    # This is a very important step! \n    # By first exporting all Bear notes to an emptied temp folder,\n    # rsync will only update the destination if modification time or size has changed.\n    # So only changed notes will be synced by Dropbox or OneDrive destinations.\n    # Rsync will also delete notes in the destination if they were deleted in Bear.\n    # Doing it this way saves a lot of otherwise very complex programming.\n    # Thank you very much, Rsync! 
;)\n    for (dest_path, delete) in multi_export:\n        if not os.path.exists(dest_path):\n            os.makedirs(dest_path)\n        if delete:\n            subprocess.call(['rsync', '-r', '-t', '--crtimes', '-E', '--delete',\n                             '--exclude', 'BearImages/',\n                             '--exclude', '.obsidian/',\n                             '--exclude', '.Ulysses*',\n                             '--exclude', '*.Ulysses_Public_Filter',\n                             temp_path + \"/\", dest_path])\n        else:\n            subprocess.call(['rsync', '-r', '-t', '-E',\n                            temp_path + \"/\", dest_path])\n\n\ndef sync_md_updates():\n    updates_found = False\n    if not os.path.exists(sync_ts_file) or not os.path.exists(export_ts_file):\n        return False\n    ts_last_sync = os.path.getmtime(sync_ts_file)\n    ts_last_export = os.path.getmtime(export_ts_file)\n    # Update the synced timestamp file:\n    update_sync_time_file(0)\n    file_types = ('*.md', '*.txt', '*.markdown')\n    # os.walk() descends into all subfolders, if any:\n    for (root, dirnames, filenames) in os.walk(export_path):\n        if '.obsidian' in dirnames:\n            dirnames.remove('.obsidian')\n        for pattern in file_types:\n            for filename in fnmatch.filter(filenames, pattern):\n                md_file = os.path.join(root, filename)\n                ts = os.path.getmtime(md_file)\n                if ts > ts_last_sync:\n                    if not updates_found:  # Yet\n                        # Wait 5 sec at first for external files to finish downloading from Dropbox.\n                        # Otherwise images in textbundles might be missing in the import:\n                        time.sleep(5)\n                    updates_found = True\n                    md_text = read_file(md_file)\n                    backup_ext_note(md_file)\n                    if check_if_image_added(md_text, md_file):\n                        textbundle_to_bear(md_text, md_file, ts)\n                        write_log('Imported to Bear: ' + md_file)\n                    else:\n                        update_bear_note(md_text, md_file, ts, ts_last_export)\n                        write_log('Bear Note Updated: ' + md_file)\n    if updates_found:\n        # Give Bear time to process updates:\n        time.sleep(3)\n        # Check again, just in case new updates synced in from remote (OneDrive/Dropbox)\n        # during this process!\n        # The logic is not 100% foolproof, but should be close to 99.99%.\n        sync_md_updates()  # Recursive call\n    return updates_found\n\n\ndef check_if_image_added(md_text, md_file):\n    if '.textbundle/' not in md_file:\n        return False\n    matches = re.findall(r'!\\[.*?\\]\\(assets/(.+?_).+?\\)', md_text)\n    for image_match in matches:\n        # Sample Bear-generated asset name:\n        # 'F89CDA3D-3FCC-4E92-88C1-CC4AF46FA733-10097-00002BBE9F7FF804_IMG_2280.JPG'\n        if not re.match(r'[0-9A-F]{8}-([0-9A-F]{4}-){3}[0-9A-F]{12}-[0-9A-F]{3,5}-[0-9A-F]{16}_', image_match):\n            return True\n    return False\n\n\ndef textbundle_to_bear(md_text, md_file, mod_dt):\n    md_text = restore_tags(md_text)\n    bundle = os.path.split(md_file)[0]\n    match = re.search(r'\\{BearID:(.+?)\\}', md_text)\n    if match:\n        uuid = match.group(1)\n        # Remove the old BearID: from the new note\n        md_text = re.sub(r'\\<\\!-- ?\\{BearID\\:' + uuid + r'\\} ?--\\>', '', md_text).rstrip() + '\\n'\n        md_text = insert_link_top_note(md_text, 'Images added! 
Link to original note: ', uuid)\n    else:\n        # New textbundle (with images), add path as tag:\n        md_text = get_tag_from_path(md_text, bundle, export_path)\n    write_file(md_file, md_text, mod_dt, 0)\n    os.utime(bundle, (-1, mod_dt))\n    subprocess.call(['open', '-a', '/Applications/Bear.app', bundle])\n    time.sleep(0.5)\n\n\ndef backup_ext_note(md_file):\n    if '.textbundle' in md_file:\n        bundle_path = os.path.split(md_file)[0]\n        bundle_name = os.path.split(bundle_path)[1]\n        target = os.path.join(sync_backup, bundle_name)\n        bundle_raw = os.path.splitext(target)[0]\n        count = 2\n        while os.path.exists(target):\n            # Adding sequence number to identical filenames, preventing overwrite:\n            target = bundle_raw + \" - \" + str(count).zfill(2) + \".textbundle\"\n            count += 1\n        shutil.copytree(bundle_path, target)\n    else:\n        # Overwrite former backups of incoming changes; only keep the last one:\n        shutil.copy2(md_file, sync_backup + '/')\n\n\ndef update_sync_time_file(ts):\n    write_file(sync_ts_file,\n        \"Checked for Markdown updates to sync at: \" +\n        datetime.datetime.now().strftime(\"%Y-%m-%d at %H:%M:%S\"), ts, 0)\n\n\ndef update_bear_note(md_text, md_file, ts, ts_last_export):\n    md_text = restore_tags(md_text)\n    md_text = restore_image_links(md_text)\n    uuid = ''\n    match = re.search(r'\\{BearID:(.+?)\\}', md_text)\n    sync_conflict = False\n    if match:\n        uuid = match.group(1)\n        # Remove old BearID: from new note\n        md_text = re.sub(r'\\<\\!-- ?\\{BearID\\:' + uuid + r'\\} ?--\\>', '', md_text).rstrip() + '\\n'\n\n        sync_conflict = check_sync_conflict(uuid, ts_last_export)\n        if sync_conflict:\n            link_original = 'bear://x-callback-url/open-note?id=' + uuid\n            message = '::Sync conflict! 
External update: ' + time_stamp_ts(ts) + '::'\n            message += '\\n[Click here to see original Bear note](' + link_original + ')'\n            x_create = 'bear://x-callback-url/create?show_window=no&open_note=no' \n            bear_x_callback(x_create, md_text, message, '')   \n        else:\n            # Regular external update\n            orig_title = backup_bear_note(uuid)\n            # message = '::External update: ' + time_stamp_ts(ts) + '::'   \n            x_replace = 'bear://x-callback-url/add-text?show_window=no&open_note=no&mode=replace&id=' + uuid\n            bear_x_callback(x_replace, md_text, '', orig_title)\n            # # Trash old original note:\n            # x_trash = 'bear://x-callback-url/trash?show_window=no&id=' + uuid\n            # subprocess.call([\"open\", x_trash])\n            # time.sleep(.2)\n    else:\n        # New external md Note, since no Bear uuid found in text: \n        # message = '::New external Note - ' + time_stamp_ts(ts) + '::' \n        md_text = get_tag_from_path(md_text, md_file, export_path)\n        x_create = 'bear://x-callback-url/create?show_window=no' \n        bear_x_callback(x_create, md_text, '', '')\n    return\n\n\ndef get_tag_from_path(md_text, md_file, root_path, inbox_for_root=False, extra_tag=''):\n    # extra_tag should be passed as '#tag' or '#space tag#'\n    path = md_file.replace(root_path, '')[1:]\n    sub_path = os.path.split(path)[0]\n    tags = []\n    if '.textbundle' in sub_path:\n        sub_path = os.path.split(sub_path)[0]\n    if sub_path == '': \n        if inbox_for_root:\n            tag = '#.inbox'\n        else:\n            tag = ''\n    elif sub_path.startswith('_'):\n        tag = '#.' 
+ sub_path[1:].strip()\n    else:\n        tag = '#' + sub_path.strip()\n    if ' ' in tag:\n        tag += \"#\"\n    if tag != '':\n        tags.append(tag)\n    if extra_tag != '':\n        tags.append(extra_tag)\n    for tag in get_file_tags(md_file):\n        tag = '#' + tag.strip()\n        if ' ' in tag:\n            tag += \"#\"\n        tags.append(tag)\n    return md_text.strip() + '\\n\\n' + ' '.join(tags) + '\\n'\n\n\ndef get_file_tags(md_file):\n    try:\n        subprocess.call([gettag_sh, md_file, gettag_txt])\n        text = re.sub(r'\\\\n\\d{1,2}', r'', read_file(gettag_txt))\n        tag_list = json.loads(text)\n        return tag_list\n    except Exception:\n        return []\n\n\nopen_config = NSWorkspaceOpenConfiguration.alloc().init()\nopen_config.setActivates_(False)\n\ndef bear_x_callback(x_command, md_text, message, orig_title):\n    if message != '':\n        lines = md_text.splitlines()\n        lines.insert(1, message)\n        md_text = '\\n'.join(lines)\n    if orig_title != '':\n        lines = md_text.splitlines()\n        title = re.sub(r'^#+ ', r'', lines[0])\n        if title != orig_title:\n            md_text = '\\n'.join(lines)\n        else:\n            md_text = '\\n'.join(lines[1:])\n    x_command_text = x_command + '&text=' + urllib.parse.quote(md_text)\n    url = NSURL.URLWithString_(x_command_text)\n    NSWorkspace.sharedWorkspace().openURL_configuration_completionHandler_(url, open_config, None)\n    time.sleep(.2)\n\n\ndef check_sync_conflict(uuid, ts_last_export):\n    conflict = False\n    # Check modified date of original note in Bear sqlite db!\n    with sqlite3.connect(bear_db) as conn:\n        conn.row_factory = sqlite3.Row\n        # Parameterized query avoids quoting problems with the uuid value:\n        query = \"SELECT * FROM `ZSFNOTE` WHERE `ZTRASHED` LIKE '0' AND `ZUNIQUEIDENTIFIER` = ?\"\n        c = conn.execute(query, (uuid,))\n    for row in c:\n        modified = row['ZMODIFICATIONDATE']\n        uuid = row['ZUNIQUEIDENTIFIER']\n        mod_dt = 
dt_conv(modified)\n        conflict = mod_dt > ts_last_export\n    return conflict\n\n\ndef backup_bear_note(uuid):\n    # Get single note from Bear sqlite db!\n    with sqlite3.connect(bear_db) as conn:\n        conn.row_factory = sqlite3.Row\n        # Parameterized query avoids quoting problems with the uuid value:\n        query = \"SELECT * FROM `ZSFNOTE` WHERE `ZUNIQUEIDENTIFIER` = ?\"\n        c = conn.execute(query, (uuid,))\n    title = ''\n    for row in c:  # Will only get one row if uuid is found!\n        title = row['ZTITLE']\n        md_text = row['ZTEXT'].rstrip()\n        modified = row['ZMODIFICATIONDATE']\n        mod_dt = dt_conv(modified)\n        created = row['ZCREATIONDATE']\n        cre_dt = dt_conv(created)\n        md_text = insert_link_top_note(md_text, 'Link to updated note: ', uuid)\n        dtdate = datetime.datetime.fromtimestamp(cre_dt)\n        filename = clean_title(title) + dtdate.strftime(' - %Y-%m-%d_%H%M')\n        if not os.path.exists(sync_backup):\n            os.makedirs(sync_backup)\n        file_part = os.path.join(sync_backup, filename)\n        # This is a Bear text file, not exactly markdown.\n        backup_file = file_part + \".txt\"\n        count = 2\n        while os.path.exists(backup_file):\n            # Adding sequence number to identical filenames, preventing overwrite:\n            backup_file = file_part + \" - \" + str(count).zfill(2) + \".txt\"\n            count += 1\n        # Pass the converted (epoch) creation timestamp, not Bear's raw Core Data value:\n        write_file(backup_file, md_text, mod_dt, cre_dt)\n        filename2 = os.path.split(backup_file)[1]\n        write_log('Original to sync_backup: ' + filename2)\n    return title\n\n\ndef insert_link_top_note(md_text, message, uuid):\n    lines = md_text.split('\\n')\n    title = re.sub(r'^#{1,6} ', r'', lines[0])\n    link = '::' + message + '[' + title + '](bear://x-callback-url/open-note?id=' + uuid + ')::'\n    lines.insert(1, link)\n    return '\\n'.join(lines)\n\n\ndef init_gettag_script():\n    gettag_script = \\\n    '''#!/bin/bash\n    if [[ ! 
-e \"$1\" ]]; then\n    echo 'file missing or not specified'\n    exit 0\n    fi\n    JSON=\"$(xattr -p com.apple.metadata:_kMDItemUserTags \"$1\" | xxd -r -p | plutil -convert json - -o -)\"\n    echo \"$JSON\" > \"$2\"\n    '''\n    temp = os.path.join(HOME, 'temp')\n    if not os.path.exists(temp):\n        os.makedirs(temp)\n    write_file(gettag_sh, gettag_script, 0, 0)\n    subprocess.call(['chmod', '755', gettag_sh])\n\n\ndef notify(message):\n    title = \"ul_sync_md.py\"\n    try:\n        # Uses \"terminal-notifier\", download at:\n        # https://github.com/julienXX/terminal-notifier/releases/download/2.0.0/terminal-notifier-2.0.0.zip\n        # Only works with macOS 10.11+\n        subprocess.call(['/Applications/terminal-notifier.app/Contents/MacOS/terminal-notifier',\n                         '-message', message, \"-title\", title, '-sound', 'default'])\n    except Exception:\n        write_log('\"terminal-notifier.app\" is missing!')\n    return\n\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "bear_import.py",
    "content": "# encoding=utf-8\n# python3.6\n# bear_import.py\n# Developed with Visual Studio Code with MS Python Extension.\n\n'''\n# Markdown import to Bear from folder\nVersion 1.0.0 - 2018-02-10 at 17:37 EST\ngithub/rovest, rorves@twitter\n\n## NEW import function:\n* Imports markdown or textbundles from nested folders under a `BearImport/input/` folder\n* Folder names are converted to Bear tags\n* Also imports macOS file tags as Bear tags\n* Imported notes are also tagged with `#.imported/yyyy-MM-dd` for convenience.\n* Imported files are then moved to a `BearImport/done/` folder\n* Use for email input to Bear with Zapier's \"Gmail to Dropbox\" zap.\n* Or for import of nested groups and sheets from Ulysses, images and keywords included.\n'''\n\nmy_sync_service = 'Dropbox'  # Change 'Dropbox' to 'Box', 'OneDrive',\n    # or whatever sync-service folder you use.\n    # Your user \"Home\" folder is added below.\n\nuse_filename_as_title = False  # Set to `True` if importing Simplenote notes synced with nvALT.\nset_logging_on = True\n\n# This tag is added for convenience (easy deletion of imported notes if they are not wanted).\n# (Easier to delete one tag than to find a bunch of tagless imported notes.)\n\nimport datetime\nimport re\nimport subprocess\nimport urllib.parse\nimport os\nimport time\nimport shutil\nimport fnmatch\nimport json\n\nimport_tag = '#.imported/' + datetime.datetime.now().strftime('%Y-%m-%d')\n# import_tag = ''  # Blank if not needed\n\nHOME = os.getenv('HOME', '')\n\n# Import folder for files from other apps,\n# or incoming emails via \"Gmail to Dropbox\" Zapier zap or IFTTT\nbear_import = os.path.join(HOME, my_sync_service, 'BearImport')\nimport_path = os.path.join(bear_import, 'input')\nimport_done = os.path.join(bear_import, 'done')\n\ngettag_sh = os.path.join(HOME, 'temp/gettag.sh')\ngettag_txt = os.path.join(HOME, 'temp/gettag.txt')\n\n\ndef main():\n    if not os.path.exists(import_path):\n        os.makedirs(import_path)\n        
print('New path, use it for import to Bear:', import_path)\n        return False\n    if not os.path.exists(import_done):\n        os.makedirs(import_done)\n    init_gettag_script()\n    count = import_external_files()\n    print(str(count), 'files imported.  Job done!')\n\n\ndef import_external_files():\n    files_found = False\n    file_types = ('*.md', '*.txt', '*.markdown')\n    count = 0\n    time.sleep(3)  # Wait a little after being triggered by the Automator Folder Action\n    for (root, dirnames, filenames) in os.walk(import_path):\n        # This step walks down into all subfolders, if any.\n        for pattern in file_types:\n            for filename in fnmatch.filter(filenames, pattern):\n                if not files_found:  # Yet\n                    # Wait 5 sec at first for external files to finish downloading from Dropbox.\n                    # Otherwise images in textbundles might be missing in import:\n                    time.sleep(5)\n                files_found = True\n                md_file = os.path.join(root, filename)\n                mod_dt = os.path.getmtime(md_file)\n                md_text = read_file(md_file)\n                if pattern == '*.txt':\n                    # Convert rich-text bullets to markdown:\n                    # (When used with IFTTT or Zapier and the Gmail to Dropbox zap.)\n                    md_text = md_text.replace('\\n• ', '\\n- ')\n                    md_text = md_text.replace('\\n    • ', '\\n\\t- ')\n                    md_text = md_text.replace('\\n        • ', '\\n\\t\\t- ')\n                if re.search(r'!\\[.*?\\]\\(assets/.+?\\)', md_text) \\\n                    and '.textbundle/' in md_file:\n                    # New textbundle with images:\n                    bundle = os.path.split(md_file)[0]\n                    md_text = get_tag_from_path(md_text, bundle, import_path, False)\n                    write_file(md_file, md_text, mod_dt)\n                    os.utime(bundle, 
(-1, mod_dt))\n                    subprocess.call(['open', '-a', '/Applications/Bear.app', bundle])\n                    time.sleep(0.5)\n                    move_import_to_done(bundle, import_path, import_done)\n                else:\n                    title = ''\n                    # No images, import markdown only even if textbundle:\n                    if '.textbundle/' in md_file:\n                        file_bundle = os.path.split(md_file)[0]\n                    else:\n                        file_bundle = md_file\n                        if use_filename_as_title:\n                            title = os.path.splitext(os.path.split(md_file)[1])[0]\n                    md_text = get_tag_from_path(md_text, file_bundle, import_path, False)\n                    x_create = 'bear://x-callback-url/create?show_window=no'\n                    bear_x_callback(x_create, md_text, title)\n                    move_import_to_done(file_bundle, import_path, import_done)\n                write_log('Imported to Bear: ', file_bundle)\n                count += 1\n    if files_found:\n        # Cleaning up empty input subfolders here would be nice, but is tricky\n        # since new files may appear. Better to do that manually when needed.\n        # Recursive call to look for leftovers/newly downloaded files:\n        count += import_external_files()\n    return count\n\n\ndef move_import_to_done(file_bundle, import_path, import_done):\n    file_path = file_bundle.replace(import_path + '/', '')\n    sub_path = os.path.split(file_path)[0]\n    dest_path = os.path.join(import_done, sub_path)\n    if not os.path.exists(dest_path):\n        os.makedirs(dest_path)\n    count = 2\n    file_name = os.path.split(file_bundle)[1]\n    dest_file = os.path.join(dest_path, file_name)\n    (file_raw, ext) = os.path.splitext(file_name)\n    while os.path.exists(dest_file):\n        # Adding sequence number to identical filenames, preventing overwrite:\n        dest_file = os.path.join(dest_path, file_raw + \" - \" + str(count).zfill(2) + ext)\n        count += 1\n    shutil.move(file_bundle, dest_file)\n\n\ndef get_tag_from_path(md_text, file_bundle, root_path, inbox_for_root=True):\n    path = file_bundle.replace(root_path, '')[1:]\n    sub_path = os.path.split(path)[0]\n    tags = []\n    if sub_path == '':\n        if inbox_for_root:\n            tag = '#.inbox'\n        else:\n            tag = ''\n    elif sub_path.startswith('_'):\n        tag = '#.' 
+ sub_path[1:].strip()\n    else:\n        tag = '#' + sub_path.strip()\n    if ' ' in tag:\n        tag += \"#\"\n    if tag != '':\n        tags.append(tag)\n    if import_tag != '':\n        tags.append(import_tag)\n    for tag in get_file_tags(file_bundle):\n        tag = '#' + tag.strip()\n        if ' ' in tag:\n            tag += \"#\"\n        tags.append(tag)\n    return md_text.strip() + '\\n\\n' + ' '.join(tags) + '\\n'\n\n\ndef get_file_tags(file_bundle):\n    try:\n        subprocess.call([gettag_sh, file_bundle, gettag_txt])\n        tags_raw = read_file(gettag_txt)\n        tags_text = re.sub(r'\\\\n\\d{1,2}', r'', tags_raw)\n        tag_list = json.loads(tags_text)\n        return tag_list\n    except Exception:\n        return []\n\n\ndef bear_x_callback(x_command, md_text, title):\n    if title != '' and not title.startswith(\"#\"):\n        md_text = '# ' + title + '\\n' + md_text\n    x_command_text = x_command + '&text=' + urllib.parse.quote(md_text)\n    subprocess.call([\"open\", x_command_text])\n    time.sleep(.2)\n\n\ndef init_gettag_script():\n    gettag_script = \\\n    '''#!/bin/bash\n    if [[ ! 
-e \"$1\" ]]; then\n    echo 'file missing or not specified'\n    exit 0\n    fi\n    JSON=\"$(xattr -p com.apple.metadata:_kMDItemUserTags \"$1\" | xxd -r -p | plutil -convert json - -o -)\"\n    echo \"$JSON\" > \"$2\"\n    '''\n    temp = os.path.join(HOME, 'temp')\n    if not os.path.exists(temp):\n        os.makedirs(temp)\n    write_file(gettag_sh, gettag_script, 0)\n    subprocess.call(['chmod', '755', gettag_sh])\n\n\ndef write_log(message, file_bundle):\n    if set_logging_on:\n        log_file = os.path.join(import_done, 'bear_import_log.txt')\n        time_stamp = datetime.datetime.now().strftime(\"%Y-%m-%d at %H:%M:%S\")\n        file_path = file_bundle.replace(import_path + '/', '')\n        with open(log_file, 'a', encoding='utf-8') as f:\n            f.write(time_stamp + ': ' + message + file_path + '\\n')\n\n\ndef write_file(filename, file_content, modified):\n    with open(filename, \"w\", encoding='utf-8') as f:\n        f.write(file_content)\n    if modified > 0:\n        os.utime(filename, (-1, modified))\n\n\ndef read_file(file_name):\n    with open(file_name, \"r\", encoding='utf-8') as f:\n        file_content = f.read()\n    return file_content\n\n\nif __name__ == '__main__':\n    main()\n"
  }
]