[
  {
    "path": "LICENSE.md",
    "content": "Copyright 2017 Parallax Agency Ltd\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# Parallax SVG Animation Tools\n\nA simple set of python functions to help working with animated SVGs exported from Illustrator. More features coming soon!\nWe used it to create animations like this.\n\n[Viva La Velo](https://parall.ax/viva-le-velo)\n\n![Viva La Velo intro animation](vlv-intro-gif.gif)\n\n\n## Overview\n\nPart of animating with SVGs is getting references to elements in code and passing them to animation functions. For complicated animations this becomes difficult and hand editing SVG code is slow and gets overwritten when your artwork updates. We decided to write a post-processer for SVGs produced by Illustrator to help speed this up. Layer names are used to create attributes, classes and ID's making selecting them in JS or CSS far easier.\n\nThis is the what the svg code looks like before and after the processing step.\n\n```xml\n<!-- Before post processer -->\n<svg id=\"Layer_1\" data-name=\"Layer 1\" xmlns=\"http://www.w3.org/2000/svg\" viewBox=\"0 0 800 600\">\n  <rect id=\"_class_my-element_origin_144_234\" data-name=\"#class=my-element, origin=144 234\" x=\"144\" y=\"234\" width=\"148\" height=\"148\"/>\n  <rect id=\"_id_my-unique-element\" data-name=\"#id=my-unique-element\" x=\"316\" y=\"234\" width=\"148\" height=\"148\" fill=\"#29abe2\"/>\n  <rect id=\"_class_my-element\" data-name=\"#class=my-element\" x=\"488\" y=\"234\" width=\"148\" height=\"148\" fill=\"#fbb03b\"/>\n</svg>\n\n<!-- After post processer -->\n<svg xmlns=\"http://www.w3.org/2000/svg\" viewBox=\"0 0 800 600\">\n  <rect id=\"my-unique-element\" x=\"316\" y=\"234\" width=\"148\" height=\"148\" fill=\"#29abe2\"/>\n  <rect class=\"my-element\" data-svg-origin=\"144 234\" x=\"144\" y=\"234\" width=\"148\" height=\"148\"/>\n  <rect class=\"my-element\" x=\"488\" y=\"234\" width=\"148\" height=\"148\" fill=\"#fbb03b\"/>\n</svg>\n```\n\n![Illustrator layers example](example-image.png)\n\n\n## Quick Example\n\nDownload the [svg tools](parallax_svg_tools.zip) and 
unzip them into your project folder.\n\nCreate an Illustrator file, add an element and change its layer name to `#class=my-element`. Export the SVG using the **File > Export > Export for Screens** option with the following settings. Call the SVG `animation.svg`.\n\n![Illustrator svg export settings](svg-settings.png)\n\nCreate an HTML file as below. The `//import` statement inlines the SVG into our HTML file so we don't have to do any copying and pasting. It isn't strictly necessary, but it makes the workflow a little easier. Save it as `animation.html`.\n\n```html\n<!DOCTYPE html>\n<html>\n<head>\n\t<meta charset='utf-8'/>\n</head>\n<body>\n\n//import processed_animation.svg\n\n</body>\n</html>\n```\n\n\nOpen the file called `run.py`. Here you can edit how the SVGs will be processed. The default looks like this; the sections below describe what the various options do.\n\n```python\nfrom svg import *\n\ncompile_svg('animation.svg', 'processed_animation.svg', \n{\n\t'process_layer_names': True,\n\t'namespace' : 'example'\n})\n\ninline_svg('animation.html', 'output/animation.html')\n```\n\nOpen the command line and navigate to your project folder. Call the script using `python parallax_svg_tools/run.py`. If everything worked correctly, you should see a list of processed files (or just one in this case) printed to the console. Note that the script must be called from a directory that has access to the SVG files.\n\nThere should now be a folder called `output` containing an `animation.html` file with your processed SVG in it. All that is left to do is animate it with your tool of choice (ours is [GSAP](https://greensock.com/)).\n\n\n## Functions\n\n### process\\_svg(src\\_path, dst\\_path, options)\nProcesses a single SVG and places it in the supplied destination directory. The following options are available.\n\n+ **process\\_layer\\_names:**\nConverts layer names as defined in Illustrator into attributes. 
Begin the layer name with a '#' to indicate that the layer should be parsed. \nFor example, `#id=my-id, class=my-class my-other-class, role=my-role`, etc.\nThis is useful for fetching elements with JavaScript as well as marking up elements for accessibility - see this [CSS-Tricks Accessible SVGs](https://css-tricks.com/accessible-svgs/) article.\nNOTE: attributes must be separated with commas, as that makes the parsing code a lot simpler :)\n\n+ **expand_origin:**\nAllows you to use `origin=100 100` to set origins for rotating/scaling with GSAP (expands to `data-svg-origin`).\n\n+ **namespace:**\nAppends a namespace to classes and IDs if one is provided. Useful for avoiding conflicts with other SVG files for things like masks and clipPaths.\n\n+ **nowhitespace:**\nRemoves unneeded whitespace. We don't do anything fancier than that so as to not break animations. Use the excellent [SVGO](<https://github.com/svg/svgo>) if you need better minification.\n\n+ **attributes:**\nAn object of key:value strings that will be applied as attributes to the root SVG element.\n\n+ **title:**\nSets the title, or removes it completely when set to `false`.\n\n+ **description:**\nSets the description, or removes it completely when set to `false`.\n\n+ **convert_svg_text_to_html:**\nConverts SVG text into HTML text via the foreignObject tag, reducing file bloat and allowing you to style it with CSS. Requires the text to be grouped inside a rectangle with the layer name set to `#TEXT`.\n\n+ **spirit:**\nExpands `#spirit=my-id` to `data-spirit-id` when set to `true`, for use with the [Spirit animation app](<https://spiritapp.io/>).\n\n\n### inline\\_svg(src\\_path, dst\\_path)\nIn order to animate SVGs, their markup needs to be placed inline in the HTML. This function looks at the source HTML file and inlines any SVGs referenced by the `//import` statements that it finds."
  },
  {
    "path": "example/animation.html",
    "content": "<!DOCTYPE html>\n<html>\n<head>\n\t<meta charset='utf-8'/>\n</head>\n<body>\n\n//import processed_animation.svg\n\n</body>\n</html>"
  },
  {
    "path": "example/output/animation.html",
    "content": "<!DOCTYPE html>\n<html>\n<head>\n\t<meta charset='utf-8'/>\n</head>\n<body>\n\n<svg viewbox=\"0 0 800 600\" xmlns=\"http://www.w3.org/2000/svg\">\n<title>animation</title>\n<rect class=\"my-element\" height=\"148\" origin=\"144 234\" width=\"148\" x=\"144\" y=\"234\"/>\n<rect fill=\"#29abe2\" height=\"148\" id=\"my-unique-element\" width=\"148\" x=\"316\" y=\"234\"/>\n<rect class=\"my-element\" fill=\"#fbb03b\" height=\"148\" width=\"148\" x=\"488\" y=\"234\"/>\n</svg>\n\n</body>\n</html>"
  },
  {
    "path": "example/parallax_svg_tools/bs4/__init__.py",
    "content": "\"\"\"Beautiful Soup\nElixir and Tonic\n\"The Screen-Scraper's Friend\"\nhttp://www.crummy.com/software/BeautifulSoup/\n\nBeautiful Soup uses a pluggable XML or HTML parser to parse a\n(possibly invalid) document into a tree representation. Beautiful Soup\nprovides methods and Pythonic idioms that make it easy to navigate,\nsearch, and modify the parse tree.\n\nBeautiful Soup works with Python 2.7 and up. It works better if lxml\nand/or html5lib is installed.\n\nFor more than you ever wanted to know about Beautiful Soup, see the\ndocumentation:\nhttp://www.crummy.com/software/BeautifulSoup/bs4/doc/\n\n\"\"\"\n\n# Use of this source code is governed by a BSD-style license that can be\n# found in the LICENSE file.\n\n__author__ = \"Leonard Richardson (leonardr@segfault.org)\"\n__version__ = \"4.5.1\"\n__copyright__ = \"Copyright (c) 2004-2016 Leonard Richardson\"\n__license__ = \"MIT\"\n\n__all__ = ['BeautifulSoup']\n\nimport os\nimport re\nimport traceback\nimport warnings\n\nfrom .builder import builder_registry, ParserRejectedMarkup\nfrom .dammit import UnicodeDammit\nfrom .element import (\n    CData,\n    Comment,\n    DEFAULT_OUTPUT_ENCODING,\n    Declaration,\n    Doctype,\n    NavigableString,\n    PageElement,\n    ProcessingInstruction,\n    ResultSet,\n    SoupStrainer,\n    Tag,\n    )\n\n# The very first thing we do is give a useful error if someone is\n# running this code under Python 3 without converting it.\n'You are trying to run the Python 2 version of Beautiful Soup under Python 3. 
This will not work.'<>'You need to convert the code, either by installing it (`python setup.py install`) or by running 2to3 (`2to3 -w bs4`).'\n\nclass BeautifulSoup(Tag):\n    \"\"\"\n    This class defines the basic interface called by the tree builders.\n\n    These methods will be called by the parser:\n      reset()\n      feed(markup)\n\n    The tree builder may call these methods from its feed() implementation:\n      handle_starttag(name, attrs) # See note about return value\n      handle_endtag(name)\n      handle_data(data) # Appends to the current data node\n      endData(containerClass=NavigableString) # Ends the current data node\n\n    No matter how complicated the underlying parser is, you should be\n    able to build a tree using 'start tag' events, 'end tag' events,\n    'data' events, and \"done with data\" events.\n\n    If you encounter an empty-element tag (aka a self-closing tag,\n    like HTML's <br> tag), call handle_starttag and then\n    handle_endtag.\n    \"\"\"\n    ROOT_TAG_NAME = u'[document]'\n\n    # If the end-user gives no indication which tree builder they\n    # want, look for one with these features.\n    DEFAULT_BUILDER_FEATURES = ['html', 'fast']\n\n    ASCII_SPACES = '\\x20\\x0a\\x09\\x0c\\x0d'\n\n    NO_PARSER_SPECIFIED_WARNING = \"No parser was explicitly specified, so I'm using the best available %(markup_type)s parser for this system (\\\"%(parser)s\\\"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.\\n\\nThe code that caused this warning is on line %(line_number)s of the file %(filename)s. 
To get rid of this warning, change code that looks like this:\\n\\n BeautifulSoup([your markup])\\n\\nto this:\\n\\n BeautifulSoup([your markup], \\\"%(parser)s\\\")\\n\"\n\n    def __init__(self, markup=\"\", features=None, builder=None,\n                 parse_only=None, from_encoding=None, exclude_encodings=None,\n                 **kwargs):\n        \"\"\"The Soup object is initialized as the 'root tag', and the\n        provided markup (which can be a string or a file-like object)\n        is fed into the underlying parser.\"\"\"\n\n        if 'convertEntities' in kwargs:\n            warnings.warn(\n                \"BS4 does not respect the convertEntities argument to the \"\n                \"BeautifulSoup constructor. Entities are always converted \"\n                \"to Unicode characters.\")\n\n        if 'markupMassage' in kwargs:\n            del kwargs['markupMassage']\n            warnings.warn(\n                \"BS4 does not respect the markupMassage argument to the \"\n                \"BeautifulSoup constructor. The tree builder is responsible \"\n                \"for any necessary markup massage.\")\n\n        if 'smartQuotesTo' in kwargs:\n            del kwargs['smartQuotesTo']\n            warnings.warn(\n                \"BS4 does not respect the smartQuotesTo argument to the \"\n                \"BeautifulSoup constructor. Smart quotes are always converted \"\n                \"to Unicode characters.\")\n\n        if 'selfClosingTags' in kwargs:\n            del kwargs['selfClosingTags']\n            warnings.warn(\n                \"BS4 does not respect the selfClosingTags argument to the \"\n                \"BeautifulSoup constructor. The tree builder is responsible \"\n                \"for understanding self-closing tags.\")\n\n        if 'isHTML' in kwargs:\n            del kwargs['isHTML']\n            warnings.warn(\n                \"BS4 does not respect the isHTML argument to the \"\n                \"BeautifulSoup constructor. 
Suggest you use \"\n                \"features='lxml' for HTML and features='lxml-xml' for \"\n                \"XML.\")\n\n        def deprecated_argument(old_name, new_name):\n            if old_name in kwargs:\n                warnings.warn(\n                    'The \"%s\" argument to the BeautifulSoup constructor '\n                    'has been renamed to \"%s.\"' % (old_name, new_name))\n                value = kwargs[old_name]\n                del kwargs[old_name]\n                return value\n            return None\n\n        parse_only = parse_only or deprecated_argument(\n            \"parseOnlyThese\", \"parse_only\")\n\n        from_encoding = from_encoding or deprecated_argument(\n            \"fromEncoding\", \"from_encoding\")\n\n        if from_encoding and isinstance(markup, unicode):\n            warnings.warn(\"You provided Unicode markup but also provided a value for from_encoding. Your from_encoding will be ignored.\")\n            from_encoding = None\n\n        if len(kwargs) > 0:\n            arg = kwargs.keys().pop()\n            raise TypeError(\n                \"__init__() got an unexpected keyword argument '%s'\" % arg)\n\n        if builder is None:\n            original_features = features\n            if isinstance(features, basestring):\n                features = [features]\n            if features is None or len(features) == 0:\n                features = self.DEFAULT_BUILDER_FEATURES\n            builder_class = builder_registry.lookup(*features)\n            if builder_class is None:\n                raise FeatureNotFound(\n                    \"Couldn't find a tree builder with the features you \"\n                    \"requested: %s. 
Do you need to install a parser library?\"\n                    % \",\".join(features))\n            builder = builder_class()\n            if not (original_features == builder.NAME or\n                    original_features in builder.ALTERNATE_NAMES):\n                if builder.is_xml:\n                    markup_type = \"XML\"\n                else:\n                    markup_type = \"HTML\"\n\n                caller = traceback.extract_stack()[0]\n                filename = caller[0]\n                line_number = caller[1]\n                warnings.warn(self.NO_PARSER_SPECIFIED_WARNING % dict(\n                    filename=filename,\n                    line_number=line_number,\n                    parser=builder.NAME,\n                    markup_type=markup_type))\n\n        self.builder = builder\n        self.is_xml = builder.is_xml\n        self.known_xml = self.is_xml\n        self.builder.soup = self\n\n        self.parse_only = parse_only\n\n        if hasattr(markup, 'read'):        # It's a file-type object.\n            markup = markup.read()\n        elif len(markup) <= 256 and (\n                (isinstance(markup, bytes) and not b'<' in markup)\n                or (isinstance(markup, unicode) and not u'<' in markup)\n        ):\n            # Print out warnings for a couple beginner problems\n            # involving passing non-markup to Beautiful Soup.\n            # Beautiful Soup will still parse the input as markup,\n            # just in case that's what the user really wants.\n            if (isinstance(markup, unicode)\n                and not os.path.supports_unicode_filenames):\n                possible_filename = markup.encode(\"utf8\")\n            else:\n                possible_filename = markup\n            is_file = False\n            try:\n                is_file = os.path.exists(possible_filename)\n            except Exception, e:\n                # This is almost certainly a problem involving\n                # characters not 
valid in filenames on this\n                # system. Just let it go.\n                pass\n            if is_file:\n                if isinstance(markup, unicode):\n                    markup = markup.encode(\"utf8\")\n                warnings.warn(\n                    '\"%s\" looks like a filename, not markup. You should'\n                    'probably open this file and pass the filehandle into'\n                    'Beautiful Soup.' % markup)\n            self._check_markup_is_url(markup)\n\n        for (self.markup, self.original_encoding, self.declared_html_encoding,\n         self.contains_replacement_characters) in (\n             self.builder.prepare_markup(\n                 markup, from_encoding, exclude_encodings=exclude_encodings)):\n            self.reset()\n            try:\n                self._feed()\n                break\n            except ParserRejectedMarkup:\n                pass\n\n        # Clear out the markup and remove the builder's circular\n        # reference to this object.\n        self.markup = None\n        self.builder.soup = None\n\n    def __copy__(self):\n        copy = type(self)(\n            self.encode('utf-8'), builder=self.builder, from_encoding='utf-8'\n        )\n\n        # Although we encoded the tree to UTF-8, that may not have\n        # been the encoding of the original markup. Set the copy's\n        # .original_encoding to reflect the original object's\n        # .original_encoding.\n        copy.original_encoding = self.original_encoding\n        return copy\n\n    def __getstate__(self):\n        # Frequently a tree builder can't be pickled.\n        d = dict(self.__dict__)\n        if 'builder' in d and not self.builder.picklable:\n            d['builder'] = None\n        return d\n\n    @staticmethod\n    def _check_markup_is_url(markup):\n        \"\"\" \n        Check if markup looks like it's actually a url and raise a warning \n        if so. 
Markup can be unicode or str (py2) / bytes (py3).\n        \"\"\"\n        if isinstance(markup, bytes):\n            space = b' '\n            cant_start_with = (b\"http:\", b\"https:\")\n        elif isinstance(markup, unicode):\n            space = u' '\n            cant_start_with = (u\"http:\", u\"https:\")\n        else:\n            return\n\n        if any(markup.startswith(prefix) for prefix in cant_start_with):\n            if not space in markup:\n                if isinstance(markup, bytes):\n                    decoded_markup = markup.decode('utf-8', 'replace')\n                else:\n                    decoded_markup = markup\n                warnings.warn(\n                    '\"%s\" looks like a URL. Beautiful Soup is not an'\n                    ' HTTP client. You should probably use an HTTP client like'\n                    ' requests to get the document behind the URL, and feed'\n                    ' that document to Beautiful Soup.' % decoded_markup\n                )\n\n    def _feed(self):\n        # Convert the document to Unicode.\n        self.builder.reset()\n\n        self.builder.feed(self.markup)\n        # Close out any unfinished strings and close all the open tags.\n        self.endData()\n        while self.currentTag.name != self.ROOT_TAG_NAME:\n            self.popTag()\n\n    def reset(self):\n        Tag.__init__(self, self, self.builder, self.ROOT_TAG_NAME)\n        self.hidden = 1\n        self.builder.reset()\n        self.current_data = []\n        self.currentTag = None\n        self.tagStack = []\n        self.preserve_whitespace_tag_stack = []\n        self.pushTag(self)\n\n    def new_tag(self, name, namespace=None, nsprefix=None, **attrs):\n        \"\"\"Create a new tag associated with this soup.\"\"\"\n        return Tag(None, self.builder, name, namespace, nsprefix, attrs)\n\n    def new_string(self, s, subclass=NavigableString):\n        \"\"\"Create a new NavigableString associated with this soup.\"\"\"\n        
return subclass(s)\n\n    def insert_before(self, successor):\n        raise NotImplementedError(\"BeautifulSoup objects don't support insert_before().\")\n\n    def insert_after(self, successor):\n        raise NotImplementedError(\"BeautifulSoup objects don't support insert_after().\")\n\n    def popTag(self):\n        tag = self.tagStack.pop()\n        if self.preserve_whitespace_tag_stack and tag == self.preserve_whitespace_tag_stack[-1]:\n            self.preserve_whitespace_tag_stack.pop()\n        #print \"Pop\", tag.name\n        if self.tagStack:\n            self.currentTag = self.tagStack[-1]\n        return self.currentTag\n\n    def pushTag(self, tag):\n        #print \"Push\", tag.name\n        if self.currentTag:\n            self.currentTag.contents.append(tag)\n        self.tagStack.append(tag)\n        self.currentTag = self.tagStack[-1]\n        if tag.name in self.builder.preserve_whitespace_tags:\n            self.preserve_whitespace_tag_stack.append(tag)\n\n    def endData(self, containerClass=NavigableString):\n        if self.current_data:\n            current_data = u''.join(self.current_data)\n            # If whitespace is not preserved, and this string contains\n            # nothing but ASCII spaces, replace it with a single space\n            # or newline.\n            if not self.preserve_whitespace_tag_stack:\n                strippable = True\n                for i in current_data:\n                    if i not in self.ASCII_SPACES:\n                        strippable = False\n                        break\n                if strippable:\n                    if '\\n' in current_data:\n                        current_data = '\\n'\n                    else:\n                        current_data = ' '\n\n            # Reset the data collector.\n            self.current_data = []\n\n            # Should we add this string to the tree at all?\n            if self.parse_only and len(self.tagStack) <= 1 and \\\n                   (not 
self.parse_only.text or \\\n                    not self.parse_only.search(current_data)):\n                return\n\n            o = containerClass(current_data)\n            self.object_was_parsed(o)\n\n    def object_was_parsed(self, o, parent=None, most_recent_element=None):\n        \"\"\"Add an object to the parse tree.\"\"\"\n        parent = parent or self.currentTag\n        previous_element = most_recent_element or self._most_recent_element\n\n        next_element = previous_sibling = next_sibling = None\n        if isinstance(o, Tag):\n            next_element = o.next_element\n            next_sibling = o.next_sibling\n            previous_sibling = o.previous_sibling\n            if not previous_element:\n                previous_element = o.previous_element\n\n        o.setup(parent, previous_element, next_element, previous_sibling, next_sibling)\n\n        self._most_recent_element = o\n        parent.contents.append(o)\n\n        if parent.next_sibling:\n            # This node is being inserted into an element that has\n            # already been parsed. 
Deal with any dangling references.\n            index = len(parent.contents)-1\n            while index >= 0:\n                if parent.contents[index] is o:\n                    break\n                index -= 1\n            else:\n                raise ValueError(\n                    \"Error building tree: supposedly %r was inserted \"\n                    \"into %r after the fact, but I don't see it!\" % (\n                        o, parent\n                    )\n                )\n            if index == 0:\n                previous_element = parent\n                previous_sibling = None\n            else:\n                previous_element = previous_sibling = parent.contents[index-1]\n            if index == len(parent.contents)-1:\n                next_element = parent.next_sibling\n                next_sibling = None\n            else:\n                next_element = next_sibling = parent.contents[index+1]\n\n            o.previous_element = previous_element\n            if previous_element:\n                previous_element.next_element = o\n            o.next_element = next_element\n            if next_element:\n                next_element.previous_element = o\n            o.next_sibling = next_sibling\n            if next_sibling:\n                next_sibling.previous_sibling = o\n            o.previous_sibling = previous_sibling\n            if previous_sibling:\n                previous_sibling.next_sibling = o\n\n    def _popToTag(self, name, nsprefix=None, inclusivePop=True):\n        \"\"\"Pops the tag stack up to and including the most recent\n        instance of the given tag. 
If inclusivePop is false, pops the tag\n        stack up to but *not* including the most recent instqance of\n        the given tag.\"\"\"\n        #print \"Popping to %s\" % name\n        if name == self.ROOT_TAG_NAME:\n            # The BeautifulSoup object itself can never be popped.\n            return\n\n        most_recently_popped = None\n\n        stack_size = len(self.tagStack)\n        for i in range(stack_size - 1, 0, -1):\n            t = self.tagStack[i]\n            if (name == t.name and nsprefix == t.prefix):\n                if inclusivePop:\n                    most_recently_popped = self.popTag()\n                break\n            most_recently_popped = self.popTag()\n\n        return most_recently_popped\n\n    def handle_starttag(self, name, namespace, nsprefix, attrs):\n        \"\"\"Push a start tag on to the stack.\n\n        If this method returns None, the tag was rejected by the\n        SoupStrainer. You should proceed as if the tag had not occurred\n        in the document. 
For instance, if this was a self-closing tag,\n        don't call handle_endtag.\n        \"\"\"\n\n        # print \"Start tag %s: %s\" % (name, attrs)\n        self.endData()\n\n        if (self.parse_only and len(self.tagStack) <= 1\n            and (self.parse_only.text\n                 or not self.parse_only.search_tag(name, attrs))):\n            return None\n\n        tag = Tag(self, self.builder, name, namespace, nsprefix, attrs,\n                  self.currentTag, self._most_recent_element)\n        if tag is None:\n            return tag\n        if self._most_recent_element:\n            self._most_recent_element.next_element = tag\n        self._most_recent_element = tag\n        self.pushTag(tag)\n        return tag\n\n    def handle_endtag(self, name, nsprefix=None):\n        #print \"End tag: \" + name\n        self.endData()\n        self._popToTag(name, nsprefix)\n\n    def handle_data(self, data):\n        self.current_data.append(data)\n\n    def decode(self, pretty_print=False,\n               eventual_encoding=DEFAULT_OUTPUT_ENCODING,\n               formatter=\"minimal\"):\n        \"\"\"Returns a string or Unicode representation of this document.\n        To get Unicode, pass None for encoding.\"\"\"\n\n        if self.is_xml:\n            # Print the XML declaration\n            encoding_part = ''\n            if eventual_encoding != None:\n                encoding_part = ' encoding=\"%s\"' % eventual_encoding\n            prefix = u'<?xml version=\"1.0\"%s?>\\n' % encoding_part\n        else:\n            prefix = u''\n        if not pretty_print:\n            indent_level = None\n        else:\n            indent_level = 0\n        return prefix + super(BeautifulSoup, self).decode(\n            indent_level, eventual_encoding, formatter)\n\n# Alias to make it easier to type import: 'from bs4 import _soup'\n_s = BeautifulSoup\n_soup = BeautifulSoup\n\nclass BeautifulStoneSoup(BeautifulSoup):\n    \"\"\"Deprecated interface to an XML 
parser.\"\"\"\n\n    def __init__(self, *args, **kwargs):\n        kwargs['features'] = 'xml'\n        warnings.warn(\n            'The BeautifulStoneSoup class is deprecated. Instead of using '\n            'it, pass features=\"xml\" into the BeautifulSoup constructor.')\n        super(BeautifulStoneSoup, self).__init__(*args, **kwargs)\n\n\nclass StopParsing(Exception):\n    pass\n\nclass FeatureNotFound(ValueError):\n    pass\n\n\n#By default, act as an HTML pretty-printer.\nif __name__ == '__main__':\n    import sys\n    soup = BeautifulSoup(sys.stdin)\n    print soup.prettify()\n"
  },
  {
    "path": "example/parallax_svg_tools/bs4/builder/__init__.py",
    "content": "# Use of this source code is governed by a BSD-style license that can be\n# found in the LICENSE file.\n\nfrom collections import defaultdict\nimport itertools\nimport sys\nfrom bs4.element import (\n    CharsetMetaAttributeValue,\n    ContentMetaAttributeValue,\n    HTMLAwareEntitySubstitution,\n    whitespace_re\n    )\n\n__all__ = [\n    'HTMLTreeBuilder',\n    'SAXTreeBuilder',\n    'TreeBuilder',\n    'TreeBuilderRegistry',\n    ]\n\n# Some useful features for a TreeBuilder to have.\nFAST = 'fast'\nPERMISSIVE = 'permissive'\nSTRICT = 'strict'\nXML = 'xml'\nHTML = 'html'\nHTML_5 = 'html5'\n\n\nclass TreeBuilderRegistry(object):\n\n    def __init__(self):\n        self.builders_for_feature = defaultdict(list)\n        self.builders = []\n\n    def register(self, treebuilder_class):\n        \"\"\"Register a treebuilder based on its advertised features.\"\"\"\n        for feature in treebuilder_class.features:\n            self.builders_for_feature[feature].insert(0, treebuilder_class)\n        self.builders.insert(0, treebuilder_class)\n\n    def lookup(self, *features):\n        if len(self.builders) == 0:\n            # There are no builders at all.\n            return None\n\n        if len(features) == 0:\n            # They didn't ask for any features. 
Give them the most\n            # recently registered builder.\n            return self.builders[0]\n\n        # Go down the list of features in order, and eliminate any builders\n        # that don't match every feature.\n        features = list(features)\n        features.reverse()\n        candidates = None\n        candidate_set = None\n        while len(features) > 0:\n            feature = features.pop()\n            we_have_the_feature = self.builders_for_feature.get(feature, [])\n            if len(we_have_the_feature) > 0:\n                if candidates is None:\n                    candidates = we_have_the_feature\n                    candidate_set = set(candidates)\n                else:\n                    # Eliminate any candidates that don't have this feature.\n                    candidate_set = candidate_set.intersection(\n                        set(we_have_the_feature))\n\n        # The only valid candidates are the ones in candidate_set.\n        # Go through the original list of candidates and pick the first one\n        # that's in candidate_set.\n        if candidate_set is None:\n            return None\n        for candidate in candidates:\n            if candidate in candidate_set:\n                return candidate\n        return None\n\n# The BeautifulSoup class will take feature lists from developers and use them\n# to look up builders in this registry.\nbuilder_registry = TreeBuilderRegistry()\n\nclass TreeBuilder(object):\n    \"\"\"Turn a document into a Beautiful Soup object tree.\"\"\"\n\n    NAME = \"[Unknown tree builder]\"\n    ALTERNATE_NAMES = []\n    features = []\n\n    is_xml = False\n    picklable = False\n    preserve_whitespace_tags = set()\n    empty_element_tags = None # A tag will be considered an empty-element\n                              # tag when and only when it has no contents.\n\n    # A value for these tag/attribute combinations is a space- or\n    # comma-separated list of CDATA, rather than a single 
CDATA.\n    cdata_list_attributes = {}\n\n\n    def __init__(self):\n        self.soup = None\n\n    def reset(self):\n        pass\n\n    def can_be_empty_element(self, tag_name):\n        \"\"\"Might a tag with this name be an empty-element tag?\n\n        The final markup may or may not actually present this tag as\n        self-closing.\n\n        For instance: an HTMLBuilder does not consider a <p> tag to be\n        an empty-element tag (it's not in\n        HTMLBuilder.empty_element_tags). This means an empty <p> tag\n        will be presented as \"<p></p>\", not \"<p />\".\n\n        The default implementation has no opinion about which tags are\n        empty-element tags, so a tag will be presented as an\n        empty-element tag if and only if it has no contents.\n        \"<foo></foo>\" will become \"<foo />\", and \"<foo>bar</foo>\" will\n        be left alone.\n        \"\"\"\n        if self.empty_element_tags is None:\n            return True\n        return tag_name in self.empty_element_tags\n\n    def feed(self, markup):\n        raise NotImplementedError()\n\n    def prepare_markup(self, markup, user_specified_encoding=None,\n                       document_declared_encoding=None):\n        return markup, None, None, False\n\n    def test_fragment_to_document(self, fragment):\n        \"\"\"Wrap an HTML fragment to make it look like a document.\n\n        Different parsers do this differently. For instance, lxml\n        introduces an empty <head> tag, and html5lib\n        doesn't. 
Abstracting this away lets us write simple tests\n        which run HTML fragments through the parser and compare the\n        results against other HTML fragments.\n\n        This method should not be used outside of tests.\n        \"\"\"\n        return fragment\n\n    def set_up_substitutions(self, tag):\n        return False\n\n    def _replace_cdata_list_attribute_values(self, tag_name, attrs):\n        \"\"\"Replaces class=\"foo bar\" with class=[\"foo\", \"bar\"]\n\n        Modifies its input in place.\n        \"\"\"\n        if not attrs:\n            return attrs\n        if self.cdata_list_attributes:\n            universal = self.cdata_list_attributes.get('*', [])\n            tag_specific = self.cdata_list_attributes.get(\n                tag_name.lower(), None)\n            for attr in attrs.keys():\n                if attr in universal or (tag_specific and attr in tag_specific):\n                    # We have a \"class\"-type attribute whose string\n                    # value is a whitespace-separated list of\n                    # values. Split it into a list.\n                    value = attrs[attr]\n                    if isinstance(value, basestring):\n                        values = whitespace_re.split(value)\n                    else:\n                        # html5lib sometimes calls setAttributes twice\n                        # for the same tag when rearranging the parse\n                        # tree. On the second call the attribute value\n                        # here is already a list.  
If this happens,\n                        # leave the value alone rather than trying to\n                        # split it again.\n                        values = value\n                    attrs[attr] = values\n        return attrs\n\nclass SAXTreeBuilder(TreeBuilder):\n    \"\"\"A Beautiful Soup treebuilder that listens for SAX events.\"\"\"\n\n    def feed(self, markup):\n        raise NotImplementedError()\n\n    def close(self):\n        pass\n\n    def startElement(self, name, attrs):\n        attrs = dict((key[1], value) for key, value in list(attrs.items()))\n        #print \"Start %s, %r\" % (name, attrs)\n        self.soup.handle_starttag(name, attrs)\n\n    def endElement(self, name):\n        #print \"End %s\" % name\n        self.soup.handle_endtag(name)\n\n    def startElementNS(self, nsTuple, nodeName, attrs):\n        # Throw away (ns, nodeName) for now.\n        self.startElement(nodeName, attrs)\n\n    def endElementNS(self, nsTuple, nodeName):\n        # Throw away (ns, nodeName) for now.\n        self.endElement(nodeName)\n        #handler.endElementNS((ns, node.nodeName), node.nodeName)\n\n    def startPrefixMapping(self, prefix, nodeValue):\n        # Ignore the prefix for now.\n        pass\n\n    def endPrefixMapping(self, prefix):\n        # Ignore the prefix for now.\n        # handler.endPrefixMapping(prefix)\n        pass\n\n    def characters(self, content):\n        self.soup.handle_data(content)\n\n    def startDocument(self):\n        pass\n\n    def endDocument(self):\n        pass\n\n\nclass HTMLTreeBuilder(TreeBuilder):\n    \"\"\"This TreeBuilder knows facts about HTML.\n\n    Such as which tags are empty-element tags.\n    \"\"\"\n\n    preserve_whitespace_tags = HTMLAwareEntitySubstitution.preserve_whitespace_tags\n    empty_element_tags = set(['br' , 'hr', 'input', 'img', 'meta',\n                              'spacer', 'link', 'frame', 'base'])\n\n    # The HTML standard defines these attributes as containing a\n    # 
space-separated list of values, not a single value. That is,\n    # class=\"foo bar\" means that the 'class' attribute has two values,\n    # 'foo' and 'bar', not the single value 'foo bar'.  When we\n    # encounter one of these attributes, we will parse its value into\n    # a list of values if possible. Upon output, the list will be\n    # converted back into a string.\n    cdata_list_attributes = {\n        \"*\" : ['class', 'accesskey', 'dropzone'],\n        \"a\" : ['rel', 'rev'],\n        \"link\" :  ['rel', 'rev'],\n        \"td\" : [\"headers\"],\n        \"th\" : [\"headers\"],\n        \"form\" : [\"accept-charset\"],\n        \"object\" : [\"archive\"],\n\n        # These are HTML5 specific, as are *.accesskey and *.dropzone above.\n        \"area\" : [\"rel\"],\n        \"icon\" : [\"sizes\"],\n        \"iframe\" : [\"sandbox\"],\n        \"output\" : [\"for\"],\n        }\n\n    def set_up_substitutions(self, tag):\n        # We are only interested in <meta> tags\n        if tag.name != 'meta':\n            return False\n\n        http_equiv = tag.get('http-equiv')\n        content = tag.get('content')\n        charset = tag.get('charset')\n\n        # We are interested in <meta> tags that say what encoding the\n        # document was originally in. This means HTML 5-style <meta>\n        # tags that provide the \"charset\" attribute. 
It also means\n        # HTML 4-style <meta> tags that provide the \"content\"\n        # attribute and have \"http-equiv\" set to \"content-type\".\n        #\n        # In both cases we will replace the value of the appropriate\n        # attribute with a stand-in object that can take on any\n        # encoding.\n        meta_encoding = None\n        if charset is not None:\n            # HTML 5 style:\n            # <meta charset=\"utf8\">\n            meta_encoding = charset\n            tag['charset'] = CharsetMetaAttributeValue(charset)\n\n        elif (content is not None and http_equiv is not None\n              and http_equiv.lower() == 'content-type'):\n            # HTML 4 style:\n            # <meta http-equiv=\"content-type\" content=\"text/html; charset=utf8\">\n            tag['content'] = ContentMetaAttributeValue(content)\n\n        return (meta_encoding is not None)\n\ndef register_treebuilders_from(module):\n    \"\"\"Copy TreeBuilders from the given module into this module.\"\"\"\n    # I'm fairly sure this is not the best way to do this.\n    this_module = sys.modules['bs4.builder']\n    for name in module.__all__:\n        obj = getattr(module, name)\n\n        if issubclass(obj, TreeBuilder):\n            setattr(this_module, name, obj)\n            this_module.__all__.append(name)\n            # Register the builder while we're at it.\n            this_module.builder_registry.register(obj)\n\nclass ParserRejectedMarkup(Exception):\n    pass\n\n# Builders are registered in reverse order of priority, so that custom\n# builder registrations will take precedence. In general, we want lxml\n# to take precedence over html5lib, because it's faster. And we only\n# want to use HTMLParser as a last resort.\nfrom . import _htmlparser\nregister_treebuilders_from(_htmlparser)\ntry:\n    from . import _html5lib\n    register_treebuilders_from(_html5lib)\nexcept ImportError:\n    # They don't have html5lib installed.\n    pass\ntry:\n    from . 
import _lxml\n    register_treebuilders_from(_lxml)\nexcept ImportError:\n    # They don't have lxml installed.\n    pass\n"
  },
  {
    "path": "example/parallax_svg_tools/bs4/builder/_html5lib.py",
    "content": "# Use of this source code is governed by a BSD-style license that can be\n# found in the LICENSE file.\n\n__all__ = [\n    'HTML5TreeBuilder',\n    ]\n\nimport warnings\nfrom bs4.builder import (\n    PERMISSIVE,\n    HTML,\n    HTML_5,\n    HTMLTreeBuilder,\n    )\nfrom bs4.element import (\n    NamespacedAttribute,\n    whitespace_re,\n)\nimport html5lib\nfrom html5lib.constants import namespaces\nfrom bs4.element import (\n    Comment,\n    Doctype,\n    NavigableString,\n    Tag,\n    )\n\ntry:\n    # Pre-0.99999999\n    from html5lib.treebuilders import _base as treebuilder_base\n    new_html5lib = False\nexcept ImportError, e:\n    # 0.99999999 and up\n    from html5lib.treebuilders import base as treebuilder_base\n    new_html5lib = True\n\nclass HTML5TreeBuilder(HTMLTreeBuilder):\n    \"\"\"Use html5lib to build a tree.\"\"\"\n\n    NAME = \"html5lib\"\n\n    features = [NAME, PERMISSIVE, HTML_5, HTML]\n\n    def prepare_markup(self, markup, user_specified_encoding,\n                       document_declared_encoding=None, exclude_encodings=None):\n        # Store the user-specified encoding for use later on.\n        self.user_specified_encoding = user_specified_encoding\n\n        # document_declared_encoding and exclude_encodings aren't used\n        # ATM because the html5lib TreeBuilder doesn't use\n        # UnicodeDammit.\n        if exclude_encodings:\n            warnings.warn(\"You provided a value for exclude_encoding, but the html5lib tree builder doesn't support exclude_encoding.\")\n        yield (markup, None, None, False)\n\n    # These methods are defined by Beautiful Soup.\n    def feed(self, markup):\n        if self.soup.parse_only is not None:\n            warnings.warn(\"You provided a value for parse_only, but the html5lib tree builder doesn't support parse_only. 
The entire document will be parsed.\")\n        parser = html5lib.HTMLParser(tree=self.create_treebuilder)\n\n        extra_kwargs = dict()\n        if not isinstance(markup, unicode):\n            if new_html5lib:\n                extra_kwargs['override_encoding'] = self.user_specified_encoding\n            else:\n                extra_kwargs['encoding'] = self.user_specified_encoding\n        doc = parser.parse(markup, **extra_kwargs)\n\n        # Set the character encoding detected by the tokenizer.\n        if isinstance(markup, unicode):\n            # We need to special-case this because html5lib sets\n            # charEncoding to UTF-8 if it gets Unicode input.\n            doc.original_encoding = None\n        else:\n            original_encoding = parser.tokenizer.stream.charEncoding[0]\n            if not isinstance(original_encoding, basestring):\n                # In 0.99999999 and up, the encoding is an html5lib\n                # Encoding object. We want to use a string for compatibility\n                # with other tree builders.\n                original_encoding = original_encoding.name\n            doc.original_encoding = original_encoding\n\n    def create_treebuilder(self, namespaceHTMLElements):\n        self.underlying_builder = TreeBuilderForHtml5lib(\n            self.soup, namespaceHTMLElements)\n        return self.underlying_builder\n\n    def test_fragment_to_document(self, fragment):\n        \"\"\"See `TreeBuilder`.\"\"\"\n        return u'<html><head></head><body>%s</body></html>' % fragment\n\n\nclass TreeBuilderForHtml5lib(treebuilder_base.TreeBuilder):\n\n    def __init__(self, soup, namespaceHTMLElements):\n        self.soup = soup\n        super(TreeBuilderForHtml5lib, self).__init__(namespaceHTMLElements)\n\n    def documentClass(self):\n        self.soup.reset()\n        return Element(self.soup, self.soup, None)\n\n    def insertDoctype(self, token):\n        name = token[\"name\"]\n        publicId = token[\"publicId\"]\n   
     systemId = token[\"systemId\"]\n\n        doctype = Doctype.for_name_and_ids(name, publicId, systemId)\n        self.soup.object_was_parsed(doctype)\n\n    def elementClass(self, name, namespace):\n        tag = self.soup.new_tag(name, namespace)\n        return Element(tag, self.soup, namespace)\n\n    def commentClass(self, data):\n        return TextNode(Comment(data), self.soup)\n\n    def fragmentClass(self):\n        # Imported here rather than at module level to avoid a circular\n        # import; without this the name BeautifulSoup is undefined.\n        from bs4 import BeautifulSoup\n        self.soup = BeautifulSoup(\"\")\n        self.soup.name = \"[document_fragment]\"\n        return Element(self.soup, self.soup, None)\n\n    def appendChild(self, node):\n        # XXX This code is not covered by the BS4 tests.\n        self.soup.append(node.element)\n\n    def getDocument(self):\n        return self.soup\n\n    def getFragment(self):\n        return treebuilder_base.TreeBuilder.getFragment(self).element\n\nclass AttrList(object):\n    def __init__(self, element):\n        self.element = element\n        self.attrs = dict(self.element.attrs)\n    def __iter__(self):\n        return list(self.attrs.items()).__iter__()\n    def __setitem__(self, name, value):\n        # If this attribute is a multi-valued attribute for this element,\n        # turn its value into a list.\n        list_attr = HTML5TreeBuilder.cdata_list_attributes\n        if (name in list_attr['*']\n            or (self.element.name in list_attr\n                and name in list_attr[self.element.name])):\n            # A node that is being cloned may have already undergone\n            # this procedure.\n            if not isinstance(value, list):\n                value = whitespace_re.split(value)\n        self.element[name] = value\n    def items(self):\n        return list(self.attrs.items())\n    def keys(self):\n        return list(self.attrs.keys())\n    def __len__(self):\n        return len(self.attrs)\n    def __getitem__(self, name):\n        return self.attrs[name]\n    def __contains__(self, name):\n        return name in 
list(self.attrs.keys())\n\n\nclass Element(treebuilder_base.Node):\n    def __init__(self, element, soup, namespace):\n        treebuilder_base.Node.__init__(self, element.name)\n        self.element = element\n        self.soup = soup\n        self.namespace = namespace\n\n    def appendChild(self, node):\n        string_child = child = None\n        if isinstance(node, basestring):\n            # Some other piece of code decided to pass in a string\n            # instead of creating a TextElement object to contain the\n            # string.\n            string_child = child = node\n        elif isinstance(node, Tag):\n            # Some other piece of code decided to pass in a Tag\n            # instead of creating an Element object to contain the\n            # Tag.\n            child = node\n        elif node.element.__class__ == NavigableString:\n            string_child = child = node.element\n        else:\n            child = node.element\n\n        if not isinstance(child, basestring) and child.parent is not None:\n            node.element.extract()\n\n        if (string_child and self.element.contents\n            and self.element.contents[-1].__class__ == NavigableString):\n            # We are appending a string onto another string.\n            # TODO This has O(n^2) performance, for input like\n            # \"a</a>a</a>a</a>...\"\n            old_element = self.element.contents[-1]\n            new_element = self.soup.new_string(old_element + string_child)\n            old_element.replace_with(new_element)\n            self.soup._most_recent_element = new_element\n        else:\n            if isinstance(node, basestring):\n                # Create a brand new NavigableString from this string.\n                child = self.soup.new_string(node)\n\n            # Tell Beautiful Soup to act as if it parsed this element\n            # immediately after the parent's last descendant. 
(Or\n            # immediately after the parent, if it has no children.)\n            if self.element.contents:\n                most_recent_element = self.element._last_descendant(False)\n            elif self.element.next_element is not None:\n                # Something from further ahead in the parse tree is\n                # being inserted into this earlier element. This is\n                # very annoying because it means an expensive search\n                # for the last element in the tree.\n                most_recent_element = self.soup._last_descendant()\n            else:\n                most_recent_element = self.element\n\n            self.soup.object_was_parsed(\n                child, parent=self.element,\n                most_recent_element=most_recent_element)\n\n    def getAttributes(self):\n        return AttrList(self.element)\n\n    def setAttributes(self, attributes):\n\n        if attributes is not None and len(attributes) > 0:\n\n            converted_attributes = []\n            for name, value in list(attributes.items()):\n                if isinstance(name, tuple):\n                    new_name = NamespacedAttribute(*name)\n                    del attributes[name]\n                    attributes[new_name] = value\n\n            self.soup.builder._replace_cdata_list_attribute_values(\n                self.name, attributes)\n            for name, value in attributes.items():\n                self.element[name] = value\n\n            # The attributes may contain variables that need substitution.\n            # Call set_up_substitutions manually.\n            #\n            # The Tag constructor called this method when the Tag was created,\n            # but we just set/changed the attributes, so call it again.\n            self.soup.builder.set_up_substitutions(self.element)\n    attributes = property(getAttributes, setAttributes)\n\n    def insertText(self, data, insertBefore=None):\n        if insertBefore:\n            text = 
TextNode(self.soup.new_string(data), self.soup)\n            # Pass the TextNode wrapper, not the raw string: insertBefore\n            # expects a node with an .element attribute.\n            self.insertBefore(text, insertBefore)\n        else:\n            self.appendChild(data)\n\n    def insertBefore(self, node, refNode):\n        index = self.element.index(refNode.element)\n        if (node.element.__class__ == NavigableString and self.element.contents\n            and self.element.contents[index-1].__class__ == NavigableString):\n            # (See comments in appendChild)\n            old_node = self.element.contents[index-1]\n            new_str = self.soup.new_string(old_node + node.element)\n            old_node.replace_with(new_str)\n        else:\n            self.element.insert(index, node.element)\n            node.parent = self\n\n    def removeChild(self, node):\n        node.element.extract()\n\n    def reparentChildren(self, new_parent):\n        \"\"\"Move all of this tag's children into another tag.\"\"\"\n        # print \"MOVE\", self.element.contents\n        # print \"FROM\", self.element\n        # print \"TO\", new_parent.element\n        element = self.element\n        new_parent_element = new_parent.element\n        # Determine what this tag's next_element will be once all the children\n        # are removed.\n        final_next_element = element.next_sibling\n\n        new_parents_last_descendant = new_parent_element._last_descendant(False, False)\n        if len(new_parent_element.contents) > 0:\n            # The new parent already contains children. 
We will be\n            # appending this tag's children to the end.\n            new_parents_last_child = new_parent_element.contents[-1]\n            new_parents_last_descendant_next_element = new_parents_last_descendant.next_element\n        else:\n            # The new parent contains no children.\n            new_parents_last_child = None\n            new_parents_last_descendant_next_element = new_parent_element.next_element\n\n        to_append = element.contents\n        append_after = new_parent_element.contents\n        if len(to_append) > 0:\n            # Set the first child's previous_element and previous_sibling\n            # to elements within the new parent\n            first_child = to_append[0]\n            if new_parents_last_descendant:\n                first_child.previous_element = new_parents_last_descendant\n            else:\n                first_child.previous_element = new_parent_element\n            first_child.previous_sibling = new_parents_last_child\n            if new_parents_last_descendant:\n                new_parents_last_descendant.next_element = first_child\n            else:\n                new_parent_element.next_element = first_child\n            if new_parents_last_child:\n                new_parents_last_child.next_sibling = first_child\n\n            # Fix the last child's next_element and next_sibling\n            last_child = to_append[-1]\n            last_child.next_element = new_parents_last_descendant_next_element\n            if new_parents_last_descendant_next_element:\n                new_parents_last_descendant_next_element.previous_element = last_child\n            last_child.next_sibling = None\n\n        for child in to_append:\n            child.parent = new_parent_element\n            new_parent_element.contents.append(child)\n\n        # Now that this element has no children, change its .next_element.\n        element.contents = []\n        element.next_element = final_next_element\n\n        # print 
\"DONE WITH MOVE\"\n        # print \"FROM\", self.element\n        # print \"TO\", new_parent_element\n\n    def cloneNode(self):\n        tag = self.soup.new_tag(self.element.name, self.namespace)\n        node = Element(tag, self.soup, self.namespace)\n        for key,value in self.attributes:\n            node.attributes[key] = value\n        return node\n\n    def hasContent(self):\n        return self.element.contents\n\n    def getNameTuple(self):\n        if self.namespace == None:\n            return namespaces[\"html\"], self.name\n        else:\n            return self.namespace, self.name\n\n    nameTuple = property(getNameTuple)\n\nclass TextNode(Element):\n    def __init__(self, element, soup):\n        treebuilder_base.Node.__init__(self, None)\n        self.element = element\n        self.soup = soup\n\n    def cloneNode(self):\n        raise NotImplementedError\n"
  },
  {
    "path": "example/parallax_svg_tools/bs4/builder/_htmlparser.py",
    "content": "\"\"\"Use the HTMLParser library to parse HTML files that aren't too bad.\"\"\"\n\n# Use of this source code is governed by a BSD-style license that can be\n# found in the LICENSE file.\n\n__all__ = [\n    'HTMLParserTreeBuilder',\n    ]\n\nfrom HTMLParser import HTMLParser\n\ntry:\n    from HTMLParser import HTMLParseError\nexcept ImportError, e:\n    # HTMLParseError is removed in Python 3.5. Since it can never be\n    # thrown in 3.5, we can just define our own class as a placeholder.\n    class HTMLParseError(Exception):\n        pass\n\nimport sys\nimport warnings\n\n# Starting in Python 3.2, the HTMLParser constructor takes a 'strict'\n# argument, which we'd like to set to False. Unfortunately,\n# http://bugs.python.org/issue13273 makes strict=True a better bet\n# before Python 3.2.3.\n#\n# At the end of this file, we monkeypatch HTMLParser so that\n# strict=True works well on Python 3.2.2.\nmajor, minor, release = sys.version_info[:3]\nCONSTRUCTOR_TAKES_STRICT = major == 3 and minor == 2 and release >= 3\nCONSTRUCTOR_STRICT_IS_DEPRECATED = major == 3 and minor == 3\nCONSTRUCTOR_TAKES_CONVERT_CHARREFS = major == 3 and minor >= 4\n\n\nfrom bs4.element import (\n    CData,\n    Comment,\n    Declaration,\n    Doctype,\n    ProcessingInstruction,\n    )\nfrom bs4.dammit import EntitySubstitution, UnicodeDammit\n\nfrom bs4.builder import (\n    HTML,\n    HTMLTreeBuilder,\n    STRICT,\n    )\n\n\nHTMLPARSER = 'html.parser'\n\nclass BeautifulSoupHTMLParser(HTMLParser):\n    def handle_starttag(self, name, attrs):\n        # XXX namespace\n        attr_dict = {}\n        for key, value in attrs:\n            # Change None attribute values to the empty string\n            # for consistency with the other tree builders.\n            if value is None:\n                value = ''\n            attr_dict[key] = value\n            attrvalue = '\"\"'\n        self.soup.handle_starttag(name, None, None, attr_dict)\n\n    def handle_endtag(self, name):\n      
  self.soup.handle_endtag(name)\n\n    def handle_data(self, data):\n        self.soup.handle_data(data)\n\n    def handle_charref(self, name):\n        # XXX workaround for a bug in HTMLParser. Remove this once\n        # it's fixed in all supported versions.\n        # http://bugs.python.org/issue13633\n        if name.startswith('x'):\n            real_name = int(name.lstrip('x'), 16)\n        elif name.startswith('X'):\n            real_name = int(name.lstrip('X'), 16)\n        else:\n            real_name = int(name)\n\n        try:\n            data = unichr(real_name)\n        except (ValueError, OverflowError), e:\n            data = u\"\\N{REPLACEMENT CHARACTER}\"\n\n        self.handle_data(data)\n\n    def handle_entityref(self, name):\n        character = EntitySubstitution.HTML_ENTITY_TO_CHARACTER.get(name)\n        if character is not None:\n            data = character\n        else:\n            data = \"&%s;\" % name\n        self.handle_data(data)\n\n    def handle_comment(self, data):\n        self.soup.endData()\n        self.soup.handle_data(data)\n        self.soup.endData(Comment)\n\n    def handle_decl(self, data):\n        self.soup.endData()\n        if data.startswith(\"DOCTYPE \"):\n            data = data[len(\"DOCTYPE \"):]\n        elif data == 'DOCTYPE':\n            # i.e. 
\"<!DOCTYPE>\"\n            data = ''\n        self.soup.handle_data(data)\n        self.soup.endData(Doctype)\n\n    def unknown_decl(self, data):\n        if data.upper().startswith('CDATA['):\n            cls = CData\n            data = data[len('CDATA['):]\n        else:\n            cls = Declaration\n        self.soup.endData()\n        self.soup.handle_data(data)\n        self.soup.endData(cls)\n\n    def handle_pi(self, data):\n        self.soup.endData()\n        self.soup.handle_data(data)\n        self.soup.endData(ProcessingInstruction)\n\n\nclass HTMLParserTreeBuilder(HTMLTreeBuilder):\n\n    is_xml = False\n    picklable = True\n    NAME = HTMLPARSER\n    features = [NAME, HTML, STRICT]\n\n    def __init__(self, *args, **kwargs):\n        if CONSTRUCTOR_TAKES_STRICT and not CONSTRUCTOR_STRICT_IS_DEPRECATED:\n            kwargs['strict'] = False\n        if CONSTRUCTOR_TAKES_CONVERT_CHARREFS:\n            kwargs['convert_charrefs'] = False\n        self.parser_args = (args, kwargs)\n\n    def prepare_markup(self, markup, user_specified_encoding=None,\n                       document_declared_encoding=None, exclude_encodings=None):\n        \"\"\"\n        :return: A 4-tuple (markup, original encoding, encoding\n        declared within markup, whether any characters had to be\n        replaced with REPLACEMENT CHARACTER).\n        \"\"\"\n        if isinstance(markup, unicode):\n            yield (markup, None, None, False)\n            return\n\n        try_encodings = [user_specified_encoding, document_declared_encoding]\n        dammit = UnicodeDammit(markup, try_encodings, is_html=True,\n                               exclude_encodings=exclude_encodings)\n        yield (dammit.markup, dammit.original_encoding,\n               dammit.declared_html_encoding,\n               dammit.contains_replacement_characters)\n\n    def feed(self, markup):\n        args, kwargs = self.parser_args\n        parser = BeautifulSoupHTMLParser(*args, **kwargs)\n        
parser.soup = self.soup\n        try:\n            parser.feed(markup)\n        except HTMLParseError, e:\n            warnings.warn(RuntimeWarning(\n                \"Python's built-in HTMLParser cannot parse the given document. This is not a bug in Beautiful Soup. The best solution is to install an external parser (lxml or html5lib), and use Beautiful Soup with that parser. See http://www.crummy.com/software/BeautifulSoup/bs4/doc/#installing-a-parser for help.\"))\n            raise e\n\n# Patch 3.2 versions of HTMLParser earlier than 3.2.3 to use some\n# 3.2.3 code. This ensures they don't treat markup like <p></p> as a\n# string.\n#\n# XXX This code can be removed once most Python 3 users are on 3.2.3.\nif major == 3 and minor == 2 and not CONSTRUCTOR_TAKES_STRICT:\n    import re\n    attrfind_tolerant = re.compile(\n        r'\\s*((?<=[\\'\"\\s])[^\\s/>][^\\s/=>]*)(\\s*=+\\s*'\n        r'(\\'[^\\']*\\'|\"[^\"]*\"|(?![\\'\"])[^>\\s]*))?')\n    HTMLParserTreeBuilder.attrfind_tolerant = attrfind_tolerant\n\n    locatestarttagend = re.compile(r\"\"\"\n  <[a-zA-Z][-.a-zA-Z0-9:_]*          # tag name\n  (?:\\s+                             # whitespace before attribute name\n    (?:[a-zA-Z_][-.:a-zA-Z0-9_]*     # attribute name\n      (?:\\s*=\\s*                     # value indicator\n        (?:'[^']*'                   # LITA-enclosed value\n          |\\\"[^\\\"]*\\\"                # LIT-enclosed value\n          |[^'\\\">\\s]+                # bare value\n         )\n       )?\n     )\n   )*\n  \\s*                                # trailing whitespace\n\"\"\", re.VERBOSE)\n    BeautifulSoupHTMLParser.locatestarttagend = locatestarttagend\n\n    from html.parser import tagfind, attrfind\n\n    def parse_starttag(self, i):\n        self.__starttag_text = None\n        endpos = self.check_for_whole_start_tag(i)\n        if endpos < 0:\n            return endpos\n        rawdata = self.rawdata\n        self.__starttag_text = rawdata[i:endpos]\n\n        # Now parse 
the data between i+1 and j into a tag and attrs\n        attrs = []\n        match = tagfind.match(rawdata, i+1)\n        assert match, 'unexpected call to parse_starttag()'\n        k = match.end()\n        self.lasttag = tag = rawdata[i+1:k].lower()\n        while k < endpos:\n            if self.strict:\n                m = attrfind.match(rawdata, k)\n            else:\n                m = attrfind_tolerant.match(rawdata, k)\n            if not m:\n                break\n            attrname, rest, attrvalue = m.group(1, 2, 3)\n            if not rest:\n                attrvalue = None\n            elif attrvalue[:1] == '\\'' == attrvalue[-1:] or \\\n                 attrvalue[:1] == '\"' == attrvalue[-1:]:\n                attrvalue = attrvalue[1:-1]\n            if attrvalue:\n                attrvalue = self.unescape(attrvalue)\n            attrs.append((attrname.lower(), attrvalue))\n            k = m.end()\n\n        end = rawdata[k:endpos].strip()\n        if end not in (\">\", \"/>\"):\n            lineno, offset = self.getpos()\n            if \"\\n\" in self.__starttag_text:\n                lineno = lineno + self.__starttag_text.count(\"\\n\")\n                offset = len(self.__starttag_text) \\\n                         - self.__starttag_text.rfind(\"\\n\")\n            else:\n                offset = offset + len(self.__starttag_text)\n            if self.strict:\n                self.error(\"junk characters in start tag: %r\"\n                           % (rawdata[k:endpos][:20],))\n            self.handle_data(rawdata[i:endpos])\n            return endpos\n        if end.endswith('/>'):\n            # XHTML-style empty tag: <span attr=\"value\" />\n            self.handle_startendtag(tag, attrs)\n        else:\n            self.handle_starttag(tag, attrs)\n            if tag in self.CDATA_CONTENT_ELEMENTS:\n                self.set_cdata_mode(tag)\n        return endpos\n\n    def set_cdata_mode(self, elem):\n        self.cdata_elem = 
elem.lower()\n        self.interesting = re.compile(r'</\\s*%s\\s*>' % self.cdata_elem, re.I)\n\n    BeautifulSoupHTMLParser.parse_starttag = parse_starttag\n    BeautifulSoupHTMLParser.set_cdata_mode = set_cdata_mode\n\n    CONSTRUCTOR_TAKES_STRICT = True\n"
  },
  {
    "path": "example/parallax_svg_tools/bs4/builder/_lxml.py",
    "content": "# Use of this source code is governed by a BSD-style license that can be\n# found in the LICENSE file.\n__all__ = [\n    'LXMLTreeBuilderForXML',\n    'LXMLTreeBuilder',\n    ]\n\nfrom io import BytesIO\nfrom StringIO import StringIO\nimport collections\nfrom lxml import etree\nfrom bs4.element import (\n    Comment,\n    Doctype,\n    NamespacedAttribute,\n    ProcessingInstruction,\n    XMLProcessingInstruction,\n)\nfrom bs4.builder import (\n    FAST,\n    HTML,\n    HTMLTreeBuilder,\n    PERMISSIVE,\n    ParserRejectedMarkup,\n    TreeBuilder,\n    XML)\nfrom bs4.dammit import EncodingDetector\n\nLXML = 'lxml'\n\nclass LXMLTreeBuilderForXML(TreeBuilder):\n    DEFAULT_PARSER_CLASS = etree.XMLParser\n\n    is_xml = True\n    processing_instruction_class = XMLProcessingInstruction\n\n    NAME = \"lxml-xml\"\n    ALTERNATE_NAMES = [\"xml\"]\n\n    # Well, it's permissive by XML parser standards.\n    features = [NAME, LXML, XML, FAST, PERMISSIVE]\n\n    CHUNK_SIZE = 512\n\n    # This namespace mapping is specified in the XML Namespace\n    # standard.\n    DEFAULT_NSMAPS = {'http://www.w3.org/XML/1998/namespace' : \"xml\"}\n\n    def default_parser(self, encoding):\n        # This can either return a parser object or a class, which\n        # will be instantiated with default arguments.\n        if self._default_parser is not None:\n            return self._default_parser\n        return etree.XMLParser(\n            target=self, strip_cdata=False, recover=True, encoding=encoding)\n\n    def parser_for(self, encoding):\n        # Use the default parser.\n        parser = self.default_parser(encoding)\n\n        if isinstance(parser, collections.Callable):\n            # Instantiate the parser with default arguments\n            parser = parser(target=self, strip_cdata=False, encoding=encoding)\n        return parser\n\n    def __init__(self, parser=None, empty_element_tags=None):\n        # TODO: Issue a warning if parser is present but not a\n      
  # callable, since that means there's no way to create new\n        # parsers for different encodings.\n        self._default_parser = parser\n        if empty_element_tags is not None:\n            self.empty_element_tags = set(empty_element_tags)\n        self.soup = None\n        self.nsmaps = [self.DEFAULT_NSMAPS]\n\n    def _getNsTag(self, tag):\n        # Split the namespace URL out of a fully-qualified lxml tag\n        # name. Copied from lxml's src/lxml/sax.py.\n        if tag[0] == '{':\n            return tuple(tag[1:].split('}', 1))\n        else:\n            return (None, tag)\n\n    def prepare_markup(self, markup, user_specified_encoding=None,\n                       exclude_encodings=None,\n                       document_declared_encoding=None):\n        \"\"\"\n        :yield: A series of 4-tuples.\n         (markup, encoding, declared encoding,\n          has undergone character replacement)\n\n        Each 4-tuple represents a strategy for parsing the document.\n        \"\"\"\n        # Instead of using UnicodeDammit to convert the bytestring to\n        # Unicode using different encodings, use EncodingDetector to\n        # iterate over the encodings, and tell lxml to try to parse\n        # the document as each one in turn.\n        is_html = not self.is_xml\n        if is_html:\n            self.processing_instruction_class = ProcessingInstruction\n        else:\n            self.processing_instruction_class = XMLProcessingInstruction\n\n        if isinstance(markup, unicode):\n            # We were given Unicode. Maybe lxml can parse Unicode on\n            # this system?\n            yield markup, None, document_declared_encoding, False\n\n        if isinstance(markup, unicode):\n            # No, apparently not. 
Convert the Unicode to UTF-8 and\n            # tell lxml to parse it as UTF-8.\n            yield (markup.encode(\"utf8\"), \"utf8\",\n                   document_declared_encoding, False)\n\n        try_encodings = [user_specified_encoding, document_declared_encoding]\n        detector = EncodingDetector(\n            markup, try_encodings, is_html, exclude_encodings)\n        for encoding in detector.encodings:\n            yield (detector.markup, encoding, document_declared_encoding, False)\n\n    def feed(self, markup):\n        if isinstance(markup, bytes):\n            markup = BytesIO(markup)\n        elif isinstance(markup, unicode):\n            markup = StringIO(markup)\n\n        # Call feed() at least once, even if the markup is empty,\n        # or the parser won't be initialized.\n        data = markup.read(self.CHUNK_SIZE)\n        try:\n            self.parser = self.parser_for(self.soup.original_encoding)\n            self.parser.feed(data)\n            while len(data) != 0:\n                # Now call feed() on the rest of the data, chunk by chunk.\n                data = markup.read(self.CHUNK_SIZE)\n                if len(data) != 0:\n                    self.parser.feed(data)\n            self.parser.close()\n        except (UnicodeDecodeError, LookupError, etree.ParserError), e:\n            raise ParserRejectedMarkup(str(e))\n\n    def close(self):\n        self.nsmaps = [self.DEFAULT_NSMAPS]\n\n    def start(self, name, attrs, nsmap={}):\n        # Make sure attrs is a mutable dict--lxml may send an immutable dictproxy.\n        attrs = dict(attrs)\n        nsprefix = None\n        # Invert each namespace map as it comes in.\n        if len(self.nsmaps) > 1:\n            # There are no new namespaces for this tag, but\n            # non-default namespaces are in play, so we need a\n            # separate tag stack to know when they end.\n            self.nsmaps.append(None)\n        elif len(nsmap) > 0:\n            # A new namespace 
mapping has come into play.\n            inverted_nsmap = dict((value, key) for key, value in nsmap.items())\n            self.nsmaps.append(inverted_nsmap)\n            # Also treat the namespace mapping as a set of attributes on the\n            # tag, so we can recreate it later.\n            attrs = attrs.copy()\n            for prefix, namespace in nsmap.items():\n                attribute = NamespacedAttribute(\n                    \"xmlns\", prefix, \"http://www.w3.org/2000/xmlns/\")\n                attrs[attribute] = namespace\n\n        # Namespaces are in play. Find any attributes that came in\n        # from lxml with namespaces attached to their names, and\n        # turn then into NamespacedAttribute objects.\n        new_attrs = {}\n        for attr, value in attrs.items():\n            namespace, attr = self._getNsTag(attr)\n            if namespace is None:\n                new_attrs[attr] = value\n            else:\n                nsprefix = self._prefix_for_namespace(namespace)\n                attr = NamespacedAttribute(nsprefix, attr, namespace)\n                new_attrs[attr] = value\n        attrs = new_attrs\n\n        namespace, name = self._getNsTag(name)\n        nsprefix = self._prefix_for_namespace(namespace)\n        self.soup.handle_starttag(name, namespace, nsprefix, attrs)\n\n    def _prefix_for_namespace(self, namespace):\n        \"\"\"Find the currently active prefix for the given namespace.\"\"\"\n        if namespace is None:\n            return None\n        for inverted_nsmap in reversed(self.nsmaps):\n            if inverted_nsmap is not None and namespace in inverted_nsmap:\n                return inverted_nsmap[namespace]\n        return None\n\n    def end(self, name):\n        self.soup.endData()\n        completed_tag = self.soup.tagStack[-1]\n        namespace, name = self._getNsTag(name)\n        nsprefix = None\n        if namespace is not None:\n            for inverted_nsmap in reversed(self.nsmaps):\n            
    if inverted_nsmap is not None and namespace in inverted_nsmap:\n                    nsprefix = inverted_nsmap[namespace]\n                    break\n        self.soup.handle_endtag(name, nsprefix)\n        if len(self.nsmaps) > 1:\n            # This tag, or one of its parents, introduced a namespace\n            # mapping, so pop it off the stack.\n            self.nsmaps.pop()\n\n    def pi(self, target, data):\n        self.soup.endData()\n        self.soup.handle_data(target + ' ' + data)\n        self.soup.endData(self.processing_instruction_class)\n\n    def data(self, content):\n        self.soup.handle_data(content)\n\n    def doctype(self, name, pubid, system):\n        self.soup.endData()\n        doctype = Doctype.for_name_and_ids(name, pubid, system)\n        self.soup.object_was_parsed(doctype)\n\n    def comment(self, content):\n        \"Handle comments as Comment objects.\"\n        self.soup.endData()\n        self.soup.handle_data(content)\n        self.soup.endData(Comment)\n\n    def test_fragment_to_document(self, fragment):\n        \"\"\"See `TreeBuilder`.\"\"\"\n        return u'<?xml version=\"1.0\" encoding=\"utf-8\"?>\\n%s' % fragment\n\n\nclass LXMLTreeBuilder(HTMLTreeBuilder, LXMLTreeBuilderForXML):\n\n    NAME = LXML\n    ALTERNATE_NAMES = [\"lxml-html\"]\n\n    features = ALTERNATE_NAMES + [NAME, HTML, FAST, PERMISSIVE]\n    is_xml = False\n    processing_instruction_class = ProcessingInstruction\n\n    def default_parser(self, encoding):\n        return etree.HTMLParser\n\n    def feed(self, markup):\n        encoding = self.soup.original_encoding\n        try:\n            self.parser = self.parser_for(encoding)\n            self.parser.feed(markup)\n            self.parser.close()\n        except (UnicodeDecodeError, LookupError, etree.ParserError), e:\n            raise ParserRejectedMarkup(str(e))\n\n\n    def test_fragment_to_document(self, fragment):\n        \"\"\"See `TreeBuilder`.\"\"\"\n        return 
u'<html><body>%s</body></html>' % fragment\n"
  },
  {
    "path": "example/parallax_svg_tools/bs4/dammit.py",
    "content": "# -*- coding: utf-8 -*-\n\"\"\"Beautiful Soup bonus library: Unicode, Dammit\n\nThis library converts a bytestream to Unicode through any means\nnecessary. It is heavily based on code from Mark Pilgrim's Universal\nFeed Parser. It works best on XML and HTML, but it does not rewrite the\nXML or HTML to reflect a new encoding; that's the tree builder's job.\n\"\"\"\n# Use of this source code is governed by a BSD-style license that can be\n# found in the LICENSE file.\n__license__ = \"MIT\"\n\nimport codecs\nfrom htmlentitydefs import codepoint2name\nimport re\nimport logging\nimport string\n\n# Import a library to autodetect character encodings.\nchardet_type = None\ntry:\n    # First try the fast C implementation.\n    #  PyPI package: cchardet\n    import cchardet\n    def chardet_dammit(s):\n        return cchardet.detect(s)['encoding']\nexcept ImportError:\n    try:\n        # Fall back to the pure Python implementation\n        #  Debian package: python-chardet\n        #  PyPI package: chardet\n        import chardet\n        def chardet_dammit(s):\n            return chardet.detect(s)['encoding']\n        #import chardet.constants\n        #chardet.constants._debug = 1\n    except ImportError:\n        # No chardet available.\n        def chardet_dammit(s):\n            return None\n\n# Available from http://cjkpython.i18n.org/.\ntry:\n    import iconv_codec\nexcept ImportError:\n    pass\n\nxml_encoding_re = re.compile(\n    '^<\\?.*encoding=[\\'\"](.*?)[\\'\"].*\\?>'.encode(), re.I)\nhtml_meta_re = re.compile(\n    '<\\s*meta[^>]+charset\\s*=\\s*[\"\\']?([^>]*?)[ /;\\'\">]'.encode(), re.I)\n\nclass EntitySubstitution(object):\n\n    \"\"\"Substitute XML or HTML entities for the corresponding characters.\"\"\"\n\n    def _populate_class_variables():\n        lookup = {}\n        reverse_lookup = {}\n        characters_for_re = []\n        for codepoint, name in list(codepoint2name.items()):\n            character = unichr(codepoint)\n          
  if codepoint != 34:\n                # There's no point in turning the quotation mark into\n                # &quot;, unless it happens within an attribute value, which\n                # is handled elsewhere.\n                characters_for_re.append(character)\n                lookup[character] = name\n            # But we do want to turn &quot; into the quotation mark.\n            reverse_lookup[name] = character\n        re_definition = \"[%s]\" % \"\".join(characters_for_re)\n        return lookup, reverse_lookup, re.compile(re_definition)\n    (CHARACTER_TO_HTML_ENTITY, HTML_ENTITY_TO_CHARACTER,\n     CHARACTER_TO_HTML_ENTITY_RE) = _populate_class_variables()\n\n    CHARACTER_TO_XML_ENTITY = {\n        \"'\": \"apos\",\n        '\"': \"quot\",\n        \"&\": \"amp\",\n        \"<\": \"lt\",\n        \">\": \"gt\",\n        }\n\n    BARE_AMPERSAND_OR_BRACKET = re.compile(\"([<>]|\"\n                                           \"&(?!#\\d+;|#x[0-9a-fA-F]+;|\\w+;)\"\n                                           \")\")\n\n    AMPERSAND_OR_BRACKET = re.compile(\"([<>&])\")\n\n    @classmethod\n    def _substitute_html_entity(cls, matchobj):\n        entity = cls.CHARACTER_TO_HTML_ENTITY.get(matchobj.group(0))\n        return \"&%s;\" % entity\n\n    @classmethod\n    def _substitute_xml_entity(cls, matchobj):\n        \"\"\"Used with a regular expression to substitute the\n        appropriate XML entity for an XML special character.\"\"\"\n        entity = cls.CHARACTER_TO_XML_ENTITY[matchobj.group(0)]\n        return \"&%s;\" % entity\n\n    @classmethod\n    def quoted_attribute_value(self, value):\n        \"\"\"Make a value into a quoted XML attribute, possibly escaping it.\n\n         Most strings will be quoted using double quotes.\n\n          Bob's Bar -> \"Bob's Bar\"\n\n         If a string contains double quotes, it will be quoted using\n         single quotes.\n\n          Welcome to \"my bar\" -> 'Welcome to \"my bar\"'\n\n         If a string 
contains both single and double quotes, the\n         double quotes will be escaped, and the string will be quoted\n         using double quotes.\n\n          Welcome to \"Bob's Bar\" -> \"Welcome to &quot;Bob's bar&quot;\n        \"\"\"\n        quote_with = '\"'\n        if '\"' in value:\n            if \"'\" in value:\n                # The string contains both single and double\n                # quotes.  Turn the double quotes into\n                # entities. We quote the double quotes rather than\n                # the single quotes because the entity name is\n                # \"&quot;\" whether this is HTML or XML.  If we\n                # quoted the single quotes, we'd have to decide\n                # between &apos; and &squot;.\n                replace_with = \"&quot;\"\n                value = value.replace('\"', replace_with)\n            else:\n                # There are double quotes but no single quotes.\n                # We can use single quotes to quote the attribute.\n                quote_with = \"'\"\n        return quote_with + value + quote_with\n\n    @classmethod\n    def substitute_xml(cls, value, make_quoted_attribute=False):\n        \"\"\"Substitute XML entities for special XML characters.\n\n        :param value: A string to be substituted. The less-than sign\n          will become &lt;, the greater-than sign will become &gt;,\n          and any ampersands will become &amp;. 
If you want ampersands\n          that appear to be part of an entity definition to be left\n          alone, use substitute_xml_containing_entities() instead.\n\n        :param make_quoted_attribute: If True, then the string will be\n         quoted, as befits an attribute value.\n        \"\"\"\n        # Escape angle brackets and ampersands.\n        value = cls.AMPERSAND_OR_BRACKET.sub(\n            cls._substitute_xml_entity, value)\n\n        if make_quoted_attribute:\n            value = cls.quoted_attribute_value(value)\n        return value\n\n    @classmethod\n    def substitute_xml_containing_entities(\n        cls, value, make_quoted_attribute=False):\n        \"\"\"Substitute XML entities for special XML characters.\n\n        :param value: A string to be substituted. The less-than sign will\n          become &lt;, the greater-than sign will become &gt;, and any\n          ampersands that are not part of an entity defition will\n          become &amp;.\n\n        :param make_quoted_attribute: If True, then the string will be\n         quoted, as befits an attribute value.\n        \"\"\"\n        # Escape angle brackets, and ampersands that aren't part of\n        # entities.\n        value = cls.BARE_AMPERSAND_OR_BRACKET.sub(\n            cls._substitute_xml_entity, value)\n\n        if make_quoted_attribute:\n            value = cls.quoted_attribute_value(value)\n        return value\n\n    @classmethod\n    def substitute_html(cls, s):\n        \"\"\"Replace certain Unicode characters with named HTML entities.\n\n        This differs from data.encode(encoding, 'xmlcharrefreplace')\n        in that the goal is to make the result more readable (to those\n        with ASCII displays) rather than to recover from\n        errors. 
There's absolutely nothing wrong with a UTF-8 string\n        containg a LATIN SMALL LETTER E WITH ACUTE, but replacing that\n        character with \"&eacute;\" will make it more readable to some\n        people.\n        \"\"\"\n        return cls.CHARACTER_TO_HTML_ENTITY_RE.sub(\n            cls._substitute_html_entity, s)\n\n\nclass EncodingDetector:\n    \"\"\"Suggests a number of possible encodings for a bytestring.\n\n    Order of precedence:\n\n    1. Encodings you specifically tell EncodingDetector to try first\n    (the override_encodings argument to the constructor).\n\n    2. An encoding declared within the bytestring itself, either in an\n    XML declaration (if the bytestring is to be interpreted as an XML\n    document), or in a <meta> tag (if the bytestring is to be\n    interpreted as an HTML document.)\n\n    3. An encoding detected through textual analysis by chardet,\n    cchardet, or a similar external library.\n\n    4. UTF-8.\n\n    5. Windows-1252.\n    \"\"\"\n    def __init__(self, markup, override_encodings=None, is_html=False,\n                 exclude_encodings=None):\n        self.override_encodings = override_encodings or []\n        exclude_encodings = exclude_encodings or []\n        self.exclude_encodings = set([x.lower() for x in exclude_encodings])\n        self.chardet_encoding = None\n        self.is_html = is_html\n        self.declared_encoding = None\n\n        # First order of business: strip a byte-order mark.\n        self.markup, self.sniffed_encoding = self.strip_byte_order_mark(markup)\n\n    def _usable(self, encoding, tried):\n        if encoding is not None:\n            encoding = encoding.lower()\n            if encoding in self.exclude_encodings:\n                return False\n            if encoding not in tried:\n                tried.add(encoding)\n                return True\n        return False\n\n    @property\n    def encodings(self):\n        \"\"\"Yield a number of encodings that might work for this 
markup.\"\"\"\n        tried = set()\n        for e in self.override_encodings:\n            if self._usable(e, tried):\n                yield e\n\n        # Did the document originally start with a byte-order mark\n        # that indicated its encoding?\n        if self._usable(self.sniffed_encoding, tried):\n            yield self.sniffed_encoding\n\n        # Look within the document for an XML or HTML encoding\n        # declaration.\n        if self.declared_encoding is None:\n            self.declared_encoding = self.find_declared_encoding(\n                self.markup, self.is_html)\n        if self._usable(self.declared_encoding, tried):\n            yield self.declared_encoding\n\n        # Use third-party character set detection to guess at the\n        # encoding.\n        if self.chardet_encoding is None:\n            self.chardet_encoding = chardet_dammit(self.markup)\n        if self._usable(self.chardet_encoding, tried):\n            yield self.chardet_encoding\n\n        # As a last-ditch effort, try utf-8 and windows-1252.\n        for e in ('utf-8', 'windows-1252'):\n            if self._usable(e, tried):\n                yield e\n\n    @classmethod\n    def strip_byte_order_mark(cls, data):\n        \"\"\"If a byte-order mark is present, strip it and return the encoding it implies.\"\"\"\n        encoding = None\n        if isinstance(data, unicode):\n            # Unicode data cannot have a byte-order mark.\n            return data, encoding\n        if (len(data) >= 4) and (data[:2] == b'\\xfe\\xff') \\\n               and (data[2:4] != '\\x00\\x00'):\n            encoding = 'utf-16be'\n            data = data[2:]\n        elif (len(data) >= 4) and (data[:2] == b'\\xff\\xfe') \\\n                 and (data[2:4] != '\\x00\\x00'):\n            encoding = 'utf-16le'\n            data = data[2:]\n        elif data[:3] == b'\\xef\\xbb\\xbf':\n            encoding = 'utf-8'\n            data = data[3:]\n        elif data[:4] == 
b'\\x00\\x00\\xfe\\xff':\n            encoding = 'utf-32be'\n            data = data[4:]\n        elif data[:4] == b'\\xff\\xfe\\x00\\x00':\n            encoding = 'utf-32le'\n            data = data[4:]\n        return data, encoding\n\n    @classmethod\n    def find_declared_encoding(cls, markup, is_html=False, search_entire_document=False):\n        \"\"\"Given a document, tries to find its declared encoding.\n\n        An XML encoding is declared at the beginning of the document.\n\n        An HTML encoding is declared in a <meta> tag, hopefully near the\n        beginning of the document.\n        \"\"\"\n        if search_entire_document:\n            xml_endpos = html_endpos = len(markup)\n        else:\n            xml_endpos = 1024\n            html_endpos = max(2048, int(len(markup) * 0.05))\n            \n        declared_encoding = None\n        declared_encoding_match = xml_encoding_re.search(markup, endpos=xml_endpos)\n        if not declared_encoding_match and is_html:\n            declared_encoding_match = html_meta_re.search(markup, endpos=html_endpos)\n        if declared_encoding_match is not None:\n            declared_encoding = declared_encoding_match.groups()[0].decode(\n                'ascii', 'replace')\n        if declared_encoding:\n            return declared_encoding.lower()\n        return None\n\nclass UnicodeDammit:\n    \"\"\"A class for detecting the encoding of a *ML document and\n    converting it to a Unicode string. If the source encoding is\n    windows-1252, can replace MS smart quotes with their HTML or XML\n    equivalents.\"\"\"\n\n    # This dictionary maps commonly seen values for \"charset\" in HTML\n    # meta tags to the corresponding Python codec names. 
It only covers\n    # values that aren't in Python's aliases and can't be determined\n    # by the heuristics in find_codec.\n    CHARSET_ALIASES = {\"macintosh\": \"mac-roman\",\n                       \"x-sjis\": \"shift-jis\"}\n\n    ENCODINGS_WITH_SMART_QUOTES = [\n        \"windows-1252\",\n        \"iso-8859-1\",\n        \"iso-8859-2\",\n        ]\n\n    def __init__(self, markup, override_encodings=[],\n                 smart_quotes_to=None, is_html=False, exclude_encodings=[]):\n        self.smart_quotes_to = smart_quotes_to\n        self.tried_encodings = []\n        self.contains_replacement_characters = False\n        self.is_html = is_html\n        self.log = logging.getLogger(__name__)\n        self.detector = EncodingDetector(\n            markup, override_encodings, is_html, exclude_encodings)\n\n        # Short-circuit if the data is in Unicode to begin with.\n        if isinstance(markup, unicode) or markup == '':\n            self.markup = markup\n            self.unicode_markup = unicode(markup)\n            self.original_encoding = None\n            return\n\n        # The encoding detector may have stripped a byte-order mark.\n        # Use the stripped markup from this point on.\n        self.markup = self.detector.markup\n\n        u = None\n        for encoding in self.detector.encodings:\n            markup = self.detector.markup\n            u = self._convert_from(encoding)\n            if u is not None:\n                break\n\n        if not u:\n            # None of the encodings worked. 
As an absolute last resort,\n            # try them again with character replacement.\n\n            for encoding in self.detector.encodings:\n                if encoding != \"ascii\":\n                    u = self._convert_from(encoding, \"replace\")\n                if u is not None:\n                    self.log.warning(\n                            \"Some characters could not be decoded, and were \"\n                            \"replaced with REPLACEMENT CHARACTER.\"\n                    )\n                    self.contains_replacement_characters = True\n                    break\n\n        # If none of that worked, we could at this point force it to\n        # ASCII, but that would destroy so much data that I think\n        # giving up is better.\n        self.unicode_markup = u\n        if not u:\n            self.original_encoding = None\n\n    def _sub_ms_char(self, match):\n        \"\"\"Changes a MS smart quote character to an XML or HTML\n        entity, or an ASCII character.\"\"\"\n        orig = match.group(1)\n        if self.smart_quotes_to == 'ascii':\n            sub = self.MS_CHARS_TO_ASCII.get(orig).encode()\n        else:\n            sub = self.MS_CHARS.get(orig)\n            if type(sub) == tuple:\n                if self.smart_quotes_to == 'xml':\n                    sub = '&#x'.encode() + sub[1].encode() + ';'.encode()\n                else:\n                    sub = '&'.encode() + sub[0].encode() + ';'.encode()\n            else:\n                sub = sub.encode()\n        return sub\n\n    def _convert_from(self, proposed, errors=\"strict\"):\n        proposed = self.find_codec(proposed)\n        if not proposed or (proposed, errors) in self.tried_encodings:\n            return None\n        self.tried_encodings.append((proposed, errors))\n        markup = self.markup\n        # Convert smart quotes to HTML if coming from an encoding\n        # that might have them.\n        if (self.smart_quotes_to is not None\n            and 
proposed in self.ENCODINGS_WITH_SMART_QUOTES):\n            smart_quotes_re = b\"([\\x80-\\x9f])\"\n            smart_quotes_compiled = re.compile(smart_quotes_re)\n            markup = smart_quotes_compiled.sub(self._sub_ms_char, markup)\n\n        try:\n            #print \"Trying to convert document to %s (errors=%s)\" % (\n            #    proposed, errors)\n            u = self._to_unicode(markup, proposed, errors)\n            self.markup = u\n            self.original_encoding = proposed\n        except Exception as e:\n            #print \"That didn't work!\"\n            #print e\n            return None\n        #print \"Correct encoding: %s\" % proposed\n        return self.markup\n\n    def _to_unicode(self, data, encoding, errors=\"strict\"):\n        '''Given a string and its encoding, decodes the string into Unicode.\n        %encoding is a string recognized by encodings.aliases'''\n        return unicode(data, encoding, errors)\n\n    @property\n    def declared_html_encoding(self):\n        if not self.is_html:\n            return None\n        return self.detector.declared_encoding\n\n    def find_codec(self, charset):\n        value = (self._codec(self.CHARSET_ALIASES.get(charset, charset))\n               or (charset and self._codec(charset.replace(\"-\", \"\")))\n               or (charset and self._codec(charset.replace(\"-\", \"_\")))\n               or (charset and charset.lower())\n               or charset\n                )\n        if value:\n            return value.lower()\n        return None\n\n    def _codec(self, charset):\n        if not charset:\n            return charset\n        codec = None\n        try:\n            codecs.lookup(charset)\n            codec = charset\n        except (LookupError, ValueError):\n            pass\n        return codec\n\n\n    # A partial mapping of ISO-Latin-1 to HTML entities/XML numeric entities.\n    MS_CHARS = {b'\\x80': ('euro', '20AC'),\n                b'\\x81': ' ',\n                
b'\\x82': ('sbquo', '201A'),\n                b'\\x83': ('fnof', '192'),\n                b'\\x84': ('bdquo', '201E'),\n                b'\\x85': ('hellip', '2026'),\n                b'\\x86': ('dagger', '2020'),\n                b'\\x87': ('Dagger', '2021'),\n                b'\\x88': ('circ', '2C6'),\n                b'\\x89': ('permil', '2030'),\n                b'\\x8A': ('Scaron', '160'),\n                b'\\x8B': ('lsaquo', '2039'),\n                b'\\x8C': ('OElig', '152'),\n                b'\\x8D': '?',\n                b'\\x8E': ('#x17D', '17D'),\n                b'\\x8F': '?',\n                b'\\x90': '?',\n                b'\\x91': ('lsquo', '2018'),\n                b'\\x92': ('rsquo', '2019'),\n                b'\\x93': ('ldquo', '201C'),\n                b'\\x94': ('rdquo', '201D'),\n                b'\\x95': ('bull', '2022'),\n                b'\\x96': ('ndash', '2013'),\n                b'\\x97': ('mdash', '2014'),\n                b'\\x98': ('tilde', '2DC'),\n                b'\\x99': ('trade', '2122'),\n                b'\\x9a': ('scaron', '161'),\n                b'\\x9b': ('rsaquo', '203A'),\n                b'\\x9c': ('oelig', '153'),\n                b'\\x9d': '?',\n                b'\\x9e': ('#x17E', '17E'),\n                b'\\x9f': ('Yuml', ''),}\n\n    # A parochial partial mapping of ISO-Latin-1 to ASCII. 
Contains\n    # horrors like stripping diacritical marks to turn á into a, but also\n    # contains non-horrors like turning “ into \".\n    MS_CHARS_TO_ASCII = {\n        b'\\x80' : 'EUR',\n        b'\\x81' : ' ',\n        b'\\x82' : ',',\n        b'\\x83' : 'f',\n        b'\\x84' : ',,',\n        b'\\x85' : '...',\n        b'\\x86' : '+',\n        b'\\x87' : '++',\n        b'\\x88' : '^',\n        b'\\x89' : '%',\n        b'\\x8a' : 'S',\n        b'\\x8b' : '<',\n        b'\\x8c' : 'OE',\n        b'\\x8d' : '?',\n        b'\\x8e' : 'Z',\n        b'\\x8f' : '?',\n        b'\\x90' : '?',\n        b'\\x91' : \"'\",\n        b'\\x92' : \"'\",\n        b'\\x93' : '\"',\n        b'\\x94' : '\"',\n        b'\\x95' : '*',\n        b'\\x96' : '-',\n        b'\\x97' : '--',\n        b'\\x98' : '~',\n        b'\\x99' : '(TM)',\n        b'\\x9a' : 's',\n        b'\\x9b' : '>',\n        b'\\x9c' : 'oe',\n        b'\\x9d' : '?',\n        b'\\x9e' : 'z',\n        b'\\x9f' : 'Y',\n        b'\\xa0' : ' ',\n        b'\\xa1' : '!',\n        b'\\xa2' : 'c',\n        b'\\xa3' : 'GBP',\n        b'\\xa4' : '$', #This approximation is especially parochial--this is the\n                       #generic currency symbol.\n        b'\\xa5' : 'YEN',\n        b'\\xa6' : '|',\n        b'\\xa7' : 'S',\n        b'\\xa8' : '..',\n        b'\\xa9' : '',\n        b'\\xaa' : '(th)',\n        b'\\xab' : '<<',\n        b'\\xac' : '!',\n        b'\\xad' : ' ',\n        b'\\xae' : '(R)',\n        b'\\xaf' : '-',\n        b'\\xb0' : 'o',\n        b'\\xb1' : '+-',\n        b'\\xb2' : '2',\n        b'\\xb3' : '3',\n        b'\\xb4' : (\"'\", 'acute'),\n        b'\\xb5' : 'u',\n        b'\\xb6' : 'P',\n        b'\\xb7' : '*',\n        b'\\xb8' : ',',\n        b'\\xb9' : '1',\n        b'\\xba' : '(th)',\n        b'\\xbb' : '>>',\n        b'\\xbc' : '1/4',\n        b'\\xbd' : '1/2',\n        b'\\xbe' : '3/4',\n        b'\\xbf' : '?',\n        b'\\xc0' : 'A',\n        b'\\xc1' : 'A',\n        b'\\xc2' : 'A',\n  
      b'\\xc3' : 'A',\n        b'\\xc4' : 'A',\n        b'\\xc5' : 'A',\n        b'\\xc6' : 'AE',\n        b'\\xc7' : 'C',\n        b'\\xc8' : 'E',\n        b'\\xc9' : 'E',\n        b'\\xca' : 'E',\n        b'\\xcb' : 'E',\n        b'\\xcc' : 'I',\n        b'\\xcd' : 'I',\n        b'\\xce' : 'I',\n        b'\\xcf' : 'I',\n        b'\\xd0' : 'D',\n        b'\\xd1' : 'N',\n        b'\\xd2' : 'O',\n        b'\\xd3' : 'O',\n        b'\\xd4' : 'O',\n        b'\\xd5' : 'O',\n        b'\\xd6' : 'O',\n        b'\\xd7' : '*',\n        b'\\xd8' : 'O',\n        b'\\xd9' : 'U',\n        b'\\xda' : 'U',\n        b'\\xdb' : 'U',\n        b'\\xdc' : 'U',\n        b'\\xdd' : 'Y',\n        b'\\xde' : 'b',\n        b'\\xdf' : 'B',\n        b'\\xe0' : 'a',\n        b'\\xe1' : 'a',\n        b'\\xe2' : 'a',\n        b'\\xe3' : 'a',\n        b'\\xe4' : 'a',\n        b'\\xe5' : 'a',\n        b'\\xe6' : 'ae',\n        b'\\xe7' : 'c',\n        b'\\xe8' : 'e',\n        b'\\xe9' : 'e',\n        b'\\xea' : 'e',\n        b'\\xeb' : 'e',\n        b'\\xec' : 'i',\n        b'\\xed' : 'i',\n        b'\\xee' : 'i',\n        b'\\xef' : 'i',\n        b'\\xf0' : 'o',\n        b'\\xf1' : 'n',\n        b'\\xf2' : 'o',\n        b'\\xf3' : 'o',\n        b'\\xf4' : 'o',\n        b'\\xf5' : 'o',\n        b'\\xf6' : 'o',\n        b'\\xf7' : '/',\n        b'\\xf8' : 'o',\n        b'\\xf9' : 'u',\n        b'\\xfa' : 'u',\n        b'\\xfb' : 'u',\n        b'\\xfc' : 'u',\n        b'\\xfd' : 'y',\n        b'\\xfe' : 'b',\n        b'\\xff' : 'y',\n        }\n\n    # A map used when removing rogue Windows-1252/ISO-8859-1\n    # characters in otherwise UTF-8 documents.\n    #\n    # Note that \\x81, \\x8d, \\x8f, \\x90, and \\x9d are undefined in\n    # Windows-1252.\n    WINDOWS_1252_TO_UTF8 = {\n        0x80 : b'\\xe2\\x82\\xac', # €\n        0x82 : b'\\xe2\\x80\\x9a', # ‚\n        0x83 : b'\\xc6\\x92',     # ƒ\n        0x84 : b'\\xe2\\x80\\x9e', # „\n        0x85 : b'\\xe2\\x80\\xa6', # …\n        0x86 : 
b'\\xe2\\x80\\xa0', # †\n        0x87 : b'\\xe2\\x80\\xa1', # ‡\n        0x88 : b'\\xcb\\x86',     # ˆ\n        0x89 : b'\\xe2\\x80\\xb0', # ‰\n        0x8a : b'\\xc5\\xa0',     # Š\n        0x8b : b'\\xe2\\x80\\xb9', # ‹\n        0x8c : b'\\xc5\\x92',     # Œ\n        0x8e : b'\\xc5\\xbd',     # Ž\n        0x91 : b'\\xe2\\x80\\x98', # ‘\n        0x92 : b'\\xe2\\x80\\x99', # ’\n        0x93 : b'\\xe2\\x80\\x9c', # “\n        0x94 : b'\\xe2\\x80\\x9d', # ”\n        0x95 : b'\\xe2\\x80\\xa2', # •\n        0x96 : b'\\xe2\\x80\\x93', # –\n        0x97 : b'\\xe2\\x80\\x94', # —\n        0x98 : b'\\xcb\\x9c',     # ˜\n        0x99 : b'\\xe2\\x84\\xa2', # ™\n        0x9a : b'\\xc5\\xa1',     # š\n        0x9b : b'\\xe2\\x80\\xba', # ›\n        0x9c : b'\\xc5\\x93',     # œ\n        0x9e : b'\\xc5\\xbe',     # ž\n        0x9f : b'\\xc5\\xb8',     # Ÿ\n        0xa0 : b'\\xc2\\xa0',     #  \n        0xa1 : b'\\xc2\\xa1',     # ¡\n        0xa2 : b'\\xc2\\xa2',     # ¢\n        0xa3 : b'\\xc2\\xa3',     # £\n        0xa4 : b'\\xc2\\xa4',     # ¤\n        0xa5 : b'\\xc2\\xa5',     # ¥\n        0xa6 : b'\\xc2\\xa6',     # ¦\n        0xa7 : b'\\xc2\\xa7',     # §\n        0xa8 : b'\\xc2\\xa8',     # ¨\n        0xa9 : b'\\xc2\\xa9',     # ©\n        0xaa : b'\\xc2\\xaa',     # ª\n        0xab : b'\\xc2\\xab',     # «\n        0xac : b'\\xc2\\xac',     # ¬\n        0xad : b'\\xc2\\xad',     # ­\n        0xae : b'\\xc2\\xae',     # ®\n        0xaf : b'\\xc2\\xaf',     # ¯\n        0xb0 : b'\\xc2\\xb0',     # °\n        0xb1 : b'\\xc2\\xb1',     # ±\n        0xb2 : b'\\xc2\\xb2',     # ²\n        0xb3 : b'\\xc2\\xb3',     # ³\n        0xb4 : b'\\xc2\\xb4',     # ´\n        0xb5 : b'\\xc2\\xb5',     # µ\n        0xb6 : b'\\xc2\\xb6',     # ¶\n        0xb7 : b'\\xc2\\xb7',     # ·\n        0xb8 : b'\\xc2\\xb8',     # ¸\n        0xb9 : b'\\xc2\\xb9',     # ¹\n        0xba : b'\\xc2\\xba',     # º\n        0xbb : b'\\xc2\\xbb',     # »\n        0xbc : b'\\xc2\\xbc',     # ¼\n        0xbd 
: b'\\xc2\\xbd',     # ½\n        0xbe : b'\\xc2\\xbe',     # ¾\n        0xbf : b'\\xc2\\xbf',     # ¿\n        0xc0 : b'\\xc3\\x80',     # À\n        0xc1 : b'\\xc3\\x81',     # Á\n        0xc2 : b'\\xc3\\x82',     # Â\n        0xc3 : b'\\xc3\\x83',     # Ã\n        0xc4 : b'\\xc3\\x84',     # Ä\n        0xc5 : b'\\xc3\\x85',     # Å\n        0xc6 : b'\\xc3\\x86',     # Æ\n        0xc7 : b'\\xc3\\x87',     # Ç\n        0xc8 : b'\\xc3\\x88',     # È\n        0xc9 : b'\\xc3\\x89',     # É\n        0xca : b'\\xc3\\x8a',     # Ê\n        0xcb : b'\\xc3\\x8b',     # Ë\n        0xcc : b'\\xc3\\x8c',     # Ì\n        0xcd : b'\\xc3\\x8d',     # Í\n        0xce : b'\\xc3\\x8e',     # Î\n        0xcf : b'\\xc3\\x8f',     # Ï\n        0xd0 : b'\\xc3\\x90',     # Ð\n        0xd1 : b'\\xc3\\x91',     # Ñ\n        0xd2 : b'\\xc3\\x92',     # Ò\n        0xd3 : b'\\xc3\\x93',     # Ó\n        0xd4 : b'\\xc3\\x94',     # Ô\n        0xd5 : b'\\xc3\\x95',     # Õ\n        0xd6 : b'\\xc3\\x96',     # Ö\n        0xd7 : b'\\xc3\\x97',     # ×\n        0xd8 : b'\\xc3\\x98',     # Ø\n        0xd9 : b'\\xc3\\x99',     # Ù\n        0xda : b'\\xc3\\x9a',     # Ú\n        0xdb : b'\\xc3\\x9b',     # Û\n        0xdc : b'\\xc3\\x9c',     # Ü\n        0xdd : b'\\xc3\\x9d',     # Ý\n        0xde : b'\\xc3\\x9e',     # Þ\n        0xdf : b'\\xc3\\x9f',     # ß\n        0xe0 : b'\\xc3\\xa0',     # à\n        0xe1 : b'\\xc3\\xa1',     # á\n        0xe2 : b'\\xc3\\xa2',     # â\n        0xe3 : b'\\xc3\\xa3',     # ã\n        0xe4 : b'\\xc3\\xa4',     # ä\n        0xe5 : b'\\xc3\\xa5',     # å\n        0xe6 : b'\\xc3\\xa6',     # æ\n        0xe7 : b'\\xc3\\xa7',     # ç\n        0xe8 : b'\\xc3\\xa8',     # è\n        0xe9 : b'\\xc3\\xa9',     # é\n        0xea : b'\\xc3\\xaa',     # ê\n        0xeb : b'\\xc3\\xab',     # ë\n        0xec : b'\\xc3\\xac',     # ì\n        0xed : b'\\xc3\\xad',     # í\n        0xee : b'\\xc3\\xae',     # î\n        0xef : b'\\xc3\\xaf',     # ï\n        0xf0 : 
b'\\xc3\\xb0',     # ð\n        0xf1 : b'\\xc3\\xb1',     # ñ\n        0xf2 : b'\\xc3\\xb2',     # ò\n        0xf3 : b'\\xc3\\xb3',     # ó\n        0xf4 : b'\\xc3\\xb4',     # ô\n        0xf5 : b'\\xc3\\xb5',     # õ\n        0xf6 : b'\\xc3\\xb6',     # ö\n        0xf7 : b'\\xc3\\xb7',     # ÷\n        0xf8 : b'\\xc3\\xb8',     # ø\n        0xf9 : b'\\xc3\\xb9',     # ù\n        0xfa : b'\\xc3\\xba',     # ú\n        0xfb : b'\\xc3\\xbb',     # û\n        0xfc : b'\\xc3\\xbc',     # ü\n        0xfd : b'\\xc3\\xbd',     # ý\n        0xfe : b'\\xc3\\xbe',     # þ\n        }\n\n    MULTIBYTE_MARKERS_AND_SIZES = [\n        (0xc2, 0xdf, 2), # 2-byte characters start with a byte C2-DF\n        (0xe0, 0xef, 3), # 3-byte characters start with E0-EF\n        (0xf0, 0xf4, 4), # 4-byte characters start with F0-F4\n        ]\n\n    FIRST_MULTIBYTE_MARKER = MULTIBYTE_MARKERS_AND_SIZES[0][0]\n    LAST_MULTIBYTE_MARKER = MULTIBYTE_MARKERS_AND_SIZES[-1][1]\n\n    @classmethod\n    def detwingle(cls, in_bytes, main_encoding=\"utf8\",\n                  embedded_encoding=\"windows-1252\"):\n        \"\"\"Fix characters from one encoding embedded in some other encoding.\n\n        Currently the only situation supported is Windows-1252 (or its\n        subset ISO-8859-1), embedded in UTF-8.\n\n        The input must be a bytestring. 
If you've already converted\n        the document to Unicode, you're too late.\n\n        The output is a bytestring in which `embedded_encoding`\n        characters have been converted to their `main_encoding`\n        equivalents.\n        \"\"\"\n        if embedded_encoding.replace('_', '-').lower() not in (\n            'windows-1252', 'windows_1252'):\n            raise NotImplementedError(\n                \"Windows-1252 and ISO-8859-1 are the only currently supported \"\n                \"embedded encodings.\")\n\n        if main_encoding.lower() not in ('utf8', 'utf-8'):\n            raise NotImplementedError(\n                \"UTF-8 is the only currently supported main encoding.\")\n\n        byte_chunks = []\n\n        chunk_start = 0\n        pos = 0\n        while pos < len(in_bytes):\n            byte = in_bytes[pos]\n            if not isinstance(byte, int):\n                # Python 2.x\n                byte = ord(byte)\n            if (byte >= cls.FIRST_MULTIBYTE_MARKER\n                and byte <= cls.LAST_MULTIBYTE_MARKER):\n                # This is the start of a UTF-8 multibyte character. 
Skip\n                # to the end.\n                for start, end, size in cls.MULTIBYTE_MARKERS_AND_SIZES:\n                    if byte >= start and byte <= end:\n                        pos += size\n                        break\n            elif byte >= 0x80 and byte in cls.WINDOWS_1252_TO_UTF8:\n                # We found a Windows-1252 character!\n                # Save the string up to this point as a chunk.\n                byte_chunks.append(in_bytes[chunk_start:pos])\n\n                # Now translate the Windows-1252 character into UTF-8\n                # and add it as another, one-byte chunk.\n                byte_chunks.append(cls.WINDOWS_1252_TO_UTF8[byte])\n                pos += 1\n                chunk_start = pos\n            else:\n                # Go on to the next character.\n                pos += 1\n        if chunk_start == 0:\n            # The string is unchanged.\n            return in_bytes\n        else:\n            # Store the final chunk.\n            byte_chunks.append(in_bytes[chunk_start:])\n        return b''.join(byte_chunks)\n\n"
  },
  {
    "path": "example/parallax_svg_tools/bs4/diagnose.py",
    "content": "\"\"\"Diagnostic functions, mainly for use when doing tech support.\"\"\"\n\n# Use of this source code is governed by the MIT license.\n__license__ = \"MIT\"\n\nimport cProfile\nfrom StringIO import StringIO\nfrom HTMLParser import HTMLParser\nimport bs4\nfrom bs4 import BeautifulSoup, __version__\nfrom bs4.builder import builder_registry\n\nimport os\nimport pstats\nimport random\nimport tempfile\nimport time\nimport traceback\nimport sys\n\ndef diagnose(data):\n    \"\"\"Diagnostic suite for isolating common problems.\"\"\"\n    print \"Diagnostic running on Beautiful Soup %s\" % __version__\n    print \"Python version %s\" % sys.version\n\n    basic_parsers = [\"html.parser\", \"html5lib\", \"lxml\"]\n    # Iterate over a copy, since missing parsers are removed from the list.\n    for name in list(basic_parsers):\n        for builder in builder_registry.builders:\n            if name in builder.features:\n                break\n        else:\n            basic_parsers.remove(name)\n            print (\n                \"I noticed that %s is not installed. Installing it may help.\" %\n                name)\n\n    if 'lxml' in basic_parsers:\n        basic_parsers.append([\"lxml\", \"xml\"])\n        try:\n            from lxml import etree\n            print \"Found lxml version %s\" % \".\".join(map(str, etree.LXML_VERSION))\n        except ImportError, e:\n            print (\n                \"lxml is not installed or couldn't be imported.\")\n\n\n    if 'html5lib' in basic_parsers:\n        try:\n            import html5lib\n            print \"Found html5lib version %s\" % html5lib.__version__\n        except ImportError, e:\n            print (\n                \"html5lib is not installed or couldn't be imported.\")\n\n    if hasattr(data, 'read'):\n        data = data.read()\n    elif os.path.exists(data):\n        print '\"%s\" looks like a filename. Reading data from the file.' 
% data\n        with open(data) as fp:\n            data = fp.read()\n    elif data.startswith(\"http:\") or data.startswith(\"https:\"):\n        print '\"%s\" looks like a URL. Beautiful Soup is not an HTTP client.' % data\n        print \"You need to use some other library to get the document behind the URL, and feed that document to Beautiful Soup.\"\n        return\n    print\n\n    for parser in basic_parsers:\n        print \"Trying to parse your markup with %s\" % parser\n        success = False\n        try:\n            soup = BeautifulSoup(data, parser)\n            success = True\n        except Exception, e:\n            print \"%s could not parse the markup.\" % parser\n            traceback.print_exc()\n        if success:\n            print \"Here's what %s did with the markup:\" % parser\n            print soup.prettify()\n\n        print \"-\" * 80\n\ndef lxml_trace(data, html=True, **kwargs):\n    \"\"\"Print out the lxml events that occur during parsing.\n\n    This lets you see how lxml parses a document when no Beautiful\n    Soup code is running.\n    \"\"\"\n    from lxml import etree\n    for event, element in etree.iterparse(StringIO(data), html=html, **kwargs):\n        print(\"%s, %4s, %s\" % (event, element.tag, element.text))\n\nclass AnnouncingParser(HTMLParser):\n    \"\"\"Announces HTMLParser parse events, without doing anything else.\"\"\"\n\n    def _p(self, s):\n        print(s)\n\n    def handle_starttag(self, name, attrs):\n        self._p(\"%s START\" % name)\n\n    def handle_endtag(self, name):\n        self._p(\"%s END\" % name)\n\n    def handle_data(self, data):\n        self._p(\"%s DATA\" % data)\n\n    def handle_charref(self, name):\n        self._p(\"%s CHARREF\" % name)\n\n    def handle_entityref(self, name):\n        self._p(\"%s ENTITYREF\" % name)\n\n    def handle_comment(self, data):\n        self._p(\"%s COMMENT\" % data)\n\n    def handle_decl(self, data):\n        self._p(\"%s DECL\" % data)\n\n    def 
unknown_decl(self, data):\n        self._p(\"%s UNKNOWN-DECL\" % data)\n\n    def handle_pi(self, data):\n        self._p(\"%s PI\" % data)\n\ndef htmlparser_trace(data):\n    \"\"\"Print out the HTMLParser events that occur during parsing.\n\n    This lets you see how HTMLParser parses a document when no\n    Beautiful Soup code is running.\n    \"\"\"\n    parser = AnnouncingParser()\n    parser.feed(data)\n\n_vowels = \"aeiou\"\n_consonants = \"bcdfghjklmnpqrstvwxyz\"\n\ndef rword(length=5):\n    \"Generate a random word-like string.\"\n    s = ''\n    for i in range(length):\n        if i % 2 == 0:\n            t = _consonants\n        else:\n            t = _vowels\n        s += random.choice(t)\n    return s\n\ndef rsentence(length=4):\n    \"Generate a random sentence-like string.\"\n    return \" \".join(rword(random.randint(4,9)) for i in range(length))\n        \ndef rdoc(num_elements=1000):\n    \"\"\"Randomly generate an invalid HTML document.\"\"\"\n    tag_names = ['p', 'div', 'span', 'i', 'b', 'script', 'table']\n    elements = []\n    for i in range(num_elements):\n        choice = random.randint(0,3)\n        if choice == 0:\n            # New tag.\n            tag_name = random.choice(tag_names)\n            elements.append(\"<%s>\" % tag_name)\n        elif choice == 1:\n            elements.append(rsentence(random.randint(1,4)))\n        elif choice == 2:\n            # Close a tag.\n            tag_name = random.choice(tag_names)\n            elements.append(\"</%s>\" % tag_name)\n    return \"<html>\" + \"\\n\".join(elements) + \"</html>\"\n\ndef benchmark_parsers(num_elements=100000):\n    \"\"\"Very basic head-to-head performance benchmark.\"\"\"\n    print \"Comparative parser benchmark on Beautiful Soup %s\" % __version__\n    data = rdoc(num_elements)\n    print \"Generated a large invalid HTML document (%d bytes).\" % len(data)\n    \n    for parser in [\"lxml\", [\"lxml\", \"html\"], \"html5lib\", \"html.parser\"]:\n        success = 
False\n        try:\n            a = time.time()\n            soup = BeautifulSoup(data, parser)\n            b = time.time()\n            success = True\n        except Exception, e:\n            print \"%s could not parse the markup.\" % parser\n            traceback.print_exc()\n        if success:\n            print \"BS4+%s parsed the markup in %.2fs.\" % (parser, b-a)\n\n    from lxml import etree\n    a = time.time()\n    etree.HTML(data)\n    b = time.time()\n    print \"Raw lxml parsed the markup in %.2fs.\" % (b-a)\n\n    import html5lib\n    parser = html5lib.HTMLParser()\n    a = time.time()\n    parser.parse(data)\n    b = time.time()\n    print \"Raw html5lib parsed the markup in %.2fs.\" % (b-a)\n\ndef profile(num_elements=100000, parser=\"lxml\"):\n\n    filehandle = tempfile.NamedTemporaryFile()\n    filename = filehandle.name\n\n    data = rdoc(num_elements)\n    vars = dict(bs4=bs4, data=data, parser=parser)\n    cProfile.runctx('bs4.BeautifulSoup(data, parser)' , vars, vars, filename)\n\n    stats = pstats.Stats(filename)\n    # stats.strip_dirs()\n    stats.sort_stats(\"cumulative\")\n    stats.print_stats('_html5lib|bs4', 50)\n\nif __name__ == '__main__':\n    diagnose(sys.stdin.read())\n"
  },
  {
    "path": "example/parallax_svg_tools/bs4/element.py",
    "content": "# Use of this source code is governed by the MIT license.\n__license__ = \"MIT\"\n\nimport collections\nimport re\nimport shlex\nimport sys\nimport warnings\nfrom bs4.dammit import EntitySubstitution\n\nDEFAULT_OUTPUT_ENCODING = \"utf-8\"\nPY3K = (sys.version_info[0] > 2)\n\nwhitespace_re = re.compile(\"\\s+\")\n\ndef _alias(attr):\n    \"\"\"Alias one attribute name to another for backward compatibility\"\"\"\n    @property\n    def alias(self):\n        return getattr(self, attr)\n\n    @alias.setter\n    def alias(self, value):\n        return setattr(self, attr, value)\n    return alias\n\n\nclass NamespacedAttribute(unicode):\n\n    def __new__(cls, prefix, name, namespace=None):\n        if name is None:\n            obj = unicode.__new__(cls, prefix)\n        elif prefix is None:\n            # Not really namespaced.\n            obj = unicode.__new__(cls, name)\n        else:\n            obj = unicode.__new__(cls, prefix + \":\" + name)\n        obj.prefix = prefix\n        obj.name = name\n        obj.namespace = namespace\n        return obj\n\nclass AttributeValueWithCharsetSubstitution(unicode):\n    \"\"\"A stand-in object for a character encoding specified in HTML.\"\"\"\n\nclass CharsetMetaAttributeValue(AttributeValueWithCharsetSubstitution):\n    \"\"\"A generic stand-in for the value of a meta tag's 'charset' attribute.\n\n    When Beautiful Soup parses the markup '<meta charset=\"utf8\">', the\n    value of the 'charset' attribute will be one of these objects.\n    \"\"\"\n\n    def __new__(cls, original_value):\n        obj = unicode.__new__(cls, original_value)\n        obj.original_value = original_value\n        return obj\n\n    def encode(self, encoding):\n        return encoding\n\n\nclass ContentMetaAttributeValue(AttributeValueWithCharsetSubstitution):\n    \"\"\"A generic stand-in for the value of a meta tag's 'content' attribute.\n\n    When Beautiful Soup parses the markup:\n     <meta 
http-equiv=\"content-type\" content=\"text/html; charset=utf8\">\n\n    The value of the 'content' attribute will be one of these objects.\n    \"\"\"\n\n    CHARSET_RE = re.compile(\"((^|;)\\s*charset=)([^;]*)\", re.M)\n\n    def __new__(cls, original_value):\n        match = cls.CHARSET_RE.search(original_value)\n        if match is None:\n            # No substitution necessary.\n            return unicode.__new__(unicode, original_value)\n\n        obj = unicode.__new__(cls, original_value)\n        obj.original_value = original_value\n        return obj\n\n    def encode(self, encoding):\n        def rewrite(match):\n            return match.group(1) + encoding\n        return self.CHARSET_RE.sub(rewrite, self.original_value)\n\nclass HTMLAwareEntitySubstitution(EntitySubstitution):\n\n    \"\"\"Entity substitution rules that are aware of some HTML quirks.\n\n    Specifically, the contents of <script> and <style> tags should not\n    undergo entity substitution.\n\n    Incoming NavigableString objects are checked to see if they're the\n    direct children of a <script> or <style> tag.\n    \"\"\"\n\n    cdata_containing_tags = set([\"script\", \"style\"])\n\n    preformatted_tags = set([\"pre\"])\n\n    preserve_whitespace_tags = set(['pre', 'textarea'])\n\n    @classmethod\n    def _substitute_if_appropriate(cls, ns, f):\n        if (isinstance(ns, NavigableString)\n            and ns.parent is not None\n            and ns.parent.name in cls.cdata_containing_tags):\n            # Do nothing.\n            return ns\n        # Substitute.\n        return f(ns)\n\n    @classmethod\n    def substitute_html(cls, ns):\n        return cls._substitute_if_appropriate(\n            ns, EntitySubstitution.substitute_html)\n\n    @classmethod\n    def substitute_xml(cls, ns):\n        return cls._substitute_if_appropriate(\n            ns, EntitySubstitution.substitute_xml)\n\nclass PageElement(object):\n    \"\"\"Contains the navigational information for some part of 
 the page\n    (either a tag or a piece of text)\"\"\"\n\n    # There are four possible values for the \"formatter\" argument passed in\n    # to methods like encode() and prettify():\n    #\n    # \"html\" - All Unicode characters with corresponding HTML entities\n    #   are converted to those entities on output.\n    # \"minimal\" - Bare ampersands and angle brackets are converted to\n    #   XML entities: &amp; &lt; &gt;\n    # None - The null formatter. Unicode characters are never\n    #   converted to entities.  This is not recommended, but it's\n    #   faster than \"minimal\".\n    # A function - This function will be called on every string that\n    #  needs to undergo entity substitution.\n    #\n\n    # In an HTML document, the default \"html\" and \"minimal\" functions\n    # will leave the contents of <script> and <style> tags alone. For\n    # an XML document, all tags will be given the same treatment.\n\n    HTML_FORMATTERS = {\n        \"html\" : HTMLAwareEntitySubstitution.substitute_html,\n        \"minimal\" : HTMLAwareEntitySubstitution.substitute_xml,\n        None : None\n        }\n\n    XML_FORMATTERS = {\n        \"html\" : EntitySubstitution.substitute_html,\n        \"minimal\" : EntitySubstitution.substitute_xml,\n        None : None\n        }\n\n    def format_string(self, s, formatter='minimal'):\n        \"\"\"Format the given string using the given formatter.\"\"\"\n        if not callable(formatter):\n            formatter = self._formatter_for_name(formatter)\n        if formatter is None:\n            output = s\n        else:\n            output = formatter(s)\n        return output\n\n    @property\n    def _is_xml(self):\n        \"\"\"Is this element part of an XML tree or an HTML tree?\n\n        This is used when mapping a formatter name (\"minimal\") to an\n        appropriate function (one that performs entity-substitution on\n        the contents of <script> and <style> tags, or not). 
It can be\n        inefficient, but it should be called very rarely.\n        \"\"\"\n        if self.known_xml is not None:\n            # Most of the time we will have determined this when the\n            # document is parsed.\n            return self.known_xml\n\n        # Otherwise, it's likely that this element was created by\n        # direct invocation of the constructor from within the user's\n        # Python code.\n        if self.parent is None:\n            # This is the top-level object. It should have .known_xml set\n            # from tree creation. If not, take a guess--BS is usually\n            # used on HTML markup.\n            return getattr(self, 'is_xml', False)\n        return self.parent._is_xml\n\n    def _formatter_for_name(self, name):\n        \"Look up a formatter function based on its name and the tree.\"\n        if self._is_xml:\n            return self.XML_FORMATTERS.get(\n                name, EntitySubstitution.substitute_xml)\n        else:\n            return self.HTML_FORMATTERS.get(\n                name, HTMLAwareEntitySubstitution.substitute_xml)\n\n    def setup(self, parent=None, previous_element=None, next_element=None,\n              previous_sibling=None, next_sibling=None):\n        \"\"\"Sets up the initial relations between this element and\n        other elements.\"\"\"\n        self.parent = parent\n\n        self.previous_element = previous_element\n        if previous_element is not None:\n            self.previous_element.next_element = self\n\n        self.next_element = next_element\n        if self.next_element:\n            self.next_element.previous_element = self\n\n        self.next_sibling = next_sibling\n        if self.next_sibling:\n            self.next_sibling.previous_sibling = self\n\n        if (not previous_sibling\n            and self.parent is not None and self.parent.contents):\n            previous_sibling = self.parent.contents[-1]\n\n        self.previous_sibling = previous_sibling\n    
    if previous_sibling:\n            self.previous_sibling.next_sibling = self\n\n    nextSibling = _alias(\"next_sibling\")  # BS3\n    previousSibling = _alias(\"previous_sibling\")  # BS3\n\n    def replace_with(self, replace_with):\n        if not self.parent:\n            raise ValueError(\n                \"Cannot replace one element with another when the \"\n                \"element to be replaced is not part of a tree.\")\n        if replace_with is self:\n            return\n        if replace_with is self.parent:\n            raise ValueError(\"Cannot replace a Tag with its parent.\")\n        old_parent = self.parent\n        my_index = self.parent.index(self)\n        self.extract()\n        old_parent.insert(my_index, replace_with)\n        return self\n    replaceWith = replace_with  # BS3\n\n    def unwrap(self):\n        my_parent = self.parent\n        if not self.parent:\n            raise ValueError(\n                \"Cannot replace an element with its contents when that \"\n                \"element is not part of a tree.\")\n        my_index = self.parent.index(self)\n        self.extract()\n        for child in reversed(self.contents[:]):\n            my_parent.insert(my_index, child)\n        return self\n    replace_with_children = unwrap\n    replaceWithChildren = unwrap  # BS3\n\n    def wrap(self, wrap_inside):\n        me = self.replace_with(wrap_inside)\n        wrap_inside.append(me)\n        return wrap_inside\n\n    def extract(self):\n        \"\"\"Destructively rips this element out of the tree.\"\"\"\n        if self.parent is not None:\n            del self.parent.contents[self.parent.index(self)]\n\n        #Find the two elements that would be next to each other if\n        #this element (and any children) hadn't been parsed. 
Connect\n        #the two.\n        last_child = self._last_descendant()\n        next_element = last_child.next_element\n\n        if (self.previous_element is not None and\n            self.previous_element is not next_element):\n            self.previous_element.next_element = next_element\n        if next_element is not None and next_element is not self.previous_element:\n            next_element.previous_element = self.previous_element\n        self.previous_element = None\n        last_child.next_element = None\n\n        self.parent = None\n        if (self.previous_sibling is not None\n            and self.previous_sibling is not self.next_sibling):\n            self.previous_sibling.next_sibling = self.next_sibling\n        if (self.next_sibling is not None\n            and self.next_sibling is not self.previous_sibling):\n            self.next_sibling.previous_sibling = self.previous_sibling\n        self.previous_sibling = self.next_sibling = None\n        return self\n\n    def _last_descendant(self, is_initialized=True, accept_self=True):\n        \"Finds the last element beneath this object to be parsed.\"\n        if is_initialized and self.next_sibling:\n            last_child = self.next_sibling.previous_element\n        else:\n            last_child = self\n            while isinstance(last_child, Tag) and last_child.contents:\n                last_child = last_child.contents[-1]\n        if not accept_self and last_child is self:\n            last_child = None\n        return last_child\n    # BS3: Not part of the API!\n    _lastRecursiveChild = _last_descendant\n\n    def insert(self, position, new_child):\n        if new_child is None:\n            raise ValueError(\"Cannot insert None into a tag.\")\n        if new_child is self:\n            raise ValueError(\"Cannot insert a tag into itself.\")\n        if (isinstance(new_child, basestring)\n            and not isinstance(new_child, NavigableString)):\n            new_child = 
NavigableString(new_child)\n\n        position = min(position, len(self.contents))\n        if hasattr(new_child, 'parent') and new_child.parent is not None:\n            # We're 'inserting' an element that's already one\n            # of this object's children.\n            if new_child.parent is self:\n                current_index = self.index(new_child)\n                if current_index < position:\n                    # We're moving this element further down the list\n                    # of this object's children. That means that when\n                    # we extract this element, our target index will\n                    # jump down one.\n                    position -= 1\n            new_child.extract()\n\n        new_child.parent = self\n        previous_child = None\n        if position == 0:\n            new_child.previous_sibling = None\n            new_child.previous_element = self\n        else:\n            previous_child = self.contents[position - 1]\n            new_child.previous_sibling = previous_child\n            new_child.previous_sibling.next_sibling = new_child\n            new_child.previous_element = previous_child._last_descendant(False)\n        if new_child.previous_element is not None:\n            new_child.previous_element.next_element = new_child\n\n        new_childs_last_element = new_child._last_descendant(False)\n\n        if position >= len(self.contents):\n            new_child.next_sibling = None\n\n            parent = self\n            parents_next_sibling = None\n            while parents_next_sibling is None and parent is not None:\n                parents_next_sibling = parent.next_sibling\n                parent = parent.parent\n                if parents_next_sibling is not None:\n                    # We found the element that comes next in the document.\n                    break\n            if parents_next_sibling is not None:\n                new_childs_last_element.next_element = parents_next_sibling\n        
    else:\n                # The last element of this tag is the last element in\n                # the document.\n                new_childs_last_element.next_element = None\n        else:\n            next_child = self.contents[position]\n            new_child.next_sibling = next_child\n            if new_child.next_sibling is not None:\n                new_child.next_sibling.previous_sibling = new_child\n            new_childs_last_element.next_element = next_child\n\n        if new_childs_last_element.next_element is not None:\n            new_childs_last_element.next_element.previous_element = new_childs_last_element\n        self.contents.insert(position, new_child)\n\n    def append(self, tag):\n        \"\"\"Appends the given tag to the contents of this tag.\"\"\"\n        self.insert(len(self.contents), tag)\n\n    def insert_before(self, predecessor):\n        \"\"\"Makes the given element the immediate predecessor of this one.\n\n        The two elements will have the same parent, and the given element\n        will be immediately before this one.\n        \"\"\"\n        if self is predecessor:\n            raise ValueError(\"Can't insert an element before itself.\")\n        parent = self.parent\n        if parent is None:\n            raise ValueError(\n                \"Element has no parent, so 'before' has no meaning.\")\n        # Extract first so that the index won't be screwed up if they\n        # are siblings.\n        if isinstance(predecessor, PageElement):\n            predecessor.extract()\n        index = parent.index(self)\n        parent.insert(index, predecessor)\n\n    def insert_after(self, successor):\n        \"\"\"Makes the given element the immediate successor of this one.\n\n        The two elements will have the same parent, and the given element\n        will be immediately after this one.\n        \"\"\"\n        if self is successor:\n            raise ValueError(\"Can't insert an element after itself.\")\n        parent = 
self.parent\n        if parent is None:\n            raise ValueError(\n                \"Element has no parent, so 'after' has no meaning.\")\n        # Extract first so that the index won't be screwed up if they\n        # are siblings.\n        if isinstance(successor, PageElement):\n            successor.extract()\n        index = parent.index(self)\n        parent.insert(index+1, successor)\n\n    def find_next(self, name=None, attrs={}, text=None, **kwargs):\n        \"\"\"Returns the first item that matches the given criteria and\n        appears after this Tag in the document.\"\"\"\n        return self._find_one(self.find_all_next, name, attrs, text, **kwargs)\n    findNext = find_next  # BS3\n\n    def find_all_next(self, name=None, attrs={}, text=None, limit=None,\n                    **kwargs):\n        \"\"\"Returns all items that match the given criteria and appear\n        after this Tag in the document.\"\"\"\n        return self._find_all(name, attrs, text, limit, self.next_elements,\n                             **kwargs)\n    findAllNext = find_all_next  # BS3\n\n    def find_next_sibling(self, name=None, attrs={}, text=None, **kwargs):\n        \"\"\"Returns the closest sibling to this Tag that matches the\n        given criteria and appears after this Tag in the document.\"\"\"\n        return self._find_one(self.find_next_siblings, name, attrs, text,\n                             **kwargs)\n    findNextSibling = find_next_sibling  # BS3\n\n    def find_next_siblings(self, name=None, attrs={}, text=None, limit=None,\n                           **kwargs):\n        \"\"\"Returns the siblings of this Tag that match the given\n        criteria and appear after this Tag in the document.\"\"\"\n        return self._find_all(name, attrs, text, limit,\n                              self.next_siblings, **kwargs)\n    findNextSiblings = find_next_siblings   # BS3\n    fetchNextSiblings = find_next_siblings  # BS2\n\n    def find_previous(self, name=None, 
attrs={}, text=None, **kwargs):\n        \"\"\"Returns the first item that matches the given criteria and\n        appears before this Tag in the document.\"\"\"\n        return self._find_one(\n            self.find_all_previous, name, attrs, text, **kwargs)\n    findPrevious = find_previous  # BS3\n\n    def find_all_previous(self, name=None, attrs={}, text=None, limit=None,\n                        **kwargs):\n        \"\"\"Returns all items that match the given criteria and appear\n        before this Tag in the document.\"\"\"\n        return self._find_all(name, attrs, text, limit, self.previous_elements,\n                           **kwargs)\n    findAllPrevious = find_all_previous  # BS3\n    fetchPrevious = find_all_previous    # BS2\n\n    def find_previous_sibling(self, name=None, attrs={}, text=None, **kwargs):\n        \"\"\"Returns the closest sibling to this Tag that matches the\n        given criteria and appears before this Tag in the document.\"\"\"\n        return self._find_one(self.find_previous_siblings, name, attrs, text,\n                             **kwargs)\n    findPreviousSibling = find_previous_sibling  # BS3\n\n    def find_previous_siblings(self, name=None, attrs={}, text=None,\n                               limit=None, **kwargs):\n        \"\"\"Returns the siblings of this Tag that match the given\n        criteria and appear before this Tag in the document.\"\"\"\n        return self._find_all(name, attrs, text, limit,\n                              self.previous_siblings, **kwargs)\n    findPreviousSiblings = find_previous_siblings   # BS3\n    fetchPreviousSiblings = find_previous_siblings  # BS2\n\n    def find_parent(self, name=None, attrs={}, **kwargs):\n        \"\"\"Returns the closest parent of this Tag that matches the given\n        criteria.\"\"\"\n        # NOTE: We can't use _find_one because findParents takes a different\n        # set of arguments.\n        r = None\n        l = self.find_parents(name, attrs, 1, 
**kwargs)\n        if l:\n            r = l[0]\n        return r\n    findParent = find_parent  # BS3\n\n    def find_parents(self, name=None, attrs={}, limit=None, **kwargs):\n        \"\"\"Returns the parents of this Tag that match the given\n        criteria.\"\"\"\n\n        return self._find_all(name, attrs, None, limit, self.parents,\n                             **kwargs)\n    findParents = find_parents   # BS3\n    fetchParents = find_parents  # BS2\n\n    @property\n    def next(self):\n        return self.next_element\n\n    @property\n    def previous(self):\n        return self.previous_element\n\n    #These methods do the real heavy lifting.\n\n    def _find_one(self, method, name, attrs, text, **kwargs):\n        r = None\n        l = method(name, attrs, text, 1, **kwargs)\n        if l:\n            r = l[0]\n        return r\n\n    def _find_all(self, name, attrs, text, limit, generator, **kwargs):\n        \"Iterates over a generator looking for things that match.\"\n\n        if text is None and 'string' in kwargs:\n            text = kwargs['string']\n            del kwargs['string']\n\n        if isinstance(name, SoupStrainer):\n            strainer = name\n        else:\n            strainer = SoupStrainer(name, attrs, text, **kwargs)\n\n        if text is None and not limit and not attrs and not kwargs:\n            if name is True or name is None:\n                # Optimization to find all tags.\n                result = (element for element in generator\n                          if isinstance(element, Tag))\n                return ResultSet(strainer, result)\n            elif isinstance(name, basestring):\n                # Optimization to find all tags with a given name.\n                result = (element for element in generator\n                          if isinstance(element, Tag)\n                            and element.name == name)\n                return ResultSet(strainer, result)\n        results = ResultSet(strainer)\n        
while True:\n            try:\n                i = next(generator)\n            except StopIteration:\n                break\n            if i:\n                found = strainer.search(i)\n                if found:\n                    results.append(found)\n                    if limit and len(results) >= limit:\n                        break\n        return results\n\n    #These generators can be used to navigate starting from both\n    #NavigableStrings and Tags.\n    @property\n    def next_elements(self):\n        i = self.next_element\n        while i is not None:\n            yield i\n            i = i.next_element\n\n    @property\n    def next_siblings(self):\n        i = self.next_sibling\n        while i is not None:\n            yield i\n            i = i.next_sibling\n\n    @property\n    def previous_elements(self):\n        i = self.previous_element\n        while i is not None:\n            yield i\n            i = i.previous_element\n\n    @property\n    def previous_siblings(self):\n        i = self.previous_sibling\n        while i is not None:\n            yield i\n            i = i.previous_sibling\n\n    @property\n    def parents(self):\n        i = self.parent\n        while i is not None:\n            yield i\n            i = i.parent\n\n    # Methods for supporting CSS selectors.\n\n    tag_name_re = re.compile('^[a-zA-Z0-9][-.a-zA-Z0-9:_]*$')\n\n    # /^([a-zA-Z0-9][-.a-zA-Z0-9:_]*)\\[(\\w+)([=~\\|\\^\\$\\*]?)=?\"?([^\\]\"]*)\"?\\]$/\n    #   \\---------------------------/  \\---/\\-------------/    \\-------/\n    #     |                              |         |               |\n    #     |                              |         |           The value\n    #     |                              |    ~,|,^,$,* or =\n    #     |                           Attribute\n    #    Tag\n    attribselect_re = re.compile(\n        r'^(?P<tag>[a-zA-Z0-9][-.a-zA-Z0-9:_]*)?\\[(?P<attribute>[\\w-]+)(?P<operator>[=~\\|\\^\\$\\*]?)' +\n        
r'=?\"?(?P<value>[^\\]\"]*)\"?\\]$'\n        )\n\n    def _attr_value_as_string(self, value, default=None):\n        \"\"\"Force an attribute value into a string representation.\n\n        A multi-valued attribute will be converted into a\n        space-separated string.\n        \"\"\"\n        value = self.get(value, default)\n        if isinstance(value, list) or isinstance(value, tuple):\n            value = \" \".join(value)\n        return value\n\n    def _tag_name_matches_and(self, function, tag_name):\n        if not tag_name:\n            return function\n        else:\n            def _match(tag):\n                return tag.name == tag_name and function(tag)\n            return _match\n\n    def _attribute_checker(self, operator, attribute, value=''):\n        \"\"\"Create a function that performs a CSS selector operation.\n\n        Takes an operator, attribute and optional value. Returns a\n        function that will return True for elements that match that\n        combination.\n        \"\"\"\n        if operator == '=':\n            # string representation of `attribute` is equal to `value`\n            return lambda el: el._attr_value_as_string(attribute) == value\n        elif operator == '~':\n            # space-separated list representation of `attribute`\n            # contains `value`\n            def _includes_value(element):\n                attribute_value = element.get(attribute, [])\n                if not isinstance(attribute_value, list):\n                    attribute_value = attribute_value.split()\n                return value in attribute_value\n            return _includes_value\n        elif operator == '^':\n            # string representation of `attribute` starts with `value`\n            return lambda el: el._attr_value_as_string(\n                attribute, '').startswith(value)\n        elif operator == '$':\n            # string representation of `attribute` ends with `value`\n            return lambda el: 
el._attr_value_as_string(\n                attribute, '').endswith(value)\n        elif operator == '*':\n            # string representation of `attribute` contains `value`\n            return lambda el: value in el._attr_value_as_string(attribute, '')\n        elif operator == '|':\n            # string representation of `attribute` is either exactly\n            # `value` or starts with `value` and then a dash.\n            def _is_or_starts_with_dash(element):\n                attribute_value = element._attr_value_as_string(attribute, '')\n                return (attribute_value == value or attribute_value.startswith(\n                        value + '-'))\n            return _is_or_starts_with_dash\n        else:\n            return lambda el: el.has_attr(attribute)\n\n    # Old non-property versions of the generators, for backwards\n    # compatibility with BS3.\n    def nextGenerator(self):\n        return self.next_elements\n\n    def nextSiblingGenerator(self):\n        return self.next_siblings\n\n    def previousGenerator(self):\n        return self.previous_elements\n\n    def previousSiblingGenerator(self):\n        return self.previous_siblings\n\n    def parentGenerator(self):\n        return self.parents\n\n\nclass NavigableString(unicode, PageElement):\n\n    PREFIX = ''\n    SUFFIX = ''\n\n    # We can't tell just by looking at a string whether it's contained\n    # in an XML document or an HTML document.\n\n    known_xml = None\n\n    def __new__(cls, value):\n        \"\"\"Create a new NavigableString.\n\n        When unpickling a NavigableString, this method is called with\n        the string in DEFAULT_OUTPUT_ENCODING. 
That encoding needs to be\n        passed in to the superclass's __new__ or the superclass won't know\n        how to handle non-ASCII characters.\n        \"\"\"\n        if isinstance(value, unicode):\n            u = unicode.__new__(cls, value)\n        else:\n            u = unicode.__new__(cls, value, DEFAULT_OUTPUT_ENCODING)\n        u.setup()\n        return u\n\n    def __copy__(self):\n        \"\"\"A copy of a NavigableString has the same contents and class\n        as the original, but it is not connected to the parse tree.\n        \"\"\"\n        return type(self)(self)\n\n    def __getnewargs__(self):\n        return (unicode(self),)\n\n    def __getattr__(self, attr):\n        \"\"\"text.string gives you text. This is for backwards\n        compatibility for Navigable*String, but for CData* it lets you\n        get the string without the CData wrapper.\"\"\"\n        if attr == 'string':\n            return self\n        else:\n            raise AttributeError(\n                \"'%s' object has no attribute '%s'\" % (\n                    self.__class__.__name__, attr))\n\n    def output_ready(self, formatter=\"minimal\"):\n        output = self.format_string(self, formatter)\n        return self.PREFIX + output + self.SUFFIX\n\n    @property\n    def name(self):\n        return None\n\n    @name.setter\n    def name(self, name):\n        raise AttributeError(\"A NavigableString cannot be given a name.\")\n\nclass PreformattedString(NavigableString):\n    \"\"\"A NavigableString not subject to the normal formatting rules.\n\n    The string will be passed into the formatter (to trigger side effects),\n    but the return value will be ignored.\n    \"\"\"\n\n    def output_ready(self, formatter=\"minimal\"):\n        \"\"\"CData strings are passed into the formatter.\n        But the return value is ignored.\"\"\"\n        self.format_string(self, formatter)\n        return self.PREFIX + self + self.SUFFIX\n\nclass CData(PreformattedString):\n\n    
PREFIX = u'<![CDATA['\n    SUFFIX = u']]>'\n\nclass ProcessingInstruction(PreformattedString):\n    \"\"\"An SGML processing instruction.\"\"\"\n\n    PREFIX = u'<?'\n    SUFFIX = u'>'\n\nclass XMLProcessingInstruction(ProcessingInstruction):\n    \"\"\"An XML processing instruction.\"\"\"\n    PREFIX = u'<?'\n    SUFFIX = u'?>'\n\nclass Comment(PreformattedString):\n\n    PREFIX = u'<!--'\n    SUFFIX = u'-->'\n\n\nclass Declaration(PreformattedString):\n    PREFIX = u'<?'\n    SUFFIX = u'?>'\n\n\nclass Doctype(PreformattedString):\n\n    @classmethod\n    def for_name_and_ids(cls, name, pub_id, system_id):\n        value = name or ''\n        if pub_id is not None:\n            value += ' PUBLIC \"%s\"' % pub_id\n            if system_id is not None:\n                value += ' \"%s\"' % system_id\n        elif system_id is not None:\n            value += ' SYSTEM \"%s\"' % system_id\n\n        return Doctype(value)\n\n    PREFIX = u'<!DOCTYPE '\n    SUFFIX = u'>\\n'\n\n\nclass Tag(PageElement):\n\n    \"\"\"Represents a found HTML tag with its attributes and contents.\"\"\"\n\n    def __init__(self, parser=None, builder=None, name=None, namespace=None,\n                 prefix=None, attrs=None, parent=None, previous=None,\n                 is_xml=None):\n        \"Basic constructor.\"\n\n        if parser is None:\n            self.parser_class = None\n        else:\n            # We don't actually store the parser object: that lets extracted\n            # chunks be garbage-collected.\n            self.parser_class = parser.__class__\n        if name is None:\n            raise ValueError(\"No value provided for new tag's name.\")\n        self.name = name\n        self.namespace = namespace\n        self.prefix = prefix\n        if builder is not None:\n            preserve_whitespace_tags = builder.preserve_whitespace_tags\n        else:\n            if is_xml:\n                preserve_whitespace_tags = []\n            else:\n                
preserve_whitespace_tags = HTMLAwareEntitySubstitution.preserve_whitespace_tags\n        self.preserve_whitespace_tags = preserve_whitespace_tags\n        if attrs is None:\n            attrs = {}\n        elif attrs:\n            if builder is not None and builder.cdata_list_attributes:\n                attrs = builder._replace_cdata_list_attribute_values(\n                    self.name, attrs)\n            else:\n                attrs = dict(attrs)\n        else:\n            attrs = dict(attrs)\n\n        # If possible, determine ahead of time whether this tag is an\n        # XML tag.\n        if builder:\n            self.known_xml = builder.is_xml\n        else:\n            self.known_xml = is_xml\n        self.attrs = attrs\n        self.contents = []\n        self.setup(parent, previous)\n        self.hidden = False\n\n        # Set up any substitutions, such as the charset in a META tag.\n        if builder is not None:\n            builder.set_up_substitutions(self)\n            self.can_be_empty_element = builder.can_be_empty_element(name)\n        else:\n            self.can_be_empty_element = False\n\n    parserClass = _alias(\"parser_class\")  # BS3\n\n    def __copy__(self):\n        \"\"\"A copy of a Tag is a new Tag, unconnected to the parse tree.\n        Its contents are a copy of the old Tag's contents.\n        \"\"\"\n        clone = type(self)(None, self.builder, self.name, self.namespace,\n                           self.nsprefix, self.attrs, is_xml=self._is_xml)\n        for attr in ('can_be_empty_element', 'hidden'):\n            setattr(clone, attr, getattr(self, attr))\n        for child in self.contents:\n            clone.append(child.__copy__())\n        return clone\n\n    @property\n    def is_empty_element(self):\n        \"\"\"Is this tag an empty-element tag? 
(aka a self-closing tag)\n\n        A tag that has contents is never an empty-element tag.\n\n        A tag that has no contents may or may not be an empty-element\n        tag. It depends on the builder used to create the tag. If the\n        builder has a designated list of empty-element tags, then only\n        a tag whose name shows up in that list is considered an\n        empty-element tag.\n\n        If the builder has no designated list of empty-element tags,\n        then any tag with no contents is an empty-element tag.\n        \"\"\"\n        return len(self.contents) == 0 and self.can_be_empty_element\n    isSelfClosing = is_empty_element  # BS3\n\n    @property\n    def string(self):\n        \"\"\"Convenience property to get the single string within this tag.\n\n        :Return: If this tag has a single string child, return value\n         is that string. If this tag has no children, or more than one\n         child, return value is None. If this tag has one child tag,\n         return value is the 'string' attribute of the child tag,\n         recursively.\n        \"\"\"\n        if len(self.contents) != 1:\n            return None\n        child = self.contents[0]\n        if isinstance(child, NavigableString):\n            return child\n        return child.string\n\n    @string.setter\n    def string(self, string):\n        self.clear()\n        self.append(string.__class__(string))\n\n    def _all_strings(self, strip=False, types=(NavigableString, CData)):\n        \"\"\"Yield all strings of certain classes, possibly stripping them.\n\n        By default, yields only NavigableString and CData objects. 
So\n        no comments, processing instructions, etc.\n        \"\"\"\n        for descendant in self.descendants:\n            if (\n                (types is None and not isinstance(descendant, NavigableString))\n                or\n                (types is not None and type(descendant) not in types)):\n                continue\n            if strip:\n                descendant = descendant.strip()\n                if len(descendant) == 0:\n                    continue\n            yield descendant\n\n    strings = property(_all_strings)\n\n    @property\n    def stripped_strings(self):\n        for string in self._all_strings(True):\n            yield string\n\n    def get_text(self, separator=u\"\", strip=False,\n                 types=(NavigableString, CData)):\n        \"\"\"\n        Get all child strings, concatenated using the given separator.\n        \"\"\"\n        return separator.join([s for s in self._all_strings(\n                    strip, types=types)])\n    getText = get_text\n    text = property(get_text)\n\n    def decompose(self):\n        \"\"\"Recursively destroys the contents of this tree.\"\"\"\n        self.extract()\n        i = self\n        while i is not None:\n            next = i.next_element\n            i.__dict__.clear()\n            i.contents = []\n            i = next\n\n    def clear(self, decompose=False):\n        \"\"\"\n        Extract all children. If decompose is True, decompose instead.\n        \"\"\"\n        if decompose:\n            for element in self.contents[:]:\n                if isinstance(element, Tag):\n                    element.decompose()\n                else:\n                    element.extract()\n        else:\n            for element in self.contents[:]:\n                element.extract()\n\n    def index(self, element):\n        \"\"\"\n        Find the index of a child by identity, not value. 
Avoids issues with\n        tag.contents.index(element) getting the index of equal elements.\n        \"\"\"\n        for i, child in enumerate(self.contents):\n            if child is element:\n                return i\n        raise ValueError(\"Tag.index: element not in tag\")\n\n    def get(self, key, default=None):\n        \"\"\"Returns the value of the 'key' attribute for the tag, or\n        the value given for 'default' if it doesn't have that\n        attribute.\"\"\"\n        return self.attrs.get(key, default)\n\n    def has_attr(self, key):\n        return key in self.attrs\n\n    def __hash__(self):\n        return str(self).__hash__()\n\n    def __getitem__(self, key):\n        \"\"\"tag[key] returns the value of the 'key' attribute for the tag,\n        and throws an exception if it's not there.\"\"\"\n        return self.attrs[key]\n\n    def __iter__(self):\n        \"Iterating over a tag iterates over its contents.\"\n        return iter(self.contents)\n\n    def __len__(self):\n        \"The length of a tag is the length of its list of contents.\"\n        return len(self.contents)\n\n    def __contains__(self, x):\n        return x in self.contents\n\n    def __nonzero__(self):\n        \"A tag is non-None even if it has no contents.\"\n        return True\n\n    def __setitem__(self, key, value):\n        \"\"\"Setting tag[key] sets the value of the 'key' attribute for the\n        tag.\"\"\"\n        self.attrs[key] = value\n\n    def __delitem__(self, key):\n        \"Deleting tag[key] deletes all 'key' attributes for the tag.\"\n        self.attrs.pop(key, None)\n\n    def __call__(self, *args, **kwargs):\n        \"\"\"Calling a tag like a function is the same as calling its\n        find_all() method. Eg. 
tag('a') returns a list of all the A tags\n        found within this tag.\"\"\"\n        return self.find_all(*args, **kwargs)\n\n    def __getattr__(self, tag):\n        #print \"Getattr %s.%s\" % (self.__class__, tag)\n        if len(tag) > 3 and tag.endswith('Tag'):\n            # BS3: soup.aTag -> soup.find(\"a\")\n            tag_name = tag[:-3]\n            warnings.warn(\n                '.%sTag is deprecated, use .find(\"%s\") instead.' % (\n                    tag_name, tag_name))\n            return self.find(tag_name)\n        # We special case contents to avoid recursion.\n        elif not tag.startswith(\"__\") and not tag == \"contents\":\n            return self.find(tag)\n        raise AttributeError(\n            \"'%s' object has no attribute '%s'\" % (self.__class__, tag))\n\n    def __eq__(self, other):\n        \"\"\"Returns true iff this tag has the same name, the same attributes,\n        and the same contents (recursively) as the given tag.\"\"\"\n        if self is other:\n            return True\n        if (not hasattr(other, 'name') or\n            not hasattr(other, 'attrs') or\n            not hasattr(other, 'contents') or\n            self.name != other.name or\n            self.attrs != other.attrs or\n            len(self) != len(other)):\n            return False\n        for i, my_child in enumerate(self.contents):\n            if my_child != other.contents[i]:\n                return False\n        return True\n\n    def __ne__(self, other):\n        \"\"\"Returns true iff this tag is not identical to the other tag,\n        as defined in __eq__.\"\"\"\n        return not self == other\n\n    def __repr__(self, encoding=\"unicode-escape\"):\n        \"\"\"Renders this tag as a string.\"\"\"\n        if PY3K:\n            # \"The return value must be a string object\", i.e. Unicode\n            return self.decode()\n        else:\n            # \"The return value must be a string object\", i.e. 
a bytestring.\n            # By convention, the return value of __repr__ should also be\n            # an ASCII string.\n            return self.encode(encoding)\n\n    def __unicode__(self):\n        return self.decode()\n\n    def __str__(self):\n        if PY3K:\n            return self.decode()\n        else:\n            return self.encode()\n\n    if PY3K:\n        __str__ = __repr__ = __unicode__\n\n    def encode(self, encoding=DEFAULT_OUTPUT_ENCODING,\n               indent_level=None, formatter=\"minimal\",\n               errors=\"xmlcharrefreplace\"):\n        # Turn the data structure into Unicode, then encode the\n        # Unicode.\n        u = self.decode(indent_level, encoding, formatter)\n        return u.encode(encoding, errors)\n\n    def _should_pretty_print(self, indent_level):\n        \"\"\"Should this tag be pretty-printed?\"\"\"\n\n        return (\n            indent_level is not None\n            and self.name not in self.preserve_whitespace_tags\n        )\n\n    def decode(self, indent_level=None,\n               eventual_encoding=DEFAULT_OUTPUT_ENCODING,\n               formatter=\"minimal\"):\n        \"\"\"Returns a Unicode representation of this tag and its contents.\n\n        :param eventual_encoding: The tag is destined to be\n           encoded into this encoding. This method is _not_\n           responsible for performing that encoding. This information\n           is passed in so that it can be substituted in if the\n           document contains a <META> tag that mentions the document's\n           encoding.\n        \"\"\"\n\n        # First off, turn a string formatter into a function. 
This\n        # will stop the lookup from happening over and over again.\n        if not callable(formatter):\n            formatter = self._formatter_for_name(formatter)\n\n        attrs = []\n        if self.attrs:\n            for key, val in sorted(self.attrs.items()):\n                if val is None:\n                    decoded = key\n                else:\n                    if isinstance(val, list) or isinstance(val, tuple):\n                        val = ' '.join(val)\n                    elif not isinstance(val, basestring):\n                        val = unicode(val)\n                    elif (\n                        isinstance(val, AttributeValueWithCharsetSubstitution)\n                        and eventual_encoding is not None):\n                        val = val.encode(eventual_encoding)\n\n                    text = self.format_string(val, formatter)\n                    decoded = (\n                        unicode(key) + '='\n                        + EntitySubstitution.quoted_attribute_value(text))\n                attrs.append(decoded)\n        close = ''\n        closeTag = ''\n\n        prefix = ''\n        if self.prefix:\n            prefix = self.prefix + \":\"\n\n        if self.is_empty_element:\n            close = '/'\n        else:\n            closeTag = '</%s%s>' % (prefix, self.name)\n\n        pretty_print = self._should_pretty_print(indent_level)\n        space = ''\n        indent_space = ''\n        if indent_level is not None:\n            indent_space = (' ' * (indent_level - 1))\n        if pretty_print:\n            space = indent_space\n            indent_contents = indent_level + 1\n        else:\n            indent_contents = None\n        contents = self.decode_contents(\n            indent_contents, eventual_encoding, formatter)\n\n        if self.hidden:\n            # This is the 'document root' object.\n            s = contents\n        else:\n            s = []\n            attribute_string = ''\n            if 
attrs:\n                attribute_string = ' ' + ' '.join(attrs)\n            if indent_level is not None:\n                # Even if this particular tag is not pretty-printed,\n                # we should indent up to the start of the tag.\n                s.append(indent_space)\n            s.append('<%s%s%s%s>' % (\n                    prefix, self.name, attribute_string, close))\n            if pretty_print:\n                s.append(\"\\n\")\n            s.append(contents)\n            if pretty_print and contents and contents[-1] != \"\\n\":\n                s.append(\"\\n\")\n            if pretty_print and closeTag:\n                s.append(space)\n            s.append(closeTag)\n            if indent_level is not None and closeTag and self.next_sibling:\n                # Even if this particular tag is not pretty-printed,\n                # we're now done with the tag, and we should add a\n                # newline if appropriate.\n                s.append(\"\\n\")\n            s = ''.join(s)\n        return s\n\n    def prettify(self, encoding=None, formatter=\"minimal\"):\n        if encoding is None:\n            return self.decode(True, formatter=formatter)\n        else:\n            return self.encode(encoding, True, formatter=formatter)\n\n    def decode_contents(self, indent_level=None,\n                       eventual_encoding=DEFAULT_OUTPUT_ENCODING,\n                       formatter=\"minimal\"):\n        \"\"\"Renders the contents of this tag as a Unicode string.\n\n        :param indent_level: Each line of the rendering will be\n           indented this many spaces.\n\n        :param eventual_encoding: The tag is destined to be\n           encoded into this encoding. This method is _not_\n           responsible for performing that encoding. 
This information\n           is passed in so that it can be substituted in if the\n           document contains a <META> tag that mentions the document's\n           encoding.\n\n        :param formatter: The output formatter responsible for converting\n           entities to Unicode characters.\n        \"\"\"\n        # First off, turn a string formatter into a function. This\n        # will stop the lookup from happening over and over again.\n        if not callable(formatter):\n            formatter = self._formatter_for_name(formatter)\n\n        pretty_print = (indent_level is not None)\n        s = []\n        for c in self:\n            text = None\n            if isinstance(c, NavigableString):\n                text = c.output_ready(formatter)\n            elif isinstance(c, Tag):\n                s.append(c.decode(indent_level, eventual_encoding,\n                                  formatter))\n            if text and indent_level and not self.name == 'pre':\n                text = text.strip()\n            if text:\n                if pretty_print and not self.name == 'pre':\n                    s.append(\" \" * (indent_level - 1))\n                s.append(text)\n                if pretty_print and not self.name == 'pre':\n                    s.append(\"\\n\")\n        return ''.join(s)\n\n    def encode_contents(\n        self, indent_level=None, encoding=DEFAULT_OUTPUT_ENCODING,\n        formatter=\"minimal\"):\n        \"\"\"Renders the contents of this tag as a bytestring.\n\n        :param indent_level: Each line of the rendering will be\n           indented this many spaces.\n\n        :param eventual_encoding: The bytestring will be in this encoding.\n\n        :param formatter: The output formatter responsible for converting\n           entities to Unicode characters.\n        \"\"\"\n\n        contents = self.decode_contents(indent_level, encoding, formatter)\n        return contents.encode(encoding)\n\n    # Old method for BS3 compatibility\n   
 def renderContents(self, encoding=DEFAULT_OUTPUT_ENCODING,\n                       prettyPrint=False, indentLevel=0):\n        if not prettyPrint:\n            indentLevel = None\n        return self.encode_contents(\n            indent_level=indentLevel, encoding=encoding)\n\n    #Soup methods\n\n    def find(self, name=None, attrs={}, recursive=True, text=None,\n             **kwargs):\n        \"\"\"Return only the first child of this Tag matching the given\n        criteria.\"\"\"\n        r = None\n        l = self.find_all(name, attrs, recursive, text, 1, **kwargs)\n        if l:\n            r = l[0]\n        return r\n    findChild = find\n\n    def find_all(self, name=None, attrs={}, recursive=True, text=None,\n                 limit=None, **kwargs):\n        \"\"\"Extracts a list of Tag objects that match the given\n        criteria.  You can specify the name of the Tag and any\n        attributes you want the Tag to have.\n\n        The value of a key-value pair in the 'attrs' map can be a\n        string, a list of strings, a regular expression object, or a\n        callable that takes a string and returns whether or not the\n        string matches for some custom definition of 'matches'. 
The\n        same is true of the tag name.\"\"\"\n\n        generator = self.descendants\n        if not recursive:\n            generator = self.children\n        return self._find_all(name, attrs, text, limit, generator, **kwargs)\n    findAll = find_all       # BS3\n    findChildren = find_all  # BS2\n\n    #Generator methods\n    @property\n    def children(self):\n        # return iter() to make the purpose of the method clear\n        return iter(self.contents)  # XXX This seems to be untested.\n\n    @property\n    def descendants(self):\n        if not len(self.contents):\n            return\n        stopNode = self._last_descendant().next_element\n        current = self.contents[0]\n        while current is not stopNode:\n            yield current\n            current = current.next_element\n\n    # CSS selector code\n\n    _selector_combinators = ['>', '+', '~']\n    _select_debug = False\n    quoted_colon = re.compile('\"[^\"]*:[^\"]*\"')\n    def select_one(self, selector):\n        \"\"\"Perform a CSS selection operation on the current element.\"\"\"\n        value = self.select(selector, limit=1)\n        if value:\n            return value[0]\n        return None\n\n    def select(self, selector, _candidate_generator=None, limit=None):\n        \"\"\"Perform a CSS selection operation on the current element.\"\"\"\n\n        # Handle grouping selectors if ',' exists, ie: p,a\n        if ',' in selector:\n            context = []\n            for partial_selector in selector.split(','):\n                partial_selector = partial_selector.strip()\n                if partial_selector == '':\n                    raise ValueError('Invalid group selection syntax: %s' % selector)\n                candidates = self.select(partial_selector, limit=limit)\n                for candidate in candidates:\n                    if candidate not in context:\n                        context.append(candidate)\n\n                if limit and len(context) >= limit:\n       
             break\n            return context\n        tokens = shlex.split(selector)\n        current_context = [self]\n\n        if tokens[-1] in self._selector_combinators:\n            raise ValueError(\n                'Final combinator \"%s\" is missing an argument.' % tokens[-1])\n\n        if self._select_debug:\n            print 'Running CSS selector \"%s\"' % selector\n\n        for index, token in enumerate(tokens):\n            new_context = []\n            new_context_ids = set([])\n\n            if tokens[index-1] in self._selector_combinators:\n                # This token was consumed by the previous combinator. Skip it.\n                if self._select_debug:\n                    print '  Token was consumed by the previous combinator.'\n                continue\n\n            if self._select_debug:\n                print ' Considering token \"%s\"' % token\n            recursive_candidate_generator = None\n            tag_name = None\n\n            # Each operation corresponds to a checker function, a rule\n            # for determining whether a candidate matches the\n            # selector. Candidates are generated by the active\n            # iterator.\n            checker = None\n\n            m = self.attribselect_re.match(token)\n            if m is not None:\n                # Attribute selector\n                tag_name, attribute, operator, value = m.groups()\n                checker = self._attribute_checker(operator, attribute, value)\n\n            elif '#' in token:\n                # ID selector\n                tag_name, tag_id = token.split('#', 1)\n                def id_matches(tag):\n                    return tag.get('id', None) == tag_id\n                checker = id_matches\n\n            elif '.' 
in token:\n                # Class selector\n                tag_name, klass = token.split('.', 1)\n                classes = set(klass.split('.'))\n                def classes_match(candidate):\n                    return classes.issubset(candidate.get('class', []))\n                checker = classes_match\n\n            elif ':' in token and not self.quoted_colon.search(token):\n                # Pseudo-class\n                tag_name, pseudo = token.split(':', 1)\n                if tag_name == '':\n                    raise ValueError(\n                        \"A pseudo-class must be prefixed with a tag name.\")\n                pseudo_attributes = re.match('([a-zA-Z\\d-]+)\\(([a-zA-Z\\d]+)\\)', pseudo)\n                found = []\n                if pseudo_attributes is None:\n                    pseudo_type = pseudo\n                    pseudo_value = None\n                else:\n                    pseudo_type, pseudo_value = pseudo_attributes.groups()\n                if pseudo_type == 'nth-of-type':\n                    try:\n                        pseudo_value = int(pseudo_value)\n                    except:\n                        raise NotImplementedError(\n                            'Only numeric values are currently supported for the nth-of-type pseudo-class.')\n                    if pseudo_value < 1:\n                        raise ValueError(\n                            'nth-of-type pseudo-class value must be at least 1.')\n                    class Counter(object):\n                        def __init__(self, destination):\n                            self.count = 0\n                            self.destination = destination\n\n                        def nth_child_of_type(self, tag):\n                            self.count += 1\n                            if self.count == self.destination:\n                                return True\n                            else:\n                                return False\n                    checker 
= Counter(pseudo_value).nth_child_of_type\n                else:\n                    raise NotImplementedError(\n                        'Only the following pseudo-classes are implemented: nth-of-type.')\n\n            elif token == '*':\n                # Star selector -- matches everything\n                pass\n            elif token == '>':\n                # Run the next token as a CSS selector against the\n                # direct children of each tag in the current context.\n                recursive_candidate_generator = lambda tag: tag.children\n            elif token == '~':\n                # Run the next token as a CSS selector against the\n                # siblings of each tag in the current context.\n                recursive_candidate_generator = lambda tag: tag.next_siblings\n            elif token == '+':\n                # For each tag in the current context, run the next\n                # token as a CSS selector against the tag's next\n                # sibling that's a tag.\n                def next_tag_sibling(tag):\n                    yield tag.find_next_sibling(True)\n                recursive_candidate_generator = next_tag_sibling\n\n            elif self.tag_name_re.match(token):\n                # Just a tag name.\n                tag_name = token\n            else:\n                raise ValueError(\n                    'Unsupported or invalid CSS selector: \"%s\"' % token)\n            if recursive_candidate_generator:\n                # This happens when the selector looks like  \"> foo\".\n                #\n                # The generator calls select() recursively on every\n                # member of the current context, passing in a different\n                # candidate generator and a different selector.\n                #\n                # In the case of \"> foo\", the candidate generator is\n                # one that yields a tag's direct children (\">\"), and\n                # the selector is \"foo\".\n                
next_token = tokens[index+1]\n                def recursive_select(tag):\n                    if self._select_debug:\n                        print '    Calling select(\"%s\") recursively on %s %s' % (next_token, tag.name, tag.attrs)\n                        print '-' * 40\n                    for i in tag.select(next_token, recursive_candidate_generator):\n                        if self._select_debug:\n                            print '(Recursive select picked up candidate %s %s)' % (i.name, i.attrs)\n                        yield i\n                    if self._select_debug:\n                        print '-' * 40\n                _use_candidate_generator = recursive_select\n            elif _candidate_generator is None:\n                # By default, a tag's candidates are all of its\n                # children. If tag_name is defined, only yield tags\n                # with that name.\n                if self._select_debug:\n                    if tag_name:\n                        check = \"[any]\"\n                    else:\n                        check = tag_name\n                    print '   Default candidate generator, tag name=\"%s\"' % check\n                if self._select_debug:\n                    # This is redundant with later code, but it stops\n                    # a bunch of bogus tags from cluttering up the\n                    # debug log.\n                    def default_candidate_generator(tag):\n                        for child in tag.descendants:\n                            if not isinstance(child, Tag):\n                                continue\n                            if tag_name and not child.name == tag_name:\n                                continue\n                            yield child\n                    _use_candidate_generator = default_candidate_generator\n                else:\n                    _use_candidate_generator = lambda tag: tag.descendants\n            else:\n                _use_candidate_generator = 
_candidate_generator\n\n            count = 0\n            for tag in current_context:\n                if self._select_debug:\n                    print \"    Running candidate generator on %s %s\" % (\n                        tag.name, repr(tag.attrs))\n                for candidate in _use_candidate_generator(tag):\n                    if not isinstance(candidate, Tag):\n                        continue\n                    if tag_name and candidate.name != tag_name:\n                        continue\n                    if checker is not None:\n                        try:\n                            result = checker(candidate)\n                        except StopIteration:\n                            # The checker has decided we should no longer\n                            # run the generator.\n                            break\n                    if checker is None or result:\n                        if self._select_debug:\n                            print \"     SUCCESS %s %s\" % (candidate.name, repr(candidate.attrs))\n                        if id(candidate) not in new_context_ids:\n                            # If a tag matches a selector more than once,\n                            # don't include it in the context more than once.\n                            new_context.append(candidate)\n                            new_context_ids.add(id(candidate))\n                    elif self._select_debug:\n                        print \"     FAILURE %s %s\" % (candidate.name, repr(candidate.attrs))\n\n            current_context = new_context\n        if limit and len(current_context) >= limit:\n            current_context = current_context[:limit]\n\n        if self._select_debug:\n            print \"Final verdict:\"\n            for i in current_context:\n                print \" %s %s\" % (i.name, i.attrs)\n        return current_context\n\n    # Old names for backwards compatibility\n    def childGenerator(self):\n        return self.children\n\n    
def recursiveChildGenerator(self):\n        return self.descendants\n\n    def has_key(self, key):\n        \"\"\"This was kind of misleading because has_key() (attributes)\n        was different from __in__ (contents). has_key() is gone in\n        Python 3, anyway.\"\"\"\n        warnings.warn('has_key is deprecated. Use has_attr(\"%s\") instead.' % (\n                key))\n        return self.has_attr(key)\n\n# Next, a couple classes to represent queries and their results.\nclass SoupStrainer(object):\n    \"\"\"Encapsulates a number of ways of matching a markup element (tag or\n    text).\"\"\"\n\n    def __init__(self, name=None, attrs={}, text=None, **kwargs):\n        self.name = self._normalize_search_value(name)\n        if not isinstance(attrs, dict):\n            # Treat a non-dict value for attrs as a search for the 'class'\n            # attribute.\n            kwargs['class'] = attrs\n            attrs = None\n\n        if 'class_' in kwargs:\n            # Treat class_=\"foo\" as a search for the 'class'\n            # attribute, overriding any non-dict value for attrs.\n            kwargs['class'] = kwargs['class_']\n            del kwargs['class_']\n\n        if kwargs:\n            if attrs:\n                attrs = attrs.copy()\n                attrs.update(kwargs)\n            else:\n                attrs = kwargs\n        normalized_attrs = {}\n        for key, value in attrs.items():\n            normalized_attrs[key] = self._normalize_search_value(value)\n\n        self.attrs = normalized_attrs\n        self.text = self._normalize_search_value(text)\n\n    def _normalize_search_value(self, value):\n        # Leave it alone if it's a Unicode string, a callable, a\n        # regular expression, a boolean, or None.\n        if (isinstance(value, unicode) or callable(value) or hasattr(value, 'match')\n            or isinstance(value, bool) or value is None):\n            return value\n\n        # If it's a bytestring, convert it to Unicode, 
treating it as UTF-8.\n        if isinstance(value, bytes):\n            return value.decode(\"utf8\")\n\n        # If it's listlike, convert it into a list of strings.\n        if hasattr(value, '__iter__'):\n            new_value = []\n            for v in value:\n                if (hasattr(v, '__iter__') and not isinstance(v, bytes)\n                    and not isinstance(v, unicode)):\n                    # This is almost certainly the user's mistake. In the\n                    # interests of avoiding infinite loops, we'll let\n                    # it through as-is rather than doing a recursive call.\n                    new_value.append(v)\n                else:\n                    new_value.append(self._normalize_search_value(v))\n            return new_value\n\n        # Otherwise, convert it into a Unicode string.\n        # The unicode(str()) thing is so this will do the same thing on Python 2\n        # and Python 3.\n        return unicode(str(value))\n\n    def __str__(self):\n        if self.text:\n            return self.text\n        else:\n            return \"%s|%s\" % (self.name, self.attrs)\n\n    def search_tag(self, markup_name=None, markup_attrs={}):\n        found = None\n        markup = None\n        if isinstance(markup_name, Tag):\n            markup = markup_name\n            markup_attrs = markup\n        call_function_with_tag_data = (\n            isinstance(self.name, collections.Callable)\n            and not isinstance(markup_name, Tag))\n\n        if ((not self.name)\n            or call_function_with_tag_data\n            or (markup and self._matches(markup, self.name))\n            or (not markup and self._matches(markup_name, self.name))):\n            if call_function_with_tag_data:\n                match = self.name(markup_name, markup_attrs)\n            else:\n                match = True\n                markup_attr_map = None\n                for attr, match_against in list(self.attrs.items()):\n                    if 
not markup_attr_map:\n                        if hasattr(markup_attrs, 'get'):\n                            markup_attr_map = markup_attrs\n                        else:\n                            markup_attr_map = {}\n                            for k, v in markup_attrs:\n                                markup_attr_map[k] = v\n                    attr_value = markup_attr_map.get(attr)\n                    if not self._matches(attr_value, match_against):\n                        match = False\n                        break\n            if match:\n                if markup:\n                    found = markup\n                else:\n                    found = markup_name\n        if found and self.text and not self._matches(found.string, self.text):\n            found = None\n        return found\n    searchTag = search_tag\n\n    def search(self, markup):\n        # print 'looking for %s in %s' % (self, markup)\n        found = None\n        # If given a list of items, scan it for a text element that\n        # matches.\n        if hasattr(markup, '__iter__') and not isinstance(markup, (Tag, basestring)):\n            for element in markup:\n                if isinstance(element, NavigableString) \\\n                       and self.search(element):\n                    found = element\n                    break\n        # If it's a Tag, make sure its name or attributes match.\n        # Don't bother with Tags if we're searching for text.\n        elif isinstance(markup, Tag):\n            if not self.text or self.name or self.attrs:\n                found = self.search_tag(markup)\n        # If it's text, make sure the text matches.\n        elif isinstance(markup, NavigableString) or \\\n                 isinstance(markup, basestring):\n            if not self.name and not self.attrs and self._matches(markup, self.text):\n                found = markup\n        else:\n            raise Exception(\n                \"I don't know how to match against a %s\" % 
markup.__class__)\n        return found\n\n    def _matches(self, markup, match_against):\n        # print u\"Matching %s against %s\" % (markup, match_against)\n        result = False\n        if isinstance(markup, list) or isinstance(markup, tuple):\n            # This should only happen when searching a multi-valued attribute\n            # like 'class'.\n            for item in markup:\n                if self._matches(item, match_against):\n                    return True\n            # We didn't match any particular value of the multivalue\n            # attribute, but maybe we match the attribute value when\n            # considered as a string.\n            if self._matches(' '.join(markup), match_against):\n                return True\n            return False\n\n        if match_against is True:\n            # True matches any non-None value.\n            return markup is not None\n\n        if isinstance(match_against, collections.Callable):\n            return match_against(markup)\n\n        # Custom callables take the tag as an argument, but all\n        # other ways of matching match the tag name as a string.\n        if isinstance(markup, Tag):\n            markup = markup.name\n\n        # Ensure that `markup` is either a Unicode string, or None.\n        markup = self._normalize_search_value(markup)\n\n        if markup is None:\n            # None matches None, False, an empty string, an empty list, and so on.\n            return not match_against\n\n        if isinstance(match_against, unicode):\n            # Exact string match\n            return markup == match_against\n\n        if hasattr(match_against, 'match'):\n            # Regexp match\n            return match_against.search(markup)\n\n        if hasattr(match_against, '__iter__'):\n            # The markup must be an exact match against something\n            # in the iterable.\n            return markup in match_against\n\n\nclass ResultSet(list):\n    \"\"\"A ResultSet is just a 
list that keeps track of the SoupStrainer\n    that created it.\"\"\"\n    def __init__(self, source, result=()):\n        super(ResultSet, self).__init__(result)\n        self.source = source\n"
  },
  {
    "path": "example/parallax_svg_tools/run.py",
    "content": "from svg import * \r\n\r\ncompile_svg('animation.svg', 'processed_animation.svg', \r\n{\r\n\t'process_layer_names': True,\r\n\t'namespace': 'example'\r\n})\r\n\r\ninline_svg('animation.html', 'output/animation.html')"
  },
  {
    "path": "example/parallax_svg_tools/svg/__init__.py",
    "content": "# Super simple Illustrator SVG processor for animations. Uses the BeautifulSoup python xml library. \n\nimport os\nimport errno\nfrom bs4 import BeautifulSoup\n\ndef create_file(path, mode):\n\tdirectory = os.path.dirname(path)\n\tif directory != '' and not os.path.exists(directory):\n\t\ttry:\n\t\t\tos.makedirs(directory)\n\t\texcept OSError as e:\n\t\t    if e.errno != errno.EEXIST:\n\t\t        raise\n\t\n\tfile = open(path, mode)\n\treturn file\n\ndef parse_svg(path, namespace, options):\n\t#print(path)\n\tfile = open(path,'r')\n\tfile_string = file.read().decode('utf8')\n\tfile.close();\n\n\tif namespace == None:\n\t\tnamespace = ''\n\telse:\n\t\tnamespace = namespace + '-'\n\n\t# BeautifulSoup can't parse attributes with dashes so we replace them with underscores instead\t\t\n\tfile_string = file_string.replace('data-name', 'data_name')\n\n\t# Expand origin to data-svg-origin as its a pain in the ass to type\n\tif 'expand_origin' in options and options['expand_origin'] == True:\n\t\tfile_string = file_string.replace('origin=', 'data-svg-origin=')\n\t\n\t# Add namespaces to ids\n\tif namespace:\n\t\tfile_string = file_string.replace('id=\"', 'id=\"' + namespace)\n\t\tfile_string = file_string.replace('url(#', 'url(#' + namespace)\n\n\tsvg = BeautifulSoup(file_string, 'html.parser')\n\n\t# namespace symbols\n\tsymbol_elements = svg.select('symbol')\n\tfor element in symbol_elements:\n\t\tdel element['data_name']\n\n\tuse_elements = svg.select('use')\n\tfor element in use_elements:\n\t\tif namespace:\n\t\t\thref = element['xlink:href']\n\t\t\telement['xlink:href'] = href.replace('#', '#' + namespace)\n\n\t\tdel element['id']\n\n\n\t# remove titles\n\tif 'title' in options and options['title'] == False:\n\t\ttitles = svg.select('title')\n\t\tfor t in titles: t.extract()\n\n\n\tforeign_tags_to_add = []\n\tif 'convert_svg_text_to_html' in options and options['convert_svg_text_to_html'] == True:\n\t\ttext_elements = 
svg.select('[data_name=\"#TEXT\"]')\n\t\tfor element in text_elements:\n\n\t\t\tarea = element.rect\n\t\t\tif not area:\n\t\t\t\tprint('WARNING: Text areas require a rectangle to be in the same group as the text element')\n\t\t\t\tcontinue\n\n\t\t\t# element.select('text')[0] would raise an IndexError when no text element exists, so check the list first\n\t\t\ttext_matches = element.select('text')\n\t\t\tif not text_matches:\n\t\t\t\tprint('WARNING: No text element found in text area')\n\t\t\t\tcontinue\n\t\t\ttext_element = text_matches[0]\n\n\t\t\tx = area['x']\n\t\t\ty = area['y']\n\t\t\twidth = area['width']\n\t\t\theight = area['height']\n\n\t\t\ttext_content = text_element.getText()\n\t\t\ttext_tag = BeautifulSoup(text_content, 'html.parser')\n\n\t\t\tdata_name = None\n\t\t\tif area.has_attr('data_name'): data_name = area['data_name']\n\n\t\t\tarea.extract()\n\t\t\ttext_element.extract()\n\n\t\t\tforeign_object_tag = svg.new_tag('foreignObject')\n\t\t\tforeign_object_tag['requiredFeatures'] = \"http://www.w3.org/TR/SVG11/feature#Extensibility\"\n\t\t\tforeign_object_tag['transform'] = 'translate(' + x + ' ' + y + ')'\n\t\t\tforeign_object_tag['width'] = width + 'px'\n\t\t\tforeign_object_tag['height'] = height + 'px'\n\n\t\t\tif 'dont_overflow_text_areas' in options and options['dont_overflow_text_areas'] == True:\n\t\t\t\tforeign_object_tag['style'] = 'overflow:hidden'\n\n\t\t\t# Parse '#key=value, ...' directives from the rectangle's layer name. Namespace the id value\n\t\t\t# (not the key) to match the layer name processing below. Don't 'continue' here: the text\n\t\t\t# has already been extracted, so we must still append the foreignObject below.\n\t\t\tif data_name and data_name.startswith('#'):\n\t\t\t\tval = data_name.replace('#', '')\n\n\t\t\t\tattributes = str.split(str(val), ',')\n\t\t\t\tfor a in attributes:\n\t\t\t\t\tsplit = str.split(a.strip(), '=')\n\t\t\t\t\tif (len(split) < 2): continue\n\t\t\t\t\tkey = split[0]\n\t\t\t\t\tvalue = split[1]\n\t\t\t\t\tif key == 'id': value = namespace + value\n\t\t\t\t\tforeign_object_tag[key] = value\n\n\t\t\tforeign_object_tag.append(text_tag)\n\n\t\t\t# modifying the tree affects searches so we need to defer it until the end\n\t\t\tforeign_tags_to_add.append({'element':element, 'tag':foreign_object_tag})\n\n\n\tif (not 
'process_layer_names' in options or ('process_layer_names' in options and options['process_layer_names'] == True)):\n\t\telements_with_data_names = svg.select('[data_name]')\n\t\tfor element in elements_with_data_names:\n\n\t\t\t# remove any existing id tag as we'll be making our own\n\t\t\tif element.has_attr('id'): del element.attrs['id']\n\t\t\t\n\t\t\tval = element['data_name']\n\t\t\t#print(val)\n\t\t\tdel element['data_name']\n\n\t\t\tif not val.startswith('#'): continue\n\t\t\tval = val.replace('#', '')\n\t\t\t\n\t\t\tattributes = str.split(str(val), ',')\n\t\t\tfor a in attributes:\n\t\t\t\tsplit = str.split(a.strip(), '=')\n\t\t\t\tif (len(split) < 2): continue\n\t\t\t\tkey = split[0]\n\t\t\t\tvalue = split[1]\n\t\t\t\tif key == 'id' or key == 'class': value = namespace + value\n\t\t\t\telement[key] = value\n\t\n\t\n\tif 'remove_text_attributes' in options and options['remove_text_attributes'] == True:\n\t\t#Remove attributes from text tags\n\t\ttext_elements = svg.select('text')\n\t\tfor element in text_elements:\n\t\t\tif element.has_attr('font-size'): del element.attrs['font-size']\n\t\t\tif element.has_attr('font-family'): del element.attrs['font-family']\n\t\t\tif element.has_attr('font-weight'): del element.attrs['font-weight']\n\t\t\tif element.has_attr('fill'): del element.attrs['fill']\n\n\t# Do tree modifications here\n\tif 'convert_svg_text_to_html' in options and options['convert_svg_text_to_html'] == True:\n\t\tfor t in foreign_tags_to_add:\n\t\t\tt['element'].append(t['tag'])\n\t\n\n\treturn svg\n\n\ndef write_svg(svg, dst_path, options):\n\t\n\tresult = str(svg)\n\tresult = unicode(result, \"utf8\")\t\n\t#Remove self closing tags\n\tresult = result.replace('></circle>','/>') \n\tresult = result.replace('></rect>','/>') \n\tresult = result.replace('></path>','/>') \n\tresult = result.replace('></polygon>','/>')\n\n\tif 'nowhitespace' in options and options['nowhitespace'] == True:\n\t\tresult = result.replace('\\n','')\n\t#else:\n\t#\tresult 
= svg.prettify()\n\n\t# bs4 incorrectly outputs clippath instead of clipPath \n\tresult = result.replace('clippath', 'clipPath')\n\tresult = result.encode('UTF8')\n\n\tresult_file = create_file(dst_path, 'wb')\n\tresult_file.write(result)\n\tresult_file.close()\n\n\n\ndef compile_svg(src_path, dst_path, options):\n\tnamespace = None\n\n\tif 'namespace' in options: \n\t\tnamespace = options['namespace']\n\tsvg = parse_svg(src_path, namespace, options)\n\n\tif 'attributes' in options: \n\t\tattrs = options['attributes']\n\t\tfor k in attrs:\n\t\t\tsvg.svg[k] = attrs[k]\n\n\tif 'description' in options:\n\t\tcurrent_desc = svg.select('description')\n\t\tif current_desc:\n\t\t\tcurrent_desc[0].string = options['description']\n\t\telse:\n\t\t\tdesc_tag = svg.new_tag('description');\n\t\t\tdesc_tag.string = options['description']\n\t\t\tsvg.svg.append(desc_tag)\n\t\t\n\twrite_svg(svg, dst_path, options)\n\n\n\ndef compile_master_svg(src_path, dst_path, options):\n\tprint('\\n')\n\tprint(src_path)\n\tfile = open(src_path)\n\tsvg = BeautifulSoup(file, 'html.parser')\n\tfile.close()\n\n\tmaster_viewbox = svg.svg.attrs['viewbox']\n\n\timport_tags = svg.select('[path]')\n\tfor tag in import_tags:\n\n\t\tcomponent_path = str(tag['path'])\n\t\t\n\t\tnamespace = None\n\t\tif tag.has_attr('namespace'): namespace = tag['namespace']\n\n\t\tcomponent = parse_svg(component_path, namespace, options)\n\n\t\tcomponent_viewbox = component.svg.attrs['viewbox']\n\t\tif master_viewbox != component_viewbox:\n\t\t\tprint('WARNING: Master viewbox: [' + master_viewbox + '] does not match component viewbox [' + component_viewbox + ']')\n\t\n\t\t# Moves the contents of the component svg file into the master svg\n\t\tfor child in component.svg: tag.contents.append(child)\n\n\t\t# Remove redundant path and namespace attributes from the import element\n\t\tdel tag.attrs['path']\n\t\tif namespace: del tag.attrs['namespace']\n\n\n\tif 'attributes' in options: \n\t\tattrs = 
options['attributes']\n\t\tfor k in attrs:\n\t\t\tprint(k + ' = ' + attrs[k])\n\t\t\tsvg.svg[k] = attrs[k]\n\n\n\tif 'title' in options and options['title'] is not False:\n\t\tcurrent_title = svg.select('title')\n\t\tif current_title:\n\t\t\tcurrent_title[0].string = options['title']\n\t\telse:\n\t\t\ttitle_tag = svg.new_tag('title');\n\t\t\ttitle_tag.string = options['title']\n\t\t\tsvg.svg.append(title_tag)\n\n\n\tif 'description' in options:\n\t\tcurrent_desc = svg.select('description')\n\t\tif current_desc:\n\t\t\tcurrent_desc[0].string = options['description']\n\t\telse:\n\t\t\tdesc_tag = svg.new_tag('description');\n\t\t\tdesc_tag.string = options['description']\n\t\t\tsvg.svg.append(desc_tag)\n\n\n\twrite_svg(svg, dst_path, options)\n\n\n# Super dumb / simple function that inlines svgs into html source files\n\ndef parse_markup(src_path, output):\n\tprint(src_path)\n\tread_state = 0\n\tfile = open(src_path, 'r')\n\tfor line in file:\n\t\tif line.startswith('//import'):\n\t\t\tpath = line.split('//import ')[1].rstrip('\\n').rstrip('\\r')\n\t\t\tparse_markup(path, output)\n\t\telse:\n\t\t\toutput.append(line)\n\n\tfile.close()\n\ndef inline_svg(src_path, dst_path):\n\toutput = [];\n\n\tfile = create_file(dst_path, 'w')\n\tparse_markup(src_path, output)\n\tfor line in output: file.write(line)\n\tfile.close()\n\tprint('')\t"
  },
  {
    "path": "parallax_svg_tools/bs4/__init__.py",
    "content": "\"\"\"Beautiful Soup\nElixir and Tonic\n\"The Screen-Scraper's Friend\"\nhttp://www.crummy.com/software/BeautifulSoup/\n\nBeautiful Soup uses a pluggable XML or HTML parser to parse a\n(possibly invalid) document into a tree representation. Beautiful Soup\nprovides methods and Pythonic idioms that make it easy to navigate,\nsearch, and modify the parse tree.\n\nBeautiful Soup works with Python 2.7 and up. It works better if lxml\nand/or html5lib is installed.\n\nFor more than you ever wanted to know about Beautiful Soup, see the\ndocumentation:\nhttp://www.crummy.com/software/BeautifulSoup/bs4/doc/\n\n\"\"\"\n\n# Use of this source code is governed by a BSD-style license that can be\n# found in the LICENSE file.\n\n__author__ = \"Leonard Richardson (leonardr@segfault.org)\"\n__version__ = \"4.5.1\"\n__copyright__ = \"Copyright (c) 2004-2016 Leonard Richardson\"\n__license__ = \"MIT\"\n\n__all__ = ['BeautifulSoup']\n\nimport os\nimport re\nimport traceback\nimport warnings\n\nfrom .builder import builder_registry, ParserRejectedMarkup\nfrom .dammit import UnicodeDammit\nfrom .element import (\n    CData,\n    Comment,\n    DEFAULT_OUTPUT_ENCODING,\n    Declaration,\n    Doctype,\n    NavigableString,\n    PageElement,\n    ProcessingInstruction,\n    ResultSet,\n    SoupStrainer,\n    Tag,\n    )\n\n# The very first thing we do is give a useful error if someone is\n# running this code under Python 3 without converting it.\n'You are trying to run the Python 2 version of Beautiful Soup under Python 3. 
This will not work.'<>'You need to convert the code, either by installing it (`python setup.py install`) or by running 2to3 (`2to3 -w bs4`).'\n\nclass BeautifulSoup(Tag):\n    \"\"\"\n    This class defines the basic interface called by the tree builders.\n\n    These methods will be called by the parser:\n      reset()\n      feed(markup)\n\n    The tree builder may call these methods from its feed() implementation:\n      handle_starttag(name, attrs) # See note about return value\n      handle_endtag(name)\n      handle_data(data) # Appends to the current data node\n      endData(containerClass=NavigableString) # Ends the current data node\n\n    No matter how complicated the underlying parser is, you should be\n    able to build a tree using 'start tag' events, 'end tag' events,\n    'data' events, and \"done with data\" events.\n\n    If you encounter an empty-element tag (aka a self-closing tag,\n    like HTML's <br> tag), call handle_starttag and then\n    handle_endtag.\n    \"\"\"\n    ROOT_TAG_NAME = u'[document]'\n\n    # If the end-user gives no indication which tree builder they\n    # want, look for one with these features.\n    DEFAULT_BUILDER_FEATURES = ['html', 'fast']\n\n    ASCII_SPACES = '\\x20\\x0a\\x09\\x0c\\x0d'\n\n    NO_PARSER_SPECIFIED_WARNING = \"No parser was explicitly specified, so I'm using the best available %(markup_type)s parser for this system (\\\"%(parser)s\\\"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.\\n\\nThe code that caused this warning is on line %(line_number)s of the file %(filename)s. 
To get rid of this warning, change code that looks like this:\\n\\n BeautifulSoup([your markup])\\n\\nto this:\\n\\n BeautifulSoup([your markup], \\\"%(parser)s\\\")\\n\"\n\n    def __init__(self, markup=\"\", features=None, builder=None,\n                 parse_only=None, from_encoding=None, exclude_encodings=None,\n                 **kwargs):\n        \"\"\"The Soup object is initialized as the 'root tag', and the\n        provided markup (which can be a string or a file-like object)\n        is fed into the underlying parser.\"\"\"\n\n        if 'convertEntities' in kwargs:\n            warnings.warn(\n                \"BS4 does not respect the convertEntities argument to the \"\n                \"BeautifulSoup constructor. Entities are always converted \"\n                \"to Unicode characters.\")\n\n        if 'markupMassage' in kwargs:\n            del kwargs['markupMassage']\n            warnings.warn(\n                \"BS4 does not respect the markupMassage argument to the \"\n                \"BeautifulSoup constructor. The tree builder is responsible \"\n                \"for any necessary markup massage.\")\n\n        if 'smartQuotesTo' in kwargs:\n            del kwargs['smartQuotesTo']\n            warnings.warn(\n                \"BS4 does not respect the smartQuotesTo argument to the \"\n                \"BeautifulSoup constructor. Smart quotes are always converted \"\n                \"to Unicode characters.\")\n\n        if 'selfClosingTags' in kwargs:\n            del kwargs['selfClosingTags']\n            warnings.warn(\n                \"BS4 does not respect the selfClosingTags argument to the \"\n                \"BeautifulSoup constructor. The tree builder is responsible \"\n                \"for understanding self-closing tags.\")\n\n        if 'isHTML' in kwargs:\n            del kwargs['isHTML']\n            warnings.warn(\n                \"BS4 does not respect the isHTML argument to the \"\n                \"BeautifulSoup constructor. 
Suggest you use \"\n                \"features='lxml' for HTML and features='lxml-xml' for \"\n                \"XML.\")\n\n        def deprecated_argument(old_name, new_name):\n            if old_name in kwargs:\n                warnings.warn(\n                    'The \"%s\" argument to the BeautifulSoup constructor '\n                    'has been renamed to \"%s.\"' % (old_name, new_name))\n                value = kwargs[old_name]\n                del kwargs[old_name]\n                return value\n            return None\n\n        parse_only = parse_only or deprecated_argument(\n            \"parseOnlyThese\", \"parse_only\")\n\n        from_encoding = from_encoding or deprecated_argument(\n            \"fromEncoding\", \"from_encoding\")\n\n        if from_encoding and isinstance(markup, unicode):\n            warnings.warn(\"You provided Unicode markup but also provided a value for from_encoding. Your from_encoding will be ignored.\")\n            from_encoding = None\n\n        if len(kwargs) > 0:\n            arg = kwargs.keys().pop()\n            raise TypeError(\n                \"__init__() got an unexpected keyword argument '%s'\" % arg)\n\n        if builder is None:\n            original_features = features\n            if isinstance(features, basestring):\n                features = [features]\n            if features is None or len(features) == 0:\n                features = self.DEFAULT_BUILDER_FEATURES\n            builder_class = builder_registry.lookup(*features)\n            if builder_class is None:\n                raise FeatureNotFound(\n                    \"Couldn't find a tree builder with the features you \"\n                    \"requested: %s. 
Do you need to install a parser library?\"\n                    % \",\".join(features))\n            builder = builder_class()\n            if not (original_features == builder.NAME or\n                    original_features in builder.ALTERNATE_NAMES):\n                if builder.is_xml:\n                    markup_type = \"XML\"\n                else:\n                    markup_type = \"HTML\"\n\n                caller = traceback.extract_stack()[0]\n                filename = caller[0]\n                line_number = caller[1]\n                warnings.warn(self.NO_PARSER_SPECIFIED_WARNING % dict(\n                    filename=filename,\n                    line_number=line_number,\n                    parser=builder.NAME,\n                    markup_type=markup_type))\n\n        self.builder = builder\n        self.is_xml = builder.is_xml\n        self.known_xml = self.is_xml\n        self.builder.soup = self\n\n        self.parse_only = parse_only\n\n        if hasattr(markup, 'read'):        # It's a file-type object.\n            markup = markup.read()\n        elif len(markup) <= 256 and (\n                (isinstance(markup, bytes) and not b'<' in markup)\n                or (isinstance(markup, unicode) and not u'<' in markup)\n        ):\n            # Print out warnings for a couple beginner problems\n            # involving passing non-markup to Beautiful Soup.\n            # Beautiful Soup will still parse the input as markup,\n            # just in case that's what the user really wants.\n            if (isinstance(markup, unicode)\n                and not os.path.supports_unicode_filenames):\n                possible_filename = markup.encode(\"utf8\")\n            else:\n                possible_filename = markup\n            is_file = False\n            try:\n                is_file = os.path.exists(possible_filename)\n            except Exception, e:\n                # This is almost certainly a problem involving\n                # characters not 
valid in filenames on this\n                # system. Just let it go.\n                pass\n            if is_file:\n                if isinstance(markup, unicode):\n                    markup = markup.encode(\"utf8\")\n                warnings.warn(\n                    '\"%s\" looks like a filename, not markup. You should '\n                    'probably open this file and pass the filehandle into '\n                    'Beautiful Soup.' % markup)\n            self._check_markup_is_url(markup)\n\n        for (self.markup, self.original_encoding, self.declared_html_encoding,\n         self.contains_replacement_characters) in (\n             self.builder.prepare_markup(\n                 markup, from_encoding, exclude_encodings=exclude_encodings)):\n            self.reset()\n            try:\n                self._feed()\n                break\n            except ParserRejectedMarkup:\n                pass\n\n        # Clear out the markup and remove the builder's circular\n        # reference to this object.\n        self.markup = None\n        self.builder.soup = None\n\n    def __copy__(self):\n        copy = type(self)(\n            self.encode('utf-8'), builder=self.builder, from_encoding='utf-8'\n        )\n\n        # Although we encoded the tree to UTF-8, that may not have\n        # been the encoding of the original markup. Set the copy's\n        # .original_encoding to reflect the original object's\n        # .original_encoding.\n        copy.original_encoding = self.original_encoding\n        return copy\n\n    def __getstate__(self):\n        # Frequently a tree builder can't be pickled.\n        d = dict(self.__dict__)\n        if 'builder' in d and not self.builder.picklable:\n            d['builder'] = None\n        return d\n\n    @staticmethod\n    def _check_markup_is_url(markup):\n        \"\"\" \n        Check if markup looks like it's actually a url and raise a warning \n        if so. 
Markup can be unicode or str (py2) / bytes (py3).\n        \"\"\"\n        if isinstance(markup, bytes):\n            space = b' '\n            cant_start_with = (b\"http:\", b\"https:\")\n        elif isinstance(markup, unicode):\n            space = u' '\n            cant_start_with = (u\"http:\", u\"https:\")\n        else:\n            return\n\n        if any(markup.startswith(prefix) for prefix in cant_start_with):\n            if not space in markup:\n                if isinstance(markup, bytes):\n                    decoded_markup = markup.decode('utf-8', 'replace')\n                else:\n                    decoded_markup = markup\n                warnings.warn(\n                    '\"%s\" looks like a URL. Beautiful Soup is not an'\n                    ' HTTP client. You should probably use an HTTP client like'\n                    ' requests to get the document behind the URL, and feed'\n                    ' that document to Beautiful Soup.' % decoded_markup\n                )\n\n    def _feed(self):\n        # Convert the document to Unicode.\n        self.builder.reset()\n\n        self.builder.feed(self.markup)\n        # Close out any unfinished strings and close all the open tags.\n        self.endData()\n        while self.currentTag.name != self.ROOT_TAG_NAME:\n            self.popTag()\n\n    def reset(self):\n        Tag.__init__(self, self, self.builder, self.ROOT_TAG_NAME)\n        self.hidden = 1\n        self.builder.reset()\n        self.current_data = []\n        self.currentTag = None\n        self.tagStack = []\n        self.preserve_whitespace_tag_stack = []\n        self.pushTag(self)\n\n    def new_tag(self, name, namespace=None, nsprefix=None, **attrs):\n        \"\"\"Create a new tag associated with this soup.\"\"\"\n        return Tag(None, self.builder, name, namespace, nsprefix, attrs)\n\n    def new_string(self, s, subclass=NavigableString):\n        \"\"\"Create a new NavigableString associated with this soup.\"\"\"\n        
return subclass(s)\n\n    def insert_before(self, successor):\n        raise NotImplementedError(\"BeautifulSoup objects don't support insert_before().\")\n\n    def insert_after(self, successor):\n        raise NotImplementedError(\"BeautifulSoup objects don't support insert_after().\")\n\n    def popTag(self):\n        tag = self.tagStack.pop()\n        if self.preserve_whitespace_tag_stack and tag == self.preserve_whitespace_tag_stack[-1]:\n            self.preserve_whitespace_tag_stack.pop()\n        #print \"Pop\", tag.name\n        if self.tagStack:\n            self.currentTag = self.tagStack[-1]\n        return self.currentTag\n\n    def pushTag(self, tag):\n        #print \"Push\", tag.name\n        if self.currentTag:\n            self.currentTag.contents.append(tag)\n        self.tagStack.append(tag)\n        self.currentTag = self.tagStack[-1]\n        if tag.name in self.builder.preserve_whitespace_tags:\n            self.preserve_whitespace_tag_stack.append(tag)\n\n    def endData(self, containerClass=NavigableString):\n        if self.current_data:\n            current_data = u''.join(self.current_data)\n            # If whitespace is not preserved, and this string contains\n            # nothing but ASCII spaces, replace it with a single space\n            # or newline.\n            if not self.preserve_whitespace_tag_stack:\n                strippable = True\n                for i in current_data:\n                    if i not in self.ASCII_SPACES:\n                        strippable = False\n                        break\n                if strippable:\n                    if '\\n' in current_data:\n                        current_data = '\\n'\n                    else:\n                        current_data = ' '\n\n            # Reset the data collector.\n            self.current_data = []\n\n            # Should we add this string to the tree at all?\n            if self.parse_only and len(self.tagStack) <= 1 and \\\n                   (not 
self.parse_only.text or \\\n                    not self.parse_only.search(current_data)):\n                return\n\n            o = containerClass(current_data)\n            self.object_was_parsed(o)\n\n    def object_was_parsed(self, o, parent=None, most_recent_element=None):\n        \"\"\"Add an object to the parse tree.\"\"\"\n        parent = parent or self.currentTag\n        previous_element = most_recent_element or self._most_recent_element\n\n        next_element = previous_sibling = next_sibling = None\n        if isinstance(o, Tag):\n            next_element = o.next_element\n            next_sibling = o.next_sibling\n            previous_sibling = o.previous_sibling\n            if not previous_element:\n                previous_element = o.previous_element\n\n        o.setup(parent, previous_element, next_element, previous_sibling, next_sibling)\n\n        self._most_recent_element = o\n        parent.contents.append(o)\n\n        if parent.next_sibling:\n            # This node is being inserted into an element that has\n            # already been parsed. 
Deal with any dangling references.\n            index = len(parent.contents)-1\n            while index >= 0:\n                if parent.contents[index] is o:\n                    break\n                index -= 1\n            else:\n                raise ValueError(\n                    \"Error building tree: supposedly %r was inserted \"\n                    \"into %r after the fact, but I don't see it!\" % (\n                        o, parent\n                    )\n                )\n            if index == 0:\n                previous_element = parent\n                previous_sibling = None\n            else:\n                previous_element = previous_sibling = parent.contents[index-1]\n            if index == len(parent.contents)-1:\n                next_element = parent.next_sibling\n                next_sibling = None\n            else:\n                next_element = next_sibling = parent.contents[index+1]\n\n            o.previous_element = previous_element\n            if previous_element:\n                previous_element.next_element = o\n            o.next_element = next_element\n            if next_element:\n                next_element.previous_element = o\n            o.next_sibling = next_sibling\n            if next_sibling:\n                next_sibling.previous_sibling = o\n            o.previous_sibling = previous_sibling\n            if previous_sibling:\n                previous_sibling.next_sibling = o\n\n    def _popToTag(self, name, nsprefix=None, inclusivePop=True):\n        \"\"\"Pops the tag stack up to and including the most recent\n        instance of the given tag. 
If inclusivePop is false, pops the tag\n        stack up to but *not* including the most recent instance of\n        the given tag.\"\"\"\n        #print \"Popping to %s\" % name\n        if name == self.ROOT_TAG_NAME:\n            # The BeautifulSoup object itself can never be popped.\n            return\n\n        most_recently_popped = None\n\n        stack_size = len(self.tagStack)\n        for i in range(stack_size - 1, 0, -1):\n            t = self.tagStack[i]\n            if (name == t.name and nsprefix == t.prefix):\n                if inclusivePop:\n                    most_recently_popped = self.popTag()\n                break\n            most_recently_popped = self.popTag()\n\n        return most_recently_popped\n\n    def handle_starttag(self, name, namespace, nsprefix, attrs):\n        \"\"\"Push a start tag on to the stack.\n\n        If this method returns None, the tag was rejected by the\n        SoupStrainer. You should proceed as if the tag had not occurred\n        in the document. 
For instance, if this was a self-closing tag,\n        don't call handle_endtag.\n        \"\"\"\n\n        # print \"Start tag %s: %s\" % (name, attrs)\n        self.endData()\n\n        if (self.parse_only and len(self.tagStack) <= 1\n            and (self.parse_only.text\n                 or not self.parse_only.search_tag(name, attrs))):\n            return None\n\n        tag = Tag(self, self.builder, name, namespace, nsprefix, attrs,\n                  self.currentTag, self._most_recent_element)\n        if tag is None:\n            return tag\n        if self._most_recent_element:\n            self._most_recent_element.next_element = tag\n        self._most_recent_element = tag\n        self.pushTag(tag)\n        return tag\n\n    def handle_endtag(self, name, nsprefix=None):\n        #print \"End tag: \" + name\n        self.endData()\n        self._popToTag(name, nsprefix)\n\n    def handle_data(self, data):\n        self.current_data.append(data)\n\n    def decode(self, pretty_print=False,\n               eventual_encoding=DEFAULT_OUTPUT_ENCODING,\n               formatter=\"minimal\"):\n        \"\"\"Returns a string or Unicode representation of this document.\n        To get Unicode, pass None for encoding.\"\"\"\n\n        if self.is_xml:\n            # Print the XML declaration\n            encoding_part = ''\n            if eventual_encoding != None:\n                encoding_part = ' encoding=\"%s\"' % eventual_encoding\n            prefix = u'<?xml version=\"1.0\"%s?>\\n' % encoding_part\n        else:\n            prefix = u''\n        if not pretty_print:\n            indent_level = None\n        else:\n            indent_level = 0\n        return prefix + super(BeautifulSoup, self).decode(\n            indent_level, eventual_encoding, formatter)\n\n# Alias to make it easier to type import: 'from bs4 import _soup'\n_s = BeautifulSoup\n_soup = BeautifulSoup\n\nclass BeautifulStoneSoup(BeautifulSoup):\n    \"\"\"Deprecated interface to an XML 
parser.\"\"\"\n\n    def __init__(self, *args, **kwargs):\n        kwargs['features'] = 'xml'\n        warnings.warn(\n            'The BeautifulStoneSoup class is deprecated. Instead of using '\n            'it, pass features=\"xml\" into the BeautifulSoup constructor.')\n        super(BeautifulStoneSoup, self).__init__(*args, **kwargs)\n\n\nclass StopParsing(Exception):\n    pass\n\nclass FeatureNotFound(ValueError):\n    pass\n\n\n#By default, act as an HTML pretty-printer.\nif __name__ == '__main__':\n    import sys\n    soup = BeautifulSoup(sys.stdin)\n    print soup.prettify()\n"
  },
  {
    "path": "parallax_svg_tools/bs4/builder/__init__.py",
    "content": "# Use of this source code is governed by a BSD-style license that can be\n# found in the LICENSE file.\n\nfrom collections import defaultdict\nimport itertools\nimport sys\nfrom bs4.element import (\n    CharsetMetaAttributeValue,\n    ContentMetaAttributeValue,\n    HTMLAwareEntitySubstitution,\n    whitespace_re\n    )\n\n__all__ = [\n    'HTMLTreeBuilder',\n    'SAXTreeBuilder',\n    'TreeBuilder',\n    'TreeBuilderRegistry',\n    ]\n\n# Some useful features for a TreeBuilder to have.\nFAST = 'fast'\nPERMISSIVE = 'permissive'\nSTRICT = 'strict'\nXML = 'xml'\nHTML = 'html'\nHTML_5 = 'html5'\n\n\nclass TreeBuilderRegistry(object):\n\n    def __init__(self):\n        self.builders_for_feature = defaultdict(list)\n        self.builders = []\n\n    def register(self, treebuilder_class):\n        \"\"\"Register a treebuilder based on its advertised features.\"\"\"\n        for feature in treebuilder_class.features:\n            self.builders_for_feature[feature].insert(0, treebuilder_class)\n        self.builders.insert(0, treebuilder_class)\n\n    def lookup(self, *features):\n        if len(self.builders) == 0:\n            # There are no builders at all.\n            return None\n\n        if len(features) == 0:\n            # They didn't ask for any features. 
Give them the most\n            # recently registered builder.\n            return self.builders[0]\n\n        # Go down the list of features in order, and eliminate any builders\n        # that don't match every feature.\n        features = list(features)\n        features.reverse()\n        candidates = None\n        candidate_set = None\n        while len(features) > 0:\n            feature = features.pop()\n            we_have_the_feature = self.builders_for_feature.get(feature, [])\n            if len(we_have_the_feature) > 0:\n                if candidates is None:\n                    candidates = we_have_the_feature\n                    candidate_set = set(candidates)\n                else:\n                    # Eliminate any candidates that don't have this feature.\n                    candidate_set = candidate_set.intersection(\n                        set(we_have_the_feature))\n\n        # The only valid candidates are the ones in candidate_set.\n        # Go through the original list of candidates and pick the first one\n        # that's in candidate_set.\n        if candidate_set is None:\n            return None\n        for candidate in candidates:\n            if candidate in candidate_set:\n                return candidate\n        return None\n\n# The BeautifulSoup class will take feature lists from developers and use them\n# to look up builders in this registry.\nbuilder_registry = TreeBuilderRegistry()\n\nclass TreeBuilder(object):\n    \"\"\"Turn a document into a Beautiful Soup object tree.\"\"\"\n\n    NAME = \"[Unknown tree builder]\"\n    ALTERNATE_NAMES = []\n    features = []\n\n    is_xml = False\n    picklable = False\n    preserve_whitespace_tags = set()\n    empty_element_tags = None # A tag will be considered an empty-element\n                              # tag when and only when it has no contents.\n\n    # A value for these tag/attribute combinations is a space- or\n    # comma-separated list of CDATA, rather than a single 
CDATA.\n    cdata_list_attributes = {}\n\n\n    def __init__(self):\n        self.soup = None\n\n    def reset(self):\n        pass\n\n    def can_be_empty_element(self, tag_name):\n        \"\"\"Might a tag with this name be an empty-element tag?\n\n        The final markup may or may not actually present this tag as\n        self-closing.\n\n        For instance: an HTMLBuilder does not consider a <p> tag to be\n        an empty-element tag (it's not in\n        HTMLBuilder.empty_element_tags). This means an empty <p> tag\n        will be presented as \"<p></p>\", not \"<p />\".\n\n        The default implementation has no opinion about which tags are\n        empty-element tags, so a tag will be presented as an\n        empty-element tag if and only if it has no contents.\n        \"<foo></foo>\" will become \"<foo />\", and \"<foo>bar</foo>\" will\n        be left alone.\n        \"\"\"\n        if self.empty_element_tags is None:\n            return True\n        return tag_name in self.empty_element_tags\n\n    def feed(self, markup):\n        raise NotImplementedError()\n\n    def prepare_markup(self, markup, user_specified_encoding=None,\n                       document_declared_encoding=None):\n        return markup, None, None, False\n\n    def test_fragment_to_document(self, fragment):\n        \"\"\"Wrap an HTML fragment to make it look like a document.\n\n        Different parsers do this differently. For instance, lxml\n        introduces an empty <head> tag, and html5lib\n        doesn't. 
Abstracting this away lets us write simple tests\n        which run HTML fragments through the parser and compare the\n        results against other HTML fragments.\n\n        This method should not be used outside of tests.\n        \"\"\"\n        return fragment\n\n    def set_up_substitutions(self, tag):\n        return False\n\n    def _replace_cdata_list_attribute_values(self, tag_name, attrs):\n        \"\"\"Replaces class=\"foo bar\" with class=[\"foo\", \"bar\"]\n\n        Modifies its input in place.\n        \"\"\"\n        if not attrs:\n            return attrs\n        if self.cdata_list_attributes:\n            universal = self.cdata_list_attributes.get('*', [])\n            tag_specific = self.cdata_list_attributes.get(\n                tag_name.lower(), None)\n            for attr in attrs.keys():\n                if attr in universal or (tag_specific and attr in tag_specific):\n                    # We have a \"class\"-type attribute whose string\n                    # value is a whitespace-separated list of\n                    # values. Split it into a list.\n                    value = attrs[attr]\n                    if isinstance(value, basestring):\n                        values = whitespace_re.split(value)\n                    else:\n                        # html5lib sometimes calls setAttributes twice\n                        # for the same tag when rearranging the parse\n                        # tree. On the second call the attribute value\n                        # here is already a list.  
If this happens,\n                        # leave the value alone rather than trying to\n                        # split it again.\n                        values = value\n                    attrs[attr] = values\n        return attrs\n\nclass SAXTreeBuilder(TreeBuilder):\n    \"\"\"A Beautiful Soup treebuilder that listens for SAX events.\"\"\"\n\n    def feed(self, markup):\n        raise NotImplementedError()\n\n    def close(self):\n        pass\n\n    def startElement(self, name, attrs):\n        attrs = dict((key[1], value) for key, value in list(attrs.items()))\n        #print \"Start %s, %r\" % (name, attrs)\n        self.soup.handle_starttag(name, attrs)\n\n    def endElement(self, name):\n        #print \"End %s\" % name\n        self.soup.handle_endtag(name)\n\n    def startElementNS(self, nsTuple, nodeName, attrs):\n        # Throw away (ns, nodeName) for now.\n        self.startElement(nodeName, attrs)\n\n    def endElementNS(self, nsTuple, nodeName):\n        # Throw away (ns, nodeName) for now.\n        self.endElement(nodeName)\n        #handler.endElementNS((ns, node.nodeName), node.nodeName)\n\n    def startPrefixMapping(self, prefix, nodeValue):\n        # Ignore the prefix for now.\n        pass\n\n    def endPrefixMapping(self, prefix):\n        # Ignore the prefix for now.\n        # handler.endPrefixMapping(prefix)\n        pass\n\n    def characters(self, content):\n        self.soup.handle_data(content)\n\n    def startDocument(self):\n        pass\n\n    def endDocument(self):\n        pass\n\n\nclass HTMLTreeBuilder(TreeBuilder):\n    \"\"\"This TreeBuilder knows facts about HTML.\n\n    Such as which tags are empty-element tags.\n    \"\"\"\n\n    preserve_whitespace_tags = HTMLAwareEntitySubstitution.preserve_whitespace_tags\n    empty_element_tags = set(['br' , 'hr', 'input', 'img', 'meta',\n                              'spacer', 'link', 'frame', 'base'])\n\n    # The HTML standard defines these attributes as containing a\n    # 
space-separated list of values, not a single value. That is,\n    # class=\"foo bar\" means that the 'class' attribute has two values,\n    # 'foo' and 'bar', not the single value 'foo bar'.  When we\n    # encounter one of these attributes, we will parse its value into\n    # a list of values if possible. Upon output, the list will be\n    # converted back into a string.\n    cdata_list_attributes = {\n        \"*\" : ['class', 'accesskey', 'dropzone'],\n        \"a\" : ['rel', 'rev'],\n        \"link\" :  ['rel', 'rev'],\n        \"td\" : [\"headers\"],\n        \"th\" : [\"headers\"],\n        \"form\" : [\"accept-charset\"],\n        \"object\" : [\"archive\"],\n\n        # These are HTML5 specific, as are *.accesskey and *.dropzone above.\n        \"area\" : [\"rel\"],\n        \"icon\" : [\"sizes\"],\n        \"iframe\" : [\"sandbox\"],\n        \"output\" : [\"for\"],\n        }\n\n    def set_up_substitutions(self, tag):\n        # We are only interested in <meta> tags\n        if tag.name != 'meta':\n            return False\n\n        http_equiv = tag.get('http-equiv')\n        content = tag.get('content')\n        charset = tag.get('charset')\n\n        # We are interested in <meta> tags that say what encoding the\n        # document was originally in. This means HTML 5-style <meta>\n        # tags that provide the \"charset\" attribute. 
It also means\n        # HTML 4-style <meta> tags that provide the \"content\"\n        # attribute and have \"http-equiv\" set to \"content-type\".\n        #\n        # In both cases we will replace the value of the appropriate\n        # attribute with a standin object that can take on any\n        # encoding.\n        meta_encoding = None\n        if charset is not None:\n            # HTML 5 style:\n            # <meta charset=\"utf8\">\n            meta_encoding = charset\n            tag['charset'] = CharsetMetaAttributeValue(charset)\n\n        elif (content is not None and http_equiv is not None\n              and http_equiv.lower() == 'content-type'):\n            # HTML 4 style:\n            # <meta http-equiv=\"content-type\" content=\"text/html; charset=utf8\">\n            tag['content'] = ContentMetaAttributeValue(content)\n\n        return (meta_encoding is not None)\n\ndef register_treebuilders_from(module):\n    \"\"\"Copy TreeBuilders from the given module into this module.\"\"\"\n    # I'm fairly sure this is not the best way to do this.\n    this_module = sys.modules['bs4.builder']\n    for name in module.__all__:\n        obj = getattr(module, name)\n\n        if issubclass(obj, TreeBuilder):\n            setattr(this_module, name, obj)\n            this_module.__all__.append(name)\n            # Register the builder while we're at it.\n            this_module.builder_registry.register(obj)\n\nclass ParserRejectedMarkup(Exception):\n    pass\n\n# Builders are registered in reverse order of priority, so that custom\n# builder registrations will take precedence. In general, we want lxml\n# to take precedence over html5lib, because it's faster. And we only\n# want to use HTMLParser as a last result.\nfrom . import _htmlparser\nregister_treebuilders_from(_htmlparser)\ntry:\n    from . import _html5lib\n    register_treebuilders_from(_html5lib)\nexcept ImportError:\n    # They don't have html5lib installed.\n    pass\ntry:\n    from . 
import _lxml\n    register_treebuilders_from(_lxml)\nexcept ImportError:\n    # They don't have lxml installed.\n    pass\n"
  },
  {
    "path": "parallax_svg_tools/bs4/builder/_html5lib.py",
    "content": "# Use of this source code is governed by a BSD-style license that can be\n# found in the LICENSE file.\n\n__all__ = [\n    'HTML5TreeBuilder',\n    ]\n\nimport warnings\nfrom bs4.builder import (\n    PERMISSIVE,\n    HTML,\n    HTML_5,\n    HTMLTreeBuilder,\n    )\nfrom bs4.element import (\n    NamespacedAttribute,\n    whitespace_re,\n)\nimport html5lib\nfrom html5lib.constants import namespaces\nfrom bs4.element import (\n    Comment,\n    Doctype,\n    NavigableString,\n    Tag,\n    )\n\ntry:\n    # Pre-0.99999999\n    from html5lib.treebuilders import _base as treebuilder_base\n    new_html5lib = False\nexcept ImportError, e:\n    # 0.99999999 and up\n    from html5lib.treebuilders import base as treebuilder_base\n    new_html5lib = True\n\nclass HTML5TreeBuilder(HTMLTreeBuilder):\n    \"\"\"Use html5lib to build a tree.\"\"\"\n\n    NAME = \"html5lib\"\n\n    features = [NAME, PERMISSIVE, HTML_5, HTML]\n\n    def prepare_markup(self, markup, user_specified_encoding,\n                       document_declared_encoding=None, exclude_encodings=None):\n        # Store the user-specified encoding for use later on.\n        self.user_specified_encoding = user_specified_encoding\n\n        # document_declared_encoding and exclude_encodings aren't used\n        # ATM because the html5lib TreeBuilder doesn't use\n        # UnicodeDammit.\n        if exclude_encodings:\n            warnings.warn(\"You provided a value for exclude_encoding, but the html5lib tree builder doesn't support exclude_encoding.\")\n        yield (markup, None, None, False)\n\n    # These methods are defined by Beautiful Soup.\n    def feed(self, markup):\n        if self.soup.parse_only is not None:\n            warnings.warn(\"You provided a value for parse_only, but the html5lib tree builder doesn't support parse_only. 
The entire document will be parsed.\")\n        parser = html5lib.HTMLParser(tree=self.create_treebuilder)\n\n        extra_kwargs = dict()\n        if not isinstance(markup, unicode):\n            if new_html5lib:\n                extra_kwargs['override_encoding'] = self.user_specified_encoding\n            else:\n                extra_kwargs['encoding'] = self.user_specified_encoding\n        doc = parser.parse(markup, **extra_kwargs)\n\n        # Set the character encoding detected by the tokenizer.\n        if isinstance(markup, unicode):\n            # We need to special-case this because html5lib sets\n            # charEncoding to UTF-8 if it gets Unicode input.\n            doc.original_encoding = None\n        else:\n            original_encoding = parser.tokenizer.stream.charEncoding[0]\n            if not isinstance(original_encoding, basestring):\n                # In 0.99999999 and up, the encoding is an html5lib\n                # Encoding object. We want to use a string for compatibility\n                # with other tree builders.\n                original_encoding = original_encoding.name\n            doc.original_encoding = original_encoding\n\n    def create_treebuilder(self, namespaceHTMLElements):\n        self.underlying_builder = TreeBuilderForHtml5lib(\n            self.soup, namespaceHTMLElements)\n        return self.underlying_builder\n\n    def test_fragment_to_document(self, fragment):\n        \"\"\"See `TreeBuilder`.\"\"\"\n        return u'<html><head></head><body>%s</body></html>' % fragment\n\n\nclass TreeBuilderForHtml5lib(treebuilder_base.TreeBuilder):\n\n    def __init__(self, soup, namespaceHTMLElements):\n        self.soup = soup\n        super(TreeBuilderForHtml5lib, self).__init__(namespaceHTMLElements)\n\n    def documentClass(self):\n        self.soup.reset()\n        return Element(self.soup, self.soup, None)\n\n    def insertDoctype(self, token):\n        name = token[\"name\"]\n        publicId = token[\"publicId\"]\n   
     systemId = token[\"systemId\"]\n\n        doctype = Doctype.for_name_and_ids(name, publicId, systemId)\n        self.soup.object_was_parsed(doctype)\n\n    def elementClass(self, name, namespace):\n        tag = self.soup.new_tag(name, namespace)\n        return Element(tag, self.soup, namespace)\n\n    def commentClass(self, data):\n        return TextNode(Comment(data), self.soup)\n\n    def fragmentClass(self):\n        self.soup = BeautifulSoup(\"\")\n        self.soup.name = \"[document_fragment]\"\n        return Element(self.soup, self.soup, None)\n\n    def appendChild(self, node):\n        # XXX This code is not covered by the BS4 tests.\n        self.soup.append(node.element)\n\n    def getDocument(self):\n        return self.soup\n\n    def getFragment(self):\n        return treebuilder_base.TreeBuilder.getFragment(self).element\n\nclass AttrList(object):\n    def __init__(self, element):\n        self.element = element\n        self.attrs = dict(self.element.attrs)\n    def __iter__(self):\n        return list(self.attrs.items()).__iter__()\n    def __setitem__(self, name, value):\n        # If this attribute is a multi-valued attribute for this element,\n        # turn its value into a list.\n        list_attr = HTML5TreeBuilder.cdata_list_attributes\n        if (name in list_attr['*']\n            or (self.element.name in list_attr\n                and name in list_attr[self.element.name])):\n            # A node that is being cloned may have already undergone\n            # this procedure.\n            if not isinstance(value, list):\n                value = whitespace_re.split(value)\n        self.element[name] = value\n    def items(self):\n        return list(self.attrs.items())\n    def keys(self):\n        return list(self.attrs.keys())\n    def __len__(self):\n        return len(self.attrs)\n    def __getitem__(self, name):\n        return self.attrs[name]\n    def __contains__(self, name):\n        return name in 
list(self.attrs.keys())\n\n\nclass Element(treebuilder_base.Node):\n    def __init__(self, element, soup, namespace):\n        treebuilder_base.Node.__init__(self, element.name)\n        self.element = element\n        self.soup = soup\n        self.namespace = namespace\n\n    def appendChild(self, node):\n        string_child = child = None\n        if isinstance(node, basestring):\n            # Some other piece of code decided to pass in a string\n            # instead of creating a TextElement object to contain the\n            # string.\n            string_child = child = node\n        elif isinstance(node, Tag):\n            # Some other piece of code decided to pass in a Tag\n            # instead of creating an Element object to contain the\n            # Tag.\n            child = node\n        elif node.element.__class__ == NavigableString:\n            string_child = child = node.element\n        else:\n            child = node.element\n\n        if not isinstance(child, basestring) and child.parent is not None:\n            node.element.extract()\n\n        if (string_child and self.element.contents\n            and self.element.contents[-1].__class__ == NavigableString):\n            # We are appending a string onto another string.\n            # TODO This has O(n^2) performance, for input like\n            # \"a</a>a</a>a</a>...\"\n            old_element = self.element.contents[-1]\n            new_element = self.soup.new_string(old_element + string_child)\n            old_element.replace_with(new_element)\n            self.soup._most_recent_element = new_element\n        else:\n            if isinstance(node, basestring):\n                # Create a brand new NavigableString from this string.\n                child = self.soup.new_string(node)\n\n            # Tell Beautiful Soup to act as if it parsed this element\n            # immediately after the parent's last descendant. 
(Or\n            # immediately after the parent, if it has no children.)\n            if self.element.contents:\n                most_recent_element = self.element._last_descendant(False)\n            elif self.element.next_element is not None:\n                # Something from further ahead in the parse tree is\n                # being inserted into this earlier element. This is\n                # very annoying because it means an expensive search\n                # for the last element in the tree.\n                most_recent_element = self.soup._last_descendant()\n            else:\n                most_recent_element = self.element\n\n            self.soup.object_was_parsed(\n                child, parent=self.element,\n                most_recent_element=most_recent_element)\n\n    def getAttributes(self):\n        return AttrList(self.element)\n\n    def setAttributes(self, attributes):\n\n        if attributes is not None and len(attributes) > 0:\n\n            converted_attributes = []\n            for name, value in list(attributes.items()):\n                if isinstance(name, tuple):\n                    new_name = NamespacedAttribute(*name)\n                    del attributes[name]\n                    attributes[new_name] = value\n\n            self.soup.builder._replace_cdata_list_attribute_values(\n                self.name, attributes)\n            for name, value in attributes.items():\n                self.element[name] = value\n\n            # The attributes may contain variables that need substitution.\n            # Call set_up_substitutions manually.\n            #\n            # The Tag constructor called this method when the Tag was created,\n            # but we just set/changed the attributes, so call it again.\n            self.soup.builder.set_up_substitutions(self.element)\n    attributes = property(getAttributes, setAttributes)\n\n    def insertText(self, data, insertBefore=None):\n        if insertBefore:\n            text = 
TextNode(self.soup.new_string(data), self.soup)\n            self.insertBefore(text, insertBefore)\n        else:\n            self.appendChild(data)\n\n    def insertBefore(self, node, refNode):\n        index = self.element.index(refNode.element)\n        if (node.element.__class__ == NavigableString and self.element.contents\n            and self.element.contents[index-1].__class__ == NavigableString):\n            # (See comments in appendChild)\n            old_node = self.element.contents[index-1]\n            new_str = self.soup.new_string(old_node + node.element)\n            old_node.replace_with(new_str)\n        else:\n            self.element.insert(index, node.element)\n            node.parent = self\n\n    def removeChild(self, node):\n        node.element.extract()\n\n    def reparentChildren(self, new_parent):\n        \"\"\"Move all of this tag's children into another tag.\"\"\"\n        # print \"MOVE\", self.element.contents\n        # print \"FROM\", self.element\n        # print \"TO\", new_parent.element\n        element = self.element\n        new_parent_element = new_parent.element\n        # Determine what this tag's next_element will be once all the children\n        # are removed.\n        final_next_element = element.next_sibling\n\n        new_parents_last_descendant = new_parent_element._last_descendant(False, False)\n        if len(new_parent_element.contents) > 0:\n            # The new parent already contains children. 
We will be\n            # appending this tag's children to the end.\n            new_parents_last_child = new_parent_element.contents[-1]\n            new_parents_last_descendant_next_element = new_parents_last_descendant.next_element\n        else:\n            # The new parent contains no children.\n            new_parents_last_child = None\n            new_parents_last_descendant_next_element = new_parent_element.next_element\n\n        to_append = element.contents\n        append_after = new_parent_element.contents\n        if len(to_append) > 0:\n            # Set the first child's previous_element and previous_sibling\n            # to elements within the new parent\n            first_child = to_append[0]\n            if new_parents_last_descendant:\n                first_child.previous_element = new_parents_last_descendant\n            else:\n                first_child.previous_element = new_parent_element\n            first_child.previous_sibling = new_parents_last_child\n            if new_parents_last_descendant:\n                new_parents_last_descendant.next_element = first_child\n            else:\n                new_parent_element.next_element = first_child\n            if new_parents_last_child:\n                new_parents_last_child.next_sibling = first_child\n\n            # Fix the last child's next_element and next_sibling\n            last_child = to_append[-1]\n            last_child.next_element = new_parents_last_descendant_next_element\n            if new_parents_last_descendant_next_element:\n                new_parents_last_descendant_next_element.previous_element = last_child\n            last_child.next_sibling = None\n\n        for child in to_append:\n            child.parent = new_parent_element\n            new_parent_element.contents.append(child)\n\n        # Now that this element has no children, change its .next_element.\n        element.contents = []\n        element.next_element = final_next_element\n\n        # print 
\"DONE WITH MOVE\"\n        # print \"FROM\", self.element\n        # print \"TO\", new_parent_element\n\n    def cloneNode(self):\n        tag = self.soup.new_tag(self.element.name, self.namespace)\n        node = Element(tag, self.soup, self.namespace)\n        for key,value in self.attributes:\n            node.attributes[key] = value\n        return node\n\n    def hasContent(self):\n        return self.element.contents\n\n    def getNameTuple(self):\n        if self.namespace == None:\n            return namespaces[\"html\"], self.name\n        else:\n            return self.namespace, self.name\n\n    nameTuple = property(getNameTuple)\n\nclass TextNode(Element):\n    def __init__(self, element, soup):\n        treebuilder_base.Node.__init__(self, None)\n        self.element = element\n        self.soup = soup\n\n    def cloneNode(self):\n        raise NotImplementedError\n"
  },
  {
    "path": "parallax_svg_tools/bs4/builder/_htmlparser.py",
    "content": "\"\"\"Use the HTMLParser library to parse HTML files that aren't too bad.\"\"\"\n\n# Use of this source code is governed by a BSD-style license that can be\n# found in the LICENSE file.\n\n__all__ = [\n    'HTMLParserTreeBuilder',\n    ]\n\nfrom HTMLParser import HTMLParser\n\ntry:\n    from HTMLParser import HTMLParseError\nexcept ImportError, e:\n    # HTMLParseError is removed in Python 3.5. Since it can never be\n    # thrown in 3.5, we can just define our own class as a placeholder.\n    class HTMLParseError(Exception):\n        pass\n\nimport sys\nimport warnings\n\n# Starting in Python 3.2, the HTMLParser constructor takes a 'strict'\n# argument, which we'd like to set to False. Unfortunately,\n# http://bugs.python.org/issue13273 makes strict=True a better bet\n# before Python 3.2.3.\n#\n# At the end of this file, we monkeypatch HTMLParser so that\n# strict=True works well on Python 3.2.2.\nmajor, minor, release = sys.version_info[:3]\nCONSTRUCTOR_TAKES_STRICT = major == 3 and minor == 2 and release >= 3\nCONSTRUCTOR_STRICT_IS_DEPRECATED = major == 3 and minor == 3\nCONSTRUCTOR_TAKES_CONVERT_CHARREFS = major == 3 and minor >= 4\n\n\nfrom bs4.element import (\n    CData,\n    Comment,\n    Declaration,\n    Doctype,\n    ProcessingInstruction,\n    )\nfrom bs4.dammit import EntitySubstitution, UnicodeDammit\n\nfrom bs4.builder import (\n    HTML,\n    HTMLTreeBuilder,\n    STRICT,\n    )\n\n\nHTMLPARSER = 'html.parser'\n\nclass BeautifulSoupHTMLParser(HTMLParser):\n    def handle_starttag(self, name, attrs):\n        # XXX namespace\n        attr_dict = {}\n        for key, value in attrs:\n            # Change None attribute values to the empty string\n            # for consistency with the other tree builders.\n            if value is None:\n                value = ''\n            attr_dict[key] = value\n            attrvalue = '\"\"'\n        self.soup.handle_starttag(name, None, None, attr_dict)\n\n    def handle_endtag(self, name):\n      
  self.soup.handle_endtag(name)\n\n    def handle_data(self, data):\n        self.soup.handle_data(data)\n\n    def handle_charref(self, name):\n        # XXX workaround for a bug in HTMLParser. Remove this once\n        # it's fixed in all supported versions.\n        # http://bugs.python.org/issue13633\n        if name.startswith('x'):\n            real_name = int(name.lstrip('x'), 16)\n        elif name.startswith('X'):\n            real_name = int(name.lstrip('X'), 16)\n        else:\n            real_name = int(name)\n\n        try:\n            data = unichr(real_name)\n        except (ValueError, OverflowError), e:\n            data = u\"\\N{REPLACEMENT CHARACTER}\"\n\n        self.handle_data(data)\n\n    def handle_entityref(self, name):\n        character = EntitySubstitution.HTML_ENTITY_TO_CHARACTER.get(name)\n        if character is not None:\n            data = character\n        else:\n            data = \"&%s;\" % name\n        self.handle_data(data)\n\n    def handle_comment(self, data):\n        self.soup.endData()\n        self.soup.handle_data(data)\n        self.soup.endData(Comment)\n\n    def handle_decl(self, data):\n        self.soup.endData()\n        if data.startswith(\"DOCTYPE \"):\n            data = data[len(\"DOCTYPE \"):]\n        elif data == 'DOCTYPE':\n            # i.e. 
\"<!DOCTYPE>\"\n            data = ''\n        self.soup.handle_data(data)\n        self.soup.endData(Doctype)\n\n    def unknown_decl(self, data):\n        if data.upper().startswith('CDATA['):\n            cls = CData\n            data = data[len('CDATA['):]\n        else:\n            cls = Declaration\n        self.soup.endData()\n        self.soup.handle_data(data)\n        self.soup.endData(cls)\n\n    def handle_pi(self, data):\n        self.soup.endData()\n        self.soup.handle_data(data)\n        self.soup.endData(ProcessingInstruction)\n\n\nclass HTMLParserTreeBuilder(HTMLTreeBuilder):\n\n    is_xml = False\n    picklable = True\n    NAME = HTMLPARSER\n    features = [NAME, HTML, STRICT]\n\n    def __init__(self, *args, **kwargs):\n        if CONSTRUCTOR_TAKES_STRICT and not CONSTRUCTOR_STRICT_IS_DEPRECATED:\n            kwargs['strict'] = False\n        if CONSTRUCTOR_TAKES_CONVERT_CHARREFS:\n            kwargs['convert_charrefs'] = False\n        self.parser_args = (args, kwargs)\n\n    def prepare_markup(self, markup, user_specified_encoding=None,\n                       document_declared_encoding=None, exclude_encodings=None):\n        \"\"\"\n        :return: A 4-tuple (markup, original encoding, encoding\n        declared within markup, whether any characters had to be\n        replaced with REPLACEMENT CHARACTER).\n        \"\"\"\n        if isinstance(markup, unicode):\n            yield (markup, None, None, False)\n            return\n\n        try_encodings = [user_specified_encoding, document_declared_encoding]\n        dammit = UnicodeDammit(markup, try_encodings, is_html=True,\n                               exclude_encodings=exclude_encodings)\n        yield (dammit.markup, dammit.original_encoding,\n               dammit.declared_html_encoding,\n               dammit.contains_replacement_characters)\n\n    def feed(self, markup):\n        args, kwargs = self.parser_args\n        parser = BeautifulSoupHTMLParser(*args, **kwargs)\n        
parser.soup = self.soup\n        try:\n            parser.feed(markup)\n        except HTMLParseError, e:\n            warnings.warn(RuntimeWarning(\n                \"Python's built-in HTMLParser cannot parse the given document. This is not a bug in Beautiful Soup. The best solution is to install an external parser (lxml or html5lib), and use Beautiful Soup with that parser. See http://www.crummy.com/software/BeautifulSoup/bs4/doc/#installing-a-parser for help.\"))\n            raise e\n\n# Patch 3.2 versions of HTMLParser earlier than 3.2.3 to use some\n# 3.2.3 code. This ensures they don't treat markup like <p></p> as a\n# string.\n#\n# XXX This code can be removed once most Python 3 users are on 3.2.3.\nif major == 3 and minor == 2 and not CONSTRUCTOR_TAKES_STRICT:\n    import re\n    attrfind_tolerant = re.compile(\n        r'\\s*((?<=[\\'\"\\s])[^\\s/>][^\\s/=>]*)(\\s*=+\\s*'\n        r'(\\'[^\\']*\\'|\"[^\"]*\"|(?![\\'\"])[^>\\s]*))?')\n    HTMLParserTreeBuilder.attrfind_tolerant = attrfind_tolerant\n\n    locatestarttagend = re.compile(r\"\"\"\n  <[a-zA-Z][-.a-zA-Z0-9:_]*          # tag name\n  (?:\\s+                             # whitespace before attribute name\n    (?:[a-zA-Z_][-.:a-zA-Z0-9_]*     # attribute name\n      (?:\\s*=\\s*                     # value indicator\n        (?:'[^']*'                   # LITA-enclosed value\n          |\\\"[^\\\"]*\\\"                # LIT-enclosed value\n          |[^'\\\">\\s]+                # bare value\n         )\n       )?\n     )\n   )*\n  \\s*                                # trailing whitespace\n\"\"\", re.VERBOSE)\n    BeautifulSoupHTMLParser.locatestarttagend = locatestarttagend\n\n    from html.parser import tagfind, attrfind\n\n    def parse_starttag(self, i):\n        self.__starttag_text = None\n        endpos = self.check_for_whole_start_tag(i)\n        if endpos < 0:\n            return endpos\n        rawdata = self.rawdata\n        self.__starttag_text = rawdata[i:endpos]\n\n        # Now parse 
the data between i+1 and j into a tag and attrs\n        attrs = []\n        match = tagfind.match(rawdata, i+1)\n        assert match, 'unexpected call to parse_starttag()'\n        k = match.end()\n        self.lasttag = tag = rawdata[i+1:k].lower()\n        while k < endpos:\n            if self.strict:\n                m = attrfind.match(rawdata, k)\n            else:\n                m = attrfind_tolerant.match(rawdata, k)\n            if not m:\n                break\n            attrname, rest, attrvalue = m.group(1, 2, 3)\n            if not rest:\n                attrvalue = None\n            elif attrvalue[:1] == '\\'' == attrvalue[-1:] or \\\n                 attrvalue[:1] == '\"' == attrvalue[-1:]:\n                attrvalue = attrvalue[1:-1]\n            if attrvalue:\n                attrvalue = self.unescape(attrvalue)\n            attrs.append((attrname.lower(), attrvalue))\n            k = m.end()\n\n        end = rawdata[k:endpos].strip()\n        if end not in (\">\", \"/>\"):\n            lineno, offset = self.getpos()\n            if \"\\n\" in self.__starttag_text:\n                lineno = lineno + self.__starttag_text.count(\"\\n\")\n                offset = len(self.__starttag_text) \\\n                         - self.__starttag_text.rfind(\"\\n\")\n            else:\n                offset = offset + len(self.__starttag_text)\n            if self.strict:\n                self.error(\"junk characters in start tag: %r\"\n                           % (rawdata[k:endpos][:20],))\n            self.handle_data(rawdata[i:endpos])\n            return endpos\n        if end.endswith('/>'):\n            # XHTML-style empty tag: <span attr=\"value\" />\n            self.handle_startendtag(tag, attrs)\n        else:\n            self.handle_starttag(tag, attrs)\n            if tag in self.CDATA_CONTENT_ELEMENTS:\n                self.set_cdata_mode(tag)\n        return endpos\n\n    def set_cdata_mode(self, elem):\n        self.cdata_elem = 
elem.lower()\n        self.interesting = re.compile(r'</\\s*%s\\s*>' % self.cdata_elem, re.I)\n\n    BeautifulSoupHTMLParser.parse_starttag = parse_starttag\n    BeautifulSoupHTMLParser.set_cdata_mode = set_cdata_mode\n\n    CONSTRUCTOR_TAKES_STRICT = True\n"
  },
  {
    "path": "parallax_svg_tools/bs4/builder/_lxml.py",
    "content": "# Use of this source code is governed by a BSD-style license that can be\n# found in the LICENSE file.\n__all__ = [\n    'LXMLTreeBuilderForXML',\n    'LXMLTreeBuilder',\n    ]\n\nfrom io import BytesIO\nfrom StringIO import StringIO\nimport collections\nfrom lxml import etree\nfrom bs4.element import (\n    Comment,\n    Doctype,\n    NamespacedAttribute,\n    ProcessingInstruction,\n    XMLProcessingInstruction,\n)\nfrom bs4.builder import (\n    FAST,\n    HTML,\n    HTMLTreeBuilder,\n    PERMISSIVE,\n    ParserRejectedMarkup,\n    TreeBuilder,\n    XML)\nfrom bs4.dammit import EncodingDetector\n\nLXML = 'lxml'\n\nclass LXMLTreeBuilderForXML(TreeBuilder):\n    DEFAULT_PARSER_CLASS = etree.XMLParser\n\n    is_xml = True\n    processing_instruction_class = XMLProcessingInstruction\n\n    NAME = \"lxml-xml\"\n    ALTERNATE_NAMES = [\"xml\"]\n\n    # Well, it's permissive by XML parser standards.\n    features = [NAME, LXML, XML, FAST, PERMISSIVE]\n\n    CHUNK_SIZE = 512\n\n    # This namespace mapping is specified in the XML Namespace\n    # standard.\n    DEFAULT_NSMAPS = {'http://www.w3.org/XML/1998/namespace' : \"xml\"}\n\n    def default_parser(self, encoding):\n        # This can either return a parser object or a class, which\n        # will be instantiated with default arguments.\n        if self._default_parser is not None:\n            return self._default_parser\n        return etree.XMLParser(\n            target=self, strip_cdata=False, recover=True, encoding=encoding)\n\n    def parser_for(self, encoding):\n        # Use the default parser.\n        parser = self.default_parser(encoding)\n\n        if isinstance(parser, collections.Callable):\n            # Instantiate the parser with default arguments\n            parser = parser(target=self, strip_cdata=False, encoding=encoding)\n        return parser\n\n    def __init__(self, parser=None, empty_element_tags=None):\n        # TODO: Issue a warning if parser is present but not a\n      
  # callable, since that means there's no way to create new\n        # parsers for different encodings.\n        self._default_parser = parser\n        if empty_element_tags is not None:\n            self.empty_element_tags = set(empty_element_tags)\n        self.soup = None\n        self.nsmaps = [self.DEFAULT_NSMAPS]\n\n    def _getNsTag(self, tag):\n        # Split the namespace URL out of a fully-qualified lxml tag\n        # name. Copied from lxml's src/lxml/sax.py.\n        if tag[0] == '{':\n            return tuple(tag[1:].split('}', 1))\n        else:\n            return (None, tag)\n\n    def prepare_markup(self, markup, user_specified_encoding=None,\n                       exclude_encodings=None,\n                       document_declared_encoding=None):\n        \"\"\"\n        :yield: A series of 4-tuples.\n         (markup, encoding, declared encoding,\n          has undergone character replacement)\n\n        Each 4-tuple represents a strategy for parsing the document.\n        \"\"\"\n        # Instead of using UnicodeDammit to convert the bytestring to\n        # Unicode using different encodings, use EncodingDetector to\n        # iterate over the encodings, and tell lxml to try to parse\n        # the document as each one in turn.\n        is_html = not self.is_xml\n        if is_html:\n            self.processing_instruction_class = ProcessingInstruction\n        else:\n            self.processing_instruction_class = XMLProcessingInstruction\n\n        if isinstance(markup, unicode):\n            # We were given Unicode. Maybe lxml can parse Unicode on\n            # this system?\n            yield markup, None, document_declared_encoding, False\n\n        if isinstance(markup, unicode):\n            # No, apparently not. 
Convert the Unicode to UTF-8 and\n            # tell lxml to parse it as UTF-8.\n            yield (markup.encode(\"utf8\"), \"utf8\",\n                   document_declared_encoding, False)\n\n        try_encodings = [user_specified_encoding, document_declared_encoding]\n        detector = EncodingDetector(\n            markup, try_encodings, is_html, exclude_encodings)\n        for encoding in detector.encodings:\n            yield (detector.markup, encoding, document_declared_encoding, False)\n\n    def feed(self, markup):\n        if isinstance(markup, bytes):\n            markup = BytesIO(markup)\n        elif isinstance(markup, unicode):\n            markup = StringIO(markup)\n\n        # Call feed() at least once, even if the markup is empty,\n        # or the parser won't be initialized.\n        data = markup.read(self.CHUNK_SIZE)\n        try:\n            self.parser = self.parser_for(self.soup.original_encoding)\n            self.parser.feed(data)\n            while len(data) != 0:\n                # Now call feed() on the rest of the data, chunk by chunk.\n                data = markup.read(self.CHUNK_SIZE)\n                if len(data) != 0:\n                    self.parser.feed(data)\n            self.parser.close()\n        except (UnicodeDecodeError, LookupError, etree.ParserError), e:\n            raise ParserRejectedMarkup(str(e))\n\n    def close(self):\n        self.nsmaps = [self.DEFAULT_NSMAPS]\n\n    def start(self, name, attrs, nsmap={}):\n        # Make sure attrs is a mutable dict--lxml may send an immutable dictproxy.\n        attrs = dict(attrs)\n        nsprefix = None\n        # Invert each namespace map as it comes in.\n        if len(self.nsmaps) > 1:\n            # There are no new namespaces for this tag, but\n            # non-default namespaces are in play, so we need a\n            # separate tag stack to know when they end.\n            self.nsmaps.append(None)\n        elif len(nsmap) > 0:\n            # A new namespace 
mapping has come into play.\n            inverted_nsmap = dict((value, key) for key, value in nsmap.items())\n            self.nsmaps.append(inverted_nsmap)\n            # Also treat the namespace mapping as a set of attributes on the\n            # tag, so we can recreate it later.\n            attrs = attrs.copy()\n            for prefix, namespace in nsmap.items():\n                attribute = NamespacedAttribute(\n                    \"xmlns\", prefix, \"http://www.w3.org/2000/xmlns/\")\n                attrs[attribute] = namespace\n\n        # Namespaces are in play. Find any attributes that came in\n        # from lxml with namespaces attached to their names, and\n        # turn then into NamespacedAttribute objects.\n        new_attrs = {}\n        for attr, value in attrs.items():\n            namespace, attr = self._getNsTag(attr)\n            if namespace is None:\n                new_attrs[attr] = value\n            else:\n                nsprefix = self._prefix_for_namespace(namespace)\n                attr = NamespacedAttribute(nsprefix, attr, namespace)\n                new_attrs[attr] = value\n        attrs = new_attrs\n\n        namespace, name = self._getNsTag(name)\n        nsprefix = self._prefix_for_namespace(namespace)\n        self.soup.handle_starttag(name, namespace, nsprefix, attrs)\n\n    def _prefix_for_namespace(self, namespace):\n        \"\"\"Find the currently active prefix for the given namespace.\"\"\"\n        if namespace is None:\n            return None\n        for inverted_nsmap in reversed(self.nsmaps):\n            if inverted_nsmap is not None and namespace in inverted_nsmap:\n                return inverted_nsmap[namespace]\n        return None\n\n    def end(self, name):\n        self.soup.endData()\n        completed_tag = self.soup.tagStack[-1]\n        namespace, name = self._getNsTag(name)\n        nsprefix = None\n        if namespace is not None:\n            for inverted_nsmap in reversed(self.nsmaps):\n            
    if inverted_nsmap is not None and namespace in inverted_nsmap:\n                    nsprefix = inverted_nsmap[namespace]\n                    break\n        self.soup.handle_endtag(name, nsprefix)\n        if len(self.nsmaps) > 1:\n            # This tag, or one of its parents, introduced a namespace\n            # mapping, so pop it off the stack.\n            self.nsmaps.pop()\n\n    def pi(self, target, data):\n        self.soup.endData()\n        self.soup.handle_data(target + ' ' + data)\n        self.soup.endData(self.processing_instruction_class)\n\n    def data(self, content):\n        self.soup.handle_data(content)\n\n    def doctype(self, name, pubid, system):\n        self.soup.endData()\n        doctype = Doctype.for_name_and_ids(name, pubid, system)\n        self.soup.object_was_parsed(doctype)\n\n    def comment(self, content):\n        \"Handle comments as Comment objects.\"\n        self.soup.endData()\n        self.soup.handle_data(content)\n        self.soup.endData(Comment)\n\n    def test_fragment_to_document(self, fragment):\n        \"\"\"See `TreeBuilder`.\"\"\"\n        return u'<?xml version=\"1.0\" encoding=\"utf-8\"?>\\n%s' % fragment\n\n\nclass LXMLTreeBuilder(HTMLTreeBuilder, LXMLTreeBuilderForXML):\n\n    NAME = LXML\n    ALTERNATE_NAMES = [\"lxml-html\"]\n\n    features = ALTERNATE_NAMES + [NAME, HTML, FAST, PERMISSIVE]\n    is_xml = False\n    processing_instruction_class = ProcessingInstruction\n\n    def default_parser(self, encoding):\n        return etree.HTMLParser\n\n    def feed(self, markup):\n        encoding = self.soup.original_encoding\n        try:\n            self.parser = self.parser_for(encoding)\n            self.parser.feed(markup)\n            self.parser.close()\n        except (UnicodeDecodeError, LookupError, etree.ParserError), e:\n            raise ParserRejectedMarkup(str(e))\n\n\n    def test_fragment_to_document(self, fragment):\n        \"\"\"See `TreeBuilder`.\"\"\"\n        return 
u'<html><body>%s</body></html>' % fragment\n"
  },
  {
    "path": "parallax_svg_tools/bs4/dammit.py",
    "content": "# -*- coding: utf-8 -*-\n\"\"\"Beautiful Soup bonus library: Unicode, Dammit\n\nThis library converts a bytestream to Unicode through any means\nnecessary. It is heavily based on code from Mark Pilgrim's Universal\nFeed Parser. It works best on XML and HTML, but it does not rewrite the\nXML or HTML to reflect a new encoding; that's the tree builder's job.\n\"\"\"\n# Use of this source code is governed by a BSD-style license that can be\n# found in the LICENSE file.\n__license__ = \"MIT\"\n\nimport codecs\nfrom htmlentitydefs import codepoint2name\nimport re\nimport logging\nimport string\n\n# Import a library to autodetect character encodings.\nchardet_type = None\ntry:\n    # First try the fast C implementation.\n    #  PyPI package: cchardet\n    import cchardet\n    def chardet_dammit(s):\n        return cchardet.detect(s)['encoding']\nexcept ImportError:\n    try:\n        # Fall back to the pure Python implementation\n        #  Debian package: python-chardet\n        #  PyPI package: chardet\n        import chardet\n        def chardet_dammit(s):\n            return chardet.detect(s)['encoding']\n        #import chardet.constants\n        #chardet.constants._debug = 1\n    except ImportError:\n        # No chardet available.\n        def chardet_dammit(s):\n            return None\n\n# Available from http://cjkpython.i18n.org/.\ntry:\n    import iconv_codec\nexcept ImportError:\n    pass\n\nxml_encoding_re = re.compile(\n    '^<\\?.*encoding=[\\'\"](.*?)[\\'\"].*\\?>'.encode(), re.I)\nhtml_meta_re = re.compile(\n    '<\\s*meta[^>]+charset\\s*=\\s*[\"\\']?([^>]*?)[ /;\\'\">]'.encode(), re.I)\n\nclass EntitySubstitution(object):\n\n    \"\"\"Substitute XML or HTML entities for the corresponding characters.\"\"\"\n\n    def _populate_class_variables():\n        lookup = {}\n        reverse_lookup = {}\n        characters_for_re = []\n        for codepoint, name in list(codepoint2name.items()):\n            character = unichr(codepoint)\n          
  if codepoint != 34:\n                # There's no point in turning the quotation mark into\n                # &quot;, unless it happens within an attribute value, which\n                # is handled elsewhere.\n                characters_for_re.append(character)\n                lookup[character] = name\n            # But we do want to turn &quot; into the quotation mark.\n            reverse_lookup[name] = character\n        re_definition = \"[%s]\" % \"\".join(characters_for_re)\n        return lookup, reverse_lookup, re.compile(re_definition)\n    (CHARACTER_TO_HTML_ENTITY, HTML_ENTITY_TO_CHARACTER,\n     CHARACTER_TO_HTML_ENTITY_RE) = _populate_class_variables()\n\n    CHARACTER_TO_XML_ENTITY = {\n        \"'\": \"apos\",\n        '\"': \"quot\",\n        \"&\": \"amp\",\n        \"<\": \"lt\",\n        \">\": \"gt\",\n        }\n\n    BARE_AMPERSAND_OR_BRACKET = re.compile(\"([<>]|\"\n                                           \"&(?!#\\d+;|#x[0-9a-fA-F]+;|\\w+;)\"\n                                           \")\")\n\n    AMPERSAND_OR_BRACKET = re.compile(\"([<>&])\")\n\n    @classmethod\n    def _substitute_html_entity(cls, matchobj):\n        entity = cls.CHARACTER_TO_HTML_ENTITY.get(matchobj.group(0))\n        return \"&%s;\" % entity\n\n    @classmethod\n    def _substitute_xml_entity(cls, matchobj):\n        \"\"\"Used with a regular expression to substitute the\n        appropriate XML entity for an XML special character.\"\"\"\n        entity = cls.CHARACTER_TO_XML_ENTITY[matchobj.group(0)]\n        return \"&%s;\" % entity\n\n    @classmethod\n    def quoted_attribute_value(self, value):\n        \"\"\"Make a value into a quoted XML attribute, possibly escaping it.\n\n         Most strings will be quoted using double quotes.\n\n          Bob's Bar -> \"Bob's Bar\"\n\n         If a string contains double quotes, it will be quoted using\n         single quotes.\n\n          Welcome to \"my bar\" -> 'Welcome to \"my bar\"'\n\n         If a string 
contains both single and double quotes, the\n         double quotes will be escaped, and the string will be quoted\n         using double quotes.\n\n          Welcome to \"Bob's Bar\" -> \"Welcome to &quot;Bob's bar&quot;\n        \"\"\"\n        quote_with = '\"'\n        if '\"' in value:\n            if \"'\" in value:\n                # The string contains both single and double\n                # quotes.  Turn the double quotes into\n                # entities. We quote the double quotes rather than\n                # the single quotes because the entity name is\n                # \"&quot;\" whether this is HTML or XML.  If we\n                # quoted the single quotes, we'd have to decide\n                # between &apos; and &squot;.\n                replace_with = \"&quot;\"\n                value = value.replace('\"', replace_with)\n            else:\n                # There are double quotes but no single quotes.\n                # We can use single quotes to quote the attribute.\n                quote_with = \"'\"\n        return quote_with + value + quote_with\n\n    @classmethod\n    def substitute_xml(cls, value, make_quoted_attribute=False):\n        \"\"\"Substitute XML entities for special XML characters.\n\n        :param value: A string to be substituted. The less-than sign\n          will become &lt;, the greater-than sign will become &gt;,\n          and any ampersands will become &amp;. 
If you want ampersands\n          that appear to be part of an entity definition to be left\n          alone, use substitute_xml_containing_entities() instead.\n\n        :param make_quoted_attribute: If True, then the string will be\n         quoted, as befits an attribute value.\n        \"\"\"\n        # Escape angle brackets and ampersands.\n        value = cls.AMPERSAND_OR_BRACKET.sub(\n            cls._substitute_xml_entity, value)\n\n        if make_quoted_attribute:\n            value = cls.quoted_attribute_value(value)\n        return value\n\n    @classmethod\n    def substitute_xml_containing_entities(\n        cls, value, make_quoted_attribute=False):\n        \"\"\"Substitute XML entities for special XML characters.\n\n        :param value: A string to be substituted. The less-than sign will\n          become &lt;, the greater-than sign will become &gt;, and any\n          ampersands that are not part of an entity definition will\n          become &amp;.\n\n        :param make_quoted_attribute: If True, then the string will be\n         quoted, as befits an attribute value.\n        \"\"\"\n        # Escape angle brackets, and ampersands that aren't part of\n        # entities.\n        value = cls.BARE_AMPERSAND_OR_BRACKET.sub(\n            cls._substitute_xml_entity, value)\n\n        if make_quoted_attribute:\n            value = cls.quoted_attribute_value(value)\n        return value\n\n    @classmethod\n    def substitute_html(cls, s):\n        \"\"\"Replace certain Unicode characters with named HTML entities.\n\n        This differs from data.encode(encoding, 'xmlcharrefreplace')\n        in that the goal is to make the result more readable (to those\n        with ASCII displays) rather than to recover from\n        errors. 
There's absolutely nothing wrong with a UTF-8 string\n        containing a LATIN SMALL LETTER E WITH ACUTE, but replacing that\n        character with \"&eacute;\" will make it more readable to some\n        people.\n        \"\"\"\n        return cls.CHARACTER_TO_HTML_ENTITY_RE.sub(\n            cls._substitute_html_entity, s)\n\n\nclass EncodingDetector:\n    \"\"\"Suggests a number of possible encodings for a bytestring.\n\n    Order of precedence:\n\n    1. Encodings you specifically tell EncodingDetector to try first\n    (the override_encodings argument to the constructor).\n\n    2. An encoding declared within the bytestring itself, either in an\n    XML declaration (if the bytestring is to be interpreted as an XML\n    document), or in a <meta> tag (if the bytestring is to be\n    interpreted as an HTML document.)\n\n    3. An encoding detected through textual analysis by chardet,\n    cchardet, or a similar external library.\n\n    4. UTF-8.\n\n    5. Windows-1252.\n    \"\"\"\n    def __init__(self, markup, override_encodings=None, is_html=False,\n                 exclude_encodings=None):\n        self.override_encodings = override_encodings or []\n        exclude_encodings = exclude_encodings or []\n        self.exclude_encodings = set([x.lower() for x in exclude_encodings])\n        self.chardet_encoding = None\n        self.is_html = is_html\n        self.declared_encoding = None\n\n        # First order of business: strip a byte-order mark.\n        self.markup, self.sniffed_encoding = self.strip_byte_order_mark(markup)\n\n    def _usable(self, encoding, tried):\n        if encoding is not None:\n            encoding = encoding.lower()\n            if encoding in self.exclude_encodings:\n                return False\n            if encoding not in tried:\n                tried.add(encoding)\n                return True\n        return False\n\n    @property\n    def encodings(self):\n        \"\"\"Yield a number of encodings that might work for this 
markup.\"\"\"\n        tried = set()\n        for e in self.override_encodings:\n            if self._usable(e, tried):\n                yield e\n\n        # Did the document originally start with a byte-order mark\n        # that indicated its encoding?\n        if self._usable(self.sniffed_encoding, tried):\n            yield self.sniffed_encoding\n\n        # Look within the document for an XML or HTML encoding\n        # declaration.\n        if self.declared_encoding is None:\n            self.declared_encoding = self.find_declared_encoding(\n                self.markup, self.is_html)\n        if self._usable(self.declared_encoding, tried):\n            yield self.declared_encoding\n\n        # Use third-party character set detection to guess at the\n        # encoding.\n        if self.chardet_encoding is None:\n            self.chardet_encoding = chardet_dammit(self.markup)\n        if self._usable(self.chardet_encoding, tried):\n            yield self.chardet_encoding\n\n        # As a last-ditch effort, try utf-8 and windows-1252.\n        for e in ('utf-8', 'windows-1252'):\n            if self._usable(e, tried):\n                yield e\n\n    @classmethod\n    def strip_byte_order_mark(cls, data):\n        \"\"\"If a byte-order mark is present, strip it and return the encoding it implies.\"\"\"\n        encoding = None\n        if isinstance(data, unicode):\n            # Unicode data cannot have a byte-order mark.\n            return data, encoding\n        if (len(data) >= 4) and (data[:2] == b'\\xfe\\xff') \\\n               and (data[2:4] != '\\x00\\x00'):\n            encoding = 'utf-16be'\n            data = data[2:]\n        elif (len(data) >= 4) and (data[:2] == b'\\xff\\xfe') \\\n                 and (data[2:4] != '\\x00\\x00'):\n            encoding = 'utf-16le'\n            data = data[2:]\n        elif data[:3] == b'\\xef\\xbb\\xbf':\n            encoding = 'utf-8'\n            data = data[3:]\n        elif data[:4] == 
b'\\x00\\x00\\xfe\\xff':\n            encoding = 'utf-32be'\n            data = data[4:]\n        elif data[:4] == b'\\xff\\xfe\\x00\\x00':\n            encoding = 'utf-32le'\n            data = data[4:]\n        return data, encoding\n\n    @classmethod\n    def find_declared_encoding(cls, markup, is_html=False, search_entire_document=False):\n        \"\"\"Given a document, tries to find its declared encoding.\n\n        An XML encoding is declared at the beginning of the document.\n\n        An HTML encoding is declared in a <meta> tag, hopefully near the\n        beginning of the document.\n        \"\"\"\n        if search_entire_document:\n            xml_endpos = html_endpos = len(markup)\n        else:\n            xml_endpos = 1024\n            html_endpos = max(2048, int(len(markup) * 0.05))\n            \n        declared_encoding = None\n        declared_encoding_match = xml_encoding_re.search(markup, endpos=xml_endpos)\n        if not declared_encoding_match and is_html:\n            declared_encoding_match = html_meta_re.search(markup, endpos=html_endpos)\n        if declared_encoding_match is not None:\n            declared_encoding = declared_encoding_match.groups()[0].decode(\n                'ascii', 'replace')\n        if declared_encoding:\n            return declared_encoding.lower()\n        return None\n\nclass UnicodeDammit:\n    \"\"\"A class for detecting the encoding of a *ML document and\n    converting it to a Unicode string. If the source encoding is\n    windows-1252, can replace MS smart quotes with their HTML or XML\n    equivalents.\"\"\"\n\n    # This dictionary maps commonly seen values for \"charset\" in HTML\n    # meta tags to the corresponding Python codec names. 
It only covers\n    # values that aren't in Python's aliases and can't be determined\n    # by the heuristics in find_codec.\n    CHARSET_ALIASES = {\"macintosh\": \"mac-roman\",\n                       \"x-sjis\": \"shift-jis\"}\n\n    ENCODINGS_WITH_SMART_QUOTES = [\n        \"windows-1252\",\n        \"iso-8859-1\",\n        \"iso-8859-2\",\n        ]\n\n    def __init__(self, markup, override_encodings=[],\n                 smart_quotes_to=None, is_html=False, exclude_encodings=[]):\n        self.smart_quotes_to = smart_quotes_to\n        self.tried_encodings = []\n        self.contains_replacement_characters = False\n        self.is_html = is_html\n        self.log = logging.getLogger(__name__)\n        self.detector = EncodingDetector(\n            markup, override_encodings, is_html, exclude_encodings)\n\n        # Short-circuit if the data is in Unicode to begin with.\n        if isinstance(markup, unicode) or markup == '':\n            self.markup = markup\n            self.unicode_markup = unicode(markup)\n            self.original_encoding = None\n            return\n\n        # The encoding detector may have stripped a byte-order mark.\n        # Use the stripped markup from this point on.\n        self.markup = self.detector.markup\n\n        u = None\n        for encoding in self.detector.encodings:\n            markup = self.detector.markup\n            u = self._convert_from(encoding)\n            if u is not None:\n                break\n\n        if not u:\n            # None of the encodings worked. 
As an absolute last resort,\n            # try them again with character replacement.\n\n            for encoding in self.detector.encodings:\n                if encoding != \"ascii\":\n                    u = self._convert_from(encoding, \"replace\")\n                if u is not None:\n                    self.log.warning(\n                            \"Some characters could not be decoded, and were \"\n                            \"replaced with REPLACEMENT CHARACTER.\"\n                    )\n                    self.contains_replacement_characters = True\n                    break\n\n        # If none of that worked, we could at this point force it to\n        # ASCII, but that would destroy so much data that I think\n        # giving up is better.\n        self.unicode_markup = u\n        if not u:\n            self.original_encoding = None\n\n    def _sub_ms_char(self, match):\n        \"\"\"Changes a MS smart quote character to an XML or HTML\n        entity, or an ASCII character.\"\"\"\n        orig = match.group(1)\n        if self.smart_quotes_to == 'ascii':\n            sub = self.MS_CHARS_TO_ASCII.get(orig).encode()\n        else:\n            sub = self.MS_CHARS.get(orig)\n            if type(sub) == tuple:\n                if self.smart_quotes_to == 'xml':\n                    sub = '&#x'.encode() + sub[1].encode() + ';'.encode()\n                else:\n                    sub = '&'.encode() + sub[0].encode() + ';'.encode()\n            else:\n                sub = sub.encode()\n        return sub\n\n    def _convert_from(self, proposed, errors=\"strict\"):\n        proposed = self.find_codec(proposed)\n        if not proposed or (proposed, errors) in self.tried_encodings:\n            return None\n        self.tried_encodings.append((proposed, errors))\n        markup = self.markup\n        # Convert smart quotes to HTML if coming from an encoding\n        # that might have them.\n        if (self.smart_quotes_to is not None\n            and 
proposed in self.ENCODINGS_WITH_SMART_QUOTES):\n            smart_quotes_re = b\"([\\x80-\\x9f])\"\n            smart_quotes_compiled = re.compile(smart_quotes_re)\n            markup = smart_quotes_compiled.sub(self._sub_ms_char, markup)\n\n        try:\n            #print \"Trying to convert document to %s (errors=%s)\" % (\n            #    proposed, errors)\n            u = self._to_unicode(markup, proposed, errors)\n            self.markup = u\n            self.original_encoding = proposed\n        except Exception as e:\n            #print \"That didn't work!\"\n            #print e\n            return None\n        #print \"Correct encoding: %s\" % proposed\n        return self.markup\n\n    def _to_unicode(self, data, encoding, errors=\"strict\"):\n        '''Given a string and its encoding, decodes the string into Unicode.\n        %encoding is a string recognized by encodings.aliases'''\n        return unicode(data, encoding, errors)\n\n    @property\n    def declared_html_encoding(self):\n        if not self.is_html:\n            return None\n        return self.detector.declared_encoding\n\n    def find_codec(self, charset):\n        value = (self._codec(self.CHARSET_ALIASES.get(charset, charset))\n               or (charset and self._codec(charset.replace(\"-\", \"\")))\n               or (charset and self._codec(charset.replace(\"-\", \"_\")))\n               or (charset and charset.lower())\n               or charset\n                )\n        if value:\n            return value.lower()\n        return None\n\n    def _codec(self, charset):\n        if not charset:\n            return charset\n        codec = None\n        try:\n            codecs.lookup(charset)\n            codec = charset\n        except (LookupError, ValueError):\n            pass\n        return codec\n\n\n    # A partial mapping of ISO-Latin-1 to HTML entities/XML numeric entities.\n    MS_CHARS = {b'\\x80': ('euro', '20AC'),\n                b'\\x81': ' ',\n                
b'\\x82': ('sbquo', '201A'),\n                b'\\x83': ('fnof', '192'),\n                b'\\x84': ('bdquo', '201E'),\n                b'\\x85': ('hellip', '2026'),\n                b'\\x86': ('dagger', '2020'),\n                b'\\x87': ('Dagger', '2021'),\n                b'\\x88': ('circ', '2C6'),\n                b'\\x89': ('permil', '2030'),\n                b'\\x8A': ('Scaron', '160'),\n                b'\\x8B': ('lsaquo', '2039'),\n                b'\\x8C': ('OElig', '152'),\n                b'\\x8D': '?',\n                b'\\x8E': ('#x17D', '17D'),\n                b'\\x8F': '?',\n                b'\\x90': '?',\n                b'\\x91': ('lsquo', '2018'),\n                b'\\x92': ('rsquo', '2019'),\n                b'\\x93': ('ldquo', '201C'),\n                b'\\x94': ('rdquo', '201D'),\n                b'\\x95': ('bull', '2022'),\n                b'\\x96': ('ndash', '2013'),\n                b'\\x97': ('mdash', '2014'),\n                b'\\x98': ('tilde', '2DC'),\n                b'\\x99': ('trade', '2122'),\n                b'\\x9a': ('scaron', '161'),\n                b'\\x9b': ('rsaquo', '203A'),\n                b'\\x9c': ('oelig', '153'),\n                b'\\x9d': '?',\n                b'\\x9e': ('#x17E', '17E'),\n                b'\\x9f': ('Yuml', '178'),}\n\n    # A parochial partial mapping of ISO-Latin-1 to ASCII. 
Contains\n    # horrors like stripping diacritical marks to turn á into a, but also\n    # contains non-horrors like turning “ into \".\n    MS_CHARS_TO_ASCII = {\n        b'\\x80' : 'EUR',\n        b'\\x81' : ' ',\n        b'\\x82' : ',',\n        b'\\x83' : 'f',\n        b'\\x84' : ',,',\n        b'\\x85' : '...',\n        b'\\x86' : '+',\n        b'\\x87' : '++',\n        b'\\x88' : '^',\n        b'\\x89' : '%',\n        b'\\x8a' : 'S',\n        b'\\x8b' : '<',\n        b'\\x8c' : 'OE',\n        b'\\x8d' : '?',\n        b'\\x8e' : 'Z',\n        b'\\x8f' : '?',\n        b'\\x90' : '?',\n        b'\\x91' : \"'\",\n        b'\\x92' : \"'\",\n        b'\\x93' : '\"',\n        b'\\x94' : '\"',\n        b'\\x95' : '*',\n        b'\\x96' : '-',\n        b'\\x97' : '--',\n        b'\\x98' : '~',\n        b'\\x99' : '(TM)',\n        b'\\x9a' : 's',\n        b'\\x9b' : '>',\n        b'\\x9c' : 'oe',\n        b'\\x9d' : '?',\n        b'\\x9e' : 'z',\n        b'\\x9f' : 'Y',\n        b'\\xa0' : ' ',\n        b'\\xa1' : '!',\n        b'\\xa2' : 'c',\n        b'\\xa3' : 'GBP',\n        b'\\xa4' : '$', #This approximation is especially parochial--this is the\n                       #generic currency symbol.\n        b'\\xa5' : 'YEN',\n        b'\\xa6' : '|',\n        b'\\xa7' : 'S',\n        b'\\xa8' : '..',\n        b'\\xa9' : '',\n        b'\\xaa' : '(th)',\n        b'\\xab' : '<<',\n        b'\\xac' : '!',\n        b'\\xad' : ' ',\n        b'\\xae' : '(R)',\n        b'\\xaf' : '-',\n        b'\\xb0' : 'o',\n        b'\\xb1' : '+-',\n        b'\\xb2' : '2',\n        b'\\xb3' : '3',\n        b'\\xb4' : (\"'\", 'acute'),\n        b'\\xb5' : 'u',\n        b'\\xb6' : 'P',\n        b'\\xb7' : '*',\n        b'\\xb8' : ',',\n        b'\\xb9' : '1',\n        b'\\xba' : '(th)',\n        b'\\xbb' : '>>',\n        b'\\xbc' : '1/4',\n        b'\\xbd' : '1/2',\n        b'\\xbe' : '3/4',\n        b'\\xbf' : '?',\n        b'\\xc0' : 'A',\n        b'\\xc1' : 'A',\n        b'\\xc2' : 'A',\n  
      b'\\xc3' : 'A',\n        b'\\xc4' : 'A',\n        b'\\xc5' : 'A',\n        b'\\xc6' : 'AE',\n        b'\\xc7' : 'C',\n        b'\\xc8' : 'E',\n        b'\\xc9' : 'E',\n        b'\\xca' : 'E',\n        b'\\xcb' : 'E',\n        b'\\xcc' : 'I',\n        b'\\xcd' : 'I',\n        b'\\xce' : 'I',\n        b'\\xcf' : 'I',\n        b'\\xd0' : 'D',\n        b'\\xd1' : 'N',\n        b'\\xd2' : 'O',\n        b'\\xd3' : 'O',\n        b'\\xd4' : 'O',\n        b'\\xd5' : 'O',\n        b'\\xd6' : 'O',\n        b'\\xd7' : '*',\n        b'\\xd8' : 'O',\n        b'\\xd9' : 'U',\n        b'\\xda' : 'U',\n        b'\\xdb' : 'U',\n        b'\\xdc' : 'U',\n        b'\\xdd' : 'Y',\n        b'\\xde' : 'b',\n        b'\\xdf' : 'B',\n        b'\\xe0' : 'a',\n        b'\\xe1' : 'a',\n        b'\\xe2' : 'a',\n        b'\\xe3' : 'a',\n        b'\\xe4' : 'a',\n        b'\\xe5' : 'a',\n        b'\\xe6' : 'ae',\n        b'\\xe7' : 'c',\n        b'\\xe8' : 'e',\n        b'\\xe9' : 'e',\n        b'\\xea' : 'e',\n        b'\\xeb' : 'e',\n        b'\\xec' : 'i',\n        b'\\xed' : 'i',\n        b'\\xee' : 'i',\n        b'\\xef' : 'i',\n        b'\\xf0' : 'o',\n        b'\\xf1' : 'n',\n        b'\\xf2' : 'o',\n        b'\\xf3' : 'o',\n        b'\\xf4' : 'o',\n        b'\\xf5' : 'o',\n        b'\\xf6' : 'o',\n        b'\\xf7' : '/',\n        b'\\xf8' : 'o',\n        b'\\xf9' : 'u',\n        b'\\xfa' : 'u',\n        b'\\xfb' : 'u',\n        b'\\xfc' : 'u',\n        b'\\xfd' : 'y',\n        b'\\xfe' : 'b',\n        b'\\xff' : 'y',\n        }\n\n    # A map used when removing rogue Windows-1252/ISO-8859-1\n    # characters in otherwise UTF-8 documents.\n    #\n    # Note that \\x81, \\x8d, \\x8f, \\x90, and \\x9d are undefined in\n    # Windows-1252.\n    WINDOWS_1252_TO_UTF8 = {\n        0x80 : b'\\xe2\\x82\\xac', # €\n        0x82 : b'\\xe2\\x80\\x9a', # ‚\n        0x83 : b'\\xc6\\x92',     # ƒ\n        0x84 : b'\\xe2\\x80\\x9e', # „\n        0x85 : b'\\xe2\\x80\\xa6', # …\n        0x86 : 
b'\\xe2\\x80\\xa0', # †\n        0x87 : b'\\xe2\\x80\\xa1', # ‡\n        0x88 : b'\\xcb\\x86',     # ˆ\n        0x89 : b'\\xe2\\x80\\xb0', # ‰\n        0x8a : b'\\xc5\\xa0',     # Š\n        0x8b : b'\\xe2\\x80\\xb9', # ‹\n        0x8c : b'\\xc5\\x92',     # Œ\n        0x8e : b'\\xc5\\xbd',     # Ž\n        0x91 : b'\\xe2\\x80\\x98', # ‘\n        0x92 : b'\\xe2\\x80\\x99', # ’\n        0x93 : b'\\xe2\\x80\\x9c', # “\n        0x94 : b'\\xe2\\x80\\x9d', # ”\n        0x95 : b'\\xe2\\x80\\xa2', # •\n        0x96 : b'\\xe2\\x80\\x93', # –\n        0x97 : b'\\xe2\\x80\\x94', # —\n        0x98 : b'\\xcb\\x9c',     # ˜\n        0x99 : b'\\xe2\\x84\\xa2', # ™\n        0x9a : b'\\xc5\\xa1',     # š\n        0x9b : b'\\xe2\\x80\\xba', # ›\n        0x9c : b'\\xc5\\x93',     # œ\n        0x9e : b'\\xc5\\xbe',     # ž\n        0x9f : b'\\xc5\\xb8',     # Ÿ\n        0xa0 : b'\\xc2\\xa0',     #  \n        0xa1 : b'\\xc2\\xa1',     # ¡\n        0xa2 : b'\\xc2\\xa2',     # ¢\n        0xa3 : b'\\xc2\\xa3',     # £\n        0xa4 : b'\\xc2\\xa4',     # ¤\n        0xa5 : b'\\xc2\\xa5',     # ¥\n        0xa6 : b'\\xc2\\xa6',     # ¦\n        0xa7 : b'\\xc2\\xa7',     # §\n        0xa8 : b'\\xc2\\xa8',     # ¨\n        0xa9 : b'\\xc2\\xa9',     # ©\n        0xaa : b'\\xc2\\xaa',     # ª\n        0xab : b'\\xc2\\xab',     # «\n        0xac : b'\\xc2\\xac',     # ¬\n        0xad : b'\\xc2\\xad',     # ­\n        0xae : b'\\xc2\\xae',     # ®\n        0xaf : b'\\xc2\\xaf',     # ¯\n        0xb0 : b'\\xc2\\xb0',     # °\n        0xb1 : b'\\xc2\\xb1',     # ±\n        0xb2 : b'\\xc2\\xb2',     # ²\n        0xb3 : b'\\xc2\\xb3',     # ³\n        0xb4 : b'\\xc2\\xb4',     # ´\n        0xb5 : b'\\xc2\\xb5',     # µ\n        0xb6 : b'\\xc2\\xb6',     # ¶\n        0xb7 : b'\\xc2\\xb7',     # ·\n        0xb8 : b'\\xc2\\xb8',     # ¸\n        0xb9 : b'\\xc2\\xb9',     # ¹\n        0xba : b'\\xc2\\xba',     # º\n        0xbb : b'\\xc2\\xbb',     # »\n        0xbc : b'\\xc2\\xbc',     # ¼\n        0xbd 
: b'\\xc2\\xbd',     # ½\n        0xbe : b'\\xc2\\xbe',     # ¾\n        0xbf : b'\\xc2\\xbf',     # ¿\n        0xc0 : b'\\xc3\\x80',     # À\n        0xc1 : b'\\xc3\\x81',     # Á\n        0xc2 : b'\\xc3\\x82',     # Â\n        0xc3 : b'\\xc3\\x83',     # Ã\n        0xc4 : b'\\xc3\\x84',     # Ä\n        0xc5 : b'\\xc3\\x85',     # Å\n        0xc6 : b'\\xc3\\x86',     # Æ\n        0xc7 : b'\\xc3\\x87',     # Ç\n        0xc8 : b'\\xc3\\x88',     # È\n        0xc9 : b'\\xc3\\x89',     # É\n        0xca : b'\\xc3\\x8a',     # Ê\n        0xcb : b'\\xc3\\x8b',     # Ë\n        0xcc : b'\\xc3\\x8c',     # Ì\n        0xcd : b'\\xc3\\x8d',     # Í\n        0xce : b'\\xc3\\x8e',     # Î\n        0xcf : b'\\xc3\\x8f',     # Ï\n        0xd0 : b'\\xc3\\x90',     # Ð\n        0xd1 : b'\\xc3\\x91',     # Ñ\n        0xd2 : b'\\xc3\\x92',     # Ò\n        0xd3 : b'\\xc3\\x93',     # Ó\n        0xd4 : b'\\xc3\\x94',     # Ô\n        0xd5 : b'\\xc3\\x95',     # Õ\n        0xd6 : b'\\xc3\\x96',     # Ö\n        0xd7 : b'\\xc3\\x97',     # ×\n        0xd8 : b'\\xc3\\x98',     # Ø\n        0xd9 : b'\\xc3\\x99',     # Ù\n        0xda : b'\\xc3\\x9a',     # Ú\n        0xdb : b'\\xc3\\x9b',     # Û\n        0xdc : b'\\xc3\\x9c',     # Ü\n        0xdd : b'\\xc3\\x9d',     # Ý\n        0xde : b'\\xc3\\x9e',     # Þ\n        0xdf : b'\\xc3\\x9f',     # ß\n        0xe0 : b'\\xc3\\xa0',     # à\n        0xe1 : b'\\xc3\\xa1',     # á\n        0xe2 : b'\\xc3\\xa2',     # â\n        0xe3 : b'\\xc3\\xa3',     # ã\n        0xe4 : b'\\xc3\\xa4',     # ä\n        0xe5 : b'\\xc3\\xa5',     # å\n        0xe6 : b'\\xc3\\xa6',     # æ\n        0xe7 : b'\\xc3\\xa7',     # ç\n        0xe8 : b'\\xc3\\xa8',     # è\n        0xe9 : b'\\xc3\\xa9',     # é\n        0xea : b'\\xc3\\xaa',     # ê\n        0xeb : b'\\xc3\\xab',     # ë\n        0xec : b'\\xc3\\xac',     # ì\n        0xed : b'\\xc3\\xad',     # í\n        0xee : b'\\xc3\\xae',     # î\n        0xef : b'\\xc3\\xaf',     # ï\n        0xf0 : 
b'\\xc3\\xb0',     # ð\n        0xf1 : b'\\xc3\\xb1',     # ñ\n        0xf2 : b'\\xc3\\xb2',     # ò\n        0xf3 : b'\\xc3\\xb3',     # ó\n        0xf4 : b'\\xc3\\xb4',     # ô\n        0xf5 : b'\\xc3\\xb5',     # õ\n        0xf6 : b'\\xc3\\xb6',     # ö\n        0xf7 : b'\\xc3\\xb7',     # ÷\n        0xf8 : b'\\xc3\\xb8',     # ø\n        0xf9 : b'\\xc3\\xb9',     # ù\n        0xfa : b'\\xc3\\xba',     # ú\n        0xfb : b'\\xc3\\xbb',     # û\n        0xfc : b'\\xc3\\xbc',     # ü\n        0xfd : b'\\xc3\\xbd',     # ý\n        0xfe : b'\\xc3\\xbe',     # þ\n        }\n\n    MULTIBYTE_MARKERS_AND_SIZES = [\n        (0xc2, 0xdf, 2), # 2-byte characters start with a byte C2-DF\n        (0xe0, 0xef, 3), # 3-byte characters start with E0-EF\n        (0xf0, 0xf4, 4), # 4-byte characters start with F0-F4\n        ]\n\n    FIRST_MULTIBYTE_MARKER = MULTIBYTE_MARKERS_AND_SIZES[0][0]\n    LAST_MULTIBYTE_MARKER = MULTIBYTE_MARKERS_AND_SIZES[-1][1]\n\n    @classmethod\n    def detwingle(cls, in_bytes, main_encoding=\"utf8\",\n                  embedded_encoding=\"windows-1252\"):\n        \"\"\"Fix characters from one encoding embedded in some other encoding.\n\n        Currently the only situation supported is Windows-1252 (or its\n        subset ISO-8859-1), embedded in UTF-8.\n\n        The input must be a bytestring. 
If you've already converted\n        the document to Unicode, you're too late.\n\n        The output is a bytestring in which `embedded_encoding`\n        characters have been converted to their `main_encoding`\n        equivalents.\n        \"\"\"\n        if embedded_encoding.replace('_', '-').lower() not in (\n            'windows-1252', 'windows_1252'):\n            raise NotImplementedError(\n                \"Windows-1252 and ISO-8859-1 are the only currently supported \"\n                \"embedded encodings.\")\n\n        if main_encoding.lower() not in ('utf8', 'utf-8'):\n            raise NotImplementedError(\n                \"UTF-8 is the only currently supported main encoding.\")\n\n        byte_chunks = []\n\n        chunk_start = 0\n        pos = 0\n        while pos < len(in_bytes):\n            byte = in_bytes[pos]\n            if not isinstance(byte, int):\n                # Python 2.x\n                byte = ord(byte)\n            if (byte >= cls.FIRST_MULTIBYTE_MARKER\n                and byte <= cls.LAST_MULTIBYTE_MARKER):\n                # This is the start of a UTF-8 multibyte character. 
Skip\n                # to the end.\n                for start, end, size in cls.MULTIBYTE_MARKERS_AND_SIZES:\n                    if byte >= start and byte <= end:\n                        pos += size\n                        break\n            elif byte >= 0x80 and byte in cls.WINDOWS_1252_TO_UTF8:\n                # We found a Windows-1252 character!\n                # Save the string up to this point as a chunk.\n                byte_chunks.append(in_bytes[chunk_start:pos])\n\n                # Now translate the Windows-1252 character into UTF-8\n                # and add it as another, one-byte chunk.\n                byte_chunks.append(cls.WINDOWS_1252_TO_UTF8[byte])\n                pos += 1\n                chunk_start = pos\n            else:\n                # Go on to the next character.\n                pos += 1\n        if chunk_start == 0:\n            # The string is unchanged.\n            return in_bytes\n        else:\n            # Store the final chunk.\n            byte_chunks.append(in_bytes[chunk_start:])\n        return b''.join(byte_chunks)\n\n"
  },
  {
    "path": "parallax_svg_tools/bs4/diagnose.py",
    "content": "\"\"\"Diagnostic functions, mainly for use when doing tech support.\"\"\"\n\n# Use of this source code is governed by a BSD-style license that can be\n# found in the LICENSE file.\n__license__ = \"MIT\"\n\nimport cProfile\nfrom StringIO import StringIO\nfrom HTMLParser import HTMLParser\nimport bs4\nfrom bs4 import BeautifulSoup, __version__\nfrom bs4.builder import builder_registry\n\nimport os\nimport pstats\nimport random\nimport tempfile\nimport time\nimport traceback\nimport sys\nimport cProfile\n\ndef diagnose(data):\n    \"\"\"Diagnostic suite for isolating common problems.\"\"\"\n    print \"Diagnostic running on Beautiful Soup %s\" % __version__\n    print \"Python version %s\" % sys.version\n\n    basic_parsers = [\"html.parser\", \"html5lib\", \"lxml\"]\n    for name in basic_parsers:\n        for builder in builder_registry.builders:\n            if name in builder.features:\n                break\n        else:\n            basic_parsers.remove(name)\n            print (\n                \"I noticed that %s is not installed. Installing it may help.\" %\n                name)\n\n    if 'lxml' in basic_parsers:\n        basic_parsers.append([\"lxml\", \"xml\"])\n        try:\n            from lxml import etree\n            print \"Found lxml version %s\" % \".\".join(map(str,etree.LXML_VERSION))\n        except ImportError, e:\n            print (\n                \"lxml is not installed or couldn't be imported.\")\n\n\n    if 'html5lib' in basic_parsers:\n        try:\n            import html5lib\n            print \"Found html5lib version %s\" % html5lib.__version__\n        except ImportError, e:\n            print (\n                \"html5lib is not installed or couldn't be imported.\")\n\n    if hasattr(data, 'read'):\n        data = data.read()\n    elif os.path.exists(data):\n        print '\"%s\" looks like a filename. Reading data from the file.' 
% data\n        with open(data) as fp:\n            data = fp.read()\n    elif data.startswith(\"http:\") or data.startswith(\"https:\"):\n        print '\"%s\" looks like a URL. Beautiful Soup is not an HTTP client.' % data\n        print \"You need to use some other library to get the document behind the URL, and feed that document to Beautiful Soup.\"\n        return\n    print\n\n    for parser in basic_parsers:\n        print \"Trying to parse your markup with %s\" % parser\n        success = False\n        try:\n            soup = BeautifulSoup(data, parser)\n            success = True\n        except Exception, e:\n            print \"%s could not parse the markup.\" % parser\n            traceback.print_exc()\n        if success:\n            print \"Here's what %s did with the markup:\" % parser\n            print soup.prettify()\n\n        print \"-\" * 80\n\ndef lxml_trace(data, html=True, **kwargs):\n    \"\"\"Print out the lxml events that occur during parsing.\n\n    This lets you see how lxml parses a document when no Beautiful\n    Soup code is running.\n    \"\"\"\n    from lxml import etree\n    for event, element in etree.iterparse(StringIO(data), html=html, **kwargs):\n        print(\"%s, %4s, %s\" % (event, element.tag, element.text))\n\nclass AnnouncingParser(HTMLParser):\n    \"\"\"Announces HTMLParser parse events, without doing anything else.\"\"\"\n\n    def _p(self, s):\n        print(s)\n\n    def handle_starttag(self, name, attrs):\n        self._p(\"%s START\" % name)\n\n    def handle_endtag(self, name):\n        self._p(\"%s END\" % name)\n\n    def handle_data(self, data):\n        self._p(\"%s DATA\" % data)\n\n    def handle_charref(self, name):\n        self._p(\"%s CHARREF\" % name)\n\n    def handle_entityref(self, name):\n        self._p(\"%s ENTITYREF\" % name)\n\n    def handle_comment(self, data):\n        self._p(\"%s COMMENT\" % data)\n\n    def handle_decl(self, data):\n        self._p(\"%s DECL\" % data)\n\n    def 
unknown_decl(self, data):\n        self._p(\"%s UNKNOWN-DECL\" % data)\n\n    def handle_pi(self, data):\n        self._p(\"%s PI\" % data)\n\ndef htmlparser_trace(data):\n    \"\"\"Print out the HTMLParser events that occur during parsing.\n\n    This lets you see how HTMLParser parses a document when no\n    Beautiful Soup code is running.\n    \"\"\"\n    parser = AnnouncingParser()\n    parser.feed(data)\n\n_vowels = \"aeiou\"\n_consonants = \"bcdfghjklmnpqrstvwxyz\"\n\ndef rword(length=5):\n    \"Generate a random word-like string.\"\n    s = ''\n    for i in range(length):\n        if i % 2 == 0:\n            t = _consonants\n        else:\n            t = _vowels\n        s += random.choice(t)\n    return s\n\ndef rsentence(length=4):\n    \"Generate a random sentence-like string.\"\n    return \" \".join(rword(random.randint(4,9)) for i in range(length))\n        \ndef rdoc(num_elements=1000):\n    \"\"\"Randomly generate an invalid HTML document.\"\"\"\n    tag_names = ['p', 'div', 'span', 'i', 'b', 'script', 'table']\n    elements = []\n    for i in range(num_elements):\n        choice = random.randint(0,3)\n        if choice == 0:\n            # New tag.\n            tag_name = random.choice(tag_names)\n            elements.append(\"<%s>\" % tag_name)\n        elif choice == 1:\n            elements.append(rsentence(random.randint(1,4)))\n        elif choice == 2:\n            # Close a tag.\n            tag_name = random.choice(tag_names)\n            elements.append(\"</%s>\" % tag_name)\n    return \"<html>\" + \"\\n\".join(elements) + \"</html>\"\n\ndef benchmark_parsers(num_elements=100000):\n    \"\"\"Very basic head-to-head performance benchmark.\"\"\"\n    print \"Comparative parser benchmark on Beautiful Soup %s\" % __version__\n    data = rdoc(num_elements)\n    print \"Generated a large invalid HTML document (%d bytes).\" % len(data)\n    \n    for parser in [\"lxml\", [\"lxml\", \"html\"], \"html5lib\", \"html.parser\"]:\n        success = 
False\n        try:\n            a = time.time()\n            soup = BeautifulSoup(data, parser)\n            b = time.time()\n            success = True\n        except Exception, e:\n            print \"%s could not parse the markup.\" % parser\n            traceback.print_exc()\n        if success:\n            print \"BS4+%s parsed the markup in %.2fs.\" % (parser, b-a)\n\n    from lxml import etree\n    a = time.time()\n    etree.HTML(data)\n    b = time.time()\n    print \"Raw lxml parsed the markup in %.2fs.\" % (b-a)\n\n    import html5lib\n    parser = html5lib.HTMLParser()\n    a = time.time()\n    parser.parse(data)\n    b = time.time()\n    print \"Raw html5lib parsed the markup in %.2fs.\" % (b-a)\n\ndef profile(num_elements=100000, parser=\"lxml\"):\n\n    filehandle = tempfile.NamedTemporaryFile()\n    filename = filehandle.name\n\n    data = rdoc(num_elements)\n    vars = dict(bs4=bs4, data=data, parser=parser)\n    cProfile.runctx('bs4.BeautifulSoup(data, parser)' , vars, vars, filename)\n\n    stats = pstats.Stats(filename)\n    # stats.strip_dirs()\n    stats.sort_stats(\"cumulative\")\n    stats.print_stats('_html5lib|bs4', 50)\n\nif __name__ == '__main__':\n    diagnose(sys.stdin.read())\n"
  },
  {
    "path": "parallax_svg_tools/bs4/element.py",
    "content": "# Use of this source code is governed by the MIT license that can be\n# found in the LICENSE file.\n__license__ = \"MIT\"\n\nimport collections\nimport re\nimport shlex\nimport sys\nimport warnings\nfrom bs4.dammit import EntitySubstitution\n\nDEFAULT_OUTPUT_ENCODING = \"utf-8\"\nPY3K = (sys.version_info[0] > 2)\n\nwhitespace_re = re.compile(\"\\s+\")\n\ndef _alias(attr):\n    \"\"\"Alias one attribute name to another for backward compatibility\"\"\"\n    @property\n    def alias(self):\n        return getattr(self, attr)\n\n    @alias.setter\n    def alias(self, value):\n        # The setter must accept the assigned value and pass it to setattr;\n        # without it, assigning through the alias raises a TypeError.\n        setattr(self, attr, value)\n    return alias\n\n\nclass NamespacedAttribute(unicode):\n\n    def __new__(cls, prefix, name, namespace=None):\n        if name is None:\n            obj = unicode.__new__(cls, prefix)\n        elif prefix is None:\n            # Not really namespaced.\n            obj = unicode.__new__(cls, name)\n        else:\n            obj = unicode.__new__(cls, prefix + \":\" + name)\n        obj.prefix = prefix\n        obj.name = name\n        obj.namespace = namespace\n        return obj\n\nclass AttributeValueWithCharsetSubstitution(unicode):\n    \"\"\"A stand-in object for a character encoding specified in HTML.\"\"\"\n\nclass CharsetMetaAttributeValue(AttributeValueWithCharsetSubstitution):\n    \"\"\"A generic stand-in for the value of a meta tag's 'charset' attribute.\n\n    When Beautiful Soup parses the markup '<meta charset=\"utf8\">', the\n    value of the 'charset' attribute will be one of these objects.\n    \"\"\"\n\n    def __new__(cls, original_value):\n        obj = unicode.__new__(cls, original_value)\n        obj.original_value = original_value\n        return obj\n\n    def encode(self, encoding):\n        return encoding\n\n\nclass ContentMetaAttributeValue(AttributeValueWithCharsetSubstitution):\n    \"\"\"A generic stand-in for the value of a meta tag's 'content' attribute.\n\n    When Beautiful Soup parses the markup:\n     <meta 
http-equiv=\"content-type\" content=\"text/html; charset=utf8\">\n\n    The value of the 'content' attribute will be one of these objects.\n    \"\"\"\n\n    CHARSET_RE = re.compile(\"((^|;)\\s*charset=)([^;]*)\", re.M)\n\n    def __new__(cls, original_value):\n        match = cls.CHARSET_RE.search(original_value)\n        if match is None:\n            # No substitution necessary.\n            return unicode.__new__(unicode, original_value)\n\n        obj = unicode.__new__(cls, original_value)\n        obj.original_value = original_value\n        return obj\n\n    def encode(self, encoding):\n        def rewrite(match):\n            return match.group(1) + encoding\n        return self.CHARSET_RE.sub(rewrite, self.original_value)\n\nclass HTMLAwareEntitySubstitution(EntitySubstitution):\n\n    \"\"\"Entity substitution rules that are aware of some HTML quirks.\n\n    Specifically, the contents of <script> and <style> tags should not\n    undergo entity substitution.\n\n    Incoming NavigableString objects are checked to see if they're the\n    direct children of a <script> or <style> tag.\n    \"\"\"\n\n    cdata_containing_tags = set([\"script\", \"style\"])\n\n    preformatted_tags = set([\"pre\"])\n\n    preserve_whitespace_tags = set(['pre', 'textarea'])\n\n    @classmethod\n    def _substitute_if_appropriate(cls, ns, f):\n        if (isinstance(ns, NavigableString)\n            and ns.parent is not None\n            and ns.parent.name in cls.cdata_containing_tags):\n            # Do nothing.\n            return ns\n        # Substitute.\n        return f(ns)\n\n    @classmethod\n    def substitute_html(cls, ns):\n        return cls._substitute_if_appropriate(\n            ns, EntitySubstitution.substitute_html)\n\n    @classmethod\n    def substitute_xml(cls, ns):\n        return cls._substitute_if_appropriate(\n            ns, EntitySubstitution.substitute_xml)\n\nclass PageElement(object):\n    \"\"\"Contains the navigational information for some part of 
the page\n    (either a tag or a piece of text)\"\"\"\n\n    # There are four possible values for the \"formatter\" argument passed in\n    # to methods like encode() and prettify():\n    #\n    # \"html\" - All Unicode characters with corresponding HTML entities\n    #   are converted to those entities on output.\n    # \"minimal\" - Bare ampersands and angle brackets are converted to\n    #   XML entities: &amp; &lt; &gt;\n    # None - The null formatter. Unicode characters are never\n    #   converted to entities.  This is not recommended, but it's\n    #   faster than \"minimal\".\n    # A function - This function will be called on every string that\n    #   needs to undergo entity substitution.\n    #\n\n    # In an HTML document, the default \"html\" and \"minimal\" functions\n    # will leave the contents of <script> and <style> tags alone. For\n    # an XML document, all tags will be given the same treatment.\n\n    HTML_FORMATTERS = {\n        \"html\" : HTMLAwareEntitySubstitution.substitute_html,\n        \"minimal\" : HTMLAwareEntitySubstitution.substitute_xml,\n        None : None\n        }\n\n    XML_FORMATTERS = {\n        \"html\" : EntitySubstitution.substitute_html,\n        \"minimal\" : EntitySubstitution.substitute_xml,\n        None : None\n        }\n\n    def format_string(self, s, formatter='minimal'):\n        \"\"\"Format the given string using the given formatter.\"\"\"\n        if not callable(formatter):\n            formatter = self._formatter_for_name(formatter)\n        if formatter is None:\n            output = s\n        else:\n            output = formatter(s)\n        return output\n\n    @property\n    def _is_xml(self):\n        \"\"\"Is this element part of an XML tree or an HTML tree?\n\n        This is used when mapping a formatter name (\"minimal\") to an\n        appropriate function (one that performs entity-substitution on\n        the contents of <script> and <style> tags, or not). 
It can be\n        inefficient, but it should be called very rarely.\n        \"\"\"\n        if self.known_xml is not None:\n            # Most of the time we will have determined this when the\n            # document is parsed.\n            return self.known_xml\n\n        # Otherwise, it's likely that this element was created by\n        # direct invocation of the constructor from within the user's\n        # Python code.\n        if self.parent is None:\n            # This is the top-level object. It should have .known_xml set\n            # from tree creation. If not, take a guess--BS is usually\n            # used on HTML markup.\n            return getattr(self, 'is_xml', False)\n        return self.parent._is_xml\n\n    def _formatter_for_name(self, name):\n        \"Look up a formatter function based on its name and the tree.\"\n        if self._is_xml:\n            return self.XML_FORMATTERS.get(\n                name, EntitySubstitution.substitute_xml)\n        else:\n            return self.HTML_FORMATTERS.get(\n                name, HTMLAwareEntitySubstitution.substitute_xml)\n\n    def setup(self, parent=None, previous_element=None, next_element=None,\n              previous_sibling=None, next_sibling=None):\n        \"\"\"Sets up the initial relations between this element and\n        other elements.\"\"\"\n        self.parent = parent\n\n        self.previous_element = previous_element\n        if previous_element is not None:\n            self.previous_element.next_element = self\n\n        self.next_element = next_element\n        if self.next_element:\n            self.next_element.previous_element = self\n\n        self.next_sibling = next_sibling\n        if self.next_sibling:\n            self.next_sibling.previous_sibling = self\n\n        if (not previous_sibling\n            and self.parent is not None and self.parent.contents):\n            previous_sibling = self.parent.contents[-1]\n\n        self.previous_sibling = previous_sibling\n    
    if previous_sibling:\n            self.previous_sibling.next_sibling = self\n\n    nextSibling = _alias(\"next_sibling\")  # BS3\n    previousSibling = _alias(\"previous_sibling\")  # BS3\n\n    def replace_with(self, replace_with):\n        if not self.parent:\n            raise ValueError(\n                \"Cannot replace one element with another when the \"\n                \"element to be replaced is not part of a tree.\")\n        if replace_with is self:\n            return\n        if replace_with is self.parent:\n            raise ValueError(\"Cannot replace a Tag with its parent.\")\n        old_parent = self.parent\n        my_index = self.parent.index(self)\n        self.extract()\n        old_parent.insert(my_index, replace_with)\n        return self\n    replaceWith = replace_with  # BS3\n\n    def unwrap(self):\n        my_parent = self.parent\n        if not self.parent:\n            raise ValueError(\n                \"Cannot replace an element with its contents when that \"\n                \"element is not part of a tree.\")\n        my_index = self.parent.index(self)\n        self.extract()\n        for child in reversed(self.contents[:]):\n            my_parent.insert(my_index, child)\n        return self\n    replace_with_children = unwrap\n    replaceWithChildren = unwrap  # BS3\n\n    def wrap(self, wrap_inside):\n        me = self.replace_with(wrap_inside)\n        wrap_inside.append(me)\n        return wrap_inside\n\n    def extract(self):\n        \"\"\"Destructively rips this element out of the tree.\"\"\"\n        if self.parent is not None:\n            del self.parent.contents[self.parent.index(self)]\n\n        #Find the two elements that would be next to each other if\n        #this element (and any children) hadn't been parsed. 
Connect\n        #the two.\n        last_child = self._last_descendant()\n        next_element = last_child.next_element\n\n        if (self.previous_element is not None and\n            self.previous_element is not next_element):\n            self.previous_element.next_element = next_element\n        if next_element is not None and next_element is not self.previous_element:\n            next_element.previous_element = self.previous_element\n        self.previous_element = None\n        last_child.next_element = None\n\n        self.parent = None\n        if (self.previous_sibling is not None\n            and self.previous_sibling is not self.next_sibling):\n            self.previous_sibling.next_sibling = self.next_sibling\n        if (self.next_sibling is not None\n            and self.next_sibling is not self.previous_sibling):\n            self.next_sibling.previous_sibling = self.previous_sibling\n        self.previous_sibling = self.next_sibling = None\n        return self\n\n    def _last_descendant(self, is_initialized=True, accept_self=True):\n        \"Finds the last element beneath this object to be parsed.\"\n        if is_initialized and self.next_sibling:\n            last_child = self.next_sibling.previous_element\n        else:\n            last_child = self\n            while isinstance(last_child, Tag) and last_child.contents:\n                last_child = last_child.contents[-1]\n        if not accept_self and last_child is self:\n            last_child = None\n        return last_child\n    # BS3: Not part of the API!\n    _lastRecursiveChild = _last_descendant\n\n    def insert(self, position, new_child):\n        if new_child is None:\n            raise ValueError(\"Cannot insert None into a tag.\")\n        if new_child is self:\n            raise ValueError(\"Cannot insert a tag into itself.\")\n        if (isinstance(new_child, basestring)\n            and not isinstance(new_child, NavigableString)):\n            new_child = 
NavigableString(new_child)\n\n        position = min(position, len(self.contents))\n        if hasattr(new_child, 'parent') and new_child.parent is not None:\n            # We're 'inserting' an element that's already one\n            # of this object's children.\n            if new_child.parent is self:\n                current_index = self.index(new_child)\n                if current_index < position:\n                    # We're moving this element further down the list\n                    # of this object's children. That means that when\n                    # we extract this element, our target index will\n                    # jump down one.\n                    position -= 1\n            new_child.extract()\n\n        new_child.parent = self\n        previous_child = None\n        if position == 0:\n            new_child.previous_sibling = None\n            new_child.previous_element = self\n        else:\n            previous_child = self.contents[position - 1]\n            new_child.previous_sibling = previous_child\n            new_child.previous_sibling.next_sibling = new_child\n            new_child.previous_element = previous_child._last_descendant(False)\n        if new_child.previous_element is not None:\n            new_child.previous_element.next_element = new_child\n\n        new_childs_last_element = new_child._last_descendant(False)\n\n        if position >= len(self.contents):\n            new_child.next_sibling = None\n\n            parent = self\n            parents_next_sibling = None\n            while parents_next_sibling is None and parent is not None:\n                parents_next_sibling = parent.next_sibling\n                parent = parent.parent\n                if parents_next_sibling is not None:\n                    # We found the element that comes next in the document.\n                    break\n            if parents_next_sibling is not None:\n                new_childs_last_element.next_element = parents_next_sibling\n        
    else:\n                # The last element of this tag is the last element in\n                # the document.\n                new_childs_last_element.next_element = None\n        else:\n            next_child = self.contents[position]\n            new_child.next_sibling = next_child\n            if new_child.next_sibling is not None:\n                new_child.next_sibling.previous_sibling = new_child\n            new_childs_last_element.next_element = next_child\n\n        if new_childs_last_element.next_element is not None:\n            new_childs_last_element.next_element.previous_element = new_childs_last_element\n        self.contents.insert(position, new_child)\n\n    def append(self, tag):\n        \"\"\"Appends the given tag to the contents of this tag.\"\"\"\n        self.insert(len(self.contents), tag)\n\n    def insert_before(self, predecessor):\n        \"\"\"Makes the given element the immediate predecessor of this one.\n\n        The two elements will have the same parent, and the given element\n        will be immediately before this one.\n        \"\"\"\n        if self is predecessor:\n            raise ValueError(\"Can't insert an element before itself.\")\n        parent = self.parent\n        if parent is None:\n            raise ValueError(\n                \"Element has no parent, so 'before' has no meaning.\")\n        # Extract first so that the index won't be screwed up if they\n        # are siblings.\n        if isinstance(predecessor, PageElement):\n            predecessor.extract()\n        index = parent.index(self)\n        parent.insert(index, predecessor)\n\n    def insert_after(self, successor):\n        \"\"\"Makes the given element the immediate successor of this one.\n\n        The two elements will have the same parent, and the given element\n        will be immediately after this one.\n        \"\"\"\n        if self is successor:\n            raise ValueError(\"Can't insert an element after itself.\")\n        parent = 
self.parent\n        if parent is None:\n            raise ValueError(\n                \"Element has no parent, so 'after' has no meaning.\")\n        # Extract first so that the index won't be screwed up if they\n        # are siblings.\n        if isinstance(successor, PageElement):\n            successor.extract()\n        index = parent.index(self)\n        parent.insert(index+1, successor)\n\n    def find_next(self, name=None, attrs={}, text=None, **kwargs):\n        \"\"\"Returns the first item that matches the given criteria and\n        appears after this Tag in the document.\"\"\"\n        return self._find_one(self.find_all_next, name, attrs, text, **kwargs)\n    findNext = find_next  # BS3\n\n    def find_all_next(self, name=None, attrs={}, text=None, limit=None,\n                    **kwargs):\n        \"\"\"Returns all items that match the given criteria and appear\n        after this Tag in the document.\"\"\"\n        return self._find_all(name, attrs, text, limit, self.next_elements,\n                             **kwargs)\n    findAllNext = find_all_next  # BS3\n\n    def find_next_sibling(self, name=None, attrs={}, text=None, **kwargs):\n        \"\"\"Returns the closest sibling to this Tag that matches the\n        given criteria and appears after this Tag in the document.\"\"\"\n        return self._find_one(self.find_next_siblings, name, attrs, text,\n                             **kwargs)\n    findNextSibling = find_next_sibling  # BS3\n\n    def find_next_siblings(self, name=None, attrs={}, text=None, limit=None,\n                           **kwargs):\n        \"\"\"Returns the siblings of this Tag that match the given\n        criteria and appear after this Tag in the document.\"\"\"\n        return self._find_all(name, attrs, text, limit,\n                              self.next_siblings, **kwargs)\n    findNextSiblings = find_next_siblings   # BS3\n    fetchNextSiblings = find_next_siblings  # BS2\n\n    def find_previous(self, name=None, 
attrs={}, text=None, **kwargs):\n        \"\"\"Returns the first item that matches the given criteria and\n        appears before this Tag in the document.\"\"\"\n        return self._find_one(\n            self.find_all_previous, name, attrs, text, **kwargs)\n    findPrevious = find_previous  # BS3\n\n    def find_all_previous(self, name=None, attrs={}, text=None, limit=None,\n                        **kwargs):\n        \"\"\"Returns all items that match the given criteria and appear\n        before this Tag in the document.\"\"\"\n        return self._find_all(name, attrs, text, limit, self.previous_elements,\n                           **kwargs)\n    findAllPrevious = find_all_previous  # BS3\n    fetchPrevious = find_all_previous    # BS2\n\n    def find_previous_sibling(self, name=None, attrs={}, text=None, **kwargs):\n        \"\"\"Returns the closest sibling to this Tag that matches the\n        given criteria and appears before this Tag in the document.\"\"\"\n        return self._find_one(self.find_previous_siblings, name, attrs, text,\n                             **kwargs)\n    findPreviousSibling = find_previous_sibling  # BS3\n\n    def find_previous_siblings(self, name=None, attrs={}, text=None,\n                               limit=None, **kwargs):\n        \"\"\"Returns the siblings of this Tag that match the given\n        criteria and appear before this Tag in the document.\"\"\"\n        return self._find_all(name, attrs, text, limit,\n                              self.previous_siblings, **kwargs)\n    findPreviousSiblings = find_previous_siblings   # BS3\n    fetchPreviousSiblings = find_previous_siblings  # BS2\n\n    def find_parent(self, name=None, attrs={}, **kwargs):\n        \"\"\"Returns the closest parent of this Tag that matches the given\n        criteria.\"\"\"\n        # NOTE: We can't use _find_one because findParents takes a different\n        # set of arguments.\n        r = None\n        l = self.find_parents(name, attrs, 1, 
**kwargs)\n        if l:\n            r = l[0]\n        return r\n    findParent = find_parent  # BS3\n\n    def find_parents(self, name=None, attrs={}, limit=None, **kwargs):\n        \"\"\"Returns the parents of this Tag that match the given\n        criteria.\"\"\"\n\n        return self._find_all(name, attrs, None, limit, self.parents,\n                             **kwargs)\n    findParents = find_parents   # BS3\n    fetchParents = find_parents  # BS2\n\n    @property\n    def next(self):\n        return self.next_element\n\n    @property\n    def previous(self):\n        return self.previous_element\n\n    #These methods do the real heavy lifting.\n\n    def _find_one(self, method, name, attrs, text, **kwargs):\n        r = None\n        l = method(name, attrs, text, 1, **kwargs)\n        if l:\n            r = l[0]\n        return r\n\n    def _find_all(self, name, attrs, text, limit, generator, **kwargs):\n        \"Iterates over a generator looking for things that match.\"\n\n        if text is None and 'string' in kwargs:\n            text = kwargs['string']\n            del kwargs['string']\n\n        if isinstance(name, SoupStrainer):\n            strainer = name\n        else:\n            strainer = SoupStrainer(name, attrs, text, **kwargs)\n\n        if text is None and not limit and not attrs and not kwargs:\n            if name is True or name is None:\n                # Optimization to find all tags.\n                result = (element for element in generator\n                          if isinstance(element, Tag))\n                return ResultSet(strainer, result)\n            elif isinstance(name, basestring):\n                # Optimization to find all tags with a given name.\n                result = (element for element in generator\n                          if isinstance(element, Tag)\n                            and element.name == name)\n                return ResultSet(strainer, result)\n        results = ResultSet(strainer)\n        
while True:\n            try:\n                i = next(generator)\n            except StopIteration:\n                break\n            if i:\n                found = strainer.search(i)\n                if found:\n                    results.append(found)\n                    if limit and len(results) >= limit:\n                        break\n        return results\n\n    #These generators can be used to navigate starting from both\n    #NavigableStrings and Tags.\n    @property\n    def next_elements(self):\n        i = self.next_element\n        while i is not None:\n            yield i\n            i = i.next_element\n\n    @property\n    def next_siblings(self):\n        i = self.next_sibling\n        while i is not None:\n            yield i\n            i = i.next_sibling\n\n    @property\n    def previous_elements(self):\n        i = self.previous_element\n        while i is not None:\n            yield i\n            i = i.previous_element\n\n    @property\n    def previous_siblings(self):\n        i = self.previous_sibling\n        while i is not None:\n            yield i\n            i = i.previous_sibling\n\n    @property\n    def parents(self):\n        i = self.parent\n        while i is not None:\n            yield i\n            i = i.parent\n\n    # Methods for supporting CSS selectors.\n\n    tag_name_re = re.compile('^[a-zA-Z0-9][-.a-zA-Z0-9:_]*$')\n\n    # /^([a-zA-Z0-9][-.a-zA-Z0-9:_]*)\\[(\\w+)([=~\\|\\^\\$\\*]?)=?\"?([^\\]\"]*)\"?\\]$/\n    #   \\---------------------------/  \\---/\\-------------/    \\-------/\n    #     |                              |         |               |\n    #     |                              |         |           The value\n    #     |                              |    ~,|,^,$,* or =\n    #     |                           Attribute\n    #    Tag\n    attribselect_re = re.compile(\n        r'^(?P<tag>[a-zA-Z0-9][-.a-zA-Z0-9:_]*)?\\[(?P<attribute>[\\w-]+)(?P<operator>[=~\\|\\^\\$\\*]?)' +\n        
r'=?\"?(?P<value>[^\\]\"]*)\"?\\]$'\n        )\n\n    def _attr_value_as_string(self, value, default=None):\n        \"\"\"Force an attribute value into a string representation.\n\n        A multi-valued attribute will be converted into a\n        space-separated string.\n        \"\"\"\n        value = self.get(value, default)\n        if isinstance(value, list) or isinstance(value, tuple):\n            value = \" \".join(value)\n        return value\n\n    def _tag_name_matches_and(self, function, tag_name):\n        if not tag_name:\n            return function\n        else:\n            def _match(tag):\n                return tag.name == tag_name and function(tag)\n            return _match\n\n    def _attribute_checker(self, operator, attribute, value=''):\n        \"\"\"Create a function that performs a CSS selector operation.\n\n        Takes an operator, attribute and optional value. Returns a\n        function that will return True for elements that match that\n        combination.\n        \"\"\"\n        if operator == '=':\n            # string representation of `attribute` is equal to `value`\n            return lambda el: el._attr_value_as_string(attribute) == value\n        elif operator == '~':\n            # space-separated list representation of `attribute`\n            # contains `value`\n            def _includes_value(element):\n                attribute_value = element.get(attribute, [])\n                if not isinstance(attribute_value, list):\n                    attribute_value = attribute_value.split()\n                return value in attribute_value\n            return _includes_value\n        elif operator == '^':\n            # string representation of `attribute` starts with `value`\n            return lambda el: el._attr_value_as_string(\n                attribute, '').startswith(value)\n        elif operator == '$':\n            # string representation of `attribute` ends with `value`\n            return lambda el: 
el._attr_value_as_string(\n                attribute, '').endswith(value)\n        elif operator == '*':\n            # string representation of `attribute` contains `value`\n            return lambda el: value in el._attr_value_as_string(attribute, '')\n        elif operator == '|':\n            # string representation of `attribute` is either exactly\n            # `value` or starts with `value` and then a dash.\n            def _is_or_starts_with_dash(element):\n                attribute_value = element._attr_value_as_string(attribute, '')\n                return (attribute_value == value or attribute_value.startswith(\n                        value + '-'))\n            return _is_or_starts_with_dash\n        else:\n            return lambda el: el.has_attr(attribute)\n\n    # Old non-property versions of the generators, for backwards\n    # compatibility with BS3.\n    def nextGenerator(self):\n        return self.next_elements\n\n    def nextSiblingGenerator(self):\n        return self.next_siblings\n\n    def previousGenerator(self):\n        return self.previous_elements\n\n    def previousSiblingGenerator(self):\n        return self.previous_siblings\n\n    def parentGenerator(self):\n        return self.parents\n\n\nclass NavigableString(unicode, PageElement):\n\n    PREFIX = ''\n    SUFFIX = ''\n\n    # We can't tell just by looking at a string whether it's contained\n    # in an XML document or an HTML document.\n\n    known_xml = None\n\n    def __new__(cls, value):\n        \"\"\"Create a new NavigableString.\n\n        When unpickling a NavigableString, this method is called with\n        the string in DEFAULT_OUTPUT_ENCODING. 
That encoding needs to be\n        passed in to the superclass's __new__ or the superclass won't know\n        how to handle non-ASCII characters.\n        \"\"\"\n        if isinstance(value, unicode):\n            u = unicode.__new__(cls, value)\n        else:\n            u = unicode.__new__(cls, value, DEFAULT_OUTPUT_ENCODING)\n        u.setup()\n        return u\n\n    def __copy__(self):\n        \"\"\"A copy of a NavigableString has the same contents and class\n        as the original, but it is not connected to the parse tree.\n        \"\"\"\n        return type(self)(self)\n\n    def __getnewargs__(self):\n        return (unicode(self),)\n\n    def __getattr__(self, attr):\n        \"\"\"text.string gives you text. This is for backwards\n        compatibility for Navigable*String, but for CData* it lets you\n        get the string without the CData wrapper.\"\"\"\n        if attr == 'string':\n            return self\n        else:\n            raise AttributeError(\n                \"'%s' object has no attribute '%s'\" % (\n                    self.__class__.__name__, attr))\n\n    def output_ready(self, formatter=\"minimal\"):\n        output = self.format_string(self, formatter)\n        return self.PREFIX + output + self.SUFFIX\n\n    @property\n    def name(self):\n        return None\n\n    @name.setter\n    def name(self, name):\n        raise AttributeError(\"A NavigableString cannot be given a name.\")\n\nclass PreformattedString(NavigableString):\n    \"\"\"A NavigableString not subject to the normal formatting rules.\n\n    The string will be passed into the formatter (to trigger side effects),\n    but the return value will be ignored.\n    \"\"\"\n\n    def output_ready(self, formatter=\"minimal\"):\n        \"\"\"CData strings are passed into the formatter.\n        But the return value is ignored.\"\"\"\n        self.format_string(self, formatter)\n        return self.PREFIX + self + self.SUFFIX\n\nclass CData(PreformattedString):\n\n    
PREFIX = u'<![CDATA['\n    SUFFIX = u']]>'\n\nclass ProcessingInstruction(PreformattedString):\n    \"\"\"An SGML processing instruction.\"\"\"\n\n    PREFIX = u'<?'\n    SUFFIX = u'>'\n\nclass XMLProcessingInstruction(ProcessingInstruction):\n    \"\"\"An XML processing instruction.\"\"\"\n    PREFIX = u'<?'\n    SUFFIX = u'?>'\n\nclass Comment(PreformattedString):\n\n    PREFIX = u'<!--'\n    SUFFIX = u'-->'\n\n\nclass Declaration(PreformattedString):\n    PREFIX = u'<?'\n    SUFFIX = u'?>'\n\n\nclass Doctype(PreformattedString):\n\n    @classmethod\n    def for_name_and_ids(cls, name, pub_id, system_id):\n        value = name or ''\n        if pub_id is not None:\n            value += ' PUBLIC \"%s\"' % pub_id\n            if system_id is not None:\n                value += ' \"%s\"' % system_id\n        elif system_id is not None:\n            value += ' SYSTEM \"%s\"' % system_id\n\n        return Doctype(value)\n\n    PREFIX = u'<!DOCTYPE '\n    SUFFIX = u'>\\n'\n\n\nclass Tag(PageElement):\n\n    \"\"\"Represents a found HTML tag with its attributes and contents.\"\"\"\n\n    def __init__(self, parser=None, builder=None, name=None, namespace=None,\n                 prefix=None, attrs=None, parent=None, previous=None,\n                 is_xml=None):\n        \"Basic constructor.\"\n\n        if parser is None:\n            self.parser_class = None\n        else:\n            # We don't actually store the parser object: that lets extracted\n            # chunks be garbage-collected.\n            self.parser_class = parser.__class__\n        if name is None:\n            raise ValueError(\"No value provided for new tag's name.\")\n        self.name = name\n        self.namespace = namespace\n        self.prefix = prefix\n        if builder is not None:\n            preserve_whitespace_tags = builder.preserve_whitespace_tags\n        else:\n            if is_xml:\n                preserve_whitespace_tags = []\n            else:\n                
preserve_whitespace_tags = HTMLAwareEntitySubstitution.preserve_whitespace_tags\n        self.preserve_whitespace_tags = preserve_whitespace_tags\n        if attrs is None:\n            attrs = {}\n        elif attrs:\n            if builder is not None and builder.cdata_list_attributes:\n                attrs = builder._replace_cdata_list_attribute_values(\n                    self.name, attrs)\n            else:\n                attrs = dict(attrs)\n        else:\n            attrs = dict(attrs)\n\n        # If possible, determine ahead of time whether this tag is an\n        # XML tag.\n        if builder:\n            self.known_xml = builder.is_xml\n        else:\n            self.known_xml = is_xml\n        self.attrs = attrs\n        self.contents = []\n        self.setup(parent, previous)\n        self.hidden = False\n\n        # Set up any substitutions, such as the charset in a META tag.\n        if builder is not None:\n            builder.set_up_substitutions(self)\n            self.can_be_empty_element = builder.can_be_empty_element(name)\n        else:\n            self.can_be_empty_element = False\n\n    parserClass = _alias(\"parser_class\")  # BS3\n\n    def __copy__(self):\n        \"\"\"A copy of a Tag is a new Tag, unconnected to the parse tree.\n        Its contents are a copy of the old Tag's contents.\n        \"\"\"\n        clone = type(self)(None, self.builder, self.name, self.namespace,\n                           self.nsprefix, self.attrs, is_xml=self._is_xml)\n        for attr in ('can_be_empty_element', 'hidden'):\n            setattr(clone, attr, getattr(self, attr))\n        for child in self.contents:\n            clone.append(child.__copy__())\n        return clone\n\n    @property\n    def is_empty_element(self):\n        \"\"\"Is this tag an empty-element tag? 
(aka a self-closing tag)\n\n        A tag that has contents is never an empty-element tag.\n\n        A tag that has no contents may or may not be an empty-element\n        tag. It depends on the builder used to create the tag. If the\n        builder has a designated list of empty-element tags, then only\n        a tag whose name shows up in that list is considered an\n        empty-element tag.\n\n        If the builder has no designated list of empty-element tags,\n        then any tag with no contents is an empty-element tag.\n        \"\"\"\n        return len(self.contents) == 0 and self.can_be_empty_element\n    isSelfClosing = is_empty_element  # BS3\n\n    @property\n    def string(self):\n        \"\"\"Convenience property to get the single string within this tag.\n\n        :Return: If this tag has a single string child, return value\n         is that string. If this tag has no children, or more than one\n         child, return value is None. If this tag has one child tag,\n         return value is the 'string' attribute of the child tag,\n         recursively.\n        \"\"\"\n        if len(self.contents) != 1:\n            return None\n        child = self.contents[0]\n        if isinstance(child, NavigableString):\n            return child\n        return child.string\n\n    @string.setter\n    def string(self, string):\n        self.clear()\n        self.append(string.__class__(string))\n\n    def _all_strings(self, strip=False, types=(NavigableString, CData)):\n        \"\"\"Yield all strings of certain classes, possibly stripping them.\n\n        By default, yields only NavigableString and CData objects. 
So\n        no comments, processing instructions, etc.\n        \"\"\"\n        for descendant in self.descendants:\n            if (\n                (types is None and not isinstance(descendant, NavigableString))\n                or\n                (types is not None and type(descendant) not in types)):\n                continue\n            if strip:\n                descendant = descendant.strip()\n                if len(descendant) == 0:\n                    continue\n            yield descendant\n\n    strings = property(_all_strings)\n\n    @property\n    def stripped_strings(self):\n        for string in self._all_strings(True):\n            yield string\n\n    def get_text(self, separator=u\"\", strip=False,\n                 types=(NavigableString, CData)):\n        \"\"\"\n        Get all child strings, concatenated using the given separator.\n        \"\"\"\n        return separator.join([s for s in self._all_strings(\n                    strip, types=types)])\n    getText = get_text\n    text = property(get_text)\n\n    def decompose(self):\n        \"\"\"Recursively destroys the contents of this tree.\"\"\"\n        self.extract()\n        i = self\n        while i is not None:\n            next = i.next_element\n            i.__dict__.clear()\n            i.contents = []\n            i = next\n\n    def clear(self, decompose=False):\n        \"\"\"\n        Extract all children. If decompose is True, decompose instead.\n        \"\"\"\n        if decompose:\n            for element in self.contents[:]:\n                if isinstance(element, Tag):\n                    element.decompose()\n                else:\n                    element.extract()\n        else:\n            for element in self.contents[:]:\n                element.extract()\n\n    def index(self, element):\n        \"\"\"\n        Find the index of a child by identity, not value. 
Avoids issues with\n        tag.contents.index(element) getting the index of equal elements.\n        \"\"\"\n        for i, child in enumerate(self.contents):\n            if child is element:\n                return i\n        raise ValueError(\"Tag.index: element not in tag\")\n\n    def get(self, key, default=None):\n        \"\"\"Returns the value of the 'key' attribute for the tag, or\n        the value given for 'default' if it doesn't have that\n        attribute.\"\"\"\n        return self.attrs.get(key, default)\n\n    def has_attr(self, key):\n        return key in self.attrs\n\n    def __hash__(self):\n        return str(self).__hash__()\n\n    def __getitem__(self, key):\n        \"\"\"tag[key] returns the value of the 'key' attribute for the tag,\n        and throws an exception if it's not there.\"\"\"\n        return self.attrs[key]\n\n    def __iter__(self):\n        \"Iterating over a tag iterates over its contents.\"\n        return iter(self.contents)\n\n    def __len__(self):\n        \"The length of a tag is the length of its list of contents.\"\n        return len(self.contents)\n\n    def __contains__(self, x):\n        return x in self.contents\n\n    def __nonzero__(self):\n        \"A tag is non-None even if it has no contents.\"\n        return True\n\n    def __setitem__(self, key, value):\n        \"\"\"Setting tag[key] sets the value of the 'key' attribute for the\n        tag.\"\"\"\n        self.attrs[key] = value\n\n    def __delitem__(self, key):\n        \"Deleting tag[key] deletes all 'key' attributes for the tag.\"\n        self.attrs.pop(key, None)\n\n    def __call__(self, *args, **kwargs):\n        \"\"\"Calling a tag like a function is the same as calling its\n        find_all() method. Eg. 
tag('a') returns a list of all the A tags\n        found within this tag.\"\"\"\n        return self.find_all(*args, **kwargs)\n\n    def __getattr__(self, tag):\n        #print \"Getattr %s.%s\" % (self.__class__, tag)\n        if len(tag) > 3 and tag.endswith('Tag'):\n            # BS3: soup.aTag -> soup.find(\"a\")\n            tag_name = tag[:-3]\n            warnings.warn(\n                '.%sTag is deprecated, use .find(\"%s\") instead.' % (\n                    tag_name, tag_name))\n            return self.find(tag_name)\n        # We special case contents to avoid recursion.\n        elif not tag.startswith(\"__\") and not tag == \"contents\":\n            return self.find(tag)\n        raise AttributeError(\n            \"'%s' object has no attribute '%s'\" % (self.__class__, tag))\n\n    def __eq__(self, other):\n        \"\"\"Returns true iff this tag has the same name, the same attributes,\n        and the same contents (recursively) as the given tag.\"\"\"\n        if self is other:\n            return True\n        if (not hasattr(other, 'name') or\n            not hasattr(other, 'attrs') or\n            not hasattr(other, 'contents') or\n            self.name != other.name or\n            self.attrs != other.attrs or\n            len(self) != len(other)):\n            return False\n        for i, my_child in enumerate(self.contents):\n            if my_child != other.contents[i]:\n                return False\n        return True\n\n    def __ne__(self, other):\n        \"\"\"Returns true iff this tag is not equal to the other tag,\n        as defined in __eq__.\"\"\"\n        return not self == other\n\n    def __repr__(self, encoding=\"unicode-escape\"):\n        \"\"\"Renders this tag as a string.\"\"\"\n        if PY3K:\n            # \"The return value must be a string object\", i.e. Unicode\n            return self.decode()\n        else:\n            # \"The return value must be a string object\", i.e. 
a bytestring.\n            # By convention, the return value of __repr__ should also be\n            # an ASCII string.\n            return self.encode(encoding)\n\n    def __unicode__(self):\n        return self.decode()\n\n    def __str__(self):\n        if PY3K:\n            return self.decode()\n        else:\n            return self.encode()\n\n    if PY3K:\n        __str__ = __repr__ = __unicode__\n\n    def encode(self, encoding=DEFAULT_OUTPUT_ENCODING,\n               indent_level=None, formatter=\"minimal\",\n               errors=\"xmlcharrefreplace\"):\n        # Turn the data structure into Unicode, then encode the\n        # Unicode.\n        u = self.decode(indent_level, encoding, formatter)\n        return u.encode(encoding, errors)\n\n    def _should_pretty_print(self, indent_level):\n        \"\"\"Should this tag be pretty-printed?\"\"\"\n\n        return (\n            indent_level is not None\n            and self.name not in self.preserve_whitespace_tags\n        )\n\n    def decode(self, indent_level=None,\n               eventual_encoding=DEFAULT_OUTPUT_ENCODING,\n               formatter=\"minimal\"):\n        \"\"\"Returns a Unicode representation of this tag and its contents.\n\n        :param eventual_encoding: The tag is destined to be\n           encoded into this encoding. This method is _not_\n           responsible for performing that encoding. This information\n           is passed in so that it can be substituted in if the\n           document contains a <META> tag that mentions the document's\n           encoding.\n        \"\"\"\n\n        # First off, turn a string formatter into a function. 
This\n        # will stop the lookup from happening over and over again.\n        if not callable(formatter):\n            formatter = self._formatter_for_name(formatter)\n\n        attrs = []\n        if self.attrs:\n            for key, val in sorted(self.attrs.items()):\n                if val is None:\n                    decoded = key\n                else:\n                    if isinstance(val, list) or isinstance(val, tuple):\n                        val = ' '.join(val)\n                    elif not isinstance(val, basestring):\n                        val = unicode(val)\n                    elif (\n                        isinstance(val, AttributeValueWithCharsetSubstitution)\n                        and eventual_encoding is not None):\n                        val = val.encode(eventual_encoding)\n\n                    text = self.format_string(val, formatter)\n                    decoded = (\n                        unicode(key) + '='\n                        + EntitySubstitution.quoted_attribute_value(text))\n                attrs.append(decoded)\n        close = ''\n        closeTag = ''\n\n        prefix = ''\n        if self.prefix:\n            prefix = self.prefix + \":\"\n\n        if self.is_empty_element:\n            close = '/'\n        else:\n            closeTag = '</%s%s>' % (prefix, self.name)\n\n        pretty_print = self._should_pretty_print(indent_level)\n        space = ''\n        indent_space = ''\n        if indent_level is not None:\n            indent_space = (' ' * (indent_level - 1))\n        if pretty_print:\n            space = indent_space\n            indent_contents = indent_level + 1\n        else:\n            indent_contents = None\n        contents = self.decode_contents(\n            indent_contents, eventual_encoding, formatter)\n\n        if self.hidden:\n            # This is the 'document root' object.\n            s = contents\n        else:\n            s = []\n            attribute_string = ''\n            if 
attrs:\n                attribute_string = ' ' + ' '.join(attrs)\n            if indent_level is not None:\n                # Even if this particular tag is not pretty-printed,\n                # we should indent up to the start of the tag.\n                s.append(indent_space)\n            s.append('<%s%s%s%s>' % (\n                    prefix, self.name, attribute_string, close))\n            if pretty_print:\n                s.append(\"\\n\")\n            s.append(contents)\n            if pretty_print and contents and contents[-1] != \"\\n\":\n                s.append(\"\\n\")\n            if pretty_print and closeTag:\n                s.append(space)\n            s.append(closeTag)\n            if indent_level is not None and closeTag and self.next_sibling:\n                # Even if this particular tag is not pretty-printed,\n                # we're now done with the tag, and we should add a\n                # newline if appropriate.\n                s.append(\"\\n\")\n            s = ''.join(s)\n        return s\n\n    def prettify(self, encoding=None, formatter=\"minimal\"):\n        if encoding is None:\n            return self.decode(True, formatter=formatter)\n        else:\n            return self.encode(encoding, True, formatter=formatter)\n\n    def decode_contents(self, indent_level=None,\n                       eventual_encoding=DEFAULT_OUTPUT_ENCODING,\n                       formatter=\"minimal\"):\n        \"\"\"Renders the contents of this tag as a Unicode string.\n\n        :param indent_level: Each line of the rendering will be\n           indented this many spaces.\n\n        :param eventual_encoding: The tag is destined to be\n           encoded into this encoding. This method is _not_\n           responsible for performing that encoding. 
This information\n           is passed in so that it can be substituted in if the\n           document contains a <META> tag that mentions the document's\n           encoding.\n\n        :param formatter: The output formatter responsible for converting\n           entities to Unicode characters.\n        \"\"\"\n        # First off, turn a string formatter into a function. This\n        # will stop the lookup from happening over and over again.\n        if not callable(formatter):\n            formatter = self._formatter_for_name(formatter)\n\n        pretty_print = (indent_level is not None)\n        s = []\n        for c in self:\n            text = None\n            if isinstance(c, NavigableString):\n                text = c.output_ready(formatter)\n            elif isinstance(c, Tag):\n                s.append(c.decode(indent_level, eventual_encoding,\n                                  formatter))\n            if text and indent_level and not self.name == 'pre':\n                text = text.strip()\n            if text:\n                if pretty_print and not self.name == 'pre':\n                    s.append(\" \" * (indent_level - 1))\n                s.append(text)\n                if pretty_print and not self.name == 'pre':\n                    s.append(\"\\n\")\n        return ''.join(s)\n\n    def encode_contents(\n        self, indent_level=None, encoding=DEFAULT_OUTPUT_ENCODING,\n        formatter=\"minimal\"):\n        \"\"\"Renders the contents of this tag as a bytestring.\n\n        :param indent_level: Each line of the rendering will be\n           indented this many spaces.\n\n        :param encoding: The bytestring will be in this encoding.\n\n        :param formatter: The output formatter responsible for converting\n           entities to Unicode characters.\n        \"\"\"\n\n        contents = self.decode_contents(indent_level, encoding, formatter)\n        return contents.encode(encoding)\n\n    # Old method for BS3 compatibility\n   
 def renderContents(self, encoding=DEFAULT_OUTPUT_ENCODING,\n                       prettyPrint=False, indentLevel=0):\n        if not prettyPrint:\n            indentLevel = None\n        return self.encode_contents(\n            indent_level=indentLevel, encoding=encoding)\n\n    #Soup methods\n\n    def find(self, name=None, attrs={}, recursive=True, text=None,\n             **kwargs):\n        \"\"\"Return only the first child of this Tag matching the given\n        criteria.\"\"\"\n        r = None\n        l = self.find_all(name, attrs, recursive, text, 1, **kwargs)\n        if l:\n            r = l[0]\n        return r\n    findChild = find\n\n    def find_all(self, name=None, attrs={}, recursive=True, text=None,\n                 limit=None, **kwargs):\n        \"\"\"Extracts a list of Tag objects that match the given\n        criteria.  You can specify the name of the Tag and any\n        attributes you want the Tag to have.\n\n        The value of a key-value pair in the 'attrs' map can be a\n        string, a list of strings, a regular expression object, or a\n        callable that takes a string and returns whether or not the\n        string matches for some custom definition of 'matches'. 
The\n        same is true of the tag name.\"\"\"\n\n        generator = self.descendants\n        if not recursive:\n            generator = self.children\n        return self._find_all(name, attrs, text, limit, generator, **kwargs)\n    findAll = find_all       # BS3\n    findChildren = find_all  # BS2\n\n    #Generator methods\n    @property\n    def children(self):\n        # return iter() to make the purpose of the method clear\n        return iter(self.contents)  # XXX This seems to be untested.\n\n    @property\n    def descendants(self):\n        if not len(self.contents):\n            return\n        stopNode = self._last_descendant().next_element\n        current = self.contents[0]\n        while current is not stopNode:\n            yield current\n            current = current.next_element\n\n    # CSS selector code\n\n    _selector_combinators = ['>', '+', '~']\n    _select_debug = False\n    quoted_colon = re.compile('\"[^\"]*:[^\"]*\"')\n    def select_one(self, selector):\n        \"\"\"Perform a CSS selection operation on the current element.\"\"\"\n        value = self.select(selector, limit=1)\n        if value:\n            return value[0]\n        return None\n\n    def select(self, selector, _candidate_generator=None, limit=None):\n        \"\"\"Perform a CSS selection operation on the current element.\"\"\"\n\n        # Handle grouping selectors if ',' exists, ie: p,a\n        if ',' in selector:\n            context = []\n            for partial_selector in selector.split(','):\n                partial_selector = partial_selector.strip()\n                if partial_selector == '':\n                    raise ValueError('Invalid group selection syntax: %s' % selector)\n                candidates = self.select(partial_selector, limit=limit)\n                for candidate in candidates:\n                    if candidate not in context:\n                        context.append(candidate)\n\n                if limit and len(context) >= limit:\n       
             break\n            return context\n        tokens = shlex.split(selector)\n        current_context = [self]\n\n        if tokens[-1] in self._selector_combinators:\n            raise ValueError(\n                'Final combinator \"%s\" is missing an argument.' % tokens[-1])\n\n        if self._select_debug:\n            print 'Running CSS selector \"%s\"' % selector\n\n        for index, token in enumerate(tokens):\n            new_context = []\n            new_context_ids = set([])\n\n            if tokens[index-1] in self._selector_combinators:\n                # This token was consumed by the previous combinator. Skip it.\n                if self._select_debug:\n                    print '  Token was consumed by the previous combinator.'\n                continue\n\n            if self._select_debug:\n                print ' Considering token \"%s\"' % token\n            recursive_candidate_generator = None\n            tag_name = None\n\n            # Each operation corresponds to a checker function, a rule\n            # for determining whether a candidate matches the\n            # selector. Candidates are generated by the active\n            # iterator.\n            checker = None\n\n            m = self.attribselect_re.match(token)\n            if m is not None:\n                # Attribute selector\n                tag_name, attribute, operator, value = m.groups()\n                checker = self._attribute_checker(operator, attribute, value)\n\n            elif '#' in token:\n                # ID selector\n                tag_name, tag_id = token.split('#', 1)\n                def id_matches(tag):\n                    return tag.get('id', None) == tag_id\n                checker = id_matches\n\n            elif '.' 
in token:\n                # Class selector\n                tag_name, klass = token.split('.', 1)\n                classes = set(klass.split('.'))\n                def classes_match(candidate):\n                    return classes.issubset(candidate.get('class', []))\n                checker = classes_match\n\n            elif ':' in token and not self.quoted_colon.search(token):\n                # Pseudo-class\n                tag_name, pseudo = token.split(':', 1)\n                if tag_name == '':\n                    raise ValueError(\n                        \"A pseudo-class must be prefixed with a tag name.\")\n                pseudo_attributes = re.match(r'([a-zA-Z\d-]+)\(([a-zA-Z\d]+)\)', pseudo)\n                found = []\n                if pseudo_attributes is None:\n                    pseudo_type = pseudo\n                    pseudo_value = None\n                else:\n                    pseudo_type, pseudo_value = pseudo_attributes.groups()\n                if pseudo_type == 'nth-of-type':\n                    try:\n                        pseudo_value = int(pseudo_value)\n                    except (TypeError, ValueError):\n                        raise NotImplementedError(\n                            'Only numeric values are currently supported for the nth-of-type pseudo-class.')\n                    if pseudo_value < 1:\n                        raise ValueError(\n                            'nth-of-type pseudo-class value must be at least 1.')\n                    class Counter(object):\n                        def __init__(self, destination):\n                            self.count = 0\n                            self.destination = destination\n\n                        def nth_child_of_type(self, tag):\n                            self.count += 1\n                            if self.count == self.destination:\n                                return True\n                            else:\n                                return False\n                    checker 
= Counter(pseudo_value).nth_child_of_type\n                else:\n                    raise NotImplementedError(\n                        'Only the following pseudo-classes are implemented: nth-of-type.')\n\n            elif token == '*':\n                # Star selector -- matches everything\n                pass\n            elif token == '>':\n                # Run the next token as a CSS selector against the\n                # direct children of each tag in the current context.\n                recursive_candidate_generator = lambda tag: tag.children\n            elif token == '~':\n                # Run the next token as a CSS selector against the\n                # siblings of each tag in the current context.\n                recursive_candidate_generator = lambda tag: tag.next_siblings\n            elif token == '+':\n                # For each tag in the current context, run the next\n                # token as a CSS selector against the tag's next\n                # sibling that's a tag.\n                def next_tag_sibling(tag):\n                    yield tag.find_next_sibling(True)\n                recursive_candidate_generator = next_tag_sibling\n\n            elif self.tag_name_re.match(token):\n                # Just a tag name.\n                tag_name = token\n            else:\n                raise ValueError(\n                    'Unsupported or invalid CSS selector: \"%s\"' % token)\n            if recursive_candidate_generator:\n                # This happens when the selector looks like  \"> foo\".\n                #\n                # The generator calls select() recursively on every\n                # member of the current context, passing in a different\n                # candidate generator and a different selector.\n                #\n                # In the case of \"> foo\", the candidate generator is\n                # one that yields a tag's direct children (\">\"), and\n                # the selector is \"foo\".\n                
next_token = tokens[index+1]\n                def recursive_select(tag):\n                    if self._select_debug:\n                        print '    Calling select(\"%s\") recursively on %s %s' % (next_token, tag.name, tag.attrs)\n                        print '-' * 40\n                    for i in tag.select(next_token, recursive_candidate_generator):\n                        if self._select_debug:\n                            print '(Recursive select picked up candidate %s %s)' % (i.name, i.attrs)\n                        yield i\n                    if self._select_debug:\n                        print '-' * 40\n                _use_candidate_generator = recursive_select\n            elif _candidate_generator is None:\n                # By default, a tag's candidates are all of its\n                # descendants. If tag_name is defined, only yield tags\n                # with that name.\n                if self._select_debug:\n                    if tag_name:\n                        check = tag_name\n                    else:\n                        check = \"[any]\"\n                    print '   Default candidate generator, tag name=\"%s\"' % check\n                if self._select_debug:\n                    # This is redundant with later code, but it stops\n                    # a bunch of bogus tags from cluttering up the\n                    # debug log.\n                    def default_candidate_generator(tag):\n                        for child in tag.descendants:\n                            if not isinstance(child, Tag):\n                                continue\n                            if tag_name and not child.name == tag_name:\n                                continue\n                            yield child\n                    _use_candidate_generator = default_candidate_generator\n                else:\n                    _use_candidate_generator = lambda tag: tag.descendants\n            else:\n                _use_candidate_generator = 
_candidate_generator\n\n            count = 0\n            for tag in current_context:\n                if self._select_debug:\n                    print \"    Running candidate generator on %s %s\" % (\n                        tag.name, repr(tag.attrs))\n                for candidate in _use_candidate_generator(tag):\n                    if not isinstance(candidate, Tag):\n                        continue\n                    if tag_name and candidate.name != tag_name:\n                        continue\n                    if checker is not None:\n                        try:\n                            result = checker(candidate)\n                        except StopIteration:\n                            # The checker has decided we should no longer\n                            # run the generator.\n                            break\n                    if checker is None or result:\n                        if self._select_debug:\n                            print \"     SUCCESS %s %s\" % (candidate.name, repr(candidate.attrs))\n                        if id(candidate) not in new_context_ids:\n                            # If a tag matches a selector more than once,\n                            # don't include it in the context more than once.\n                            new_context.append(candidate)\n                            new_context_ids.add(id(candidate))\n                    elif self._select_debug:\n                        print \"     FAILURE %s %s\" % (candidate.name, repr(candidate.attrs))\n\n            current_context = new_context\n        if limit and len(current_context) >= limit:\n            current_context = current_context[:limit]\n\n        if self._select_debug:\n            print \"Final verdict:\"\n            for i in current_context:\n                print \" %s %s\" % (i.name, i.attrs)\n        return current_context\n\n    # Old names for backwards compatibility\n    def childGenerator(self):\n        return self.children\n\n    
def recursiveChildGenerator(self):\n        return self.descendants\n\n    def has_key(self, key):\n        \"\"\"This was kind of misleading because has_key() (attributes)\n        was different from __contains__ (contents). has_key() is gone in\n        Python 3, anyway.\"\"\"\n        warnings.warn('has_key is deprecated. Use has_attr(\"%s\") instead.' % (\n                key))\n        return self.has_attr(key)\n\n# Next, a couple of classes to represent queries and their results.\nclass SoupStrainer(object):\n    \"\"\"Encapsulates a number of ways of matching a markup element (tag or\n    text).\"\"\"\n\n    def __init__(self, name=None, attrs={}, text=None, **kwargs):\n        self.name = self._normalize_search_value(name)\n        if not isinstance(attrs, dict):\n            # Treat a non-dict value for attrs as a search for the 'class'\n            # attribute.\n            kwargs['class'] = attrs\n            attrs = None\n\n        if 'class_' in kwargs:\n            # Treat class_=\"foo\" as a search for the 'class'\n            # attribute, overriding any non-dict value for attrs.\n            kwargs['class'] = kwargs['class_']\n            del kwargs['class_']\n\n        if kwargs:\n            if attrs:\n                attrs = attrs.copy()\n                attrs.update(kwargs)\n            else:\n                attrs = kwargs\n        normalized_attrs = {}\n        for key, value in attrs.items():\n            normalized_attrs[key] = self._normalize_search_value(value)\n\n        self.attrs = normalized_attrs\n        self.text = self._normalize_search_value(text)\n\n    def _normalize_search_value(self, value):\n        # Leave it alone if it's a Unicode string, a callable, a\n        # regular expression, a boolean, or None.\n        if (isinstance(value, unicode) or callable(value) or hasattr(value, 'match')\n            or isinstance(value, bool) or value is None):\n            return value\n\n        # If it's a bytestring, convert it to Unicode, 
treating it as UTF-8.\n        if isinstance(value, bytes):\n            return value.decode(\"utf8\")\n\n        # If it's listlike, convert it into a list of strings.\n        if hasattr(value, '__iter__'):\n            new_value = []\n            for v in value:\n                if (hasattr(v, '__iter__') and not isinstance(v, bytes)\n                    and not isinstance(v, unicode)):\n                    # This is almost certainly the user's mistake. In the\n                    # interests of avoiding infinite loops, we'll let\n                    # it through as-is rather than doing a recursive call.\n                    new_value.append(v)\n                else:\n                    new_value.append(self._normalize_search_value(v))\n            return new_value\n\n        # Otherwise, convert it into a Unicode string.\n        # The unicode(str()) thing is so this will do the same thing on Python 2\n        # and Python 3.\n        return unicode(str(value))\n\n    def __str__(self):\n        if self.text:\n            return self.text\n        else:\n            return \"%s|%s\" % (self.name, self.attrs)\n\n    def search_tag(self, markup_name=None, markup_attrs={}):\n        found = None\n        markup = None\n        if isinstance(markup_name, Tag):\n            markup = markup_name\n            markup_attrs = markup\n        call_function_with_tag_data = (\n            isinstance(self.name, collections.Callable)\n            and not isinstance(markup_name, Tag))\n\n        if ((not self.name)\n            or call_function_with_tag_data\n            or (markup and self._matches(markup, self.name))\n            or (not markup and self._matches(markup_name, self.name))):\n            if call_function_with_tag_data:\n                match = self.name(markup_name, markup_attrs)\n            else:\n                match = True\n                markup_attr_map = None\n                for attr, match_against in list(self.attrs.items()):\n                    if 
not markup_attr_map:\n                        if hasattr(markup_attrs, 'get'):\n                            markup_attr_map = markup_attrs\n                        else:\n                            markup_attr_map = {}\n                            for k, v in markup_attrs:\n                                markup_attr_map[k] = v\n                    attr_value = markup_attr_map.get(attr)\n                    if not self._matches(attr_value, match_against):\n                        match = False\n                        break\n            if match:\n                if markup:\n                    found = markup\n                else:\n                    found = markup_name\n        if found and self.text and not self._matches(found.string, self.text):\n            found = None\n        return found\n    searchTag = search_tag\n\n    def search(self, markup):\n        # print 'looking for %s in %s' % (self, markup)\n        found = None\n        # If given a list of items, scan it for a text element that\n        # matches.\n        if hasattr(markup, '__iter__') and not isinstance(markup, (Tag, basestring)):\n            for element in markup:\n                if isinstance(element, NavigableString) \\\n                       and self.search(element):\n                    found = element\n                    break\n        # If it's a Tag, make sure its name or attributes match.\n        # Don't bother with Tags if we're searching for text.\n        elif isinstance(markup, Tag):\n            if not self.text or self.name or self.attrs:\n                found = self.search_tag(markup)\n        # If it's text, make sure the text matches.\n        elif isinstance(markup, NavigableString) or \\\n                 isinstance(markup, basestring):\n            if not self.name and not self.attrs and self._matches(markup, self.text):\n                found = markup\n        else:\n            raise Exception(\n                \"I don't know how to match against a %s\" % 
markup.__class__)\n        return found\n\n    def _matches(self, markup, match_against):\n        # print u\"Matching %s against %s\" % (markup, match_against)\n        result = False\n        if isinstance(markup, list) or isinstance(markup, tuple):\n            # This should only happen when searching a multi-valued attribute\n            # like 'class'.\n            for item in markup:\n                if self._matches(item, match_against):\n                    return True\n            # We didn't match any particular value of the multivalue\n            # attribute, but maybe we match the attribute value when\n            # considered as a string.\n            if self._matches(' '.join(markup), match_against):\n                return True\n            return False\n\n        if match_against is True:\n            # True matches any non-None value.\n            return markup is not None\n\n        if isinstance(match_against, collections.Callable):\n            return match_against(markup)\n\n        # Custom callables take the tag as an argument, but all\n        # other ways of matching match the tag name as a string.\n        if isinstance(markup, Tag):\n            markup = markup.name\n\n        # Ensure that `markup` is either a Unicode string, or None.\n        markup = self._normalize_search_value(markup)\n\n        if markup is None:\n            # None matches None, False, an empty string, an empty list, and so on.\n            return not match_against\n\n        if isinstance(match_against, unicode):\n            # Exact string match\n            return markup == match_against\n\n        if hasattr(match_against, 'match'):\n            # Regexp match\n            return match_against.search(markup)\n\n        if hasattr(match_against, '__iter__'):\n            # The markup must be an exact match against something\n            # in the iterable.\n            return markup in match_against\n\n\nclass ResultSet(list):\n    \"\"\"A ResultSet is just a 
list that keeps track of the SoupStrainer\n    that created it.\"\"\"\n    def __init__(self, source, result=()):\n        super(ResultSet, self).__init__(result)\n        self.source = source\n"
  },
  {
    "path": "parallax_svg_tools/run.py",
    "content": "from svg import * \r\n\r\ncompile_svg('animation.svg', 'processed_animation.svg', \r\n{\r\n\t'process_layer_names': True,\r\n\t'namespace': 'example'\r\n})\r\n\r\ninline_svg('animation.html', 'output/animation.html')"
  },
  {
    "path": "parallax_svg_tools/svg/__init__.py",
    "content": "# Super simple Illustrator SVG processor for animations. Uses the BeautifulSoup python xml library. \n\nimport os\nimport errno\nfrom bs4 import BeautifulSoup\n\ndef create_file(path, mode):\n\tdirectory = os.path.dirname(path)\n\tif directory != '' and not os.path.exists(directory):\n\t\ttry:\n\t\t\tos.makedirs(directory)\n\t\texcept OSError as e:\n\t\t    if e.errno != errno.EEXIST:\n\t\t        raise\n\t\n\tfile = open(path, mode)\n\treturn file\n\ndef parse_svg(path, namespace, options):\n\t#print(path)\n\tfile = open(path,'r')\n\tfile_string = file.read().decode('utf8')\n\tfile.close();\n\n\tif namespace == None:\n\t\tnamespace = ''\n\telse:\n\t\tnamespace = namespace + '-'\n\n\t# BeautifulSoup can't parse attributes with dashes so we replace them with underscores instead\t\t\n\tfile_string = file_string.replace('data-name', 'data_name')\n\n\t# Expand origin to data-svg-origin as its a pain in the ass to type\n\tif 'expand_origin' in options and options['expand_origin'] == True:\n\t\tfile_string = file_string.replace('origin=', 'data-svg-origin=')\n\n\t# Expand spirit to data-spirit-id for use with https://spiritapp.io/\n\tif 'spirit' in options and options['spirit'] == True:\n\t\tfile_string = file_string.replace('spirit=', 'data-spirit-id=')\n\t\n\t# Add namespaces to ids\n\tif namespace:\n\t\tfile_string = file_string.replace('id=\"', 'id=\"' + namespace)\n\t\tfile_string = file_string.replace('url(#', 'url(#' + namespace)\n\n\tsvg = BeautifulSoup(file_string, 'html.parser')\n\n\t# namespace symbols\n\tsymbol_elements = svg.select('symbol')\n\tfor element in symbol_elements:\n\t\tdel element['data_name']\n\n\tuse_elements = svg.select('use')\n\tfor element in use_elements:\n\t\tif namespace:\n\t\t\thref = element['xlink:href']\n\t\t\telement['xlink:href'] = href.replace('#', '#' + namespace)\n\n\t\tdel element['id']\n\n\n\t# remove titles\n\tif 'title' in options and options['title'] == False:\n\t\ttitles = svg.select('title')\n\t\tfor t in 
titles: t.extract()\n\n\t# remove description\n\tif 'description' in options and options['description'] == False:\n\t\tdescriptions = svg.select('desc')\n\t\tfor d in descriptions: d.extract()\n\n\tforeign_tags_to_add = []\n\tif 'convert_svg_text_to_html' in options and options['convert_svg_text_to_html'] == True:\n\t\ttext_elements = svg.select('[data_name=\"#TEXT\"]')\n\t\tfor element in text_elements:\n\n\t\t\tarea = element.rect\n\t\t\tif not area: \n\t\t\t\tprint('WARNING: Text areas require a rectangle to be in the same group as the text element')\n\t\t\t\tcontinue\n\n\t\t\ttext_element = element.select('text')[0]\n\t\t\tif not text_element:\n\t\t\t\tprint('WARNING: No text element found in text area')\n\t\t\t\tcontinue\n\n\t\t\tx = area['x']\n\t\t\ty = area['y']\n\t\t\twidth = area['width']\n\t\t\theight = area['height']\n\n\t\t\ttext_content = text_element.getText()\n\t\t\ttext_tag = BeautifulSoup(text_content, 'html.parser')\n\t\t\t\n\t\t\tdata_name = None\n\t\t\tif area.has_attr('data_name'): data_name = area['data_name']\n\t\t\t#print(data_name)\n\t\t\t\t\t\t\n\t\t\tarea.extract()\n\t\t\ttext_element.extract()\n\t\t\t\n\t\t\tforeign_object_tag = svg.new_tag('foreignObject')\n\t\t\tforeign_object_tag['requiredFeatures'] = \"http://www.w3.org/TR/SVG11/feature#Extensibility\"\n\t\t\tforeign_object_tag['transform'] = 'translate(' + x + ' ' + y + ')'\n\t\t\tforeign_object_tag['width'] = width + 'px'\n\t\t\tforeign_object_tag['height'] = height + 'px'\n\n\t\t\tif 'dont_overflow_text_areas' in options and options['dont_overflow_text_areas'] == True:\n\t\t\t\tforeign_object_tag['style'] = 'overflow:hidden'\n\n\t\t\tif data_name:\n\t\t\t\tval = data_name\n\t\t\t\tif not val.startswith('#'): continue\n\t\t\t\tval = val.replace('#', '')\n\t\t\t\t\n\t\t\t\tattributes = str.split(str(val), ',')\n\t\t\t\tfor a in attributes:\n\t\t\t\t\tsplit = str.split(a.strip(), '=')\n\t\t\t\t\tif (len(split) < 2): continue\n\t\t\t\t\tkey = split[0]\n\t\t\t\t\tvalue = 
split[1]\n\t\t\t\t\tif key == 'id': key = namespace + key\n\t\t\t\t\tforeign_object_tag[key] = value\n\t\t\t\n\t\t\tforeign_object_tag.append(text_tag)\n\n\t\t\t# modyfing the tree affects searches so we need to defer it until the end\n\t\t\tforeign_tags_to_add.append({'element':element, 'tag':foreign_object_tag})\n\t\t\t\n\n\tif (not 'process_layer_names' in options or ('process_layer_names' in options and options['process_layer_names'] == True)):\n\t\telements_with_data_names = svg.select('[data_name]')\n\t\tfor element in elements_with_data_names:\n\n\t\t\t# remove any existing id tag as we'll be making our own\n\t\t\tif element.has_attr('id'): del element.attrs['id']\n\t\t\t\n\t\t\tval = element['data_name']\n\t\t\t#print(val)\n\t\t\tdel element['data_name']\n\n\t\t\tif not val.startswith('#'): continue\n\t\t\tval = val.replace('#', '')\n\t\t\t\n\t\t\tattributes = str.split(str(val), ',')\n\t\t\tfor a in attributes:\n\t\t\t\tsplit = str.split(a.strip(), '=')\n\t\t\t\tif (len(split) < 2): continue\n\t\t\t\tkey = split[0]\n\t\t\t\tvalue = split[1]\n\t\t\t\tif key == 'id' or key == 'class': value = namespace + value\n\t\t\t\telement[key] = value\n\t\n\t\n\tif 'remove_text_attributes' in options and options['remove_text_attributes'] == True:\n\t\t#Remove attributes from text tags\n\t\ttext_elements = svg.select('text')\n\t\tfor element in text_elements:\n\t\t\tif element.has_attr('font-size'): del element.attrs['font-size']\n\t\t\tif element.has_attr('font-family'): del element.attrs['font-family']\n\t\t\tif element.has_attr('font-weight'): del element.attrs['font-weight']\n\t\t\tif element.has_attr('fill'): del element.attrs['fill']\n\n\t# Do tree modifications here\n\tif 'convert_svg_text_to_html' in options and options['convert_svg_text_to_html'] == True:\n\t\tfor t in foreign_tags_to_add:\n\t\t\tt['element'].append(t['tag'])\n\t\n\n\treturn svg\n\n\ndef write_svg(svg, dst_path, options):\n\t\n\tresult = str(svg)\n\tresult = unicode(result, 
\"utf8\")\t\n\t#Remove self closing tags\n\tresult = result.replace('></circle>','/>') \n\tresult = result.replace('></rect>','/>') \n\tresult = result.replace('></path>','/>') \n\tresult = result.replace('></polygon>','/>')\n\n\tif 'nowhitespace' in options and options['nowhitespace'] == True:\n\t\tresult = result.replace('\\n','')\n\t#else:\n\t#\tresult = svg.prettify()\n\n\t# bs4 incorrectly outputs clippath instead of clipPath \n\tresult = result.replace('clippath', 'clipPath')\n\tresult = result.encode('UTF8')\n\n\tresult_file = create_file(dst_path, 'wb')\n\tresult_file.write(result)\n\tresult_file.close()\n\n\n\ndef compile_svg(src_path, dst_path, options):\n\tnamespace = None\n\n\tif 'namespace' in options: \n\t\tnamespace = options['namespace']\n\tsvg = parse_svg(src_path, namespace, options)\n\n\tif 'attributes' in options: \n\t\tattrs = options['attributes']\n\t\tfor k in attrs:\n\t\t\tsvg.svg[k] = attrs[k]\n\n\tif 'description' in options:\n\t\tcurrent_desc = svg.select('description')\n\t\tif current_desc:\n\t\t\tcurrent_desc[0].string = options['description']\n\t\telse:\n\t\t\tdesc_tag = svg.new_tag('description');\n\t\t\tdesc_tag.string = options['description']\n\t\t\tsvg.svg.append(desc_tag)\n\t\t\n\twrite_svg(svg, dst_path, options)\n\n\n\ndef compile_master_svg(src_path, dst_path, options):\n\tprint('\\n')\n\tprint(src_path)\n\tfile = open(src_path)\n\tsvg = BeautifulSoup(file, 'html.parser')\n\tfile.close()\n\n\tmaster_viewbox = svg.svg.attrs['viewbox']\n\n\timport_tags = svg.select('[path]')\n\tfor tag in import_tags:\n\n\t\tcomponent_path = str(tag['path'])\n\t\t\n\t\tnamespace = None\n\t\tif tag.has_attr('namespace'): namespace = tag['namespace']\n\n\t\tcomponent = parse_svg(component_path, namespace, options)\n\n\t\tcomponent_viewbox = component.svg.attrs['viewbox']\n\t\tif master_viewbox != component_viewbox:\n\t\t\tprint('WARNING: Master viewbox: [' + master_viewbox + '] does not match component viewbox [' + component_viewbox + 
']')\n\t\n\t\t# Moves the contents of the component svg file into the master svg\n\t\tfor child in component.svg: tag.contents.append(child)\n\n\t\t# Remove redundant path and namespace attributes from the import element\n\t\tdel tag.attrs['path']\n\t\tif namespace: del tag.attrs['namespace']\n\n\n\tif 'attributes' in options: \n\t\tattrs = options['attributes']\n\t\tfor k in attrs:\n\t\t\tprint(k + ' = ' + attrs[k])\n\t\t\tsvg.svg[k] = attrs[k]\n\n\n\tif 'title' in options and options['title'] is not False:\n\t\tcurrent_title = svg.select('title')\n\t\tif current_title:\n\t\t\tcurrent_title[0].string = options['title']\n\t\telse:\n\t\t\ttitle_tag = svg.new_tag('title');\n\t\t\ttitle_tag.string = options['title']\n\t\t\tsvg.svg.append(title_tag)\n\n\n\tif 'description' in options:\n\t\tcurrent_desc = svg.select('description')\n\t\tif current_desc:\n\t\t\tcurrent_desc[0].string = options['description']\n\t\telse:\n\t\t\tdesc_tag = svg.new_tag('description');\n\t\t\tdesc_tag.string = options['description']\n\t\t\tsvg.svg.append(desc_tag)\n\n\n\twrite_svg(svg, dst_path, options)\n\n\n# Super dumb / simple function that inlines svgs into html source files\n\ndef parse_markup(src_path, output):\n\tprint(src_path)\n\tread_state = 0\n\tfile = open(src_path, 'r')\n\tfor line in file:\n\t\tif line.startswith('//import'):\n\t\t\tpath = line.split('//import ')[1].rstrip('\\n').rstrip('\\r')\n\t\t\tparse_markup(path, output)\n\t\telse:\n\t\t\toutput.append(line)\n\n\tfile.close()\n\ndef inline_svg(src_path, dst_path):\n\toutput = [];\n\n\tfile = create_file(dst_path, 'w')\n\tparse_markup(src_path, output)\n\tfor line in output: file.write(line)\n\tfile.close()\n\tprint('')\t"
  }
]