[
  {
    "path": ".gitignore",
    "content": "# Created by https://www.toptal.com/developers/gitignore/api/macos,intellij,virtualenv,python\n# Edit at https://www.toptal.com/developers/gitignore?templates=macos,intellij,virtualenv,python\n\n### Intellij ###\n# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio, WebStorm and Rider\n# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839\n\n# User-specific stuff\n.idea/**/workspace.xml\n.idea/**/tasks.xml\n.idea/**/usage.statistics.xml\n.idea/**/dictionaries\n.idea/**/shelf\n\n# Generated files\n.idea/**/contentModel.xml\n\n# Sensitive or high-churn files\n.idea/**/dataSources/\n.idea/**/dataSources.ids\n.idea/**/dataSources.local.xml\n.idea/**/sqlDataSources.xml\n.idea/**/dynamic.xml\n.idea/**/uiDesigner.xml\n.idea/**/dbnavigator.xml\n\n# Gradle\n.idea/**/gradle.xml\n.idea/**/libraries\n\n# Gradle and Maven with auto-import\n# When using Gradle or Maven with auto-import, you should exclude module files,\n# since they will be recreated, and may cause churn.  
Uncomment if using\n# auto-import.\n# .idea/artifacts\n# .idea/compiler.xml\n# .idea/jarRepositories.xml\n# .idea/modules.xml\n# .idea/*.iml\n# .idea/modules\n# *.iml\n# *.ipr\n\n# CMake\ncmake-build-*/\n\n# Mongo Explorer plugin\n.idea/**/mongoSettings.xml\n\n# File-based project format\n*.iws\n\n# IntelliJ\nout/\n\n# mpeltonen/sbt-idea plugin\n.idea_modules/\n\n# JIRA plugin\natlassian-ide-plugin.xml\n\n# Cursive Clojure plugin\n.idea/replstate.xml\n\n# Crashlytics plugin (for Android Studio and IntelliJ)\ncom_crashlytics_export_strings.xml\ncrashlytics.properties\ncrashlytics-build.properties\nfabric.properties\n\n# Editor-based Rest Client\n.idea/httpRequests\n\n# Android studio 3.1+ serialized cache file\n.idea/caches/build_file_checksums.ser\n\n### Intellij Patch ###\n# Comment Reason: https://github.com/joeblau/gitignore.io/issues/186#issuecomment-215987721\n\n# *.iml\n# modules.xml\n# .idea/misc.xml\n# *.ipr\n\n# Sonarlint plugin\n# https://plugins.jetbrains.com/plugin/7973-sonarlint\n.idea/**/sonarlint/\n\n# SonarQube Plugin\n# https://plugins.jetbrains.com/plugin/7238-sonarqube-community-plugin\n.idea/**/sonarIssues.xml\n\n# Markdown Navigator plugin\n# https://plugins.jetbrains.com/plugin/7896-markdown-navigator-enhanced\n.idea/**/markdown-navigator.xml\n.idea/**/markdown-navigator-enh.xml\n.idea/**/markdown-navigator/\n\n# Cache file creation bug\n# See https://youtrack.jetbrains.com/issue/JBR-2257\n.idea/$CACHE_FILE$\n\n# CodeStream plugin\n# https://plugins.jetbrains.com/plugin/12206-codestream\n.idea/codestream.xml\n\n### macOS ###\n# General\n.DS_Store\n.AppleDouble\n.LSOverride\n\n# Icon must end with two \\r\nIcon\n\n\n# Thumbnails\n._*\n\n# Files that might appear in the root of a volume\n.DocumentRevisions-V100\n.fseventsd\n.Spotlight-V100\n.TemporaryItems\n.Trashes\n.VolumeIcon.icns\n.com.apple.timemachine.donotpresent\n\n# Directories potentially created on remote AFP share\n.AppleDB\n.AppleDesktop\nNetwork Trash Folder\nTemporary 
Items\n.apdisk\n\n### Python ###\n# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\npip-wheel-metadata/\nshare/python-wheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.nox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n*.py,cover\n.hypothesis/\n.pytest_cache/\npytestdebug.log\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\ndb.sqlite3\ndb.sqlite3-journal\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\ndoc/_build/\n\n# PyBuilder\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# IPython\nprofile_default/\nipython_config.py\n\n# pyenv\n.python-version\n\n# pipenv\n#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.\n#   However, in case of collaboration, if having platform-specific dependencies or dependencies\n#   having no cross-platform support, pipenv may install dependencies that don't work, or not\n#   install all needed dependencies.\n#Pipfile.lock\n\n# PEP 582; used by e.g. 
github.com/David-OConnor/pyflow\n__pypackages__/\n\n# Celery stuff\ncelerybeat-schedule\ncelerybeat.pid\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\npythonenv*\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n.dmypy.json\ndmypy.json\n\n# Pyre type checker\n.pyre/\n\n# pytype static type analyzer\n.pytype/\n\n# profiling data\n.prof\n\n### VirtualEnv ###\n# Virtualenv\n# http://iamzed.com/2009/05/07/a-primer-on-virtualenv/\n[Bb]in\n[Ii]nclude\n[Ll]ib\n[Ll]ib64\n[Ll]ocal\n[Ss]cripts\npyvenv.cfg\npip-selfcheck.json\n\n# End of https://www.toptal.com/developers/gitignore/api/macos,intellij,virtualenv,python\n"
  },
  {
    "path": "README.md",
    "content": "# asm2vec\n\nThis is an unofficial implementation of the `asm2vec` model as a standalone python package. The details of the model can be found in the original paper: [(sp'19) Asm2Vec: Boosting Static Representation Robustness for Binary Clone Search against Code Obfuscation and Compiler Optimization](https://www.computer.org/csdl/proceedings-article/sp/2019/666000a038/19skfc3ZfKo)\n\n## Requirements\n\nThis implementation is written in python 3.7 and it's recommended to use python 3.7+ as well. The only dependency of this package is `numpy` which can be installed as follows:\n\n```shell\npython3 -m pip install numpy\n```\n\n## How to use\n\n### Import\n\nTo install the package, execute the following commands:\n\n```shell\ngit clone https://github.com/lancern/asm2vec.git\n```\n\nAdd the following line to the `.bashrc` file to add `asm2vec` to your python interpreter's search path for external packages:\n\n```shell\nexport PYTHONPATH=\"path/to/asm2vec:$PYTHONPATH\"\n```\n\nReplace `path/to/asm2vec` with the directory you clone `asm2vec` into. Then execute the following commands to update `PYTHONPATH`:\n\n```shell\nsource ~/.bashrc\n```\n\nYou can also add the following code snippets to your python source code referring `asm2vec` to guide python interpreter finding the package successfully:\n\n```python\nimport sys\nsys.path.append('path/to/asm2vec')\n```\n\nIn your python code, use the following `import` statement to import this package:\n\n```python\nimport asm2vec.<module-name>\n```\n\n### Define CFGs And Training\n\nYou have 2 approaches to define the binary program that will be sent to the `asm2vec` model. 
The first approach is to build the CFG manually, as shown below:\n\n```python\nfrom asm2vec.asm import BasicBlock\nfrom asm2vec.asm import Function\nfrom asm2vec.asm import parse_instruction\n\nblock1 = BasicBlock()\nblock1.add_instruction(parse_instruction('mov eax, ebx'))\nblock1.add_instruction(parse_instruction('jmp _loc'))\n\nblock2 = BasicBlock()\nblock2.add_instruction(parse_instruction('xor eax, eax'))\nblock2.add_instruction(parse_instruction('ret'))\n\nblock1.add_successor(block2)\n\nblock3 = BasicBlock()\nblock3.add_instruction(parse_instruction('sub eax, [ebp]'))\n\nf1 = Function(block1, 'some_func')\nf2 = Function(block3, 'another_func')\n\n# The definition of block4 is omitted here for clarity\nf3 = Function(block4, 'estimate_func')\n```\n\nYou can then train a model with the following code:\n\n```python\nfrom asm2vec.model import Asm2Vec\n\nmodel = Asm2Vec(d=200)\ntrain_repo = model.make_function_repo([f1, f2, f3])\nmodel.train(train_repo)\n```\n\nThe second approach is to use the `parse` module provided by `asm2vec` to build CFGs automatically from an assembly source file:\n\n```python\nfrom asm2vec.parse import parse_fp\n\nwith open('source.asm', 'r') as fp:\n    funcs = parse_fp(fp)\n```\n\nYou can then train a model with the following code:\n\n```python\nfrom asm2vec.model import Asm2Vec\n\nmodel = Asm2Vec(d=200)\ntrain_repo = model.make_function_repo(funcs)\nmodel.train(train_repo)\n```\n\n### Estimation\n\nYou can use the `asm2vec.model.Asm2Vec.to_vec` method to convert a function into its vector representation.\n\n### Serialization\n\nThe implementation supports serialization of many of its internal data structures, so you can save the internal state of a trained model to disk for future use.\n\nYou can serialize two data structures to primitive data: the function repository and the model memento.\n\n> To be finished.\n\n## Hyperparameters\n\nThe constructor of the `asm2vec.model.Asm2Vec` class accepts some keyword arguments as hyperparameters of the model. The following table lists all the available hyperparameters:\n\n| Parameter Name          | Type    | Meaning                                                                           | Default Value |\n| ----------------------- | ------- | --------------------------------------------------------------------------------- | ------------- |\n| `d`                     | `int`   | The dimension of the token vectors.                                               | `200`         |\n| `initial_alpha`         | `float` | The initial learning rate.                                                        | `0.05`        |\n| `alpha_update_interval` | `int`   | The number of tokens processed between learning-rate updates.                     | `10000`       |\n| `rnd_walks`             | `int`   | The number of random walks performed to sequentialize a function.                 | `3`           |\n| `neg_samples`           | `int`   | The number of samples taken during negative sampling.                             | `25`          |\n| `iteration`             | `int`   | The number of training iterations (reserved for future use; not yet implemented). | `1`           |\n| `jobs`                  | `int`   | The number of tasks executed concurrently during training.                        | `4`           |\n\n## Notes\n\nFor simplicity, Selective Callee Expansion is not implemented in this early version. You have to perform it manually before feeding CFGs into `asm2vec`.\n"
  },
  {
    "path": "asm2vec/__init__.py",
    "content": "__all__ = ['asm', 'model', 'parse']\n"
  },
  {
    "path": "asm2vec/asm.py",
    "content": "from typing import *\n\n\nclass Instruction:\n    def __init__(self, op: str, *args: str):\n        self._op = op\n        self._args = list(args)\n\n    def op(self) -> str:\n        return self._op\n\n    def number_of_args(self) -> int:\n        return len(self._args)\n\n    def args(self) -> List[str]:\n        return self._args\n\n\ndef parse_instruction(code: str) -> Instruction:\n    sep_index = code.find(' ')\n    if sep_index == -1:\n        return Instruction(code)\n\n    op = code[:sep_index]   # Operator\n    args_list = list(map(str.strip, code[sep_index:].split(',')))   # Operands\n    return Instruction(op, *args_list)\n\n\nclass BasicBlock:\n    _next_unused_id: int = 1\n\n    def __init__(self):\n        # Allocate a new unique ID for the basic block.\n        self._id = self.__class__._next_unused_id\n        self.__class__._next_unused_id += 1\n\n        self._instructions = []\n        self._predecessors = []\n        self._successors = []\n\n    def __iter__(self):\n        return self._instructions.__iter__()\n\n    def __len__(self):\n        return len(self._instructions)\n\n    def __hash__(self):\n        return self._id.__hash__()\n\n    def __eq__(self, other):\n        if not isinstance(other, BasicBlock):\n            return False\n        return self._id == other.id()\n\n    def __ne__(self, other):\n        return not self.__eq__(other)\n\n    def id(self) -> int:\n        return self._id\n\n    def add_instruction(self, instr: Instruction) -> None:\n        self._instructions.append(instr)\n\n    def body_instructions(self) -> List[Instruction]:\n        return self._instructions[:-1]\n\n    def instructions(self) -> List[Instruction]:\n        return self._instructions\n\n    def add_predecessor(self, predecessor: 'BasicBlock') -> None:\n        self._predecessors.append(predecessor)\n        predecessor._successors.append(self)\n\n    def add_successor(self, successor: 'BasicBlock') -> None:\n        
self._successors.append(successor)\n        successor._predecessors.append(self)\n\n    def first_instruction(self) -> Instruction:\n        return self._instructions[0]\n\n    def last_instruction(self) -> Instruction:\n        return self._instructions[-1]\n\n    def predecessors(self) -> List['BasicBlock']:\n        return self._predecessors\n\n    def in_degree(self) -> int:\n        return len(self._predecessors)\n\n    def successors(self) -> List['BasicBlock']:\n        return self._successors\n\n    def out_degree(self) -> int:\n        return len(self._successors)\n\n\nclass CFGWalkerCallback:\n    def __call__(self, *args, **kwargs):\n        self.on_enter(*args)\n\n    def on_enter(self, block: BasicBlock) -> None:\n        pass\n\n    def on_exit(self, block: BasicBlock) -> None:\n        pass\n\n\nCFGWalkerCallbackType = Union[CFGWalkerCallback, Callable[[BasicBlock], Any]]\n\n\ndef _walk_cfg(entry: BasicBlock, action: CFGWalkerCallbackType, visited: Set) -> None:\n    if entry.id() in visited:\n        return\n\n    visited.add(entry.id())\n    action(entry)\n\n    for successor in entry.successors():\n        _walk_cfg(successor, action, visited)\n\n    if isinstance(action, CFGWalkerCallback):\n        action.on_exit(entry)\n\n\ndef walk_cfg(entry: BasicBlock, action: CFGWalkerCallbackType) -> None:\n    _walk_cfg(entry, action, set())\n\n\nclass Function:\n    _next_unused_id = 1\n\n    def __init__(self, entry: BasicBlock, name: str = None):\n        # Allocate a unique ID for the current Function object.\n        self._id = self.__class__._next_unused_id\n        self.__class__._next_unused_id += 1\n\n        self._entry = entry\n        self._name = name\n        self._callees = []  # Functions that are called by this function\n        self._callers = []  # Functions that call this function\n\n    def __len__(self) -> int:\n        instr_count = 0\n\n        def count_instr(block: BasicBlock) -> None:\n            nonlocal instr_count\n          
  instr_count += len(block)\n\n        walk_cfg(self._entry, count_instr)\n        return instr_count\n\n    def __hash__(self):\n        return self._id\n\n    def __eq__(self, other):\n        if not isinstance(other, Function):\n            return False\n        return self._id == other.id()\n\n    def __ne__(self, other):\n        return not self.__eq__(other)\n\n    def id(self) -> int:\n        return self._id\n\n    def entry(self) -> BasicBlock:\n        return self._entry\n\n    def name(self) -> str:\n        return self._name\n\n    def add_callee(self, f: 'Function') -> None:\n        self._callees.append(f)\n        f._callers.append(self)\n\n    def callees(self) -> List['Function']:\n        return self._callees\n\n    def out_degree(self) -> int:\n        return len(self._callees)\n\n    def add_caller(self, f: 'Function') -> None:\n        self._callers.append(f)\n        f._callees.append(self)\n\n    def callers(self) -> List['Function']:\n        return self._callers\n\n    def in_degree(self) -> int:\n        return len(self._callers)\n"
  },
  {
    "path": "asm2vec/internal/__init__.py",
    "content": ""
  },
  {
    "path": "asm2vec/internal/atomic.py",
    "content": "from typing import *\nimport threading\n\n\nclass LockContextManager:\n    def __init__(self, lock: threading.Lock):\n        self._lock = lock\n        self._exited = False\n\n    def __enter__(self):\n        self._lock.acquire()\n\n    def __exit__(self, exc_type, exc_val, exc_tb):\n        self._exited = True\n        self._lock.release()\n\n    def exited(self) -> bool:\n        return self._exited\n\n\nclass Atomic:\n    class AtomicContextManager(LockContextManager):\n        def __init__(self, atomic: 'Atomic'):\n            super().__init__(atomic._lock)\n            self._atomic = atomic\n            self._exited = False\n\n        def __enter__(self):\n            super().__enter__()\n            return self\n\n        def __exit__(self, exc_type, exc_val, exc_tb):\n            super().__exit__(exc_type, exc_val, exc_tb)\n\n        def value(self) -> Any:\n            if self.exited():\n                raise RuntimeError('Trying to access AtomicContextManager after its exit.')\n            return self._atomic._val\n\n        def set(self, value: Any) -> None:\n            if self.exited():\n                raise RuntimeError('Trying to access AtomicContextManager after its exit.')\n            self._atomic._val = value\n\n    def __init__(self, value: Any):\n        self._val = value\n        self._lock = threading.Lock()\n\n    def lock(self) -> AtomicContextManager:\n        return self.__class__.AtomicContextManager(self)\n\n    def value(self) -> Any:\n        with self.lock() as val:\n            return val.value()\n"
  },
  {
    "path": "asm2vec/internal/parse.py",
    "content": "from typing import *\nimport logging\n\nimport asm2vec.asm\n\n\nclass AssemblySyntaxError(Exception):\n    def __init__(self, message: str = None):\n        self._msg = message\n\n    def message(self) -> str:\n        return self._msg\n\n\ndef raise_asm_syntax_error(expect: str, found: str) -> None:\n    raise AssemblySyntaxError('Expect \"{}\", but \"{}\" was found.'.format(expect, found))\n\n\njmp_op = {\n    'jmp', 'ja', 'jae', 'jb', 'jbe', 'jc', 'jcxz', 'jecxz', 'jrcxz', 'je', 'jg', 'jge', 'jl', 'jle', 'jna',\n    'jnae', 'jnb', 'jnbe', 'jnc', 'jne', 'jng', 'jnge', 'jnl', 'jnle', 'jno', 'jnp', 'jns', 'jnz', 'jo', 'jp',\n    'jpe', 'jpo', 'js', 'jz'\n}\n\ncall_op = {\n    'call'\n}\n\nret_op = {\n    'ret'\n}\n\nx86_64_regs = {\n    'al', 'ah', 'bl', 'bh', 'cl', 'ch', 'dl', 'dh', 'spl', 'bpl', 'sil', 'dil',\n    'ax', 'bx', 'cx', 'dx', 'sp', 'bp', 'si', 'di',\n    'eax', 'ebx', 'ecx', 'edx', 'esp', 'ebp', 'esi', 'edi',\n    'rax', 'rdx', 'rcx', 'rdx', 'rsp', 'rbp', 'rsi', 'rdi',\n    'r8b', 'r9b', 'r10b', 'r11b', 'r12b', 'r13b', 'r14b', 'r15b',\n    'r8w', 'r9w', 'r10w', 'r11w', 'r12w', 'r13w', 'r14w', 'r15w',\n    'r8d', 'r9d', 'r10d', 'r11d', 'r12d', 'r13d', 'r14d', 'r15d',\n    'r8', 'r9', 'r10', 'r11', 'r12', 'r13', 'r14', 'r15',\n    'cs', 'ss', 'ds', 'es', 'fs', 'gs',\n    'ecs', 'ess', 'eds', 'ees', 'efs', 'egs',\n    'rcs', 'rss', 'rds', 'res', 'rfs', 'rgs'\n}\n\n\ndef is_jmp(op: str) -> bool:\n    return op.lower() in jmp_op\n\n\ndef is_conditional_jmp(op: str) -> bool:\n    return is_jmp(op) and op.lower() != 'jmp'\n\n\ndef is_call(op: str) -> bool:\n    return op.lower() in call_op\n\n\ndef is_ret(op: str) -> bool:\n    return op.lower() in ret_op\n\n\ndef is_reg(arg: str) -> bool:\n    return arg.lower() in x86_64_regs\n\n\nclass CFGBuilder:\n    def __init__(self, context: 'ParseContext'):\n        self._context = context\n        self._blocks: List[asm2vec.asm.BasicBlock] = []\n        self._active_block = -1\n        
self._block_labels: Dict[str, int] = dict()\n\n    def _logger(self) -> logging.Logger:\n        return self._context.logger().getChild(self.__class__.__name__)\n\n    def _allocate_block(self) -> int:\n        self._blocks.append(asm2vec.asm.BasicBlock())\n        return len(self._blocks) - 1\n\n    def _allocate_named_block(self, name: str) -> int:\n        if name in self._block_labels:\n            return self._block_labels[name]\n        else:\n            idx = self._allocate_block()\n            self._block_labels[name] = idx\n            return idx\n\n    def _get_active_block(self) -> asm2vec.asm.BasicBlock:\n        return self._blocks[self._active_block]\n\n    def _set_active_block(self, block_id: int) -> None:\n        self._active_block = block_id\n\n    def _has_active_block(self) -> bool:\n        return self._active_block != -1\n\n    def _close_active_block(self) -> None:\n        self._active_block = -1\n\n    def _add_jmp(self, op: str, args: List[str]) -> None:\n        if len(args) != 1:\n            raise_asm_syntax_error('Jump with single operand', '{} operands'.format(len(args)))\n        cur_block = self._get_active_block()\n        self._close_active_block()\n        if is_conditional_jmp(op):\n            # Allocate another basic block for more instructions since the current code point is reachable.\n            # This may produce some empty basic blocks in the final output.\n            self._set_active_block(self._allocate_block())\n            self._get_active_block().add_predecessor(cur_block)\n\n    def add_instr(self, op: str, args: List[str]) -> None:\n        if not self._has_active_block():\n            # Allocate a new basic block.\n            self._set_active_block(self._allocate_block())\n\n        self._get_active_block().add_instruction(asm2vec.asm.Instruction(op, *args))\n        if is_jmp(op):\n            self._add_jmp(op, args)\n        elif is_ret(op):\n            # `ret` instruction encountered. 
Close the current active block.\n            self._close_active_block()\n\n    def set_label(self, label: str) -> None:\n        block_id = self._block_labels.get(label, -1)\n        if block_id == -1:\n            # Test if the current active block is empty, in which case we can reuse it.\n            if self._has_active_block() and len(self._get_active_block()) == 0:\n                self._block_labels[label] = self._active_block\n            else:\n                # Open a new block for the label.\n                block_id = self._allocate_block()\n                self._block_labels[label] = block_id\n                # Link the new block with the previously-active block.\n                if self._has_active_block():\n                    self._get_active_block().add_successor(self._blocks[block_id])\n                self._set_active_block(block_id)\n        else:\n            self._set_active_block(block_id)\n\n    def build(self) -> List[asm2vec.asm.Function]:\n        func_entries: Dict[str, int] = dict()\n\n        # Walk through all instructions and fix block relations formed by jump and call instructions.\n        for blk in self._blocks:\n            for inst in blk:\n                if is_jmp(inst.op()):\n                    target = inst.args()[0]\n                    if target in self._block_labels:\n                        blk.add_successor(self._blocks[self._block_labels[target]])\n                elif is_call(inst.op()):\n                    target = inst.args()[0]\n                    if target in self._block_labels and target not in func_entries:\n                        func_entries[target] = self._block_labels[target]\n\n        for func_name in self._context.options().func_names():\n            if func_name not in self._block_labels:\n                self._logger().warning('Cannot find function \"%s\"', func_name)\n                continue\n            if func_name not in func_entries:\n                func_entries[func_name] = 
self._block_labels[func_name]\n\n        funcs: Dict[str, asm2vec.asm.Function] = \\\n            dict(map(lambda x: (x[0], asm2vec.asm.Function(self._blocks[x[1]], x[0])), func_entries.items()))\n\n        # Fix function call relation.\n        for (name, f) in funcs.items():\n            def block_action(block: asm2vec.asm.BasicBlock) -> None:\n                for instr in block:\n                    if is_call(instr.op()):\n                        callee_name = instr.args()[0]\n                        if callee_name in funcs:\n                            f.add_callee(funcs[callee_name])\n\n            asm2vec.asm.walk_cfg(f.entry(), block_action)\n\n        # TODO: Implement Selective Callee Expansion here.\n\n        return list(funcs.values())\n\n\nclass ParseOptions:\n    def __init__(self, **kwargs):\n        self._func_names = kwargs.get('func_names', [])\n\n    def func_names(self) -> List[str]:\n        return self._func_names\n\n\nclass ParseContext:\n    def __init__(self, **kwargs):\n        self._builder = CFGBuilder(self)\n        self._options = ParseOptions(**kwargs)\n        self._logger = logging.getLogger('asm2vec.ParseContext')\n\n    def logger(self) -> logging.Logger:\n        return self._logger\n\n    def options(self) -> ParseOptions:\n        return self._options\n\n    def builder(self) -> CFGBuilder:\n        return self._builder\n\n\n'''\n\nParser rules for input assembly file:\n\nprogram\n    : asm_line*\n    ;\n\nasm_line\n    : asm_label '\\n'\n    | BLANKS asm_instr '\\n'\n    ;\n\nasm_label\n    : ASM_LABEL_ID ':'\n    ;\n\nasm_instr\n    : ASM_INSTR_OP ' ' asm_instr_arg_list\n    ;\n\nasm_instr_arg_list\n    : ASM_INSTR_ARG (',' asm_instr_arg_list)?\n    | /* epsilon */\n    ;\n\nBLANKS : [ \\n\\t]+;\n\n'''\n\n\ndef is_fullmatch(pattern, s: str) -> bool:\n    return pattern.fullmatch(s) is not None\n\n\ndef parse_asm_label(ln: str, context: ParseContext) -> None:\n    stripped = ln.strip()\n    if stripped[-1] != ':':\n        
raise_asm_syntax_error('asm_label', ln)\n\n    context.builder().set_label(stripped[:-1])\n\n\ndef parse_asm_instr(ln: str, context: ParseContext) -> None:\n    delim_index = ln.find(' ')\n    args = []\n    if delim_index == -1:\n        op = ln\n    else:\n        op = ln[:delim_index]\n        args = list(map(lambda arg: arg.strip(), ln[delim_index + 1:].split(',')))\n\n    context.builder().add_instr(op, args)\n\n\ndef parse_asm_line(ln: str, context: ParseContext) -> None:\n    if len(ln.strip()) == 0:\n        return\n\n    if ln[0].isspace():\n        # Expect production asm_line -> BLANKS asm_instr '\\n'\n        parse_asm_instr(ln.strip(), context)\n    else:\n        # Expect production asm_line -> asm_label\n        parse_asm_label(ln, context)\n\n\ndef parse_asm_lines(lines: Iterable[str], **kwargs) -> List[asm2vec.asm.Function]:\n    context = ParseContext(**kwargs)\n    for ln in lines:\n        parse_asm_line(ln, context)\n    return context.builder().build()\n"
  },
  {
    "path": "asm2vec/internal/repr.py",
    "content": "import random\nfrom typing import *\nimport concurrent.futures\n\nfrom asm2vec.asm import Instruction\nfrom asm2vec.asm import BasicBlock\nfrom asm2vec.asm import Function\nfrom asm2vec.asm import walk_cfg\nfrom asm2vec.repo import SequentialFunction\nfrom asm2vec.repo import VectorizedFunction\nfrom asm2vec.repo import VectorizedToken\nfrom asm2vec.repo import Token\nfrom asm2vec.repo import FunctionRepository\nfrom asm2vec.logging import asm2vec_logger\n\nfrom asm2vec.internal.atomic import Atomic\n\n\ndef _random_walk(f: Function) -> List[Instruction]:\n    visited: Set[int] = set()\n    current = f.entry()\n    seq: List[Instruction] = []\n\n    while current.id() not in visited:\n        visited.add(current.id())\n        for instr in current:\n            seq.append(instr)\n        if len(current.successors()) == 0:\n            break\n\n        current = random.choice(current.successors())\n\n    return seq\n\n\ndef _edge_sampling(f: Function) -> List[List[Instruction]]:\n    edges: List[Tuple[BasicBlock, BasicBlock]] = []\n\n    def collect_edges(block: BasicBlock) -> None:\n        nonlocal edges\n        for successor in block.successors():\n            edges.append((block, successor))\n\n    walk_cfg(f.entry(), collect_edges)\n\n    visited_edges: Set[Tuple[int, int]] = set()\n    sequences = []\n    while len(visited_edges) < len(edges):\n        e = random.choice(edges)\n        visited_edges.add((e[0].id(), e[1].id()))\n        sequences.append(list(e[0]) + list(e[1]))\n\n    return sequences\n\n\ndef make_sequential_function(f: Function, num_of_random_walks: int = 10) -> SequentialFunction:\n    seq: List[List[Instruction]] = []\n\n    for _ in range(num_of_random_walks):\n        seq.append(_random_walk(f))\n\n    # seq += _edge_sampling(f)\n\n    return SequentialFunction(f.id(), f.name(), seq)\n\n\ndef _get_function_tokens(f: Function, dim: int = 200) -> List[VectorizedToken]:\n    tokens: List[VectorizedToken] = []\n\n    def 
collect_tokens(block: BasicBlock) -> None:\n        nonlocal tokens\n        for ins in block:\n            tk: List[str] = [ins.op()] + ins.args()\n            for t in tk:\n                tokens.append(VectorizedToken(t, None, None, dim))\n\n    walk_cfg(f.entry(), collect_tokens)\n    return tokens\n\n\ndef _make_function_repo_helper(vocab: Dict[str, Token], funcs: List[Function],\n                               dim: int, num_of_rnd_walks: int, jobs: int) -> FunctionRepository:\n    progress = Atomic(1)\n\n    vec_funcs_atomic = Atomic([])\n    vocab_atomic = Atomic(vocab)\n\n    def func_handler(f: Function):\n        with vec_funcs_atomic.lock() as vfa:\n            vfa.value().append(VectorizedFunction(make_sequential_function(f, num_of_rnd_walks), dim=dim*2))\n\n        tokens = _get_function_tokens(f, dim)\n        for tk in tokens:\n            with vocab_atomic.lock() as va:\n                if tk.name() in va.value():\n                    va.value()[tk.name()].count += 1\n                else:\n                    va.value()[tk.name()] = Token(tk)\n\n        asm2vec_logger().debug('Sequence generated for function \"%s\", progress: %f%%',\n                               f.name(), progress.value() / len(funcs) * 100)\n        with progress.lock() as prog:\n            prog.set(prog.value() + 1)\n\n    executor = concurrent.futures.ThreadPoolExecutor(max_workers=jobs)\n    fs = []\n    for fn in funcs:\n        fs.append(executor.submit(func_handler, fn))\n    done, not_done = concurrent.futures.wait(fs, return_when=concurrent.futures.FIRST_EXCEPTION)\n\n    if len(not_done) > 0 or any(map(lambda fut: fut.cancelled() or not fut.done(), done)):\n        raise RuntimeError('Not all tasks finished successfully.')\n\n    vec_funcs = vec_funcs_atomic.value()\n    repo = FunctionRepository(vec_funcs, vocab)\n\n    # Re-calculate the frequency of each token.\n    for t in repo.vocab().values():\n        t.frequency = t.count / repo.num_of_tokens()\n\n    return 
repo\n\n\ndef make_function_repo(funcs: List[Function], dim: int, num_of_rnd_walks: int, jobs: int) -> FunctionRepository:\n    return _make_function_repo_helper(dict(), funcs, dim, num_of_rnd_walks, jobs)\n\n\ndef make_estimate_repo(vocabulary: Dict[str, Token], f: Function,\n                       dim: int, num_of_rnd_walks: int) -> FunctionRepository:\n    # Make a copy of the function list and vocabulary to avoid the change to affect the original trained repo.\n    vocab: Dict[str, Token] = dict(**vocabulary)\n    return _make_function_repo_helper(vocab, [f], dim, num_of_rnd_walks, 1)\n"
  },
  {
    "path": "asm2vec/internal/sampling.py",
    "content": "from typing import *\nimport random\n\nT = TypeVar('T')\n\n\nclass NegativeSampler:\n    def __init__(self, distribution: List[Tuple[T, float]], alpha: float = 3 / 4):\n        self._values = list(map(lambda x: x[0], distribution))\n        self._weights = list(map(lambda x: x[1] ** alpha, distribution))\n\n    def sample(self, k: int) -> List[T]:\n        return random.choices(self._values, self._weights, k=k)\n"
  },
  {
    "path": "asm2vec/internal/training.py",
    "content": "from typing import *\nimport math\nimport threading\nimport concurrent.futures\n\nimport numpy as np\n\nfrom asm2vec.asm import Instruction\nfrom asm2vec.internal.repr import FunctionRepository\nfrom asm2vec.internal.repr import VectorizedFunction\nfrom asm2vec.internal.repr import Token\nfrom asm2vec.internal.repr import VectorizedToken\nfrom asm2vec.internal.sampling import NegativeSampler\nfrom asm2vec.internal.atomic import LockContextManager\nfrom asm2vec.internal.atomic import Atomic\nfrom asm2vec.logging import asm2vec_logger\n\n\nclass Asm2VecParams:\n    def __init__(self, **kwargs):\n        self.d: int = kwargs.get('d', 200)\n        self.initial_alpha: float = kwargs.get('alpha', 0.0025)\n        self.alpha_update_interval: int = kwargs.get('alpha_update_interval', 10000)\n        self.num_of_rnd_walks: int = kwargs.get('rnd_walks', 3)\n        self.neg_samples: int = kwargs.get('neg_samples', 25)\n        self.iteration: int = kwargs.get('iteration', 1)\n        self.jobs: int = kwargs.get('jobs', 4)\n\n    def to_dict(self) -> Dict[str, Any]:\n        return {\n            'd': self.d,\n            'alpha': self.initial_alpha,\n            'alpha_update_interval': self.alpha_update_interval,\n            'num_of_rnd_walks': self.num_of_rnd_walks,\n            'neg_samples': self.neg_samples,\n            'iteration': self.iteration,\n            'jobs': self.jobs\n        }\n\n    def populate(self, rep: Dict[bytes, Any]) -> None:\n        self.d: int = rep.get(b'd', 200)\n        self.initial_alpha: float = rep.get(b'alpha', 0.0025)\n        self.alpha_update_interval: int = rep.get(b'alpha_update_interval', 10000)\n        self.num_of_rnd_walks: int = rep.get(b'rnd_walks', 3)\n        self.neg_samples: int = rep.get(b'neg_samples', 25)\n        self.iteration: int = rep.get(b'iteration', 1)\n        self.jobs: int = rep.get(b'jobs', 4)\n\n\nclass SequenceWindow:\n    def __init__(self, sequence: List[Instruction], vocabulary: 
Dict[str, Token]):\n        self._seq = sequence\n        self._vocab = vocabulary\n        self._i = 1\n\n        self._prev_ins = None\n        self._curr_ins = None\n        self._next_ins = None\n\n        self._prev_ins_op = None\n        self._prev_ins_args = None\n        self._curr_ins_op = None\n        self._curr_ins_args = None\n        self._next_ins_op = None\n        self._next_ins_args = None\n\n    def move_next(self) -> bool:\n        if self._i >= len(self._seq) - 1:\n            return False\n\n        def token_lookup(name) -> VectorizedToken:\n            return self._vocab[name].vectorized()\n\n        self._prev_ins = self._seq[self._i - 1]\n        self._curr_ins = self._seq[self._i]\n        self._next_ins = self._seq[self._i + 1]\n\n        self._prev_ins_op = token_lookup(self._prev_ins.op())\n        self._prev_ins_args = list(map(token_lookup, self._prev_ins.args()))\n        self._curr_ins_op = token_lookup(self._curr_ins.op())\n        self._curr_ins_args = list(map(token_lookup, self._curr_ins.args()))\n        self._next_ins_op = token_lookup(self._next_ins.op())\n        self._next_ins_args = list(map(token_lookup, self._next_ins.args()))\n\n        self._i += 1\n\n        return True\n\n    def prev_ins(self) -> Instruction:\n        return self._prev_ins\n\n    def prev_ins_op(self) -> VectorizedToken:\n        return self._prev_ins_op\n\n    def prev_ins_args(self) -> List[VectorizedToken]:\n        return self._prev_ins_args\n\n    def curr_ins(self) -> Instruction:\n        return self._curr_ins\n\n    def curr_ins_op(self) -> VectorizedToken:\n        return self._curr_ins_op\n\n    def curr_ins_args(self) -> List[VectorizedToken]:\n        return self._curr_ins_args\n\n    def next_ins(self) -> Instruction:\n        return self._next_ins\n\n    def next_ins_op(self) -> VectorizedToken:\n        return self._next_ins_op\n\n    def next_ins_args(self) -> List[VectorizedToken]:\n        return self._next_ins_args\n\n\nclass 
TrainingContext:\n    class Counter:\n        def __init__(self, context: 'TrainingContext', name: str, initial: int = 0):\n            self._context = context\n            self._name = name\n            self._val = initial\n\n        def val(self) -> int:\n            with self._context.lock():\n                return self._val\n\n        def inc(self) -> int:\n            with self._context.lock():\n                self._val += 1\n                return self._val\n\n        def reset(self) -> int:\n            with self._context.lock():\n                v = self._val\n                self._val = 0\n                return v\n\n    TOKENS_HANDLED_COUNTER: str = \"tokens_handled\"\n\n    def __init__(self, repo: FunctionRepository, params: Asm2VecParams, is_estimating: bool = False):\n        self._repo = repo\n        self._params = params\n        self._alpha = params.initial_alpha\n        self._sampler = NegativeSampler(list(map(lambda t: (t, t.frequency), repo.vocab().values())))\n        self._is_estimating = is_estimating\n        self._counters = dict()\n        self._lock = threading.Lock()\n\n    def repo(self) -> FunctionRepository:\n        return self._repo\n\n    def params(self) -> Asm2VecParams:\n        return self._params\n\n    def lock(self) -> LockContextManager:\n        return LockContextManager(self._lock)\n\n    def alpha(self) -> float:\n        with self.lock():\n            return self._alpha\n\n    def set_alpha(self, alpha: float) -> None:\n        with self.lock():\n            self._alpha = alpha\n\n    def sampler(self) -> NegativeSampler:\n        return self._sampler\n\n    def is_estimating(self) -> bool:\n        return self._is_estimating\n\n    def create_sequence_window(self, seq: List[Instruction]) -> SequenceWindow:\n        return SequenceWindow(seq, self._repo.vocab())\n\n    def get_counter(self, name: str) -> Counter:\n        with self.lock():\n            return self._counters.get(name)\n\n    def add_counter(self, 
name: str, initial: int = 0) -> Counter:\n        with self.lock():\n            c = self.__class__.Counter(self, name, initial)\n            self._counters[name] = c\n            return c\n\n\ndef _sigmoid(x: float) -> float:\n    return 1 / (1 + np.exp(-x))\n\n\ndef _identity(cond: bool) -> int:\n    return 1 if cond else 0\n\n\ndef _dot_sigmoid(lhs: np.ndarray, rhs: np.ndarray) -> float:\n    # noinspection PyTypeChecker\n    return _sigmoid(np.dot(lhs, rhs))\n\n\ndef _get_inst_repr(op: VectorizedToken, args: List[VectorizedToken]) -> np.ndarray:\n    if len(args) == 0:\n        arg_vec = np.zeros(len(op.v))\n    else:\n        arg_vec = np.average(list(map(lambda tk: tk.v, args)), axis=0)\n    return np.hstack((op.v, arg_vec))\n\n\ndef _train_vectorized(wnd: SequenceWindow, f: VectorizedFunction, context: TrainingContext) -> None:\n    ct_prev = _get_inst_repr(wnd.prev_ins_op(), wnd.prev_ins_args())\n    ct_next = _get_inst_repr(wnd.next_ins_op(), wnd.next_ins_args())\n    delta = np.average([ct_prev, f.v, ct_next], axis=0)\n\n    tokens = [wnd.curr_ins_op()] + wnd.curr_ins_args()\n\n    f_grad = np.zeros(f.v.shape)\n    for tk in tokens:\n        # Negative sampling.\n        sampled_tokens: Dict[str, VectorizedToken] = \\\n            dict(map(lambda x: (x.name(), x.vectorized()), context.sampler().sample(context.params().neg_samples)))\n        if tk.name() not in sampled_tokens:\n            sampled_tokens[tk.name()] = tk\n\n        # The following code block tries to update the learning rate when necessary. 
Not required for now.\n        # tokens_handled_counter = context.get_counter(TrainingContext.TOKENS_HANDLED_COUNTER)\n        # if tokens_handled_counter is not None:\n        #     if tokens_handled_counter.val() % context.params().alpha_update_interval == 0:\n        #         # Update the learning rate.\n        #         alpha = 1 - tokens_handled_counter.val() / (\n        #                 context.params().iteration * context.repo().num_of_tokens() + 1)\n        #         context.set_alpha(max(alpha, context.params().initial_alpha * 0.0001))\n\n        for sp_tk in sampled_tokens.values():\n            # Accumulate gradient for function vector.\n            g = (_dot_sigmoid(delta, tk.v_pred) - _identity(tk is sp_tk)) * context.alpha()\n            f_grad += g / 3 * tk.v_pred\n\n            if not context.is_estimating():\n                with context.lock():\n                    # Update v'_t\n                    tk.v_pred -= g * delta\n\n    # Apply function gradient.\n    with context.lock():\n        f.v -= f_grad\n\n    if not context.is_estimating():\n        # Apply gradient to instructions.\n        d = len(f_grad) // 2\n\n        with context.lock():\n            wnd.prev_ins_op().v -= f_grad[:d]\n            if len(wnd.prev_ins_args()) > 0:\n                prev_args_grad = f_grad[d:] / len(wnd.prev_ins_args())\n                for t in wnd.prev_ins_args():\n                    t.v -= prev_args_grad\n\n            wnd.next_ins_op().v -= f_grad[:d]\n            if len(wnd.next_ins_args()) > 0:\n                next_args_grad = f_grad[d:] / len(wnd.next_ins_args())\n                for t in wnd.next_ins_args():\n                    t.v -= next_args_grad\n\n\ndef _train_sequence(f: VectorizedFunction, seq: List[Instruction], context: TrainingContext) -> None:\n    wnd = context.create_sequence_window(seq)\n    while wnd.move_next():\n        _train_vectorized(wnd, f, context)\n\n\ndef train(repository: FunctionRepository, params: Asm2VecParams) -> 
None:\n    context = TrainingContext(repository, params)\n    context.add_counter(TrainingContext.TOKENS_HANDLED_COUNTER)\n\n    asm2vec_logger().debug('Total number of functions: %d', len(context.repo().funcs()))\n    progress = Atomic(1)\n\n    def train_function(fn: VectorizedFunction):\n        for seq in fn.sequential().sequences():\n            _train_sequence(fn, seq, context)\n\n        asm2vec_logger().debug('Function \"%s\" trained, progress: %f%%',\n                               fn.sequential().name(), progress.value() / len(context.repo().funcs()) * 100)\n        with progress.lock() as prog_proxy:\n            prog_proxy.set(prog_proxy.value() + 1)\n\n    executor = concurrent.futures.ThreadPoolExecutor(max_workers=context.params().jobs)\n    futures = []\n    for f in context.repo().funcs():\n        futures.append(executor.submit(train_function, f))\n\n    done, not_done = concurrent.futures.wait(futures, return_when=concurrent.futures.FIRST_EXCEPTION)\n    # A future that raised ends up in the `done` set, so it must be checked for\n    # exceptions as well; otherwise a failure in the last task is silently lost.\n    if len(not_done) > 0 or any(fut.exception() is not None for fut in done):\n        raise RuntimeError('Training failed due to one or more failed tasks.')\n\n\ndef estimate(f: VectorizedFunction, estimate_repo: FunctionRepository, params: Asm2VecParams) -> np.ndarray:\n    context = TrainingContext(estimate_repo, params, True)\n    for seq in f.sequential().sequences():\n        _train_sequence(f, seq, context)\n\n    return f.v\n"
  },
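The core update in `_train_vectorized` can be hard to follow amid the locking and negative-sampling bookkeeping. The following standalone sketch replays one update step with plain NumPy; the toy dimension, learning rate, and random vectors are illustrative assumptions, not values taken from the source:

```python
import numpy as np

def sigmoid(x: float) -> float:
    # Logistic function, same form as _sigmoid in training.py.
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
dim = 8        # toy dimension; the model default is d=200
alpha = 0.025  # toy learning rate

delta = rng.standard_normal(dim)           # stands in for the averaged context vector
target = rng.standard_normal(dim)          # v_pred of the current (true) token
negatives = rng.standard_normal((3, dim))  # v_pred of sampled negative tokens

f_grad = np.zeros(dim)
for label, v_pred in [(1, target)] + [(0, row) for row in negatives]:
    # sigmoid(delta . v_pred) is the predicted probability; label is 1 only
    # for the true token, mirroring _identity(tk is sp_tk) in the source.
    g = (sigmoid(delta @ v_pred) - label) * alpha
    f_grad += g * v_pred   # accumulate gradient w.r.t. the context/function vector
    v_pred -= g * delta    # descend on the token's prediction vector

delta -= f_grad            # descend on the context vector
```

The true token pushes `delta` and its own `v_pred` together, while each negative sample pushes them apart, which is exactly the role the sampled tokens play in `_train_vectorized`.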
  {
    "path": "asm2vec/internal/util.py",
    "content": "import numpy as np\n\n\ndef make_small_ndarray(dim: int) -> np.ndarray:\n    rng = np.random.default_rng()\n    return (rng.random(dim) - 0.5) / dim\n"
  },
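`make_small_ndarray` draws each component uniformly from [-0.5/dim, 0.5/dim), a word2vec-style initialization whose magnitude shrinks with dimensionality. A quick self-contained check of that range:

```python
import numpy as np

def make_small_ndarray(dim: int) -> np.ndarray:
    # Same scheme as asm2vec/internal/util.py: uniform values in [-0.5/dim, 0.5/dim).
    rng = np.random.default_rng()
    return (rng.random(dim) - 0.5) / dim

v = make_small_ndarray(200)
assert v.shape == (200,)
# Every component is bounded by 0.5/dim in absolute value.
assert np.all(np.abs(v) <= 0.5 / 200)
```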
  {
    "path": "asm2vec/logging.py",
    "content": "import logging\n\n\ndef asm2vec_logger() -> logging.Logger:\n    return logging.getLogger('asm2vec')\n\n\ndef config_asm2vec_logging(**kwargs):\n    level = kwargs.get('level', logging.WARNING)\n    handlers = kwargs.get('handlers', [])\n    filters = kwargs.get('filters', [])\n\n    asm2vec_logger().setLevel(level)\n    for hd in handlers:\n        asm2vec_logger().addHandler(hd)\n    for ft in filters:\n        asm2vec_logger().addFilter(ft)\n"
  },
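`config_asm2vec_logging` configures the package's named `'asm2vec'` logger rather than the root logger, so library output can be controlled without affecting the host application. The same pattern with only the stdlib (the format string here is illustrative, not taken from the source):

```python
import logging

# Configure the named 'asm2vec' logger, not the root logger.
logger = logging.getLogger('asm2vec')
logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(name)s [%(levelname)s] %(message)s'))
logger.addHandler(handler)

# Messages now flow through the attached handler at DEBUG level and above.
logger.debug('Total number of functions: %d', 2)
```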
  {
    "path": "asm2vec/model.py",
    "content": "from typing import *\n\nimport numpy as np\n\nimport asm2vec.asm\nimport asm2vec.repo\n\nimport asm2vec.internal.training\nimport asm2vec.internal.repr\nimport asm2vec.internal.util\n\n\nclass Asm2VecMemento:\n    def __init__(self):\n        self.params: Optional[asm2vec.internal.training.Asm2VecParams] = None\n        self.vocab: Optional[Dict[str, asm2vec.repo.Token]] = None\n\n    def serialize(self) -> Dict[str, Any]:\n        return {\n            'params': self.params.to_dict(),\n            'vocab': asm2vec.repo.serialize_vocabulary(self.vocab)\n        }\n\n    def populate(self, rep: Dict[bytes, Any]) -> None:\n        self.params = asm2vec.internal.training.Asm2VecParams()\n        self.params.populate(rep[b'params'])\n        self.vocab = asm2vec.repo.deserialize_vocabulary(rep[b'vocab'])\n\n\nclass Asm2Vec:\n    def __init__(self, **kwargs):\n        self._params = asm2vec.internal.training.Asm2VecParams(**kwargs)\n        self._vocab = None\n\n    def memento(self) -> Asm2VecMemento:\n        memento = Asm2VecMemento()\n        memento.params = self._params\n        memento.vocab = self._vocab\n        return memento\n\n    def set_memento(self, memento: Asm2VecMemento) -> None:\n        self._params = memento.params\n        self._vocab = memento.vocab\n\n    def make_function_repo(self, funcs: List[asm2vec.asm.Function]) -> asm2vec.repo.FunctionRepository:\n        return asm2vec.internal.repr.make_function_repo(\n            funcs, self._params.d, self._params.num_of_rnd_walks, self._params.jobs)\n\n    def train(self, repo: asm2vec.repo.FunctionRepository) -> None:\n        asm2vec.internal.training.train(repo, self._params)\n        self._vocab = repo.vocab()\n\n    def to_vec(self, f: asm2vec.asm.Function) -> np.ndarray:\n        estimate_repo = asm2vec.internal.repr.make_estimate_repo(\n            self._vocab, f, self._params.d, self._params.num_of_rnd_walks)\n        vf = estimate_repo.funcs()[0]\n\n        
asm2vec.internal.training.estimate(vf, estimate_repo, self._params)\n\n        return vf.v\n"
  },
  {
    "path": "asm2vec/parse.py",
    "content": "from typing import *\n\nimport asm2vec.asm\nimport asm2vec.internal.parse\n\nfrom asm2vec.internal.parse import AssemblySyntaxError\n\n\ndef parse_text(asm: str, **kwargs) -> List[asm2vec.asm.Function]:\n    return asm2vec.internal.parse.parse_asm_lines(asm.split('\\n'), **kwargs)\n\n\ndef parse_fp(fp, **kwargs) -> List[asm2vec.asm.Function]:\n    return asm2vec.internal.parse.parse_asm_lines(fp, **kwargs)\n\n\ndef parse(asm_file_name: str, **kwargs) -> List[asm2vec.asm.Function]:\n    with open(asm_file_name, mode='r') as fp:\n        return parse_fp(fp, **kwargs)\n"
  },
  {
    "path": "asm2vec/repo.py",
    "content": "from typing import *\n\nimport numpy as np\n\nimport asm2vec.asm\nimport asm2vec.internal.util\n\n\nclass SequentialFunction:\n    def __init__(self, fid: int, name: str, sequences: List[List[asm2vec.asm.Instruction]]):\n        self._id = fid\n        self._name = name\n        self._seq = sequences\n\n    def id(self) -> int:\n        return self._id\n\n    def name(self) -> str:\n        return self._name\n\n    def sequences(self) -> List[List[asm2vec.asm.Instruction]]:\n        return self._seq\n\n\nclass VectorizedFunction:\n    def __init__(self, f: SequentialFunction, v: np.ndarray = None, dim: int = 400):\n        self._f = f\n        self.v = v if v is not None else asm2vec.internal.util.make_small_ndarray(dim)\n\n    def sequential(self) -> SequentialFunction:\n        return self._f\n\n\nclass VectorizedToken:\n    def __init__(self, name: str, v: np.ndarray = None, v_pred: np.ndarray = None, dim: int = 200):\n        self._name = name\n        self.v = v if v is not None else np.zeros(dim)\n        self.v_pred = v_pred if v_pred is not None else asm2vec.internal.util.make_small_ndarray(dim * 2)\n\n    def __eq__(self, other):\n        if not isinstance(other, VectorizedToken):\n            return False\n\n        return self._name == other._name\n\n    def __ne__(self, other):\n        return not self.__eq__(other)\n\n    def name(self) -> str:\n        return self._name\n\n\nclass Token:\n    def __init__(self, vt: VectorizedToken, count: int = 1):\n        self._vt = vt\n        self.count: int = count\n        self.frequency: float = 0\n\n    def vectorized(self) -> VectorizedToken:\n        return self._vt\n\n    def name(self) -> str:\n        return self._vt.name()\n\n\nclass FunctionRepository:\n    def __init__(self, funcs: List[VectorizedFunction], vocab: Dict[str, Token]):\n        self._funcs = funcs\n        self._vocab = vocab\n        self._num_of_tokens = sum(map(lambda x: x.count, vocab.values()))\n\n    def funcs(self) 
-> List[VectorizedFunction]:\n        return self._funcs\n\n    def vocab(self) -> Dict[str, Token]:\n        return self._vocab\n\n    def num_of_tokens(self) -> int:\n        return self._num_of_tokens\n\n\ndef _serialize_token(token: Token) -> Dict[str, Any]:\n    return {\n        'name': token.name(),\n        'v': list(token.vectorized().v),\n        'v_pred': list(token.vectorized().v_pred),\n        'count': token.count,\n        'frequency': token.frequency\n    }\n\n\ndef _deserialize_token(rep: Dict[bytes, Any]) -> Token:\n    name = rep[b'name'].decode('utf-8')\n    v = np.array(rep[b'v'])\n    v_pred = np.array(rep[b'v_pred'])\n    count = rep[b'count']\n    frequency = rep[b'frequency']\n\n    token = Token(VectorizedToken(name, v, v_pred))\n    token.count = count\n    token.frequency = frequency\n    return token\n\n\ndef serialize_vocabulary(vocab: Dict[str, Token]) -> Dict[str, Any]:\n    return dict(zip(vocab.keys(), map(_serialize_token, vocab.values())))\n\n\ndef deserialize_vocabulary(rep: Dict[bytes, Any]) -> Dict[str, Token]:\n    return dict(zip(map(lambda b: b.decode('utf-8'), rep.keys()), map(_deserialize_token, rep.values())))\n\n\ndef _serialize_sequence(seq: List[asm2vec.asm.Instruction]) -> List[Any]:\n    return list(map(lambda instr: [instr.op(), instr.args()], seq))\n\n\ndef _deserialize_sequence(rep: List[Any]) -> List[asm2vec.asm.Instruction]:\n    # _serialize_sequence stores args() as a list, so decode each argument\n    # individually instead of calling decode on the list itself.\n    return list(map(\n        lambda instr_rep: asm2vec.asm.Instruction(\n            instr_rep[0].decode('utf-8'),\n            list(map(lambda arg: arg.decode('utf-8'), instr_rep[1]))),\n        rep))\n\n\ndef _serialize_vectorized_function(func: VectorizedFunction, include_sequences: bool) -> Dict[str, Any]:\n    data = {\n        'id': func.sequential().id(),\n        'name': func.sequential().name(),\n        'v': list(func.v)\n    }\n\n    if include_sequences:\n        data['sequences'] = list(map(_serialize_sequence, func.sequential().sequences()))\n\n    return data\n\n\ndef _deserialize_vectorized_function(rep: Dict[bytes, Any]) -> 
VectorizedFunction:\n    name = rep[b'name'].decode('utf-8')\n    fid = rep[b'id']\n    v = np.array(rep[b'v'])\n    sequences = list(map(_deserialize_sequence, rep.get(b'sequences', [])))\n    return VectorizedFunction(SequentialFunction(fid, name, sequences), v)\n\n\nSERIALIZE_VOCABULARY: int = 1\nSERIALIZE_FUNCTION: int = 2\nSERIALIZE_FUNCTION_SEQUENCES: int = 4\nSERIALIZE_ALL: int = SERIALIZE_VOCABULARY | SERIALIZE_FUNCTION | SERIALIZE_FUNCTION_SEQUENCES\n\n\ndef serialize_function_repo(repo: FunctionRepository, flags: int) -> Dict[str, Any]:\n    data = dict()\n    if (flags & SERIALIZE_VOCABULARY) != 0:\n        data['vocab'] = serialize_vocabulary(repo.vocab())\n    if (flags & SERIALIZE_FUNCTION) != 0:\n        include_sequences = ((flags & SERIALIZE_FUNCTION_SEQUENCES) != 0)\n        data['funcs'] = list(map(\n            lambda f: _serialize_vectorized_function(f, include_sequences),\n            repo.funcs()))\n\n    return data\n\n\ndef deserialize_function_repo(rep: Dict[bytes, Any]) -> FunctionRepository:\n    funcs = list(map(_deserialize_vectorized_function, rep.get(b'funcs', [])))\n    vocab = deserialize_vocabulary(rep.get(b'vocab', dict()))\n    return FunctionRepository(funcs, vocab)\n"
  },
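Note the asymmetry in `repo.py`: the serializers emit `str`-keyed dicts, while every deserializer indexes with `bytes` keys and decodes string values. That matches a wire format (such as msgpack in raw mode) that returns all strings as bytes. A standalone sketch of that round trip, using a hypothetical `to_bytes_rep` helper to stand in for the real encoder/decoder:

```python
def to_bytes_rep(rep):
    # Hypothetical helper mimicking what a raw-bytes round trip does to the
    # serialized dicts: str keys and str values come back as bytes, which is
    # why the deserializers index with b'...' keys and call .decode('utf-8').
    if isinstance(rep, dict):
        return {k.encode('utf-8'): to_bytes_rep(v) for k, v in rep.items()}
    if isinstance(rep, list):
        return [to_bytes_rep(v) for v in rep]
    if isinstance(rep, str):
        return rep.encode('utf-8')
    return rep

serialized = {'name': 'mov', 'v': [0.1, 0.2], 'count': 3}
rep = to_bytes_rep(serialized)
assert rep[b'name'] == b'mov'   # strings become bytes
assert rep[b'count'] == 3       # numbers pass through unchanged
```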
  {
    "path": "examples/estimating.s",
    "content": "my_strlen_est:\n        cmp     BYTE PTR [rdi], 0\n        je      .L4\n        mov     rax, rdi\n.L3:\n        add     rax, 1\n        cmp     BYTE PTR [rax], 0\n        jne     .L3\n.L2:\n        sub     rax, rdi\n        ret\n.L4:\n        mov     rax, rdi\n        jmp     .L2\nmy_strcmp_est:\n        movzx   eax, BYTE PTR [rdi]\n        test    al, al\n        je      .L12\n.L7:\n        movzx   edx, BYTE PTR [rsi]\n        test    dl, dl\n        je      .L15\n        cmp     dl, al\n        jne     .L16\n        add     rdi, 1\n        add     rsi, 1\n        movzx   eax, BYTE PTR [rdi]\n        test    al, al\n        jne     .L7\n.L12:\n        cmp     BYTE PTR [rsi], 0\n        setne   dl\n        movzx   edx, dl\n        neg     edx\n.L6:\n        mov     eax, edx\n        ret\n.L16:\n        movsx   eax, al\n        movsx   edx, dl\n        sub     eax, edx\n        mov     edx, eax\n        jmp     .L6\n.L15:\n        mov     edx, 1\n        test    al, al\n        jne     .L6\n        jmp     .L12\n.LC0:\n        .string \"%s\"\n.LC1:\n        .string \"%d\\n\"\nmain:\n        sub     rsp, 264\n        lea     rsi, [rsp+128]\n        mov     edi, OFFSET FLAT:.LC0\n        mov     eax, 0\n        call    scanf\n        mov     rsi, rsp\n        mov     edi, OFFSET FLAT:.LC0\n        mov     eax, 0\n        call    scanf\n        lea     rdi, [rsp+128]\n        call    my_strlen_est\n        mov     esi, eax\n        mov     edi, OFFSET FLAT:.LC1\n        mov     eax, 0\n        call    printf\n        mov     rsi, rsp\n        lea     rdi, [rsp+128]\n        call    my_strcmp_est\n        mov     esi, eax\n        mov     edi, OFFSET FLAT:.LC1\n        mov     eax, 0\n        call    printf\n        mov     eax, 0\n        add     rsp, 264\n        ret"
  },
  {
    "path": "examples/training-estimating.py",
    "content": "import numpy as np\n\nimport asm2vec.asm\nimport asm2vec.parse\nimport asm2vec.model\n\n\ndef cosine_similarity(v1, v2):\n    return np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))\n\n\ndef main():\n    training_funcs = asm2vec.parse.parse('training.s',\n                                         func_names=['main', 'my_strlen_train', 'my_strcmp_train'])\n    estimating_funcs = asm2vec.parse.parse('estimating.s',\n                                           func_names=['main', 'my_strlen_est', 'my_strcmp_est'])\n\n    print('# of training functions:', len(training_funcs))\n    print('# of estimating functions:', len(estimating_funcs))\n\n    model = asm2vec.model.Asm2Vec(d=200)\n    training_repo = model.make_function_repo(training_funcs)\n    model.train(training_repo)\n    print('Training complete.')\n\n    for tf in training_repo.funcs():\n        print('Norm of trained function \"{}\" = {}'.format(tf.sequential().name(), np.linalg.norm(tf.v)))\n\n    estimating_funcs_vec = list(map(lambda f: model.to_vec(f), estimating_funcs))\n    print('Estimating complete.')\n\n    for (ef, efv) in zip(estimating_funcs, estimating_funcs_vec):\n        print('Norm of trained function \"{}\" = {}'.format(ef.name(), np.linalg.norm(efv)))\n\n    for tf in training_repo.funcs():\n        for (ef, efv) in zip(estimating_funcs, estimating_funcs_vec):\n            sim = cosine_similarity(tf.v, efv)\n            print('sim(\"{}\", \"{}\") = {}'.format(tf.sequential().name(), ef.name(), sim))\n\n\nif __name__ == '__main__':\n    main()\n"
  },
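The example script compares trained and estimated vectors with cosine similarity, which is scale-invariant; that matters because the printed norms differ per function. A self-contained check of the helper against two easy cases:

```python
import numpy as np

def cosine_similarity(v1, v2):
    # Same formula as the example script: <v1, v2> / (|v1| * |v2|).
    return np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
c = np.array([2.0, 0.0])

# Parallel vectors score 1 regardless of magnitude; orthogonal vectors score 0.
assert abs(cosine_similarity(a, c) - 1.0) < 1e-12
assert abs(cosine_similarity(a, b)) < 1e-12
```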
  {
    "path": "examples/training.s",
    "content": "my_strlen_train:\n        push    rbp\n        mov     rbp, rsp\n        mov     QWORD PTR [rbp-24], rdi\n        mov     rax, QWORD PTR [rbp-24]\n        mov     QWORD PTR [rbp-8], rax\n        jmp     .L2\n.L3:\n        add     QWORD PTR [rbp-8], 1\n.L2:\n        mov     rax, QWORD PTR [rbp-8]\n        movzx   eax, BYTE PTR [rax]\n        test    al, al\n        jne     .L3\n        mov     rax, QWORD PTR [rbp-8]\n        sub     rax, QWORD PTR [rbp-24]\n        pop     rbp\n        ret\nmy_strcmp_train:\n        push    rbp\n        mov     rbp, rsp\n        mov     QWORD PTR [rbp-8], rdi\n        mov     QWORD PTR [rbp-16], rsi\n        jmp     .L6\n.L10:\n        mov     rax, QWORD PTR [rbp-8]\n        movzx   edx, BYTE PTR [rax]\n        mov     rax, QWORD PTR [rbp-16]\n        movzx   eax, BYTE PTR [rax]\n        cmp     dl, al\n        je      .L7\n        mov     rax, QWORD PTR [rbp-8]\n        movzx   eax, BYTE PTR [rax]\n        movsx   edx, al\n        mov     rax, QWORD PTR [rbp-16]\n        movzx   eax, BYTE PTR [rax]\n        movsx   eax, al\n        sub     edx, eax\n        mov     eax, edx\n        jmp     .L8\n.L7:\n        add     QWORD PTR [rbp-8], 1\n        add     QWORD PTR [rbp-16], 1\n.L6:\n        mov     rax, QWORD PTR [rbp-8]\n        movzx   eax, BYTE PTR [rax]\n        test    al, al\n        je      .L9\n        mov     rax, QWORD PTR [rbp-16]\n        movzx   eax, BYTE PTR [rax]\n        test    al, al\n        jne     .L10\n.L9:\n        mov     rax, QWORD PTR [rbp-8]\n        movzx   eax, BYTE PTR [rax]\n        test    al, al\n        je      .L11\n        mov     eax, 1\n        jmp     .L8\n.L11:\n        mov     rax, QWORD PTR [rbp-16]\n        movzx   eax, BYTE PTR [rax]\n        test    al, al\n        je      .L12\n        mov     eax, -1\n        jmp     .L8\n.L12:\n        mov     eax, 0\n.L8:\n        pop     rbp\n        ret\n.LC0:\n        .string \"%s\"\n.LC1:\n        .string \"%d\\n\"\nmain:\n        
push    rbp\n        mov     rbp, rsp\n        sub     rsp, 256\n        lea     rax, [rbp-128]\n        mov     rsi, rax\n        mov     edi, OFFSET FLAT:.LC0\n        mov     eax, 0\n        call    scanf\n        lea     rax, [rbp-256]\n        mov     rsi, rax\n        mov     edi, OFFSET FLAT:.LC0\n        mov     eax, 0\n        call    scanf\n        lea     rax, [rbp-128]\n        mov     rdi, rax\n        call    my_strlen_train\n        mov     esi, eax\n        mov     edi, OFFSET FLAT:.LC1\n        mov     eax, 0\n        call    printf\n        lea     rdx, [rbp-256]\n        lea     rax, [rbp-128]\n        mov     rsi, rdx\n        mov     rdi, rax\n        call    my_strcmp_train\n        mov     esi, eax\n        mov     edi, OFFSET FLAT:.LC1\n        mov     eax, 0\n        call    printf\n        mov     eax, 0\n        leave\n        ret"
  },
  {
    "path": "tests/asm_test.py",
    "content": "import unittest as ut\n\nimport asm2vec.asm as asm\n\n\nclass InstructionTest(ut.TestCase):\n    def test_parse_instruction(self):\n        ins = asm.parse_instruction('mov eax, ebx')\n        self.assertEqual('mov', ins.op(), 'Operators not equal')\n        self.assertListEqual(['eax', 'ebx'], ins.args(), 'Operands not equal')\n\n    def test_parse_instruction_one_operand(self):\n        ins = asm.parse_instruction('inc eax')\n        self.assertEqual('inc', ins.op(), 'Operators not equal')\n        self.assertListEqual(['eax'], ins.args(), 'Operands not equal')\n\n    def test_parse_instruction_no_operands(self):\n        ins = asm.parse_instruction('ret')\n        self.assertEqual('ret', ins.op(), 'Operators not equal')\n        self.assertListEqual([], ins.args(), 'Operands not equal')\n\n\nclass BasicBlockTest(ut.TestCase):\n    pass\n\n\nclass FunctionTest(ut.TestCase):\n    pass\n"
  },
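The `InstructionTest` cases pin down the expected tokenization: the first whitespace-separated token is the operator and the comma-separated remainder are the operands. A minimal sketch of that split, inferred from the test expectations (a hypothetical re-implementation; the real parser lives in `asm2vec.asm`):

```python
from typing import List, Tuple

def parse_instruction(line: str) -> Tuple[str, List[str]]:
    # Split off the operator, then split the remaining operand list on commas.
    parts = line.strip().split(None, 1)
    op = parts[0]
    args = [a.strip() for a in parts[1].split(',')] if len(parts) > 1 else []
    return op, args

assert parse_instruction('mov eax, ebx') == ('mov', ['eax', 'ebx'])
assert parse_instruction('inc eax') == ('inc', ['eax'])
assert parse_instruction('ret') == ('ret', [])
```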
  {
    "path": "tests/parse_test.py",
    "content": "import unittest as ut\n\nimport asm2vec.parse\n\n\ntest_asm = \"\"\"\nmy_strlen:\n        push    rbp\n        mov     rbp, rsp\n        mov     QWORD PTR [rbp-24], rdi\n        mov     rax, QWORD PTR [rbp-24]\n        mov     QWORD PTR [rbp-8], rax\n        jmp     .L2\n.L3:\n        add     QWORD PTR [rbp-8], 1\n.L2:\n        mov     rax, QWORD PTR [rbp-8]\n        movzx   eax, BYTE PTR [rax]\n        test    al, al\n        jne     .L3\n        mov     rax, QWORD PTR [rbp-8]\n        sub     rax, QWORD PTR [rbp-24]\n        pop     rbp\n        ret\n.LC0:\n        .string \"%s\"\n.LC1:\n        .string \"%d\\\\n\"\nmain:\n        push    rbp\n        mov     rbp, rsp\n        add     rsp, -128\n        lea     rax, [rbp-128]\n        mov     rsi, rax\n        mov     edi, OFFSET FLAT:.LC0\n        mov     eax, 0\n        call    scanf\n        lea     rax, [rbp-128]\n        mov     rdi, rax\n        call    my_strlen\n        mov     esi, eax\n        mov     edi, OFFSET FLAT:.LC1\n        mov     eax, 0\n        call    printf\n        mov     eax, 0\n        leave\n        ret\n\"\"\"\n\n\nclass ParseTest(ut.TestCase):\n    def test_parse_text(self):\n        funcs = asm2vec.parse.parse_text(test_asm, func_names=['main', 'my_strlen'])\n        self.assertEqual(2, len(funcs))\n        self.assertEqual({'main', 'my_strlen'}, set(map(lambda f: f.name(), funcs)))\n\n        funcs = dict(map(lambda f: (f.name(), f), funcs))\n        main_func: asm2vec.asm.Function = funcs['main']\n        my_strlen_func: asm2vec.asm.Function = funcs['my_strlen']\n\n        self.assertListEqual(['my_strlen'], list(map(lambda f: f.name(), main_func.callees())))\n        self.assertListEqual(['main'], list(map(lambda f: f.name(), my_strlen_func.callers())))\n"
  },
  {
    "path": "tests/utilities_test.py",
    "content": "import unittest as ut\n\nimport asm2vec.internal.util as utilities\n\n\nclass PermutationTest(ut.TestCase):\n    def test_permute(self):\n        v = [10, 20, 30, 40, 50]\n        p = [2, 4, 1, 0, 3]\n        pv = utilities.permute(v, p)\n        self.assertListEqual([30, 50, 20, 10, 40], pv, 'Permutated vectors not equal.')\n\n    def test_inv_permute(self):\n        v = [30, 50, 20, 10, 40]\n        p = [2, 4, 1, 0, 3]\n        pv = utilities.inverse_permute(v, p)\n        self.assertListEqual([10, 20, 30, 40, 50], pv, 'Inverse permutated vectors not equal.')\n"
  }
]