[
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2020 Gabriele Corso, Luca Cavalleri\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# Principal Neighbourhood Aggregation\n\nImplementation of Principal Neighbourhood Aggregation for Graph Nets [arxiv.org/abs/2004.05718](https://arxiv.org/abs/2004.05718) in PyTorch, DGL and PyTorch Geometric.\n\n*Update: now you can find PNA directly integrated in both [PyTorch Geometric](https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#torch_geometric.nn.conv.PNAConv) and [DGL](https://docs.dgl.ai/generated/dgl.nn.pytorch.conv.PNAConv.html)!*\n\n![symbol](./multitask_benchmark/images/symbol.png)\n\n## Overview\n\nWe provide the implementation of the Principal Neighbourhood Aggregation (PNA) in PyTorch, DGL and PyTorch Geometric frameworks, along with scripts to generate and run the multitask benchmarks, scripts for running real-world benchmarks, a flexible PyTorch GNN framework and implementations of the other models used for comparison. The repository is organised as follows:\n\n- `models` contains:\n  - `pytorch` contains the various GNN models implemented in PyTorch:\n    - the implementation of the aggregators, the scalers and the PNA layer (`pna`)\n    - the flexible GNN framework that can be used with any type of graph convolutions (`gnn_framework.py`)\n    - implementations of the other GNN models used for comparison in the paper, namely GCN, GAT, GIN and MPNN\n  - `dgl` contains the PNA model implemented via the [DGL library](https://www.dgl.ai/): aggregators, scalers, and layer.\n  - `pytorch_geometric` contains the PNA model implemented via the [PyTorch Geometric library](https://pytorch-geometric.readthedocs.io/): aggregators, scalers, and layer.\n  - `layers.py` contains general NN layers used by the various models\n- `multi_task` contains various scripts to recreate the multi_task benchmark along with the files used to train the various models. 
In `multi_task/README.md` we detail the generation instructions and the tuned training hyperparameters.\n- `real_world` contains various scripts from [Benchmarking GNNs](https://github.com/graphdeeplearning/benchmarking-gnns) to download the real-world benchmarks and train the PNA on them. In `real_world/README.md` we provide the download instructions and the tuned training hyperparameters.\n\n![results](./multitask_benchmark/images/results.png)\n\n## Reference\n```\n@inproceedings{corso2020pna,\n title = {Principal Neighbourhood Aggregation for Graph Nets},\n author = {Corso, Gabriele and Cavalleri, Luca and Beaini, Dominique and Li\\`{o}, Pietro and Veli\\v{c}kovi\\'{c}, Petar},\n booktitle = {Advances in Neural Information Processing Systems},\n year = {2020}\n}\n```\n\n## License\nMIT\n\n\n## Acknowledgements\n\nThe authors would like to thank Saro Passaro for running some of the tests presented in this repository and \nGiorgos Bouritsas, Fabrizio Frasca, Leonardo Cotta, Zhanghao Wu, Zhanqiu Zhang and George Watkins for pointing out some issues with the code.\n"
  },
  {
    "path": "models/dgl/aggregators.py",
    "content": "import torch\n\nEPS = 1e-5\n\n\ndef aggregate_mean(h):\n    return torch.mean(h, dim=1)\n\n\ndef aggregate_max(h):\n    return torch.max(h, dim=1)[0]\n\n\ndef aggregate_min(h):\n    return torch.min(h, dim=1)[0]\n\n\ndef aggregate_std(h):\n    return torch.sqrt(aggregate_var(h) + EPS)\n\n\ndef aggregate_var(h):\n    h_mean_squares = torch.mean(h * h, dim=-2)\n    h_mean = torch.mean(h, dim=-2)\n    var = torch.relu(h_mean_squares - h_mean * h_mean)\n    return var\n\n\ndef aggregate_moment(h, n=3):\n    # for each node (E[(X-E[X])^n])^{1/n}\n    # EPS is added to the absolute value of expectation before taking the nth root for stability\n    h_mean = torch.mean(h, dim=1, keepdim=True)\n    h_n = torch.mean(torch.pow(h - h_mean, n))\n    rooted_h_n = torch.sign(h_n) * torch.pow(torch.abs(h_n) + EPS, 1. / n)\n    return rooted_h_n\n\n\ndef aggregate_moment_3(h):\n    return aggregate_moment(h, n=3)\n\n\ndef aggregate_moment_4(h):\n    return aggregate_moment(h, n=4)\n\n\ndef aggregate_moment_5(h):\n    return aggregate_moment(h, n=5)\n\n\ndef aggregate_sum(h):\n    return torch.sum(h, dim=1)\n\n\nAGGREGATORS = {'mean': aggregate_mean, 'sum': aggregate_sum, 'max': aggregate_max, 'min': aggregate_min,\n               'std': aggregate_std, 'var': aggregate_var, 'moment3': aggregate_moment_3, 'moment4': aggregate_moment_4,\n               'moment5': aggregate_moment_5}\n"
  },
  {
    "path": "models/dgl/pna_layer.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport dgl.function as fn\n\nfrom .aggregators import AGGREGATORS\nfrom models.layers import MLP, FCLayer\nfrom .scalers import SCALERS\n\n\"\"\"\n    PNA: Principal Neighbourhood Aggregation \n    Gabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Lio, Petar Velickovic\n    https://arxiv.org/abs/2004.05718\n\"\"\"\n\n\nclass PNATower(nn.Module):\n    def __init__(self, in_dim, out_dim, dropout, graph_norm, batch_norm, aggregators, scalers, avg_d,\n                 pretrans_layers, posttrans_layers, edge_features, edge_dim):\n        super().__init__()\n        self.dropout = dropout\n        self.graph_norm = graph_norm\n        self.batch_norm = batch_norm\n        self.edge_features = edge_features\n\n        self.batchnorm_h = nn.BatchNorm1d(out_dim)\n        self.aggregators = aggregators\n        self.scalers = scalers\n        self.pretrans = MLP(in_size=2 * in_dim + (edge_dim if edge_features else 0), hidden_size=in_dim,\n                            out_size=in_dim, layers=pretrans_layers, mid_activation='relu', last_activation='none')\n        self.posttrans = MLP(in_size=(len(aggregators) * len(scalers) + 1) * in_dim, hidden_size=out_dim,\n                             out_size=out_dim, layers=posttrans_layers, mid_activation='relu', last_activation='none')\n        self.avg_d = avg_d\n\n    def pretrans_edges(self, edges):\n        if self.edge_features:\n            z2 = torch.cat([edges.src['h'], edges.dst['h'], edges.data['ef']], dim=1)\n        else:\n            z2 = torch.cat([edges.src['h'], edges.dst['h']], dim=1)\n        return {'e': self.pretrans(z2)}\n\n    def message_func(self, edges):\n        return {'e': edges.data['e']}\n\n    def reduce_func(self, nodes):\n        h = nodes.mailbox['e']\n        D = h.shape[-2]\n        h = torch.cat([aggregate(h) for aggregate in self.aggregators], dim=1)\n        h = torch.cat([scale(h, D=D, avg_d=self.avg_d) 
for scale in self.scalers], dim=1)\n        return {'h': h}\n\n    def posttrans_nodes(self, nodes):\n        return self.posttrans(nodes.data['h'])\n\n    def forward(self, g, h, e, snorm_n):\n        g.ndata['h'] = h\n        if self.edge_features: # add the edges information only if edge_features = True\n            g.edata['ef'] = e\n\n        # pretransformation\n        g.apply_edges(self.pretrans_edges)\n\n        # aggregation\n        g.update_all(self.message_func, self.reduce_func)\n        h = torch.cat([h, g.ndata['h']], dim=1)\n\n        # posttransformation\n        h = self.posttrans(h)\n\n        # graph and batch normalization\n        if self.graph_norm:\n            h = h * snorm_n\n        if self.batch_norm:\n            h = self.batchnorm_h(h)\n        h = F.dropout(h, self.dropout, training=self.training)\n        return h\n\n\nclass PNALayer(nn.Module):\n\n    def __init__(self, in_dim, out_dim, aggregators, scalers, avg_d, dropout, graph_norm, batch_norm, towers=1,\n                 pretrans_layers=1, posttrans_layers=1, divide_input=True, residual=False, edge_features=False,\n                 edge_dim=0):\n        \"\"\"\n        :param in_dim:              size of the input per node\n        :param out_dim:             size of the output per node\n        :param aggregators:         set of aggregation function identifiers\n        :param scalers:             set of scaling functions identifiers\n        :param avg_d:               average degree of nodes in the training set, used by scalers to normalize\n        :param dropout:             dropout used\n        :param graph_norm:          whether to use graph normalisation\n        :param batch_norm:          whether to use batch normalisation\n        :param towers:              number of towers to use\n        :param pretrans_layers:     number of layers in the transformation before the aggregation\n        :param posttrans_layers:    number of layers in the transformation after the 
aggregation\n        :param divide_input:        whether the input features should be split between towers or not\n        :param residual:            whether to add a residual connection\n        :param edge_features:       whether to use the edge features\n        :param edge_dim:            size of the edge features\n        \"\"\"\n        super().__init__()\n        assert ((not divide_input) or in_dim % towers == 0), \"if divide_input is set the number of towers has to divide in_dim\"\n        assert (out_dim % towers == 0), \"the number of towers has to divide the out_dim\"\n        assert avg_d is not None\n\n        # retrieve the aggregators and scalers functions\n        aggregators = [AGGREGATORS[aggr] for aggr in aggregators.split()]\n        scalers = [SCALERS[scale] for scale in scalers.split()]\n\n        self.divide_input = divide_input\n        self.input_tower = in_dim // towers if divide_input else in_dim\n        self.output_tower = out_dim // towers\n        self.in_dim = in_dim\n        self.out_dim = out_dim\n        self.edge_features = edge_features\n        self.residual = residual\n        if in_dim != out_dim:\n            self.residual = False\n\n        # convolution\n        self.towers = nn.ModuleList()\n        for _ in range(towers):\n            self.towers.append(PNATower(in_dim=self.input_tower, out_dim=self.output_tower, aggregators=aggregators,\n                                        scalers=scalers, avg_d=avg_d, pretrans_layers=pretrans_layers,\n                                        posttrans_layers=posttrans_layers, batch_norm=batch_norm, dropout=dropout,\n                                        graph_norm=graph_norm, edge_features=edge_features, edge_dim=edge_dim))\n        # mixing network\n        self.mixing_network = FCLayer(out_dim, out_dim, activation='LeakyReLU')\n\n    def forward(self, g, h, e, snorm_n):\n        h_in = h  # for residual connection\n\n        if self.divide_input:\n            h_cat = 
torch.cat(\n                [tower(g, h[:, n_tower * self.input_tower: (n_tower + 1) * self.input_tower],\n                       e, snorm_n)\n                 for n_tower, tower in enumerate(self.towers)], dim=1)\n        else:\n            h_cat = torch.cat([tower(g, h, e, snorm_n) for tower in self.towers], dim=1)\n\n        h_out = self.mixing_network(h_cat)\n\n        if self.residual:\n            h_out = h_in + h_out  # residual connection\n        return h_out\n\n    def __repr__(self):\n        return '{}(in_channels={}, out_channels={})'.format(self.__class__.__name__, self.in_dim, self.out_dim)\n\n\nclass PNASimpleLayer(nn.Module):\n\n    def __init__(self, in_dim, out_dim, aggregators, scalers, avg_d, dropout, batch_norm, residual,\n                posttrans_layers=1):\n        \"\"\"\n        A simpler version of PNA layer that simply aggregates the neighbourhood (similar to GCN and GIN),\n        without using the pretransformation or the tower mechanisms of the MPNN. It does not support edge features.\n\n        :param in_dim:              size of the input per node\n        :param out_dim:             size of the output per node\n        :param aggregators:         set of aggregation function identifiers\n        :param scalers:             set of scaling functions identifiers\n        :param avg_d:               average degree of nodes in the training set, used by scalers to normalize\n        :param dropout:             dropout used\n        :param batch_norm:          whether to use batch normalisation\n        :param posttrans_layers:    number of layers in the transformation after the aggregation\n        \"\"\"\n        super().__init__()\n\n        # retrieve the aggregators and scalers functions\n        aggregators = [AGGREGATORS[aggr] for aggr in aggregators.split()]\n        scalers = [SCALERS[scale] for scale in scalers.split()]\n\n        self.aggregators = aggregators\n        self.scalers = scalers\n        self.in_dim = in_dim\n      
  self.out_dim = out_dim\n        self.dropout = dropout\n        self.batch_norm = batch_norm\n        self.residual = residual\n\n        self.batchnorm_h = nn.BatchNorm1d(out_dim)\n        self.posttrans = MLP(in_size=(len(aggregators) * len(scalers)) * in_dim, hidden_size=out_dim,\n                             out_size=out_dim, layers=posttrans_layers, mid_activation='relu',\n                             last_activation='none')\n        self.avg_d = avg_d\n\n\n    def reduce_func(self, nodes):\n        h = nodes.mailbox['m']\n        D = h.shape[-2]\n        h = torch.cat([aggregate(h) for aggregate in self.aggregators], dim=1)\n        h = torch.cat([scale(h, D=D, avg_d=self.avg_d) for scale in self.scalers], dim=1)\n        return {'h': h}\n\n\n    def forward(self, g, h):\n        h_in = h\n        g.ndata['h'] = h\n\n        # aggregation\n        g.update_all(fn.copy_u('h', 'm'), self.reduce_func)\n        h = g.ndata['h']\n\n        # posttransformation\n        h = self.posttrans(h)\n\n        # batch normalization and residual\n        if self.batch_norm:\n            h = self.batchnorm_h(h)\n        h = F.relu(h)\n        if self.residual:\n            h = h_in + h\n\n        h = F.dropout(h, self.dropout, training=self.training)\n        return h\n\n    def __repr__(self):\n        return '{}(in_channels={}, out_channels={})'.format(self.__class__.__name__, self.in_dim, self.out_dim)\n"
  },
  {
    "path": "models/dgl/scalers.py",
    "content": "import torch\nimport numpy as np\n\n\n# each scaler is a function that takes as input X (B x N x Din), adj (B x N x N) and\n# avg_d (dictionary containing averages over training set) and returns X_scaled (B x N x Din) as output\n\ndef scale_identity(h, D=None, avg_d=None):\n    return h\n\n\ndef scale_amplification(h, D, avg_d):\n    # log(D + 1) / d * h     where d is the average of the ``log(D + 1)`` in the training set\n    return h * (np.log(D + 1) / avg_d[\"log\"])\n\n\ndef scale_attenuation(h, D, avg_d):\n    # (log(D + 1))^-1 / d * X     where d is the average of the ``log(D + 1))^-1`` in the training set\n    return h * (avg_d[\"log\"] / np.log(D + 1))\n\n\nSCALERS = {'identity': scale_identity, 'amplification': scale_amplification, 'attenuation': scale_attenuation}\n"
  },
  {
    "path": "models/layers.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nSUPPORTED_ACTIVATION_MAP = {'ReLU', 'Sigmoid', 'Tanh', 'ELU', 'SELU', 'GLU', 'LeakyReLU', 'Softplus', 'None'}\n\n\ndef get_activation(activation):\n    \"\"\" returns the activation function represented by the input string \"\"\"\n    if activation and callable(activation):\n        # activation is already a function\n        return activation\n    # search in SUPPORTED_ACTIVATION_MAP a torch.nn.modules.activation\n    activation = [x for x in SUPPORTED_ACTIVATION_MAP if activation.lower() == x.lower()]\n    assert len(activation) == 1 and isinstance(activation[0], str), 'Unhandled activation function'\n    activation = activation[0]\n    if activation.lower() == 'none':\n        return None\n    return vars(torch.nn.modules.activation)[activation]()\n\n\nclass Set2Set(torch.nn.Module):\n    r\"\"\"\n    Set2Set global pooling operator from the `\"Order Matters: Sequence to sequence for sets\"\n    <https://arxiv.org/abs/1511.06391>`_ paper. This pooling layer performs the following operation\n\n    .. math::\n        \\mathbf{q}_t &= \\mathrm{LSTM}(\\mathbf{q}^{*}_{t-1})\n\n        \\alpha_{i,t} &= \\mathrm{softmax}(\\mathbf{x}_i \\cdot \\mathbf{q}_t)\n\n        \\mathbf{r}_t &= \\sum_{i=1}^N \\alpha_{i,t} \\mathbf{x}_i\n\n        \\mathbf{q}^{*}_t &= \\mathbf{q}_t \\, \\Vert \\, \\mathbf{r}_t,\n\n    where :math:`\\mathbf{q}^{*}_T` defines the output of the layer with twice\n    the dimensionality as the input.\n\n    Arguments\n    ---------\n        input_dim: int\n            Size of each input sample.\n        hidden_dim: int, optional\n            the dim of set representation which corresponds to the input dim of the LSTM in Set2Set.\n            This is typically the sum of the input dim and the lstm output dim. If not provided, it will be set to :obj:`input_dim*2`\n        steps: int, optional\n            Number of iterations :math:`T`. 
If not provided, the number of nodes will be used.\n        num_layers : int, optional\n            Number of recurrent layers (e.g., :obj:`num_layers=2` would mean stacking two LSTMs together)\n            (Default, value = 1)\n    \"\"\"\n\n    def __init__(self, nin, nhid=None, steps=None, num_layers=1, activation=None, device='cpu'):\n        super(Set2Set, self).__init__()\n        self.steps = steps\n        self.nin = nin\n        self.nhid = nin * 2 if nhid is None else nhid\n        if self.nhid <= self.nin:\n            raise ValueError('Set2Set hidden_dim should be larger than input_dim')\n        # the hidden is a concatenation of weighted sum of embedding and LSTM output\n        self.lstm_output_dim = self.nhid - self.nin\n        self.num_layers = num_layers\n        self.lstm = nn.LSTM(self.nhid, self.nin, num_layers=num_layers, batch_first=True).to(device)\n        self.softmax = nn.Softmax(dim=1)\n\n    def forward(self, x):\n        r\"\"\"\n        Applies the pooling on input tensor x\n\n        Arguments\n        ----------\n            x: torch.FloatTensor\n                Input tensor of size (B, N, D)\n\n        Returns\n        -------\n            x: `torch.FloatTensor`\n                Tensor resulting from the  set2set pooling operation.\n        \"\"\"\n\n        batch_size = x.shape[0]\n        n = self.steps or x.shape[1]\n\n        h = (x.new_zeros((self.num_layers, batch_size, self.nin)),\n             x.new_zeros((self.num_layers, batch_size, self.nin)))\n\n        q_star = x.new_zeros(batch_size, 1, self.nhid)\n\n        for i in range(n):\n            # q: batch_size x 1 x input_dim\n            q, h = self.lstm(q_star, h)\n            # e: batch_size x n x 1\n            e = torch.matmul(x, torch.transpose(q, 1, 2))\n            a = self.softmax(e)\n            r = torch.sum(a * x, dim=1, keepdim=True)\n            q_star = torch.cat([q, r], dim=-1)\n\n        return torch.squeeze(q_star, dim=1)\n\n\nclass FCLayer(nn.Module):\n 
   r\"\"\"\n    A simple fully connected and customizable layer. This layer is centered around a torch.nn.Linear module.\n    The order in which transformations are applied is:\n\n    #. Dense Layer\n    #. Activation\n    #. Dropout (if applicable)\n    #. Batch Normalization (if applicable)\n\n    Arguments\n    ----------\n        in_size: int\n            Input dimension of the layer (the torch.nn.Linear)\n        out_size: int\n            Output dimension of the layer.\n        dropout: float, optional\n            The ratio of units to dropout. No dropout by default.\n            (Default value = 0.)\n        activation: str or callable, optional\n            Activation function to use.\n            (Default value = relu)\n        b_norm: bool, optional\n            Whether to use batch normalization\n            (Default value = False)\n        bias: bool, optional\n            Whether to enable bias in for the linear layer.\n            (Default value = True)\n        init_fn: callable, optional\n            Initialization function to use for the weight of the layer. 
Default is\n            :math:`\\mathcal{U}(-\\sqrt{k}, \\sqrt{k})` with :math:`k=\\frac{1}{ \\text{in_size}}`\n            (Default value = None)\n\n    Attributes\n    ----------\n        dropout: int\n            The ratio of units to dropout.\n        b_norm: int\n            Whether to use batch normalization\n        linear: torch.nn.Linear\n            The linear layer\n        activation: the torch.nn.Module\n            The activation layer\n        init_fn: function\n            Initialization function used for the weight of the layer\n        in_size: int\n            Input dimension of the linear layer\n        out_size: int\n            Output dimension of the linear layer\n    \"\"\"\n\n    def __init__(self, in_size, out_size, activation='relu', dropout=0., b_norm=False, bias=True, init_fn=None,\n                 device='cpu'):\n        super(FCLayer, self).__init__()\n\n        self.__params = locals()\n        del self.__params['__class__']\n        del self.__params['self']\n        self.in_size = in_size\n        self.out_size = out_size\n        self.bias = bias\n        self.linear = nn.Linear(in_size, out_size, bias=bias).to(device)\n        self.dropout = None\n        self.b_norm = None\n        if dropout:\n            self.dropout = nn.Dropout(p=dropout)\n        if b_norm:\n            self.b_norm = nn.BatchNorm1d(out_size).to(device)\n        self.activation = get_activation(activation)\n        self.init_fn = nn.init.xavier_uniform_\n\n        self.reset_parameters()\n\n    def reset_parameters(self, init_fn=None):\n        init_fn = init_fn or self.init_fn\n        if init_fn is not None:\n            init_fn(self.linear.weight, 1 / self.in_size)\n        if self.bias:\n            self.linear.bias.data.zero_()\n\n    def forward(self, x):\n        h = self.linear(x)\n        if self.activation is not None:\n            h = self.activation(h)\n        if self.dropout is not None:\n            h = self.dropout(h)\n        if self.b_norm 
is not None:\n            if h.shape[1] != self.out_size:\n                h = self.b_norm(h.transpose(1, 2)).transpose(1, 2)\n            else:\n                h = self.b_norm(h)\n        return h\n\n    def __repr__(self):\n        return self.__class__.__name__ + ' (' \\\n               + str(self.in_size) + ' -> ' \\\n               + str(self.out_size) + ')'\n\n\nclass MLP(nn.Module):\n    \"\"\"\n        Simple multi-layer perceptron, built of a series of FCLayers\n    \"\"\"\n\n    def __init__(self, in_size, hidden_size, out_size, layers, mid_activation='relu', last_activation='none',\n                 dropout=0., mid_b_norm=False, last_b_norm=False, device='cpu'):\n        super(MLP, self).__init__()\n\n        self.in_size = in_size\n        self.hidden_size = hidden_size\n        self.out_size = out_size\n\n        self.fully_connected = nn.ModuleList()\n        if layers <= 1:\n            self.fully_connected.append(FCLayer(in_size, out_size, activation=last_activation, b_norm=last_b_norm,\n                                                device=device, dropout=dropout))\n        else:\n            self.fully_connected.append(FCLayer(in_size, hidden_size, activation=mid_activation, b_norm=mid_b_norm,\n                                                device=device, dropout=dropout))\n            for _ in range(layers - 2):\n                self.fully_connected.append(FCLayer(hidden_size, hidden_size, activation=mid_activation,\n                                                    b_norm=mid_b_norm, device=device, dropout=dropout))\n            self.fully_connected.append(FCLayer(hidden_size, out_size, activation=last_activation, b_norm=last_b_norm,\n                                                device=device, dropout=dropout))\n\n    def forward(self, x):\n        for fc in self.fully_connected:\n            x = fc(x)\n        return x\n\n    def __repr__(self):\n        return self.__class__.__name__ + ' (' \\\n               + str(self.in_size) + ' -> 
' \\\n               + str(self.out_size) + ')'\n\n\nclass GRU(nn.Module):\n    \"\"\"\n        Wrapper class for the GRU used by the GNN framework, nn.GRU is used for the Gated Recurrent Unit itself\n    \"\"\"\n\n    def __init__(self, input_size, hidden_size, device):\n        super(GRU, self).__init__()\n        self.input_size = input_size\n        self.hidden_size = hidden_size\n        self.gru = nn.GRU(input_size=input_size, hidden_size=hidden_size).to(device)\n\n    def forward(self, x, y):\n        \"\"\"\n        :param x:   shape: (B, N, Din) where Din <= input_size (difference is padded)\n        :param y:   shape: (B, N, Dh) where Dh <= hidden_size (difference is padded)\n        :return:    shape: (B, N, Dh)\n        \"\"\"\n        assert (x.shape[-1] <= self.input_size and y.shape[-1] <= self.hidden_size)\n\n        (B, N, _) = x.shape\n        x = x.reshape(1, B * N, -1).contiguous()\n        y = y.reshape(1, B * N, -1).contiguous()\n\n        # padding if necessary\n        if x.shape[-1] < self.input_size:\n            x = F.pad(input=x, pad=[0, self.input_size - x.shape[-1]], mode='constant', value=0)\n        if y.shape[-1] < self.hidden_size:\n            y = F.pad(input=y, pad=[0, self.hidden_size - y.shape[-1]], mode='constant', value=0)\n\n        x = self.gru(x, y)[1]\n        x = x.reshape(B, N, -1)\n        return x\n\n\nclass S2SReadout(nn.Module):\n    \"\"\"\n        Performs a Set2Set aggregation of all the graph nodes' features followed by a series of fully connected layers\n    \"\"\"\n\n    def __init__(self, in_size, hidden_size, out_size, fc_layers=3, device='cpu', final_activation='relu'):\n        super(S2SReadout, self).__init__()\n\n        # set2set aggregation\n        self.set2set = Set2Set(in_size, device=device)\n\n        # fully connected layers\n        self.mlp = MLP(in_size=2 * in_size, hidden_size=hidden_size, out_size=out_size, layers=fc_layers,\n                       mid_activation=\"relu\", 
last_activation=final_activation, mid_b_norm=True, last_b_norm=False,\n                       device=device)\n\n    def forward(self, x):\n        x = self.set2set(x)\n        return self.mlp(x)\n"
  },
  {
    "path": "models/pytorch/gat/layer.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n\nclass GATHead(nn.Module):\n\n    def __init__(self, in_features, out_features, alpha, activation=True, device='cpu'):\n        super(GATHead, self).__init__()\n        self.in_features = in_features\n        self.out_features = out_features\n        self.activation = activation\n\n        self.W = nn.Parameter(torch.zeros(size=(in_features, out_features), device=device))\n        self.a = nn.Parameter(torch.zeros(size=(2 * out_features, 1), device=device))\n        self.leakyrelu = nn.LeakyReLU(alpha)\n\n        self.reset_parameters()\n\n    def reset_parameters(self):\n        nn.init.xavier_uniform_(self.W.data, gain=0.1414)\n        nn.init.xavier_uniform_(self.a.data, gain=0.1414)\n\n    def forward(self, input, adj):\n\n        h = torch.matmul(input, self.W)\n        (B, N, _) = adj.shape\n        a_input = torch.cat([h.repeat(1, 1, N).view(B, N * N, -1), h.repeat(1, N, 1)], dim=1)\\\n            .view(B, N, -1, 2 * self.out_features)\n        e = self.leakyrelu(torch.matmul(a_input, self.a).squeeze(3))\n\n        zero_vec = -9e15 * torch.ones_like(e)\n\n        attention = torch.where(adj > 0, e, zero_vec)\n        attention = F.softmax(attention, dim=1)\n        h_prime = torch.matmul(attention, h)\n\n        if self.activation:\n            return F.elu(h_prime)\n        else:\n            return h_prime\n\n    def __repr__(self):\n        return self.__class__.__name__ + ' (' + str(self.in_features) + ' -> ' + str(self.out_features) + ')'\n\n\nclass GATLayer(nn.Module):\n    \"\"\"\n        Graph Attention Layer, GAT paper at https://arxiv.org/abs/1710.10903\n        Implementation inspired by https://github.com/Diego999/pyGAT\n    \"\"\"\n\n    def __init__(self, in_features, out_features, alpha, nheads=1, activation=True, device='cpu'):\n        \"\"\"\n        :param in_features:     size of the input per node\n        :param out_features:    size of the output 
per node\n        :param alpha:           slope of the leaky relu\n        :param nheads:          number of attention heads\n        :param activation:      whether to apply a non-linearity\n        :param device:          device used for computation\n        \"\"\"\n        super(GATLayer, self).__init__()\n        assert (out_features % nheads == 0)\n\n        self.in_features = in_features  # stored for __repr__\n        self.out_features = out_features\n        self.input_head = in_features\n        self.output_head = out_features // nheads\n\n        self.heads = nn.ModuleList()\n        for _ in range(nheads):\n            self.heads.append(GATHead(in_features=self.input_head, out_features=self.output_head, alpha=alpha,\n                                      activation=activation, device=device))\n\n    def forward(self, input, adj):\n        y = torch.cat([head(input, adj) for head in self.heads], dim=2)\n        return y\n\n    def __repr__(self):\n        return self.__class__.__name__ + ' (' \\\n               + str(self.in_features) + ' -> ' \\\n               + str(self.out_features) + ')'\n"
  },
  {
    "path": "models/pytorch/gcn/layer.py",
    "content": "import math\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n\nclass GCNLayer(nn.Module):\n    \"\"\"\n        GCN layer, similar to https://arxiv.org/abs/1609.02907\n        Implementation inspired by https://github.com/tkipf/pygcn\n    \"\"\"\n\n    def __init__(self, in_features, out_features, bias=True, device='cpu'):\n        \"\"\"\n        :param in_features:     size of the input per node\n        :param out_features:    size of the output per node\n        :param bias:            whether to add a learnable bias before the activation\n        :param device:          device used for computation\n        \"\"\"\n        super(GCNLayer, self).__init__()\n        self.in_features = in_features\n        self.out_features = out_features\n        self.device = device\n        self.W = nn.Parameter(torch.zeros(size=(in_features, out_features), device=device))\n        if bias:\n            self.b = nn.Parameter(torch.zeros(out_features, device=device))\n        else:\n            self.register_parameter('b', None)\n        self.reset_parameters()\n\n    def reset_parameters(self):\n        stdv = 1. 
/ math.sqrt(self.W.size(1))\n        self.W.data.uniform_(-stdv, stdv)\n        if self.b is not None:\n            self.b.data.uniform_(-stdv, stdv)\n\n    def forward(self, X, adj):\n        (B, N, _) = adj.shape\n\n        # linear transformation\n        XW = torch.matmul(X, self.W)\n\n        # normalised mean aggregation\n        adj = adj + torch.eye(N, device=self.device).unsqueeze(0)\n        rD = torch.mul(torch.pow(torch.sum(adj, -1, keepdim=True), -0.5),\n                       torch.eye(N, device=self.device).unsqueeze(0))  # D^{-1/2}\n        adj = torch.matmul(torch.matmul(rD, adj), rD)  # D^{-1/2} A' D^{-1/2}\n        y = torch.bmm(adj, XW)\n\n        if self.b is not None:\n            y = y + self.b\n        return F.leaky_relu(y)\n\n    def __repr__(self):\n        return self.__class__.__name__ + ' (' \\\n               + str(self.in_features) + ' -> ' \\\n               + str(self.out_features) + ')'\n"
  },
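The symmetric normalisation in `GCNLayer.forward` can be traced on a toy graph. This is a minimal sketch repeating the same tensor operations outside the class; the 3-node path graph is invented for illustration:

```python
import torch

# toy 3-node path graph 0 - 1 - 2 (hypothetical data), batch size 1
adj = torch.tensor([[[0., 1., 0.],
                     [1., 0., 1.],
                     [0., 1., 0.]]])
(B, N, _) = adj.shape

adj = adj + torch.eye(N).unsqueeze(0)           # A' = A + I: add self-loops
rD = torch.mul(torch.pow(torch.sum(adj, -1, keepdim=True), -0.5),
               torch.eye(N).unsqueeze(0))       # diagonal matrix D^{-1/2}
norm = torch.matmul(torch.matmul(rD, adj), rD)  # D^{-1/2} A' D^{-1/2}
```

The normalised adjacency is symmetric, and the diagonal entry for node 0 (degree 2 after the self-loop) is 1/2.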
  {
    "path": "models/pytorch/gin/layer.py",
    "content": "import torch\nimport torch.nn as nn\nfrom models.layers import MLP\n\n\nclass GINLayer(nn.Module):\n    \"\"\"\n        Graph Isomorphism Network layer, similar to https://arxiv.org/abs/1810.00826\n    \"\"\"\n\n    def __init__(self, in_features, out_features, fc_layers=2, device='cpu'):\n        \"\"\"\n        :param in_features:     size of the input per node\n        :param out_features:    size of the output per node\n        :param fc_layers:       number of fully connected layers after the sum aggregator\n        :param device:          device used for computation\n        \"\"\"\n        super(GINLayer, self).__init__()\n\n        self.device = device\n        self.in_features = in_features\n        self.out_features = out_features\n        self.epsilon = nn.Parameter(torch.zeros(size=(1,), device=device))\n        self.post_transformation = MLP(in_size=in_features, hidden_size=max(in_features, out_features),\n                                       out_size=out_features, layers=fc_layers, mid_activation='relu',\n                                       last_activation='relu', mid_b_norm=True, last_b_norm=False, device=device)\n        self.reset_parameters()\n\n    def reset_parameters(self):\n        self.epsilon.data.fill_(0.1)\n\n    def forward(self, input, adj):\n        (B, N, _) = adj.shape\n\n        # sum aggregation\n        mod_adj = adj + torch.eye(N, device=self.device).unsqueeze(0) * (1 + self.epsilon)\n        support = torch.matmul(mod_adj, input)\n\n        # post-aggregation transformation\n        return self.post_transformation(support)\n\n    def __repr__(self):\n        return self.__class__.__name__ + ' (' \\\n               + str(self.in_features) + ' -> ' \\\n               + str(self.out_features) + ')'\n"
  },
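The `(A + (1 + eps) I) X` aggregation performed in `GINLayer.forward` can be checked by hand. A small sketch with `eps` fixed at the value `reset_parameters` uses; the 2-node graph and one-hot features are invented:

```python
import torch

eps = 0.1                           # learnable in GINLayer, fixed here
adj = torch.tensor([[[0., 1.],
                     [1., 0.]]])    # two connected nodes
x = torch.eye(2).unsqueeze(0)       # one-hot node features, shape (1, 2, 2)

N = adj.size(1)
mod_adj = adj + torch.eye(N).unsqueeze(0) * (1 + eps)
support = torch.matmul(mod_adj, x)  # (1 + eps) * x_i + sum over neighbours
```

Each node keeps `1 + eps` times its own features and adds the neighbour's, giving `[[1.1, 1.0], [1.0, 1.1]]`.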
  {
    "path": "models/pytorch/gnn_framework.py",
    "content": "import types\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom models.layers import GRU, S2SReadout, MLP\n\n\nclass GNN(nn.Module):\n    def __init__(self, nfeat, nhid, nodes_out, graph_out, dropout, conv_layers=2, fc_layers=3, first_conv_descr=None,\n                 middle_conv_descr=None, final_activation='LeakyReLU', skip=False, gru=False, fixed=False,\n                 variable=False, device='cpu'):\n        \"\"\"\n        :param nfeat:               number of input features per node\n        :param nhid:                number of hidden features per node\n        :param nodes_out:           number of nodes' labels\n        :param graph_out:           number of graph labels\n        :param dropout:             dropout value\n        :param conv_layers:         if variable, conv_layers should be a function : adj -> int, otherwise an int\n        :param fc_layers:           number of fully connected layers before the labels\n        :param first_conv_descr:    dict or SimpleNamespace: \"type\"-> type of layer, \"args\" -> dict of calling args\n        :param middle_conv_descr:   dict or SimpleNamespace : \"type\"-> type of layer, \"args\" -> dict of calling args\n        :param final_activation:    activation to be used on the last fc layer before the labels\n        :param skip:                whether to use skip connections feeding to the readout\n        :param gru:                 whether to use a shared GRU after each convolution\n        :param fixed:               whether to reuse the same middle convolutional layer multiple times\n        :param variable:            whether the number of convolutional layers is variable or fixed\n        :param device:              device used for computation\n        \"\"\"\n        super(GNN, self).__init__()\n        if variable:\n            assert callable(conv_layers), \"conv_layers should be a function from adjacency matrix to int\"\n            assert fixed, \"With a 
variable number of layers they must be fixed\"\n            assert not skip, \"cannot have skip connections with a variable number of layers\"\n        else:\n            assert type(conv_layers) == int, \"conv_layers should be an int\"\n            assert conv_layers > 0, \"conv_layers should be greater than 0\"\n\n        if type(first_conv_descr) == dict:\n            first_conv_descr = types.SimpleNamespace(**first_conv_descr)\n        assert type(first_conv_descr) == types.SimpleNamespace, \"first_conv_descr should be dict or SimpleNamespace\"\n        if type(first_conv_descr.args) == dict:\n            first_conv_descr.args = types.SimpleNamespace(**first_conv_descr.args)\n        assert type(first_conv_descr.args) == types.SimpleNamespace, \\\n            \"first_conv_descr.args should be either a dict or a SimpleNamespace\"\n\n        if type(middle_conv_descr) == dict:\n            middle_conv_descr = types.SimpleNamespace(**middle_conv_descr)\n        assert type(middle_conv_descr) == types.SimpleNamespace, \"middle_conv_descr should be dict or SimpleNamespace\"\n        if type(middle_conv_descr.args) == dict:\n            middle_conv_descr.args = types.SimpleNamespace(**middle_conv_descr.args)\n        assert type(middle_conv_descr.args) == types.SimpleNamespace, \\\n            \"middle_conv_descr.args should be either a dict or a SimpleNamespace\"\n\n        self.dropout = dropout\n        self.conv_layers = nn.ModuleList()\n        self.skip = skip\n        self.fixed = fixed\n        self.variable = variable\n        self.n_fixed_conv = conv_layers\n        self.gru = GRU(input_size=nhid, hidden_size=nhid, device=device) if gru else None\n\n        # first graph convolution\n        first_conv_descr.args.in_features = nfeat\n        first_conv_descr.args.out_features = nhid\n        first_conv_descr.args.device = device\n        self.conv_layers.append(first_conv_descr.layer_type(**vars(first_conv_descr.args)))\n\n        # middle graph convolutions\n
        middle_conv_descr.args.in_features = nhid\n        middle_conv_descr.args.out_features = nhid\n        middle_conv_descr.args.device = device\n        for _ in range(1 if fixed else conv_layers - 1):\n            self.conv_layers.append(\n                middle_conv_descr.layer_type(**vars(middle_conv_descr.args)))\n\n        n_conv_out = (nfeat + conv_layers * nhid) if skip else nhid\n\n        # nodes output: fully connected layers\n        self.nodes_read_out = MLP(in_size=n_conv_out, hidden_size=n_conv_out, out_size=nodes_out, layers=fc_layers,\n                                  mid_activation=\"LeakyReLU\", last_activation=final_activation, device=device)\n\n        # graph output: S2S readout\n        self.graph_read_out = S2SReadout(n_conv_out, n_conv_out, graph_out, fc_layers=fc_layers, device=device,\n                                         final_activation=final_activation)\n\n    def forward(self, x, adj):\n        # graph convolutions\n        skip_connections = [x] if self.skip else None\n\n        n_layers = self.n_fixed_conv(adj) if self.variable else self.n_fixed_conv\n        conv_layers = ([self.conv_layers[0]] + [self.conv_layers[1]] * (n_layers - 1)) if self.fixed else self.conv_layers\n\n        for layer, conv in enumerate(conv_layers):\n            y = conv(x, adj)\n            x = y if self.gru is None else self.gru(x, y)\n\n            if self.skip:\n                skip_connections.append(x)\n\n            # dropout at all layers but the last\n            if layer != n_layers - 1:\n                x = F.dropout(x, self.dropout, training=self.training)\n\n        if self.skip:\n            x = torch.cat(skip_connections, dim=2)\n\n        # readout output\n        return (self.nodes_read_out(x), self.graph_read_out(x))\n"
  },
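When `fixed=True`, `GNN.forward` reuses the single stored middle convolution, so the effective layer list is built by repetition and every middle application shares one parameter set. A sketch of that list construction with `nn.Linear` standing in for the graph convolutions (the names and sizes here are invented):

```python
import torch.nn as nn

first = nn.Linear(4, 8)    # stands in for the first graph convolution
middle = nn.Linear(8, 8)   # the single shared middle convolution
n_layers = 5               # e.g. the value conv_layers(adj) would return

# mirror of the list built in GNN.forward when self.fixed is True
conv_layers = [first] + [middle] * (n_layers - 1)
```

All entries after the first are the *same* module object, which is what makes a variable number of layers possible without a variable number of parameters.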
  {
    "path": "models/pytorch/pna/aggregators.py",
    "content": "import math\nimport torch\n\nEPS = 1e-5\n\n\n# each aggregator is a function taking as input X (B x N x N x Din), adj (B x N x N), self_loop and device and\n# returning the aggregated value of X (B x N x Din) for each dimension\n\ndef aggregate_identity(X, adj, self_loop=False, device='cpu'):\n    # Y is corresponds to the elements of the main diagonal of X\n    (_, N, N, _) = X.shape\n    Y = torch.sum(torch.mul(X, torch.eye(N).reshape(1, N, N, 1)), dim=2)\n    return Y\n\n\ndef aggregate_mean(X, adj, self_loop=False, device='cpu'):\n    # D^{-1} A * X    i.e. the mean of the neighbours\n\n    if self_loop:  # add self connections\n        (B, N, _) = adj.shape\n        adj = adj + torch.eye(N, device=device).unsqueeze(0)\n\n    D = torch.sum(adj, -1, keepdim=True)\n    X_sum = torch.sum(torch.mul(X, adj.unsqueeze(-1)), dim=2)\n    X_mean = torch.div(X_sum, D)\n    return X_mean\n\n\ndef aggregate_max(X, adj, min_value=-math.inf, self_loop=False, device='cpu'):\n    (B, N, N, Din) = X.shape\n\n    if self_loop:  # add self connections\n        adj = adj + torch.eye(N, device=device).unsqueeze(0)\n\n    adj = adj.unsqueeze(-1)  # adding extra dimension\n    M = torch.where(adj > 0.0, X, torch.tensor(min_value, device=device))\n    max = torch.max(M, -3)[0]\n    return max\n\n\ndef aggregate_min(X, adj, max_value=math.inf, self_loop=False, device='cpu'):\n    (B, N, N, Din) = X.shape\n\n    if self_loop:  # add self connections\n        adj = adj + torch.eye(N, device=device).unsqueeze(0)\n\n    adj = adj.unsqueeze(-1)  # adding extra dimension\n    M = torch.where(adj > 0.0, X, torch.tensor(max_value, device=device))\n    min = torch.min(M, -3)[0]\n    return min\n\n\ndef aggregate_std(X, adj, self_loop=False, device='cpu'):\n    # sqrt(relu(D^{-1} A X^2 - (D^{-1} A X)^2) + EPS)     i.e.  
the standard deviation of the features of the neighbours\n    # the EPS is added for the stability of the derivative of the square root\n    std = torch.sqrt(aggregate_var(X, adj, self_loop, device) + EPS)  # sqrt(mean_squares_X - mean_X^2)\n    return std\n\n\ndef aggregate_var(X, adj, self_loop=False, device='cpu'):\n    # relu(D^{-1} A X^2 - (D^{-1} A X)^2)     i.e.  the variance of the features of the neighbours\n\n    if self_loop:  # add self connections\n        (B, N, _) = adj.shape\n        adj = adj + torch.eye(N, device=device).unsqueeze(0)\n\n    D = torch.sum(adj, -1, keepdim=True)\n    X_sum_squares = torch.sum(torch.mul(torch.mul(X, X), adj.unsqueeze(-1)), dim=2)\n    X_mean_squares = torch.div(X_sum_squares, D)  # D^{-1} A X^2\n    X_mean = aggregate_mean(X, adj)  # D^{-1} A X\n    var = torch.relu(X_mean_squares - torch.mul(X_mean, X_mean))  # relu(mean_squares_X - mean_X^2)\n    return var\n\n\ndef aggregate_sum(X, adj, self_loop=False, device='cpu'):\n    # A * X    i.e. 
the mean of the neighbours\n\n    if self_loop:  # add self connections\n        (B, N, _) = adj.shape\n        adj = adj + torch.eye(N, device=device).unsqueeze(0)\n\n    X_sum = torch.sum(torch.mul(X, adj.unsqueeze(-1)), dim=2)\n    return X_sum\n\n\ndef aggregate_normalised_mean(X, adj, self_loop=False, device='cpu'):\n    # D^{-1/2] A D^{-1/2] X\n    (B, N, N, _) = X.shape\n\n    if self_loop:  # add self connections\n        adj = adj + torch.eye(N, device=device).unsqueeze(0)\n\n    rD = torch.mul(torch.pow(torch.sum(adj, -1, keepdim=True), -0.5), torch.eye(N, device=device)\n                   .unsqueeze(0).repeat(B, 1, 1))  # D^{-1/2]\n    adj = torch.matmul(torch.matmul(rD, adj), rD)  # D^{-1/2] A' D^{-1/2]\n\n    X_sum = torch.sum(torch.mul(X, adj.unsqueeze(-1)), dim=2)\n    return X_sum\n\n\ndef aggregate_softmax(X, adj, self_loop=False, device='cpu'):\n    # for each node sum_i(x_i*exp(x_i)/sum_j(exp(x_j)) where x_i and x_j vary over the neighbourhood of the node\n    (B, N, N, Din) = X.shape\n\n    if self_loop:  # add self connections\n        adj = adj + torch.eye(N, device=device).unsqueeze(0)\n\n    X_exp = torch.exp(X)\n    adj = adj.unsqueeze(-1)  # adding extra dimension\n    X_exp = torch.mul(X_exp, adj)\n    X_sum = torch.sum(X_exp, dim=2, keepdim=True)\n    softmax = torch.sum(torch.mul(torch.div(X_exp, X_sum), X), dim=2)\n    return softmax\n\n\ndef aggregate_softmin(X, adj, self_loop=False, device='cpu'):\n    # for each node sum_i(x_i*exp(-x_i)/sum_j(exp(-x_j)) where x_i and x_j vary over the neighbourhood of the node\n    return -aggregate_softmax(-X, adj, self_loop=self_loop, device=device)\n\n\ndef aggregate_moment(X, adj, self_loop=False, device='cpu', n=3):\n    # for each node (E[(X-E[X])^n])^{1/n}\n    # EPS is added to the absolute value of expectation before taking the nth root for stability\n\n    if self_loop:  # add self connections\n        (B, N, _) = adj.shape\n        adj = adj + torch.eye(N, device=device).unsqueeze(0)\n\n 
   D = torch.sum(adj, -1, keepdim=True)\n    X_mean = aggregate_mean(X, adj, self_loop=self_loop, device=device)\n    X_n = torch.div(torch.sum(torch.mul(torch.pow(X - X_mean.unsqueeze(2), n), adj.unsqueeze(-1)), dim=2), D)\n    rooted_X_n = torch.sign(X_n) * torch.pow(torch.abs(X_n) + EPS, 1. / n)\n    return rooted_X_n\n\n\ndef aggregate_moment_3(X, adj, self_loop=False, device='cpu'):\n    return aggregate_moment(X, adj, self_loop=self_loop, device=device, n=3)\n\n\ndef aggregate_moment_4(X, adj, self_loop=False, device='cpu'):\n    return aggregate_moment(X, adj, self_loop=self_loop, device=device, n=4)\n\n\ndef aggregate_moment_5(X, adj, self_loop=False, device='cpu'):\n    return aggregate_moment(X, adj, self_loop=self_loop, device=device, n=5)\n\n\nAGGREGATORS = {'mean': aggregate_mean, 'sum': aggregate_sum, 'max': aggregate_max, 'min': aggregate_min,\n               'identity': aggregate_identity, 'std': aggregate_std, 'var': aggregate_var,\n               'normalised_mean': aggregate_normalised_mean, 'softmax': aggregate_softmax, 'softmin': aggregate_softmin,\n               'moment3': aggregate_moment_3, 'moment4': aggregate_moment_4, 'moment5': aggregate_moment_5}\n"
  },
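The contract stated in the comment at the top of `aggregators.py` (X is `B x N x N x Din`, adj is `B x N x N`, output is `B x N x Din`) can be exercised with `aggregate_mean` on a toy graph. The function body is copied from above; the path graph and features are invented:

```python
import torch

def aggregate_mean(X, adj, self_loop=False, device='cpu'):
    # same body as in aggregators.py: masked sum over dim 2, divided by degree
    if self_loop:
        (B, N, _) = adj.shape
        adj = adj + torch.eye(N, device=device).unsqueeze(0)
    D = torch.sum(adj, -1, keepdim=True)
    X_sum = torch.sum(torch.mul(X, adj.unsqueeze(-1)), dim=2)
    return torch.div(X_sum, D)

adj = torch.tensor([[[0., 1., 0.],
                     [1., 0., 1.],
                     [0., 1., 0.]]])     # path graph 0 - 1 - 2
h = torch.tensor([[[1.], [2.], [4.]]])   # (B, N, Din) node features
X = h.unsqueeze(1).repeat(1, 3, 1, 1)    # message from j to i is simply h_j
out = aggregate_mean(X, adj)             # (B, N, Din)
```

Node 1 averages its neighbours 0 and 2: (1 + 4) / 2 = 2.5.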
  {
    "path": "models/pytorch/pna/layer.py",
    "content": "import torch\nimport torch.nn as nn\n\nfrom models.pytorch.pna.aggregators import AGGREGATORS\nfrom models.pytorch.pna.scalers import SCALERS\nfrom models.layers import FCLayer, MLP\n\n\nclass PNATower(nn.Module):\n    def __init__(self, in_features, out_features, aggregators, scalers, avg_d, self_loop, pretrans_layers,\n                 posttrans_layers, device):\n        \"\"\"\n        :param in_features:     size of the input per node of the tower\n        :param out_features:    size of the output per node of the tower\n        :param aggregators:     set of aggregation functions each taking as input X (B x N x N x Din), adj (B x N x N), self_loop and device\n        :param scalers:         set of scaling functions each taking as input X (B x N x Din), adj (B x N x N) and avg_d\n        \"\"\"\n        super(PNATower, self).__init__()\n\n        self.device = device\n        self.in_features = in_features\n        self.out_features = out_features\n        self.aggregators = aggregators\n        self.scalers = scalers\n        self.self_loop = self_loop\n        self.pretrans = MLP(in_size=2 * self.in_features, hidden_size=self.in_features, out_size=self.in_features,\n                            layers=pretrans_layers, mid_activation='relu', last_activation='none')\n        self.posttrans = MLP(in_size=(len(aggregators) * len(scalers) + 1) * self.in_features,\n                             hidden_size=self.out_features, out_size=self.out_features, layers=posttrans_layers,\n                             mid_activation='relu', last_activation='none')\n        self.avg_d = avg_d\n\n    def forward(self, input, adj):\n        (B, N, _) = adj.shape\n\n        # pre-aggregation transformation\n        h_i = input.unsqueeze(2).repeat(1, 1, N, 1)\n        h_j = input.unsqueeze(1).repeat(1, N, 1, 1)\n        h_cat = torch.cat([h_i, h_j], dim=3)\n        h_mod = self.pretrans(h_cat)\n\n        # aggregation\n        m = torch.cat([aggregate(h_mod, adj, 
self_loop=self.self_loop, device=self.device) for aggregate in self.aggregators], dim=2)\n        m = torch.cat([scale(m, adj, avg_d=self.avg_d) for scale in self.scalers], dim=2)\n\n        # post-aggregation transformation\n        m_cat = torch.cat([input, m], dim=2)\n        out = self.posttrans(m_cat)\n        return out\n\n    def __repr__(self):\n        return self.__class__.__name__ + ' (' \\\n               + str(self.in_features) + ' -> ' \\\n               + str(self.out_features) + ')'\n\n\nclass PNALayer(nn.Module):\n    \"\"\"\n        Implements a single convolutional layer of the Principal Neighbourhood Aggregation Networks\n        as described in https://arxiv.org/abs/2004.05718\n    \"\"\"\n\n    def __init__(self, in_features, out_features, aggregators, scalers, avg_d, towers=1, self_loop=False,\n                 pretrans_layers=1, posttrans_layers=1, divide_input=True, device='cpu'):\n        \"\"\"\n        :param in_features:     size of the input per node\n        :param out_features:    size of the output per node\n        :param aggregators:     set of aggregation function identifiers\n        :param scalers:         set of scaling functions identifiers\n        :param avg_d:           average degree of nodes in the training set, used by scalers to normalize\n        :param self_loop:       whether to add a self loop in the adjacency matrix when aggregating\n        :param pretrans_layers: number of layers in the transformation before the aggregation\n        :param posttrans_layers: number of layers in the transformation after the aggregation\n        :param divide_input:    whether the input features should be split between towers or not\n        :param device:          device used for computation\n        \"\"\"\n        super(PNALayer, self).__init__()\n        assert ((not divide_input) or in_features % towers == 0), \"if divide_input is set the number of towers has to divide in_features\"\n        assert (out_features % towers == 
0), \"the number of towers has to divide the out_features\"\n\n        # retrieve the aggregators and scalers functions\n        aggregators = [AGGREGATORS[aggr] for aggr in aggregators]\n        scalers = [SCALERS[scale] for scale in scalers]\n\n        self.divide_input = divide_input\n        self.input_tower = in_features // towers if divide_input else in_features\n        self.output_tower = out_features // towers\n\n        # convolution\n        self.towers = nn.ModuleList()\n        for _ in range(towers):\n            self.towers.append(\n                PNATower(in_features=self.input_tower, out_features=self.output_tower, aggregators=aggregators,\n                         scalers=scalers, avg_d=avg_d, self_loop=self_loop, pretrans_layers=pretrans_layers,\n                         posttrans_layers=posttrans_layers, device=device))\n        # mixing network\n        self.mixing_network = FCLayer(out_features, out_features, activation='LeakyReLU')\n\n    def forward(self, input, adj):\n        # convolution\n        if self.divide_input:\n            y = torch.cat(\n                [tower(input[:, :, n_tower * self.input_tower: (n_tower + 1) * self.input_tower], adj)\n                 for n_tower, tower in enumerate(self.towers)], dim=2)\n        else:\n            y = torch.cat([tower(input, adj) for tower in self.towers], dim=2)\n\n        # mixing network\n        return self.mixing_network(y)\n\n    def __repr__(self):\n        return self.__class__.__name__ + ' (' \\\n               + str(self.in_features) + ' -> ' \\\n               + str(self.out_features) + ')'\n"
  },
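The tower mechanism of `PNALayer` splits the feature dimension across towers, runs each tower on its own slice, concatenates the results and passes them through a mixing layer. A minimal sketch of that wiring with each tower reduced to a single `nn.Linear` (the class `ToyTowers` is hypothetical, not part of the repository):

```python
import torch
import torch.nn as nn

class ToyTowers(nn.Module):
    # the split / concat / mix wiring mirrors PNALayer.forward with
    # divide_input=True; the real towers are PNATower modules
    def __init__(self, in_features, out_features, towers):
        super().__init__()
        assert in_features % towers == 0 and out_features % towers == 0
        self.f_in, self.f_out = in_features // towers, out_features // towers
        self.towers = nn.ModuleList(nn.Linear(self.f_in, self.f_out)
                                    for _ in range(towers))
        self.mixing_network = nn.Linear(out_features, out_features)

    def forward(self, x):
        # each tower sees only its own slice of the feature dimension
        y = torch.cat([tower(x[..., i * self.f_in:(i + 1) * self.f_in])
                       for i, tower in enumerate(self.towers)], dim=-1)
        return self.mixing_network(y)

out = ToyTowers(8, 8, towers=4)(torch.randn(2, 5, 8))
```

The mixing layer lets information flow between towers once per layer, at a fraction of the parameter cost of a single wide tower.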
  {
    "path": "models/pytorch/pna/scalers.py",
    "content": "import torch\n\n\n# each scaler is a function that takes as input X (B x N x Din), adj (B x N x N) and\n# avg_d (dictionary containing averages over training set) and returns X_scaled (B x N x Din) as output\n\ndef scale_identity(X, adj, avg_d=None):\n    return X\n\n\ndef scale_amplification(X, adj, avg_d=None):\n    # log(D + 1) / d * X     where d is the average of the ``log(D + 1)`` in the training set\n    D = torch.sum(adj, -1)\n    scale = (torch.log(D + 1) / avg_d[\"log\"]).unsqueeze(-1)\n    X_scaled = torch.mul(scale, X)\n    return X_scaled\n\n\ndef scale_attenuation(X, adj, avg_d=None):\n    # (log(D + 1))^-1 / d * X     where d is the average of the ``log(D + 1))^-1`` in the training set\n    D = torch.sum(adj, -1)\n    scale = (avg_d[\"log\"] / torch.log(D + 1)).unsqueeze(-1)\n    X_scaled = torch.mul(scale, X)\n    return X_scaled\n\n\ndef scale_linear(X, adj, avg_d=None):\n    # d^{-1} D X     where d is the average degree in the training set\n    D = torch.sum(adj, -1, keepdim=True)\n    X_scaled = D * X / avg_d[\"lin\"]\n    return X_scaled\n\n\ndef scale_inverse_linear(X, adj, avg_d=None):\n    # d D^{-1} X     where d is the average degree in the training set\n    D = torch.sum(adj, -1, keepdim=True)\n    X_scaled = avg_d[\"lin\"] * X / D\n    return X_scaled\n\n\nSCALERS = {'identity': scale_identity, 'amplification': scale_amplification, 'attenuation': scale_attenuation,\n           'linear': scale_linear, 'inverse_linear': scale_inverse_linear}\n"
  },
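The degree scalers can be sanity-checked in isolation. Here `scale_amplification` is copied from above and applied to a toy star graph; the `avg_d["log"]` value is a hypothetical training-set average of `log(D + 1)`:

```python
import math
import torch

def scale_amplification(X, adj, avg_d=None):
    # same body as in scalers.py
    D = torch.sum(adj, -1)
    scale = (torch.log(D + 1) / avg_d["log"]).unsqueeze(-1)
    return torch.mul(scale, X)

adj = torch.tensor([[[0., 1., 1.],
                     [1., 0., 0.],
                     [1., 0., 0.]]])   # star: node 0 joined to nodes 1 and 2
X = torch.ones(1, 3, 2)
avg_d = {"log": math.log(2)}           # hypothetical training-set average
out = scale_amplification(X, adj, avg_d)
```

Node 0 (degree 2) is amplified by log(3)/log(2) ≈ 1.585, while nodes 1 and 2 (degree 1) are unchanged because log(2)/log(2) = 1.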
  {
    "path": "models/pytorch_geometric/aggregators.py",
    "content": "import torch\nfrom torch import Tensor\nfrom torch_scatter import scatter\nfrom typing import Optional\n\n# Implemented with the help of Matthias Fey, author of PyTorch Geometric\n# For an example see https://github.com/rusty1s/pytorch_geometric/blob/master/examples/pna.py\n\ndef aggregate_sum(src: Tensor, index: Tensor, dim_size: Optional[int]):\n    return scatter(src, index, 0, None, dim_size, reduce='sum')\n\n\ndef aggregate_mean(src: Tensor, index: Tensor, dim_size: Optional[int]):\n    return scatter(src, index, 0, None, dim_size, reduce='mean')\n\n\ndef aggregate_min(src: Tensor, index: Tensor, dim_size: Optional[int]):\n    return scatter(src, index, 0, None, dim_size, reduce='min')\n\n\ndef aggregate_max(src: Tensor, index: Tensor, dim_size: Optional[int]):\n    return scatter(src, index, 0, None, dim_size, reduce='max')\n\n\ndef aggregate_var(src, index, dim_size):\n    mean = aggregate_mean(src, index, dim_size)\n    mean_squares = aggregate_mean(src * src, index, dim_size)\n    return mean_squares - mean * mean\n\n\ndef aggregate_std(src, index, dim_size):\n    return torch.sqrt(torch.relu(aggregate_var(src, index, dim_size)) + 1e-5)\n\n\nAGGREGATORS = {\n    'sum': aggregate_sum,\n    'mean': aggregate_mean,\n    'min': aggregate_min,\n    'max': aggregate_max,\n    'var': aggregate_var,\n    'std': aggregate_std,\n}"
  },
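The scatter-based aggregators above reduce one message per edge into one output per target node, keyed by `index`. A dependency-free sketch of the same contract using `Tensor.index_add_` instead of `torch_scatter.scatter` (the toy messages are invented):

```python
import torch

def aggregate_sum(src, index, dim_size):
    # scatter-sum: accumulate each row of src into out[index[row]]
    out = torch.zeros(dim_size, src.size(-1))
    return out.index_add_(0, index, src)

def aggregate_mean(src, index, dim_size):
    # divide the scatter-sum by the number of messages per target node
    ones = torch.ones(index.size(0), 1)
    count = torch.zeros(dim_size, 1).index_add_(0, index, ones)
    return aggregate_sum(src, index, dim_size) / count.clamp(min=1)

src = torch.tensor([[2.], [4.], [6.]])  # one message per edge
index = torch.tensor([0, 0, 1])         # target node of each edge
out = aggregate_mean(src, index, dim_size=2)
```

Node 0 receives (2 + 4) / 2 = 3 and node 1 receives 6, matching what `scatter(..., reduce='mean')` would return.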
  {
    "path": "models/pytorch_geometric/example.py",
    "content": "import torch\nimport torch.nn.functional as F\nfrom torch.nn import ModuleList\nfrom torch.nn import Sequential, ReLU, Linear\nfrom torch.optim.lr_scheduler import ReduceLROnPlateau\nfrom torch_geometric.utils import degree\nfrom ogb.graphproppred import PygGraphPropPredDataset, Evaluator\nfrom ogb.graphproppred.mol_encoder import AtomEncoder\nfrom torch_geometric.data import DataLoader\nfrom torch_geometric.nn import BatchNorm, global_mean_pool\n\nfrom models.pytorch_geometric.pna import PNAConvSimple\n\ndataset = PygGraphPropPredDataset(name=\"ogbg-molhiv\")\n\nsplit_idx = dataset.get_idx_split()\ntrain_loader = DataLoader(dataset[split_idx[\"train\"]], batch_size=128, shuffle=True)\nval_loader = DataLoader(dataset[split_idx[\"valid\"]], batch_size=128, shuffle=False)\ntest_loader = DataLoader(dataset[split_idx[\"test\"]], batch_size=128, shuffle=False)\n\n# Compute in-degree histogram over training data.\ndeg = torch.zeros(10, dtype=torch.long)\nfor data in dataset[split_idx['train']]:\n    d = degree(data.edge_index[1], num_nodes=data.num_nodes, dtype=torch.long)\n    deg += torch.bincount(d, minlength=deg.numel())\n\nclass Net(torch.nn.Module):\n    def __init__(self):\n        super(Net, self).__init__()\n\n        self.node_emb = AtomEncoder(emb_dim=80)\n\n        aggregators = ['mean', 'min', 'max', 'std']\n        scalers = ['identity', 'amplification', 'attenuation']\n\n        self.convs = ModuleList()\n        self.batch_norms = ModuleList()\n        for _ in range(4):\n            conv = PNAConvSimple(in_channels=80, out_channels=80, aggregators=aggregators,\n                                 scalers=scalers, deg=deg, post_layers=1)\n            self.convs.append(conv)\n            self.batch_norms.append(BatchNorm(80))\n\n        self.mlp = Sequential(Linear(80, 40), ReLU(), Linear(40, 20), ReLU(), Linear(20, 1))\n\n    def forward(self, x, edge_index, edge_attr, batch):\n        x = self.node_emb(x)\n\n        for conv, batch_norm in 
zip(self.convs, self.batch_norms):\n            h = F.relu(batch_norm(conv(x, edge_index, edge_attr)))\n            x = h + x  # residual connection\n            x = F.dropout(x, 0.3, training=self.training)\n\n        x = global_mean_pool(x, batch)\n        return self.mlp(x)\n\n\ndevice = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\nmodel = Net().to(device)\noptimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=3e-6)\nscheduler = ReduceLROnPlateau(optimizer, mode='max', factor=0.5, patience=20, min_lr=0.0001)\n\n\ndef train(epoch):\n    model.train()\n\n    total_loss = 0\n    for data in train_loader:\n        data = data.to(device)\n        optimizer.zero_grad()\n        out = model(data.x, data.edge_index, None, data.batch)\n\n        loss = torch.nn.BCEWithLogitsLoss()(out.to(torch.float32), data.y.to(torch.float32))\n        loss.backward()\n        total_loss += loss.item() * data.num_graphs\n        optimizer.step()\n    return total_loss / len(train_loader.dataset)\n\n\n@torch.no_grad()\ndef test(loader):\n    model.eval()\n    evaluator = Evaluator(name='ogbg-molhiv')\n    list_pred = []\n    list_labels = []\n    for data in loader:\n        data = data.to(device)\n        out = model(data.x, data.edge_index, None, data.batch)\n        list_pred.append(out)\n        list_labels.append(data.y)\n    epoch_test_ROC = evaluator.eval({'y_pred': torch.cat(list_pred),\n                                     'y_true': torch.cat(list_labels)})['rocauc']\n    return epoch_test_ROC\n\n\nbest = (0, 0)\n\nfor epoch in range(1, 201):\n    loss = train(epoch)\n    val_roc = test(val_loader)\n    test_roc = test(test_loader)\n    scheduler.step(val_roc)\n    print(f'Epoch: {epoch:02d}, Loss: {loss:.4f}, Val: {val_roc:.4f}, '\n          f'Test: {test_roc:.4f}')\n    if val_roc > best[0]:\n        best = (val_roc, test_roc)\n\nprint(f'Best epoch val: {best[0]:.4f}, test: {best[1]:.4f}')\n"
  },
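The in-degree histogram that the example feeds to `PNAConvSimple` can be computed from an `edge_index` alone with two `bincount` calls; the tiny graph here is invented for illustration:

```python
import torch

edge_index = torch.tensor([[0, 1, 2, 2],
                           [1, 0, 0, 1]])  # row 1 holds the target nodes
num_nodes = 3

d = torch.bincount(edge_index[1], minlength=num_nodes)  # per-node in-degree
deg = torch.bincount(d, minlength=int(d.max()) + 1)     # histogram over degrees
```

The in-degrees are `[2, 2, 0]`, so the histogram is `[1, 0, 2]`: one node of degree 0 and two nodes of degree 2. This is the `deg` tensor the scalers' `avg_d` statistics are derived from.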
  {
    "path": "models/pytorch_geometric/pna.py",
    "content": "from typing import Optional, List, Dict\nfrom torch_geometric.typing import Adj, OptTensor\n\nimport torch\nfrom torch import Tensor\nfrom torch.nn import ModuleList, Sequential, Linear, ReLU\nfrom torch_geometric.nn.conv import MessagePassing\nfrom torch_geometric.nn.inits import reset\nfrom torch_geometric.utils import degree\n\nfrom models.pytorch_geometric.aggregators import AGGREGATORS\nfrom models.pytorch_geometric.scalers import SCALERS\n\n# Implemented with the help of Matthias Fey, author of PyTorch Geometric\n# For an example see https://github.com/rusty1s/pytorch_geometric/blob/master/examples/pna.py\n\nclass PNAConv(MessagePassing):\n    r\"\"\"The Principal Neighbourhood Aggregation graph convolution operator\n    from the `\"Principal Neighbourhood Aggregation for Graph Nets\"\n    <https://arxiv.org/abs/2004.05718>`_ paper\n        .. math::\n            \\bigoplus = \\underbrace{\\begin{bmatrix}I \\\\ S(D, \\alpha=1) \\\\\n            S(D, \\alpha=-1) \\end{bmatrix} }_{\\text{scalers}}\n            \\otimes \\underbrace{\\begin{bmatrix} \\mu \\\\ \\sigma \\\\ \\max \\\\ \\min\n            \\end{bmatrix}}_{\\text{aggregators}},\n        in:\n        .. 
math::\n            X_i^{(t+1)} = U \\left( X_i^{(t)}, \\underset{(j,i) \\in E}{\\bigoplus}\n            M \\left( X_i^{(t)}, X_j^{(t)} \\right) \\right)\n        where :math:`M` and :math:`U` denote the MLP referred to with pretrans\n        and posttrans respectively.\n        Args:\n            in_channels (int): Size of each input sample.\n            out_channels (int): Size of each output sample.\n            aggregators (list of str): Set of aggregation function identifiers,\n                namely :obj:`\"sum\"`, :obj:`\"mean\"`, :obj:`\"min\"`, :obj:`\"max\"`,\n                :obj:`\"var\"` and :obj:`\"std\"`.\n            scalers: (list of str): Set of scaling function identifiers, namely\n                :obj:`\"identity\"`, :obj:`\"amplification\"`,\n                :obj:`\"attenuation\"`, :obj:`\"linear\"` and\n                :obj:`\"inverse_linear\"`.\n            deg (Tensor): Histogram of in-degrees of nodes in the training set,\n                used by scalers to normalize.\n            edge_dim (int, optional): Edge feature dimensionality (in case\n                there are any). 
(default :obj:`None`)\n            towers (int, optional): Number of towers (default: :obj:`1`).\n            pre_layers (int, optional): Number of transformation layers before\n                aggregation (default: :obj:`1`).\n            post_layers (int, optional): Number of transformation layers after\n                aggregation (default: :obj:`1`).\n            divide_input (bool, optional): Whether the input features should\n                be split between towers or not (default: :obj:`False`).\n            **kwargs (optional): Additional arguments of\n                :class:`torch_geometric.nn.conv.MessagePassing`.\n        \"\"\"\n    def __init__(self, in_channels: int, out_channels: int,\n                 aggregators: List[str], scalers: List[str], deg: Tensor,\n                 edge_dim: Optional[int] = None, towers: int = 1,\n                 pre_layers: int = 1, post_layers: int = 1,\n                 divide_input: bool = False, **kwargs):\n\n        super(PNAConv, self).__init__(aggr=None, node_dim=0, **kwargs)\n\n        if divide_input:\n            assert in_channels % towers == 0\n        assert out_channels % towers == 0\n\n        self.in_channels = in_channels\n        self.out_channels = out_channels\n        self.aggregators = [AGGREGATORS[aggr] for aggr in aggregators]\n        self.scalers = [SCALERS[scale] for scale in scalers]\n        self.edge_dim = edge_dim\n        self.towers = towers\n        self.divide_input = divide_input\n\n        self.F_in = in_channels // towers if divide_input else in_channels\n        self.F_out = self.out_channels // towers\n\n        deg = deg.to(torch.float)\n        total_no_vertices = deg.sum()\n        bin_degrees = torch.arange(len(deg))\n        self.avg_deg: Dict[str, float] = {\n            'lin': ((bin_degrees * deg).sum() / total_no_vertices).item(),\n            'log': (((bin_degrees + 1).log() * deg).sum() / total_no_vertices).item(),\n            'exp': ((bin_degrees.exp() * deg).sum() / 
total_no_vertices).item(),\n        }\n\n        if self.edge_dim is not None:\n            self.edge_encoder = Linear(edge_dim, self.F_in)\n\n        self.pre_nns = ModuleList()\n        self.post_nns = ModuleList()\n        for _ in range(towers):\n            modules = [Linear((3 if edge_dim else 2) * self.F_in, self.F_in)]\n            for _ in range(pre_layers - 1):\n                modules += [ReLU()]\n                modules += [Linear(self.F_in, self.F_in)]\n            self.pre_nns.append(Sequential(*modules))\n\n            in_channels = (len(aggregators) * len(scalers) + 1) * self.F_in\n            modules = [Linear(in_channels, self.F_out)]\n            for _ in range(post_layers - 1):\n                modules += [ReLU()]\n                modules += [Linear(self.F_out, self.F_out)]\n            self.post_nns.append(Sequential(*modules))\n\n        self.lin = Linear(out_channels, out_channels)\n\n        self.reset_parameters()\n\n    def reset_parameters(self):\n        if self.edge_dim is not None:\n            self.edge_encoder.reset_parameters()\n        for nn in self.pre_nns:\n            reset(nn)\n        for nn in self.post_nns:\n            reset(nn)\n        self.lin.reset_parameters()\n\n    def forward(self, x: Tensor, edge_index: Adj,\n                edge_attr: OptTensor = None) -> Tensor:\n\n        if self.divide_input:\n            x = x.view(-1, self.towers, self.F_in)\n        else:\n            x = x.view(-1, 1, self.F_in).repeat(1, self.towers, 1)\n\n        # propagate_type: (x: Tensor, edge_attr: OptTensor)\n        out = self.propagate(edge_index, x=x, edge_attr=edge_attr, size=None)\n\n        out = torch.cat([x, out], dim=-1)\n        outs = [nn(out[:, i]) for i, nn in enumerate(self.post_nns)]\n        out = torch.cat(outs, dim=1)\n\n        return self.lin(out)\n\n    def message(self, x_i: Tensor, x_j: Tensor,\n                edge_attr: OptTensor) -> Tensor:\n\n        h: Tensor = x_i  # Dummy.\n        if edge_attr is not 
None:\n            edge_attr = self.edge_encoder(edge_attr)\n            edge_attr = edge_attr.view(-1, 1, self.F_in)\n            edge_attr = edge_attr.repeat(1, self.towers, 1)\n            h = torch.cat([x_i, x_j, edge_attr], dim=-1)\n        else:\n            h = torch.cat([x_i, x_j], dim=-1)\n\n        hs = [nn(h[:, i]) for i, nn in enumerate(self.pre_nns)]\n        return torch.stack(hs, dim=1)\n\n    def aggregate(self, inputs: Tensor, index: Tensor,\n                  dim_size: Optional[int] = None) -> Tensor:\n        outs = [aggr(inputs, index, dim_size) for aggr in self.aggregators]\n        out = torch.cat(outs, dim=-1)\n\n        deg = degree(index, dim_size, dtype=inputs.dtype).view(-1, 1, 1)\n        outs = [scaler(out, deg, self.avg_deg) for scaler in self.scalers]\n        return torch.cat(outs, dim=-1)\n\n    def __repr__(self):\n        return (f'{self.__class__.__name__}({self.in_channels}, '\n                f'{self.out_channels}, towers={self.towers}, '\n                f'edge_dim={self.edge_dim})')\n\n\nclass PNAConvSimple(MessagePassing):\n    r\"\"\"The Principal Neighbourhood Aggregation graph convolution operator\n    from the `\"Principal Neighbourhood Aggregation for Graph Nets\"\n    <https://arxiv.org/abs/2004.05718>`_ paper\n        .. math::\n            \\bigoplus = \\underbrace{\\begin{bmatrix}I \\\\ S(D, \\alpha=1) \\\\\n            S(D, \\alpha=-1) \\end{bmatrix} }_{\\text{scalers}}\n            \\otimes \\underbrace{\\begin{bmatrix} \\mu \\\\ \\sigma \\\\ \\max \\\\ \\min\n            \\end{bmatrix}}_{\\text{aggregators}},\n        in:\n        .. 
math::\n            X_i^{(t+1)} = U \\left( \\underset{(j,i) \\in E}{\\bigoplus}\n            M \\left(X_j^{(t)} \\right) \\right)\n        where :math:`U` denote the MLP referred to with posttrans.\n        Args:\n            in_channels (int): Size of each input sample.\n            out_channels (int): Size of each output sample.\n            aggregators (list of str): Set of aggregation function identifiers,\n                namely :obj:`\"sum\"`, :obj:`\"mean\"`, :obj:`\"min\"`, :obj:`\"max\"`,\n                :obj:`\"var\"` and :obj:`\"std\"`.\n            scalers: (list of str): Set of scaling function identifiers, namely\n                :obj:`\"identity\"`, :obj:`\"amplification\"`,\n                :obj:`\"attenuation\"`, :obj:`\"linear\"` and\n                :obj:`\"inverse_linear\"`.\n            deg (Tensor): Histogram of in-degrees of nodes in the training set,\n                used by scalers to normalize.\n            post_layers (int, optional): Number of transformation layers after\n                aggregation (default: :obj:`1`).\n            **kwargs (optional): Additional arguments of\n                :class:`torch_geometric.nn.conv.MessagePassing`.\n        \"\"\"\n    def __init__(self, in_channels: int, out_channels: int,\n                 aggregators: List[str], scalers: List[str], deg: Tensor,\n                 post_layers: int = 1, **kwargs):\n\n        super(PNAConvSimple, self).__init__(aggr=None, node_dim=0, **kwargs)\n\n        self.in_channels = in_channels\n        self.out_channels = out_channels\n        self.aggregators = [AGGREGATORS[aggr] for aggr in aggregators]\n        self.scalers = [SCALERS[scale] for scale in scalers]\n\n        self.F_in = in_channels\n        self.F_out = self.out_channels\n\n        deg = deg.to(torch.float)\n        total_no_vertices = deg.sum()\n        bin_degrees = torch.arange(len(deg))\n        self.avg_deg: Dict[str, float] = {\n            'lin': ((bin_degrees * deg).sum() / 
total_no_vertices).item(),\n            'log': (((bin_degrees + 1).log() * deg).sum() / total_no_vertices).item(),\n            'exp': ((bin_degrees.exp() * deg).sum() / total_no_vertices).item(),\n        }\n\n        in_channels = (len(aggregators) * len(scalers)) * self.F_in\n        modules = [Linear(in_channels, self.F_out)]\n        for _ in range(post_layers - 1):\n            modules += [ReLU()]\n            modules += [Linear(self.F_out, self.F_out)]\n        self.post_nn = Sequential(*modules)\n\n        self.reset_parameters()\n\n    def reset_parameters(self):\n        reset(self.post_nn)\n\n    def forward(self, x: Tensor, edge_index: Adj, edge_attr: OptTensor = None) -> Tensor:\n\n        # propagate_type: (x: Tensor)\n        out = self.propagate(edge_index, x=x, size=None)\n        return self.post_nn(out)\n\n    def message(self, x_j: Tensor) -> Tensor:\n        return x_j\n\n    def aggregate(self, inputs: Tensor, index: Tensor,\n                  dim_size: Optional[int] = None) -> Tensor:\n        outs = [aggr(inputs, index, dim_size) for aggr in self.aggregators]\n        out = torch.cat(outs, dim=-1)\n\n        deg = degree(index, dim_size, dtype=inputs.dtype).view(-1, 1)\n        outs = [scaler(out, deg, self.avg_deg) for scaler in self.scalers]\n        return torch.cat(outs, dim=-1)\n\n    def __repr__(self):\n        return (f'{self.__class__.__name__}({self.in_channels}, '\n                f'{self.out_channels})')"
  },
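The `avg_deg` statistics computed in both `PNAConv` and `PNAConvSimple` above reduce the training-set in-degree histogram to three scalar means that the scalers later divide by. A torch-free sketch of that computation (the helper name `avg_deg_stats` is ours, not part of the repo; it mirrors the `'lin'`/`'log'`/`'exp'` entries term by term):

```python
import math

def avg_deg_stats(deg_hist):
    """Mirror of the avg_deg computation on a degree histogram.

    deg_hist[d] = number of training-set nodes with in-degree d.
    Returns the 'lin', 'log' and 'exp' mean degrees used by the scalers.
    """
    total = sum(deg_hist)  # total number of vertices
    return {
        'lin': sum(d * n for d, n in enumerate(deg_hist)) / total,
        'log': sum(math.log(d + 1) * n for d, n in enumerate(deg_hist)) / total,
        'exp': sum(math.exp(d) * n for d, n in enumerate(deg_hist)) / total,
    }

# e.g. two nodes of degree 1 and one node of degree 2
stats = avg_deg_stats([0, 2, 1])
```

Because the histogram is computed once from the training set, the scalers normalise against a fixed constant at inference time rather than against the (possibly shifted) degree distribution of the test graphs.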
  {
    "path": "models/pytorch_geometric/scalers.py",
    "content": "import torch\nfrom torch import Tensor\nfrom typing import Dict\n\n# Implemented with the help of Matthias Fey, author of PyTorch Geometric\n# For an example see https://github.com/rusty1s/pytorch_geometric/blob/master/examples/pna.py\n\ndef scale_identity(src: Tensor, deg: Tensor, avg_deg: Dict[str, float]):\n    return src\n\n\ndef scale_amplification(src: Tensor, deg: Tensor, avg_deg: Dict[str, float]):\n    return src * (torch.log(deg + 1) / avg_deg['log'])\n\n\ndef scale_attenuation(src: Tensor, deg: Tensor, avg_deg: Dict[str, float]):\n    scale = avg_deg['log'] / torch.log(deg + 1)\n    scale[deg == 0] = 1\n    return src * scale\n\n\ndef scale_linear(src: Tensor, deg: Tensor, avg_deg: Dict[str, float]):\n    return src * (deg / avg_deg['lin'])\n\n\ndef scale_inverse_linear(src: Tensor, deg: Tensor, avg_deg: Dict[str, float]):\n    scale = avg_deg['lin'] / deg\n    scale[deg == 0] = 1\n    return src * scale\n\n\nSCALERS = {\n    'identity': scale_identity,\n    'amplification': scale_amplification,\n    'attenuation': scale_attenuation,\n    'linear': scale_linear,\n    'inverse_linear': scale_inverse_linear\n}\n"
  },
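The amplification and attenuation scalers above are multiplicative inverses of each other for degree > 0, and both reduce to the identity when `log(deg + 1)` equals the training-set average. A scalar sketch with plain floats instead of tensors (`avg_log` is an assumed value here; in the layer it comes from `avg_deg['log']`):

```python
import math

# assumed mean of log(deg + 1) over the training set, here as if every node had degree 3
avg_log = math.log(3 + 1)

def amplification(x, deg):
    # mirrors scale_amplification: scales messages up for high-degree nodes
    return x * (math.log(deg + 1) / avg_log)

def attenuation(x, deg):
    # mirrors scale_attenuation, including the deg == 0 guard (scale fixed to 1)
    if deg == 0:
        return x
    return x * (avg_log / math.log(deg + 1))
```

At the average degree both scalers leave the aggregated message unchanged, and composing the two is a no-op; the `deg == 0` guard matches the `scale[deg == 0] = 1` line in `scale_attenuation`, avoiding a division by `log(1) = 0`.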
  {
    "path": "multitask_benchmark/README.md",
    "content": "# Multi-task benchmark\n\n<img src=\"https://raw.githubusercontent.com/lukecavabarrett/pna/master/multitask_benchmark/images/multitask_results.png\" alt=\"Real world results\" width=\"500\"/>\n\n## Overview\n\nWe provide the scripts for the generation and execution of the multi-task benchmark.\n- `dataset_generation` contains:\n  - `graph_generation.py` with scripts to generate the various graphs and add randomness;\n  - `graph_algorithms.py` with the implementation of many algorithms on graphs that can be used as labels;\n  - `multitask_dataset.py` unifies the two files above generating and saving the benchmarks we used in the paper.\n- `util` contains:\n  - preprocessing subroutines and loss functions (`util.py`);\n  - general training and evaluation procedures (`train.py`).\n- `train` contains a script for each model which sets up the command line parameters and initiates the training procedure. \n  \nThis benchmark uses the PyTorch version of PNA (`../models/pytorch/pna`). Below you can find the instructions on how to create the dataset and run the models, these are also available in this [notebook](https://colab.research.google.com/drive/17NntHxoKQzpKmi8siMOLP9WfANlwbW8S?usp=sharing).\n\n## Dependencies\nInstall PyTorch from the [official website](https://pytorch.org/). The code was tested over PyTorch 1.4.\n\nMove to the source of the repository before running the following. 
Then install the other dependencies:\n```\npip3 install -r multitask_benchmark/requirements.txt\n```\n\n## Test run\n\nGenerate the benchmark dataset (add `--extrapolation` for multiple test sets of different sizes):\n```\npython3 -m multitask_benchmark.datasets_generation.multitask_dataset\n```\n\nthen run the training:\n```\npython3 -m multitask_benchmark.train.pna --variable --fixed --gru --lr=0.003 --weight_decay=1e-6 --dropout=0.0 --epochs=10000 --patience=1000 --variable_conv_layers=N/2 --fc_layers=3 --hidden=16 --towers=4 --aggregators=\"mean max min std\" --scalers=\"identity amplification attenuation\" --data=multitask_benchmark/data/multitask_dataset.pkl\n```\n\nThe command above uses the hyperparameters tuned for the non-extrapolating dataset and the architecture outlined in the diagram below. For more details on the architecture, how the hyperparameters were tuned and the results collected refer to our [paper](https://arxiv.org/abs/2004.05718).\n\n![architecture](images/architecture.png)\n"
  },
  {
    "path": "multitask_benchmark/datasets_generation/graph_algorithms.py",
    "content": "import math\nfrom queue import Queue\n\nimport numpy as np\n\n\ndef is_connected(A):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :return:bool whether the graph is connected or not\n    \"\"\"\n    for _ in range(int(1 + math.ceil(math.log2(A.shape[0])))):\n        A = np.dot(A, A)\n    return np.min(A) > 0\n\n\ndef identity(A, F):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :param F:np.array the nodes features\n    :return:F\n    \"\"\"\n    return F\n\n\ndef first_neighbours(A):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :param F:np.array the nodes features\n    :return: for each node, the number of nodes reachable in 1 hop\n    \"\"\"\n    return np.sum(A > 0, axis=0)\n\n\ndef second_neighbours(A):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :param F:np.array the nodes features\n    :return: for each node, the number of nodes reachable in no more than 2 hops\n    \"\"\"\n    A = A > 0.0\n    A = A + np.dot(A, A)\n    np.fill_diagonal(A, 0)\n    return np.sum(A > 0, axis=0)\n\n\ndef kth_neighbours(A, k):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :param F:np.array the nodes features\n    :return: for each node, the number of nodes reachable in k hops\n    \"\"\"\n    A = A > 0.0\n    R = np.zeros(A.shape)\n    for _ in range(k):\n        R = np.dot(R, A) + A\n    np.fill_diagonal(R, 0)\n    return np.sum(R > 0, axis=0)\n\n\ndef map_reduce_neighbourhood(A, F, f_reduce, f_map=None, hops=1, consider_itself=False):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :param F:np.array the nodes features\n    :return: for each node, map its neighbourhood with f_map, and reduce it with f_reduce\n    \"\"\"\n    if f_map is not None:\n        F = f_map(F)\n    A = np.array(A)\n\n    A = A > 0\n    R = np.zeros(A.shape)\n    for _ in range(hops):\n        R = np.dot(R, A) + A\n    np.fill_diagonal(R, 1 if consider_itself else 0)\n    R = R > 0\n\n    return 
np.array([f_reduce(F[R[i]]) for i in range(A.shape[0])])\n\n\ndef max_neighbourhood(A, F):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :param F:np.array the nodes features\n    :return: for each node, the maximum in its neighbourhood\n    \"\"\"\n    return map_reduce_neighbourhood(A, F, np.max, consider_itself=True)\n\n\ndef min_neighbourhood(A, F):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :param F:np.array the nodes features\n    :return: for each node, the minimum in its neighbourhood\n    \"\"\"\n    return map_reduce_neighbourhood(A, F, np.min, consider_itself=True)\n\n\ndef std_neighbourhood(A, F):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :param F:np.array the nodes features\n    :return: for each node, the standard deviation of its neighbourhood\n    \"\"\"\n    return map_reduce_neighbourhood(A, F, np.std, consider_itself=True)\n\n\ndef mean_neighbourhood(A, F):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :param F:np.array the nodes features\n    :return: for each node, the mean of its neighbourhood\n    \"\"\"\n    return map_reduce_neighbourhood(A, F, np.mean, consider_itself=True)\n\n\ndef local_maxima(A, F):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :param F:np.array the nodes features\n    :return: for each node, whether it is the maximum in its neighbourhood\n    \"\"\"\n    return F == map_reduce_neighbourhood(A, F, np.max, consider_itself=True)\n\n\ndef graph_laplacian(A):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :return: the laplacian of the adjacency matrix\n    \"\"\"\n    L = (A > 0) * -1\n    np.fill_diagonal(L, np.sum(A > 0, axis=0))\n    return L\n\n\ndef graph_laplacian_features(A, F):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :param F:np.array the nodes features\n    :return: the laplacian of the adjacency matrix multiplied by the features\n    \"\"\"\n    return np.matmul(graph_laplacian(A), F)\n\n\ndef 
isomorphism(A1, A2, F1=None, F2=None):\n    \"\"\"\n        Takes two adjacency matrices (A1,A2) and (optionally) two lists of features. It uses the Weisfeiler-Lehman test, so false positives might arise\n        :param      A1: adj_matrix, N*N numpy matrix\n        :param      A2: adj_matrix, N*N numpy matrix\n        :param      F1: node_values, numpy array of size N\n        :param      F2: node_values, numpy array of size N\n        :return:    isomorphic: boolean which is false when the two graphs are not isomorphic, true when they probably are.\n    \"\"\"\n    N = A1.shape[0]\n    if (F1 is None) ^ (F2 is None):\n        raise ValueError(\"either both or none between F1,F2 must be defined.\")\n    if F1 is None:\n        # Assign same initial value to each node\n        F1 = np.ones(N, int)\n        F2 = np.ones(N, int)\n    else:\n        if not np.array_equal(np.sort(F1), np.sort(F2)):\n            return False\n        if F1.dtype != int:\n            raise NotImplementedError('Still have to implement this')\n\n    p = 1000000007\n\n    def mapping(F):\n        return (F * 234 + 133) % p\n\n    def adjacency_hash(F):\n        F = np.sort(F)\n        b = 257\n\n        h = 0\n        for f in F:\n            h = (b * h + f) % p\n        return h\n\n    for _ in range(N):\n        F1 = map_reduce_neighbourhood(A1, F1, adjacency_hash, f_map=mapping, consider_itself=True, hops=1)\n        F2 = map_reduce_neighbourhood(A2, F2, adjacency_hash, f_map=mapping, consider_itself=True, hops=1)\n        if not np.array_equal(np.sort(F1), np.sort(F2)):\n            return False\n    return True\n\n\ndef count_edges(A):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :return: the number of edges in the graph\n    \"\"\"\n    return np.sum(A) / 2\n\n\ndef is_eulerian_cyclable(A):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :return: whether the graph has an Eulerian cycle\n    \"\"\"\n    return is_connected(A) and 
np.count_nonzero(first_neighbours(A) % 2 == 1) == 0\n\n\ndef is_eulerian_percorrible(A):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :return: whether the graph has an Eulerian path\n    \"\"\"\n    return is_connected(A) and np.count_nonzero(first_neighbours(A) % 2 == 1) in [0, 2]\n\n\ndef map_reduce_graph(A, F, f_reduce):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :param F:np.array the nodes features\n    :return: the features of the nodes reduced by f_reduce\n    \"\"\"\n    return f_reduce(F)\n\n\ndef mean_graph(A, F):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :param F:np.array the nodes features\n    :return: the mean of the features\n    \"\"\"\n    return map_reduce_graph(A, F, np.mean)\n\n\ndef max_graph(A, F):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :param F:np.array the nodes features\n    :return: the maximum of the features\n    \"\"\"\n    return map_reduce_graph(A, F, np.max)\n\n\ndef min_graph(A, F):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :param F:np.array the nodes features\n    :return: the minimum of the features\n    \"\"\"\n    return map_reduce_graph(A, F, np.min)\n\n\ndef std_graph(A, F):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :param F:np.array the nodes features\n    :return: the standard deviation of the features\n    \"\"\"\n    return map_reduce_graph(A, F, np.std)\n\n\ndef has_hamiltonian_cycle(A):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :return:bool whether the graph has a Hamiltonian cycle\n    \"\"\"\n    A = A + np.transpose(A)  # symmetrize without mutating the caller's matrix\n    A = A > 0\n    V = A.shape[0]\n\n    def ham_cycle_loop(pos):\n        if pos == V:\n            return bool(A[path[pos - 1]][path[0]])\n        for v in range(1, V):\n            if A[path[pos - 1]][v] and not used[v]:\n                path[pos] = v\n                used[v] = True\n                if 
ham_cycle_loop(pos + 1):\n                    return True\n                path[pos] = -1\n                used[v] = False\n        return False\n\n    used = [False] * V\n    path = [-1] * V\n    path[0] = 0\n\n    return ham_cycle_loop(1)\n\n\ndef all_pairs_shortest_paths(A, inf_sub=math.inf):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :param inf_sub: the placeholder value to use for pairs which are not connected\n    :return:np.array all pairs shortest paths\n    \"\"\"\n    A = np.array(A, dtype=float)  # float dtype so math.inf can be stored\n    N = A.shape[0]\n    for i in range(N):\n        for j in range(N):\n            if A[i][j] == 0:\n                A[i][j] = math.inf\n            if i == j:\n                A[i][j] = 0\n\n    for k in range(N):\n        for i in range(N):\n            for j in range(N):\n                A[i][j] = min(A[i][j], A[i][k] + A[k][j])\n\n    A = np.where(A == math.inf, inf_sub, A)\n    return A\n\n\ndef diameter(A):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :return: the diameter of the graph\n    \"\"\"\n    total = np.sum(A)\n    apsp = all_pairs_shortest_paths(A)\n    apsp = np.where(apsp < total + 1, apsp, -1)\n    return np.max(apsp)\n\n\ndef eccentricity(A):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :return: the eccentricity of each node\n    \"\"\"\n    total = np.sum(A)\n    apsp = all_pairs_shortest_paths(A)\n    apsp = np.where(apsp < total + 1, apsp, -1)\n    return np.max(apsp, axis=0)\n\n\ndef sssp_predecessor(A, F):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :param F:np.array the nodes features\n    :return: for each node, the best next step to reach the designated source\n    \"\"\"\n    assert (np.sum(F) == 1)\n    assert (np.max(F) == 1)\n    s = np.argmax(F)\n    N = A.shape[0]\n    P = np.zeros(A.shape)\n    V = np.zeros(N)\n    bfs = Queue()\n    bfs.put(s)\n    V[s] = 1\n    while not bfs.empty():\n        u = bfs.get()\n        for v in range(N):\n            if A[u][v] > 0 and V[v] == 0:\n  
              V[v] = 1\n                P[v][u] = 1\n                bfs.put(v)\n    return P\n\n\ndef max_eigenvalue(A):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :return: the maximum (by absolute value) eigenvalue of A\n    since A is real and symmetric, all the eigenvalues are guaranteed to be real\n    \"\"\"\n    [W, _] = np.linalg.eig(A)\n    return W[np.argmax(np.absolute(W))].real\n\n\ndef max_eigenvalues(A, k):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :param k:int the number of eigenvalues to be selected\n    :return: the k greatest (by absolute value) eigenvalues of A\n    \"\"\"\n    [W, _] = np.linalg.eig(A)\n    values = W[sorted(range(len(W)), key=lambda x: -np.absolute(W[x]))[:k]]\n    return values.real\n\n\ndef max_absolute_eigenvalues(A, k):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :param k:int the number of eigenvalues to be selected\n    :return: the absolute value of the k greatest (by absolute value) eigenvalues of A\n    \"\"\"\n    return np.absolute(max_eigenvalues(A, k))\n\n\ndef max_absolute_eigenvalues_laplacian(A, k):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :param k:int the number of eigenvalues to be selected\n    :return: the absolute value of the k greatest (by absolute value) eigenvalues of the laplacian of A\n    \"\"\"\n    A = graph_laplacian(A)\n    return np.absolute(max_eigenvalues(A, k))\n\n\ndef max_eigenvector(A):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :return: the maximum (by absolute value) eigenvector of A\n    since A is real and symmetric, all the eigenvectors are guaranteed to be real\n    \"\"\"\n    [W, V] = np.linalg.eig(A)\n    return V[:, np.argmax(np.absolute(W))].real\n\n\ndef spectral_radius(A):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :return: the spectral radius of A, i.e. the largest absolute value of its eigenvalues\n    \"\"\"\n    return 
np.abs(max_eigenvalue(A))\n\n\ndef page_rank(A, F=None, iter=64):\n    \"\"\"\n    :param A:np.array the adjacency matrix\n    :param F:np.array with initial weights. If None, uniform initialization will happen.\n    :param iter: number of squarings of the transition matrix, i.e. the power used is 2**iter\n    :return: for each node, its pagerank\n    \"\"\"\n\n    # normalize A rows\n    A = np.array(A, dtype=float)\n    A /= A.sum(axis=1)[:, np.newaxis]\n\n    # power iteration\n    for _ in range(iter):\n        A = np.matmul(A, A)\n\n    # generate prior distribution\n    if F is None:\n        F = np.ones(A.shape[-1])\n    else:\n        F = np.array(F, dtype=float)\n\n    # normalize prior\n    F /= np.sum(F)\n\n    # compute limit distribution\n    return np.matmul(F, A)\n\n\ndef tsp_length(A, F=None):\n    \"\"\"\n        :param A:np.array the adjacency matrix\n        :param F:np.array determining which nodes are to be visited. If None, all of them are.\n        :return: the length of the Traveling Salesman Problem shortest solution\n    \"\"\"\n\n    A = all_pairs_shortest_paths(A)\n    N = A.shape[0]\n    if F is None:\n        F = np.ones(N)\n    targets = np.nonzero(F)[0]\n    T = targets.shape[0]\n    S = (1 << T)\n    dp = np.zeros((S, T))\n\n    def popcount(x):\n        b = 0\n        while x > 0:\n            x &= x - 1\n            b += 1\n        return b\n\n    msks = np.argsort(np.vectorize(popcount)(np.arange(S)))\n    for i in range(T + 1):\n        for j in range(T):\n            if (1 << j) & msks[i] == 0:\n                dp[msks[i]][j] = math.inf\n\n    for i in range(T + 1, S):\n        msk = msks[i]\n        for u in range(T):\n            if (1 << u) & msk == 0:\n                dp[msk][u] = math.inf\n                continue\n            cost = math.inf\n            for v in range(T):\n                if v == u or (1 << v) & msk == 0:\n                    continue\n                cost = min(cost, dp[msk ^ (1 << u)][v] + A[targets[v]][targets[u]])\n            dp[msk][u] = cost\n    return np.min(dp[S 
- 1])\n\n\ndef get_nodes_labels(A, F):\n    \"\"\"\n    Takes the adjacency matrix and the list of nodes features and returns a set of labels for each node\n    :param      A: adj_matrix, N*N numpy matrix\n    :param      F: node_values, numpy array of size N\n    :return:    labels: KxN numpy matrix where K is the number of labels for each node\n    \"\"\"\n    labels = [identity(A, F), map_reduce_neighbourhood(A, F, np.mean, consider_itself=True),\n              map_reduce_neighbourhood(A, F, np.max, consider_itself=True),\n              map_reduce_neighbourhood(A, F, np.std, consider_itself=True), first_neighbours(A), second_neighbours(A),\n              eccentricity(A)]\n    return np.swapaxes(np.stack(labels), 0, 1)\n\n\ndef get_graph_labels(A, F):\n    \"\"\"\n    Takes the adjacency matrix and the list of nodes features and returns a set of labels for the whole graph\n    :param      A: adj_matrix, N*N numpy matrix\n    :param      F: node_values, numpy array of size N\n    :return:    labels: numpy array of size K where K is the number of labels for the graph\n    \"\"\"\n    labels = [diameter(A)]\n    return np.asarray(labels)\n"
  },
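`all_pairs_shortest_paths` above is a dense Floyd-Warshall pass, which `diameter` and `eccentricity` then reduce. A dependency-free sketch of the same recurrence on nested lists (`floyd_warshall` is our name for it; the repo's version operates on a numpy array):

```python
import math

def floyd_warshall(adj, inf_sub=math.inf):
    """O(N^3) all-pairs shortest paths on an adjacency matrix given
    as nested lists; a 0 entry off the diagonal means 'no edge'."""
    n = len(adj)
    # distance matrix: 0 on the diagonal, inf where there is no edge
    d = [[0 if i == j else (adj[i][j] if adj[i][j] != 0 else math.inf)
          for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    # substitute the placeholder for unreachable pairs
    return [[inf_sub if v == math.inf else v for v in row] for row in d]

# path graph 0-1-2: the maximum finite distance (the diameter) is 2
dist = floyd_warshall([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
```

The cubic loop is the reason the benchmark keeps graphs small (15-25 nodes in the default dataset); the labels are recomputed from scratch for every generated graph.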
  {
    "path": "multitask_benchmark/datasets_generation/graph_generation.py",
    "content": "import numpy as np\nimport random\nimport networkx as nx\nimport math\nimport matplotlib.pyplot as plt  # only required to plot\nfrom enum import Enum\n\n\"\"\"\n    Generates random graphs of different types of a given size.\n    Some of the graph are created using the NetworkX library, for more info see\n    https://networkx.github.io/documentation/networkx-1.10/reference/generators.html\n\"\"\"\n\n\nclass GraphType(Enum):\n    RANDOM = 0\n    ERDOS_RENYI = 1\n    BARABASI_ALBERT = 2\n    GRID = 3\n    CAVEMAN = 5\n    TREE = 6\n    LADDER = 7\n    LINE = 8\n    STAR = 9\n    CATERPILLAR = 10\n    LOBSTER = 11\n\n\n# probabilities of each type in case of random type\nMIXTURE = [(GraphType.ERDOS_RENYI, 0.2), (GraphType.BARABASI_ALBERT, 0.2), (GraphType.GRID, 0.05),\n           (GraphType.CAVEMAN, 0.05), (GraphType.TREE, 0.15), (GraphType.LADDER, 0.05),\n           (GraphType.LINE, 0.05), (GraphType.STAR, 0.05), (GraphType.CATERPILLAR, 0.1), (GraphType.LOBSTER, 0.1)]\n\n\ndef erdos_renyi(N, degree, seed):\n    \"\"\" Creates an Erdős-Rényi or binomial graph of size N with degree/N probability of edge creation \"\"\"\n    return nx.fast_gnp_random_graph(N, degree / N, seed, directed=False)\n\n\ndef barabasi_albert(N, degree, seed):\n    \"\"\" Creates a random graph according to the Barabási–Albert preferential attachment model\n        of size N and where nodes are atteched with degree edges \"\"\"\n    return nx.barabasi_albert_graph(N, degree, seed)\n\n\ndef grid(N):\n    \"\"\" Creates a m x k 2d grid graph with N = m*k and m and k as close as possible \"\"\"\n    m = 1\n    for i in range(1, int(math.sqrt(N)) + 1):\n        if N % i == 0:\n            m = i\n    return nx.grid_2d_graph(m, N // m)\n\n\ndef caveman(N):\n    \"\"\" Creates a caveman graph of m cliques of size k, with m and k as close as possible \"\"\"\n    m = 1\n    for i in range(1, int(math.sqrt(N)) + 1):\n        if N % i == 0:\n            m = i\n    return 
nx.caveman_graph(m, N // m)\n\n\ndef tree(N, seed):\n    \"\"\" Creates a tree of size N with a power law degree distribution \"\"\"\n    return nx.random_powerlaw_tree(N, seed=seed, tries=10000)\n\n\ndef ladder(N):\n    \"\"\" Creates a ladder graph of N nodes: two rows of N/2 nodes, with each pair connected by a single edge.\n        In case N is odd another node is attached to the first one. \"\"\"\n    G = nx.ladder_graph(N // 2)\n    if N % 2 != 0:\n        G.add_node(N - 1)\n        G.add_edge(0, N - 1)\n    return G\n\n\ndef line(N):\n    \"\"\" Creates a graph composed of N nodes in a line \"\"\"\n    return nx.path_graph(N)\n\n\ndef star(N):\n    \"\"\" Creates a graph composed of one center node connected to N-1 outer nodes \"\"\"\n    return nx.star_graph(N - 1)\n\n\ndef caterpillar(N, seed):\n    \"\"\" Creates a random caterpillar graph with a backbone of size b (drawn from U[1, N)), and N − b\n        pendent vertices uniformly connected to the backbone. \"\"\"\n    np.random.seed(seed)\n    B = np.random.randint(low=1, high=N)\n    G = nx.empty_graph(N)\n    for i in range(1, B):\n        G.add_edge(i - 1, i)\n    for i in range(B, N):\n        G.add_edge(i, np.random.randint(B))\n    return G\n\n\ndef lobster(N, seed):\n    \"\"\" Creates a random lobster graph with a backbone of size b (drawn from U[1, N)), p (drawn\n        from U[1, N − b]) pendent vertices uniformly connected to the backbone, and additional\n        N − b − p pendent vertices uniformly connected to the previous pendent vertices \"\"\"\n    np.random.seed(seed)\n    B = np.random.randint(low=1, high=N)\n    F = np.random.randint(low=B + 1, high=N + 1)\n    G = nx.empty_graph(N)\n    for i in range(1, B):\n        G.add_edge(i - 1, i)\n    for i in range(B, F):\n        G.add_edge(i, np.random.randint(B))\n    for i in range(F, N):\n        G.add_edge(i, np.random.randint(low=B, high=F))\n    return G\n\n\ndef randomize(A):\n    \"\"\" Adds some randomness by toggling some edges 
without changing the expected number of edges of the graph \"\"\"\n    BASE_P = 0.9\n\n    # e is the number of edges, r the number of missing edges\n    N = A.shape[0]\n    e = np.sum(A) / 2\n    r = N * (N - 1) / 2 - e\n\n    # ep chance of an existing edge to remain, rp chance of another edge to appear\n    if e <= r:\n        ep = BASE_P\n        rp = (1 - BASE_P) * e / r\n    else:\n        ep = BASE_P + (1 - BASE_P) * (e - r) / e\n        rp = 1 - BASE_P\n\n    array = np.random.uniform(size=(N, N), low=0.0, high=0.5)\n    array = array + array.transpose()\n    remaining = np.multiply(np.where(array < ep, 1, 0), A)\n    appearing = np.multiply(np.multiply(np.where(array < rp, 1, 0), 1 - A), 1 - np.eye(N))\n    ans = np.add(remaining, appearing)\n\n    # assert (np.all(np.multiply(ans, np.eye(N)) == np.zeros((N, N))))\n    # assert (np.all(ans >= 0))\n    # assert (np.all(ans <= 1))\n    # assert (np.all(ans == ans.transpose()))\n    return ans\n\n\ndef generate_graph(N, type=GraphType.RANDOM, seed=None, degree=None):\n    \"\"\"\n    Generates random graphs of different types of a given size. 
Note:\n     - graphs are undirected and edges are unweighted\n     - node values are sampled independently from U[0,1]\n\n    :param N:       number of nodes\n    :param type:    type chosen between the categories specified in GraphType enum\n    :param seed:    random seed\n    :param degree:  average degree of a node, only used in some graph types\n    :return:        adj_matrix: N*N numpy matrix\n                    node_values: numpy array of size N\n                    type: the GraphType that was actually generated\n    \"\"\"\n    random.seed(seed)\n    np.random.seed(seed)\n\n    # sample which random type to use\n    if type == GraphType.RANDOM:\n        type = np.random.choice([t for (t, _) in MIXTURE], 1, p=[pr for (_, pr) in MIXTURE])[0]\n\n    # generate the graph structure depending on the type\n    if type == GraphType.ERDOS_RENYI:\n        if degree is None: degree = random.random() * N\n        G = erdos_renyi(N, degree, seed)\n    elif type == GraphType.BARABASI_ALBERT:\n        if degree is None: degree = int(random.random() * (N - 1)) + 1\n        G = barabasi_albert(N, degree, seed)\n    elif type == GraphType.GRID:\n        G = grid(N)\n    elif type == GraphType.CAVEMAN:\n        G = caveman(N)\n    elif type == GraphType.TREE:\n        G = tree(N, seed)\n    elif type == GraphType.LADDER:\n        G = ladder(N)\n    elif type == GraphType.LINE:\n        G = line(N)\n    elif type == GraphType.STAR:\n        G = star(N)\n    elif type == GraphType.CATERPILLAR:\n        G = caterpillar(N, seed)\n    elif type == GraphType.LOBSTER:\n        G = lobster(N, seed)\n    else:\n        raise ValueError(\"Graph type not defined\")\n\n    # generate adjacency matrix and nodes values\n    nodes = list(G)\n    random.shuffle(nodes)\n    adj_matrix = nx.to_numpy_array(G, nodes)\n    node_values = np.random.uniform(low=0, high=1, size=N)\n\n    # randomization\n    adj_matrix = randomize(adj_matrix)\n\n    # draw the graph created\n    # nx.draw(G, pos=nx.spring_layout(G))\n    # plt.draw()\n\n    return adj_matrix, 
node_values, type\n\n\nif __name__ == '__main__':\n    for i in range(100):\n        adj_matrix, node_values, _ = generate_graph(10, GraphType.RANDOM, seed=i)\n    print(adj_matrix)\n"
  },
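`randomize` above chooses its keep/appear probabilities so that the expected number of edges is unchanged: in both branches, `ep * e + rp * r == e`. A sketch of just that computation (the helper name `toggle_probs` is ours; `randomize` then draws a symmetric uniform matrix and applies these probabilities entrywise):

```python
def toggle_probs(e, r, base_p=0.9):
    """Mirror of randomize()'s probability computation.

    e = number of existing edges, r = number of absent off-diagonal pairs.
    Returns (ep, rp): the probability that an existing edge remains and
    the probability that an absent edge appears, chosen so that the
    expected edge count ep*e + rp*r equals e.
    """
    if e <= r:
        ep = base_p
        rp = (1 - base_p) * e / r
    else:
        ep = base_p + (1 - base_p) * (e - r) / e
        rp = 1 - base_p
    return ep, rp

sparse = toggle_probs(10, 35)   # sparse graph: few edges appear, most survive
dense = toggle_probs(30, 15)    # dense graph: keep-probability is boosted instead
```

Keeping the expectation fixed means the perturbation adds noise to the structure without systematically densifying or sparsifying the generated graphs.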
  {
    "path": "multitask_benchmark/datasets_generation/multitask_dataset.py",
    "content": "import argparse\nimport os\nimport pickle\n\nimport numpy as np\nimport torch\nfrom inspect import signature\n\nfrom tqdm import tqdm\n\nfrom . import graph_algorithms\nfrom .graph_generation import GraphType, generate_graph\n\n\nclass DatasetMultitask:\n\n    def __init__(self, n_graphs, N, seed, graph_type, get_nodes_labels, get_graph_labels, print_every, sssp, filename):\n        self.adj = {}\n        self.features = {}\n        self.nodes_labels = {}\n        self.graph_labels = {}\n\n        def to_categorical(x, N):\n            v = np.zeros(N)\n            v[x] = 1\n            return v\n\n        for dset in N.keys():\n            if dset not in n_graphs:\n                n_graphs[dset] = n_graphs['default']\n\n            total_n_graphs = sum(n_graphs[dset])\n\n            set_adj = [[] for _ in n_graphs[dset]]\n            set_features = [[] for _ in n_graphs[dset]]\n            set_nodes_labels = [[] for _ in n_graphs[dset]]\n            set_graph_labels = [[] for _ in n_graphs[dset]]\n\n            t = tqdm(total=np.sum(n_graphs[dset]), desc=dset, leave=True, unit=' graphs')\n            for batch, batch_size in enumerate(n_graphs[dset]):\n                for i in range(batch_size):\n                    # generate a random graph of type graph_type and size N\n                    seed += 1\n                    adj, features, type = generate_graph(N[dset][batch], graph_type, seed=seed)\n\n                    while np.min(np.max(adj, 0)) == 0.0:\n                        # remove graph with singleton nodes\n                        seed += 1\n                        adj, features, _ = generate_graph(N[dset][batch], type, seed=seed)\n\n                    t.update(1)\n\n                    # make sure there are no self connection\n                    assert np.all(\n                        np.multiply(adj, np.eye(N[dset][batch])) == np.zeros((N[dset][batch], N[dset][batch])))\n\n                    if sssp:\n                        # define 
the source node\n                        source_node = np.random.randint(0, N[dset][batch])\n\n                    # compute the labels with graph_algorithms; if sssp, also pass the single-source shortest paths\n                    node_labels = get_nodes_labels(adj, features,\n                                                   graph_algorithms.all_pairs_shortest_paths(adj, 0)[source_node]\n                                                   if sssp else None)\n                    graph_labels = get_graph_labels(adj, features)\n                    if sssp:\n                        # add the 1-hot feature determining the starting node\n                        features = np.stack([to_categorical(source_node, N[dset][batch]), features], axis=1)\n\n                    set_adj[batch].append(adj)\n                    set_features[batch].append(features)\n                    set_nodes_labels[batch].append(node_labels)\n                    set_graph_labels[batch].append(graph_labels)\n\n            t.close()\n            self.adj[dset] = [torch.from_numpy(np.asarray(adjs)).float() for adjs in set_adj]\n            self.features[dset] = [torch.from_numpy(np.asarray(fs)).float() for fs in set_features]\n            self.nodes_labels[dset] = [torch.from_numpy(np.asarray(nls)).float() for nls in set_nodes_labels]\n            self.graph_labels[dset] = [torch.from_numpy(np.asarray(gls)).float() for gls in set_graph_labels]\n\n        self.save_as_pickle(filename)\n\n    def save_as_pickle(self, filename):\n        \"\"\" Saves the data to filename (serialised with torch.save) \"\"\"\n        directory = os.path.dirname(filename)\n        if directory and not os.path.exists(directory):\n            os.makedirs(directory)\n\n        with open(filename, 'wb') as f:\n            torch.save((self.adj, self.features, self.nodes_labels, self.graph_labels), f)\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--out', type=str, 
default='./multitask_benchmark/data/multitask_dataset.pkl', help='Data path.')\n    parser.add_argument('--seed', type=int, default=1234, help='Random seed.')\n    parser.add_argument('--graph_type', type=str, default='RANDOM', help='Type of graphs in train set')\n    parser.add_argument('--nodes_labels', nargs='+', default=[\"eccentricity\", \"graph_laplacian_features\", \"sssp\"])\n    parser.add_argument('--graph_labels', nargs='+', default=[\"is_connected\", \"diameter\", \"spectral_radius\"])\n    parser.add_argument('--extrapolation', action='store_true', default=False,\n                        help='Generate various test sets with graphs larger than those in train and validation.')\n    parser.add_argument('--print_every', type=int, default=20, help='')\n    args = parser.parse_args()\n\n    if 'sssp' in args.nodes_labels:\n        sssp = True\n        args.nodes_labels.remove('sssp')\n    else:\n        sssp = False\n\n    # get the functions of graph_algorithms corresponding to the specified label names\n    nodes_labels_algs = list(map(lambda s: getattr(graph_algorithms, s), args.nodes_labels))\n    graph_labels_algs = list(map(lambda s: getattr(graph_algorithms, s), args.graph_labels))\n\n\n    def get_nodes_labels(A, F, initial=None):\n        labels = [] if initial is None else [initial]\n        for f in nodes_labels_algs:\n            params = signature(f).parameters\n            labels.append(f(A, F) if 'F' in params else f(A))\n        return np.swapaxes(np.stack(labels), 0, 1)\n\n\n    def get_graph_labels(A, F):\n        labels = []\n        for f in graph_labels_algs:\n            params = signature(f).parameters\n            labels.append(f(A, F) if 'F' in params else f(A))\n        return np.asarray(labels).flatten()\n\n\n    data = DatasetMultitask(n_graphs={'train': [512] * 10, 'val': [128] * 5, 'default': [256] * 5},\n                            N={**{'train': range(15, 25), 'val': range(15, 25)}, **(\n                                {'test-(20,25)': range(20, 
25), 'test-(25,30)': range(25, 30),\n                                 'test-(30,35)': range(30, 35), 'test-(35,40)': range(35, 40),\n                                 'test-(40,45)': range(40, 45), 'test-(45,50)': range(45, 50),\n                                 'test-(60,65)': range(60, 65), 'test-(75,80)': range(75, 80),\n                                 'test-(95,100)': range(95, 100)} if args.extrapolation else\n                                {'test': range(15, 25)})},\n                            seed=args.seed, graph_type=getattr(GraphType, args.graph_type),\n                            get_nodes_labels=get_nodes_labels, get_graph_labels=get_graph_labels,\n                            print_every=args.print_every, sssp=sssp, filename=args.out)\n\n    data.save_as_pickle(args.out)\n"
  },
  {
    "path": "multitask_benchmark/requirements.txt",
    "content": "numpy\nnetworkx\nmatplotlib\ntorch"
  },
  {
    "path": "multitask_benchmark/train/gat.py",
    "content": "from __future__ import division\nfrom __future__ import print_function\n\nfrom models.pytorch.gat.layer import GATLayer\nfrom multitask_benchmark.util.train import execute_train, build_arg_parser\n\n# Training settings\nparser = build_arg_parser()\nparser.add_argument('--nheads', type=int, default=4, help='Number of attentions heads.')\nparser.add_argument('--alpha', type=float, default=0.2, help='Alpha for the leaky_relu.')\nargs = parser.parse_args()\n\nexecute_train(gnn_args=dict(nfeat=None,\n                            nhid=args.hidden,\n                            nodes_out=None,\n                            graph_out=None,\n                            dropout=args.dropout,\n                            device=None,\n                            first_conv_descr=dict(layer_type=GATLayer,\n                                                  args=dict(\n                                                      nheads=args.nheads,\n                                                      alpha=args.alpha\n                                                  )),\n                            middle_conv_descr=dict(layer_type=GATLayer,\n                                                   args=dict(\n                                                       nheads=args.nheads,\n                                                       alpha=args.alpha\n                                                   )),\n                            fc_layers=args.fc_layers,\n                            conv_layers=args.conv_layers,\n                            skip=args.skip,\n                            gru=args.gru,\n                            fixed=args.fixed,\n                            variable=args.variable), args=args)\n"
  },
  {
    "path": "multitask_benchmark/train/gcn.py",
    "content": "from __future__ import division\nfrom __future__ import print_function\n\nfrom models.pytorch.gcn.layer import GCNLayer\nfrom multitask_benchmark.util.train import execute_train, build_arg_parser\n\n# Training settings\nparser = build_arg_parser()\nargs = parser.parse_args()\n\nexecute_train(gnn_args=dict(nfeat=None,\n                            nhid=args.hidden,\n                            nodes_out=None,\n                            graph_out=None,\n                            dropout=args.dropout,\n                            device=None,\n                            first_conv_descr=dict(layer_type=GCNLayer, args=dict()),\n                            middle_conv_descr=dict(layer_type=GCNLayer, args=dict()),\n                            fc_layers=args.fc_layers,\n                            conv_layers=args.conv_layers,\n                            skip=args.skip,\n                            gru=args.gru,\n                            fixed=args.fixed,\n                            variable=args.variable), args=args)\n"
  },
  {
    "path": "multitask_benchmark/train/gin.py",
    "content": "from __future__ import division\nfrom __future__ import print_function\n\nfrom models.pytorch.gin.layer import GINLayer\nfrom multitask_benchmark.util.train import execute_train, build_arg_parser\n\n# Training settings\nparser = build_arg_parser()\nparser.add_argument('--gin_fc_layers', type=int, default=2, help='Number of fully connected layers after the aggregation.')\nargs = parser.parse_args()\n\nexecute_train(gnn_args=dict(nfeat=None,\n                            nhid=args.hidden,\n                            nodes_out=None,\n                            graph_out=None,\n                            dropout=args.dropout,\n                            device=None,\n                            first_conv_descr=dict(layer_type=GINLayer, args=dict(fc_layers=args.gin_fc_layers)),\n                            middle_conv_descr=dict(layer_type=GINLayer, args=dict(fc_layers=args.gin_fc_layers)),\n                            fc_layers=args.fc_layers,\n                            conv_layers=args.conv_layers,\n                            skip=args.skip,\n                            gru=args.gru,\n                            fixed=args.fixed,\n                            variable=args.variable), args=args)\n"
  },
  {
    "path": "multitask_benchmark/train/mpnn.py",
    "content": "from __future__ import division\nfrom __future__ import print_function\n\nfrom models.pytorch.pna.layer import PNALayer\nfrom multitask_benchmark.util.train import execute_train, build_arg_parser\n\n# Training settings\nparser = build_arg_parser()\nparser.add_argument('--self_loop', action='store_true', default=False, help='Whether to add self loops in aggregators')\nparser.add_argument('--towers', type=int, default=4, help='Number of towers in MPNN layers')\nparser.add_argument('--aggregation', type=str, default='sum', help='Type of aggregation')\nparser.add_argument('--pretrans_layers', type=int, default=1, help='Number of MLP layers before aggregation')\nparser.add_argument('--posttrans_layers', type=int, default=1, help='Number of MLP layers after aggregation')\nargs = parser.parse_args()\n\n# The MPNNs can be considered a particular case of PNA networks with a single aggregator and no scalers (identity)\n\nexecute_train(gnn_args=dict(nfeat=None,\n                            nhid=args.hidden,\n                            nodes_out=None,\n                            graph_out=None,\n                            dropout=args.dropout,\n                            device=None,\n                            first_conv_descr=dict(layer_type=PNALayer,\n                                                  args=dict(\n                                                      aggregators=[args.aggregation],\n                                                      scalers=['identity'], avg_d=None,\n                                                      towers=args.towers,\n                                                      self_loop=args.self_loop,\n                                                      divide_input=False,\n                                                      pretrans_layers=args.pretrans_layers,\n                                                      posttrans_layers=args.posttrans_layers\n                                                  )),\n     
                       middle_conv_descr=dict(layer_type=PNALayer,\n                                                   args=dict(\n                                                       aggregators=[args.aggregation],\n                                                       scalers=['identity'],\n                                                       avg_d=None, towers=args.towers,\n                                                       self_loop=args.self_loop,\n                                                       divide_input=True,\n                                                       pretrans_layers=args.pretrans_layers,\n                                                       posttrans_layers=args.posttrans_layers\n                                                   )),\n                            fc_layers=args.fc_layers,\n                            conv_layers=args.conv_layers,\n                            skip=args.skip,\n                            gru=args.gru,\n                            fixed=args.fixed,\n                            variable=args.variable), args=args)\n"
  },
  {
    "path": "multitask_benchmark/train/pna.py",
    "content": "from __future__ import division\nfrom __future__ import print_function\n\nfrom models.pytorch.pna.layer import PNALayer\nfrom multitask_benchmark.util.train import execute_train, build_arg_parser\n\n# Training settings\nparser = build_arg_parser()\nparser.add_argument('--self_loop', action='store_true', default=False, help='Whether to add self loops in aggregators')\nparser.add_argument('--aggregators', type=str, default='mean max min std', help='Aggregators to use')\nparser.add_argument('--scalers', type=str, default='identity amplification attenuation', help='Scalers to use')\nparser.add_argument('--towers', type=int, default=4, help='Number of towers in PNA layers')\nparser.add_argument('--pretrans_layers', type=int, default=1, help='Number of MLP layers before aggregation')\nparser.add_argument('--posttrans_layers', type=int, default=1, help='Number of MLP layers after aggregation')\nargs = parser.parse_args()\n\nexecute_train(gnn_args=dict(nfeat=None,\n                            nhid=args.hidden,\n                            nodes_out=None,\n                            graph_out=None,\n                            dropout=args.dropout,\n                            device=None,\n                            first_conv_descr=dict(layer_type=PNALayer,\n                                                  args=dict(\n                                                      aggregators=args.aggregators.split(),\n                                                      scalers=args.scalers.split(), avg_d=None,\n                                                      towers=args.towers,\n                                                      self_loop=args.self_loop,\n                                                      divide_input=False,\n                                                      pretrans_layers=args.pretrans_layers,\n                                                      posttrans_layers=args.posttrans_layers\n                                        
          )),\n                            middle_conv_descr=dict(layer_type=PNALayer,\n                                                   args=dict(\n                                                       aggregators=args.aggregators.split(),\n                                                       scalers=args.scalers.split(),\n                                                       avg_d=None, towers=args.towers,\n                                                       self_loop=args.self_loop,\n                                                       divide_input=True,\n                                                       pretrans_layers=args.pretrans_layers,\n                                                       posttrans_layers=args.posttrans_layers\n                                                   )),\n                            fc_layers=args.fc_layers,\n                            conv_layers=args.conv_layers,\n                            skip=args.skip,\n                            gru=args.gru,\n                            fixed=args.fixed,\n                            variable=args.variable), args=args)\n"
  },
  {
    "path": "multitask_benchmark/util/train.py",
    "content": "from __future__ import division\nfrom __future__ import print_function\n\nimport argparse\nimport os\nimport sys\nimport time\nfrom types import SimpleNamespace\n\nimport math\nimport numpy as np\nimport torch\nimport torch.optim as optim\nfrom tqdm import tqdm\n\nfrom models.pytorch.gnn_framework import GNN\nfrom multitask_benchmark.util.util import load_dataset, total_loss, total_loss_multiple_batches, \\\n    specific_loss_multiple_batches\n\n\ndef build_arg_parser():\n    \"\"\"\n    :return:    argparse.ArgumentParser() filled with the standard arguments for a training session.\n                    Might need to be enhanced for some train_scripts.\n    \"\"\"\n    parser = argparse.ArgumentParser()\n\n    parser.add_argument('--data', type=str, default='../../data/multitask_dataset.pkl', help='Data path.')\n    parser.add_argument('--no-cuda', action='store_true', default=False, help='Disables CUDA training.')\n    parser.add_argument('--only_nodes', action='store_true', default=False, help='Evaluate only nodes labels.')\n    parser.add_argument('--only_graph', action='store_true', default=False, help='Evaluate only graph labels.')\n    parser.add_argument('--seed', type=int, default=42, help='Random seed.')\n    parser.add_argument('--epochs', type=int, default=10000, help='Number of epochs to train.')\n    parser.add_argument('--lr', type=float, default=0.003, help='Initial learning rate.')\n    parser.add_argument('--weight_decay', type=float, default=1e-6, help='Weight decay (L2 loss on parameters).')\n    parser.add_argument('--hidden', type=int, default=16, help='Number of hidden units.')\n    parser.add_argument('--dropout', type=float, default=0.0, help='Dropout rate (1 - keep probability).')\n    parser.add_argument('--patience', type=int, default=1000, help='Patience')\n    parser.add_argument('--conv_layers', type=int, default=None, help='Graph convolutions')\n    parser.add_argument('--variable_conv_layers', type=str, default='N', 
help='Name of the function determining the number of graph convolutions at runtime')\n    parser.add_argument('--fc_layers', type=int, default=3, help='Fully connected layers in readout')\n    parser.add_argument('--loss', type=str, default='mse', help='Loss function to use.')\n    parser.add_argument('--print_every', type=int, default=50, help='Print training results every N epochs.')\n    parser.add_argument('--final_activation', type=str, default='LeakyReLu',\n                        help='Final activation in both the FC layers for nodes and the S2S for graph')\n    parser.add_argument('--skip', action='store_true', default=False,\n                        help='Whether to use the model with skip connections.')\n    parser.add_argument('--gru', action='store_true', default=False,\n                        help='Whether to use a GRU in the update function of the layers.')\n    parser.add_argument('--fixed', action='store_true', default=False,\n                        help='Whether to use the model with fixed middle convolutions.')\n    parser.add_argument('--variable', action='store_true', default=False,\n                        help='Whether to have a variable number of convolutional layers.')\n    return parser\n\n\n# map from names (as passed as parameters) to function determining number of convolutional layers at runtime\nVARIABLE_LAYERS_FUNCTIONS = {\n    'N': lambda adj: adj.shape[1],\n    'N/2': lambda adj: adj.shape[1] // 2,\n    '4log2N': lambda adj: int(4 * math.log2(adj.shape[1])),\n    '2log2N': lambda adj: int(2 * math.log2(adj.shape[1])),\n    '3sqrtN': lambda adj: int(3 * math.sqrt(adj.shape[1]))\n}\n\n\ndef execute_train(gnn_args, args):\n    \"\"\"\n    :param gnn_args: the description of the model to be trained (expressed as arguments for GNN.__init__)\n    :param args: the parameters of the training session\n    \"\"\"\n    args.cuda = not args.no_cuda and torch.cuda.is_available()\n\n    np.random.seed(args.seed)\n    torch.manual_seed(args.seed)\n    if args.cuda:\n        
torch.cuda.manual_seed(args.seed)\n\n    device = 'cuda' if args.cuda else 'cpu'\n    print('Using device:', device)\n\n    # load data\n    adj, features, node_labels, graph_labels = load_dataset(args.data, args.loss, args.only_nodes, args.only_graph,\n                                                            print_baseline=True)\n\n    # model and optimizer\n    gnn_args = SimpleNamespace(**gnn_args)\n\n    # compute avg_d on the training set\n    if 'avg_d' in gnn_args.first_conv_descr['args'] or 'avg_d' in gnn_args.middle_conv_descr['args']:\n        dlist = [torch.sum(A, dim=-1) for A in adj['train']]\n        avg_d = dict(lin=sum([torch.mean(D) for D in dlist]) / len(dlist),\n                     exp=sum([torch.mean(torch.exp(torch.div(1, D)) - 1) for D in dlist]) / len(dlist),\n                     log=sum([torch.mean(torch.log(D + 1)) for D in dlist]) / len(dlist))\n    if 'avg_d' in gnn_args.first_conv_descr['args']:\n        gnn_args.first_conv_descr['args']['avg_d'] = avg_d\n    if 'avg_d' in gnn_args.middle_conv_descr['args']:\n        gnn_args.middle_conv_descr['args']['avg_d'] = avg_d\n\n    gnn_args.device = device\n    gnn_args.nfeat = features['train'][0].shape[2]\n    gnn_args.nodes_out = node_labels['train'][0].shape[-1]\n    gnn_args.graph_out = graph_labels['train'][0].shape[-1]\n    if gnn_args.variable:\n        assert gnn_args.conv_layers is None, \"If model is variable, you shouldn't specify conv_layers (maybe you \" \\\n                                             \"meant variable_conv_layers?) 
\"\n    else:\n        assert gnn_args.conv_layers is not None, \"If the model is not variable, you should specify conv_layers\"\n    gnn_args.conv_layers = VARIABLE_LAYERS_FUNCTIONS[\n        args.variable_conv_layers] if gnn_args.variable else args.conv_layers\n    model = GNN(**vars(gnn_args))\n    optimizer = optim.Adam(model.parameters(), lr=args.lr, weight_decay=args.weight_decay)\n\n    pytorch_total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)\n    print(\"Total params\", pytorch_total_params)\n\n    def move_cuda(dset):\n        assert args.cuda, \"Cannot move dataset on CUDA, running on cpu\"\n        if features[dset][0].is_cuda:\n            # already on CUDA\n            return\n        features[dset] = [x.cuda() for x in features[dset]]\n        adj[dset] = [x.cuda() for x in adj[dset]]\n        node_labels[dset] = [x.cuda() for x in node_labels[dset]]\n        graph_labels[dset] = [x.cuda() for x in graph_labels[dset]]\n\n    if args.cuda:\n        model.cuda()\n        # move train, val to CUDA (delay moving test until needed)\n        move_cuda('train')\n        move_cuda('val')\n\n    def train(epoch):\n        \"\"\"\n        Execute a single epoch of the training loop\n\n        :param epoch:int the number of the epoch being performed (0-indexed)\n        \"\"\"\n        t = time.time()\n\n        # train step\n        model.train()\n        for batch in range(len(adj['train'])):\n            optimizer.zero_grad()\n            output = model(features['train'][batch], adj['train'][batch])\n            loss_train = total_loss(output, (node_labels['train'][batch], graph_labels['train'][batch]), loss=args.loss,\n                                    only_nodes=args.only_nodes, only_graph=args.only_graph)\n            loss_train.backward()\n            optimizer.step()\n\n        # validation epoch\n        model.eval()\n        output_zip = [model(features['val'][batch], adj['val'][batch]) for batch in 
range(len(adj['val']))]\n        output = ([x[0] for x in output_zip], [x[1] for x in output_zip])\n\n        loss_val = total_loss_multiple_batches(output, (node_labels['val'], graph_labels['val']), loss=args.loss,\n                                               only_nodes=args.only_nodes, only_graph=args.only_graph)\n\n        return loss_train.data.item(), loss_val\n\n    def compute_test():\n        \"\"\"\n        Evaluate the current model on all the sets of the dataset, printing results.\n        This procedure is destructive on datasets.\n        \"\"\"\n        model.eval()\n\n        sets = list(features.keys())\n        for dset in sets:\n            # move data on CUDA if not already on it\n            if args.cuda:\n                move_cuda(dset)\n\n            output_zip = [model(features[dset][batch], adj[dset][batch]) for batch in range(len(adj[dset]))]\n            output = ([x[0] for x in output_zip], [x[1] for x in output_zip])\n            loss_test = total_loss_multiple_batches(output, (node_labels[dset], graph_labels[dset]), loss=args.loss,\n                                                    only_nodes=args.only_nodes, only_graph=args.only_graph)\n            print(\"Test set results \", dset, \": loss= {:.4f}\".format(loss_test))\n            print(dset, \": \",\n                  specific_loss_multiple_batches(output, (node_labels[dset], graph_labels[dset]), loss=args.loss,\n                                                 only_nodes=args.only_nodes, only_graph=args.only_graph))\n\n            # free unnecessary data\n            del output_zip\n            del output\n            del loss_test\n            del features[dset]\n            del adj[dset]\n            del node_labels[dset]\n            del graph_labels[dset]\n            torch.cuda.empty_cache()\n\n    sys.stdout.flush()\n    # Train model\n    t_total = time.time()\n    loss_values = []\n    bad_counter = 0\n    best = args.epochs + 1\n    best_epoch = -1\n\n    
sys.stdout.flush()\n    with tqdm(range(args.epochs), leave=True, unit='epoch') as t:\n        for epoch in t:\n            loss_train, loss_val = train(epoch)\n            loss_values.append(loss_val)\n            t.set_description('loss.train: {:.4f}, loss.val: {:.4f}'.format(loss_train, loss_val))\n            if loss_values[-1] < best:\n                # save current model\n                torch.save(model.state_dict(), '{}.pkl'.format(epoch))\n                # remove previous model\n                if best_epoch >= 0:\n                    os.remove('{}.pkl'.format(best_epoch))\n                # update training variables\n                best = loss_values[-1]\n                best_epoch = epoch\n                bad_counter = 0\n            else:\n                bad_counter += 1\n\n            if bad_counter == args.patience:\n                print('Early stop at epoch {} (no improvement in last {} epochs)'.format(epoch + 1, bad_counter))\n                break\n\n    print(\"Optimization Finished!\")\n    print(\"Total time elapsed: {:.4f}s\".format(time.time() - t_total))\n\n    # Restore best model\n    print('Loading epoch {}'.format(best_epoch + 1))\n    model.load_state_dict(torch.load('{}.pkl'.format(best_epoch)))\n\n    # Testing\n    with torch.no_grad():\n        compute_test()\n"
  },
  {
    "path": "multitask_benchmark/util/util.py",
    "content": "from __future__ import division\nfrom __future__ import print_function\n\nimport torch\nimport torch.nn.functional as F\n\n\ndef load_dataset(data_path, loss, only_nodes, only_graph, print_baseline=True):\n    with open(data_path, 'rb') as f:\n        (adj, features, node_labels, graph_labels) = torch.load(f)\n\n    # normalize labels\n    max_node_labels = torch.cat([nls.max(0)[0].max(0)[0].unsqueeze(0) for nls in node_labels['train']]).max(0)[0]\n    max_graph_labels = torch.cat([gls.max(0)[0].unsqueeze(0) for gls in graph_labels['train']]).max(0)[0]\n    for dset in node_labels.keys():\n        node_labels[dset] = [nls / max_node_labels for nls in node_labels[dset]]\n        graph_labels[dset] = [gls / max_graph_labels for gls in graph_labels[dset]]\n\n    if print_baseline:\n        # calculate baseline\n        mean_node_labels = torch.cat([nls.mean(0).mean(0).unsqueeze(0) for nls in node_labels['train']]).mean(0)\n        mean_graph_labels = torch.cat([gls.mean(0).unsqueeze(0) for gls in graph_labels['train']]).mean(0)\n\n        for dset in node_labels.keys():\n            if dset not in ['train', 'val']:\n                baseline_nodes = [mean_node_labels.repeat(list(nls.shape[0:-1]) + [1]) for nls in node_labels[dset]]\n                baseline_graph = [mean_graph_labels.repeat([gls.shape[0], 1]) for gls in graph_labels[dset]]\n\n                print(\"Baseline loss \", dset,\n                      specific_loss_multiple_batches((baseline_nodes, baseline_graph),\n                                                     (node_labels[dset], graph_labels[dset]),\n                                                     loss=loss, only_nodes=only_nodes, only_graph=only_graph))\n\n    return adj, features, node_labels, graph_labels\n\n\ndef get_loss(loss, output, target):\n    if loss == \"mse\":\n        return F.mse_loss(output, target)\n    elif loss == \"cross_entropy\":\n        if len(output.shape) > 2:\n            (B, N, _) = output.shape\n     
       output = output.reshape((B * N, -1))\n            target = target.reshape((B * N, -1))\n        _, target = target.max(dim=1)\n        return F.cross_entropy(output, target)\n    else:\n        raise ValueError(\"loss function not supported: \" + loss)\n\n\ndef total_loss(output, target, loss='mse', only_nodes=False, only_graph=False):\n    \"\"\" returns the average of the average losses of each task \"\"\"\n    assert not (only_nodes and only_graph)\n\n    if only_nodes:\n        nodes_loss = get_loss(loss, output[0], target[0])\n        return nodes_loss\n    elif only_graph:\n        graph_loss = get_loss(loss, output[1], target[1])\n        return graph_loss\n\n    nodes_loss = get_loss(loss, output[0], target[0])\n    graph_loss = get_loss(loss, output[1], target[1])\n    weighted_average = (nodes_loss * output[0].shape[-1] + graph_loss * output[1].shape[-1]) / (\n            output[0].shape[-1] + output[1].shape[-1])\n    return weighted_average\n\n\ndef total_loss_multiple_batches(output, target, loss='mse', only_nodes=False, only_graph=False):\n    \"\"\" returns the average of the average losses of each task over all batches,\n        batches are weighted equally regardless of their cardinality or graph size \"\"\"\n    n_batches = len(output[0])\n    return sum([total_loss((output[0][batch], output[1][batch]), (target[0][batch], target[1][batch]),\n                           loss, only_nodes, only_graph).data.item()\n                for batch in range(n_batches)]) / n_batches\n\n\ndef specific_loss(output, target, loss='mse', only_nodes=False, only_graph=False):\n    \"\"\" returns the average loss for each task \"\"\"\n    assert not (only_nodes and only_graph)\n    n_nodes_labels = output[0].shape[-1] if not only_graph else 0\n    n_graph_labels = output[1].shape[-1] if not only_nodes else 0\n\n    if only_nodes:\n        nodes_loss = [get_loss(loss, output[0][:, :, k], target[0][:, :, k]).item() for k in range(n_nodes_labels)]\n        return nodes_loss\n   
 elif only_graph:\n        graph_loss = [get_loss(loss, output[1][:, k], target[1][:, k]).item() for k in range(n_graph_labels)]\n        return graph_loss\n\n    nodes_loss = [get_loss(loss, output[0][:, :, k], target[0][:, :, k]).item() for k in range(n_nodes_labels)]\n    graph_loss = [get_loss(loss, output[1][:, k], target[1][:, k]).item() for k in range(n_graph_labels)]\n    return nodes_loss + graph_loss\n\n\ndef specific_loss_multiple_batches(output, target, loss='mse', only_nodes=False, only_graph=False):\n    \"\"\" returns the average loss over all batches for each task,\n        batches are weighted equally regardless of their cardinality or graph size \"\"\"\n    assert not (only_nodes and only_graph)\n\n    n_batches = len(output[0])\n    classes = (output[0][0].shape[-1] if not only_graph else 0) + (output[1][0].shape[-1] if not only_nodes else 0)\n\n    sum_losses = [0] * classes\n    for batch in range(n_batches):\n        spec_loss = specific_loss((output[0][batch], output[1][batch]), (target[0][batch], target[1][batch]), loss,\n                                  only_nodes, only_graph)\n        for par in range(classes):\n            sum_losses[par] += spec_loss[par]\n\n    return [sum_loss / n_batches for sum_loss in sum_losses]\n"
  },
  {
    "path": "realworld_benchmark/README.md",
    "content": "# Real-world benchmarks\n\n<img src=\"https://raw.githubusercontent.com/lukecavabarrett/pna/master/multitask_benchmark/images/realworld_results.png\" alt=\"Real world results\" width=\"500\"/>\n\n## Overview\n\nWe provide the scripts for the download and execution of the real-world benchmarks we used. \nMany scripts in this directory were taken directly from or inspired by \"Benchmarking GNNs\" \nby Dwivedi _et al._ refer to their [code](https://github.com/graphdeeplearning/benchmarking-gnns) \nand [paper](https://arxiv.org/abs/2003.00982) for more details on their work. The graph classification\nbenchmark MolHIV comes from the [Open Graph Benchmark](https://ogb.stanford.edu/).\n\n- `configs` contains .json configuration files for the various datasets;\n- `data` contains scripts to download the datasets;\n- `nets` contains the architectures that were used with the PNA in the benchmarks;\n- `train` contains the training scripts.\n  \nThese benchmarks use the DGL version of PNA (`../models/dgl`) with the MolHIV model using the *simple* layer architecture. \nBelow you can find the instructions on how to download the datasets and run the models. \nYou can run these scripts directly in this [notebook](https://colab.research.google.com/drive/1RnV4MBjCl98eubAGpEF-eXdAW5mTP3h3?usp=sharing).\n\n\n\n## Test run\n\n### Benchmark Setup\n\n[Follow these instructions](./docs/setup.md) to install the benchmark and setup the environment.\n\n### Run model training\n```\n# at the root of the repo\ncd realworld_benchmark\npython { main_molecules.py | main_superpixels.py } [--param=value ...] --dataset { ZINC | MNIST | CIFAR10 } --gpu_id gpu_id --config config_file\n```\n\n\n## Tuned hyperparameters\n\nYou can find below the hyperparameters we used for our experiments. 
In general, the depth of the architectures was not changed while the width was adjusted to keep the total number of parameters of the model between 100k and 110k as done in \"Benchmarking GNNs\" to ensure a fair comparison of the architectures. Refer to our [paper](https://arxiv.org/abs/2004.05718) for an interpretation of the results.\n\n```\nFor OGB leaderboard (hyperparameters taken from the DGN model - 300k parameters):\n\npython -m main_HIV --weight_decay=3e-6 --L=4 --hidden_dim=80 --out_dim=80 --residual=True --readout=mean --in_feat_dropout=0.0 --dropout=0.3 --batch_norm=True --aggregators=\"mean max min std\" --scalers=\"identity amplification attenuation\" --dataset HIV --gpu_id 0 --config \"configs/molecules_graph_classification_PNA_HIV.json\" --epochs=200 --init_lr=0.01 --lr_reduce_factor=0.5 --lr_schedule_patience=20 --min_lr=0.0001\n\n\nFor the leaderboard (2nd version of the datasets - 400/500k parameters)\n\n# ZINC\nPNA:\npython main_molecules.py --weight_decay=3e-6 --L=16 --hidden_dim=70 --out_dim=70 --residual=True --edge_feat=True --edge_dim=40 --readout=sum --in_feat_dropout=0.0 --dropout=0.0 --graph_norm=True --batch_norm=True --aggregators=\"mean max min std\" --scalers=\"identity amplification attenuation\" --towers=5 --pretrans_layers=1 --posttrans_layers=1 --divide_input_first=True --divide_input_last=True --dataset ZINC --gpu_id 0 --config \"configs/molecules_graph_regression_pna_ZINC.json\" --lr_schedule_patience=20\nMPNN (sum/max):\npython main_molecules.py --weight_decay=3e-6 --L=16 --hidden_dim=110 --out_dim=110 --residual=True --edge_feat=True --edge_dim=40 --readout=sum --in_feat_dropout=0.0 --dropout=0.0 --graph_norm=True --batch_norm=True --aggregators=\"sum\"/\"max\" --scalers=\"identity\" --towers=5 --pretrans_layers=1 --posttrans_layers=1 --divide_input_first=True --divide_input_last=True --dataset ZINC --gpu_id 0 --config \"configs/molecules_graph_regression_pna_ZINC.json\" --lr_schedule_patience=20\n\n\nFor the paper (1st 
version of the datasets - 100k parameters)\n--- PNA ---\n\n# ZINC\npython main_molecules.py --weight_decay=3e-6 --L=4 --hidden_dim=75 --out_dim=70 --residual=True --edge_feat=False --readout=sum --in_feat_dropout=0.0 --dropout=0.0 --graph_norm=True --batch_norm=True --aggregators=\"mean max min std\" --scalers=\"identity amplification attenuation\" --towers=5 --divide_input_first=False --divide_input_last=True  --dataset ZINC --gpu_id 0 --config \"configs/molecules_graph_regression_pna_ZINC.json\" --lr_schedule_patience=5\npython main_molecules.py --weight_decay=3e-6 --L=4 --hidden_dim=70 --out_dim=60 --residual=True --edge_feat=True --edge_dim=50 --readout=sum --in_feat_dropout=0.0 --dropout=0.0 --graph_norm=True --batch_norm=True --aggregators=\"mean max min std\" --scalers=\"identity amplification attenuation\" --towers=5 --pretrans_layers=1 --posttrans_layers=1 --divide_input_first=True --divide_input_last=True  --dataset ZINC --gpu_id 0 --config \"configs/molecules_graph_regression_pna_ZINC.json\" --lr_schedule_patience=20\n\n# CIFAR10\npython main_superpixels.py --weight_decay=3e-6 --L=4 --hidden_dim=75 --out_dim=70 --residual=True --edge_feat=False --readout=sum --in_feat_dropout=0.0 --dropout=0.1 --graph_norm=True --batch_norm=True --aggregators=\"mean max min std\" --scalers=\"identity amplification attenuation\" --towers=5 --divide_input_first=True --divide_input_last=True  --dataset CIFAR10 --gpu_id 0 --config \"configs/superpixels_graph_classification_pna_CIFAR10.json\" --lr_schedule_patience=5\npython main_superpixels.py --weight_decay=3e-6 --L=4 --hidden_dim=75 --out_dim=70 --residual=True --edge_feat=True --edge_dim=50 --readout=sum --in_feat_dropout=0.0 --dropout=0.3 --graph_norm=True --batch_norm=True --aggregators=\"mean max min std\" --scalers=\"identity amplification attenuation\" --towers=5 --divide_input_first=True --divide_input_last=True  --dataset CIFAR10 --gpu_id 0 --config \"configs/superpixels_graph_classification_pna_CIFAR10.json\" 
--lr_schedule_patience=5\n\n# MNIST\npython main_superpixels.py --weight_decay=3e-6 --L=4 --hidden_dim=75 --out_dim=70 --residual=True --edge_feat=False --readout=sum --in_feat_dropout=0.0 --dropout=0.1 --graph_norm=True --batch_norm=True --aggregators=\"mean max min std\" --scalers=\"identity amplification attenuation\" --towers=5 --divide_input_first=True --divide_input_last=True  --dataset MNIST --gpu_id 0 --config \"configs/superpixels_graph_classification_pna_MNIST.json\" --lr_schedule_patience=5\npython main_superpixels.py --weight_decay=3e-6 --L=4 --hidden_dim=75 --out_dim=70 --residual=True --edge_feat=True --edge_dim=50 --readout=sum --in_feat_dropout=0.0 --dropout=0.3 --graph_norm=True --batch_norm=True --aggregators=\"mean max min std\" --scalers=\"identity amplification attenuation\" --towers=5 --divide_input_first=True --divide_input_last=True  --dataset MNIST --gpu_id 0 --config \"configs/superpixels_graph_classification_pna_MNIST.json\" --lr_schedule_patience=5\n\n\n--- PNA (no scalers) ---\n\n# ZINC\npython main_molecules.py --weight_decay=3e-6 --L=4 --hidden_dim=95 --out_dim=90 --residual=True --edge_feat=False --readout=sum --in_feat_dropout=0.0 --dropout=0.0 --graph_norm=True --batch_norm=True --aggregators=\"mean max min std\" --scalers=\"identity\" --towers=5 --divide_input_first=True --divide_input_last=True  --dataset ZINC --gpu_id 0 --config \"configs/molecules_graph_regression_pna_ZINC.json\" --lr_schedule_patience=5\npython main_molecules.py --weight_decay=3e-6 --L=4 --hidden_dim=90 --out_dim=80 --residual=True --edge_feat=True --edge_dim=50 --readout=sum --in_feat_dropout=0.0 --dropout=0.0 --graph_norm=True --batch_norm=True --aggregators=\"mean max min std\" --scalers=\"identity\" --towers=5 --pretrans_layers=1 --posttrans_layers=1 --divide_input_first=True --divide_input_last=True  --dataset ZINC --gpu_id 0 --config \"configs/molecules_graph_regression_pna_ZINC.json\" --lr_schedule_patience=20\n\n# CIFAR10\npython main_superpixels.py 
--weight_decay=3e-6 --L=4 --hidden_dim=95 --out_dim=90 --residual=True --edge_feat=False --readout=sum --in_feat_dropout=0.0 --dropout=0.1 --graph_norm=True --batch_norm=True --aggregators=\"mean max min std\" --scalers=\"identity\" --towers=5 --divide_input_first=True --divide_input_last=True  --dataset CIFAR10 --gpu_id 0 --config \"configs/superpixels_graph_classification_pna_CIFAR10.json\" --lr_schedule_patience=5\npython main_superpixels.py --weight_decay=3e-6 --L=4 --hidden_dim=95 --out_dim=90 --residual=True --edge_feat=True --edge_dim=50 --readout=sum --in_feat_dropout=0.0 --dropout=0.3 --graph_norm=True --batch_norm=True --aggregators=\"mean max min std\" --scalers=\"identity\" --towers=5 --divide_input_first=True --divide_input_last=True  --dataset CIFAR10 --gpu_id 0 --config \"configs/superpixels_graph_classification_pna_CIFAR10.json\" --lr_schedule_patience=5\n\n# MNIST\npython main_superpixels.py --weight_decay=3e-6 --L=4 --hidden_dim=95 --out_dim=90 --residual=True --edge_feat=False --readout=sum --in_feat_dropout=0.0 --dropout=0.1 --graph_norm=True --batch_norm=True --aggregators=\"mean max min std\" --scalers=\"identity\" --towers=5 --divide_input_first=True --divide_input_last=True  --dataset MNIST --gpu_id 0 --config \"configs/superpixels_graph_classification_pna_MNIST.json\" --lr_schedule_patience=5\npython main_superpixels.py --weight_decay=3e-6 --L=4 --hidden_dim=95 --out_dim=90 --residual=True --edge_feat=True --edge_dim=50 --readout=sum --in_feat_dropout=0.0 --dropout=0.3 --graph_norm=True --batch_norm=True --aggregators=\"mean max min std\" --scalers=\"identity\" --towers=5 --divide_input_first=True --divide_input_last=True  --dataset MNIST --gpu_id 0 --config \"configs/superpixels_graph_classification_pna_MNIST.json\" --lr_schedule_patience=5\n\n\n--- MPNN (sum/max) ---\n\n# ZINC\npython main_molecules.py --weight_decay=1e-5 --L=4 --hidden_dim=110 --out_dim=80 --residual=True --edge_feat=False --readout=sum --in_feat_dropout=0.0 
--dropout=0.0 --graph_norm=True --batch_norm=True --aggregators=\"sum\"/\"max\" --scalers=\"identity\" --towers=5 --divide_input_first=True --divide_input_last=True  --dataset ZINC --gpu_id 0 --config \"configs/molecules_graph_regression_pna_ZINC.json\" --lr_schedule_patience=5\npython main_molecules.py --weight_decay=3e-6 --L=4 --hidden_dim=100 --out_dim=70 --residual=True --edge_dim=50 --edge_feat=True --readout=sum --in_feat_dropout=0.0 --dropout=0.0 --graph_norm=True --batch_norm=True --aggregators=\"sum\"/\"max\" --scalers=\"identity\" --towers=5 --divide_input_first=True --divide_input_last=True  --dataset ZINC --gpu_id 0 --config \"configs/molecules_graph_regression_pna_ZINC.json\" --lr_schedule_patience=20\n\n# CIFAR10\npython main_superpixels.py --weight_decay=3e-6 --L=4 --hidden_dim=110 --out_dim=90 --residual=True --edge_feat=False --readout=sum --in_feat_dropout=0.0 --dropout=0.2 --graph_norm=True --batch_norm=True --aggregators=\"sum\"/\"max\" --scalers=\"identity\" --towers=5 --divide_input_first=True --divide_input_last=True  --dataset CIFAR10 --gpu_id 0 --config \"configs/superpixels_graph_classification_pna_CIFAR10.json\" --lr_schedule_patience=5\npython main_superpixels.py --weight_decay=3e-6 --L=4 --hidden_dim=110 --out_dim=90 --residual=True --edge_feat=True --edge_dim=20 --readout=sum --in_feat_dropout=0.0 --dropout=0.2 --graph_norm=True --batch_norm=True --aggregators=\"sum\"/\"max\" --scalers=\"identity\" --towers=5 --divide_input_first=True --divide_input_last=True  --dataset CIFAR10 --gpu_id 0 --config \"configs/superpixels_graph_classification_pna_CIFAR10.json\" --lr_schedule_patience=5\n\n# MNIST\npython main_superpixels.py --weight_decay=3e-6 --L=4 --hidden_dim=110 --out_dim=90 --residual=True --edge_feat=False --readout=sum --in_feat_dropout=0.0 --dropout=0.2 --graph_norm=True --batch_norm=True --aggregators=\"sum\"/\"max\" --scalers=\"identity\" --towers=5 --divide_input_first=True --divide_input_last=True  --dataset MNIST --gpu_id 0 
--config \"configs/superpixels_graph_classification_pna_MNIST.json\" --lr_schedule_patience=5\npython main_superpixels.py --weight_decay=3e-6 --L=4 --hidden_dim=110 --out_dim=90 --residual=True --edge_feat=True --edge_dim=20 --readout=sum --in_feat_dropout=0.0 --dropout=0.2 --graph_norm=True --batch_norm=True --aggregators=\"sum\"/\"max\" --scalers=\"identity\" --towers=5 --divide_input_first=True --divide_input_last=True  --dataset MNIST --gpu_id 0 --config \"configs/superpixels_graph_classification_pna_MNIST.json\" --lr_schedule_patience=5\n\n```\n\nalternatively, for OGB leaderboard, run the following scripts in the [DGN](https://github.com/Saro00/DGN) repository:\n\n```\n# MolHIV \n\npython -m main_HIV --weight_decay=3e-6 --L=4 --hidden_dim=80 --out_dim=80 --residual=True --readout=mean --in_feat_dropout=0.0 --dropout=0.3 --batch_norm=True --aggregators=\"mean max min std\" --scalers=\"identity amplification attenuation\" --dataset HIV --config \"configs/molecules_graph_classification_DGN_HIV.json\" --epochs=200 --init_lr=0.01 --lr_reduce_factor=0.5 --lr_schedule_patience=20 --min_lr=0.0001\n\n# MolPCBA \n\npython main_PCBA.py --type_net=\"complex\" --batch_size=512 --lap_norm=\"none\" --weight_decay=3e-6 --L=4 --hidden_dim=510 --out_dim=510 --residual=True --edge_feat=True  --readout=sum --graph_norm=True --batch_norm=True --aggregators=\"mean sum max\" --scalers=\"identity\" --config \"configs/molecules_graph_classification_DGN_PCBA.json\"  --lr_schedule_patience=4 --towers=5 --dropout=0.2 --init_lr=0.0005 --min_lr=0.00002 --edge_dim=16 --lr_reduce_factor=0.8\n```\n\n\n"
  },
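The training entry points in the README above accept both a `--config` .json file and individual `--param=value` flags. A hypothetical sketch of the resulting precedence, with defaults read from the config and CLI flags applied on top (`merge_params` is illustrative, not the repo's actual argument handling):

```python
# Hypothetical sketch of the config-plus-flags precedence used by the training
# entry points: defaults come from the .json config and any --param=value flag
# overrides the matching entry. merge_params is illustrative, not repo code.
import json

def merge_params(config, cli_overrides):
    """Return net_params from the config with CLI overrides applied on top."""
    params = dict(config["net_params"])
    params.update(cli_overrides)
    return params

config = json.loads('{"net_params": {"L": 4, "hidden_dim": 75, "dropout": 0.0}}')
print(merge_params(config, {"dropout": 0.3}))  # {'L': 4, 'hidden_dim': 75, 'dropout': 0.3}
```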
  {
    "path": "realworld_benchmark/configs/molecules_graph_classification_PNA_HIV.json",
    "content": "{\n  \"gpu\": {\n    \"use\": true,\n    \"id\": 0\n  },\n  \"model\": \"PNA\",\n  \"dataset\": \"HIV\",\n\n  \"params\": {\n    \"seed\": 41,\n    \"epochs\": 200,\n    \"batch_size\": 128,\n    \"init_lr\": 0.01,\n    \"lr_reduce_factor\": 0.5,\n    \"lr_schedule_patience\": 20,\n    \"min_lr\": 1e-4,\n    \"weight_decay\": 3e-6,\n    \"print_epoch_interval\": 5,\n    \"max_time\": 48\n  },\n  \"net_params\": {\n    \"L\": 4,\n    \"hidden_dim\": 70,\n    \"out_dim\": 70,\n    \"residual\": true,\n    \"readout\": \"mean\",\n    \"in_feat_dropout\": 0.0,\n    \"dropout\": 0.3,\n    \"batch_norm\": true,\n    \"aggregators\": \"mean max min std\",\n    \"scalers\": \"identity amplification attenuation\",\n    \"posttrans_layers\" : 1\n  }\n}"
  },
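The `params` block of this config describes a reduce-on-plateau schedule: the learning rate starts at `init_lr`, is multiplied by `lr_reduce_factor` once the validation metric has stalled for `lr_schedule_patience` epochs, and never drops below `min_lr`. A rough pure-Python sketch of how those fields interact (`schedule_lr` is illustrative; the training scripts presumably rely on PyTorch's `ReduceLROnPlateau`):

```python
# Rough pure-Python sketch (not the repo's training loop) of how the scheduler
# fields interact: lr starts at init_lr, is multiplied by lr_reduce_factor
# after lr_schedule_patience epochs without improvement, with min_lr as floor.

def schedule_lr(val_losses, init_lr=0.01, factor=0.5, patience=20, min_lr=1e-4):
    lr, best, stale = init_lr, float("inf"), 0
    for loss in val_losses:
        if loss < best:
            best, stale = loss, 0
        else:
            stale += 1
            if stale > patience:
                lr, stale = max(lr * factor, min_lr), 0
    return lr

# Three stale epochs with patience=2 trigger one halving: 0.01 -> 0.005.
print(schedule_lr([1.0, 0.9, 0.9, 0.9, 0.9], patience=2))  # 0.005
```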
  {
    "path": "realworld_benchmark/configs/molecules_graph_regression_pna_ZINC.json",
    "content": "{\n  \"gpu\": {\n    \"use\": true,\n    \"id\": 0\n  },\n  \"model\": \"PNA\",\n  \"dataset\": \"ZINC\",\n  \"out_dir\": \"out/molecules_graph_regression/\",\n  \"params\": {\n    \"seed\": 41,\n    \"epochs\": 1000,\n    \"batch_size\": 128,\n    \"init_lr\": 0.001,\n    \"lr_reduce_factor\": 0.5,\n    \"lr_schedule_patience\": 5,\n    \"min_lr\": 1e-5,\n    \"weight_decay\": 3e-6,\n    \"print_epoch_interval\": 5,\n    \"max_time\": 48\n  },\n  \"net_params\": {\n    \"L\": 4,\n    \"hidden_dim\": 75,\n    \"out_dim\": 70,\n    \"residual\": true,\n    \"edge_feat\": false,\n    \"readout\": \"sum\",\n    \"in_feat_dropout\": 0.0,\n    \"dropout\": 0.0,\n    \"graph_norm\": true,\n    \"batch_norm\": true,\n    \"aggregators\": \"mean max min std\",\n    \"scalers\": \"identity amplification attenuation\",\n    \"towers\": 5,\n    \"divide_input_first\": false,\n    \"divide_input_last\": true,\n    \"gru\": false,\n    \"edge_dim\": 0,\n    \"pretrans_layers\" : 1,\n    \"posttrans_layers\" : 1\n  }\n}"
  },
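One constraint the tower settings in this config appear to imply: when the input is divided across towers (`divide_input_first`/`divide_input_last`), the dimension being divided presumably has to split evenly among the `towers` (here `hidden_dim` 75 and `out_dim` 70 across 5 towers). A tiny illustrative check (`tower_width` is not a repo function):

```python
# Illustrative check (not a repo function) of the tower split implied by the
# configs: a dimension divided across towers is assumed to split evenly.

def tower_width(dim, towers):
    """Per-tower width when dim is divided across the given number of towers."""
    assert dim % towers == 0, "dim must be divisible by the number of towers"
    return dim // towers

print(tower_width(75, 5), tower_width(70, 5))  # 15 14
```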
  {
    "path": "realworld_benchmark/configs/superpixels_graph_classification_pna_CIFAR10.json",
    "content": "{\n  \"gpu\": {\n    \"use\": true,\n    \"id\": 0\n  },\n  \"model\": \"PNA\",\n  \"dataset\": \"CIFAR10\",\n  \"out_dir\": \"out/superpixels_graph_classification/\",\n  \"params\": {\n    \"seed\": 41,\n    \"epochs\": 1000,\n    \"batch_size\": 128,\n    \"init_lr\": 0.001,\n    \"lr_reduce_factor\": 0.5,\n    \"lr_schedule_patience\": 5,\n    \"min_lr\": 1e-5,\n    \"weight_decay\": 3e-6,\n    \"print_epoch_interval\": 5,\n    \"max_time\": 48\n  },\n  \"net_params\": {\n    \"L\": 4,\n    \"hidden_dim\": 75,\n    \"out_dim\": 70,\n    \"residual\": true,\n    \"edge_feat\": false,\n    \"readout\": \"sum\",\n    \"in_feat_dropout\": 0.0,\n    \"dropout\": 0.0,\n    \"graph_norm\": true,\n    \"batch_norm\": true,\n    \"aggregators\": \"mean max min std\",\n    \"scalers\": \"identity amplification attenuation\",\n    \"towers\": 5,\n    \"divide_input_first\": true,\n    \"divide_input_last\": false,\n    \"gru\": false,\n    \"edge_dim\": 0,\n    \"pretrans_layers\" : 1,\n    \"posttrans_layers\" : 1\n  }\n}"
  },
  {
    "path": "realworld_benchmark/configs/superpixels_graph_classification_pna_MNIST.json",
    "content": "{\n  \"gpu\": {\n    \"use\": true,\n    \"id\": 0\n  },\n  \"model\": \"PNA\",\n  \"dataset\": \"MNIST\",\n  \"out_dir\": \"out/superpixels_graph_classification/\",\n  \"params\": {\n    \"seed\": 41,\n    \"epochs\": 1000,\n    \"batch_size\": 128,\n    \"init_lr\": 0.001,\n    \"lr_reduce_factor\": 0.5,\n    \"lr_schedule_patience\": 5,\n    \"min_lr\": 1e-5,\n    \"weight_decay\": 3e-6,\n    \"print_epoch_interval\": 5,\n    \"max_time\": 48\n  },\n  \"net_params\": {\n    \"L\": 4,\n    \"hidden_dim\": 100,\n    \"out_dim\": 70,\n    \"residual\": true,\n    \"edge_feat\": false,\n    \"readout\": \"sum\",\n    \"in_feat_dropout\": 0.0,\n    \"dropout\": 0.0,\n    \"graph_norm\": true,\n    \"batch_norm\": true,\n    \"aggregators\": \"mean max min std\",\n    \"scalers\": \"identity amplification attenuation\",\n    \"towers\": 5,\n    \"divide_input_first\": true,\n    \"divide_input_last\": false,\n    \"gru\": false,\n    \"edge_dim\": 0,\n    \"pretrans_layers\" : 1,\n    \"posttrans_layers\" : 1\n  }\n}"
  },
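Across these configs, `aggregators` and `scalers` are space-separated name lists (e.g. `"mean max min std"`). A sketch of how such a string can be resolved to callables; the simple list reducers below are stand-ins for the repo's tensor aggregation functions:

```python
# Sketch of resolving the space-separated "aggregators" config string into
# callables; these plain-list reducers stand in for the repo's tensor versions.

AGGREGATORS = {
    "mean": lambda xs: sum(xs) / len(xs),
    "max": max,
    "min": min,
    "sum": sum,
}

def parse_aggregators(spec):
    """'mean max min' -> list of aggregation callables."""
    return [AGGREGATORS[name] for name in spec.split()]

neighbours = [1.0, 2.0, 4.0]
aggs = parse_aggregators("mean max min")
print([agg(neighbours) for agg in aggs])  # [2.3333333333333335, 4.0, 1.0]
```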
  {
    "path": "realworld_benchmark/data/HIV.py",
    "content": "import time\nimport dgl\nimport torch\nfrom torch.utils.data import Dataset\nfrom ogb.graphproppred import DglGraphPropPredDataset\nfrom ogb.graphproppred import Evaluator\nimport torch.utils.data\n\n\nclass HIVDGL(torch.utils.data.Dataset):\n    def __init__(self, data, split):\n        self.split = split\n        self.data = [g for g in data[self.split]]\n        self.graph_lists = []\n        self.graph_labels = []\n        for g in self.data:\n            if g[0].number_of_nodes() > 5:\n                self.graph_lists.append(g[0])\n                self.graph_labels.append(g[1])\n        self.n_samples = len(self.graph_lists)\n\n    def __len__(self):\n        \"\"\"Return the number of graphs in the dataset.\"\"\"\n        return self.n_samples\n\n    def __getitem__(self, idx):\n        \"\"\"\n            Get the idx^th sample.\n            Parameters\n            ---------\n            idx : int\n                The sample index.\n            Returns\n            -------\n            (dgl.DGLGraph, int)\n                DGLGraph with node feature stored in `feat` field\n                And its label.\n        \"\"\"\n        return self.graph_lists[idx], self.graph_labels[idx]\n\n\nclass HIVDataset(Dataset):\n    def __init__(self, name, verbose=True):\n        start = time.time()\n        if verbose:\n            print(\"[I] Loading dataset %s...\" % (name))\n        self.name = name\n        self.dataset = DglGraphPropPredDataset(name = 'ogbg-molhiv')\n        self.split_idx = self.dataset.get_idx_split()\n\n        self.train = HIVDGL(self.dataset, self.split_idx['train'])\n        self.val = HIVDGL(self.dataset, self.split_idx['valid'])\n        self.test = HIVDGL(self.dataset, self.split_idx['test'])\n\n        self.evaluator = Evaluator(name='ogbg-molhiv')\n\n        if verbose:\n            print('train, test, val sizes :', len(self.train), len(self.test), len(self.val))\n            print(\"[I] Finished loading.\")\n            
print(\"[I] Data load time: {:.4f}s\".format(time.time() - start))\n\n    # form a mini batch from a given list of samples = [(graph, label) pairs]\n    def collate(self, samples):\n        # The input samples is a list of pairs (graph, label).\n        graphs, labels = map(list, zip(*samples))\n        labels = torch.cat(labels).long()\n        batched_graph = dgl.batch(graphs)\n\n        return batched_graph, labels\n\n    def _add_self_loops(self):\n        # function for adding self loops\n        # this function will be called only if self_loop flag is True\n\n        self.train.graph_lists = [self_loop(g) for g in self.train.graph_lists]\n        self.val.graph_lists = [self_loop(g) for g in self.val.graph_lists]\n        self.test.graph_lists = [self_loop(g) for g in self.test.graph_lists]"
  },
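`HIVDataset.collate` above unzips a list of `(graph, label)` samples into parallel lists before batching. The unzip step in isolation (in the repo the graphs then go through `dgl.batch` and the labels through `torch.cat`; plain lists stand in here):

```python
# The unzip step of HIVDataset.collate in isolation: a list of (graph, label)
# samples becomes parallel graph/label lists. In the repo the graphs then go
# through dgl.batch and the labels through torch.cat; plain lists stand in here.

def collate(samples):
    graphs, labels = map(list, zip(*samples))
    return graphs, labels

graphs, labels = collate([("g0", 1), ("g1", 0), ("g2", 1)])
print(graphs, labels)  # ['g0', 'g1', 'g2'] [1, 0, 1]
```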
  {
    "path": "realworld_benchmark/data/download_datasets.sh",
    "content": "# MIT License\n# Copyright (c) 2020 Vijay Prakash Dwivedi, Chaitanya K. Joshi, Thomas Laurent, Yoshua Bengio, Xavier Bresson\n\n\n# Command to download dataset:\n#   bash script_download_all_datasets.sh\n\n\n# ZINC\nFILE=ZINC.pkl\nif test -f \"$FILE\"; then\n\techo -e \"$FILE already downloaded.\"\nelse\n\techo -e \"\\ndownloading $FILE...\"\n\tcurl https://www.dropbox.com/s/bhimk9p1xst6dvo/ZINC.pkl?dl=1 -o ZINC.pkl -J -L -k\nfi\n\n# MNIST and CIFAR10\nFILE=MNIST.pkl\nif test -f \"$FILE\"; then\n\techo -e \"$FILE already downloaded.\"\nelse\n\techo -e \"\\ndownloading $FILE...\"\n\tcurl https://www.dropbox.com/s/wcfmo4yvnylceaz/MNIST.pkl?dl=1 -o MNIST.pkl -J -L -k\nfi\n\nFILE=CIFAR10.pkl\nif test -f \"$FILE\"; then\n\techo -e \"$FILE already downloaded.\"\nelse\n\techo -e \"\\ndownloading $FILE...\"\n\tcurl https://www.dropbox.com/s/agocm8pxg5u8yb5/CIFAR10.pkl?dl=1 -o CIFAR10.pkl -J -L -k\nfi"
  },
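Each block of `download_datasets.sh` follows the same check-then-download pattern. A Python rendering of that pattern (the function name and URL below are illustrative placeholders, not repo code):

```python
# Python rendering (illustrative) of the check-then-download pattern the shell
# script applies per dataset file; the URL below is a placeholder.
import os
import urllib.request

def download_if_missing(path, url):
    """Fetch url into path unless the file already exists; return a status string."""
    if os.path.isfile(path):
        return f"{path} already downloaded."
    urllib.request.urlretrieve(url, path)  # network access happens only here
    return f"downloaded {path}."

# Demo against a file created beforehand, so no network is touched.
open("/tmp/ZINC_demo.pkl", "wb").close()
print(download_if_missing("/tmp/ZINC_demo.pkl", "https://example.invalid/ZINC.pkl"))
```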
  {
    "path": "realworld_benchmark/data/molecules.py",
    "content": "# MIT License\n# Copyright (c) 2020 Vijay Prakash Dwivedi, Chaitanya K. Joshi, Thomas Laurent, Yoshua Bengio, Xavier Bresson\n\n\nimport torch\nimport pickle\nimport torch.utils.data\nimport time\nimport numpy as np\nimport csv\nimport dgl\n\n\nclass MoleculeDGL(torch.utils.data.Dataset):\n    def __init__(self, data_dir, split, num_graphs):\n        self.data_dir = data_dir\n        self.split = split\n        self.num_graphs = num_graphs\n\n        with open(data_dir + \"/%s.pickle\" % self.split, \"rb\") as f:\n            self.data = pickle.load(f)\n\n        # loading the sampled indices from file ./zinc_molecules/<split>.index\n        with open(data_dir + \"/%s.index\" % self.split, \"r\") as f:\n            data_idx = [list(map(int, idx)) for idx in csv.reader(f)]\n            self.data = [self.data[i] for i in data_idx[0]]\n\n        assert len(self.data) == num_graphs, \"Sample num_graphs again; available idx: train/val/test => 10k/1k/1k\"\n\n        \"\"\"\n        data is a list of Molecule dict objects with following attributes\n        \n          molecule = data[idx]\n        ; molecule['num_atom'] : nb of atoms, an integer (N)\n        ; molecule['atom_type'] : tensor of size N, each element is an atom type, an integer between 0 and num_atom_type\n        ; molecule['bond_type'] : tensor of size N x N, each element is a bond type, an integer between 0 and num_bond_type\n        ; molecule['logP_SA_cycle_normalized'] : the chemical property to regress, a float variable\n        \"\"\"\n\n        self.graph_lists = []\n        self.graph_labels = []\n        self.n_samples = len(self.data)\n        self._prepare()\n\n    def _prepare(self):\n        print(\"preparing %d graphs for the %s set...\" % (self.num_graphs, self.split.upper()))\n\n        for molecule in self.data:\n            node_features = molecule['atom_type'].long()\n\n            adj = molecule['bond_type']\n            edge_list = (adj != 0).nonzero()  # converting adj 
matrix to edge_list\n\n            edge_idxs_in_adj = edge_list.split(1, dim=1)\n            edge_features = adj[edge_idxs_in_adj].reshape(-1).long()\n\n            # Create the DGL Graph\n            g = dgl.DGLGraph()\n            g.add_nodes(molecule['num_atom'])\n            g.ndata['feat'] = node_features\n\n            for src, dst in edge_list:\n                g.add_edges(src.item(), dst.item())\n            g.edata['feat'] = edge_features\n\n            self.graph_lists.append(g)\n            self.graph_labels.append(molecule['logP_SA_cycle_normalized'])\n\n    def __len__(self):\n        \"\"\"Return the number of graphs in the dataset.\"\"\"\n        return self.n_samples\n\n    def __getitem__(self, idx):\n        \"\"\"\n            Get the idx^th sample.\n            Parameters\n            ---------\n            idx : int\n                The sample index.\n            Returns\n            -------\n            (dgl.DGLGraph, int)\n                DGLGraph with node feature stored in `feat` field\n                And its label.\n        \"\"\"\n        return self.graph_lists[idx], self.graph_labels[idx]\n\n\nclass MoleculeDatasetDGL(torch.utils.data.Dataset):\n    def __init__(self, name='Zinc'):\n        t0 = time.time()\n        self.name = name\n\n        self.num_atom_type = 28  # known meta-info about the zinc dataset; can be calculated as well\n        self.num_bond_type = 4  # known meta-info about the zinc dataset; can be calculated as well\n\n        data_dir = './data/molecules'\n\n        self.train = MoleculeDGL(data_dir, 'train', num_graphs=10000)\n        self.val = MoleculeDGL(data_dir, 'val', num_graphs=1000)\n        self.test = MoleculeDGL(data_dir, 'test', num_graphs=1000)\n        print(\"Time taken: {:.4f}s\".format(time.time() - t0))\n\n\ndef self_loop(g):\n    \"\"\"\n        Utility function only, to be used only when necessary as per user self_loop flag\n        : Overwriting the function dgl.transform.add_self_loop() to not 
 miss ndata['feat'] and edata['feat']\n        \n        \n        This function is called inside a function in MoleculeDataset class.\n    \"\"\"\n    new_g = dgl.DGLGraph()\n    new_g.add_nodes(g.number_of_nodes())\n    new_g.ndata['feat'] = g.ndata['feat']\n\n    src, dst = g.all_edges(order=\"eid\")\n    src = dgl.backend.zerocopy_to_numpy(src)\n    dst = dgl.backend.zerocopy_to_numpy(dst)\n    non_self_edges_idx = src != dst\n    nodes = np.arange(g.number_of_nodes())\n    new_g.add_edges(src[non_self_edges_idx], dst[non_self_edges_idx])\n    new_g.add_edges(nodes, nodes)\n\n    # This new edata is not used since this function gets called only for GCN, GAT\n    # However, we need this for the generic requirement of ndata and edata\n    new_g.edata['feat'] = torch.zeros(new_g.number_of_edges())\n    return new_g\n\n\nclass MoleculeDataset(torch.utils.data.Dataset):\n\n    def __init__(self, name):\n        \"\"\"\n            Loading molecule datasets (e.g. ZINC)\n        \"\"\"\n        start = time.time()\n        print(\"[I] Loading dataset %s...\" % (name))\n        self.name = name\n        data_dir = 'data/'\n        with open(data_dir + name + '.pkl', \"rb\") as f:\n            f = pickle.load(f)\n            self.train = f[0]\n            self.val = f[1]\n            self.test = f[2]\n            self.num_atom_type = f[3]\n            self.num_bond_type = f[4]\n        print('train, test, val sizes :', len(self.train), len(self.test), len(self.val))\n        print(\"[I] Finished loading.\")\n        print(\"[I] Data load time: {:.4f}s\".format(time.time() - start))\n\n    # form a mini batch from a given list of samples = [(graph, label) pairs]\n    def collate(self, samples):\n        # The input is a list of (graph, label) pairs.\n        graphs, labels = map(list, zip(*samples))\n        labels = torch.tensor(np.array(labels)).unsqueeze(1)\n        tab_sizes_n = [graphs[i].number_of_nodes() for i in range(len(graphs))]\n        tab_snorm_n = 
[torch.FloatTensor(size, 1).fill_(1. / float(size)) for size in tab_sizes_n]\n        snorm_n = torch.cat(tab_snorm_n).sqrt()\n        tab_sizes_e = [graphs[i].number_of_edges() for i in range(len(graphs))]\n        tab_snorm_e = [torch.FloatTensor(size, 1).fill_(1. / float(size)) for size in tab_sizes_e]\n        snorm_e = torch.cat(tab_snorm_e).sqrt()\n        batched_graph = dgl.batch(graphs)\n        return batched_graph, labels, snorm_n, snorm_e\n\n    def _add_self_loops(self):\n        # function for adding self loops\n        # this function will be called only if self_loop flag is True\n\n        self.train.graph_lists = [self_loop(g) for g in self.train.graph_lists]\n        self.val.graph_lists = [self_loop(g) for g in self.val.graph_lists]\n        self.test.graph_lists = [self_loop(g) for g in self.test.graph_lists]\n"
  },
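`MoleculeDataset.collate` also builds a graph-size normalisation: every node of a graph with `n` nodes receives the value `sqrt(1/n)` (`snorm_n`; `snorm_e` does the same per edge). A pure-Python sketch of the node part (`node_snorm` is an illustrative helper, not a repo function):

```python
# Pure-Python sketch of the per-node graph-size normalisation ("snorm_n") built
# in MoleculeDataset.collate: each node of a graph with n nodes gets sqrt(1/n).
# node_snorm is an illustrative helper, not a repo function.
import math

def node_snorm(graph_sizes):
    """[n_1, n_2, ...] -> flat list with sqrt(1/n_i) repeated n_i times."""
    return [math.sqrt(1.0 / n) for n in graph_sizes for _ in range(n)]

print(node_snorm([1, 4]))  # [1.0, 0.5, 0.5, 0.5, 0.5]
```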
  {
    "path": "realworld_benchmark/data/superpixels.py",
    "content": "# MIT License\n# Copyright (c) 2020 Vijay Prakash Dwivedi, Chaitanya K. Joshi, Thomas Laurent, Yoshua Bengio, Xavier Bresson\n\n\nimport os\nimport pickle\nfrom scipy.spatial.distance import cdist\nimport numpy as np\nimport itertools\n\nimport dgl\nimport torch\nimport torch.utils.data\n\nimport time\n\nimport csv\nfrom sklearn.model_selection import StratifiedShuffleSplit\n\n\n\n\ndef sigma(dists, kth=8):\n    # Compute sigma and reshape\n    try:\n        # Get k-nearest neighbors for each node\n        knns = np.partition(dists, kth, axis=-1)[:, kth::-1]\n        sigma = knns.sum(axis=1).reshape((knns.shape[0], 1))/kth\n    except ValueError:     # handling for graphs with num_nodes less than kth\n        num_nodes = dists.shape[0]\n        # this sigma value is irrelevant since not used for final compute_edge_list\n        sigma = np.array([1]*num_nodes).reshape(num_nodes,1)\n        \n    return sigma + 1e-8 # adding epsilon to avoid zero value of sigma\n\n\ndef compute_adjacency_matrix_images(coord, feat, use_feat=True, kth=8):\n    coord = coord.reshape(-1, 2)\n    # Compute coordinate distance\n    c_dist = cdist(coord, coord)\n    \n    if use_feat:\n        # Compute feature distance\n        f_dist = cdist(feat, feat)\n        # Compute adjacency\n        A = np.exp(- (c_dist/sigma(c_dist))**2 - (f_dist/sigma(f_dist))**2 )\n    else:\n        A = np.exp(- (c_dist/sigma(c_dist))**2)\n        \n    # Convert to symmetric matrix\n    A = 0.5 * (A + A.T)\n    A[np.diag_indices_from(A)] = 0\n    return A        \n\n\ndef compute_edges_list(A, kth=8+1):\n    # Get k-similar neighbor indices for each node\n\n    num_nodes = A.shape[0]\n    new_kth = num_nodes - kth\n    \n    if num_nodes > 9:\n        knns = np.argpartition(A, new_kth-1, axis=-1)[:, new_kth:-1]\n        knn_values = np.partition(A, new_kth-1, axis=-1)[:, new_kth:-1] # NEW\n    else:\n        # handling for graphs with less than kth nodes\n        # in such cases, the resulting 
graph will be fully connected\n        knns = np.tile(np.arange(num_nodes), num_nodes).reshape(num_nodes, num_nodes)\n        knn_values = A # NEW\n        \n        # removing self loop\n        if num_nodes != 1:\n            knn_values = A[knns != np.arange(num_nodes)[:,None]].reshape(num_nodes,-1) # NEW\n            knns = knns[knns != np.arange(num_nodes)[:,None]].reshape(num_nodes,-1)\n    return knns, knn_values # NEW\n\n\nclass SuperPixDGL(torch.utils.data.Dataset):\n    def __init__(self,\n                 data_dir,\n                 dataset,\n                 split,\n                 use_mean_px=True,\n                 use_coord=True):\n\n        self.split = split\n        \n        self.graph_lists = []\n        \n        if dataset == 'MNIST':\n            self.img_size = 28\n            with open(os.path.join(data_dir, 'mnist_75sp_%s.pkl' % split), 'rb') as f:\n                self.labels, self.sp_data = pickle.load(f)\n                self.graph_labels = torch.LongTensor(self.labels)\n        elif dataset == 'CIFAR10':\n            self.img_size = 32\n            with open(os.path.join(data_dir, 'cifar10_150sp_%s.pkl' % split), 'rb') as f:\n                self.labels, self.sp_data = pickle.load(f)\n                self.graph_labels = torch.LongTensor(self.labels)\n                \n        self.use_mean_px = use_mean_px\n        self.use_coord = use_coord\n        self.n_samples = len(self.labels)\n        \n        self._prepare()\n    \n    def _prepare(self):\n        print(\"preparing %d graphs for the %s set...\" % (self.n_samples, self.split.upper()))\n        self.Adj_matrices, self.node_features, self.edges_lists, self.edge_features = [], [], [], []\n        for index, sample in enumerate(self.sp_data):\n            mean_px, coord = sample[:2]\n            \n            try:\n                coord = coord / self.img_size\n            except AttributeError:\n                VOC_has_variable_image_sizes = True\n                \n            if 
self.use_mean_px:\n                A = compute_adjacency_matrix_images(coord, mean_px) # using super-pixel locations + features\n            else:\n                A = compute_adjacency_matrix_images(coord, mean_px, False) # using only super-pixel locations\n            edges_list, edge_values_list = compute_edges_list(A) # NEW\n\n            N_nodes = A.shape[0]\n            \n            mean_px = mean_px.reshape(N_nodes, -1)\n            coord = coord.reshape(N_nodes, 2)\n            x = np.concatenate((mean_px, coord), axis=1)\n\n            edge_values_list = edge_values_list.reshape(-1) # NEW # TO DOUBLE-CHECK !\n            \n            self.node_features.append(x)\n            self.edge_features.append(edge_values_list) # NEW\n            self.Adj_matrices.append(A)\n            self.edges_lists.append(edges_list)\n        \n        for index in range(len(self.sp_data)):\n            g = dgl.DGLGraph()\n            g.add_nodes(self.node_features[index].shape[0])\n            g.ndata['feat'] = torch.Tensor(self.node_features[index]).half() \n\n            for src, dsts in enumerate(self.edges_lists[index]):\n                # handling for 1 node where the self loop would be the only edge\n                # since, VOC Superpixels has few samples (5 samples) with only 1 node\n                if self.node_features[index].shape[0] == 1:\n                    g.add_edges(src, dsts)\n                else:\n                    g.add_edges(src, dsts[dsts!=src])\n            \n            # adding edge features for Residual Gated ConvNet\n            edge_feat_dim = g.ndata['feat'].shape[1] # dim same as node feature dim\n            #g.edata['feat'] = torch.ones(g.number_of_edges(), edge_feat_dim).half() \n            g.edata['feat'] = torch.Tensor(self.edge_features[index]).unsqueeze(1).half()  # NEW \n\n            self.graph_lists.append(g)\n\n    def __len__(self):\n        \"\"\"Return the number of graphs in the dataset.\"\"\"\n        return 
self.n_samples\n\n    def __getitem__(self, idx):\n        \"\"\"\n            Get the idx^th sample.\n            Parameters\n            ---------\n            idx : int\n                The sample index.\n            Returns\n            -------\n            (dgl.DGLGraph, int)\n                DGLGraph with node feature stored in `feat` field\n                And its label.\n        \"\"\"\n        return self.graph_lists[idx], self.graph_labels[idx]\n\n\nclass DGLFormDataset(torch.utils.data.Dataset):\n    \"\"\"\n        DGLFormDataset wrapping graph list and label list as per pytorch Dataset.\n        *lists (list): lists of 'graphs' and 'labels' with same len().\n    \"\"\"\n    def __init__(self, *lists):\n        assert all(len(lists[0]) == len(li) for li in lists)\n        self.lists = lists\n        self.graph_lists = lists[0]\n        self.graph_labels = lists[1]\n\n    def __getitem__(self, index):\n        return tuple(li[index] for li in self.lists)\n\n    def __len__(self):\n        return len(self.lists[0])\n    \n    \nclass SuperPixDatasetDGL(torch.utils.data.Dataset):\n    def __init__(self, name, num_val=5000):\n        \"\"\"\n            Takes a standard image dataset name (MNIST/CIFAR10)\n            and returns the corresponding superpixels graphs.\n            \n            This class uses the SuperPixDGL class above,\n            which contains the steps for generating the superpixels\n            graph from a superpixel .pkl file provided by\n            https://github.com/bknyaz/graph_attention_pool\n            \n            Please refer to the SuperPixDGL class for details.\n        \"\"\"\n        t_data = time.time()\n        self.name = name\n\n        use_mean_px = False  # True: super-pixel locations + features; False: locations only\n        if use_mean_px:\n            print('Adj matrix defined from super-pixel locations + features')\n        else:\n            print('Adj matrix defined from super-pixel locations (only)')\n        use_coord = True\n        self.test = SuperPixDGL(\"./data/superpixels\", dataset=self.name, split='test',\n                            use_mean_px=use_mean_px,\n                            use_coord=use_coord)\n\n        self.train_ = SuperPixDGL(\"./data/superpixels\", dataset=self.name, split='train',\n                             use_mean_px=use_mean_px,\n                             use_coord=use_coord)\n\n        _val_graphs, _val_labels = self.train_[:num_val]\n        _train_graphs, _train_labels = self.train_[num_val:]\n\n        self.val = DGLFormDataset(_val_graphs, _val_labels)\n        self.train = DGLFormDataset(_train_graphs, _train_labels)\n\n        print(\"[I] Data load time: {:.4f}s\".format(time.time()-t_data))\n\n\ndef self_loop(g):\n    \"\"\"\n        Utility function, to be used only when the user sets the self_loop flag:\n        overrides dgl.transform.add_self_loop() so that ndata['feat'] and edata['feat'] are not lost.\n        \n        Called from the SuperPixDataset class below.\n    \"\"\"\n    new_g = dgl.DGLGraph()\n    new_g.add_nodes(g.number_of_nodes())\n    new_g.ndata['feat'] = g.ndata['feat']\n    \n    src, dst = g.all_edges(order=\"eid\")\n    src = dgl.backend.zerocopy_to_numpy(src)\n    dst = dgl.backend.zerocopy_to_numpy(dst)\n    non_self_edges_idx = src != dst\n    nodes = np.arange(g.number_of_nodes())\n    new_g.add_edges(src[non_self_edges_idx], dst[non_self_edges_idx])\n    new_g.add_edges(nodes, nodes)\n    \n    # This new edata is not used since this function gets called only for GCN, GAT\n    # However, we need it for the generic requirement of ndata and edata\n    new_g.edata['feat'] = torch.zeros(new_g.number_of_edges())\n    return new_g\n\n\nclass SuperPixDataset(torch.utils.data.Dataset):\n\n    def __init__(self, name):\n        \"\"\"\n            Loading Superpixels datasets\n  
      \"\"\"\n        start = time.time()\n        print(\"[I] Loading dataset %s...\" % (name))\n        self.name = name\n        data_dir = 'data/'\n        with open(data_dir+name+'.pkl',\"rb\") as f:\n            f = pickle.load(f)\n            self.train = f[0]\n            self.val = f[1]\n            self.test = f[2]\n        print('train, test, val sizes :',len(self.train),len(self.test),len(self.val))\n        print(\"[I] Finished loading.\")\n        print(\"[I] Data load time: {:.4f}s\".format(time.time()-start))\n\n\n    # form a mini batch from a given list of samples = [(graph, label) pairs]\n    def collate(self, samples):\n        # The input samples is a list of pairs (graph, label).\n        graphs, labels = map(list, zip(*samples))\n        labels = torch.tensor(np.array(labels))\n        tab_sizes_n = [ graphs[i].number_of_nodes() for i in range(len(graphs))]\n        tab_snorm_n = [ torch.FloatTensor(size,1).fill_(1./float(size)) for size in tab_sizes_n ]\n        snorm_n = torch.cat(tab_snorm_n).sqrt()  \n        tab_sizes_e = [ graphs[i].number_of_edges() for i in range(len(graphs))]\n        tab_snorm_e = [ torch.FloatTensor(size,1).fill_(1./float(size)) for size in tab_sizes_e ]\n        snorm_e = torch.cat(tab_snorm_e).sqrt()\n        for idx, graph in enumerate(graphs):\n            graphs[idx].ndata['feat'] = graph.ndata['feat'].float()\n            graphs[idx].edata['feat'] = graph.edata['feat'].float()\n        batched_graph = dgl.batch(graphs)\n        return batched_graph, labels, snorm_n, snorm_e\n    \n    def _add_self_loops(self):\n        \n        # function for adding self loops\n        # this function will be called only if self_loop flag is True\n            \n        self.train.graph_lists = [self_loop(g) for g in self.train.graph_lists]\n        self.val.graph_lists = [self_loop(g) for g in self.val.graph_lists]\n        self.test.graph_lists = [self_loop(g) for g in self.test.graph_lists]\n        \n        self.train = 
DGLFormDataset(self.train.graph_lists, self.train.graph_labels)\n        self.val = DGLFormDataset(self.val.graph_lists, self.val.graph_labels)\n        self.test = DGLFormDataset(self.test.graph_lists, self.test.graph_labels)\n\n                            \n\n"
  },
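The superpixel loader above turns each dense adjacency matrix into per-node neighbour lists plus the matching edge values, dropping self-loops (except for single-node graphs, where the self-loop is the only edge). A minimal NumPy sketch of that conversion; the function name `dense_to_edge_lists` is ours, not from the repo:

```python
import numpy as np

def dense_to_edge_lists(A):
    """For every node, list all other nodes (the graph is fully connected)
    together with the corresponding adjacency value, excluding the self-loop,
    mirroring the loader's compute_edges_list logic."""
    num_nodes = A.shape[0]
    # every node is listed as a neighbour of every node
    knns = np.tile(np.arange(num_nodes), num_nodes).reshape(num_nodes, num_nodes)
    knn_values = A
    if num_nodes != 1:  # a 1-node graph keeps its self-loop as its only edge
        mask = knns != np.arange(num_nodes)[:, None]
        knn_values = A[mask].reshape(num_nodes, -1)
        knns = knns[mask].reshape(num_nodes, -1)
    return knns, knn_values

A = np.array([[0.0, 0.5, 0.2],
              [0.5, 0.0, 0.9],
              [0.2, 0.9, 0.0]])
nbrs, vals = dense_to_edge_lists(A)
# node 0's remaining neighbours are nodes 1 and 2
```

Flattening `vals` with `reshape(-1)` then yields one scalar weight per edge, which is what the loader stores in `g.edata['feat']`.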
  {
    "path": "realworld_benchmark/docs/setup.md",
    "content": "# Benchmark setup\n\n\n\n<br>\n\n## 1. Setup Conda\n\n```\n# Conda installation\n\n# For Linux\ncurl -o ~/miniconda.sh -O https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh\n\n# For OSX\ncurl -o ~/miniconda.sh -O https://repo.continuum.io/miniconda/Miniconda3-latest-MacOSX-x86_64.sh\n\nchmod +x ~/miniconda.sh    \n~/miniconda.sh  \n\nsource ~/.bashrc          # For Linux\nsource ~/.bash_profile    # For OSX\n```\n\n\n<br>\n\n## 2. Setup Python environment for CPU\n\n```\n# Clone GitHub repo\nconda install git\ngit clone https://github.com/lukecavabarrett/pna.git\ncd pna\n\n# Install python environment\nconda env create -f environment_cpu.yml   \n\n# Activate environment\nconda activate benchmark_gnn\n```\n\n\n\n<br>\n\n## 3. Setup Python environment for GPU\n\nDGL requires CUDA **10.0**.\n\nFor Ubuntu **18.04**\n\n```\n# Setup CUDA 10.0 on Ubuntu 18.04\nsudo apt-get --purge remove \"*cublas*\" \"cuda*\"\nsudo apt --purge remove \"nvidia*\"\nsudo apt autoremove\nwget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-repo-ubuntu1804_10.0.130-1_amd64.deb \nsudo dpkg -i cuda-repo-ubuntu1804_10.0.130-1_amd64.deb\nsudo apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub\nsudo apt update\nsudo apt install -y cuda-10-0\nsudo reboot\ncat /usr/local/cuda/version.txt # Check CUDA version is 10.0\n\n# Clone GitHub repo\nconda install git\ngit clone https://github.com/lukecavabarrett/pna.git\ncd pna\n\n# Install python environment\nconda env create -f environment_gpu.yml \n\n# Activate environment\nconda activate benchmark_gnn\n```\n\n\n\nFor Ubuntu **16.04**\n\n```\n# Setup CUDA 10.0 on Ubuntu 16.04\nsudo apt-get --purge remove \"*cublas*\" \"cuda*\"\nsudo apt --purge remove \"nvidia*\"\nsudo apt autoremove\nwget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_10.0.130-1_amd64.deb\nsudo dpkg -i 
cuda-repo-ubuntu1604_10.0.130-1_amd64.deb\nsudo apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub\nsudo apt update\nsudo apt install -y cuda-10-0\nsudo reboot\ncat /usr/local/cuda/version.txt # Check CUDA version is 10.0\n\n# Clone GitHub repo\nconda install git\ngit clone https://github.com/lukecavabarrett/pna.git\ncd pna\n\n# Install python environment\nconda env create -f environment_gpu.yml \n\n# Activate environment\nconda activate benchmark_gnn\n```\n\n## 4. Download Datasets\n\n```\n# At the root of the repo\ncd realworld_benchmark/data/ \nbash download_datasets.sh\n```\n\n\n<br><br><br>\n\n"
  },
  {
    "path": "realworld_benchmark/environment_cpu.yml",
    "content": "# MIT License\n# Copyright (c) 2020 Vijay Prakash Dwivedi, Chaitanya K. Joshi, Thomas Laurent, Yoshua Bengio, Xavier Bresson\n\n\nname: benchmark_gnn\nchannels:\n- pytorch \n- dglteam\n- conda-forge\ndependencies:\n- python=3.7.4\n- python-dateutil=2.8.0\n- pytorch=1.3\n- torchvision==0.4.2\n- pillow==6.1\n- dgl=0.4.2\n- numpy=1.16.4\n- matplotlib=3.1.0\n- tensorboard=1.14.0\n- tensorboardx=1.8\n- absl-py\n- networkx=2.3\n- scikit-learn=0.21.2\n- scipy=1.3.0\n- notebook=6.0.0\n- h5py=2.9.0\n- mkl=2019.4\n- ipykernel=5.1.2\n- ipython=7.7.0\n- ipython_genutils=0.2.0\n- ipywidgets=7.5.1\n- jupyter=1.0.0\n- jupyter_client=5.3.1\n- jupyter_console=6.0.0\n- jupyter_core=4.5.0\n- plotly=4.1.1\n- scikit-image=0.15.0\n- requests==2.22.0\n- tqdm==4.43.0\n- pip:\n  - ogb==1.2.2"
  },
  {
    "path": "realworld_benchmark/environment_gpu.yml",
    "content": "# MIT License\n# Copyright (c) 2020 Vijay Prakash Dwivedi, Chaitanya K. Joshi, Thomas Laurent, Yoshua Bengio, Xavier Bresson\n\n\nname: benchmark_gnn_gpu\nchannels:\n- pytorch \n- dglteam\n- conda-forge\n- fragcolor\ndependencies:\n- cuda10.0\n- cudatoolkit=10.0\n- cudnn=7.6.5\n- python=3.7.4\n- python-dateutil=2.8.0\n- pytorch=1.3\n- torchvision==0.4.2\n- pillow==6.1\n- dgl-cuda10.0=0.4.2\n- numpy=1.16.4\n- matplotlib=3.1.0\n- tensorboard=1.14.0\n- tensorboardx=1.8\n- absl-py\n- networkx=2.3\n- scikit-learn=0.21.2\n- scipy=1.3.0\n- notebook=6.0.0\n- h5py=2.9.0\n- mkl=2019.4\n- ipykernel=5.1.2\n- ipython=7.7.0\n- ipython_genutils=0.2.0\n- ipywidgets=7.5.1\n- jupyter=1.0.0\n- jupyter_client=5.3.1\n- jupyter_console=6.0.0\n- jupyter_core=4.5.0\n- plotly=4.1.1\n- scikit-image=0.15.0\n- requests==2.22.0\n- tqdm==4.43.0\n- pip:\n  - ogb==1.2.2"
  },
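The training scripts that follow drive the learning rate with `ReduceLROnPlateau` and stop once it falls below `min_lr`; `main_HIV.py` maximises validation ROC-AUC by stepping the scheduler on the *negated* score in `'min'` mode. A simplified, torch-free sketch of that plateau logic; the `PlateauLR` class is our illustration, not PyTorch's implementation (which also supports thresholds, cooldown and a floor on the learning rate):

```python
class PlateauLR:
    """Tiny stand-in for torch.optim.lr_scheduler.ReduceLROnPlateau:
    multiply lr by `factor` after `patience` epochs without improvement."""
    def __init__(self, lr, factor=0.5, patience=2):
        self.lr, self.factor, self.patience = lr, factor, patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, metric):
        # lower metric is better ('min' mode); to *maximise* a score such as
        # ROC-AUC, pass its negative, as main_HIV.py does
        if metric < self.best:
            self.best = metric
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs > self.patience:
                self.lr *= self.factor
                self.bad_epochs = 0
        return self.lr

sched = PlateauLR(lr=1e-3, factor=0.5, patience=2)
min_lr = 1e-4
for val_loss in [1.0, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9]:
    lr = sched.step(val_loss)
    if lr < min_lr:  # same early-stop condition as the training loops
        break
```

With a validation loss stuck at 0.9, the rate is halved after each run of three non-improving epochs, so it ends at 2.5e-4 here, still above `min_lr`.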
  {
    "path": "realworld_benchmark/main_HIV.py",
    "content": "import numpy as np\nimport os\nimport time\nimport random\nimport argparse, json\nimport torch\nimport torch.optim as optim\nfrom torch.utils.data import DataLoader\nfrom tqdm import tqdm\n\nfrom nets.HIV_graph_classification.pna_net import PNANet\nfrom data.HIV import HIVDataset  # import dataset\nfrom train.train_HIV_graph_classification import train_epoch_sparse as train_epoch, \\\n    evaluate_network_sparse as evaluate_network\n\n\ndef gpu_setup(use_gpu, gpu_id):\n    os.environ[\"CUDA_DEVICE_ORDER\"] = \"PCI_BUS_ID\"\n    os.environ[\"CUDA_VISIBLE_DEVICES\"] = str(gpu_id)\n\n    if torch.cuda.is_available() and use_gpu:\n        print('cuda available with GPU:', torch.cuda.get_device_name(0))\n        device = torch.device(\"cuda\")\n    else:\n        print('cuda not available')\n        device = torch.device(\"cpu\")\n    return device\n\n\ndef view_model_param(net_params):\n    model = PNANet(net_params)\n    total_param = 0\n    print(\"MODEL DETAILS:\\n\")\n    # print(model)\n    for param in model.parameters():\n        # print(param.data.size())\n        total_param += np.prod(list(param.data.size()))\n    print('PNA Total parameters:', total_param)\n    return total_param\n\n\ndef train_val_pipeline(dataset, params, net_params):\n    t0 = time.time()\n    per_epoch_time = []\n\n    trainset, valset, testset = dataset.train, dataset.val, dataset.test\n    device = net_params['device']\n\n    # setting seeds\n    random.seed(params['seed'])\n    np.random.seed(params['seed'])\n    torch.manual_seed(params['seed'])\n    if device.type == 'cuda':\n        torch.cuda.manual_seed(params['seed'])\n\n    print(\"Training Graphs: \", len(trainset))\n    print(\"Validation Graphs: \", len(valset))\n    print(\"Test Graphs: \", len(testset))\n\n    model = PNANet(net_params)\n    model = model.to(device)\n\n    optimizer = optim.Adam(model.parameters(), lr=params['init_lr'], weight_decay=params['weight_decay'])\n    scheduler = 
optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min',\n                                                     factor=params['lr_reduce_factor'],\n                                                     patience=params['lr_schedule_patience'],\n                                                     verbose=True)\n\n    epoch_train_losses, epoch_val_losses = [], []\n    epoch_train_ROCs, epoch_val_ROCs, epoch_test_ROCs = [], [], []\n\n    train_loader = DataLoader(trainset, batch_size=params['batch_size'], shuffle=True, collate_fn=dataset.collate,\n                              pin_memory=True)\n    val_loader = DataLoader(valset, batch_size=params['batch_size'], shuffle=False, collate_fn=dataset.collate,\n                            pin_memory=True)\n    test_loader = DataLoader(testset, batch_size=params['batch_size'], shuffle=False, collate_fn=dataset.collate,\n                             pin_memory=True)\n\n    # At any point you can hit Ctrl + C to break out of training early.\n    try:\n        with tqdm(range(params['epochs']), unit='epoch') as t:\n            for epoch in t:\n                if epoch == -1:\n                    model.reset_params()\n\n                t.set_description('Epoch %d' % epoch)\n                start = time.time()\n\n                epoch_train_loss, epoch_train_roc, optimizer = train_epoch(model, optimizer, device, train_loader, epoch)\n                epoch_val_loss, epoch_val_roc = evaluate_network(model, device, val_loader, epoch)\n\n                epoch_train_losses.append(epoch_train_loss)\n                epoch_val_losses.append(epoch_val_loss)\n                epoch_train_ROCs.append(epoch_train_roc.item())\n                epoch_val_ROCs.append(epoch_val_roc.item())\n\n                _, epoch_test_roc = evaluate_network(model, device, test_loader, epoch)\n                epoch_test_ROCs.append(epoch_test_roc.item())\n\n                t.set_postfix(time=time.time() - start, lr=optimizer.param_groups[0]['lr'],\n            
                  train_loss=epoch_train_loss, val_loss=epoch_val_loss,\n                              train_ROC=epoch_train_roc.item(), val_ROC=epoch_val_roc.item(),\n                              test_ROC=epoch_test_roc.item(), refresh=False)\n\n                per_epoch_time.append(time.time() - start)\n                scheduler.step(-epoch_val_roc.item())\n\n                if optimizer.param_groups[0]['lr'] < params['min_lr']:\n                    print(\"\\n!! LR EQUAL TO MIN LR SET.\")\n                    break\n\n                # Stop training after params['max_time'] hours\n                if time.time() - t0 > params['max_time'] * 3600:\n                    print('-' * 89)\n                    print(\"Max_time for training elapsed {:.2f} hours, so stopping\".format(params['max_time']))\n                    break\n\n                print('')\n\n    except KeyboardInterrupt:\n        print('-' * 89)\n        print('Exiting from training early because of KeyboardInterrupt')\n\n    best_val_epoch = np.argmax(np.array(epoch_val_ROCs))\n    best_train_epoch = np.argmax(np.array(epoch_train_ROCs))\n    best_val_roc = epoch_val_ROCs[best_val_epoch]\n    best_val_test_roc = epoch_test_ROCs[best_val_epoch]\n    best_val_train_roc = epoch_train_ROCs[best_val_epoch]\n    best_train_roc = epoch_train_ROCs[best_train_epoch]\n\n    print(\"Best Train ROC: {:.4f}\".format(best_train_roc))\n    print(\"Best Val ROC: {:.4f}\".format(best_val_roc))\n    print(\"Test ROC of Best Val: {:.4f}\".format(best_val_test_roc))\n    print(\"Train ROC of Best Val: {:.4f}\".format(best_val_train_roc))\n    print(\"TOTAL TIME TAKEN: {:.4f}s\".format(time.time() - t0))\n    print(\"AVG TIME PER EPOCH: {:.4f}s\".format(np.mean(per_epoch_time)))\n\n\ndef main():\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--config', help=\"Please give a config.json file with training/model/data/param details\")\n    parser.add_argument('--gpu_id', help=\"Please give a value for gpu 
id\")\n    parser.add_argument('--dataset', help=\"Please give a value for dataset name\")\n    parser.add_argument('--seed', help=\"Please give a value for seed\")\n    parser.add_argument('--epochs', type=int, help=\"Please give a value for epochs\")\n    parser.add_argument('--batch_size', help=\"Please give a value for batch_size\")\n    parser.add_argument('--init_lr', help=\"Please give a value for init_lr\")\n    parser.add_argument('--lr_reduce_factor', help=\"Please give a value for lr_reduce_factor\")\n    parser.add_argument('--lr_schedule_patience', help=\"Please give a value for lr_schedule_patience\")\n    parser.add_argument('--min_lr', help=\"Please give a value for min_lr\")\n    parser.add_argument('--weight_decay', help=\"Please give a value for weight_decay\")\n    parser.add_argument('--print_epoch_interval', help=\"Please give a value for print_epoch_interval\")\n    parser.add_argument('--L', help=\"Please give a value for L\")\n    parser.add_argument('--hidden_dim', help=\"Please give a value for hidden_dim\")\n    parser.add_argument('--out_dim', help=\"Please give a value for out_dim\")\n    parser.add_argument('--residual', help=\"Please give a value for residual\")\n    parser.add_argument('--edge_feat', help=\"Please give a value for edge_feat\")\n    parser.add_argument('--readout', help=\"Please give a value for readout\")\n    parser.add_argument('--in_feat_dropout', help=\"Please give a value for in_feat_dropout\")\n    parser.add_argument('--dropout', help=\"Please give a value for dropout\")\n    parser.add_argument('--batch_norm', help=\"Please give a value for batch_norm\")\n    parser.add_argument('--max_time', help=\"Please give a value for max_time\")\n    parser.add_argument('--expid', help='Experiment id.')\n    parser.add_argument('--aggregators', type=str, help='Aggregators to use.')\n    parser.add_argument('--scalers', type=str, help='Scalers to use.')\n    parser.add_argument('--posttrans_layers', type=int, 
help='posttrans_layers.')\n\n    args = parser.parse_args()\n    print(args.config)\n\n    with open(args.config) as f:\n        config = json.load(f)\n\n    # device\n    if args.gpu_id is not None:\n        config['gpu']['id'] = int(args.gpu_id)\n        config['gpu']['use'] = True\n    device = gpu_setup(config['gpu']['use'], config['gpu']['id'])\n\n    # dataset, out_dir\n    if args.dataset is not None:\n        DATASET_NAME = args.dataset\n    else:\n        DATASET_NAME = config['dataset']\n    dataset = HIVDataset(DATASET_NAME)\n\n    # parameters\n    params = config['params']\n    if args.seed is not None:\n        params['seed'] = int(args.seed)\n    if args.epochs is not None:\n        params['epochs'] = int(args.epochs)\n    if args.batch_size is not None:\n        params['batch_size'] = int(args.batch_size)\n    if args.init_lr is not None:\n        params['init_lr'] = float(args.init_lr)\n    if args.lr_reduce_factor is not None:\n        params['lr_reduce_factor'] = float(args.lr_reduce_factor)\n    if args.lr_schedule_patience is not None:\n        params['lr_schedule_patience'] = int(args.lr_schedule_patience)\n    if args.min_lr is not None:\n        params['min_lr'] = float(args.min_lr)\n    if args.weight_decay is not None:\n        params['weight_decay'] = float(args.weight_decay)\n    if args.print_epoch_interval is not None:\n        params['print_epoch_interval'] = int(args.print_epoch_interval)\n    if args.max_time is not None:\n        params['max_time'] = float(args.max_time)\n\n    # network parameters\n    net_params = config['net_params']\n    net_params['device'] = device\n    net_params['gpu_id'] = config['gpu']['id']\n    net_params['batch_size'] = params['batch_size']\n    if args.L is not None:\n        net_params['L'] = int(args.L)\n    if args.hidden_dim is not None:\n        net_params['hidden_dim'] = int(args.hidden_dim)\n    if args.out_dim is not None:\n        net_params['out_dim'] = int(args.out_dim)\n    if 
args.residual is not None:\n        net_params['residual'] = True if args.residual == 'True' else False\n    if args.edge_feat is not None:\n        net_params['edge_feat'] = True if args.edge_feat == 'True' else False\n    if args.readout is not None:\n        net_params['readout'] = args.readout\n    if args.in_feat_dropout is not None:\n        net_params['in_feat_dropout'] = float(args.in_feat_dropout)\n    if args.dropout is not None:\n        net_params['dropout'] = float(args.dropout)\n    if args.batch_norm is not None:\n        net_params['batch_norm'] = True if args.batch_norm == 'True' else False\n    if args.aggregators is not None:\n        net_params['aggregators'] = args.aggregators\n    if args.scalers is not None:\n        net_params['scalers'] = args.scalers\n    if args.posttrans_layers is not None:\n        net_params['posttrans_layers'] = args.posttrans_layers\n\n    D = torch.cat([torch.sparse.sum(g.adjacency_matrix(transpose=True), dim=-1).to_dense() for g in\n                   dataset.train.graph_lists])\n    net_params['avg_d'] = dict(lin=torch.mean(D),\n                               exp=torch.mean(torch.exp(torch.div(1, D)) - 1),\n                               log=torch.mean(torch.log(D + 1)))\n\n    net_params['total_param'] = view_model_param(net_params)\n    train_val_pipeline(dataset, params, net_params)\n\n\nmain()\n"
  },
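Before training, `main_HIV.py` collects the node degrees D of all training graphs and precomputes the average-degree statistics that PNA's scalers normalise by: mean(D), mean(exp(1/D) − 1) and mean(log(D + 1)). A torch-free NumPy sketch of the same computation; the function name `avg_degree_stats` is ours, and it assumes all degrees are positive (as they are after the loaders above add edges to every node):

```python
import numpy as np

def avg_degree_stats(degrees):
    """Average-degree normalisers for PNA's linear / exponential /
    logarithmic scalers, mirroring the avg_d dict built in main_HIV.py."""
    d = np.asarray(degrees, dtype=float)
    return {
        "lin": d.mean(),
        "exp": np.mean(np.exp(1.0 / d) - 1.0),
        "log": np.mean(np.log(d + 1.0)),
    }

# degrees pooled over the whole training set, not per graph
stats = avg_degree_stats([1, 2, 2, 3, 4])
```

These constants are computed once over the training split and passed into the network via `net_params['avg_d']`, so every PNA layer divides by the same degree statistics at train and test time.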
  {
    "path": "realworld_benchmark/main_molecules.py",
    "content": "\"\"\"\n    IMPORTING LIBS\n\"\"\"\n\nimport numpy as np\nimport os\nimport time\nimport random\nimport argparse, json\n\nimport torch\n\nimport torch.optim as optim\nfrom torch.utils.data import DataLoader\n\nfrom tensorboardX import SummaryWriter\nfrom tqdm import tqdm\n\n\nclass DotDict(dict):\n    def __init__(self, **kwds):\n        self.update(kwds)\n        self.__dict__ = self\n\n\n\"\"\"\n    IMPORTING CUSTOM MODULES/METHODS\n\"\"\"\nfrom nets.molecules_graph_regression.pna_net import PNANet\nfrom data.molecules import MoleculeDataset  # import dataset\nfrom train.train_molecules_graph_regression import train_epoch, evaluate_network\n\n\"\"\"\n    GPU Setup\n\"\"\"\n\n\ndef gpu_setup(use_gpu, gpu_id):\n    os.environ[\"CUDA_DEVICE_ORDER\"] = \"PCI_BUS_ID\"\n    os.environ[\"CUDA_VISIBLE_DEVICES\"] = str(gpu_id)\n\n    if torch.cuda.is_available() and use_gpu:\n        print('cuda available with GPU:', torch.cuda.get_device_name(0))\n        device = torch.device(\"cuda\")\n    else:\n        print('cuda not available')\n        device = torch.device(\"cpu\")\n    return device\n\n\n\"\"\"\n    VIEWING MODEL CONFIG AND PARAMS\n\"\"\"\n\n\ndef view_model_param(net_params):\n    model = PNANet(net_params)\n    total_param = 0\n    print(\"MODEL DETAILS:\\n\")\n    # print(model)\n    for param in model.parameters():\n        # print(param.data.size())\n        total_param += np.prod(list(param.data.size()))\n    print('PNA Total parameters:', total_param)\n    return total_param\n\n\n\"\"\"\n    TRAINING CODE\n\"\"\"\n\n\ndef train_val_pipeline(dataset, params, net_params, dirs):\n    t0 = time.time()\n    per_epoch_time = []\n\n    DATASET_NAME = dataset.name\n    MODEL_NAME = 'PNA'\n\n    trainset, valset, testset = dataset.train, dataset.val, dataset.test\n\n    root_log_dir, root_ckpt_dir, write_file_name, write_config_file = dirs\n    device = net_params['device']\n\n    # Write the network and optimization hyper-parameters in folder 
config/\n    with open(write_config_file + '.txt', 'w') as f:\n        f.write(\"\"\"Dataset: {},\\nModel: {}\\n\\nparams={}\\n\\nnet_params={}\\n\\n\\nTotal Parameters: {}\\n\\n\"\"\".format(\n            DATASET_NAME, MODEL_NAME, params, net_params, net_params['total_param']))\n\n    log_dir = os.path.join(root_log_dir, \"RUN_\" + str(0))\n    writer = SummaryWriter(log_dir=log_dir)\n\n    # setting seeds\n    random.seed(params['seed'])\n    np.random.seed(params['seed'])\n    torch.manual_seed(params['seed'])\n    if device.type == 'cuda':\n        torch.cuda.manual_seed(params['seed'])\n\n    print(\"Training Graphs: \", len(trainset))\n    print(\"Validation Graphs: \", len(valset))\n    print(\"Test Graphs: \", len(testset))\n\n    model = PNANet(net_params)\n    model = model.to(device)\n\n    optimizer = optim.Adam(model.parameters(), lr=params['init_lr'], weight_decay=params['weight_decay'])\n    scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min',\n                                                     factor=params['lr_reduce_factor'],\n                                                     patience=params['lr_schedule_patience'],\n                                                     verbose=True)\n\n    epoch_train_losses, epoch_val_losses = [], []\n    epoch_train_MAEs, epoch_val_MAEs = [], []\n\n    train_loader = DataLoader(trainset, batch_size=params['batch_size'], shuffle=True, collate_fn=dataset.collate)\n    val_loader = DataLoader(valset, batch_size=params['batch_size'], shuffle=False, collate_fn=dataset.collate)\n    test_loader = DataLoader(testset, batch_size=params['batch_size'], shuffle=False, collate_fn=dataset.collate)\n\n    # At any point you can hit Ctrl + C to break out of training early.\n    try:\n        with tqdm(range(params['epochs']), unit='epoch') as t:\n            for epoch in t:\n\n                t.set_description('Epoch %d' % epoch)\n\n                start = time.time()\n\n                
epoch_train_loss, epoch_train_mae, optimizer = train_epoch(model, optimizer, device, train_loader,\n                                                                           epoch)\n                epoch_val_loss, epoch_val_mae = evaluate_network(model, device, val_loader, epoch)\n\n                epoch_train_losses.append(epoch_train_loss)\n                epoch_val_losses.append(epoch_val_loss)\n                epoch_train_MAEs.append(epoch_train_mae.detach().cpu().item())\n                epoch_val_MAEs.append(epoch_val_mae.detach().cpu().item())\n\n                writer.add_scalar('train/_loss', epoch_train_loss, epoch)\n                writer.add_scalar('val/_loss', epoch_val_loss, epoch)\n                writer.add_scalar('train/_mae', epoch_train_mae, epoch)\n                writer.add_scalar('val/_mae', epoch_val_mae, epoch)\n                writer.add_scalar('learning_rate', optimizer.param_groups[0]['lr'], epoch)\n\n                _, epoch_test_mae = evaluate_network(model, device, test_loader, epoch)\n                t.set_postfix(time=time.time() - start, lr=optimizer.param_groups[0]['lr'],\n                              train_loss=epoch_train_loss, val_loss=epoch_val_loss,\n                              train_MAE=epoch_train_mae.item(), val_MAE=epoch_val_mae.item(),\n                              test_MAE=epoch_test_mae.item(), refresh=False)\n\n                per_epoch_time.append(time.time() - start)\n\n                scheduler.step(epoch_val_loss)\n\n                if optimizer.param_groups[0]['lr'] < params['min_lr']:\n                    print(\"\\n!! 
LR EQUAL TO MIN LR SET.\")\n                    break\n\n                # Stop training after params['max_time'] hours\n                if time.time() - t0 > params['max_time'] * 3600:\n                    print('-' * 89)\n                    print(\"Max_time for training elapsed {:.2f} hours, so stopping\".format(params['max_time']))\n                    break\n\n    except KeyboardInterrupt:\n        print('-' * 89)\n        print('Exiting from training early because of KeyboardInterrupt')\n\n    _, test_mae = evaluate_network(model, device, test_loader, epoch)\n    _, val_mae = evaluate_network(model, device, val_loader, epoch)\n    _, train_mae = evaluate_network(model, device, train_loader, epoch)\n\n    test_mae = test_mae.item()\n    val_mae = val_mae.item()\n    train_mae = train_mae.item()\n\n    print(\"Train MAE: {:.4f}\".format(train_mae))\n    print(\"Val MAE: {:.4f}\".format(val_mae))\n    print(\"Test MAE: {:.4f}\".format(test_mae))\n    print(\"TOTAL TIME TAKEN: {:.4f}s\".format(time.time() - t0))\n    print(\"AVG TIME PER EPOCH: {:.4f}s\".format(np.mean(per_epoch_time)))\n\n    writer.close()\n\n    \"\"\"\n        Write the results in out_dir/results folder\n    \"\"\"\n    with open(write_file_name + '.txt', 'w') as f:\n        f.write(\"\"\"Dataset: {},\\nModel: {}\\n\\nparams={}\\n\\nnet_params={}\\n\\n{}\\n\\nTotal Parameters: {}\\n\\n\n    FINAL RESULTS\\nTEST MAE: {:.4f}\\nTRAIN MAE: {:.4f}\\n\\n\n    Total Time Taken: {:.4f} hrs\\nAverage Time Per Epoch: {:.4f} s\\n\\n\\n\"\"\" \\\n                .format(DATASET_NAME, MODEL_NAME, params, net_params, model, net_params['total_param'],\n                        np.mean(np.array(test_mae)), np.array(train_mae), (time.time() - t0) / 3600,\n                        np.mean(per_epoch_time)))\n\n\ndef main():\n    \"\"\"\n        USER CONTROLS\n    \"\"\"\n\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--config', help=\"Please give a config.json file with 
training/model/data/param details\")\n    parser.add_argument('--gpu_id', help=\"Please give a value for gpu id\")\n    parser.add_argument('--model', help=\"Please give a value for model name\")\n    parser.add_argument('--dataset', help=\"Please give a value for dataset name\")\n    parser.add_argument('--out_dir', help=\"Please give a value for out_dir\")\n    parser.add_argument('--seed', help=\"Please give a value for seed\")\n    parser.add_argument('--epochs', help=\"Please give a value for epochs\")\n    parser.add_argument('--batch_size', help=\"Please give a value for batch_size\")\n    parser.add_argument('--init_lr', help=\"Please give a value for init_lr\")\n    parser.add_argument('--lr_reduce_factor', help=\"Please give a value for lr_reduce_factor\")\n    parser.add_argument('--lr_schedule_patience', help=\"Please give a value for lr_schedule_patience\")\n    parser.add_argument('--min_lr', help=\"Please give a value for min_lr\")\n    parser.add_argument('--weight_decay', help=\"Please give a value for weight_decay\")\n    parser.add_argument('--print_epoch_interval', help=\"Please give a value for print_epoch_interval\")\n    parser.add_argument('--L', help=\"Please give a value for L\")\n    parser.add_argument('--hidden_dim', help=\"Please give a value for hidden_dim\")\n    parser.add_argument('--out_dim', help=\"Please give a value for out_dim\")\n    parser.add_argument('--residual', help=\"Please give a value for residual\")\n    parser.add_argument('--edge_feat', help=\"Please give a value for edge_feat\")\n    parser.add_argument('--readout', help=\"Please give a value for readout\")\n    parser.add_argument('--kernel', help=\"Please give a value for kernel\")\n    parser.add_argument('--n_heads', help=\"Please give a value for n_heads\")\n    parser.add_argument('--gated', help=\"Please give a value for gated\")\n    parser.add_argument('--in_feat_dropout', help=\"Please give a value for in_feat_dropout\")\n    
parser.add_argument('--dropout', help=\"Please give a value for dropout\")\n    parser.add_argument('--graph_norm', help=\"Please give a value for graph_norm\")\n    parser.add_argument('--batch_norm', help=\"Please give a value for batch_norm\")\n    parser.add_argument('--sage_aggregator', help=\"Please give a value for sage_aggregator\")\n    parser.add_argument('--data_mode', help=\"Please give a value for data_mode\")\n    parser.add_argument('--num_pool', help=\"Please give a value for num_pool\")\n    parser.add_argument('--gnn_per_block', help=\"Please give a value for gnn_per_block\")\n    parser.add_argument('--embedding_dim', help=\"Please give a value for embedding_dim\")\n    parser.add_argument('--pool_ratio', help=\"Please give a value for pool_ratio\")\n    parser.add_argument('--linkpred', help=\"Please give a value for linkpred\")\n    parser.add_argument('--cat', help=\"Please give a value for cat\")\n    parser.add_argument('--self_loop', help=\"Please give a value for self_loop\")\n    parser.add_argument('--max_time', help=\"Please give a value for max_time\")\n    parser.add_argument('--expid', help='Experiment id.')\n\n    # pna params\n    parser.add_argument('--aggregators', type=str, help='Aggregators to use.')\n    parser.add_argument('--scalers', type=str, help='Scalers to use.')\n    parser.add_argument('--towers', type=int, help='Towers to use.')\n    parser.add_argument('--divide_input_first', type=str, help='Whether to divide the input in first layers.')\n    parser.add_argument('--divide_input_last', type=str, help='Whether to divide the input in last layer.')\n    parser.add_argument('--gru', type=str, help='Whether to use gru.')\n    parser.add_argument('--edge_dim', type=int, help='Size of edge embeddings.')\n    parser.add_argument('--pretrans_layers', type=int, help='pretrans_layers.')\n    parser.add_argument('--posttrans_layers', type=int, help='posttrans_layers.')\n\n    args = parser.parse_args()\n\n    with 
open(args.config) as f:\n        config = json.load(f)\n\n    # device\n    if args.gpu_id is not None:\n        config['gpu']['id'] = int(args.gpu_id)\n        config['gpu']['use'] = True\n    device = gpu_setup(config['gpu']['use'], config['gpu']['id'])\n    # dataset, out_dir\n    if args.dataset is not None:\n        DATASET_NAME = args.dataset\n    else:\n        DATASET_NAME = config['dataset']\n    dataset = MoleculeDataset(DATASET_NAME)\n    if args.out_dir is not None:\n        out_dir = args.out_dir\n    else:\n        out_dir = config['out_dir']\n    # parameters\n    params = config['params']\n    if args.seed is not None:\n        params['seed'] = int(args.seed)\n    if args.epochs is not None:\n        params['epochs'] = int(args.epochs)\n    if args.batch_size is not None:\n        params['batch_size'] = int(args.batch_size)\n    if args.init_lr is not None:\n        params['init_lr'] = float(args.init_lr)\n    if args.lr_reduce_factor is not None:\n        params['lr_reduce_factor'] = float(args.lr_reduce_factor)\n    if args.lr_schedule_patience is not None:\n        params['lr_schedule_patience'] = int(args.lr_schedule_patience)\n    if args.min_lr is not None:\n        params['min_lr'] = float(args.min_lr)\n    if args.weight_decay is not None:\n        params['weight_decay'] = float(args.weight_decay)\n    if args.print_epoch_interval is not None:\n        params['print_epoch_interval'] = int(args.print_epoch_interval)\n    if args.max_time is not None:\n        params['max_time'] = float(args.max_time)\n\n    # network parameters\n    net_params = config['net_params']\n    net_params['device'] = device\n    net_params['gpu_id'] = config['gpu']['id']\n    net_params['batch_size'] = params['batch_size']\n    if args.L is not None:\n        net_params['L'] = int(args.L)\n    if args.hidden_dim is not None:\n        net_params['hidden_dim'] = int(args.hidden_dim)\n    if args.out_dim is not None:\n        net_params['out_dim'] = int(args.out_dim)\n 
   if args.residual is not None:\n        net_params['residual'] = True if args.residual == 'True' else False\n    if args.edge_feat is not None:\n        net_params['edge_feat'] = True if args.edge_feat == 'True' else False\n    if args.readout is not None:\n        net_params['readout'] = args.readout\n    if args.kernel is not None:\n        net_params['kernel'] = int(args.kernel)\n    if args.n_heads is not None:\n        net_params['n_heads'] = int(args.n_heads)\n    if args.gated is not None:\n        net_params['gated'] = True if args.gated == 'True' else False\n    if args.in_feat_dropout is not None:\n        net_params['in_feat_dropout'] = float(args.in_feat_dropout)\n    if args.dropout is not None:\n        net_params['dropout'] = float(args.dropout)\n    if args.graph_norm is not None:\n        net_params['graph_norm'] = True if args.graph_norm == 'True' else False\n    if args.batch_norm is not None:\n        net_params['batch_norm'] = True if args.batch_norm == 'True' else False\n    if args.sage_aggregator is not None:\n        net_params['sage_aggregator'] = args.sage_aggregator\n    if args.data_mode is not None:\n        net_params['data_mode'] = args.data_mode\n    if args.num_pool is not None:\n        net_params['num_pool'] = int(args.num_pool)\n    if args.gnn_per_block is not None:\n        net_params['gnn_per_block'] = int(args.gnn_per_block)\n    if args.embedding_dim is not None:\n        net_params['embedding_dim'] = int(args.embedding_dim)\n    if args.pool_ratio is not None:\n        net_params['pool_ratio'] = float(args.pool_ratio)\n    if args.linkpred is not None:\n        net_params['linkpred'] = True if args.linkpred == 'True' else False\n    if args.cat is not None:\n        net_params['cat'] = True if args.cat == 'True' else False\n    if args.self_loop is not None:\n        net_params['self_loop'] = True if args.self_loop == 'True' else False\n    if args.aggregators is not None:\n        net_params['aggregators'] = 
args.aggregators\n    if args.scalers is not None:\n        net_params['scalers'] = args.scalers\n    if args.towers is not None:\n        net_params['towers'] = args.towers\n    if args.divide_input_first is not None:\n        net_params['divide_input_first'] = True if args.divide_input_first == 'True' else False\n    if args.divide_input_last is not None:\n        net_params['divide_input_last'] = True if args.divide_input_last == 'True' else False\n    if args.gru is not None:\n        net_params['gru'] = True if args.gru == 'True' else False\n    if args.edge_dim is not None:\n        net_params['edge_dim'] = args.edge_dim\n    if args.pretrans_layers is not None:\n        net_params['pretrans_layers'] = args.pretrans_layers\n    if args.posttrans_layers is not None:\n        net_params['posttrans_layers'] = args.posttrans_layers\n\n    # ZINC\n    net_params['num_atom_type'] = dataset.num_atom_type\n    net_params['num_bond_type'] = dataset.num_bond_type\n\n    MODEL_NAME = 'PNA'\n    D = torch.cat([torch.sparse.sum(g.adjacency_matrix(transpose=True), dim=-1).to_dense() for g in\n                   dataset.train.graph_lists])\n    net_params['avg_d'] = dict(lin=torch.mean(D),\n                               exp=torch.mean(torch.exp(torch.div(1, D)) - 1),\n                               log=torch.mean(torch.log(D + 1)))\n\n    root_log_dir = out_dir + 'logs/' + MODEL_NAME + \"_\" + DATASET_NAME + \"_GPU\" + str(\n        config['gpu']['id']) + \"_\" + time.strftime('%Hh%Mm%Ss_on_%b_%d_%Y')\n    root_ckpt_dir = out_dir + 'checkpoints/' + MODEL_NAME + \"_\" + DATASET_NAME + \"_GPU\" + str(\n        config['gpu']['id']) + \"_\" + time.strftime('%Hh%Mm%Ss_on_%b_%d_%Y')\n    write_file_name = out_dir + 'results/result_' + MODEL_NAME + \"_\" + DATASET_NAME + \"_GPU\" + str(\n        config['gpu']['id']) + \"_\" + time.strftime('%Hh%Mm%Ss_on_%b_%d_%Y')\n    write_config_file = out_dir + 'configs/config_' + MODEL_NAME + \"_\" + DATASET_NAME + \"_GPU\" + str(\n        
config['gpu']['id']) + \"_\" + time.strftime('%Hh%Mm%Ss_on_%b_%d_%Y')\n    dirs = root_log_dir, root_ckpt_dir, write_file_name, write_config_file\n\n    if not os.path.exists(out_dir + 'results'):\n        os.makedirs(out_dir + 'results')\n\n    if not os.path.exists(out_dir + 'configs'):\n        os.makedirs(out_dir + 'configs')\n\n    net_params['total_param'] = view_model_param(net_params)\n    train_val_pipeline(dataset, params, net_params, dirs)\n\n\nmain()\n"
  },
  {
    "path": "realworld_benchmark/main_superpixels.py",
    "content": "\"\"\"\n    IMPORTING LIBS\n\"\"\"\n\nimport numpy as np\nimport os\nimport socket\nimport time\nimport random\nimport glob\nimport argparse, json\nimport pickle\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nimport torch.optim as optim\nfrom torch.utils.data import DataLoader\n\nfrom tensorboardX import SummaryWriter\nfrom tqdm import tqdm\n\n\nclass DotDict(dict):\n    def __init__(self, **kwds):\n        self.update(kwds)\n        self.__dict__ = self\n\n\n\"\"\"\n    IMPORTING CUSTOM MODULES/METHODS\n\"\"\"\nfrom nets.superpixels_graph_classification.pna_net import PNANet\nfrom data.superpixels import SuperPixDataset  # import dataset\nfrom train.train_superpixels_graph_classification import train_epoch, \\\n    evaluate_network  # import train functions\n\n\"\"\"\n    GPU Setup\n\"\"\"\n\n\ndef gpu_setup(use_gpu, gpu_id):\n    os.environ[\"CUDA_DEVICE_ORDER\"] = \"PCI_BUS_ID\"\n    os.environ[\"CUDA_VISIBLE_DEVICES\"] = str(gpu_id)\n\n    if torch.cuda.is_available() and use_gpu:\n        print('cuda available with GPU:', torch.cuda.get_device_name(0))\n        device = torch.device(\"cuda\")\n    else:\n        print('cuda not available')\n        device = torch.device(\"cpu\")\n    return device\n\n\n\"\"\"\n    VIEWING MODEL CONFIG AND PARAMS\n\"\"\"\n\n\ndef view_model_param(MODEL_NAME, net_params):\n    model = PNANet(net_params)\n    total_param = 0\n    print(\"MODEL DETAILS:\\n\")\n    # print(model)\n    for param in model.parameters():\n        # print(param.data.size())\n        total_param += np.prod(list(param.data.size()))\n    print('MODEL/Total parameters:', MODEL_NAME, total_param)\n    return total_param\n\n\n\"\"\"\n    TRAINING CODE\n\"\"\"\n\n\ndef train_val_pipeline(MODEL_NAME, dataset, params, net_params, dirs):\n    t0 = time.time()\n    per_epoch_time = []\n\n    DATASET_NAME = dataset.name\n\n    trainset, valset, testset = dataset.train, dataset.val, dataset.test\n\n    root_log_dir, 
root_ckpt_dir, write_file_name, write_config_file = dirs\n    device = net_params['device']\n\n    # Write the network and optimization hyper-parameters in folder config/\n    with open(write_config_file + '.txt', 'w') as f:\n        f.write(\"\"\"Dataset: {},\\nModel: {}\\n\\nparams={}\\n\\nnet_params={}\\n\\n\\nTotal Parameters: {}\\n\\n\"\"\".format(\n            DATASET_NAME, MODEL_NAME, params, net_params, net_params['total_param']))\n\n    log_dir = os.path.join(root_log_dir, \"RUN_\" + str(0))\n    writer = SummaryWriter(log_dir=log_dir)\n\n    # setting seeds\n    random.seed(params['seed'])\n    np.random.seed(params['seed'])\n    torch.manual_seed(params['seed'])\n    if device.type == 'cuda':\n        torch.cuda.manual_seed(params['seed'])\n\n    print(\"Training Graphs: \", len(trainset))\n    print(\"Validation Graphs: \", len(valset))\n    print(\"Test Graphs: \", len(testset))\n\n    model = PNANet(net_params)\n    model = model.to(device)\n\n    optimizer = optim.Adam(model.parameters(), lr=params['init_lr'], weight_decay=params['weight_decay'])\n    scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min',\n                                                     factor=params['lr_reduce_factor'],\n                                                     patience=params['lr_schedule_patience'],\n                                                     verbose=True)\n\n    epoch_train_losses, epoch_val_losses = [], []\n    epoch_train_accs, epoch_val_accs = [], []\n\n    train_loader = DataLoader(trainset, batch_size=params['batch_size'], shuffle=True, collate_fn=dataset.collate)\n    val_loader = DataLoader(valset, batch_size=params['batch_size'], shuffle=False, collate_fn=dataset.collate)\n    test_loader = DataLoader(testset, batch_size=params['batch_size'], shuffle=False, collate_fn=dataset.collate)\n\n    # At any point you can hit Ctrl + C to break out of training early.\n    try:\n        with tqdm(range(params['epochs']), unit='epoch') as 
t:\n            for epoch in t:\n\n                t.set_description('Epoch %d' % epoch)\n\n                start = time.time()\n\n                epoch_train_loss, epoch_train_acc, optimizer = train_epoch(model, optimizer, device, train_loader,\n                                                                           epoch)\n                epoch_val_loss, epoch_val_acc = evaluate_network(model, device, val_loader, epoch)\n\n                epoch_train_losses.append(epoch_train_loss)\n                epoch_val_losses.append(epoch_val_loss)\n                epoch_train_accs.append(epoch_train_acc)\n                epoch_val_accs.append(epoch_val_acc)\n\n                writer.add_scalar('train/_loss', epoch_train_loss, epoch)\n                writer.add_scalar('val/_loss', epoch_val_loss, epoch)\n                writer.add_scalar('train/_acc', epoch_train_acc, epoch)\n                writer.add_scalar('val/_acc', epoch_val_acc, epoch)\n                writer.add_scalar('learning_rate', optimizer.param_groups[0]['lr'], epoch)\n\n                _, epoch_test_acc = evaluate_network(model, device, test_loader, epoch)\n                t.set_postfix(time=time.time() - start, lr=optimizer.param_groups[0]['lr'],\n                              train_loss=epoch_train_loss, val_loss=epoch_val_loss,\n                              train_acc=epoch_train_acc, val_acc=epoch_val_acc,\n                              test_acc=epoch_test_acc)\n\n                per_epoch_time.append(time.time() - start)\n\n                scheduler.step(epoch_val_loss)\n\n                if optimizer.param_groups[0]['lr'] < params['min_lr']:\n                    print(\"\\n!! 
LR EQUAL TO MIN LR SET.\")\n                    break\n\n                # Stop training after params['max_time'] hours\n                if time.time() - t0 > params['max_time'] * 3600:\n                    print('-' * 89)\n                    print(\"Max_time for training elapsed {:.2f} hours, so stopping\".format(params['max_time']))\n                    break\n\n    except KeyboardInterrupt:\n        print('-' * 89)\n        print('Exiting from training early because of KeyboardInterrupt')\n\n    _, test_acc = evaluate_network(model, device, test_loader, epoch)\n    _, val_acc = evaluate_network(model, device, val_loader, epoch)\n    _, train_acc = evaluate_network(model, device, train_loader, epoch)\n    print(\"Test Accuracy: {:.4f}\".format(test_acc))\n    print(\"Val Accuracy: {:.4f}\".format(val_acc))\n    print(\"Train Accuracy: {:.4f}\".format(train_acc))\n    print(\"TOTAL TIME TAKEN: {:.4f}s\".format(time.time() - t0))\n    print(\"AVG TIME PER EPOCH: {:.4f}s\".format(np.mean(per_epoch_time)))\n\n    writer.close()\n\n    \"\"\"\n        Write the results in out_dir/results folder\n    \"\"\"\n    with open(write_file_name + '.txt', 'w') as f:\n        f.write(\"\"\"Dataset: {},\\nModel: {}\\n\\nparams={}\\n\\nnet_params={}\\n\\n{}\\n\\nTotal Parameters: {}\\n\\n\n    FINAL RESULTS\\nTEST ACCURACY: {:.4f}\\nTRAIN ACCURACY: {:.4f}\\n\\n\n    Total Time Taken: {:.4f} hrs\\nAverage Time Per Epoch: {:.4f} s\\n\\n\\n\"\"\" \\\n                .format(DATASET_NAME, MODEL_NAME, params, net_params, model, net_params['total_param'],\n                        np.mean(np.array(test_acc)) * 100, np.mean(np.array(train_acc)) * 100,\n                        (time.time() - t0) / 3600, np.mean(per_epoch_time)))\n\n    # send results to gmail\n    try:\n        from gmail import send\n        subject = 'Result for Dataset: {}, Model: {}'.format(DATASET_NAME, MODEL_NAME)\n        body = \"\"\"Dataset: {},\\nModel: {}\\n\\nparams={}\\n\\nnet_params={}\\n\\n{}\\n\\nTotal 
Parameters: {}\\n\\n\n    FINAL RESULTS\\nTEST ACCURACY: {:.4f}\\nTRAIN ACCURACY: {:.4f}\\n\\n\n    Total Time Taken: {:.4f} hrs\\nAverage Time Per Epoch: {:.4f} s\\n\\n\\n\"\"\" \\\n            .format(DATASET_NAME, MODEL_NAME, params, net_params, model, net_params['total_param'],\n                    np.mean(np.array(test_acc)) * 100, np.mean(np.array(train_acc)) * 100, (time.time() - t0) / 3600,\n                    np.mean(per_epoch_time))\n        send(subject, body)\n    except:\n        pass\n\n\ndef main():\n    \"\"\"\n        USER CONTROLS\n    \"\"\"\n\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--config', help=\"Please give a config.json file with training/model/data/param details\")\n    parser.add_argument('--gpu_id', help=\"Please give a value for gpu id\")\n    parser.add_argument('--model', help=\"Please give a value for model name\")\n    parser.add_argument('--dataset', help=\"Please give a value for dataset name\")\n    parser.add_argument('--out_dir', help=\"Please give a value for out_dir\")\n    parser.add_argument('--seed', help=\"Please give a value for seed\")\n    parser.add_argument('--epochs', help=\"Please give a value for epochs\")\n    parser.add_argument('--batch_size', help=\"Please give a value for batch_size\")\n    parser.add_argument('--init_lr', help=\"Please give a value for init_lr\")\n    parser.add_argument('--lr_reduce_factor', help=\"Please give a value for lr_reduce_factor\")\n    parser.add_argument('--lr_schedule_patience', help=\"Please give a value for lr_schedule_patience\")\n    parser.add_argument('--min_lr', help=\"Please give a value for min_lr\")\n    parser.add_argument('--weight_decay', help=\"Please give a value for weight_decay\")\n    parser.add_argument('--print_epoch_interval', help=\"Please give a value for print_epoch_interval\")\n    parser.add_argument('--L', help=\"Please give a value for L\")\n    parser.add_argument('--hidden_dim', help=\"Please give a value for 
hidden_dim\")\n    parser.add_argument('--out_dim', help=\"Please give a value for out_dim\")\n    parser.add_argument('--residual', help=\"Please give a value for residual\")\n    parser.add_argument('--edge_feat', help=\"Please give a value for edge_feat\")\n    parser.add_argument('--readout', help=\"Please give a value for readout\")\n    parser.add_argument('--kernel', help=\"Please give a value for kernel\")\n    parser.add_argument('--n_heads', help=\"Please give a value for n_heads\")\n    parser.add_argument('--gated', help=\"Please give a value for gated\")\n    parser.add_argument('--in_feat_dropout', help=\"Please give a value for in_feat_dropout\")\n    parser.add_argument('--dropout', help=\"Please give a value for dropout\")\n    parser.add_argument('--graph_norm', help=\"Please give a value for graph_norm\")\n    parser.add_argument('--batch_norm', help=\"Please give a value for batch_norm\")\n    parser.add_argument('--sage_aggregator', help=\"Please give a value for sage_aggregator\")\n    parser.add_argument('--data_mode', help=\"Please give a value for data_mode\")\n    parser.add_argument('--num_pool', help=\"Please give a value for num_pool\")\n    parser.add_argument('--gnn_per_block', help=\"Please give a value for gnn_per_block\")\n    parser.add_argument('--embedding_dim', help=\"Please give a value for embedding_dim\")\n    parser.add_argument('--pool_ratio', help=\"Please give a value for pool_ratio\")\n    parser.add_argument('--linkpred', help=\"Please give a value for linkpred\")\n    parser.add_argument('--cat', help=\"Please give a value for cat\")\n    parser.add_argument('--self_loop', help=\"Please give a value for self_loop\")\n    parser.add_argument('--max_time', help=\"Please give a value for max_time\")\n    parser.add_argument('--expid', help='Experiment id.')\n\n    # pna params\n    parser.add_argument('--aggregators', type=str, help='Aggregators to use.')\n    parser.add_argument('--scalers', type=str, help='Scalers to 
use.')\n    parser.add_argument('--towers', type=int, help='Towers to use.')\n    parser.add_argument('--divide_input_first', type=str, help='Whether to divide the input in first layers.')\n    parser.add_argument('--divide_input_last', type=str, help='Whether to divide the input in last layer.')\n    parser.add_argument('--gru', type=str, help='Whether to use gru.')\n    parser.add_argument('--edge_dim', type=int, help='Size of edge embeddings.')\n    parser.add_argument('--pretrans_layers', type=int, help='pretrans_layers.')\n    parser.add_argument('--posttrans_layers', type=int, help='posttrans_layers.')\n\n    args = parser.parse_args()\n\n    with open(args.config) as f:\n        config = json.load(f)\n\n    # device\n    if args.gpu_id is not None:\n        config['gpu']['id'] = int(args.gpu_id)\n        config['gpu']['use'] = True\n    device = gpu_setup(config['gpu']['use'], config['gpu']['id'])\n    # model, dataset, out_dir\n    if args.model is not None:\n        MODEL_NAME = args.model\n    else:\n        MODEL_NAME = config['model']\n    if args.dataset is not None:\n        DATASET_NAME = args.dataset\n    else:\n        DATASET_NAME = config['dataset']\n    dataset = SuperPixDataset(DATASET_NAME)\n    if args.out_dir is not None:\n        out_dir = args.out_dir\n    else:\n        out_dir = config['out_dir']\n    # parameters\n    params = config['params']\n    if args.seed is not None:\n        params['seed'] = int(args.seed)\n    if args.epochs is not None:\n        params['epochs'] = int(args.epochs)\n    if args.batch_size is not None:\n        params['batch_size'] = int(args.batch_size)\n    if args.init_lr is not None:\n        params['init_lr'] = float(args.init_lr)\n    if args.lr_reduce_factor is not None:\n        params['lr_reduce_factor'] = float(args.lr_reduce_factor)\n    if args.lr_schedule_patience is not None:\n        params['lr_schedule_patience'] = int(args.lr_schedule_patience)\n    if args.min_lr is not None:\n        
params['min_lr'] = float(args.min_lr)\n    if args.weight_decay is not None:\n        params['weight_decay'] = float(args.weight_decay)\n    if args.print_epoch_interval is not None:\n        params['print_epoch_interval'] = int(args.print_epoch_interval)\n    if args.max_time is not None:\n        params['max_time'] = float(args.max_time)\n\n    # network parameters\n    net_params = config['net_params']\n    net_params['device'] = device\n    net_params['gpu_id'] = config['gpu']['id']\n    net_params['batch_size'] = params['batch_size']\n    if args.L is not None:\n        net_params['L'] = int(args.L)\n    if args.hidden_dim is not None:\n        net_params['hidden_dim'] = int(args.hidden_dim)\n    if args.out_dim is not None:\n        net_params['out_dim'] = int(args.out_dim)\n    if args.residual is not None:\n        net_params['residual'] = True if args.residual == 'True' else False\n    if args.edge_feat is not None:\n        net_params['edge_feat'] = True if args.edge_feat == 'True' else False\n    if args.readout is not None:\n        net_params['readout'] = args.readout\n    if args.kernel is not None:\n        net_params['kernel'] = int(args.kernel)\n    if args.n_heads is not None:\n        net_params['n_heads'] = int(args.n_heads)\n    if args.gated is not None:\n        net_params['gated'] = True if args.gated == 'True' else False\n    if args.in_feat_dropout is not None:\n        net_params['in_feat_dropout'] = float(args.in_feat_dropout)\n    if args.dropout is not None:\n        net_params['dropout'] = float(args.dropout)\n    if args.graph_norm is not None:\n        net_params['graph_norm'] = True if args.graph_norm == 'True' else False\n    if args.batch_norm is not None:\n        net_params['batch_norm'] = True if args.batch_norm == 'True' else False\n    if args.sage_aggregator is not None:\n        net_params['sage_aggregator'] = args.sage_aggregator\n    if args.data_mode is not None:\n        net_params['data_mode'] = args.data_mode\n    if 
args.num_pool is not None:\n        net_params['num_pool'] = int(args.num_pool)\n    if args.gnn_per_block is not None:\n        net_params['gnn_per_block'] = int(args.gnn_per_block)\n    if args.embedding_dim is not None:\n        net_params['embedding_dim'] = int(args.embedding_dim)\n    if args.pool_ratio is not None:\n        net_params['pool_ratio'] = float(args.pool_ratio)\n    if args.linkpred is not None:\n        net_params['linkpred'] = True if args.linkpred == 'True' else False\n    if args.cat is not None:\n        net_params['cat'] = True if args.cat == 'True' else False\n    if args.self_loop is not None:\n        net_params['self_loop'] = True if args.self_loop == 'True' else False\n    if args.aggregators is not None:\n        net_params['aggregators'] = args.aggregators\n    if args.scalers is not None:\n        net_params['scalers'] = args.scalers\n    if args.towers is not None:\n        net_params['towers'] = args.towers\n    if args.divide_input_first is not None:\n        net_params['divide_input_first'] = True if args.divide_input_first == 'True' else False\n    if args.divide_input_last is not None:\n        net_params['divide_input_last'] = True if args.divide_input_last == 'True' else False\n    if args.gru is not None:\n        net_params['gru'] = True if args.gru == 'True' else False\n    if args.edge_dim is not None:\n        net_params['edge_dim'] = args.edge_dim\n    if args.pretrans_layers is not None:\n        net_params['pretrans_layers'] = args.pretrans_layers\n    if args.posttrans_layers is not None:\n        net_params['posttrans_layers'] = args.posttrans_layers\n\n    # Superpixels\n    net_params['in_dim'] = dataset.train[0][0].ndata['feat'][0].size(0)\n    net_params['in_dim_edge'] = dataset.train[0][0].edata['feat'][0].size(0)\n    num_classes = len(np.unique(np.array(dataset.train[:][1])))\n    net_params['n_classes'] = num_classes\n\n    if MODEL_NAME == 'PNA':\n        D = 
torch.cat([torch.sparse.sum(g.adjacency_matrix(transpose=True), dim=-1).to_dense() for g in\n                       dataset.train.graph_lists])\n        net_params['avg_d'] = dict(lin=torch.mean(D),\n                                   exp=torch.mean(torch.exp(torch.div(1, D)) - 1),\n                                   log=torch.mean(torch.log(D + 1)))\n\n    root_log_dir = out_dir + 'logs/' + MODEL_NAME + \"_\" + DATASET_NAME + \"_GPU\" + str(\n        config['gpu']['id']) + \"_\" + time.strftime('%Hh%Mm%Ss_on_%b_%d_%Y')\n    root_ckpt_dir = out_dir + 'checkpoints/' + MODEL_NAME + \"_\" + DATASET_NAME + \"_GPU\" + str(\n        config['gpu']['id']) + \"_\" + time.strftime('%Hh%Mm%Ss_on_%b_%d_%Y')\n    write_file_name = out_dir + 'results/result_' + MODEL_NAME + \"_\" + DATASET_NAME + \"_GPU\" + str(\n        config['gpu']['id']) + \"_\" + time.strftime('%Hh%Mm%Ss_on_%b_%d_%Y')\n    write_config_file = out_dir + 'configs/config_' + MODEL_NAME + \"_\" + DATASET_NAME + \"_GPU\" + str(\n        config['gpu']['id']) + \"_\" + time.strftime('%Hh%Mm%Ss_on_%b_%d_%Y')\n    dirs = root_log_dir, root_ckpt_dir, write_file_name, write_config_file\n\n    if not os.path.exists(out_dir + 'results'):\n        os.makedirs(out_dir + 'results')\n\n    if not os.path.exists(out_dir + 'configs'):\n        os.makedirs(out_dir + 'configs')\n\n    net_params['total_param'] = view_model_param(MODEL_NAME, net_params)\n    train_val_pipeline(MODEL_NAME, dataset, params, net_params, dirs)\n\n\nmain()\n"
  },
  {
    "path": "realworld_benchmark/nets/HIV_graph_classification/pna_net.py",
    "content": "import torch.nn as nn\nimport dgl\nfrom models.dgl.pna_layer import PNASimpleLayer\nfrom nets.mlp_readout_layer import MLPReadout\nimport torch\nfrom ogb.graphproppred.mol_encoder import AtomEncoder\n\n\nclass PNANet(nn.Module):\n    def __init__(self, net_params):\n        super().__init__()\n        hidden_dim = net_params['hidden_dim']\n        out_dim = net_params['out_dim']\n        in_feat_dropout = net_params['in_feat_dropout']\n        dropout = net_params['dropout']\n        n_layers = net_params['L']\n        self.readout = net_params['readout']\n        self.batch_norm = net_params['batch_norm']\n        self.aggregators = net_params['aggregators']\n        self.scalers = net_params['scalers']\n        self.avg_d = net_params['avg_d']\n        self.residual = net_params['residual']\n        posttrans_layers = net_params['posttrans_layers']\n        device = net_params['device']\n        self.device = device\n\n        self.in_feat_dropout = nn.Dropout(in_feat_dropout)\n        self.embedding_h = AtomEncoder(emb_dim=hidden_dim)\n\n        self.layers = nn.ModuleList(\n            [PNASimpleLayer(in_dim=hidden_dim, out_dim=hidden_dim, dropout=dropout,\n                      batch_norm=self.batch_norm, residual=self.residual, aggregators=self.aggregators,\n                      scalers=self.scalers, avg_d=self.avg_d, posttrans_layers=posttrans_layers)\n             for _ in range(n_layers - 1)])\n        self.layers.append(PNASimpleLayer(in_dim=hidden_dim, out_dim=out_dim, dropout=dropout,\n                                    batch_norm=self.batch_norm,\n                                    residual=self.residual, aggregators=self.aggregators, scalers=self.scalers,\n                                    avg_d=self.avg_d, posttrans_layers=posttrans_layers))\n\n        self.MLP_layer = MLPReadout(out_dim, 1)  # 1 out dim since regression problem\n\n    def forward(self, g, h):\n        h = self.embedding_h(h)\n        h = 
self.in_feat_dropout(h)\n\n        for i, conv in enumerate(self.layers):\n            h = conv(g, h)\n\n        g.ndata['h'] = h\n\n        if self.readout == \"sum\":\n            hg = dgl.sum_nodes(g, 'h')\n        elif self.readout == \"max\":\n            hg = dgl.max_nodes(g, 'h')\n        elif self.readout == \"mean\":\n            hg = dgl.mean_nodes(g, 'h')\n        else:\n            hg = dgl.mean_nodes(g, 'h')  # default readout is mean nodes\n\n        return self.MLP_layer(hg)\n\n    def loss(self, scores, labels):\n        # move labels to the model's device instead of a hardcoded 'cuda'\n        loss = torch.nn.BCEWithLogitsLoss()(scores, labels.float().to(self.device).unsqueeze(-1))\n        return loss\n"
  },
  {
    "path": "realworld_benchmark/nets/gru.py",
    "content": "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass GRU(nn.Module):\n    \"\"\"\n        Wrapper class for the GRU used by the GNN framework, nn.GRU is used for the Gated Recurrent Unit itself\n    \"\"\"\n\n    def __init__(self, input_size, hidden_size, device):\n        super(GRU, self).__init__()\n        self.input_size = input_size\n        self.hidden_size = hidden_size\n        self.gru = nn.GRU(input_size=input_size, hidden_size=hidden_size).to(device)\n\n    def forward(self, x, y):\n        \"\"\"\n        :param x:   shape: (B, N, Din) where Din <= input_size (difference is padded)\n        :param y:   shape: (B, N, Dh) where Dh <= hidden_size (difference is padded)\n        :return:    shape: (B, N, Dh)\n        \"\"\"\n        assert (x.shape[-1] <= self.input_size and y.shape[-1] <= self.hidden_size)\n        x = x.unsqueeze(0)\n        y = y.unsqueeze(0)\n        x = self.gru(x, y)[1]\n        x = x.squeeze()\n        return x\n"
  },
  {
    "path": "realworld_benchmark/nets/mlp_readout_layer.py",
    "content": "# MIT License\n# Copyright (c) 2020 Vijay Prakash Dwivedi, Chaitanya K. Joshi, Thomas Laurent, Yoshua Bengio, Xavier Bresson\n\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n\"\"\"\n    MLP Layer used after graph vector representation\n\"\"\"\n\n\nclass MLPReadout(nn.Module):\n\n    def __init__(self, input_dim, output_dim, L=2):  # L=nb_hidden_layers\n        super().__init__()\n        list_FC_layers = [nn.Linear(input_dim // 2 ** l, input_dim // 2 ** (l + 1), bias=True) for l in range(L)]\n        list_FC_layers.append(nn.Linear(input_dim // 2 ** L, output_dim, bias=True))\n        self.FC_layers = nn.ModuleList(list_FC_layers)\n        self.L = L\n\n    def forward(self, x):\n        y = x\n        for l in range(self.L):\n            y = self.FC_layers[l](y)\n            y = F.relu(y)\n        y = self.FC_layers[self.L](y)\n        return y\n"
  },
  {
    "path": "realworld_benchmark/nets/molecules_graph_regression/pna_net.py",
    "content": "import torch.nn as nn\nimport dgl\n\nfrom nets.gru import GRU\nfrom models.dgl.pna_layer import PNALayer\nfrom nets.mlp_readout_layer import MLPReadout\n\n\"\"\"\n    PNA: Principal Neighbourhood Aggregation \n    Gabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Lio, Petar Velickovic\n    https://arxiv.org/abs/2004.05718\n    Architecture follows that in https://github.com/graphdeeplearning/benchmarking-gnns\n\"\"\"\n\n\nclass PNANet(nn.Module):\n    def __init__(self, net_params):\n        super().__init__()\n        num_atom_type = net_params['num_atom_type']\n        num_bond_type = net_params['num_bond_type']\n        hidden_dim = net_params['hidden_dim']\n        out_dim = net_params['out_dim']\n        in_feat_dropout = net_params['in_feat_dropout']\n        dropout = net_params['dropout']\n        n_layers = net_params['L']\n        self.readout = net_params['readout']\n        self.graph_norm = net_params['graph_norm']\n        self.batch_norm = net_params['batch_norm']\n        self.residual = net_params['residual']\n        self.aggregators = net_params['aggregators']\n        self.scalers = net_params['scalers']\n        self.avg_d = net_params['avg_d']\n        self.towers = net_params['towers']\n        self.divide_input_first = net_params['divide_input_first']\n        self.divide_input_last = net_params['divide_input_last']\n        self.edge_feat = net_params['edge_feat']\n        edge_dim = net_params['edge_dim']\n        pretrans_layers = net_params['pretrans_layers']\n        posttrans_layers = net_params['posttrans_layers']\n        self.gru_enable = net_params['gru']\n        device = net_params['device']\n\n        self.in_feat_dropout = nn.Dropout(in_feat_dropout)\n\n        self.embedding_h = nn.Embedding(num_atom_type, hidden_dim)\n\n        if self.edge_feat:\n            self.embedding_e = nn.Embedding(num_bond_type, edge_dim)\n\n        self.layers = nn.ModuleList([PNALayer(in_dim=hidden_dim, out_dim=hidden_dim, 
dropout=dropout,\n                                              graph_norm=self.graph_norm, batch_norm=self.batch_norm,\n                                              residual=self.residual, aggregators=self.aggregators, scalers=self.scalers,\n                                              avg_d=self.avg_d, towers=self.towers, edge_features=self.edge_feat,\n                                              edge_dim=edge_dim, divide_input=self.divide_input_first,\n                                              pretrans_layers=pretrans_layers, posttrans_layers=posttrans_layers) for _\n                                     in range(n_layers - 1)])\n        self.layers.append(PNALayer(in_dim=hidden_dim, out_dim=out_dim, dropout=dropout,\n                                    graph_norm=self.graph_norm, batch_norm=self.batch_norm,\n                                    residual=self.residual, aggregators=self.aggregators, scalers=self.scalers,\n                                    avg_d=self.avg_d, towers=self.towers, divide_input=self.divide_input_last,\n                                    edge_features=self.edge_feat, edge_dim=edge_dim,\n                                    pretrans_layers=pretrans_layers, posttrans_layers=posttrans_layers))\n\n        if self.gru_enable:\n            self.gru = GRU(hidden_dim, hidden_dim, device)\n\n        self.MLP_layer = MLPReadout(out_dim, 1)  # 1 out dim since regression problem\n\n    def forward(self, g, h, e, snorm_n, snorm_e):\n        h = self.embedding_h(h)\n        h = self.in_feat_dropout(h)\n        if self.edge_feat:\n            e = self.embedding_e(e)\n\n        for i, conv in enumerate(self.layers):\n            h_t = conv(g, h, e, snorm_n)\n            if self.gru_enable and i != len(self.layers) - 1:\n                h_t = self.gru(h, h_t)\n            h = h_t\n\n        g.ndata['h'] = h\n\n        if self.readout == \"sum\":\n            hg = dgl.sum_nodes(g, 'h')\n        elif self.readout == \"max\":\n            hg = 
dgl.max_nodes(g, 'h')\n        elif self.readout == \"mean\":\n            hg = dgl.mean_nodes(g, 'h')\n        else:\n            hg = dgl.mean_nodes(g, 'h')  # default readout is mean nodes\n\n        return self.MLP_layer(hg)\n\n    def loss(self, scores, targets):\n        loss = nn.L1Loss()(scores, targets)\n        return loss\n"
  },
  {
    "path": "realworld_benchmark/nets/superpixels_graph_classification/pna_net.py",
    "content": "import torch.nn as nn\n\nimport dgl\n\nfrom nets.gru import GRU\nfrom models.dgl.pna_layer import PNALayer\nfrom nets.mlp_readout_layer import MLPReadout\n\n\"\"\"\n    PNA: Principal Neighbourhood Aggregation \n    Gabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Lio, Petar Velickovic\n    https://arxiv.org/abs/2004.05718\n    Architecture follows that in https://github.com/graphdeeplearning/benchmarking-gnns\n\"\"\"\n\n\nclass PNANet(nn.Module):\n    def __init__(self, net_params):\n        super().__init__()\n        in_dim = net_params['in_dim']\n        in_dim_edge = net_params['in_dim_edge']\n        hidden_dim = net_params['hidden_dim']\n        out_dim = net_params['out_dim']\n        n_classes = net_params['n_classes']\n        in_feat_dropout = net_params['in_feat_dropout']\n        dropout = net_params['dropout']\n        n_layers = net_params['L']\n        self.readout = net_params['readout']\n        self.graph_norm = net_params['graph_norm']\n        self.batch_norm = net_params['batch_norm']\n        self.residual = net_params['residual']\n        self.aggregators = net_params['aggregators']\n        self.scalers = net_params['scalers']\n        self.avg_d = net_params['avg_d']\n        self.towers = net_params['towers']\n        self.divide_input_first = net_params['divide_input_first']\n        self.divide_input_last = net_params['divide_input_last']\n        self.edge_feat = net_params['edge_feat']\n        edge_dim = net_params['edge_dim']\n        pretrans_layers = net_params['pretrans_layers']\n        posttrans_layers = net_params['posttrans_layers']\n        self.gru_enable = net_params['gru']\n        device = net_params['device']\n\n        self.embedding_h = nn.Linear(in_dim, hidden_dim)\n\n        if self.edge_feat:\n            self.embedding_e = nn.Linear(in_dim_edge, edge_dim)\n\n        self.layers = nn.ModuleList([PNALayer(in_dim=hidden_dim, out_dim=hidden_dim, dropout=dropout,\n                                
              graph_norm=self.graph_norm, batch_norm=self.batch_norm,\n                                              residual=self.residual, aggregators=self.aggregators,\n                                              scalers=self.scalers,\n                                              avg_d=self.avg_d, towers=self.towers, edge_features=self.edge_feat,\n                                              edge_dim=edge_dim, divide_input=self.divide_input_first,\n                                              pretrans_layers=pretrans_layers, posttrans_layers=posttrans_layers) for _\n                                     in range(n_layers - 1)])\n        self.layers.append(PNALayer(in_dim=hidden_dim, out_dim=out_dim, dropout=dropout,\n                                    graph_norm=self.graph_norm, batch_norm=self.batch_norm,\n                                    residual=self.residual, aggregators=self.aggregators, scalers=self.scalers,\n                                    avg_d=self.avg_d, towers=self.towers, divide_input=self.divide_input_last,\n                                    edge_features=self.edge_feat, edge_dim=edge_dim,\n                                    pretrans_layers=pretrans_layers, posttrans_layers=posttrans_layers))\n\n        if self.gru_enable:\n            self.gru = GRU(hidden_dim, hidden_dim, device)\n\n        self.MLP_layer = MLPReadout(out_dim, n_classes)\n\n    def forward(self, g, h, e, snorm_n, snorm_e):\n        h = self.embedding_h(h)\n        if self.edge_feat:\n            e = self.embedding_e(e)\n\n        for i, conv in enumerate(self.layers):\n            h_t = conv(g, h, e, snorm_n)\n            if self.gru_enable and i != len(self.layers) - 1:\n                h_t = self.gru(h, h_t)\n            h = h_t\n\n        g.ndata['h'] = h\n\n        if self.readout == \"sum\":\n            hg = dgl.sum_nodes(g, 'h')\n        elif self.readout == \"max\":\n            hg = dgl.max_nodes(g, 'h')\n        elif self.readout == \"mean\":\n            
hg = dgl.mean_nodes(g, 'h')\n        else:\n            hg = dgl.mean_nodes(g, 'h')  # default readout is mean nodes\n\n        return self.MLP_layer(hg)\n\n    def loss(self, pred, label):\n        criterion = nn.CrossEntropyLoss()\n        loss = criterion(pred, label)\n        return loss\n"
  },
  {
    "path": "realworld_benchmark/train/metrics.py",
    "content": "# MIT License\n# Copyright (c) 2020 Vijay Prakash Dwivedi, Chaitanya K. Joshi, Thomas Laurent, Yoshua Bengio, Xavier Bresson\n\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.metrics import f1_score\nimport numpy as np\n\n\ndef MAE(scores, targets):\n    MAE = F.l1_loss(scores, targets)\n    return MAE\n\n\ndef accuracy_TU(scores, targets):\n    scores = scores.detach().argmax(dim=1)\n    acc = (scores==targets).float().sum().item()\n    return acc\n\n\ndef accuracy_MNIST_CIFAR(scores, targets):\n    scores = scores.detach().argmax(dim=1)\n    acc = (scores==targets).float().sum().item()\n    return acc\n\ndef accuracy_CITATION_GRAPH(scores, targets):\n    scores = scores.detach().argmax(dim=1)\n    acc = (scores==targets).float().sum().item()\n    acc = acc / len(targets)\n    return acc\n\n\ndef accuracy_SBM(scores, targets):\n    S = targets.cpu().numpy()\n    C = np.argmax( torch.nn.Softmax(dim=0)(scores).cpu().detach().numpy() , axis=1 )\n    CM = confusion_matrix(S,C).astype(np.float32)\n    nb_classes = CM.shape[0]\n    targets = targets.cpu().detach().numpy()\n    nb_non_empty_classes = 0\n    pr_classes = np.zeros(nb_classes)\n    for r in range(nb_classes):\n        cluster = np.where(targets==r)[0]\n        if cluster.shape[0] != 0:\n            pr_classes[r] = CM[r,r]/ float(cluster.shape[0])\n            if CM[r,r]>0:\n                nb_non_empty_classes += 1\n        else:\n            pr_classes[r] = 0.0\n    acc = 100.* np.sum(pr_classes)/ float(nb_non_empty_classes)\n    return acc\n\n\ndef binary_f1_score(scores, targets):\n    \"\"\"Computes the F1 score using scikit-learn for binary class labels. \n    \n    Returns the F1 score for the positive class, i.e. 
labelled '1'.\n    \"\"\"\n    y_true = targets.cpu().numpy()\n    y_pred = scores.argmax(dim=1).cpu().numpy()\n    return f1_score(y_true, y_pred, average='binary')\n\n  \ndef accuracy_VOC(scores, targets):\n    scores = scores.detach().argmax(dim=1).cpu()\n    targets = targets.cpu().detach().numpy()\n    acc = f1_score(scores, targets, average='weighted')\n    return acc\n"
  },
  {
    "path": "realworld_benchmark/train/train_HIV_graph_classification.py",
    "content": "import torch\nfrom ogb.graphproppred import Evaluator\n\ndef train_epoch_sparse(model, optimizer, device, data_loader, epoch):\n    model.train()\n    epoch_loss = 0\n    list_scores = []\n    list_labels = []\n    for iter, (batch_graphs, batch_labels) in enumerate(data_loader):\n        batch_x = batch_graphs.ndata['feat'].to(device)  # num x feat\n        batch_labels = batch_labels.to(device)\n        optimizer.zero_grad()\n        batch_scores = model.forward(batch_graphs, batch_x)\n        loss = model.loss(batch_scores, batch_labels)\n        loss.backward()\n        optimizer.step()\n        epoch_loss += loss.detach().item()\n        list_scores.append(batch_scores.detach())\n        list_labels.append(batch_labels.detach().unsqueeze(-1))\n\n    epoch_loss /= (iter + 1)\n    evaluator = Evaluator(name='ogbg-molhiv')\n    epoch_train_ROC = evaluator.eval({'y_pred': torch.cat(list_scores),\n                                       'y_true': torch.cat(list_labels)})['rocauc']\n\n    return epoch_loss, epoch_train_ROC, optimizer\n\n\ndef evaluate_network_sparse(model, device, data_loader, epoch):\n    model.eval()\n    epoch_test_loss = 0\n    epoch_test_ROC = 0\n    with torch.no_grad():\n        list_scores = []\n        list_labels = []\n        for iter, (batch_graphs, batch_labels) in enumerate(data_loader):\n            batch_x = batch_graphs.ndata['feat'].to(device)\n            batch_labels = batch_labels.to(device)\n            batch_scores = model.forward(batch_graphs, batch_x)\n            loss = model.loss(batch_scores, batch_labels)\n            epoch_test_loss += loss.detach().item()\n            list_scores.append(batch_scores.detach())\n            list_labels.append(batch_labels.detach().unsqueeze(-1))\n\n        epoch_test_loss /= (iter + 1)\n        evaluator = Evaluator(name='ogbg-molhiv')\n        epoch_test_ROC = evaluator.eval({'y_pred': torch.cat(list_scores),\n                                           'y_true': 
torch.cat(list_labels)})['rocauc']\n\n    return epoch_test_loss, epoch_test_ROC\n"
  },
  {
    "path": "realworld_benchmark/train/train_molecules_graph_regression.py",
    "content": "# MIT License\n# Copyright (c) 2020 Vijay Prakash Dwivedi, Chaitanya K. Joshi, Thomas Laurent, Yoshua Bengio, Xavier Bresson\n\n\n\"\"\"\n    Utility functions for training one epoch \n    and evaluating one epoch\n\"\"\"\nimport torch\nimport torch.nn as nn\nimport math\n\nfrom .metrics import MAE\n\ndef train_epoch(model, optimizer, device, data_loader, epoch):\n    model.train()\n    epoch_loss = 0\n    epoch_train_mae = 0\n    nb_data = 0\n    gpu_mem = 0\n    for iter, (batch_graphs, batch_targets, batch_snorm_n, batch_snorm_e) in enumerate(data_loader):\n        batch_x = batch_graphs.ndata['feat'].to(device)  # num x feat\n        batch_e = batch_graphs.edata['feat'].to(device)\n        batch_snorm_e = batch_snorm_e.to(device)\n        batch_targets = batch_targets.to(device)\n        batch_snorm_n = batch_snorm_n.to(device)         # num x 1\n        optimizer.zero_grad()\n        \n        batch_scores = model.forward(batch_graphs, batch_x, batch_e, batch_snorm_n, batch_snorm_e)\n        loss = model.loss(batch_scores, batch_targets)\n        loss.backward()\n        optimizer.step()\n        epoch_loss += loss.detach().item()\n        epoch_train_mae += MAE(batch_scores, batch_targets)\n        nb_data += batch_targets.size(0)\n    epoch_loss /= (iter + 1)\n    epoch_train_mae /= (iter + 1)\n    \n    return epoch_loss, epoch_train_mae, optimizer\n\ndef evaluate_network(model, device, data_loader, epoch):\n    model.eval()\n    epoch_test_loss = 0\n    epoch_test_mae = 0\n    nb_data = 0\n    with torch.no_grad():\n        for iter, (batch_graphs, batch_targets, batch_snorm_n, batch_snorm_e) in enumerate(data_loader):\n            batch_x = batch_graphs.ndata['feat'].to(device)\n            batch_e = batch_graphs.edata['feat'].to(device)\n            batch_snorm_e = batch_snorm_e.to(device)\n            batch_targets = batch_targets.to(device)\n            batch_snorm_n = batch_snorm_n.to(device)\n            \n            batch_scores = 
model.forward(batch_graphs, batch_x, batch_e, batch_snorm_n, batch_snorm_e)\n            loss = model.loss(batch_scores, batch_targets)\n            epoch_test_loss += loss.detach().item()\n            epoch_test_mae += MAE(batch_scores, batch_targets)\n            nb_data += batch_targets.size(0)\n        epoch_test_loss /= (iter + 1)\n        epoch_test_mae /= (iter + 1)\n        \n    return epoch_test_loss, epoch_test_mae"
  },
  {
    "path": "realworld_benchmark/train/train_superpixels_graph_classification.py",
    "content": "# MIT License\n# Copyright (c) 2020 Vijay Prakash Dwivedi, Chaitanya K. Joshi, Thomas Laurent, Yoshua Bengio, Xavier Bresson\n\n\n\"\"\"\n    Utility functions for training one epoch \n    and evaluating one epoch\n\"\"\"\nimport torch\nimport torch.nn as nn\nimport math\n\nfrom .metrics import accuracy_MNIST_CIFAR as accuracy\n\ndef train_epoch(model, optimizer, device, data_loader, epoch):\n    model.train()\n    epoch_loss = 0\n    epoch_train_acc = 0\n    nb_data = 0\n    gpu_mem = 0\n    for iter, (batch_graphs, batch_labels, batch_snorm_n, batch_snorm_e) in enumerate(data_loader):\n        batch_x = batch_graphs.ndata['feat'].to(device)  # num x feat\n        batch_e = batch_graphs.edata['feat'].to(device)\n        batch_snorm_e = batch_snorm_e.to(device)\n        batch_labels = batch_labels.to(device)\n        batch_snorm_n = batch_snorm_n.to(device)         # num x 1\n        optimizer.zero_grad()\n        \n        batch_scores = model.forward(batch_graphs, batch_x, batch_e, batch_snorm_n, batch_snorm_e)\n        loss = model.loss(batch_scores, batch_labels)\n        loss.backward()\n        optimizer.step()\n        epoch_loss += loss.detach().item()\n        epoch_train_acc += accuracy(batch_scores, batch_labels)\n        nb_data += batch_labels.size(0)\n    epoch_loss /= (iter + 1)\n    epoch_train_acc /= nb_data\n    \n    return epoch_loss, epoch_train_acc, optimizer\n\ndef evaluate_network(model, device, data_loader, epoch):\n    model.eval()\n    epoch_test_loss = 0\n    epoch_test_acc = 0\n    nb_data = 0\n    with torch.no_grad():\n        for iter, (batch_graphs, batch_labels, batch_snorm_n, batch_snorm_e) in enumerate(data_loader):\n            batch_x = batch_graphs.ndata['feat'].to(device)\n            batch_e = batch_graphs.edata['feat'].to(device)\n            batch_snorm_e = batch_snorm_e.to(device)\n            batch_labels = batch_labels.to(device)\n            batch_snorm_n = batch_snorm_n.to(device)\n            \n       
     batch_scores = model.forward(batch_graphs, batch_x, batch_e, batch_snorm_n, batch_snorm_e)\n            loss = model.loss(batch_scores, batch_labels)\n            epoch_test_loss += loss.detach().item()\n            epoch_test_acc += accuracy(batch_scores, batch_labels)\n            nb_data += batch_labels.size(0)\n        epoch_test_loss /= (iter + 1)\n        epoch_test_acc /= nb_data\n        \n    return epoch_test_loss, epoch_test_acc\n"
  }
]