[
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2019 mohammadasghari\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# Deep Q-learning (DQN) for Multi-agent Reinforcement Learning (RL)\n\nDQN implementation for two multi-agent environments: `agents_landmarks` and `predators_prey` (See [details.pdf](https://github.com/mohammadasghari/dqn-multi-agent-rl/blob/master/details.pdf) for a detailed description of these environments).\n\n## Code structure\n- `./environments/`: folder where the two environments (`agents_landmarks` and `predators_prey`) are stored. \n    1) `./environments/agents_landmarks`: in this environment, there exist ***n*** agents that must cooperate through actions to reach a set of ***n*** landmarks  in a two dimensional discrete ***k***-by-***k*** grid environment. \n    2) `./environments/predators_prey`: in this environment, ***n*** agents (called predators) must cooperate with each other to capture one prey in a two dimensional discrete ***k***-by-***k*** grid environment.\n- `./dqn_agent.py`: contains code for the implementation of DQN and its extensions (Double DQN, Dueling DQN, DQN with Prioritized Experience Replay) (See [details.pdf](https://github.com/mohammadasghari/dqn-multi-agent-rl/blob/master/details.pdf) for a detailed description of the DQN and its extensions).\n- `./brain.py`: contains code for the implementation of neural networks required for DQN (See [details.pdf](https://github.com/mohammadasghari/dqn-multi-agent-rl/blob/master/details.pdf) for a detailed description of the neural network implementation).\n- `./uniform_experience_replay.py`: contains code for the implementation of Uniform Experience Replay (UER) which can be used in DQN.\n- `./prioritized_experience_replay.py`: contains code for the implementation of Prioritized Experience Replay (PER) which can be used in DQN.\n- `./sum_tree.py`: contains code for the implementation of sum tree data structure which is used in Prioritized Experience Replay (PER).\n- `./agents_landmarks_multiagent.py`: contains code for applying DQN to the `agents_landmarks` environment.\n- `./predators_prey_multiagent.py`: contains code for applying DQN to the `predators_prey` environment.\n- `./results_agents_landmarks/`: folder where the results (neural net weights, rewards of the episodes, videos, figures, etc.) for the `agents_landmarks` environment are stored. \n- `./results_predators_prey/`: folder where the results (neural net weights, rewards of the episodes, videos, figures, etc.) for the `predators_prey` environment are stored. 
\n- `./details.pdf`: a PDF file with a detailed description of DQN and its extensions, the environments, and the neural-network implementation.\n\n## Results\n#### Predators and Prey Environment\nIn this environment, the prey is captured when one predator moves onto the prey's location while the remaining predators occupy the neighboring cells of that location for support.\n##### Fixed prey (mode 0)\n<img src=\"/results_predators_prey/videos/prey_mode_0.gif\" height=\"400px\" width=\"400px\" >\n\n##### Random prey (mode 1)\n<img src=\"/results_predators_prey/videos/prey_mode_1.gif\" height=\"400px\" width=\"400px\" >\n\n##### Random escaping prey (mode 2)\n<img src=\"/results_predators_prey/videos/prey_mode_2.gif\" height=\"400px\" width=\"400px\" >\n\n#### Agents and Landmarks Environment\n\n##### 10 agents and 10 landmarks\n<img src=\"/results_agents_landmarks/videos/10_10.gif\" height=\"400px\" width=\"400px\" >\n\n##### 16 agents and 16 landmarks\n<img src=\"/results_agents_landmarks/videos/16_16.gif\" height=\"400px\" width=\"400px\" >\n\n### Todos\n\n - Write required dependencies and installation steps\n - ...
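\n\n## Usage\n\nA minimal training run for each environment (a sketch, assuming Python 2.7 with Keras, TensorFlow, pygame, numpy, and pandas installed; the flags shown are illustrative, see the `argparse` definitions in each script for the full list):\n\n```\npython agents_landmarks_multiagent.py -e 100000 -k 8 -g 8\npython predators_prey_multiagent.py -e 100000 -k 3 -g 5 -evm 2\n```\n\nResult files (weights, rewards, timesteps) are named by joining the hyperparameter values listed in `ARG_LIST`, so each configuration writes to its own files under the corresponding `results_*` folder.\n"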
  },
  {
    "path": "agents_landmarks_multiagent.py",
    "content": "\"\"\"\nCreated on Wednesday Jan  16 2019\n\n@author: Seyed Mohammad Asghari\n@github: https://github.com/s3yyy3d-m\n\"\"\"\n\nimport numpy as np\nimport os\nimport random\nimport argparse\nimport pandas as pd\nfrom environments.agents_landmarks.env import agentslandmarks\nfrom dqn_agent import Agent\nimport glob\n\nARG_LIST = ['learning_rate', 'optimizer', 'memory_capacity', 'batch_size', 'target_frequency', 'maximum_exploration',\n            'max_timestep', 'first_step_memory', 'replay_steps', 'number_nodes', 'target_type', 'memory',\n            'prioritization_scale', 'dueling', 'agents_number', 'grid_size', 'game_mode', 'reward_mode']\n\n\ndef get_name_brain(args, idx):\n\n    file_name_str = '_'.join([str(args[x]) for x in ARG_LIST])\n\n    return './results_agents_landmarks/weights_files/' + file_name_str + '_' + str(idx) + '.h5'\n\n\ndef get_name_rewards(args):\n\n    file_name_str = '_'.join([str(args[x]) for x in ARG_LIST])\n\n    return './results_agents_landmarks/rewards_files/' + file_name_str + '.csv'\n\n\ndef get_name_timesteps(args):\n\n    file_name_str = '_'.join([str(args[x]) for x in ARG_LIST])\n\n    return './results_agents_landmarks/timesteps_files/' + file_name_str + '.csv'\n\n\nclass Environment(object):\n\n    def __init__(self, arguments):\n        current_path = os.path.dirname(__file__)  # Where your .py file is located\n        self.env = agentslandmarks(arguments, current_path)\n        self.episodes_number = arguments['episode_number']\n        self.render = arguments['render']\n        self.recorder = arguments['recorder']\n        self.max_ts = arguments['max_timestep']\n        self.test = arguments['test']\n        self.filling_steps = arguments['first_step_memory']\n        self.steps_b_updates = arguments['replay_steps']\n        self.max_random_moves = arguments['max_random_moves']\n\n        self.num_agents = arguments['agents_number']\n        self.num_landmarks = self.num_agents\n        self.game_mode = arguments['game_mode']\n        self.grid_size = arguments['grid_size']\n\n    def run(self, agents, file1, file2):\n\n        total_step = 0\n        rewards_list = []\n        timesteps_list = []\n        max_score = -10000\n        for episode_num in xrange(self.episodes_number):\n            state = self.env.reset()\n            if self.render:\n                self.env.render()\n\n            random_moves = random.randint(0, self.max_random_moves)\n\n            # create randomness in initial state\n            for _ in xrange(random_moves):\n                actions = [4 for _ in xrange(len(agents))]\n                state, _, _ = self.env.step(actions)\n                if self.render:\n                    self.env.render()\n\n            # converting list of positions to an array\n            state = np.array(state)\n            state = state.ravel()\n\n            done = False\n            reward_all = 0\n            time_step = 0\n            while not done and time_step < self.max_ts:\n\n                # if self.render:\n                #     self.env.render()\n                actions = []\n                for agent in agents:\n                    actions.append(agent.greedy_actor(state))\n                next_state, reward, done = self.env.step(actions)\n                # converting list of positions to an array\n                next_state = np.array(next_state)\n                next_state = next_state.ravel()\n\n                if not self.test:\n                    for agent in agents:\n                        
agent.observe((state, actions, reward, next_state, done))\n                        if total_step >= self.filling_steps:\n                            agent.decay_epsilon()\n                            if time_step % self.steps_b_updates == 0:\n                                agent.replay()\n                            agent.update_target_model()\n\n                total_step += 1\n                time_step += 1\n                state = next_state\n                reward_all += reward\n\n                if self.render:\n                    self.env.render()\n\n            rewards_list.append(reward_all)\n            timesteps_list.append(time_step)\n\n            print(\"Episode {p}, Score: {s}, Final Step: {t}, Goal: {g}\".format(p=episode_num, s=reward_all,\n                                                                               t=time_step, g=done))\n\n            if self.recorder:\n                os.system(\"ffmpeg -r 2 -i ./results_agents_landmarks/snaps/%04d.png -b:v 40000 -minrate 40000 -maxrate 4000k -bufsize 1835k -c:v mjpeg -qscale:v 0 \"\n                          + \"./results_agents_landmarks/videos/{a1}_{a2}_{a3}_{a4}.avi\".format(a1=self.num_agents,\n                                                                                                 a2=self.num_landmarks,\n                                                                                                 a3=self.game_mode,\n                                                                                                 a4=self.grid_size))\n                files = glob.glob('./results_agents_landmarks/snaps/*')\n                for f in files:\n                    os.remove(f)\n\n            if not self.test:\n                if episode_num % 100 == 0:\n                    df = pd.DataFrame(rewards_list, columns=['score'])\n                    df.to_csv(file1)\n\n                    df = pd.DataFrame(timesteps_list, columns=['steps'])\n                    df.to_csv(file2)\n\n                    if total_step >= self.filling_steps:\n                        if reward_all > max_score:\n                            for agent in agents:\n                                agent.brain.save_model()\n                            max_score = reward_all\n\n\nif __name__ ==\"__main__\":\n\n    parser = argparse.ArgumentParser()\n    # DQN Parameters\n    parser.add_argument('-e', '--episode-number', default=1000000, type=int, help='Number of episodes')\n    parser.add_argument('-l', '--learning-rate', default=0.00005, type=float, help='Learning rate')\n    parser.add_argument('-op', '--optimizer', choices=['Adam', 'RMSProp'], default='RMSProp',\n                        help='Optimization method')\n    parser.add_argument('-m', '--memory-capacity', default=1000000, type=int, help='Memory capacity')\n    parser.add_argument('-b', '--batch-size', default=64, type=int, help='Batch size')\n    parser.add_argument('-t', '--target-frequency', default=10000, type=int,\n                        help='Number of steps between the updates of target network')\n    parser.add_argument('-x', '--maximum-exploration', default=100000, type=int, help='Maximum exploration step')\n    parser.add_argument('-fsm', '--first-step-memory', default=0, type=float,\n                        help='Number of initial steps for just filling the memory')\n    parser.add_argument('-rs', '--replay-steps', default=4, type=float, help='Steps between updating the network')\n    parser.add_argument('-nn', '--number-nodes', default=256, type=int, help='Number of nodes 
in each layer of NN')\n    parser.add_argument('-tt', '--target-type', choices=['DQN', 'DDQN'], default='DDQN',\n                        help='Type of the target: vanilla DQN or Double DQN (DDQN)')\n    parser.add_argument('-mt', '--memory', choices=['UER', 'PER'], default='PER',\n                        help='Type of experience replay: uniform (UER) or prioritized (PER)')\n    parser.add_argument('-pl', '--prioritization-scale', default=0.5, type=float, help='Scale for prioritization')\n    parser.add_argument('-du', '--dueling', action='store_true', help='Enable the dueling network architecture')\n\n    parser.add_argument('-gn', '--gpu-num', default='2', type=str, help='Index of the GPU to use (sets CUDA_VISIBLE_DEVICES)')\n    parser.add_argument('-test', '--test', action='store_true', help='Run in test mode: load saved weights and disable training')\n\n    # Game Parameters\n    parser.add_argument('-k', '--agents-number', default=5, type=int, help='The number of agents')\n    parser.add_argument('-g', '--grid-size', default=10, type=int, help='Grid size')\n    parser.add_argument('-ts', '--max-timestep', default=100, type=int, help='Maximum number of timesteps per episode')\n    parser.add_argument('-gm', '--game-mode', choices=[0, 1], type=int, default=1, help='Mode of the game, '\n                                                                                        '0: landmarks and agents fixed, '\n                                                                                        '1: landmarks and agents random')\n\n    parser.add_argument('-rw', '--reward-mode', choices=[0, 1, 2], type=int, default=1, help='Mode of the reward, '\n                                                                                             '0: only terminal rewards, '\n                                                                                             '1: partial rewards '\n                                                                                             '(number of unoccupied landmarks), '\n                                                                                             '2: full rewards '\n                                                                                             '(sum of distances of agents to landmarks)')\n\n    parser.add_argument('-rm', '--max-random-moves', default=0, type=int,\n                        help='Maximum number of random initial moves for the agents')\n\n    # Visualization Parameters\n    parser.add_argument('-r', '--render', action='store_false', help='Turn off visualization (rendering is enabled by default)')\n    parser.add_argument('-re', '--recorder', action='store_true', help='Store the visualization as a movie')\n\n    args = vars(parser.parse_args())\n    os.environ['CUDA_VISIBLE_DEVICES'] = args['gpu_num']\n\n    env = Environment(args)\n\n    state_size = env.env.state_size\n    action_space = env.env.action_space()\n\n    all_agents = []\n    for b_idx in xrange(args['agents_number']):\n\n        brain_file = get_name_brain(args, b_idx)\n        all_agents.append(Agent(state_size, action_space, b_idx, brain_file, args))\n\n    rewards_file = get_name_rewards(args)\n    timesteps_file = get_name_timesteps(args)\n\n    env.run(all_agents, rewards_file, timesteps_file)\n"
  },
  {
    "path": "brain.py",
    "content": "\"\"\"\nCreated on Wednesday Jan  16 2019\n\n@author: Seyed Mohammad Asghari\n@github: https://github.com/s3yyy3d-m\n\"\"\"\n\nimport os\nfrom keras.models import Sequential, Model\nfrom keras.layers import Dense, Lambda, Input, Concatenate\nfrom keras.optimizers import *\nimport tensorflow as tf\nfrom keras import backend as K\n\nHUBER_LOSS_DELTA = 1.0\n\n\ndef huber_loss(y_true, y_predict):\n    err = y_true - y_predict\n\n    cond = K.abs(err) < HUBER_LOSS_DELTA\n    L2 = 0.5 * K.square(err)\n    L1 = HUBER_LOSS_DELTA * (K.abs(err) - 0.5 * HUBER_LOSS_DELTA)\n    loss = tf.where(cond, L2, L1)\n\n    return K.mean(loss)\n\n\nclass Brain(object):\n\n    def __init__(self, state_size, action_size, brain_name, arguments):\n        self.state_size = state_size\n        self.action_size = action_size\n        self.weight_backup = brain_name\n        self.batch_size = arguments['batch_size']\n        self.learning_rate = arguments['learning_rate']\n        self.test = arguments['test']\n        self.num_nodes = arguments['number_nodes']\n        self.dueling = arguments['dueling']\n        self.optimizer_model = arguments['optimizer']\n        self.model = self._build_model()\n        self.model_ = self._build_model()\n\n    def _build_model(self):\n\n        if self.dueling:\n            x = Input(shape=(self.state_size,))\n\n            # a series of fully connected layer for estimating V(s)\n\n            y11 = Dense(self.num_nodes, activation='relu')(x)\n            y12 = Dense(self.num_nodes, activation='relu')(y11)\n            y13 = Dense(1, activation=\"linear\")(y12)\n\n            # a series of fully connected layer for estimating A(s,a)\n\n            y21 = Dense(self.num_nodes, activation='relu')(x)\n            y22 = Dense(self.num_nodes, activation='relu')(y21)\n            y23 = Dense(self.action_size, activation=\"linear\")(y22)\n\n            w = Concatenate(axis=-1)([y13, y23])\n\n            # combine V(s) and A(s,a) to get Q(s,a)\n            z = Lambda(lambda a: K.expand_dims(a[:, 0], axis=-1) + a[:, 1:] - K.mean(a[:, 1:], keepdims=True),\n                       output_shape=(self.action_size,))(w)\n        else:\n            x = Input(shape=(self.state_size,))\n\n            # a series of fully connected layer for estimating Q(s,a)\n\n            y1 = Dense(self.num_nodes, activation='relu')(x)\n            y2 = Dense(self.num_nodes, activation='relu')(y1)\n            z = Dense(self.action_size, activation=\"linear\")(y2)\n\n        model = Model(inputs=x, outputs=z)\n\n        if self.optimizer_model == 'Adam':\n            optimizer = Adam(lr=self.learning_rate, clipnorm=1.)\n        elif self.optimizer_model == 'RMSProp':\n            optimizer = RMSprop(lr=self.learning_rate, clipnorm=1.)\n        else:\n            print('Invalid optimizer!')\n\n        model.compile(loss=huber_loss, optimizer=optimizer)\n        \n        if self.test:\n            if not os.path.isfile(self.weight_backup):\n                print('Error:no file')\n            else:\n                model.load_weights(self.weight_backup)\n\n        return model\n\n    def train(self, x, y, sample_weight=None, epochs=1, verbose=0):  # x is the input to the network and y is the output\n\n        self.model.fit(x, y, batch_size=len(x), sample_weight=sample_weight, epochs=epochs, verbose=verbose)\n\n    def predict(self, state, target=False):\n        if target:  # get prediction from target network\n            return self.model_.predict(state)\n        else:  # get prediction from local 
network\n            return self.model.predict(state)\n\n    def predict_one_sample(self, state, target=False):\n        return self.predict(state.reshape(1,self.state_size), target=target).flatten()\n\n    def update_target_model(self):\n        self.model_.set_weights(self.model.get_weights())\n\n    def save_model(self):\n        self.model.save(self.weight_backup)"
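\n\n# A minimal standalone sketch (not used by the training scripts) showing how\n# Brain is constructed and queried; the hyperparameters below are hypothetical:\nif __name__ == '__main__':\n    import numpy as np\n\n    demo_args = {'batch_size': 32, 'learning_rate': 0.001, 'test': False,\n                 'number_nodes': 64, 'dueling': True, 'optimizer': 'Adam'}\n    brain = Brain(state_size=4, action_size=5, brain_name='demo_weights.h5',\n                  arguments=demo_args)\n    # one random state in, one Q-value per action out\n    print(brain.predict_one_sample(np.random.rand(4)))\n"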
  },
  {
    "path": "dqn_agent.py",
    "content": "\"\"\"\nCreated on Wednesday Jan  16 2019\n\n@author: Seyed Mohammad Asghari\n@github: https://github.com/s3yyy3d-m\n\"\"\"\n\nimport numpy as np\nimport random\n\nfrom brain import Brain\nfrom uniform_experience_replay import Memory as UER\nfrom prioritized_experience_replay import Memory as PER\n\nMAX_EPSILON = 1.0\nMIN_EPSILON = 0.01\n\nMIN_BETA = 0.4\nMAX_BETA = 1.0\n\n\nclass Agent(object):\n    \n    epsilon = MAX_EPSILON\n    beta = MIN_BETA\n\n    def __init__(self, state_size, action_size, bee_index, brain_name, arguments):\n        self.state_size = state_size\n        self.action_size = action_size\n        self.bee_index = bee_index\n        self.learning_rate = arguments['learning_rate']\n        self.gamma = 0.95\n        self.brain = Brain(self.state_size, self.action_size, brain_name, arguments)\n        self.memory_model = arguments['memory']\n\n        if self.memory_model == 'UER':\n            self.memory = UER(arguments['memory_capacity'])\n\n        elif self.memory_model == 'PER':\n            self.memory = PER(arguments['memory_capacity'], arguments['prioritization_scale'])\n\n        else:\n            print('Invalid memory model!')\n\n        self.target_type = arguments['target_type']\n        self.update_target_frequency = arguments['target_frequency']\n        self.max_exploration_step = arguments['maximum_exploration']\n        self.batch_size = arguments['batch_size']\n        self.step = 0\n        self.test = arguments['test']\n        if self.test:\n            self.epsilon = MIN_EPSILON\n\n    def greedy_actor(self, state):\n        if np.random.rand() <= self.epsilon:\n            return random.randrange(self.action_size)\n        else:\n            return np.argmax(self.brain.predict_one_sample(state))\n\n    def find_targets_per(self, batch):\n        batch_len = len(batch)\n\n        states = np.array([o[1][0] for o in batch])\n        states_ = np.array([o[1][3] for o in batch])\n\n        p = self.brain.predict(states)\n        p_ = self.brain.predict(states_)\n        pTarget_ = self.brain.predict(states_, target=True)\n\n        x = np.zeros((batch_len, self.state_size))\n        y = np.zeros((batch_len, self.action_size))\n        errors = np.zeros(batch_len)\n\n        for i in range(batch_len):\n            o = batch[i][1]\n            s = o[0]\n            a = o[1][self.bee_index]\n            r = o[2]\n            s_ = o[3]\n            done = o[4]\n\n            t = p[i]\n            old_value = t[a]\n            if done:\n                t[a] = r\n            else:\n                if self.target_type == 'DDQN':\n                    t[a] = r + self.gamma * pTarget_[i][np.argmax(p_[i])]\n                elif self.target_type == 'DQN':\n                    t[a] = r + self.gamma * np.amax(pTarget_[i])\n                else:\n                    print('Invalid type for target network!')\n\n            x[i] = s\n            y[i] = t\n            errors[i] = np.abs(t[a] - old_value)\n\n        return [x, y, errors]\n\n    def find_targets_uer(self, batch):\n        batch_len = len(batch)\n\n        states = np.array([o[0] for o in batch])\n        states_ = np.array([o[3] for o in batch])\n\n        p = self.brain.predict(states)\n        p_ = self.brain.predict(states_)\n        pTarget_ = self.brain.predict(states_, target=True)\n\n        x = np.zeros((batch_len, self.state_size))\n        y = np.zeros((batch_len, self.action_size))\n        errors = np.zeros(batch_len)\n\n        for i in range(batch_len):\n            o = 
batch[i]\n            s = o[0]\n            a = o[1][self.bee_index]\n            r = o[2]\n            s_ = o[3]\n            done = o[4]\n\n            t = p[i]\n            old_value = t[a]\n            if done:\n                t[a] = r\n            else:\n                if self.target_type == 'DDQN':\n                    t[a] = r + self.gamma * pTarget_[i][np.argmax(p_[i])]\n                elif self.target_type == 'DQN':\n                    t[a] = r + self.gamma * np.amax(pTarget_[i])\n                else:\n                    print('Invalid type for target network!')\n\n            x[i] = s\n            y[i] = t\n            errors[i] = np.abs(t[a] - old_value)\n\n        return [x, y]\n\n    def observe(self, sample):\n\n        if self.memory_model == 'UER':\n            self.memory.remember(sample)\n\n        elif self.memory_model == 'PER':\n            _, _, errors = self.find_targets_per([[0, sample]])\n            self.memory.remember(sample, errors[0])\n\n        else:\n            print('Invalid memory model!')\n\n    def decay_epsilon(self):\n        # slowly decrease Epsilon based on our experience\n        self.step += 1\n\n        if self.test:\n            self.epsilon = MIN_EPSILON\n            self.beta = MAX_BETA\n        else:\n            if self.step < self.max_exploration_step:\n                self.epsilon = MIN_EPSILON + (MAX_EPSILON - MIN_EPSILON) * (self.max_exploration_step - self.step)/self.max_exploration_step\n                self.beta = MAX_BETA + (MIN_BETA - MAX_BETA) * (self.max_exploration_step - self.step)/self.max_exploration_step\n            else:\n                self.epsilon = MIN_EPSILON\n\n    def replay(self):\n\n        if self.memory_model == 'UER':\n            batch = self.memory.sample(self.batch_size)\n            x, y = self.find_targets_uer(batch)\n            self.brain.train(x, y)\n\n        elif self.memory_model == 'PER':\n            [batch, batch_indices, batch_priorities] = self.memory.sample(self.batch_size)\n            x, y, errors = self.find_targets_per(batch)\n\n            normalized_batch_priorities = [float(i) / sum(batch_priorities) for i in batch_priorities]\n            importance_sampling_weights = [(self.batch_size * i) ** (-1 * self.beta)\n                                           for i in normalized_batch_priorities]\n            normalized_importance_sampling_weights = [float(i) / max(importance_sampling_weights)\n                                                      for i in importance_sampling_weights]\n            sample_weights = [errors[i] * normalized_importance_sampling_weights[i] for i in xrange(len(errors))]\n\n            self.brain.train(x, y, np.array(sample_weights))\n\n            self.memory.update(batch_indices, errors)\n\n        else:\n            print('Invalid memory model!')\n\n    def update_target_model(self):\n        if self.step % self.update_target_frequency == 0:\n            self.brain.update_target_model()"
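\n\n\n# A small self-contained illustration (not used by the training scripts) of the\n# importance-sampling weights computed in Agent.replay(); the priorities below\n# are hypothetical:\nif __name__ == '__main__':\n    batch_priorities = [0.5, 0.2, 0.2, 0.1]\n    batch_size = len(batch_priorities)\n    beta = MIN_BETA\n\n    probabilities = [float(p) / sum(batch_priorities) for p in batch_priorities]\n    weights = [(batch_size * p) ** (-1 * beta) for p in probabilities]\n    normalized_weights = [w / max(weights) for w in weights]\n    # rarely sampled transitions (low priority) receive the largest weights\n    print(normalized_weights)\n"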
  },
  {
    "path": "environments/__init__.py",
    "content": ""
  },
  {
    "path": "environments/agents_landmarks/__init__.py",
    "content": ""
  },
  {
    "path": "environments/agents_landmarks/env.py",
    "content": "\"\"\"\nCreated on Wednesday Jan  16 2019\n\n@author: Seyed Mohammad Asghari\n@github: https://github.com/s3yyy3d-m\n\"\"\"\n\nimport random\nimport operator\nimport numpy as np\nimport pygame\nimport sys\nimport os\n\n# Define some colors\nBLACK = (0, 0, 0)\nWHITE = (255, 255, 255)\nGREEN = (0, 255, 0)\nRED = (255, 0, 0)\nBLUE = (0, 0, 255)\nGRAY = (128, 128, 128)\nORANGE = (255, 128, 0)\n\n# This sets the WIDTH and HEIGHT of each grid location\nWIDTH = 60\nHEIGHT = 60\n\n# This sets the margin between each cell\nMARGIN = 1\n\n\nclass agentslandmarks:\n    UP = 0\n    DOWN = 1\n    LEFT = 2\n    RIGHT = 3\n    STAY = 4\n    A = [UP, DOWN, LEFT, RIGHT, STAY]\n    A_DIFF = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]\n\n    def __init__(self, args, current_path):\n        self.game_mode = args['game_mode']\n        self.reward_mode = args['reward_mode']\n        self.num_agents = args['agents_number']\n        self.num_landmarks = self.num_agents\n        self.grid_size = args['grid_size']\n        self.state_size = (self.num_agents + self.num_landmarks) * 2\n        self.agents_positions = []\n        self.landmarks_positions = []\n\n        self.render_flag = args['render']\n        self.recorder_flag = args['recorder']\n        # enables visualizer\n        if self.render_flag:\n            [self.screen, self.my_font] = self.gui_setup()\n            self.step_num = 1\n\n            resource_path = os.path.join(current_path, 'environments')  # The resource folder path\n            resource_path = os.path.join(resource_path, 'agents_landmarks')  # The resource folder path\n            image_path = os.path.join(resource_path, 'images')  # The image folder path\n\n            img = pygame.image.load(os.path.join(image_path, 'agent.jpg')).convert()\n            self.img_agent = pygame.transform.scale(img, (WIDTH, WIDTH))\n            img = pygame.image.load(os.path.join(image_path, 'landmark.jpg')).convert()\n            self.img_landmark = pygame.transform.scale(img, (WIDTH, WIDTH))\n            img = pygame.image.load(os.path.join(image_path, 'agent_landmark.jpg')).convert()\n            self.img_agent_landmark = pygame.transform.scale(img, (WIDTH, WIDTH))\n            img = pygame.image.load(os.path.join(image_path, 'agent_agent_landmark.jpg')).convert()\n            self.img_agent_agent_landmark = pygame.transform.scale(img, (WIDTH, WIDTH))\n            img = pygame.image.load(os.path.join(image_path, 'agent_agent.jpg')).convert()\n            self.img_agent_agent = pygame.transform.scale(img, (WIDTH, WIDTH))\n\n            if self.recorder_flag:\n                self.snaps_path = os.path.join(current_path, 'results_agents_landmarks')  # The resource folder path\n                self.snaps_path = os.path.join(self.snaps_path, 'snaps')  # The resource folder path\n\n        self.cells = []\n        self.positions_idx = []\n\n        # self.agents_collide_flag = args['collide_flag']\n        # self.penalty_per_collision = args['penalty_collision']\n        self.num_episodes = 0\n        self.terminal = False\n\n    def set_positions_idx(self):\n\n        cells = [(i, j) for i in range(0, self.grid_size) for j in range(0, self.grid_size)]\n\n        positions_idx = []\n\n        if self.game_mode == 0:\n            # first enter the positions for the landmarks and then for the agents. If the grid is n*n, then the\n            # positions are\n            #  0                1             2     ...     n-1\n            #  n              n+1           n+2     ...    
2n-1\n            # 2n             2n+1          2n+2     ...    3n-1\n            #  .                .             .       .       .\n            #  .                .             .       .       .\n            #  .                .             .       .       .\n            # (n-1)*n   (n-1)*n+1     (n-1)*n+2     ...   n*n-1\n            # , e.g.,\n            # positions_idx = [0, 6, 23, 24] where 0 and 6 are the positions of landmarks and 23 and 24 are positions\n            # of agents\n            positions_idx = []\n\n        if self.game_mode == 1:\n            positions_idx = np.random.choice(len(cells), size=self.num_landmarks + self.num_agents,\n                                             replace=False)\n\n        return [cells, positions_idx]\n\n    def reset(self):  # initialize the world\n\n        self.terminal = False\n        [self.cells, self.positions_idx] = self.set_positions_idx()\n\n        # separate the generated position indices for landmarks and agents\n        landmarks_positions_idx = self.positions_idx[0:self.num_landmarks]\n        agents_positions_idx = self.positions_idx[self.num_landmarks:self.num_landmarks + self.num_agents]\n\n        # map generated position indices to positions\n        self.landmarks_positions = [self.cells[pos] for pos in landmarks_positions_idx]\n        self.agents_positions = [self.cells[pos] for pos in agents_positions_idx]\n\n        initial_state = list(sum(self.landmarks_positions + self.agents_positions, ()))\n\n        return initial_state\n\n    def step(self, agents_actions):\n        # update the position of agents\n        self.agents_positions = self.update_positions(self.agents_positions, agents_actions)\n\n        if self.reward_mode == 0:\n\n            binary_cover_list = []\n\n            for landmark in self.landmarks_positions:\n                distances = [np.linalg.norm(np.array(landmark) - np.array(agent_pos), 1)\n                             for agent_pos in self.agents_positions]\n\n                min_dist = min(distances)\n\n                if min_dist == 0:\n                    binary_cover_list.append(0)\n                else:\n                    binary_cover_list.append(1)\n\n            # the episode ends only when every landmark is covered\n            if sum(binary_cover_list) == 0:\n                reward = 0\n                self.terminal = True\n            else:\n                reward = -1\n                self.terminal = False\n\n        if self.reward_mode == 1:\n\n            binary_cover_list = []\n\n            for landmark in self.landmarks_positions:\n                distances = [np.linalg.norm(np.array(landmark) - np.array(agent_pos), 1)\n                             for agent_pos in self.agents_positions]\n\n                min_dist = min(distances)\n\n                if min_dist == 0:\n                    binary_cover_list.append(0)\n                else:\n                    binary_cover_list.append(1)\n\n            # partial reward: -1 per uncovered landmark\n            reward = -1 * sum(binary_cover_list)\n            # check the terminal case\n            if reward == 0:\n                self.terminal = True\n            else:\n                self.terminal = False\n\n        if self.reward_mode == 2:\n\n            # full reward: negative sum over landmarks of the L1 distance to the closest agent\n            reward = 0\n            for landmark in self.landmarks_positions:\n                distances = [np.linalg.norm(np.array(landmark) - np.array(agent_pos), 1)\n                             for agent_pos in self.agents_positions]\n\n                reward -= min(distances)\n\n            # check the terminal case\n            if reward == 0:\n                self.terminal = True\n\n        new_state = list(sum(self.landmarks_positions + self.agents_positions, ()))\n\n        return [new_state, reward, self.terminal]\n\n    def update_positions(self, pos_list, act_list):\n        positions_action_applied = []\n        for idx in xrange(len(pos_list)):\n            if act_list[idx] != 4:\n                pos_act_applied = map(operator.add, pos_list[idx], self.A_DIFF[act_list[idx]])\n                # clamp the new position so it stays inside the grid\n                for i in xrange(0, 2):\n                    if pos_act_applied[i] < 0:\n                        pos_act_applied[i] = 0\n                    if pos_act_applied[i] >= self.grid_size:\n                        pos_act_applied[i] = self.grid_size - 1\n                positions_action_applied.append(tuple(pos_act_applied))\n            else:\n                positions_action_applied.append(pos_list[idx])\n\n        final_positions = []\n\n        # a move is applied only if the target cell is neither currently occupied\n        # nor targeted by another agent; otherwise the agent stays where it is\n        for pos_idx in xrange(len(pos_list)):\n            moved_pos = positions_action_applied[pos_idx]\n            other_moves = positions_action_applied[0:pos_idx] + positions_action_applied[pos_idx + 1:]\n            if moved_pos == pos_list[pos_idx]:\n                final_positions.append(pos_list[pos_idx])\n            elif moved_pos not in pos_list and moved_pos not in other_moves:\n                final_positions.append(moved_pos)\n            else:\n                final_positions.append(pos_list[pos_idx])\n\n        return final_positions\n\n    def action_space(self):\n        return len(self.A)\n\n    def render(self):\n\n        pygame.time.delay(500)\n        pygame.display.flip()\n\n        for event in pygame.event.get():\n            if event.type == pygame.QUIT:\n                sys.exit()\n\n        self.screen.fill(BLACK)\n        text = self.my_font.render(\"Step: {0}\".format(self.step_num), 1, WHITE)\n        self.screen.blit(text, (5, 15))\n\n        for row in range(self.grid_size):\n            for column in range(self.grid_size):\n                pos = (row, column)\n\n                frequency = self.find_frequency(pos, self.agents_positions)\n\n                if pos in self.landmarks_positions and frequency >= 1:\n                    if frequency == 1:\n                        self.screen.blit(self.img_agent_landmark,\n                                         ((MARGIN + WIDTH) * column + MARGIN, (MARGIN + HEIGHT) * row + MARGIN + 50))\n                    else:\n                        self.screen.blit(self.img_agent_agent_landmark,\n                                         ((MARGIN + WIDTH) * column + MARGIN, (MARGIN + HEIGHT) * row + MARGIN + 50))\n\n                elif pos in self.landmarks_positions:\n                    self.screen.blit(self.img_landmark,\n                                     ((MARGIN + WIDTH) * column + MARGIN, (MARGIN + HEIGHT) * row + MARGIN + 50))\n\n                elif frequency >= 1:\n                    if frequency == 1:\n                        self.screen.blit(self.img_agent,\n                                         ((MARGIN + WIDTH) * column + MARGIN, (MARGIN + HEIGHT) * row + MARGIN + 50))\n                    else:\n                        self.screen.blit(self.img_agent_agent,\n                                         ((MARGIN + WIDTH) * column + MARGIN, (MARGIN + HEIGHT) * row + MARGIN + 50))\n                else:\n                    pygame.draw.rect(self.screen, WHITE,\n                                     [(MARGIN + WIDTH) * column + MARGIN, (MARGIN + HEIGHT) * row + MARGIN + 50, WIDTH,\n                                      HEIGHT])\n\n        if self.recorder_flag:\n            file_name = \"%04d.png\" % self.step_num\n            pygame.image.save(self.screen, os.path.join(self.snaps_path, file_name))\n\n        if not self.terminal:\n            self.step_num += 1\n\n    def gui_setup(self):\n\n        # Initialize pygame\n        pygame.init()\n\n        # Set the HEIGHT and WIDTH of the screen\n        board_size_x = (WIDTH + MARGIN) * self.grid_size\n        board_size_y = (HEIGHT + MARGIN) * self.grid_size\n\n        window_size_x = int(board_size_x)\n        window_size_y = int(board_size_y * 1.2)\n\n        window_size = [window_size_x, window_size_y]\n        screen = pygame.display.set_mode(window_size)\n\n        # Set title of screen\n        pygame.display.set_caption(\"Agents-and-Landmarks Game\")\n\n        myfont = pygame.font.SysFont(\"monospace\", 30)\n\n        return [screen, myfont]\n\n    def find_frequency(self, a, items):\n        # number of agents currently occupying cell a\n        return items.count(a)\n
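\n\n# A quick smoke test (hypothetical parameters; rendering and recording are\n# disabled so no pygame window or image assets are needed):\nif __name__ == '__main__':\n    demo_args = {'game_mode': 1, 'reward_mode': 1, 'agents_number': 2,\n                 'grid_size': 4, 'render': False, 'recorder': False}\n    env = agentslandmarks(demo_args, os.path.dirname(__file__))\n    state = env.reset()\n    new_state, reward, done = env.step([agentslandmarks.STAY, agentslandmarks.STAY])\n    print('reward: {0}, done: {1}'.format(reward, done))\n"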
  },
  {
    "path": "environments/predators_prey/__init__.py",
    "content": ""
  },
  {
    "path": "environments/predators_prey/env.py",
    "content": "\"\"\"\nCreated on Wednesday Jan  16 2019\n\n@author: Seyed Mohammad Asghari\n@github: https://github.com/s3yyy3d-m\n\"\"\"\n\nimport random\nimport operator\nimport numpy as np\nimport pygame\nimport sys\nimport os\n\n# Define some colors\nBLACK = (0, 0, 0)\nWHITE = (255, 255, 255)\nGREEN = (0, 255, 0)\nRED = (255, 0, 0)\nBLUE = (0, 0, 255)\nGRAY = (128, 128, 128)\nORANGE = (255, 128, 0)\n\n# This sets the WIDTH and HEIGHT of each grid location\nWIDTH = 60\nHEIGHT = 60\n\n# This sets the margin between each cell\nMARGIN = 1\n\n\nclass PredatorsPrey(object):\n\n    UP = 0\n    DOWN = 1\n    LEFT = 2\n    RIGHT = 3\n    STAY = 4\n    A = [UP, DOWN, LEFT, RIGHT, STAY]\n    A_DIFF = [(-1, 0), (1, 0), (0, -1), (0, 1), (0,0)]\n\n    def __init__(self, args, current_path):\n\n        self.num_predators = args['agents_number']\n        self.num_preys = 1\n        self.preys_mode = args['preys_mode']\n        self.num_walls = 0\n        self.grid_size = args['grid_size']\n\n        self.game_mode = args['game_mode']\n        self.reward_mode = args['reward_mode']\n\n        self.state_size = (self.num_preys + self.num_predators + self.num_walls)*2\n        self.predators_positions = []\n        self.preys_positions = []\n        self.walls_positions = []\n        self.render_flag = args['render']\n        self.recorder_flag = args['recorder']\n        # enables visualizer\n        if self.render_flag:\n            [self.screen, self.my_font] = self.gui_setup()\n            self.step_num = 1\n\n            resource_path = os.path.join(current_path, 'environments')  # The resource folder path\n            resource_path = os.path.join(resource_path, 'predators_prey')  # The resource folder path\n            image_path = os.path.join(resource_path, 'images')  # The image folder path\n\n            img = pygame.image.load(os.path.join(image_path, 'predator_prey.jpg')).convert()\n            self.img_predator_prey = pygame.transform.scale(img, (WIDTH, WIDTH))\n            img = pygame.image.load(os.path.join(image_path, 'predator.jpg')).convert()\n            self.img_predator = pygame.transform.scale(img, (WIDTH, WIDTH))\n            img = pygame.image.load(os.path.join(image_path, 'prey.jpg')).convert()\n            self.img_prey = pygame.transform.scale(img, (WIDTH, WIDTH))\n\n            if self.recorder_flag:\n                self.snaps_path = os.path.join(current_path, 'results_predators_prey')  # The resource folder path\n                self.snaps_path = os.path.join(self.snaps_path, 'snaps')  # The resource folder path\n\n        self.cells = []\n        self.agents_positions_idx = []\n\n        self.num_episodes = 0\n        self.terminal = False\n\n    def set_positions_idx(self):\n\n        cells = [(i, j) for i in range(0, self.grid_size) for j in range(0, self.grid_size)]\n\n        positions_idx = []\n\n        if self.game_mode == 0:\n            # first enter the positions for the agents (predators) and the single prey. If the grid is n*n,\n            # then the positions are\n            #  0                1             2     ...     n-1\n            #  n              n+1           n+2     ...    2n-1\n            # 2n             2n+1          2n+2     ...    3n-1\n            #  .                .             .       .       .\n            #  .                .             .       .       .\n            #  .                .             .       .       .\n            # (n-1)*n   (n-1)*n+1     (n-1)*n+2     ...   
n*n-1\n            # , e.g.,\n            # positions_idx = [0, 6, 23, 24] where 0, 6, and 23 are the positions of the agents and 24 is the position\n            # of the prey\n            positions_idx = []\n\n        if self.game_mode == 1:\n            positions_idx = np.random.choice(len(cells), size=self.num_predators + self.num_preys, replace=False)\n\n        return [cells, positions_idx]\n\n    def reset(self):  # initialize the world\n        self.terminal = False\n        self.num_catches = 0\n\n        [self.cells, self.agents_positions_idx] = self.set_positions_idx()\n\n        # separate the generated position indices for walls, predators, and preys\n        walls_positions_idx = self.agents_positions_idx[0:self.num_walls]\n        predators_positions_idx = self.agents_positions_idx[self.num_walls:self.num_walls + self.num_predators]\n        preys_positions_idx = self.agents_positions_idx[self.num_walls + self.num_predators:]\n\n        # map generated position indices to positions\n        self.walls_positions = [self.cells[pos] for pos in walls_positions_idx]\n        self.predators_positions = [self.cells[pos] for pos in predators_positions_idx]\n        self.preys_positions = [self.cells[pos] for pos in preys_positions_idx]\n\n        initial_state = list(sum(self.walls_positions + self.predators_positions + self.preys_positions, ()))\n\n        return initial_state\n\n    def fix_prey(self):\n        return 4\n\n    def actor_prey_random(self):\n        return random.randrange(self.action_space())\n\n    def actor_prey_random_escape(self, prey_index):\n        # move to a random neighboring cell that is not occupied by a predator\n        prey_pos = self.preys_positions[prey_index]\n        [_, action_to_neighbors] = self.empty_neighbor_finder(prey_pos)\n\n        return random.choice(action_to_neighbors)\n\n    def neighbor_finder(self, pos):\n        # (currently unused) returns all in-grid, non-wall neighbors of pos\n        neighbors_pos = []\n        action_to_neighbor = []\n        pos_repeat = [pos for _ in xrange(4)]\n        for idx in xrange(4):\n            neighbor_pos = map(operator.add, pos_repeat[idx], self.A_DIFF[idx])\n            if neighbor_pos[0] in range(0, self.grid_size) and neighbor_pos[1] in range(0, self.grid_size)\\\n                    and neighbor_pos not in self.walls_positions:\n                neighbors_pos.append(neighbor_pos)\n                action_to_neighbor.append(idx)\n\n        neighbors_pos.append(pos)\n        action_to_neighbor.append(4)\n\n        return [neighbors_pos, action_to_neighbor]\n\n    def empty_neighbor_finder(self, pos):\n        # returns the in-grid, non-wall neighbors of pos that are free of predators\n        neighbors_pos = []\n        action_to_neighbor = []\n        pos_repeat = [pos for _ in xrange(4)]\n        for idx in xrange(4):\n            neighbor_pos = map(operator.add, pos_repeat[idx], self.A_DIFF[idx])\n            if neighbor_pos[0] in range(0, self.grid_size) and neighbor_pos[1] in range(0, self.grid_size)\\\n                    and neighbor_pos not in self.walls_positions:\n                neighbors_pos.append(neighbor_pos)\n                action_to_neighbor.append(idx)\n\n        neighbors_pos.append(pos)\n        action_to_neighbor.append(4)\n\n        empty_neighbors_pos = []\n        action_to_empty_neighbor = []\n\n        for idx in xrange(len(neighbors_pos)):\n            if tuple(neighbors_pos[idx]) not in self.predators_positions:\n                empty_neighbors_pos.append(neighbors_pos[idx])\n                action_to_empty_neighbor.append(action_to_neighbor[idx])\n\n        return [empty_neighbors_pos, action_to_empty_neighbor]\n\n    def step(self, predators_actions):\n        # update the position of preys\n        preys_actions = []\n        for prey_idx in xrange(len(self.preys_positions)):\n            if self.preys_mode == 0:\n                preys_actions.append(self.fix_prey())\n            elif self.preys_mode == 1:\n                preys_actions.append(self.actor_prey_random())\n            elif self.preys_mode == 2:\n                preys_actions.append(self.actor_prey_random_escape(prey_idx))\n            else:\n                print('Invalid mode for the prey')\n\n        self.preys_positions = self.update_positions(self.preys_positions, preys_actions)\n        # update the position of predators\n        self.predators_positions = self.update_positions(self.predators_positions, predators_actions)\n        # check whether the predators have caught the prey\n        [reward, self.terminal] = self.check_catching()\n        new_state = list(sum(self.walls_positions + self.predators_positions + self.preys_positions, ()))\n\n        return [new_state, reward, self.terminal]\n\n    def check_catching(self):\n        # the prey is caught when one predator stands on it and every other predator is\n        # adjacent to it (the L1 distances then sum to num_predators - 1), or when the\n        # prey has no predator-free neighboring cell left\n        new_preys_position = list(self.preys_positions)\n        terminal_flag = False\n\n        if self.reward_mode == 0:\n\n            distances = 0\n            for predator in self.predators_positions:\n                distances += np.linalg.norm(np.array(predator) - np.array(self.preys_positions[0]), 1)\n\n            [prey_empty_neighbours, _] = self.empty_neighbor_finder(self.preys_positions[0])\n\n            # check the terminal case\n            if int(distances) == self.num_predators - 1 or len(prey_empty_neighbours) == 0:\n                terminal_flag = True\n                reward = 0\n\n            else:\n                reward = -1\n\n        elif self.reward_mode == 1:\n\n            distances = 0\n            for predator in self.predators_positions:\n                distances += np.linalg.norm(np.array(predator) - np.array(self.preys_positions[0]), 1)\n\n            [prey_empty_neighbours, _] = self.empty_neighbor_finder(self.preys_positions[0])\n\n            # check the terminal case\n            if int(distances) == self.num_predators - 1 or len(prey_empty_neighbours) == 0:\n                terminal_flag = True\n                reward = 0\n\n            else:\n                reward = -1 * distances\n\n        else:\n            raise ValueError('Invalid reward mode: ' + str(self.reward_mode))\n\n        self.preys_positions = new_preys_position\n\n        return [reward, terminal_flag]\n\n    def update_positions(self, pos_list, act_list):\n        positions_action_applied = []\n        for idx in xrange(len(pos_list)):\n            if act_list[idx] != 4:\n                pos_act_applied = map(operator.add, pos_list[idx], self.A_DIFF[act_list[idx]])\n                # clamp the new position so it stays inside the grid\n                for i in xrange(0, 2):\n                    if pos_act_applied[i] < 0:\n                        pos_act_applied[i] = 0\n                    if pos_act_applied[i] >= self.grid_size:\n                        pos_act_applied[i] = self.grid_size - 1\n                positions_action_applied.append(tuple(pos_act_applied))\n            else:\n                positions_action_applied.append(pos_list[idx])\n\n        final_positions = []\n\n        # a move is applied only if the target cell is neither currently occupied\n        # nor targeted by another mover; otherwise the agent stays where it is\n        for pos_idx in xrange(len(pos_list)):\n            moved_pos = positions_action_applied[pos_idx]\n            other_moves = positions_action_applied[0:pos_idx] + positions_action_applied[pos_idx + 1:]\n            if moved_pos == pos_list[pos_idx]:\n                final_positions.append(pos_list[pos_idx])\n            elif moved_pos not in pos_list and moved_pos not in other_moves:\n                final_positions.append(moved_pos)\n            else:\n                final_positions.append(pos_list[pos_idx])\n\n        return final_positions\n\n    def action_space(self):\n        return len(self.A)\n\n    def render(self):\n\n        pygame.time.wait(500)\n        pygame.display.flip()\n\n        for event in pygame.event.get():\n            if event.type == pygame.QUIT:\n                sys.exit()\n\n        self.screen.fill(BLACK)\n        text = self.my_font.render(\"Step: {0}\".format(self.step_num), 1, WHITE)\n        self.screen.blit(text, (5, 15))\n\n        for row in range(self.grid_size):\n            for column in range(self.grid_size):\n                pos = (row, column)\n                if pos in self.predators_positions and pos in self.preys_positions:\n                    self.screen.blit(self.img_predator_prey,\n                                     ((MARGIN + WIDTH) * column + MARGIN, (MARGIN + HEIGHT) * row + MARGIN + 50))\n                elif pos in self.predators_positions:\n                    self.screen.blit(self.img_predator,\n                                     ((MARGIN + WIDTH) * column + MARGIN, (MARGIN + HEIGHT) * row + MARGIN + 50))\n                elif pos in self.preys_positions:\n                    self.screen.blit(self.img_prey,\n                                     ((MARGIN + WIDTH) * column + MARGIN, (MARGIN + HEIGHT) * row + MARGIN + 50))\n                else:\n                    pygame.draw.rect(self.screen, WHITE,\n                                     [(MARGIN + WIDTH) * column + MARGIN, (MARGIN + HEIGHT) * row + MARGIN + 50, WIDTH,\n                                      HEIGHT])\n\n        if self.recorder_flag:\n            file_name = \"%04d.png\" % self.step_num\n            pygame.image.save(self.screen, os.path.join(self.snaps_path, file_name))\n\n        if not self.terminal:\n            self.step_num += 1\n\n    def gui_setup(self):\n\n        # Initialize pygame\n        pygame.init()\n\n        # Set the HEIGHT and WIDTH of the screen\n        board_size_x = (WIDTH + MARGIN) * self.grid_size\n        board_size_y = (HEIGHT + MARGIN) * self.grid_size\n\n        window_size_x = int(board_size_x * 1.01)\n        window_size_y = int(board_size_y * 1.2)\n\n        window_size = [window_size_x, window_size_y]\n        screen = pygame.display.set_mode(window_size, 0, 32)\n\n        # Set title of screen\n        pygame.display.set_caption(\"Predators-and-Prey Game\")\n\n        myfont = pygame.font.SysFont(\"monospace\", 30)\n\n        return [screen, myfont]\n
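\n\n# A quick smoke test (hypothetical parameters; rendering and recording are\n# disabled so no pygame window or image assets are needed):\nif __name__ == '__main__':\n    demo_args = {'agents_number': 3, 'preys_mode': 2, 'grid_size': 5, 'game_mode': 1,\n                 'reward_mode': 1, 'render': False, 'recorder': False}\n    env = PredatorsPrey(demo_args, os.path.dirname(__file__))\n    state = env.reset()\n    new_state, reward, done = env.step([PredatorsPrey.STAY] * 3)\n    print('reward: {0}, done: {1}'.format(reward, done))\n"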
  },
  {
    "path": "predators_prey_multiagent.py",
    "content": "\"\"\"\nCreated on Wednesday Jan  16 2019\n\n@author: Seyed Mohammad Asghari\n@github: https://github.com/s3yyy3d-m\n\"\"\"\n\nimport numpy as np\nimport os\nimport random\nimport argparse\nimport pandas as pd\nfrom environments.predators_prey.env import PredatorsPrey\nfrom dqn_agent import Agent\nimport glob\n\n\nARG_LIST = ['learning_rate', 'optimizer', 'memory_capacity', 'batch_size', 'target_frequency', 'maximum_exploration',\n            'max_timestep', 'first_step_memory', 'replay_steps', 'number_nodes', 'target_type', 'memory',\n            'prioritization_scale', 'dueling', 'agents_number', 'grid_size', 'game_mode', 'reward_mode']\n\n\ndef get_name_brain(args, idx):\n\n    file_name_str = '_'.join([str(args[x]) for x in ARG_LIST])\n\n    return './results_predators_prey/weights_files/' + file_name_str + '_' + str(idx) + '.h5'\n\n\ndef get_name_rewards(args):\n\n    file_name_str = '_'.join([str(args[x]) for x in ARG_LIST])\n\n    return './results_predators_prey/rewards_files/' + file_name_str + '.csv'\n\n\ndef get_name_timesteps(args):\n\n    file_name_str = '_'.join([str(args[x]) for x in ARG_LIST])\n\n    return './results_predators_prey/timesteps_files/' + file_name_str + '.csv'\n\n\nclass Environment(object):\n\n    def __init__(self, arguments):\n        current_path = os.path.dirname(__file__)  # Where your .py file is located\n        self.env = PredatorsPrey(arguments, current_path)\n        self.episodes_number = arguments['episode_number']\n        self.render = arguments['render']\n        self.recorder = arguments['recorder']\n        self.max_ts = arguments['max_timestep']\n        self.test = arguments['test']\n        self.filling_steps = arguments['first_step_memory']\n        self.steps_b_updates = arguments['replay_steps']\n        self.max_random_moves = arguments['max_random_moves']\n\n        self.num_predators = arguments['agents_number']\n        self.num_preys = 1\n        self.preys_mode = arguments['preys_mode']\n        self.game_mode = arguments['game_mode']\n        self.grid_size = arguments['grid_size']\n\n\n    def run(self, agents, file1, file2):\n\n        total_step = 0\n        rewards_list = []\n        timesteps_list = []\n        max_score = -10000\n        for episode_num in xrange(self.episodes_number):\n            state = self.env.reset()\n            if self.render:\n                self.env.render()\n\n            random_moves = random.randint(0, self.max_random_moves)\n\n            # create randomness in initial state\n            for _ in xrange(random_moves):\n                actions = [4 for _ in xrange(len(agents))]\n                state, _, _ = self.env.step(actions)\n                if self.render:\n                    self.env.render()\n\n            # converting list of positions to an array\n            state = np.array(state)\n            state = state.ravel()\n\n            done = False\n            reward_all = 0\n            time_step = 0\n            while not done and time_step < self.max_ts:\n\n                # if self.render:\n                #     self.env.render()\n                actions = []\n                for agent in agents:\n                    actions.append(agent.greedy_actor(state))\n                next_state, reward, done = self.env.step(actions)\n                # converting list of positions to an array\n                next_state = np.array(next_state)\n                next_state = next_state.ravel()\n\n                if not self.test:\n                    for agent in agents:\n     
                   agent.observe((state, actions, reward, next_state, done))\n                        if total_step >= self.filling_steps:\n                            agent.decay_epsilon()\n                            if time_step % self.steps_b_updates == 0:\n                                agent.replay()\n                            agent.update_target_model()\n\n                total_step += 1\n                time_step += 1\n                state = next_state\n                reward_all += reward\n\n                if self.render:\n                    self.env.render()\n\n            rewards_list.append(reward_all)\n            timesteps_list.append(time_step)\n\n            print(\"Episode {p}, Score: {s}, Final Step: {t}, Goal: {g}\".format(p=episode_num, s=reward_all,\n                                                                               t=time_step, g=done))\n            if self.recorder:\n                os.system(\"ffmpeg -r 4 -i ./results_predators_prey/snaps/%04d.png -b:v 40000 -minrate 40000 -maxrate 4000k -bufsize 1835k -c:v mjpeg -qscale:v 0 \"\n                          + \"./results_predators_prey/videos/{a1}_{a2}_{a3}_{a4}_{a5}.avi\".format(a1=self.num_predators,\n                                                                                            a2=self.num_preys,\n                                                                                            a3=self.preys_mode,\n                                                                                            a4=self.game_mode,\n                                                                                            a5=self.grid_size))\n\n                files = glob.glob('./results_predators_prey/snaps/*')\n                for f in files:\n                    os.remove(f)\n\n            if not self.test:\n                if episode_num % 100 == 0:\n                    df = pd.DataFrame(rewards_list, columns=['score'])\n                    df.to_csv(file1)\n\n                    df = pd.DataFrame(timesteps_list, columns=['steps'])\n                    df.to_csv(file2)\n\n                    if total_step >= self.filling_steps:\n                        if reward_all > max_score:\n                            for agent in agents:\n                                agent.brain.save_model()\n                            max_score = reward_all\n\n\nif __name__ == \"__main__\":\n\n    parser = argparse.ArgumentParser()\n    # DQN Parameters\n    parser.add_argument('-e', '--episode-number', default=1, type=int, help='Number of episodes')\n    parser.add_argument('-l', '--learning-rate', default=0.00005, type=float, help='Learning rate')\n    parser.add_argument('-op', '--optimizer', choices=['Adam', 'RMSProp'], default='RMSProp',\n                        help='Optimization method')\n    parser.add_argument('-m', '--memory-capacity', default=1000000, type=int, help='Memory capacity')\n    parser.add_argument('-b', '--batch-size', default=64, type=int, help='Batch size')\n    parser.add_argument('-t', '--target-frequency', default=10000, type=int,\n                        help='Number of steps between updates of the target network')\n    parser.add_argument('-x', '--maximum-exploration', default=100000, type=int, help='Maximum exploration step')\n    parser.add_argument('-fsm', '--first-step-memory', default=0, type=int,\n                        help='Number of initial steps used only to fill the replay memory')\n    parser.add_argument('-rs', '--replay-steps', default=4, type=int,\n                        help='Number of steps between updates of the network')\n    parser.add_argument('-nn', '--number-nodes', default=256, type=int, help='Number of nodes in each layer of the NN')\n    parser.add_argument('-tt', '--target-type', choices=['DQN', 'DDQN'], default='DQN',\n                        help='Type of the target network: DQN or Double DQN (DDQN)')\n    parser.add_argument('-mt', '--memory', choices=['UER', 'PER'], default='PER',\n                        help='Type of the experience replay memory')\n    parser.add_argument('-pl', '--prioritization-scale', default=0.5, type=float, help='Scale for prioritization')\n    parser.add_argument('-du', '--dueling', action='store_true', help='Enable the dueling network architecture')\n\n    parser.add_argument('-gn', '--gpu-num', default='2', type=str,\n                        help='Index of the GPU to use (sets CUDA_VISIBLE_DEVICES)')\n    parser.add_argument('-test', '--test', action='store_true',\n                        help='Run in test mode (training and model saving are disabled)')\n\n    # Game Parameters\n    parser.add_argument('-k', '--agents-number', default=3, type=int, help='Number of agents (predators)')\n    parser.add_argument('-g', '--grid-size', default=5, type=int, help='Grid size')\n    parser.add_argument('-ts', '--max-timestep', default=100, type=int, help='Maximum number of timesteps per episode')\n    parser.add_argument('-gm', '--game-mode', choices=[0, 1], type=int, default=1,\n                        help='Mode of the game, '\n                             '0: prey and agents (predators) are fixed, '\n                             '1: prey and agents (predators) are random')\n\n    parser.add_argument('-rw', '--reward-mode', choices=[0, 1], type=int, default=1,\n                        help='Mode of the reward, '\n                             '0: only terminal rewards, '\n                             '1: full rewards (sum of distances of the agents to the prey)')\n\n    parser.add_argument('-rm', '--max-random-moves', default=0, type=int,\n                        help='Maximum number of random initial moves for the agents')\n\n    parser.add_argument('-evm', '--preys-mode', choices=[0, 1, 2], type=int, default=2,\n                        help='Mode of the preys: '\n                             '0: fixed, '\n                             '1: random, '\n                             '2: random escape')\n\n    # Visualization Parameters\n    parser.add_argument('-r', '--render', action='store_false',\n                        help='Turn off visualization (rendering is on by default)')\n    parser.add_argument('-re', '--recorder', action='store_true',\n                        help='Record the visualization of each episode as a movie')\n\n    args = vars(parser.parse_args())\n    os.environ['CUDA_VISIBLE_DEVICES'] = args['gpu_num']\n\n    env = Environment(args)\n\n    state_size = env.env.state_size\n    action_space = env.env.action_space()\n\n    all_agents = []\n    for b_idx in range(args['agents_number']):\n        brain_file = get_name_brain(args, b_idx)\n        all_agents.append(Agent(state_size, action_space, b_idx, brain_file, args))\n\n    rewards_file = get_name_rewards(args)\n    timesteps_file = get_name_timesteps(args)\n\n    env.run(all_agents, rewards_file, timesteps_file)\n"
  },
  {
    "path": "prioritized_experience_replay.py",
    "content": "\"\"\"\nCreated on Wednesday Jan  16 2019\n\n@author: Seyed Mohammad Asghari\n@github: https://github.com/s3yyy3d-m\n\"\"\"\n\nimport random\nfrom sum_tree import SumTree as ST\n\n\nclass Memory(object):\n    e = 0.05\n\n    def __init__(self, capacity, pr_scale):\n        self.capacity = capacity\n        self.memory = ST(self.capacity)\n        self.pr_scale = pr_scale\n        self.max_pr = 0\n\n    def get_priority(self, error):\n        return (error + self.e) ** self.pr_scale\n\n    def remember(self, sample, error):\n        p = self.get_priority(error)\n\n        self_max = max(self.max_pr, p)\n        self.memory.add(self_max, sample)\n\n    def sample(self, n):\n        sample_batch = []\n        sample_batch_indices = []\n        sample_batch_priorities = []\n        num_segments = self.memory.total() / n\n\n        for i in xrange(n):\n            left = num_segments * i\n            right = num_segments * (i + 1)\n\n            s = random.uniform(left, right)\n            idx, pr, data = self.memory.get(s)\n            sample_batch.append((idx, data))\n            sample_batch_indices.append(idx)\n            sample_batch_priorities.append(pr)\n\n        return [sample_batch, sample_batch_indices, sample_batch_priorities]\n\n    def update(self, batch_indices, errors):\n        for i in xrange(len(batch_indices)):\n            p = self.get_priority(errors[i])\n            self.memory.update(batch_indices[i], p)"
  },
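  {
    "path": "examples/per_memory_demo.py",
    "content": "\"\"\"\nHypothetical usage sketch (this file is not part of the original repository):\nit exercises the remember/sample/update cycle of the prioritized replay\nMemory with made-up transitions and TD errors. In the actual code this cycle\nis driven by dqn_agent.py.\n\"\"\"\n\nfrom prioritized_experience_replay import Memory\n\nmemory = Memory(capacity=8, pr_scale=0.5)\n\n# New transitions enter the sum tree with the largest priority seen so far,\n# so each of them is replayed at least once before its priority is corrected.\nfor i in range(8):\n    transition = ('state_%d' % i, 0, 0.0, 'state_%d' % (i + 1), False)\n    memory.remember(transition, error=float(i))\n\n# Stratified sampling: the total priority mass is split into n equal segments\n# and one transition is drawn uniformly from each, so the batch spans the\n# whole priority range.\nbatch, indices, priorities = memory.sample(4)\nfor (tree_idx, transition), priority in zip(batch, priorities):\n    print(tree_idx, priority, transition)\n\n# After a learning step, refresh the priorities with the new TD errors.\nmemory.update(indices, errors=[0.1] * len(indices))\n"
  },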
  {
    "path": "results_predators_prey/rewards_files/5e-05_RMSProp_1000000_64_10000_100000_100_0_4_256_DQN_PER_0.5_False_3_5_1_1.csv",
    "content": ",score\n0,-984.0\n"
  },
  {
    "path": "results_predators_prey/timesteps_files/5e-05_RMSProp_1000000_64_10000_100000_100_0_4_256_DQN_PER_0.5_False_3_5_1_1.csv",
    "content": ",steps\n0,100\n"
  },
  {
    "path": "sum_tree.py",
    "content": "import numpy\n\n\nclass SumTree(object):\n\n    def __init__(self, capacity):\n        self.write = 0\n        self.capacity = capacity\n        self.tree = numpy.zeros(2*capacity - 1)\n        self.data = numpy.zeros(capacity, dtype=object)\n\n    def _propagate(self, idx, change):\n        parent = (idx - 1) // 2\n\n        self.tree[parent] += change\n\n        if parent != 0:\n            self._propagate(parent, change)\n\n    def _retrieve(self, idx, s):\n        left = 2 * idx + 1\n        right = left + 1\n\n        if left >= len(self.tree):\n            return idx\n\n        if s <= self.tree[left]:\n            return self._retrieve(left, s)\n        else:\n            return self._retrieve(right, s-self.tree[left])\n\n    def total(self):\n        return self.tree[0]\n\n    def add(self, p, data):\n        idx = self.write + self.capacity - 1\n\n        self.data[self.write] = data\n        self.update(idx, p)\n\n        self.write += 1\n        if self.write >= self.capacity:\n            self.write = 0\n\n    def update(self, idx, p):\n        change = p - self.tree[idx]\n\n        self.tree[idx] = p\n        self._propagate(idx, change)\n\n    # def get_real_idx(self, data_idx):\n    #\n    #     tempIdx = data_idx - self.write\n    #     if tempIdx >= 0:\n    #         return tempIdx\n    #     else:\n    #         return tempIdx + self.capacity\n\n    def get(self, s):\n        idx = self._retrieve(0, s)\n        dataIdx = idx - self.capacity + 1\n        # realIdx = self.get_real_idx(dataIdx)\n\n        return idx, self.tree[idx], self.data[dataIdx]\n"
  },
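  {
    "path": "examples/sum_tree_demo.py",
    "content": "\"\"\"\nHypothetical usage sketch (this file is not part of the original repository):\nshows how SumTree supports sampling proportional to priority. The priorities\nbelow are made up.\n\"\"\"\n\nimport random\n\nfrom sum_tree import SumTree\n\ntree = SumTree(capacity=4)\n\n# Four dummy samples with hand-picked priorities; a higher priority claims a\n# proportionally larger slice of the interval [0, tree.total()).\nfor priority, sample in [(1.0, 'a'), (2.0, 'b'), (3.0, 'c'), (4.0, 'd')]:\n    tree.add(priority, sample)\n\nprint(tree.total())  # 10.0: the root holds the sum of all leaf priorities\n\n# Draw s uniformly from the total mass and walk down the tree; 'd' (priority\n# 4.0) should come back roughly 40% of the time.\ncounts = {}\nfor _ in range(10000):\n    s = random.uniform(0, tree.total())\n    idx, priority, sample = tree.get(s)\n    counts[sample] = counts.get(sample, 0) + 1\n\nprint(counts)\n"
  },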
  {
    "path": "uniform_experience_replay.py",
    "content": "\"\"\"\nCreated on Wednesday Jan  16 2019\n\n@author: Seyed Mohammad Asghari\n@github: https://github.com/s3yyy3d-m\n\"\"\"\n\nimport random\nfrom collections import deque\n\n\nclass Memory(object):\n\n    def __init__(self, capacity):\n        self.capacity = capacity\n        self.memory = deque(maxlen=self.capacity)\n\n    def remember(self, sample):\n        self.memory.append(sample)\n\n    def sample(self, n):\n        n = min(n, len(self.memory))\n        sample_batch = random.sample(self.memory, n)\n\n        return sample_batch"
  },
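  {
    "path": "examples/uer_demo.py",
    "content": "\"\"\"\nHypothetical usage sketch (this file is not part of the original repository):\nthe uniform replay Memory is a bounded deque sampled uniformly at random.\nTransitions are made up.\n\"\"\"\n\nfrom uniform_experience_replay import Memory\n\nmemory = Memory(capacity=5)\n\n# Once the deque is full, the oldest transition is evicted automatically.\nfor i in range(7):\n    memory.remember(('state_%d' % i, 0, 0.0, 'state_%d' % (i + 1), False))\n\n# sample() clamps the requested batch size to the number of stored samples.\nbatch = memory.sample(3)\nprint(len(batch))  # 3\n"
  }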
]