[
  {
    "path": ".gitignore",
    "content": ".idea/\ntmp/\np4src/build\napp/target\napp/src/main/resources/p4info.txt\napp/src/main/resources/bmv2.json\nptf/stratum_bmv2.log\nptf/p4rt_write.log\nptf/ptf.log\nptf/ptf.pcap\n**/*.iml\n**/*.pyc\n**/*.bak\n**/.classpath\n**/.project\n**/.settings\n**/.factorypath\nutil/.pipe_cfg\n"
  },
  {
    "path": ".travis.yml",
    "content": "dist: xenial\n\nlanguage: python\n\nservices:\n  - docker\n\npython:\n  - \"3.5\"\n\ninstall:\n  - make deps\n\nscript:\n  - make check check-sr check-gtp NGSDN_TUTORIAL_SUDO=sudo\n"
  },
  {
    "path": "EXERCISE-1.md",
    "content": "# Exercise 1: P4Runtime Basics\n\nThis exercise provides a hands-on introduction to the P4Runtime API. You will be\nasked to:\n\n1. Look at the P4 starter code\n2. Compile it for the BMv2 software switch and understand the output (P4Info\n   and BMv2 JSON files)\n3. Start Mininet with a 2x2 topology of `stratum_bmv2` switches\n4. Use the P4Runtime Shell to manually insert table entries in one of the\n   switches to provide connectivity between hosts\n\n## 1. Look at the P4 program\n\nTo get started, let's have a look at the P4 program:\n[p4src/main.p4](p4src/main.p4)\n\nIn the rest of the exercises, you will be asked to build a leaf-spine data\ncenter fabric based on IPv6. To make things easier, we provide a starter P4\nprogram which contains:\n\n* Header definitions\n* Parser implementation\n* Ingress and egress pipeline implementation (incomplete)\n* Checksum verification/update\n\nThe implementation already provides logic for L2 bridging and ACL behaviors. We\nsuggest you start by taking a **quick look** at the whole program to understand\nits structure. When you're done, try answering the following questions, while\nreferring to the P4 program to understand the different parts in more detail.\n\n**Parser**\n\n* List all the protocol headers that can be extracted from a packet.\n* Which header is expected to be the first one when parsing a new packet?\n\n**Ingress pipeline**\n\n* For the L2 bridging case, which table is used to replicate NDP requests to\n  all host-facing ports? What type of match is used in that table?\n* In the ACL table, what's the difference between the `send_to_cpu` and\n  `clone_to_cpu` actions?\n* In the apply block, what is the first table applied to a packet? Are P4Runtime\n  packet-outs treated differently?\n\n**Egress pipeline**\n\n* For multicast packets, can they be replicated to the ingress port?\n\n**Deparser**\n\n* What is the first header to be serialized on the wire and in which case?\n\n## 2. 
Compile P4 program\n\nThe next step is to compile the P4 program for the BMv2 `simple_switch` target.\nFor this, we will use the open source P4_16 compiler ([p4c][p4c]) which includes\na backend for this specific target, named `p4c-bm2-ss`.\n\nTo compile the program, open a terminal window in the exercise VM and type the\nfollowing command:\n\n```\nmake p4-build\n```\n\nYou should see the following output:\n\n```\n*** Building P4 program...\ndocker run --rm -v /home/sdn/ngsdn-tutorial:/workdir -w /workdir\n opennetworking/p4c:stable \\\n                p4c-bm2-ss --arch v1model -o p4src/build/bmv2.json \\\n                --p4runtime-files p4src/build/p4info.txt --Wdisable=unsupported \\\n                p4src/main.p4\n*** P4 program compiled successfully! Output files are in p4src/build\n```\n\nWe have instrumented the Makefile to use a containerized version of the\n`p4c-bm2-ss` compiler. If you look at the arguments when calling `p4c-bm2-ss `,\nyou will notice that we are asking the compiler to:\n\n* Compile for the v1model architecture (`--arch` argument);\n* Put the main output in `p4src/build/bmv2.json` (`-o`);\n* Generate a P4Info file in `p4src/build/p4info.txt` (`--p4runtime-files`);\n* Ignore some warnings about unsupported features (`--Wdisable=unsupported`).\n  It's ok to ignore such warnings here, as they are generated because of a bug\n  in p4c.\n\n### Compiler output\n\n#### bmv2.json\n\nThis file defines a configuration for the BMv2 `simple_switch` target in JSON\nformat. When `simple_switch` receives a new packet, it uses this configuration\nto process the packet in a way that is consistent with the P4 program.\n\nThis is quite a big file, but don't worry, there's no need to understand its\ncontent for the sake of this exercise. 
If you want to learn more, a\nspecification of the BMv2 JSON format is provided here:\n<https://github.com/p4lang/behavioral-model/blob/master/docs/JSON_format.md>\n\n#### p4info.txt\n\nThis file contains an instance of a P4Info schema for our P4 program, expressed\nusing the Protobuf Text format.\n\nTake a look at this file and try to answer the following questions:\n\n1. What is the fully qualified name of the `l2_exact_table`? What is its numeric\n   ID?\n2. To which P4 entity does the ID `16812802` belong? A table, an action, or\n   something else? What is the corresponding fully qualified name?\n3. For the `IngressPipeImpl.set_egress_port` action, how many parameters are\n   defined for this action? What is the bitwidth of the parameter named\n   `port_num`?\n4. At the end of the file, look for the definition of the\n   `controller_packet_metadata` message with name `packet_out`. Now look at the\n   definition of `header cpu_out_header_t` in the P4 program. Do you see any\n   relationship between the two?\n\n## 3. Start Mininet topology\n\nIt's now time to start an emulated network of `stratum_bmv2` switches. We will\nprogram one of the switches using the compiler output obtained in the previous\nstep.\n\nTo start the topology, use the following command:\n\n```\nmake start\n```\n\nThis command will start two Docker containers, one for Mininet and one for ONOS.\nYou can ignore the ONOS one for now; we will use it in Exercises 3 and 4.\n\nTo make sure the container started without errors, you can use the `make\nmn-log` command to show the Mininet log. Verify that you see the following\noutput (press Ctrl-C to exit):\n\n```\n$ make mn-log\ndocker-compose logs -f mininet\nAttaching to mininet\nmininet    | *** Error setting resource limits. 
Mininet's performance may be affected.\nmininet    | *** Creating network\nmininet    | *** Adding hosts:\nmininet    | h1a h1b h1c h2 h3 h4\nmininet    | *** Adding switches:\nmininet    | leaf1 leaf2 spine1 spine2\nmininet    | *** Adding links:\nmininet    | (h1a, leaf1) (h1b, leaf1) (h1c, leaf1) (h2, leaf1) (h3, leaf2) (h4, leaf2) (spine1, leaf1) (spine1, leaf2) (spine2, leaf1) (spine2, leaf2)\nmininet    | *** Configuring hosts\nmininet    | h1a h1b h1c h2 h3 h4\nmininet    | *** Starting controller\nmininet    |\nmininet    | *** Starting 4 switches\nmininet    | leaf1 stratum_bmv2 @ 50001\nmininet    | leaf2 stratum_bmv2 @ 50002\nmininet    | spine1 stratum_bmv2 @ 50003\nmininet    | spine2 stratum_bmv2 @ 50004\nmininet    |\nmininet    | *** Starting CLI:\n```\n\nYou can ignore the \"*** Error setting resource limits...\" message.\n\nThe parameters to start the Mininet container are specified in\n[docker-compose.yml](docker-compose.yml). The container is configured to execute\nthe topology script defined in [mininet/topo-v6.py](mininet/topo-v6.py).\n\nThe topology includes 4 switches, arranged in a 2x2 fabric topology, as well as\n6 hosts attached to the leaf switches. Three hosts, `h1a`, `h1b`, and `h1c`, are\nconfigured to be part of the same IPv6 subnet. In the next step you will be\nasked to use P4Runtime to insert table entries to be able to ping between\ntwo hosts of this subnet.\n\n![topo-v6](img/topo-v6.png)\n\n### stratum_bmv2 temporary files\n\nWhen starting the Mininet container, a set of files related to the execution of\neach `stratum_bmv2` instance is generated in the\n`tmp` directory. 
Examples include:\n\n* `tmp/leaf1/stratum_bmv2.log`: contains the stratum_bmv2 log for switch\n  `leaf1`;\n* `tmp/leaf1/chassis-config.txt`: the Stratum \"chassis config\" file used to\n   specify the initial port configuration to use at switch startup. This file is\n   automatically generated by the `StratumBmv2Switch` class invoked by\n   [mininet/topo-v6.py](mininet/topo-v6.py).\n* `tmp/leaf1/write-reqs.txt`: a log of all P4Runtime write requests processed by\n  the switch (the file might not exist if the switch has not received any write\n  request).\n\n## 4. Program leaf1 using P4Runtime\n\nFor this part we will use the [P4Runtime Shell][p4runtime-sh], an interactive\nPython CLI that can be used to connect to a P4Runtime server and run\nP4Runtime commands. For example, it can be used to create, read, update, and\ndelete flow table entries.\n\nThe shell can be started in two modes, with or without a P4 pipeline config. In\nthe first case, the shell will take care of pushing the given pipeline config to\nthe switch using the P4Runtime `SetForwardingPipelineConfig` RPC. 
In the second case, the\nshell will try to retrieve the P4Info that is currently configured in the switch.\n\nIn both cases, the shell makes use of the P4Info file to:\n* allow specifying runtime entities such as table entries using P4Info names\n  rather than numeric IDs (much easier to remember and read);\n* provide autocompletion;\n* validate the CLI commands.\n\nFinally, when connecting to a P4Runtime server, the specification mandates that\nwe provide a mastership election ID to be able to write state, such as the\npipeline config and table entries.\n\nTo connect the P4Runtime Shell to `leaf1` and push the pipeline configuration\nobtained before, use the following command:\n\n```\nutil/p4rt-sh --grpc-addr localhost:50001 --config p4src/build/p4info.txt,p4src/build/bmv2.json --election-id 0,1\n```\n\n`util/p4rt-sh` is a simple Python script that invokes the P4Runtime Shell\nDocker container with the given arguments. For a list of arguments you can type\n`util/p4rt-sh --help`.\n\n**Note:** we use `--grpc-addr localhost:50001` as the Mininet container is\nexecuted locally, and `50001` is the TCP port associated with the gRPC server\nexposed by `leaf1`.\n\nIf the shell started successfully, you should see the following output:\n\n```\n*** Connecting to P4Runtime server at host.docker.internal:50001 ...\n*** Welcome to the IPython shell for P4Runtime ***\nP4Runtime sh >>>\n```\n\n#### Available commands\n\nUse commands like `tables`, `actions`, `action_profiles`, `counters`,\n`direct_counters`, and others named after the P4Info message fields, to query\ninformation about P4Info objects.\n\nCommands such as `table_entry`, `action_profile_member`, `action_profile_group`,\n`counter_entry`, `direct_counter_entry`, `meter_entry`, `direct_meter_entry`,\n`multicast_group_entry`, and `clone_session_entry`, can be used to read/write\nthe corresponding P4Runtime entities.\n\nType the command name followed by `?` for information on each command,\ne.g. 
`table_entry?`.\n\nFor more information on P4Runtime Shell, check the official documentation at:\n<https://github.com/p4lang/p4runtime-shell>\n\nThe shell supports autocompletion when pressing `tab`. For example:\n\n```\ntables[\"IngressPipeImpl.<tab>\n```\n\nwill show all tables defined inside the `IngressPipeImpl` block.\n\n### Bridging connectivity test\n\nUse the following steps to verify connectivity on leaf1 after inserting the\nrequired P4Runtime table entries. For this part, you will need to use the\nMininet CLI.\n\nOn a new terminal window, attach to the Mininet CLI using `make mn-cli`.\n\nYou should see the following output:\n\n```\n*** Attaching to Mininet CLI...\n*** To detach press Ctrl-D (Mininet will keep running)\nmininet>\n```\n\n### Insert static NDP entries\n\nTo be able to ping two IPv6 hosts in the same subnet, first, the hosts need to\nresolve their respective MAC address using the Neighbor Discovery Protocol\n(NDP). This is equivalent to ARP in IPv4 networks. For example, when trying to\nping `h1b` from `h1a`, `h1a` will first generate an NDP Neighbor Solicitation\n(NS) message to resolve the MAC address of `h1b`. Once `h1b` receives the NDP NS\nmessage, it should reply with an NDP Neighbor Advertisement (NA) with its own\nMAC address. 
Now both hosts are aware of each other's MAC address and the ping\npackets can be exchanged.\n\nAs you might have noted by looking at the P4 program before, the switch should\nbe able to handle NDP packets if correctly programmed using P4Runtime (see\n`l2_ternary_table`). However, **to keep things simple for now, let's insert two\nstatic NDP entries in our hosts.**\n\nAdd an NDP entry to `h1a`, mapping `h1b`'s IPv6 address (`2001:1:1::b`) to its\nMAC address (`00:00:00:00:00:1B`):\n\n```\nmininet> h1a ip -6 neigh replace 2001:1:1::B lladdr 00:00:00:00:00:1B dev h1a-eth0\n```\n\nAnd vice versa, add an NDP entry to `h1b` to resolve `h1a`'s address:\n\n```\nmininet> h1b ip -6 neigh replace 2001:1:1::A lladdr 00:00:00:00:00:1A dev h1b-eth0\n```\n\n### Start ping\n\nStart a ping between `h1a` and `h1b`. It should not work, as we have not inserted\nany P4Runtime table entry to forward these packets.\n\n```\nmininet> h1a ping h1b\n```\n\nYou should see no output from the ping command. You can leave that command\nrunning for now.\n\n### Insert P4Runtime table entries\n\nTo be able to forward ping packets, we need to add two table entries on the\n`l2_exact_table` in `leaf1` -- one that matches on the destination MAC address\nof `h1b` and forwards traffic to port 4 (where `h1b` is attached), and\nvice versa (`h1a` is attached to port 3).\n\nLet's use the P4Runtime shell to create and insert such entries. 
Looking at the\nP4Info file, use the commands below to insert the following two entries in the\n`l2_exact_table`:\n\n| Match (Ethernet dest) | Egress port number  |\n|-----------------------|-------------------- |\n| `00:00:00:00:00:1B`   | 4                   |\n| `00:00:00:00:00:1A`   | 3                   |\n\nTo create a table entry object:\n\n```\nP4Runtime sh >>> te = table_entry[\"P4INFO-TABLE-NAME\"](action = \"<P4INFO-ACTION-NAME>\")\n```\n\nMake sure to use the fully qualified name for each entity, e.g.\n`IngressPipeImpl.l2_exact_table`, `IngressPipeImpl.set_egress_port`, etc.\n\nTo specify a match field:\n\n```\nP4Runtime sh >>> te.match[\"P4INFO-MATCH-FIELD-NAME\"] = (\"VALUE\")\n```\n\n`VALUE` can be a MAC address expressed in colon-hexadecimal notation\n(e.g., `00:11:22:AA:BB:CC`), an IP address in dot notation, or an arbitrary\nstring. Based on the information contained in the P4Info, the P4Runtime shell will\ninternally convert that value to a Protobuf byte string.\n\nTo specify the values for the table entry action parameters:\n\n```\nP4Runtime sh >>> te.action[\"P4INFO-ACTION-PARAM-NAME\"] = (\"VALUE\")\n```\n\nYou can show the table entry object in Protobuf Text format using the `print`\ncommand:\n\n```\nP4Runtime sh >>> print(te)\n```\n\nThe shell internally takes care of populating the fields of the corresponding\nProtobuf message by using the content of the P4Info file.\n\nTo insert the entry (this will issue a P4Runtime Write RPC to the switch):\n\n```\nP4Runtime sh >>> te.insert()\n```\n\nTo read table entries from the switch (this will issue a P4Runtime Read RPC):\n\n```\nP4Runtime sh >>> for te in table_entry[\"P4INFO-TABLE-NAME\"].read():\n            ...:     print(te)\n            ...:\n```\n\nAfter inserting the two entries, ping should work. 
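\n\nAs noted above, the shell internally converts match values such as MAC addresses into Protobuf byte strings. That conversion can be sketched in plain Python (`mac_to_bytes` is a hypothetical helper for illustration, not part of the shell):\n\n```python\ndef mac_to_bytes(mac: str) -> bytes:\n    # \"00:00:00:00:00:1B\" -> b'\\x00\\x00\\x00\\x00\\x00\\x1b' (six raw bytes)\n    return bytes.fromhex(mac.replace(\":\", \"\"))\n```\n\n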
Go back to the Mininet CLI\nterminal with the ping command running and verify that you see output similar\nto this:\n\n```\nmininet> h1a ping h1b\nPING 2001:1:1::b(2001:1:1::b) 56 data bytes\n64 bytes from 2001:1:1::b: icmp_seq=956 ttl=64 time=1.65 ms\n64 bytes from 2001:1:1::b: icmp_seq=957 ttl=64 time=1.28 ms\n64 bytes from 2001:1:1::b: icmp_seq=958 ttl=64 time=1.69 ms\n...\n```\n\n## Congratulations!\n\nYou have completed the first exercise! Leave Mininet running, as you will need it\nfor the following exercises.\n\n[p4c]: https://github.com/p4lang/p4c\n[p4runtime-sh]: https://github.com/p4lang/p4runtime-shell\n"
  },
  {
    "path": "EXERCISE-2.md",
    "content": "# Exercise 2: YANG, OpenConfig, and gNMI Basics\n\nThis exercise is designed to give you more exposure to YANG, OpenConfig,\nand gNMI. It includes:\n\n1. Understanding the YANG language\n2. Understanding YANG encoding\n3. Understanding YANG-enabled transport protocols (using gNMI)\n\n## 1. Understanding the YANG language\n\nWe start with a simple YANG module called `demo-port` in\n[`yang/demo-port.yang`](./yang/demo-port.yang)\n\nTake a look at the model and try to derive the structure. What are valid values\nfor each of the leaf nodes?\n\nThis model is self-contained, so it isn't too difficult to work it out. However,\nmost YANG models are defined across many files, which makes it very complicated to\nwork out the overall structure.\n\nTo make this easier, we can use a tool called `pyang` to try to visualize the\nstructure of the model.\n\nStart by entering the yang-tools Docker container:\n\n```\n$ make yang-tools\nbash-4.4#\n```\n\nNext, run `pyang` on the `demo-port.yang` model:\n\n```\nbash-4.4# pyang -f tree demo-port.yang\n```\n\nYou should see a tree representation of the `demo-port` module. Does this match\nyour expectations?\n\n------\n\n*Extra Credit:* Try to add a new leaf node\nto the `port-config` or `port-state` grouping, then rerun `pyang` and see where your\nnew leaf was added.\n\n------\n\nWe can also use `pyang` to visualize a more complicated set of models, like the\nset of OpenConfig models that Stratum uses.\n\nThese models have already been loaded into the `yang-tools` container in the\n`/models` directory.\n\n```\nbash-4.4# pyang -f tree \\\n    -p ietf \\\n    -p openconfig \\\n    -p hercules \\\n    openconfig/interfaces/openconfig-interfaces.yang \\\n    openconfig/interfaces/openconfig-if-ethernet.yang  \\\n    openconfig/platform/* \\\n    openconfig/qos/* \\\n    openconfig/system/openconfig-system.yang \\\n    hercules/openconfig-hercules-*.yang  | less\n```\n\nYou should see a tree structure of the models displayed in `less`. 
You can use\nthe arrow keys or `j`/`k` to scroll up and down. Type `q` to quit.\n\nIn the interface model, we can see the path to enable or disable an interface:\n`interfaces/interface[name]/config/enabled`\n\nWhat is the path to read the number of incoming packets (`in-pkts`) on an interface?\n\n------\n\n*Extra Credit:* Take a look at the models in the\n`/models` directory or browse them on GitHub:\n<https://github.com/openconfig/public/tree/master/release/models>\n\nTry to find the description of the `enabled` or `in-pkts` leaf nodes.\n\n*Hint:* Take a look at the `openconfig-interfaces.yang` file.\n\n------\n\n## 2. Understanding YANG encoding\n\nThere is no specific YANG data encoding, but data adhering to YANG models can be\nencoded into XML, JSON, or Protobuf (among other formats). Each of these formats\nhas its own schema format.\n\nFirst, we can look at YANG's first and canonical representation format, XML. To\nsee an empty skeleton of data encoded in XML, run:\n\n```\nbash-4.4# pyang -f sample-xml-skeleton demo-port.yang\n```\n\nThis skeleton should match the tree representation we saw in part 1.\n\nWe can also use `pyang` to generate a DSDL schema based on the YANG model:\n\n```\nbash-4.4# pyang -f dsdl demo-port.yang | xmllint --format -\n```\n\nThe first part of the schema describes the tree structure, and the second part\ndescribes the value constraints for the leaf nodes.\n\n*Extra credit:* Try adding a new speed identity (e.g. `SPEED_100G`) or changing\nthe range for `port-number` values in `demo-port.yang`, then rerun `pyang -f\ndsdl`. Do you see your changes reflected in the DSDL schema?\n\n------\n\nNext, we will look at encoding data using Protocol Buffers (protobuf). The\nprotobuf encoding is a more compact binary encoding than XML, and libraries can\nbe automatically generated for dozens of languages. 
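\n\nMuch of that compactness comes from protobuf's variable-length integer (varint) encoding for numeric fields. A minimal sketch in Python (illustrative only; real applications use the generated protobuf libraries):\n\n```python\ndef encode_varint(n: int) -> bytes:\n    # Emit 7 bits per byte, least-significant group first;\n    # the high bit marks that more bytes follow.\n    out = bytearray()\n    while True:\n        byte = n & 0x7F\n        n >>= 7\n        if n:\n            out.append(byte | 0x80)\n        else:\n            out.append(byte)\n            return bytes(out)\n```\n\nFor example, the value 300 encodes to just two bytes, where an XML document would spell out the digits as text plus the surrounding tags.\n\n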
We can use\n[`ygot`](https://github.com/openconfig/ygot)'s `proto_generator` to generate\nprotobuf messages from our YANG model.\n\n```\nbash-4.4# proto_generator -output_dir=/proto -package_name=tutorial demo-port.yang\n```\n\n`proto_generator` will generate two files:\n* `/proto/tutorial/demo_port/demo_port.proto`\n* `/proto/tutorial/enums/enums.proto`\n\nOpen `demo_port.proto` using `less`:\n\n```\nbash-4.4# less /proto/tutorial/demo_port/demo_port.proto\n```\n\nThis file contains a top-level Ports message that matches the structure defined\nin the YANG model. You can see that `proto_generator` also adds a\n`yext.schemapath` custom option to each protobuf message field that explicitly\nmaps to the YANG leaf path. Enums (like `tutorial.enums.DemoPortSPEED`) aren't\nincluded in this file, but `proto_generator` puts them in a separate file:\n`enums.proto`\n\nOpen `enums.proto` using `less`:\n\n```\nbash-4.4# less /proto/tutorial/enums/enums.proto\n```\n\nYou should see an enum for the 10GB speed, along with any other speeds that you\nadded if you completed the extra credit above.\n\n\nWe can also use `proto_generator` to build the protobuf messages for the\nOpenConfig models that Stratum uses:\n\n```\nbash-4.4# proto_generator \\\n    -generate_fakeroot \\\n    -output_dir=/proto \\\n    -package_name=openconfig \\\n    -exclude_modules=ietf-interfaces \\\n    -compress_paths \\\n    -base_import_path= \\\n    -path=ietf,openconfig,hercules \\\n    openconfig/interfaces/openconfig-interfaces.yang \\\n    openconfig/interfaces/openconfig-if-ip.yang \\\n    openconfig/lacp/openconfig-lacp.yang \\\n    openconfig/platform/openconfig-platform-linecard.yang \\\n    openconfig/platform/openconfig-platform-port.yang \\\n    openconfig/platform/openconfig-platform-transceiver.yang \\\n    openconfig/platform/openconfig-platform.yang \\\n    openconfig/system/openconfig-system.yang \\\n    openconfig/vlan/openconfig-vlan.yang \\\n    
hercules/openconfig-hercules-interfaces.yang \\\n    hercules/openconfig-hercules-platform-chassis.yang \\\n    hercules/openconfig-hercules-platform-linecard.yang \\\n    hercules/openconfig-hercules-platform-node.yang \\\n    hercules/openconfig-hercules-platform-port.yang \\\n    hercules/openconfig-hercules-platform.yang \\\n    hercules/openconfig-hercules-qos.yang \\\n    hercules/openconfig-hercules.yang\n```\n\nYou will find `openconfig.proto` and `enums.proto` in the `/proto/openconfig` directory.\n\n------\n\n*Extra Credit:* Try to find the Protobuf message fields used to enable a port or\nget the ingress packets counter in the protobuf messages.\n\n*Hint:* Searching by schemapath might help.\n\n------\n\n`ygot` can also be used to generate Go structs that adhere to the YANG model\nand that are capable of validating the structure, type, and values of data.\n\n------\n\n*Extra Credit:* If you have extra time or are interested in using YANG and Go\ntogether, try generating Go code for the `demo-port` module.\n\n```\nbash-4.4# mkdir -p /goSrc\nbash-4.4# generator -output_dir=/goSrc -package_name=tutorial demo-port.yang\n```\n\nTake a look at the Go files in `/goSrc`.\n\n------\n\nYou can now quit out of the container (using `Ctrl-D` or `exit`).\n\n## 3. Understanding YANG-enabled transport protocols\n\nThere are several YANG-model agnostic protocols that can be used to get or set\ndata that adheres to a model, like NETCONF, RESTCONF, and gNMI.\n\nThis part focuses on using the protobuf encoding over gNMI.\n\nFirst, make sure your Mininet container is still running.\n\n```\n$ make start\ndocker-compose up -d\nmininet is up-to-date\nonos is up-to-date\n```\n\nIf you see the following output, then Mininet was not running:\n\n```\nStarting mininet ... done\nStarting onos    ... 
done\n```\n\nYou will need to go back to Exercise 1 and install forwarding rules to\nre-establish pings between `h1a` and `h1b` for later parts of this exercise.\n\nIf you could not complete Exercise 1, you can use the following P4Runtime-sh\ncommands to enable connectivity:\n\n```python\nte = table_entry['IngressPipeImpl.l2_exact_table'](action='IngressPipeImpl.set_egress_port')\nte.match['hdr.ethernet.dst_addr'] = '00:00:00:00:00:1A'\nte.action['port_num'] = '3'\nte.insert()\n\nte = table_entry['IngressPipeImpl.l2_exact_table'](action='IngressPipeImpl.set_egress_port')\nte.match['hdr.ethernet.dst_addr'] = '00:00:00:00:00:1B'\nte.action['port_num'] = '4'\nte.insert()\n```\n\nNext, we will use a [gNMI client CLI](https://github.com/Yi-Tseng/Yi-s-gNMI-tool)\nto read all of the configuration from the Stratum switch `leaf1` in our\nMininet network:\n\n```\n$ util/gnmi-cli --grpc-addr localhost:50001 get /\n```\n\nThe first part of the output shows the request that was made by the CLI:\n```\nREQUEST\npath {\n}\ntype: CONFIG\nencoding: PROTO\n```\n\nThe path being requested is the empty path (which means the root of the config\ntree), the type of data is just the config tree, and the requested encoding for\nthe response is protobuf.\n\nThe second part of the output shows the response from Stratum:\n```\nRESPONSE\nnotification {\n  update {\n    path {\n    }\n    val {\n      any_val {\n        type_url: \"type.googleapis.com/openconfig.Device\"\n        value: \\252\\221\\231\\304\\001\\... TRUNCATED\n      }\n    }\n  }\n}\n```\n\nYou can see that Stratum provides a response of type `openconfig.Device`, which\nis the top-level message defined in `openconfig.proto`. 
The response is the\nbinary encoding of the data based on the protobuf message.\n\nThe value is not human readable, but we can translate the reply using a\nutility that converts between the binary and textual representations of the\nprotobuf message.\n\nWe can rerun the command, but this time pipe the output through the converter\nutility (then pipe that output to `less` to make scrolling easier):\n\n```\n$ util/gnmi-cli --grpc-addr localhost:50001 get / | util/oc-pb-decoder | less\n```\n\nThe contents of the response should now be easier to read. Scroll down to the first\n`interface`. Is the interface enabled? What is the speed of the port?\n\n------\n\n*Extra credit:* Can you find `in-pkts`? If not, why do you think they are\nmissing?\n\n------\n\nOne of the benefits of gNMI is its \"schema-less\" encoding, which allows clients\nor devices to update only the paths that need to be updated. This is\nparticularly useful for subscriptions.\n\nFirst, let's try out the schema-less representation by requesting the\nconfiguration of the port between `leaf1` and `h1a`:\n\n```\n$ util/gnmi-cli --grpc-addr localhost:50001 get \\\n    /interfaces/interface[name=leaf1-eth3]/config\n```\n\nYou should see a response containing 2 leafs under config - **enabled** and\n**health-indicator**:\n\n```\nRESPONSE\nnotification {\n  update {\n    path {\n      elem {\n        name: \"interfaces\"\n      }\n      elem {\n        name: \"interface\"\n        key {\n          key: \"name\"\n          value: \"leaf1-eth3\"\n        }\n      }\n      elem {\n        name: \"config\"\n      }\n      elem {\n        name: \"enabled\"\n      }\n    }\n    val {\n      bool_val: true\n    }\n  }\n}\nnotification {\n  update {\n    path {\n      elem {\n        name: \"interfaces\"\n      }\n      elem {\n        name: \"interface\"\n        key {\n          key: \"name\"\n          value: \"leaf1-eth3\"\n        }\n      }\n      elem {\n        name: \"config\"\n      }\n      elem {\n        name: \"health-indicator\"\n      }\n    }\n    val {\n      string_val: \"GOOD\"\n    }\n  }\n}\n```\n\nThe schema-less representation provides an `update` for each leaf, containing\nboth the path and the value of the leaf. You can confirm that the interface\nis enabled (set to `true`).\n\nNext, we will subscribe to the ingress unicast packet counters for the interface\non `leaf1` attached to `h1a` (port 3):\n\n```\n$ util/gnmi-cli --grpc-addr localhost:50001 \\\n    --interval 1000 sub-sample \\\n    /interfaces/interface[name=leaf1-eth3]/state/counters/in-unicast-pkts\n```\n\nThe first part of the output shows the request being made by the CLI:\n```\nREQUEST\nsubscribe {\n  subscription {\n    path {\n      elem {\n        name: \"interfaces\"\n      }\n      elem {\n        name: \"interface\"\n        key {\n          key: \"name\"\n          value: \"leaf1-eth3\"\n        }\n      }\n      elem {\n        name: \"state\"\n      }\n      elem {\n        name: \"counters\"\n      }\n      elem {\n        name: \"in-unicast-pkts\"\n      }\n    }\n    mode: SAMPLE\n    sample_interval: 1000\n  }\n  updates_only: true\n}\n```\n\nWe have the subscription path, the type of subscription (sampling), and the\nsampling rate (every 1000ms, or 1s).\n\nThe second part of the output is a stream of responses:\n```\nRESPONSE\nupdate {\n  timestamp: 1567895852136043891\n  update {\n    path {\n      elem {\n        name: \"interfaces\"\n      }\n      elem {\n        name: \"interface\"\n        key {\n          key: \"name\"\n          value: \"leaf1-eth3\"\n        }\n      }\n      elem {\n        name: \"state\"\n      }\n      elem {\n        name: \"counters\"\n      }\n      elem {\n        name: \"in-unicast-pkts\"\n      }\n    }\n    val {\n      uint_val: 1592\n    }\n  }\n}\n```\n\nEach response has a timestamp, path, and new value. Because we are sampling, you\nshould see a new update printed every second. 
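\n\nThe `elem` lists in these requests and responses are the structured form of the path string given on the command line. A rough sketch of that mapping in Python, handling a single `[key=value]` per element (illustrative only, not the CLI's actual parser):\n\n```python\nimport re\n\n_ELEM = re.compile(r\"^(?P<name>[^\\[]+)(?:\\[(?P<key>[^=]+)=(?P<val>[^\\]]+)\\])?$\")\n\ndef parse_gnmi_path(path: str):\n    # \"/interfaces/interface[name=leaf1-eth3]\" ->\n    # [(\"interfaces\", {}), (\"interface\", {\"name\": \"leaf1-eth3\"})]\n    elems = []\n    for seg in path.strip(\"/\").split(\"/\"):\n        m = _ELEM.match(seg)\n        keys = {m.group(\"key\"): m.group(\"val\")} if m.group(\"key\") else {}\n        elems.append((m.group(\"name\"), keys))\n    return elems\n```\n\n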
Leave this running, while we\ngenerate some traffic.\n\nIn another window, open the Mininet CLI and start a ping:\n\n```\n$ make mn-cli\n*** Attaching to Mininet CLI...\n*** To detach press Ctrl-D (Mininet will keep running)\nmininet> h1a ping h1b\n```\n\nIn the first window, you should see the `uint_val` increase by 1 every second\nwhile your ping is still running. (If it's not exactly 1, then there could be\nother traffic like NDP messages contributing to the increase.)\n\nYou can stop the gNMI subscription using `Ctrl-C`.\n\nFinally, we will monitor link events using gNMI's on-change subscriptions.\n\nStart a subscription for the operational status of the first switch's first port:\n\n```\n$ util/gnmi-cli --grpc-addr localhost:50001 sub-onchange \\\n    /interfaces/interface[name=leaf1-eth3]/state/oper-status\n```\n\nYou should immediately see a response which indicates that port 1 is `UP`:\n\n```\nRESPONSE\nupdate {\n  timestamp: 1567896668419430407\n  update {\n    path {\n      elem {\n        name: \"interfaces\"\n      }\n      elem {\n        name: \"interface\"\n        key {\n          key: \"name\"\n          value: \"leaf1-eth3\"\n        }\n      }\n      elem {\n        name: \"state\"\n      }\n      elem {\n        name: \"oper-status\"\n      }\n    }\n    val {\n      string_val: \"UP\"\n    }\n  }\n}\n```\n\nIn the shell running the Mininet CLI, let's take down the interface on `leaf1`\nconnected to `h1a`:\n\n```\nmininet> sh ifconfig leaf1-eth3 down\n```\n\nYou should see a response in your gNMI CLI window showing that the interface on\n`leaf1` connected to `h1a` is `DOWN`:\n\n```\nRESPONSE\nupdate {\n  timestamp: 1567896891549363399\n  update {\n    path {\n      elem {\n        name: \"interfaces\"\n      }\n      elem {\n        name: \"interface\"\n        key {\n          key: \"name\"\n          value: \"leaf1-eth3\"\n        }\n      }\n      elem {\n        name: \"state\"\n      }\n      elem {\n        name: \"oper-status\"\n      }\n 
   }\n    val {\n      string_val: \"DOWN\"\n    }\n  }\n}\n```\n\nWe can bring back the interface using the following Mininet command:\n\n```\nmininet> sh ifconfig leaf1-eth3 up\n```\n\nYou should see another response in your gNMI CLI window that indicates the\ninterface is `UP`.\n\n------\n\n*Extra credit:* We can also use gNMI to disable or enable an interface.\n\nLeave your gNMI subscription for operational status changes running.\n\nIn the Mininet CLI, start a ping between two hosts.\n\n```\nmininet> h1a ping h1b\n```\n\nYou should see replies being showed in the Mininet CLI.\n\nIn a third window, we will use the gNMI CLI to change the configuration value of\nthe `enabled` leaf from `true` to `false`.\n\n```\n$ util/gnmi-cli --grpc-addr localhost:50001 set \\\n    /interfaces/interface[name=leaf1-eth3]/config/enabled \\\n    --bool-val false\n```\n\nIn the gNMI set window, you should see a request indicating the new value for the\n`enabled` leaf:\n\n```\nREQUEST\nupdate {\n  path {\n    elem {\n      name: \"interfaces\"\n    }\n    elem {\n      name: \"interface\"\n      key {\n        key: \"name\"\n        value: \"leaf1-eth3\"\n      }\n    }\n    elem {\n      name: \"config\"\n    }\n    elem {\n      name: \"enabled\"\n    }\n  }\n  val {\n    bool_val: false\n  }\n}\n```\n\nIn the gNMI subscription window, you should see a new response indicating that\nthe operational status of `leaf1-eth3` is `DOWN`:\n\n```\nRESPONSE\nupdate {\n  timestamp: 1567896891549363399\n  update {\n    path {\n      elem {\n        name: \"interfaces\"\n      }\n      elem {\n        name: \"interface\"\n        key {\n          key: \"name\"\n          value: \"leaf1-eth3\"\n        }\n      }\n      elem {\n        name: \"state\"\n      }\n      elem {\n        name: \"oper-status\"\n      }\n    }\n    val {\n      string_val: \"DOWN\"\n    }\n  }\n}\n```\n\nAnd in the Mininet CLI window, you should observe that the ping has stopped\nworking.\n\nNext, we can re-nable the 
port:\n\n```\n$ util/gnmi-cli --grpc-addr localhost:50001 set \\\n    /interfaces/interface[name=leaf1-eth3]/config/enabled \\\n    --bool-val true\n```\n\nYou should see another update in the gNMI subscription window indicating the\ninterface is `UP`, and the ping should resume in the Mininet CLI window.\n\n## Congratulations!\n\nYou have completed the second exercise!\n"
  },
  {
    "path": "EXERCISE-3.md",
    "content": "# Exercise 3: Using ONOS as the Control Plane\n\nThis exercise provides a hands-on introduction to ONOS, where you will learn how\nto:\n\n1. Start ONOS along with a set of built-in apps for basic services such as\n   topology discovery\n2. Load a custom ONOS app and pipeconf\n3. Push a configuration file to ONOS to discover and control the\n   `stratum_bmv2` switches using P4Runtime and gNMI\n4. Access the ONOS CLI and UI to verify that all `stratum_bmv2` switches have\n   been discovered and configured correctly.\n\n## 1. Start ONOS\n\nIn a terminal window, type:\n\n```\n$ make restart\n```\n\nThis command will restart the ONOS and Mininet containers, in case those were\nrunning from previous exercises, clearing any previous state.\n\nThe parameters to start the ONOS container are specified in\n[docker-compose.yml](docker-compose.yml). The container is configured to pass the\nenvironment variable `ONOS_APPS`, used to define the built-in apps to load\nduring startup.\n\nIn our case, this variable has value:\n\n```\nONOS_APPS=gui2,drivers.bmv2,lldpprovider,hostprovider\n```\n\nrequesting ONOS to pre-load the following built-in apps:\n\n* `gui2`: ONOS web user interface (available at <http://localhost:8181/onos/ui>)\n* `drivers.bmv2`: BMv2/Stratum drivers based on P4Runtime, gNMI, and gNOI\n* `lldpprovider`: LLDP-based link discovery application (used in Exercise 4)\n* `hostprovider`: Host discovery application (used in Exercise 4)\n\n\nOnce ONOS has started, you can check its log using the `make onos-log` command.\n\nTo **verify that all required apps have been activated**, run the following\ncommand in a new terminal window to access the ONOS CLI. 
Use password `rocks`\nwhen prompted:\n\n```\n$ make onos-cli\n```\n\nIf you see the following error, then ONOS is still starting; wait a minute and try again.\n```\nssh_exchange_identification: Connection closed by remote host\nmake: *** [onos-cli] Error 255\n```\n\nWhen you see the password prompt, type the default password `rocks`.\nThen type the following command in the ONOS CLI to show the list of running apps:\n\n```\nonos> apps -a -s\n```\n\nMake sure you see the following list of apps displayed:\n\n```\n*   5 org.onosproject.protocols.grpc        2.2.2    gRPC Protocol Subsystem\n*   6 org.onosproject.protocols.gnmi        2.2.2    gNMI Protocol Subsystem\n*  29 org.onosproject.drivers               2.2.2    Default Drivers\n*  34 org.onosproject.generaldeviceprovider 2.2.2    General Device Provider\n*  35 org.onosproject.protocols.p4runtime   2.2.2    P4Runtime Protocol Subsystem\n*  36 org.onosproject.p4runtime             2.2.2    P4Runtime Provider\n*  37 org.onosproject.drivers.p4runtime     2.2.2    P4Runtime Drivers\n*  42 org.onosproject.protocols.gnoi        2.2.2    gNOI Protocol Subsystem\n*  52 org.onosproject.hostprovider          2.2.2    Host Location Provider\n*  53 org.onosproject.lldpprovider          2.2.2    LLDP Link Provider\n*  66 org.onosproject.drivers.gnoi          2.2.2    gNOI Drivers\n*  70 org.onosproject.drivers.gnmi          2.2.2    gNMI Drivers\n*  71 org.onosproject.pipelines.basic       2.2.2    Basic Pipelines\n*  72 org.onosproject.drivers.stratum       2.2.2    Stratum Drivers\n* 161 org.onosproject.gui2                  2.2.2    ONOS GUI2\n* 181 org.onosproject.drivers.bmv2          2.2.2    BMv2 Drivers\n```\n\nThere are more apps listed here than those defined in `$ONOS_APPS`. That's\nbecause each app in ONOS can define other apps as dependencies. 
When loading an\napp, ONOS automatically resolves dependencies and loads all other required apps.\n\n#### Disable link discovery service\n\nLink discovery will be the focus of the next exercise. For now, the P4 program\ndoes not yet support this service. We suggest you deactivate it for the rest of\nthis exercise, to avoid running into issues. Use the following ONOS\nCLI command to deactivate the link discovery service.\n\n```\nonos> app deactivate lldpprovider\n```\n\nTo exit the ONOS CLI, use `Ctrl-D`. This will stop the CLI process\nbut will not affect ONOS itself.\n\n#### Restart ONOS in case of errors\n\nIf anything goes wrong and you need to kill ONOS, you can use the `make restart`\ncommand to restart both Mininet and ONOS.\n\n## 2. Build app and register pipeconf\n\nInside the [app/](./app) directory you will find a starter implementation of an\nONOS app that includes a pipeconf. The pipeconf-related files are the following:\n\n* [PipeconfLoader.java][PipeconfLoader.java]: A component that registers the\n  pipeconf at app activation;\n* [InterpreterImpl.java][InterpreterImpl.java]: An implementation of the\n  `PipelineInterpreter` driver behavior;\n* [PipelinerImpl.java][PipelinerImpl.java]: An implementation of the `Pipeliner`\n  driver behavior.\n\nTo build the ONOS app (including the pipeconf), run the following\ncommand in the second terminal window:\n\n```\n$ make app-build\n```\n\nThis will produce a binary file `app/target/ngsdn-tutorial-1.0-SNAPSHOT.oar`\nthat we will use to install the application in the running ONOS instance.\n\nUse the following command to load the app into ONOS and activate it:\n\n```\n$ make app-reload\n```\n\nAfter the app has been activated, you should see the following messages in the\nONOS log (`make onos-log`) signaling that the pipeconf has been registered and\nthe different app components have been started:\n\n```\nINFO  [PiPipeconfManager] New pipeconf registered: org.onosproject.ngsdn-tutorial (fingerprint=...)\nINFO  
[MainComponent] Started\n```\n\nAlternatively, you can show the list of registered pipeconfs using the ONOS CLI\n(`make onos-cli`) command:\n\n```\nonos> pipeconfs\n```\n\n## 3. Push netcfg to ONOS\n\nNow that ONOS and Mininet are running, it's time to let ONOS know how to reach\nthe four switches and control them. We do this by using a configuration file\nlocated at [mininet/netcfg.json](mininet/netcfg.json), which contains\ninformation such as:\n\n* The gRPC address and port associated with each Stratum device;\n* The ONOS driver to use for each device, `stratum-bmv2` in this case;\n* The pipeconf to use for each device, `org.onosproject.ngsdn-tutorial` in this\n  case, as defined in [PipeconfLoader.java][PipeconfLoader.java];\n* Configuration specific to our custom app (`fabricDeviceConfig`)\n\nThis file also contains information related to the IPv6 configuration associated\nwith each switch interface. We will discuss this information in more detail in\nthe next exercises.\n\nIn a terminal window, type:\n\n```\n$ make netcfg\n```\n\nThis command will push the `netcfg.json` to ONOS, triggering discovery and\nconfiguration of the 4 switches.\n\nCheck the ONOS log (`make onos-log`); you should see messages like:\n\n```\nINFO  [GrpcChannelControllerImpl] Creating new gRPC channel grpc:///mininet:50001?device_id=1...\n...\nINFO  [StreamClientImpl] Setting mastership on device:leaf1...\n...\nINFO  [PipelineConfigClientImpl] Setting pipeline config for device:leaf1 to org.onosproject.ngsdn-tutorial...\n...\nINFO  [GnmiDeviceStateSubscriber] Started gNMI subscription for 6 ports on device:leaf1\n...\nINFO  [DeviceManager] Device device:leaf1 port [leaf1-eth1](1) status changed (enabled=true)\nINFO  [DeviceManager] Device device:leaf1 port [leaf1-eth2](2) status changed (enabled=true)\nINFO  [DeviceManager] Device device:leaf1 port [leaf1-eth3](3) status changed (enabled=true)\nINFO  [DeviceManager] Device device:leaf1 port [leaf1-eth4](4) status changed 
(enabled=true)\nINFO  [DeviceManager] Device device:leaf1 port [leaf1-eth5](5) status changed (enabled=true)\nINFO  [DeviceManager] Device device:leaf1 port [leaf1-eth6](6) status changed (enabled=true)\n```\n\n## 4. Use the ONOS CLI to verify the network configuration\n\nAccess the ONOS CLI using `make onos-cli`. Enter the following command to\nverify the network config pushed earlier:\n\n```\nonos> netcfg\n```\n\n#### Devices\n\nVerify that all 4 devices have been discovered and are connected:\n\n```\nonos> devices -s\nid=device:leaf1, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.ngsdn-tutorial\nid=device:leaf2, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.ngsdn-tutorial\nid=device:spine1, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.ngsdn-tutorial\nid=device:spine2, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.ngsdn-tutorial\n```\n\nMake sure you see `available=true` for all devices. 
That means ONOS has a gRPC\nchannel open to the device and the pipeline configuration has been pushed.\n\n\n#### Ports\n\nCheck port information, obtained by ONOS by performing a gNMI Get RPC for the\nOpenConfig Interfaces model:\n\n```\nonos> ports -s device:spine1\nid=device:spine1, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.ngsdn-tutorial\n  port=[spine1-eth1](1), state=enabled, type=copper, speed=10000 , ...\n  port=[spine1-eth2](2), state=enabled, type=copper, speed=10000 , ...\n```\n\nCheck port statistics, also obtained by querying the OpenConfig Interfaces model\nvia gNMI:\n\n```\nonos> portstats device:spine1\ndeviceId=device:spine1\n   port=[spine1-eth1](1), pktRx=114, pktTx=114, bytesRx=14022, bytesTx=14136, pktRxDrp=0, pktTxDrp=0, Dur=173\n   port=[spine1-eth2](2), pktRx=114, pktTx=114, bytesRx=14022, bytesTx=14136, pktRxDrp=0, pktTxDrp=0, Dur=173\n\n```\n\n#### Flow rules and groups\n\nCheck the ONOS flow rules. You should see three flow rules for each device. For\nexample, to show all flow rules installed so far on device `leaf1`:\n\n```\nonos> flows -s any device:leaf1\ndeviceId=device:leaf1, flowRuleCount=3\n    ADDED, bytes=0, packets=0, table=IngressPipeImpl.acl_table, priority=40000, selector=[ETH_TYPE:arp], treatment=[immediate=[IngressPipeImpl.clone_to_cpu()]]\n    ADDED, bytes=0, packets=0, table=IngressPipeImpl.acl_table, priority=40000, selector=[ETH_TYPE:ipv6, IP_PROTO:58, ICMPV6_TYPE:136], treatment=[immediate=[IngressPipeImpl.clone_to_cpu()]]\n    ADDED, bytes=0, packets=0, table=IngressPipeImpl.acl_table, priority=40000, selector=[ETH_TYPE:ipv6, IP_PROTO:58, ICMPV6_TYPE:135], treatment=[immediate=[IngressPipeImpl.clone_to_cpu()]]\n\n```\n\nThis list includes flow rules installed by the ONOS built-in services such as\n`hostprovider`. We'll talk more about these services in the next exercise.\n\nTo show all groups installed so far, you can use the `groups` command. 
For\nexample, to show groups on `leaf1`:\n```\nonos> groups any device:leaf1\ndeviceId=device:leaf1, groupCount=1\n   id=0x63, state=ADDED, type=CLONE, bytes=0, packets=0, appId=org.onosproject.core, referenceCount=0\n       id=0x63, bucket=1, bytes=0, packets=0, weight=-1, actions=[OUTPUT:CONTROLLER]\n```\n\n\"Group\" is an ONOS northbound abstraction that is mapped internally to different\ntypes of P4Runtime entities. In this case, you should see 1 group of type\n`CLONE`, internally mapped to a P4Runtime `CloneSessionEntry`, here used to\nclone packets to the controller via packet-in. We'll talk more about controller\npacket-in/out in the next exercise.\n\n## 5. Visualize the topology on the ONOS web UI\n\nUsing the ONF Cloud Tutorial Portal, access the ONOS UI.\nIf you are running the VM on your laptop, open up a browser (e.g. Firefox) to\n<http://127.0.0.1:8181/onos/ui>.\n\nWhen asked, use the username `onos` and password `rocks`.\n\nYou should see 4 devices in the topology view, corresponding to the 4 switches\nof our 2x2 fabric. Press `L` to show device labels. Because link discovery is\nnot enabled, the ONOS UI will not show any links between the devices.\n\nWhile here, feel free to interact with and discover the ONOS UI. For more\ninformation on how to use the ONOS web UI, please refer to this guide:\n\n<https://wiki.onosproject.org/x/OYMg>\n\nThere is a way to show the pipeconf details for a given device. Can you find it?\n\n#### Pipeconf UI\n\nIn the ONOS topology view, click on one of the switches (e.g., `device:leaf1`)\nand the Device Details panel appears. 
In that panel, click on the Pipeconf icon\n(the last one) to open the Pipeconf view for that device.\n\n![device-leaf1-details-panel](img/device-leaf1-details-panel.png)\n\nHere you will find info on the pipeconf currently used by the specific device,\nincluding details of the P4 tables.\n\n![onos-gui-pipeconf-leaf1](img/onos-gui-pipeconf-leaf1.png)\n\nClicking a table row brings up the details panel, showing details of the match\nfields, actions, action parameter bit widths, etc.\n\n\n## Congratulations!\n\nYou have completed the third exercise! If you're feeling ambitious,\nyou can do the extra credit steps below.\n\n### Extra Credit: Inspect stratum_bmv2 internal state\n\nYou can use the P4Runtime shell to dump all table entries currently\ninstalled on the switch by ONOS. In a separate terminal window, start a\nP4Runtime shell for leaf1:\n\n```\n$ util/p4rt-sh --grpc-addr localhost:50001 --election-id 0,1\n```\n\nAt the shell prompt, type the following command to dump all entries from the ACL\ntable:\n\n```\nP4Runtime sh >>> for te in table_entry[\"IngressPipeImpl.acl_table\"].read():\n            ...:     print(te)\n            ...:\n```\n\nYou should see exactly three entries, each one corresponding to a flow rule\nin ONOS. 
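The shell prints match values as big-endian byte strings, exactly as they are encoded in P4Runtime messages. As a small illustrative sketch (plain Python; the `decode` helper is ours, not part of the P4Runtime shell), this is how those byte strings map back to familiar protocol constants:

```python
# Illustrative sketch: P4Runtime encodes match values as big-endian byte
# strings. The decode() helper is ours, not part of the P4Runtime shell.

def decode(value: bytes) -> int:
    """Interpret a P4Runtime match value as an unsigned integer."""
    return int.from_bytes(value, byteorder="big")

# Values as printed for the ACL entry that matches NDP NS packets:
assert decode(b"\x86\xdd") == 0x86DD  # hdr.ethernet.ether_type -> IPv6
assert decode(b"\x3a") == 58          # hdr.ipv6.next_hdr -> ICMPv6
assert decode(b"\x87") == 135         # hdr.icmpv6.type -> Neighbor Solicitation
```
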
For example, the flow rule matching on NDP NS packets should look\nlike this in the P4Runtime shell:\n\n```\ntable_id: 33557865 (\"IngressPipeImpl.acl_table\")\nmatch {\n  field_id: 4 (\"hdr.ethernet.ether_type\")\n  ternary {\n    value: \"\\\\x86\\\\xdd\"\n    mask: \"\\\\xff\\\\xff\"\n  }\n}\nmatch {\n  field_id: 5 (\"hdr.ipv6.next_hdr\")\n  ternary {\n    value: \"\\\\x3a\"\n    mask: \"\\\\xff\"\n  }\n}\nmatch {\n  field_id: 6 (\"hdr.icmpv6.type\")\n  ternary {\n    value: \"\\\\x87\"\n    mask: \"\\\\xff\"\n  }\n}\naction {\n  action {\n    action_id: 16782152 (\"IngressPipeImpl.clone_to_cpu\")\n  }\n}\npriority: 40001\n```\n\n### Extra Credit: Show ONOS gRPC log\n\nONOS provides a debugging feature that dumps all gRPC messages\nexchanged with a device to a file. To enable this feature, type the\nfollowing command in the ONOS CLI (`make onos-cli`):\n\n```\nonos> cfg set org.onosproject.grpc.ctl.GrpcChannelControllerImpl enableMessageLog true\n```\n\nCheck the content of directory `tmp/onos` in the `ngsdn-tutorial` root. You\nshould see many files, some of which start with the name `grpc___mininet_`. 
You\nshould see four such files, one file per device, named after the gRPC\nport used to establish the gRPC channel.\n\nCheck the content of one of these files; you should see a dump of the gRPC messages\nin Protobuf text format, for messages like:\n\n* P4Runtime `PacketIn` and `PacketOut`;\n* P4Runtime Read RPCs used to periodically dump table entries and read counters;\n* gNMI Get RPCs to read port counters.\n\nRemember to disable the gRPC message logging in ONOS when you're done, to avoid\naffecting performance:\n\n```\nonos> cfg set org.onosproject.grpc.ctl.GrpcChannelControllerImpl enableMessageLog false\n```\n\n[PipeconfLoader.java]: app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/PipeconfLoader.java\n[InterpreterImpl.java]: app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/InterpreterImpl.java\n[PipelinerImpl.java]: app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/PipelinerImpl.java\n"
  },
  {
    "path": "EXERCISE-4.md",
    "content": "# Exercise 4: Enabling ONOS Built-in Services\n\nIn this exercise, you will integrate ONOS built-in services for link and\nhost discovery with your P4 program. Such built-in services are based on the\nability of switches to send data plane packets to the controller (packet-in) and\nvice versa (packet-out).\n\nTo make this work with your P4 program, you will need to apply simple changes to\nthe starter P4 code, validate the P4 changes using PTF-based data plane unit\ntests, and finally, apply changes to the pipeconf Java implementation to enable\nONOS's built-in apps to use packet-in/out via P4Runtime.\n\nThe exercise has two parts:\n\n1. Enable packet I/O and verify link discovery\n2. Host discovery & L2 bridging\n\n\n## Part 1: Enable packet I/O and verify link discovery\n\nWe start by reviewing how controller packet I/O works with P4Runtime.\n\n### Background: Controller packet I/O with P4Runtime\n\nThe P4 program under [p4src/main.p4](p4src/main.p4) provides support for\ncarrying arbitrary metadata in P4Runtime `PacketIn` and `PacketOut` messages.\nTwo special headers are defined and annotated with the standard P4 annotation\n`@controller_header`:\n\n```p4\n@controller_header(\"packet_in\")\nheader cpu_in_header_t {\n    port_num_t ingress_port;\n    bit<7> _pad;\n}\n\n@controller_header(\"packet_out\")\nheader cpu_out_header_t {\n    port_num_t egress_port;\n    bit<7> _pad;\n}\n```\n\nThese headers are used to carry the original switch ingress port of a packet-in,\nand to specify the intended output port for a packet-out.\n\nWhen the P4Runtime agent in Stratum receives a packet from the switch CPU port,\nit expects to find the `cpu_in_header_t` header as the first one in the frame.\nIndeed, it looks at the `controller_packet_metadata` part of the P4Info file to\ndetermine the number of bits to strip at the beginning of the frame and to\npopulate the corresponding metadata field of the `PacketIn` message, including\nthe ingress port as in this 
case.\n\nSimilarly, when Stratum receives a P4Runtime `PacketOut` message, it uses the\nvalues found in the `PacketOut`'s metadata fields to serialize and prepend a\n`cpu_out_header_t` to the frame before feeding it to the pipeline parser.\n\n\n### 1. Modify P4 program\n\nThe P4 starter code already provides support for the following capabilities:\n\n* Parse the `cpu_out` header (if the ingress port is the CPU one)\n* Emit the `cpu_in` header as the first one in the deparser\n* Provide an ACL table with ternary match fields and an action to send or clone\n  packets to the CPU port (used to generate packet-ins)\n\nSomething is missing to provide complete packet-in/out support, and you have to\nmodify the P4 program to implement it:\n\n1. Open `p4src/main.p4`;\n2. Modify the code where requested (look for `TODO EXERCISE 4`);\n3. Compile the modified P4 program using the `make p4-build` command. Make sure\n   to address any compiler errors before continuing.\n\nAt this point, our P4 pipeline should be ready for testing.\n\n### 2. Run PTF tests\n\nBefore starting ONOS, let's make sure the P4 changes work as expected by\nrunning some PTF tests. But first, you need to apply a few simple changes to the\ntest case implementation.\n\nOpen file `ptf/tests/packetio.py` and modify wherever requested (look for `TODO\nEXERCISE 4`). This test file provides two test cases: one for packet-in and\none for packet-out. In both test cases, you will have to modify the implementation to\nuse the same names for P4Runtime entities as specified in the P4Info file\nobtained after compiling the P4 program (`p4src/build/p4info.txt`).\n\nTo run all the tests for this exercise:\n\n    make p4-test TEST=packetio\n\nThis command will run all tests in the `packetio` group (i.e. the content of\n`ptf/tests/packetio.py`). 
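These test cases revolve around the 2-byte CPU headers described in the background section. As a mental model only (a sketch assuming `port_num_t` is `bit<9>`, consistent with the 7-bit pad; helper names are ours), packing and stripping that header can be expressed as:

```python
import struct

# Sketch of the 2-byte CPU header layout from main.p4: a 9-bit port number
# followed by a 7-bit pad (assuming port_num_t is bit<9>).

def pack_cpu_header(port: int) -> bytes:
    """Serialize a port number into the 2-byte header prepended to a frame."""
    return struct.pack("!H", (port & 0x1FF) << 7)

def unpack_cpu_header(hdr: bytes) -> int:
    """Recover the port number from the first 2 bytes of a CPU-port frame."""
    (value,) = struct.unpack("!H", hdr[:2])
    return value >> 7

# A packet-out for port 5 carries this header in front of the Ethernet frame:
header = pack_cpu_header(5)   # b"\x02\x80"
assert unpack_cpu_header(header) == 5
```
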
To run a specific test case, you can use:\n\n    make p4-test TEST=<PYTHON MODULE>.<TEST CLASS NAME>\n\nFor example:\n\n    make p4-test TEST=packetio.PacketOutTest\n\n#### Check for regressions\n\nTo make sure the new changes do not break other features, also run the\ntests for L2 bridging support.\n\n    make p4-test TEST=bridging\n\nIf all tests succeed, congratulations! You can move to the next step.\n\n#### How to debug failing tests?\n\nWhen running PTF tests, multiple files are produced that you can use to spot bugs:\n\n* `ptf/stratum_bmv2.log`: BMv2 log with trace level (showing tables matched and\n  other info for each packet)\n* `ptf/p4rt_write.log`: Log of all P4Runtime Write requests\n* `ptf/ptf.pcap`: PCAP file with all packets sent and received during tests\n  (the tutorial VM comes with Wireshark for easier visualization)\n* `ptf/ptf.log`: PTF log of all packet operations (sent and received)\n\n### 3. Modify ONOS pipeline interpreter\n\nThe `PipelineInterpreter` is the ONOS driver behavior used to map the ONOS\nrepresentation of packet-in/out to one that is consistent with your P4\npipeline (along with other similar mappings).\n\nSpecifically, to use services like link and host discovery, ONOS built-in apps\nneed to be able to set the output port of a packet-out and access the original\ningress port of a packet-in.\n\nIn the following, you will be asked to apply a few simple changes to the\n`PipelineInterpreter` implementation:\n\n1. Open file:\n   `app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/InterpreterImpl.java`\n\n2. 
Modify wherever requested (look for `TODO EXERCISE 4`), specifically:\n\n    * Look for the method named `buildPacketOut` and modify the implementation to use the\n      same name of the **egress port** metadata field for the `packet_out`\n      header as specified in the P4Info file.\n\n    * Look for the method `mapInboundPacket` and modify the implementation to use the\n      same name of the **ingress port** metadata field for the `packet_in`\n      header as specified in the P4Info file.\n\n3. Build the ONOS app (including the pipeconf) with the command `make app-build`.\n\nThe P4 compiler outputs (`bmv2.json` and `p4info.txt`) are copied into the app\nresource folder (`app/src/main/resources`) and will be included in the ONOS app\nbinary. The copy that gets included in the ONOS app will be the one that gets\ndeployed by ONOS to the device after the connection is initiated.\n\n### 4. Restart ONOS\n\n**Note:** ONOS should already be running, and in theory, there should be no need\nto restart it. However, while ONOS supports reloading the pipeconf with a\nmodified one (e.g., with updated `bmv2.json` and `p4info.txt`), the version of\nONOS used in this tutorial (2.2.0, the most recent at the time of writing) does\nnot support reloading the pipeconf behavior classes, in which case the old\nclasses will still be used. For this reason, to reload a modified version of\n`InterpreterImpl.java`, you need to kill ONOS first.\n\nIn a terminal window, type:\n\n```\n$ make restart\n```\n\nThis command will restart all containers, removing any state from previous\nexecutions, including ONOS.\n\nWait approximately 20 seconds for ONOS to complete booting, or check\nthe ONOS log (`make onos-log`) until no more messages are shown.\n\n### 5. Load updated app and register pipeconf\n\nIn a terminal window, type:\n\n```\n$ make app-reload\n```\n\nThis command will upload to ONOS and activate the app binary previously built (located at `app/target/ngsdn-tutorial-1.0-SNAPSHOT.oar`).\n\n### 6. 
Push netcfg to ONOS to trigger device and link discovery\n\nIn a terminal window, type:\n\n```\n$ make netcfg\n```\n\nUse the ONOS CLI to verify that all devices have been discovered:\n\n```\nonos> devices -s\nid=device:leaf1, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.ngsdn-tutorial\nid=device:leaf2, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.ngsdn-tutorial\nid=device:spine1, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.ngsdn-tutorial\nid=device:spine2, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.ngsdn-tutorial\n```\n\nVerify that all links have been discovered. You should see 8 links in total,\neach one representing a direction of the 4 bidirectional links of our Mininet\ntopology:\n\n```\nonos> links\nsrc=device:leaf1/1, dst=device:spine1/1, type=DIRECT, state=ACTIVE, expected=false\nsrc=device:leaf1/2, dst=device:spine2/1, type=DIRECT, state=ACTIVE, expected=false\nsrc=device:leaf2/1, dst=device:spine1/2, type=DIRECT, state=ACTIVE, expected=false\nsrc=device:leaf2/2, dst=device:spine2/2, type=DIRECT, state=ACTIVE, expected=false\nsrc=device:spine1/1, dst=device:leaf1/1, type=DIRECT, state=ACTIVE, expected=false\nsrc=device:spine1/2, dst=device:leaf2/1, type=DIRECT, state=ACTIVE, expected=false\nsrc=device:spine2/1, dst=device:leaf1/2, type=DIRECT, state=ACTIVE, expected=false\nsrc=device:spine2/2, dst=device:leaf2/2, type=DIRECT, state=ACTIVE, expected=false\n```\n\n**If you don't see a link**, check the ONOS log (`make onos-log`) for any\nerrors with packet-in/out handling. In case of errors, it's possible that you\nhave not modified `InterpreterImpl.java` correctly. In this case, go back to\nstep 3.\n\nYou should see 5 flow rules for each device. 
For example,\nto show all flow rules installed so far on device `leaf1`:\n\n```\nonos> flows -s any device:leaf1\ndeviceId=device:leaf1, flowRuleCount=5\n    ADDED, ..., table=IngressPipeImpl.acl_table, priority=40000, selector=[ETH_TYPE:lldp], treatment=[immediate=[IngressPipeImpl.clone_to_cpu()]]\n    ADDED, ..., table=IngressPipeImpl.acl_table, priority=40000, selector=[ETH_TYPE:bddp], treatment=[immediate=[IngressPipeImpl.clone_to_cpu()]]\n    ADDED, ..., table=IngressPipeImpl.acl_table, priority=40000, selector=[ETH_TYPE:arp], treatment=[immediate=[IngressPipeImpl.clone_to_cpu()]]\n    ADDED, ..., table=IngressPipeImpl.acl_table, priority=40000, selector=[ETH_TYPE:ipv6, IP_PROTO:58, ICMPV6_TYPE:136], treatment=[immediate=[IngressPipeImpl.clone_to_cpu()]]\n    ADDED, ..., table=IngressPipeImpl.acl_table, priority=40000, selector=[ETH_TYPE:ipv6, IP_PROTO:58, ICMPV6_TYPE:135], treatment=[immediate=[IngressPipeImpl.clone_to_cpu()]]\n    ...\n```\n\nThese flow rules are the result of the translation of flow objectives generated\nby the `hostprovider` and `lldpprovider` built-in apps.\n\nFlow objectives are translated by the pipeconf, which provides a `Pipeliner`\nbehavior implementation ([PipelinerImpl.java][PipelinerImpl.java]). These flow\nrules specify a match key by using ONOS standard/known header fields, such as\n`ETH_TYPE`, `ICMPV6_TYPE`, etc. These types are mapped to P4Info-specific match\nfields by the pipeline interpreter\n([InterpreterImpl.java][InterpreterImpl.java]; look for method\n`mapCriterionType`)\n\nThe `hostprovider` app provides host discovery capabilities by intercepting ARP\n(`selector=[ETH_TYPE:arp]`) and NDP packets (`selector=[ETH_TYPE:ipv6,\nIP_PROTO:58, ICMPV6_TYPE:...]`), which are cloned to the controller\n(`treatment=[immediate=[IngressPipeImpl.clone_to_cpu()]]`). 
Similarly,\n`lldpprovider` generates flow objectives to intercept LLDP and BDDP packets\n(`selector=[ETH_TYPE:lldp]` and `selector=[ETH_TYPE:bddp]`) periodically\nemitted on all devices' ports as P4Runtime packet-outs, allowing automatic link\ndiscovery.\n\nAll flow rules refer to P4 action `clone_to_cpu()`, which invokes a\nv1model-specific primitive to set the clone session ID:\n\n```p4\naction clone_to_cpu() {\n    clone3(CloneType.I2E, CPU_CLONE_SESSION_ID, ...);\n}\n```\n\nTo actually generate P4Runtime packet-in messages for matched packets, the\npipeconf's pipeliner generates a `CLONE` *group*, internally translated into a\nP4Runtime `CloneSessionEntry`, that maps `CPU_CLONE_SESSION_ID` to a set of\nports, just the CPU one in this case.\n\nTo show all groups installed in ONOS, you can use the `groups` command. For\nexample, to show groups on `leaf1`:\n```\nonos> groups any device:leaf1\ndeviceId=device:leaf1, groupCount=1\n   id=0x63, state=ADDED, type=CLONE, ..., appId=org.onosproject.core, referenceCount=0\n       id=0x63, bucket=1, ..., weight=-1, actions=[OUTPUT:CONTROLLER]\n```\n\n### 7. Visualize links on the ONOS UI\n\nUsing the ONF Cloud Tutorial Portal, access the ONOS UI.\nIf you are running the VM on your laptop, open up a browser\n(e.g. Firefox) to <http://127.0.0.1:8181/onos/ui>.\n\nOn the same page where the ONOS topology view is shown:\n* Press `L` to show device labels;\n* Press `A` multiple times until you see link stats, in either\n  packets/second (pps) or bits/second.\n\nLink stats are derived by ONOS by periodically obtaining the port counters for\neach device. 
ONOS internally uses gNMI to read port information, including\ncounters.\n\nIn this case, you should see approximately 1 packet/s, as that's the rate of\npacket-outs generated by the `lldpprovider` app.\n\n## Part 2: Host discovery & L2 bridging\n\nBy fixing packet I/O support in the pipeline interpreter, we not only got\nlink discovery, but also enabled the built-in `hostprovider` app to perform\n*host* discovery. This service is required by our tutorial app to populate\nthe bridging tables of our P4 pipeline, to forward packets based on the\nEthernet destination address.\n\nIndeed, the `hostprovider` app works by snooping incoming ARP/NDP packets on the\nswitch and deducing where a host is connected from the packet-in message\nmetadata. Other apps in ONOS, like our tutorial app, can then listen for\nhost-related events and access information about their addresses (IP, MAC) and\nlocation.\n\nIn the following, you will be asked to enable the app's `L2BridgingComponent`,\nand to verify that host discovery works by pinging hosts on Mininet. But before that,\nit's useful to review how the starter code implements L2 bridging.\n\n### Background: Our implementation of L2 bridging\n\nTo make things easier, the starter code assumes that hosts of a given subnet are\nall connected to the same leaf, and that two interfaces of two different leaves\ncannot be configured with the same IPv6 subnet. 
In other words, L2 bridging is\nallowed only for hosts connected to the same leaf.\n\nThe Mininet script [topo-v6.py](mininet/topo-v6.py) used in this tutorial\ndefines 4 subnets:\n\n* `2001:1:1::/64` with 3 hosts connected to `leaf1` (`h1a`, `h1b`, and `h1c`)\n* `2001:1:2::/64` with 1 host connected to `leaf1` (`h2`)\n* `2001:2:3::/64` with 1 host connected to `leaf2` (`h3`)\n* `2001:2:4::/64` with 1 host connected to `leaf2` (`h4`)\n\nThe same IPv6 prefixes are defined in the [netcfg.json](mininet/netcfg.json)\nfile and are used to provide interface configuration to ONOS.\n\n#### Data plane\n\nThe P4 code defines tables to forward packets based on the Ethernet address,\nmore precisely, two distinct tables, to handle two different types of L2 entries:\n\n1. Unicast entries: which will be filled in by the control plane when the\n   location (port) of new hosts is learned.\n2. Broadcast/multicast entries: used to replicate NDP Neighbor Solicitation\n   (NS) messages to all host-facing ports.\n\nFor (2), unlike ARP messages in IPv4, which are broadcast to the Ethernet\ndestination address FF:FF:FF:FF:FF:FF, NDP messages are sent to special Ethernet\naddresses specified by RFC2464. These addresses are prefixed with 33:33 and the\nlast four octets are the last four octets of the IPv6 destination multicast\naddress. The most straightforward way of matching on such IPv6\nbroadcast/multicast packets, without digging into the details of RFC2464, is to\nuse a ternary match on `33:33:**:**:**:**`, where `*` means \"don't care\".\n\nFor this reason, our solution defines two tables. 
One uses exact matching,\n`l2_exact_table` (easier to scale in switch ASIC memory), and one uses\nternary matching, `l2_ternary_table` (which requires more expensive TCAM\nmemory, usually much smaller).\n\nThese tables are applied to packets in an order defined in the `apply` block\nof the ingress pipeline (`IngressPipeImpl`):\n\n```p4\nif (!l2_exact_table.apply().hit) {\n    l2_ternary_table.apply();\n}\n```\n\nThe ternary table has lower priority, and it's applied only if a matching entry\nis not found in the exact one.\n\n**Note**: we won't be using VLANs to segment our L2 domains. As such, when\nmatching packets in the `l2_ternary_table`, these will be broadcast to ALL\nhost-facing ports.\n\n#### Control plane (L2BridgingComponent)\n\nWe already provide an ONOS app component controlling the L2 bridging tables of\nthe P4 program: [L2BridgingComponent.java][L2BridgingComponent.java]\n\nThis app component defines two event listeners located at the bottom of the\n`L2BridgingComponent` class, `InternalDeviceListener` for device events (e.g.\nconnection of a new switch) and `InternalHostListener` for host events (e.g. new\nhost discovered). These listeners in turn call methods like:\n\n* `setUpDevice()`: responsible for creating multicast groups for all\n  host-facing ports and inserting flow rules for the `l2_ternary_table` pointing\n  to such groups.\n\n* `learnHost()`: responsible for inserting unicast L2 entries based on the\n  discovered host location.\n\nTo support reloading the app implementation, these methods are also called at\ncomponent activation for all devices and hosts known by ONOS at the time of\nactivation (look for methods `activate()` and `setUpAllDevices()`).\n\nTo keep things simple, our broadcast domain will be restricted to a single\ndevice, i.e. we allow packet replication only for ports of the same leaf switch.\nAs such, we can exclude ports going to the spines from the multicast group. 
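The ternary classification of NDP multicast frames described in the data plane section can be sketched in plain Python (illustrative only; both helper names are ours):

```python
def is_ipv6_mcast_mac(mac: int) -> bool:
    """Ternary match on 33:33:**:**:**:** -- only the first 16 bits count."""
    return (mac & 0xFFFF00000000) == 0x333300000000

def rfc2464_mac(ipv6_mcast_addr: bytes) -> int:
    """RFC 2464 mapping: 33:33 followed by the last four octets of the
    IPv6 destination multicast address."""
    return (0x3333 << 32) | int.from_bytes(ipv6_mcast_addr[-4:], "big")

# The solicited-node address ff02::1:ff00:1 maps to MAC 33:33:ff:00:00:01,
# which the ternary rule above matches:
mac = rfc2464_mac(bytes.fromhex("ff0200000000000000000001ff000001"))
assert mac == 0x3333FF000001 and is_ipv6_mcast_mac(mac)
```

This is why a single ternary entry with value `33:33:00:00:00:00` and mask `ff:ff:00:00:00:00` covers every NDP multicast destination.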
To\ndetermine whether a port is expected to be facing hosts or not, we look at the\ninterface configuration in the [netcfg.json](mininet/netcfg.json) file (look for the\n`ports` section of the JSON file).\n\n### 1. Enable L2BridgingComponent and reload the app\n\nBefore starting, you need to enable the app's L2BridgingComponent, which is\ncurrently disabled.\n\n1. Open file:\n   `app/src/main/java/org/onosproject/ngsdn/tutorial/L2BridgingComponent.java`\n\n2. Look for the class definition at the top and enable the component by setting\n   the `enabled` flag to `true`:\n\n   ```java\n   @Component(\n           immediate = true,\n           enabled = true\n   )\n   public class L2BridgingComponent {\n   ```\n\n3. Build the ONOS app with `make app-build`\n\n4. Re-load the app to apply the changes with `make app-reload`\n\nAfter reloading the app, you should see the following messages in the ONOS log\n(`make onos-log`):\n\n```\nINFO  [L2BridgingComponent] Started\n...\nINFO  [L2BridgingComponent] *** L2 BRIDGING - Starting initial set up for device:leaf1...\nINFO  [L2BridgingComponent] Adding L2 multicast group with 4 ports on device:leaf1...\nINFO  [L2BridgingComponent] Adding L2 multicast rules on device:leaf1...\n...\nINFO  [L2BridgingComponent] *** L2 BRIDGING - Starting initial set up for device:leaf2...\nINFO  [L2BridgingComponent] Adding L2 multicast group with 2 ports on device:leaf2...\nINFO  [L2BridgingComponent] Adding L2 multicast rules on device:leaf2...\n...\n```\n\n### 2. Examine flow rules and groups\n\nCheck the ONOS flow rules: you should see 2 new flow rules for the\n`l2_ternary_table` installed by L2BridgingComponent. 
For example, to show\nall flow rules installed so far on device `leaf1`:\n\n```\nonos> flows -s any device:leaf1\ndeviceId=device:leaf1, flowRuleCount=...\n    ...\n    ADDED, ..., table=IngressPipeImpl.l2_ternary_table, priority=10, selector=[hdr.ethernet.dst_addr=0x333300000000&&&0xffff00000000], treatment=[immediate=[IngressPipeImpl.set_multicast_group(gid=0xff)]]\n    ADDED, ..., table=IngressPipeImpl.l2_ternary_table, priority=10, selector=[hdr.ethernet.dst_addr=0xffffffffffff&&&0xffffffffffff], treatment=[immediate=[IngressPipeImpl.set_multicast_group(gid=0xff)]]\n    ...\n```\n\nTo also show the multicast groups, you can use the `groups` command. For example,\nto show groups on `leaf1`:\n\n```\nonos> groups any device:leaf1\ndeviceId=device:leaf1, groupCount=2\n   id=0x63, state=ADDED, type=CLONE, ..., appId=org.onosproject.core, referenceCount=0\n       id=0x63, bucket=1, ..., weight=-1, actions=[OUTPUT:CONTROLLER]\n   id=0xff, state=ADDED, type=ALL, ..., appId=org.onosproject.ngsdn-tutorial, referenceCount=0\n       id=0xff, bucket=1, ..., weight=-1, actions=[OUTPUT:3]\n       id=0xff, bucket=2, ..., weight=-1, actions=[OUTPUT:4]\n       id=0xff, bucket=3, ..., weight=-1, actions=[OUTPUT:5]\n       id=0xff, bucket=4, ..., weight=-1, actions=[OUTPUT:6]\n```\n\nThe `ALL` group is a new one, created by our app (`appId=org.onosproject.ngsdn-tutorial`).\nGroups of type `ALL` in ONOS map to P4Runtime `MulticastGroupEntry`, in this\ncase used to broadcast NDP NS packets to all host-facing ports.\n\n### 3. 
Test L2 bridging on Mininet\n\nTo verify that L2 bridging works as intended, send a ping between hosts in the\nsame subnet:\n\n```\nmininet> h1a ping h1b\nPING 2001:1:1::b(2001:1:1::b) 56 data bytes\n64 bytes from 2001:1:1::b: icmp_seq=2 ttl=64 time=0.580 ms\n64 bytes from 2001:1:1::b: icmp_seq=3 ttl=64 time=0.483 ms\n64 bytes from 2001:1:1::b: icmp_seq=4 ttl=64 time=0.484 ms\n...\n```\n\nIn contrast to Exercise 1, here we have NOT set any static NDP entries.\nInstead, NDP NS and NA packets are handled by the data plane thanks to the `ALL`\ngroup and `l2_ternary_table`'s flow rule described above. Moreover, given the\nACL flow rules to clone NDP packets to the controller, hosts can be discovered\nby ONOS. Host discovery events are used by `L2BridgingComponent.java` to insert\nentries in the P4 `l2_exact_table`. Check the ONOS log: you should see messages\nrelated to the discovery of hosts `h1a` and `h1b`:\n\n```\nINFO  [L2BridgingComponent] HOST_ADDED event! host=00:00:00:00:00:1A/None, deviceId=device:leaf1, port=3\nINFO  [L2BridgingComponent] Adding L2 unicast rule on device:leaf1 for host 00:00:00:00:00:1A/None (port 3)...\nINFO  [L2BridgingComponent] HOST_ADDED event! host=00:00:00:00:00:1B/None, deviceId=device:leaf1, port=4\nINFO  [L2BridgingComponent] Adding L2 unicast rule on device:leaf1 for host 00:00:00:00:00:1B/None (port 4).\n```\n\n### 4. Visualize hosts on the ONOS CLI and web UI\n\nYou should see exactly two hosts in the ONOS CLI (`make onos-cli`):\n\n```\nonos> hosts -s\nid=00:00:00:00:00:1A/None, mac=00:00:00:00:00:1A, locations=[device:leaf1/3], vlan=None, ip(s)=[2001:1:1::a]\nid=00:00:00:00:00:1B/None, mac=00:00:00:00:00:1B, locations=[device:leaf1/4], vlan=None, ip(s)=[2001:1:1::b]\n```\n\nUsing the ONF Cloud Tutorial Portal, access the ONOS UI.\nIf you are running the VM on your laptop, open up a browser (e.g. 
Firefox) to\n<http://127.0.0.1:8181/onos/ui>.\n\nTo toggle showing hosts on the topology view, press `H` on your keyboard.\n\n## Congratulations!\n\nYou have completed the fourth exercise!\n\n[PipeconfLoader.java]: app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/PipeconfLoader.java\n[InterpreterImpl.java]: app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/InterpreterImpl.java\n[PipelinerImpl.java]: app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/PipelinerImpl.java\n[L2BridgingComponent.java]: app/src/main/java/org/onosproject/ngsdn/tutorial/L2BridgingComponent.java\n"
  },
  {
    "path": "EXERCISE-5.md",
    "content": "# Exercise 5: IPv6 Routing\n\nIn this exercise, you will be modifying the P4 program and ONOS app to add\nsupport for IPv6-based (L3) routing between all hosts connected to the fabric,\nwith support for ECMP to balance traffic flows across multiple spines.\n\n## Background\n\n### Requirements\n\nAt this stage, we want our fabric to behave like a standard IP fabric, with\nswitches functioning as routers. As such, the following requirements should be\nsatisfied:\n\n* Leaf interfaces should be assigned with an IPv6 address (the gateway address)\n  and a MAC address that we will call `myStationMac`;\n* Leaf switches should be able to handle NDP Neighbor Solicitation (NS)\n  messages -- sent by hosts to resolve the MAC address associated with the\n  switch interface/gateway IPv6 addresses, by replying with NDP Neighbor\n  Advertisements (NA) notifying their `myStationMac` address;\n* Packets received with Ethernet destination `myStationMac` should be processed\n  through the routing tables (traffic that is not dropped can then be\n  processed through the bridging tables);\n* When routing, the P4 program should look at the IPv6 destination address. 
If a\n  matching entry is found, the packet should be forwarded to a given next hop\n  and the packet's Ethernet addresses should be modified accordingly (source set\n  to `myStationMac` and destination to that of the next hop);\n* When routing packets to a different leaf across the spines, leaf switches\n  should be able to use ECMP to distribute traffic.\n\n### Configuration\n\nThe [netcfg.json](mininet/netcfg.json) file includes a special configuration block named\n`fabricDeviceConfig` for each device. This block defines 3 values:\n\n * `myStationMac`: MAC address associated with each device, i.e., the router MAC\n   address;\n * `mySid`: the SRv6 segment ID of the device, used in the next exercise;\n * `isSpine`: a boolean flag, indicating whether the device should be considered\n   a spine switch.\n\nMoreover, the [netcfg.json](mininet/netcfg.json) file also includes a list of\ninterfaces with an IPv6 prefix assigned to them (look under the `ports` section\nof the file). The same IPv6 addresses are used in the Mininet topology script\n[topo-v6.py](mininet/topo-v6.py).\n\n### Try pinging hosts in different subnets\n\nAs in the previous exercise, let's start by using Mininet to verify that\npinging between hosts on different subnets does NOT work. It will be your task\nto make it work.\n\nOn the Mininet CLI:\n\n```\nmininet> h2 ping h3\nPING 2001:2:3::1(2001:2:3::1) 56 data bytes\nFrom 2001:1:2::1 icmp_seq=1 Destination unreachable: Address unreachable\nFrom 2001:1:2::1 icmp_seq=2 Destination unreachable: Address unreachable\nFrom 2001:1:2::1 icmp_seq=3 Destination unreachable: Address unreachable\n...\n```\n\nIf you check the ONOS log, you will notice that `h2` has been discovered:\n\n```\nINFO  [L2BridgingComponent] HOST_ADDED event! 
host=00:00:00:00:00:20/None, deviceId=device:leaf1, port=6\nINFO  [L2BridgingComponent] Adding L2 unicast rule on device:leaf1 for host 00:00:00:00:00:20/None (port 6)...\n```\n\nThat's because `h2` sends NDP NS messages to resolve the MAC address of its\ngateway (`2001:1:2::ff` as configured in [topo-v6.py](mininet/topo-v6.py)).\n\nWe can check the IPv6 neighbor table for `h2` to see that the resolution\nhas failed:\n\n```\nmininet> h2 ip -6 n\n2001:1:2::ff dev h2-eth0  FAILED\n```\n\n## 1. Modify P4 program\n\nThe first step will be to add new tables to `main.p4`.\n\n#### P4-based generation of NDP messages\n\nWe already provide ways to handle NDP NS and NA exchanged by hosts connected to\nthe same subnet (see `l2_ternary_table`). However, while for hosts the Linux\nnetworking stack takes care of generating an NDP NA reply, the switches in\nour fabric have no traditional networking stack to do the same.\n\nThere are multiple solutions to this problem:\n\n* we can configure hosts with static NDP entries, removing the need for the\n  switch to reply to NDP NS packets;\n* we can intercept NDP NS via packet-in, generate a corresponding NDP NA\n  reply in ONOS, and send it back via packet-out; or\n* we can instruct the switch to generate NDP NA replies using P4\n  (i.e., we write P4 code that takes care of replying to NDP requests without any\n  intervention from the control plane).\n\n**Note:** The rest of the exercise assumes you will decide to implement the last\noption. You can decide to go with a different one, but you should keep in mind\nthat there will be less starter code for you to re-use.\n\nThe idea is simple: NDP NA packets have the same header structure as NDP NS\nones. They are both ICMPv6 packets with different header field values, such as\na different ICMPv6 type, different Ethernet addresses, etc. 
A switch that knows the\nMAC address of a given IPv6 target address found in an NDP NS request can\ntransform the same packet into an NDP NA reply by modifying some of its fields.\n\nTo implement P4-based generation of NDP NA messages, look in\n[p4src/snippets.p4](p4src/snippets.p4): we already provide an action named\n`ndp_ns_to_na` to transform an NDP NS packet into an NDP NA one. Your task is to\nimplement a table that uses this action.\n\nThis table should define a mapping between the interface IPv6 addresses provided\nin [netcfg.json](mininet/netcfg.json) and the `myStationMac` associated with each\nswitch (also defined in netcfg.json). When an NDP NS packet is received, asking\nto resolve one of such IPv6 addresses, the `ndp_ns_to_na` action should be\ninvoked with the given `myStationMac` as a parameter.\n\nThe ONOS app already provides a component\n[NdpReplyComponent.java](app/src/main/java/org/onosproject/ngsdn/tutorial/NdpReplyComponent.java)\nresponsible for inserting entries in this table according to the content of\nnetcfg.json.\n\nThe component is currently disabled. You will need to enable and modify it in\nthe next steps, but for now, let's focus on the P4 program.\n\n#### LPM IPv6 routing table\n\nThe main table for this exercise will be an L3 table that matches on the\ndestination IPv6 address. You should create a table that performs longest\nprefix match (LPM) on the destination address and performs the required packet\ntransformations:\n\n1. Replace the source Ethernet address with the destination one, expected to be\n   `myStationMac` (see next section on \"My Station\" table).\n2. Set the destination Ethernet to the next hop's address (passed as an action\n   argument).\n3. Decrement the IPv6 `hop_limit`.\n\nThis L3 table and action should provide a mapping between a given IPv6 prefix\nand a next hop MAC address. 
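The three transformations above can be modeled with a tiny Python sketch (purely conceptual; the field names are illustrative, not the actual P4 header names):

```python
def apply_next_hop(pkt: dict, next_hop_mac: str) -> dict:
    # Mirrors the routing action: the switch's own address becomes the
    # source, the next hop becomes the destination, hop_limit drops by 1.
    out = dict(pkt)
    out['eth_src'] = pkt['eth_dst']   # on ingress, eth_dst was myStationMac
    out['eth_dst'] = next_hop_mac
    out['hop_limit'] = pkt['hop_limit'] - 1
    return out

pkt = {'eth_src': '00:00:00:00:00:20', 'eth_dst': '00:aa:00:00:00:01',
       'hop_limit': 64}
routed = apply_next_hop(pkt, '00:aa:00:00:00:02')
# routed: eth_src=00:aa:00:00:00:01, eth_dst=00:aa:00:00:00:02, hop_limit=63
```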
In our solution (and hence in the PTF starter code\nand ONOS app), we re-use the L2 table defined in Exercise 2 to provide a mapping\nbetween the next hop MAC address and an output port. If you want to apply the\nsame solution, make sure to call the L3 table before the L2 one in the `apply`\nblock.\n\nMoreover, we will want to drop the packet when the IPv6 hop limit reaches 0.\nThis can be accomplished by inserting logic in the `apply` block that inspects\nthe field after applying your L3 table.\n\nAt this point, your pipeline should properly match, transform, and forward IPv6\npackets.\n\n**Note:** For simplicity, we are using a global routing table. If you would like\nto segment your routing table in virtual ones (i.e. using a VRF ID), you can\ntackle this as extra credit.\n\n#### \"My Station\" table\n\nYou may realize that at this point the switch will perform IPv6 routing\nindiscriminately, which is technically incorrect. The switch should only route\nEthernet frames that are destined for the router's Ethernet address\n(`myStationMac`).\n\nTo address this issue, you will need to create a table that matches the\ndestination Ethernet address and marks the packet for routing if there is a\nmatch. We call this the \"My Station\" table.\n\nYou are free to use a specific action or metadata to carry this information, or\nfor simplicity, you can use `NoAction` and check for a hit in this table in your\n`apply` block. Remember to update your `apply` block after creating this table.\n\n#### Adding support for ECMP with action selectors\n\nThe last modification you will make to the pipeline is to add an\n`action_selector` that hashes traffic between the different possible next\nhops. In our leaf-spine topology, we have an equal-cost path for each spine for\nevery leaf pair, and we want to be able to take advantage of that.\n\nWe have already defined the P4 `ecmp_selector` in\n[p4src/snippets.p4](p4src/snippets.p4), but you will need to add the selector to\nyour L3 table. 
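Conceptually, an action selector hashes a subset of header fields and uses the result to pick one member of the group, so all packets of the same flow follow the same path while different flows spread across members. A rough Python analogy (not how BMv2 actually computes the hash):

```python
import zlib

def ecmp_select(members: list, src_ip: str, dst_ip: str, flow_label: int):
    # Hash the chosen fields; the same flow always maps to the same
    # member, while different flows spread across all members.
    key = f'{src_ip}|{dst_ip}|{flow_label}'.encode()
    return members[zlib.crc32(key) % len(members)]

spines = ['spine1', 'spine2']
path = ecmp_select(spines, '2001:1:2::1', '2001:2:3::1', 5)
```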
You will also need to add the selector fields as match keys.\n\nFor IPv6 traffic, you will need to include the source and destination IPv6\naddresses as well as the IPv6 flow label as part of the ECMP hash, but you are\nfree to include other parts of the packet header if you would like. For example,\nyou could include the rest of the 5-tuple (i.e. L4 proto and ports); the L4\nports are parsed into `local_metadata` if you would like to use them. For more\ndetails on the required fields for hashing IPv6 traffic, see RFC6438.\n\nYou can compile the program using `make p4-build`.\n\nMake sure to address any compiler errors before continuing.\n\nAt this point, our P4 pipeline should be ready for testing.\n\n## 2. Run PTF tests\n\nTests for the IPv6 routing behavior are located in `ptf/tests/routing.py`. Open\nthat file up and modify wherever requested (look for `TODO EXERCISE 5`).\n\nTo run all the tests for this exercise:\n\n    make p4-test TEST=routing\n\nThis command will run all tests in the `routing` group (i.e. the content of\n`ptf/tests/routing.py`). To run a specific test case you can use:\n\n    make p4-test TEST=<PYTHON MODULE>.<TEST CLASS NAME>\n\nFor example:\n\n    make p4-test TEST=routing.NdpReplyGenTest\n\n\n#### Check for regressions\n\nTo make sure the new changes are not breaking other features, run the\ntests of the previous exercises as well:\n\n    make p4-test TEST=packetio\n    make p4-test TEST=bridging\n    make p4-test TEST=routing\n\nIf all tests succeed, congratulations! You can move to the next step.\n\n## 3. 
Modify ONOS app\n\nThe last part of the exercise is to update the starter code for the routing\ncomponents of our ONOS app, located here:\n\n* `app/src/main/java/org/onosproject/ngsdn/tutorial/Ipv6RoutingComponent.java`\n* `app/src/main/java/org/onosproject/ngsdn/tutorial/NdpReplyComponent.java`\n\nOpen those files and modify wherever requested (look for `TODO EXERCISE 5`).\n\n#### Ipv6RoutingComponent.java\n\nThe starter code already provides an implementation for event listeners and the\nrouting policy (i.e., methods triggered as a consequence of topology\nevents), for example to compute ECMP groups based on the available\nlinks between leaves and spines.\n\nYou are asked to modify the implementation of four methods:\n\n* `setUpMyStationTable()`: to insert flow rules for the \"My Station\" table;\n\n* `createNextHopGroup()`: responsible for creating the ONOS equivalent of a\n  P4Runtime action profile group for the ECMP selector of the routing table;\n\n* `createRoutingRule()`: to create a flow rule for the IPv6 routing table;\n\n* `createL2NextHopRule()`: to create flow rules mapping next hop MAC addresses\n  (used in the ECMP groups) to output ports. You can find a similar method in\n  the `L2BridgingComponent` (see the `learnHost()` method). This one is called to\n  create L2 rules between switches, e.g. to forward packets between leaves and\n  spines. There's no need to handle L2 rules for hosts since those are inserted\n  by the `L2BridgingComponent`.\n\n#### NdpReplyComponent.java\n\nThis component listens to device events. 
Each time a new device is added in\nONOS, it uses the content of `netcfg.json` to populate the NDP reply table.\n\nYou are asked to modify the implementation of method `buildNdpReplyFlowRule()`,\nto insert the name of the table and action to generate NDP replies.\n\n#### Enable the routing components\n\nOnce you are confident your solution to the previous step works, before\nbuilding and reloading the app, remember to enable the routing-related\ncomponents by setting the `enabled` flag to `true` at the top of the class\ndefinition.\n\nFor IPv6 routing to work, you should enable the following components:\n\n* `Ipv6RoutingComponent.java`\n* `NdpReplyComponent.java`\n\n#### Build and reload the app\n\nUse the following command to build and reload your app while ONOS is running:\n\n```\n$ make app-build app-reload\n```\n\nWhen building the app, the modified P4 compiler outputs (`bmv2.json` and\n`p4info.txt`) will be packaged together along with the Java classes. After\nreloading the app, you should see messages in the ONOS log signaling that a new\npipeline configuration has been set and the `Ipv6RoutingComponent` and\n`NdpReplyComponent` have been activated. Also check the log for potentially\nharmful messages (`make onos-log`). If needed, take a look at section **Appendix\nA: Understanding ONOS error logs** at the end of this exercise.\n\n## 4. Test IPv6 routing on Mininet\n\n#### Verify ping\n\nType the following commands in the Mininet CLI, in order:\n\n```\nmininet> h2 ping h3\nmininet> h3 ping h2\nPING 2001:1:2::1(2001:1:2::1) 56 data bytes\n64 bytes from 2001:1:2::1: icmp_seq=2 ttl=61 time=2.39 ms\n64 bytes from 2001:1:2::1: icmp_seq=3 ttl=61 time=2.29 ms\n64 bytes from 2001:1:2::1: icmp_seq=4 ttl=61 time=2.71 ms\n...\n```\n\nPinging between `h3` and `h2` should work now. 
If ping does NOT work,\ncheck section **Appendix B: Troubleshooting** at the end of this exercise.\n\nThe ONOS log should show messages such as:\n\n```\nINFO  [Ipv6RoutingComponent] HOST_ADDED event! host=00:00:00:00:00:20/None, deviceId=device:leaf1, port=6\nINFO  [Ipv6RoutingComponent] Adding routes on device:leaf1 for host 00:00:00:00:00:20/None [[2001:1:2::1]]\n...\nINFO  [Ipv6RoutingComponent] HOST_ADDED event! host=00:00:00:00:00:30/None, deviceId=device:leaf2, port=3\nINFO  [Ipv6RoutingComponent] Adding routes on device:leaf2 for host 00:00:00:00:00:30/None [[2001:2:3::1]]\n...\n```\n\nIf you don't see messages about the discovery of `h2` (`00:00:00:00:00:20`)\nit's because ONOS has already discovered that host when you tried to ping at\nthe beginning of the exercise.\n\n**Note:** we need to start the ping first from `h2` and then from `h3` to let\nONOS discover the location of both hosts before ping packets can be forwarded.\nThat's because the current implementation requires hosts to generate NDP NS\npackets to be discovered by ONOS. To avoid having to manually generate NDP NS\nmessages, a possible solution could be:\n\n* Configure IPv6 hosts in Mininet to periodically and automatically generate a\n  different type of NDP messages, named Router Solicitation (RS).\n\n* Insert a flow rule in the ACL table to clone NDP RS packets to the CPU. This\n  would require matching on a different value of ICMPv6 code other than NDP NA\n  and NS.\n\n* Modify the `hostprovider` built-in app implementation to learn host location\n  from NDP RS messages (it currently uses only NDP NA and NS).\n\n#### Verify P4-based NDP NA generation\n\nTo verify that the P4-based generation of NDP NA replies by the switch is\nworking, you can check the neighbor table of `h2` or `h3`. 
It should show\nsomething similar to this:\n\n```\nmininet> h3 ip -6 n\n2001:2:3::ff dev h3-eth0 lladdr 00:aa:00:00:00:02 router REACHABLE\n```\n\nwhere `2001:2:3::ff` is the IPv6 gateway address defined in `netcfg.json` and\n`topo-v6.py`, and `00:aa:00:00:00:02` is the `myStationMac` defined for `leaf2`\nin `netcfg.json`.\n\n#### Visualize ECMP using the ONOS web UI\n\nTo verify that ECMP is working, let's start multiple parallel traffic flows from\n`h2` to `h3` using iperf. In the Mininet command prompt, type:\n\n```\nmininet> h2 iperf -c h3 -u -V -P5 -b1M -t600 -i1\n```\n\nThis command starts an iperf client on `h2`, sending UDP packets (`-u`)\nover IPv6 (`-V`) to `h3` (`-c`). In doing this, we generate 5 distinct flows\n(`-P5`), each one capped at 1Mbit/s (`-b1M`), running for 10 minutes (`-t600`)\nand reporting stats every 1 second (`-i1`).\n\nSince we are generating UDP traffic, there's no need to start an iperf server\non `h3`.\n\nUsing the ONF Cloud Tutorial Portal, access the ONOS UI.\nIf you are using the tutorial VM, open up a browser (e.g. Firefox) to\n<http://127.0.0.1:8181/onos/ui>. When asked, use the username `onos` and\npassword `rocks`.\n\nOn the same page showing the ONOS topology:\n\n* Press `H` on your keyboard to show hosts;\n* Press `L` to show device labels;\n* Press `A` multiple times until you see port/link stats, in either\n  packets/second (pps) or bits/second.\n\nIf you completed the P4 and app implementation correctly, and ECMP is working,\nyou should see traffic being forwarded to both spines as in the screenshot\nbelow:\n\n<img src=\"img/routing-ecmp.png\" alt=\"ECMP Test\" width=\"344\"/>\n\n## Congratulations!\n\nYou have completed the fifth exercise! Now your fabric is capable of forwarding\nIPv6 traffic between any pair of hosts.\n\n## Appendix A: Understanding ONOS error logs\n\nThere are two main types of errors that you might see when\nreloading the app:\n\n1. 
Write errors, such as removing a nonexistent entity or inserting one that\n   already exists:\n\n    ```\n    WARN  [WriteResponseImpl] Unable to DELETE PRE entry on device...: NOT_FOUND Multicast group does not exist ...\n    WARN  [WriteResponseImpl] Unable to INSERT table entry on device...: ALREADY_EXIST Match entry exists, use MODIFY if you wish to change action ...\n    ```\n\n    These are usually transient errors and **you should not worry about them**.\n    They describe a temporary inconsistency of the ONOS-internal device state,\n    which should soon be recovered by a periodic reconciliation mechanism.\n    The ONOS core periodically polls the device state to make sure its\n    internal representation is accurate, while writing any pending modifications\n    to the device, solving these errors.\n\n    Otherwise, if you see them appearing periodically (every 3-4 seconds), it\n    means the reconciliation process is not working and something else is wrong.\n    Try re-loading the app (`make app-reload`); if that doesn't resolve the\n    warnings, check with the instructors.\n\n2. Translation errors, signifying that ONOS is not able to translate the flow\n   rules (or groups) generated by apps to a representation that is compatible\n   with your P4Info. For example:\n\n    ```\n    WARN  [P4RuntimeFlowRuleProgrammable] Unable to translate flow rule for pipeconf 'org.onosproject.ngsdn-tutorial':...\n    ```\n\n    **Carefully read the error message and make changes to the app as needed.**\n    Chances are that you are using a table, match field, or action name that\n    does not exist in your P4Info. Check your P4Info file, modify, and reload the\n    app (`make app-build app-reload`).\n\n## Appendix B: Troubleshooting\n\nIf ping is not working, here are a few steps you can take to troubleshoot your\nnetwork:\n\n1. 
**Check that all flow rules and groups have been written successfully to the\n   device.** Using ONOS CLI commands such as `flows -s any device:leaf1` and\n   `groups any device:leaf1`, verify that all flows and groups are in state\n   `ADDED`. If you see other states such as `PENDING_ADD`, check the ONOS log\n   for possible errors with writing those entries to the device. You can also\n   use the ONOS web UI to check flow and group states.\n\n2. **Check the network config in the ONOS CLI** (`netcfg` command). If the\n   network config is missing, run `make netcfg` again to configure devices and\n   hosts.\n\n3. **Use table counters to verify that tables are being hit as expected.**\n   If you don't already have direct counters defined for your table(s), modify\n   the P4 program to add some, then build and reload the app (`make app-build\n   app-reload`). ONOS should automatically detect that and poll counters every\n   3-4 seconds (the same period as the reconciliation process). To check their\n   values, you can either use the ONOS CLI (`flows -s any device:leaf1`) or the\n   web UI.\n\n4. **Double check the PTF tests** and make sure you are creating similar flow\n   rules in `Ipv6RoutingComponent.java` and `NdpReplyComponent.java`. Do\n   you notice any difference?\n\n5. **Look at the BMv2 logs for possible errors.** Check file\n   `/tmp/leaf1/stratum_bmv2.log`.\n\n6. If you have gone through all these steps and it's still not working, **reach\n   out to one of the instructors for assistance.**\n"
  },
  {
    "path": "EXERCISE-6.md",
    "content": "# Exercise 6: Segment Routing v6 (SRv6)\n\nIn this exercise, you will be implementing a simplified version of segment\nrouting, a source routing method that steers traffic through a specified set of\nnodes.\n\n## Background\n\nThis exercise is based on an IETF draft specification called SRv6, which uses\nIPv6 packets to frame traffic that follows an SRv6 policy. SRv6 packets use the\nIPv6 routing header, and they can either encapsulate IPv6 (or IPv4) packets\nentirely or they can just inject an IPv6 routing header into an existing IPv6\npacket.\n\nThe IPv6 routing header looks as follows:\n```\n     0                   1                   2                   3\n     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1\n    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n    | Next Header   |  Hdr Ext Len  | Routing Type  | Segments Left |\n    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n    |  Last Entry   |     Flags     |              Tag              |\n    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n    |                                                               |\n    |            Segment List[0] (128 bits IPv6 address)            |\n    |                                                               |\n    |                                                               |\n    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n    |                                                               |\n    |                                                               |\n                                  ...\n    |                                                               |\n    |                                                               |\n    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n    |                                                               |\n    |            Segment List[n] (128 bits IPv6 address)            
|\n    |                                                               |\n    |                                                               |\n    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n```\n\nThe **Next Header** field indicates the type of the header that follows the\nrouting header (either another extension header or the payload).\n\nFor SRv6, the **Routing Type** is 4.\n\n**Segments Left** points to the index of the current segment in the segment\nlist. In properly formed SRv6 packets, the IPv6 destination address equals\n`Segment List[Segments Left]`. The original IPv6 destination address should be\n`Segment List[0]` in our exercise so that traffic is eventually routed to the\ncorrect destination.\n\n**Last Entry** is the index of the last entry in the segment list.\n\nNote: This means it should be one less than the length of the list. (In the\nexample above, the list has `n+1` entries and the last entry should be `n`.)\n\nFinally, the **Segment List** is a reverse-sorted list of IPv6 addresses to be\ntraversed in a specific SRv6 policy. The last entry in the list is the first\nsegment in the SRv6 policy. The list is not typically mutated; the entire header\nis inserted or removed as a whole.\n\nTo keep things simple and because we are already using IPv6, your solution will\njust be adding the routing header to the existing IPv6 packet. (We won't be\nembedding entire packets inside of new IPv6 packets with an SRv6 policy,\nalthough the spec allows it and there are valid use cases for doing so.)\n\nAs you may have already noticed, SRv6 uses IPv6 addresses to identify segments\nin a policy. While the format of the addresses is the same as IPv6, the address\nspace is typically different from the space used for the switch's internal IPv6\naddresses. The internal structure of the address also differs. A typical IPv6\nunicast address is broken into a network prefix and host identifier pieces, and\na subnet mask is used to delineate the boundary between the two. 
A typical SRv6 segment\nidentifier (SID) is broken into a locator, a function identifier, and\noptionally, function arguments. The locator must be routable, which enables both\nSRv6-enabled and SRv6-unaware nodes to participate in forwarding.\n\nHINT: Due to optional arguments, longest prefix match on the 128-bit SID is\npreferred to exact match.\n\nThere are three types of nodes of interest in a segment routed network:\n\n1. Source Node - the node (either host or switch) that injects the SRv6 policy.\n2. Transit Node - a node that forwards an SRv6 packet, but is not the\n   destination for the traffic.\n3. Endpoint Node - a participating waypoint in an SRv6 policy that will modify\n   the SRv6 header and perform a specified function.\n\nIn our implementation, we simplify these types into two roles:\n\n* Endpoint Node - for traffic to the switch's SID, update the SRv6 header\n  (decrement segments left), set the IPv6 destination address to the next\n  segment, and forward the packets (\"End\" behavior). For simplicity, we will\n  always remove the SRv6 header on the penultimate segment in the policy (called\n  Penultimate Segment Pop or PSP in the spec).\n\n* Transit Node - by default, forward traffic normally if it is not destined for\n  the switch's IP address or its SID (\"T\" behavior). Allow the control plane to\n  add rules to inject SRv6 policy for traffic destined to specific IPv6\n  addresses (\"T.Insert\" behavior).\n\nFor more details, you can read the draft specification here:\nhttps://tools.ietf.org/id/draft-filsfils-spring-srv6-network-programming-06.html\n\n\n## 1. 
Adding tables for SRv6\n\nWe have already defined the SRv6 header as well as included the logic for\nparsing the header in `main.p4`.\n\nThe next step is to add two tables, one for each of the two roles specified\nabove. In addition to the tables, you will also need to write the action for the\nendpoint node table (otherwise called the \"My SID\" table); in `snippets.p4`, we\nhave provided the `t_insert` actions for policies of length 2 and 3, which\nshould be sufficient to get you started.\n\nOnce you've finished that, you will need to apply the tables in the `apply`\nblock at the bottom of your `IngressPipeImpl` section. You will want to apply the\ntables after checking that the L2 destination address matches the switch's, and\nbefore the L3 table is applied (because you'll want to use the same routing\nentries to forward traffic after the SRv6 policy is applied). You can also apply\nthe PSP behavior as part of your `apply` logic because we will always be\napplying it if we are the penultimate SID.\n\n## 2. 
Testing the pipeline with Packet Test Framework (PTF)\n\nIn this exercise, you will be modifying tests in [srv6.py](ptf/tests/srv6.py) to\nverify the SRv6 behavior of the pipeline.\n\nThere are four tests in `srv6.py`:\n\n* Srv6InsertTest: Tests SRv6 insert behavior, where the switch receives an IPv6\n  packet and inserts the SRv6 header.\n\n* Srv6TransitTest: Tests SRv6 transit behavior, where the switch ignores the\n  SRv6 header and routes the packet normally, without applying any SRv6-related\n  modifications.\n\n* Srv6EndTest: Tests SRv6 end behavior (without pop), where the switch forwards\n  the packet to the next SID found in the SRv6 header.\n\n* Srv6EndPspTest: Tests SRv6 End with Penultimate Segment Pop (PSP) behavior,\n  where the switch SID is the penultimate in the SID list and the switch removes\n  the SRv6 header before routing the packet to its final destination (last SID\n  in the list).\n\nYou should be able to find `TODO EXERCISE 6` in [srv6.py](ptf/tests/srv6.py)\nwith some hints.\n\nTo run all the tests for this exercise:\n\n    make p4-test TEST=srv6\n\nThis command will run all tests in the `srv6` group (i.e. the content of\n`ptf/tests/srv6.py`). To run a specific test case you can use:\n\n    make p4-test TEST=<PYTHON MODULE>.<TEST CLASS NAME>\n\nFor example:\n\n    make p4-test TEST=srv6.Srv6InsertTest\n\n**Check for regressions**\n\nAt this point, our P4 program should be complete. We can check to make sure that\nwe haven't broken anything from the previous exercises by running all tests from\nthe `ptf/tests` directory:\n\n```\n$ make p4-test\n```\n\nNow we have shown that we can install basic rules and pass SRv6 traffic using BMv2.\n\n## 3. Building the ONOS App\n\nFor the ONOS application, you will need to update `Srv6Component.java` in the\nfollowing ways:\n\n* Complete the `setUpMySidTable` method, which will insert an entry into the My\n  SID table that matches the specified device's SID and performs the `end`\n  action. 
This function is called whenever a new device is connected.\n\n* Complete the `insertSrv6InsertRule` function, which creates a `t_insert` rule\n  for the provided SRv6 policy. This function is called by the\n  `srv6-insert` CLI command.\n\n* Complete the `clearSrv6InsertRules` function, which is called by the `srv6-clear` CLI\n  command.\n\nOnce you are finished, you should rebuild and reload your app. This will also\nrebuild and republish any changes to your P4 code and the ONOS pipeconf. Don't\nforget to enable your Srv6Component at the top of the file.\n\nAs with previous exercises, you can use the following command to build and\nreload the app:\n\n```\n$ make app-build app-reload\n```\n\n## 4. Inserting SRv6 policies\n\nThe next step is to show that traffic can be steered using an SRv6 policy.\n\nYou should start a ping between `h2` and `h4`:\n```\nmininet> h2 ping h4\n```\n\nUsing the ONOS UI, you can observe which paths are being used for the ping\npackets.\n\n- Press `a` until you see \"Port stats (packets/second)\"\n- Press `l` to show device labels\n\n<img src=\"img/srv6-ping-1.png\" alt=\"Ping Test\" width=\"344\"/>\n\nOnce you determine which of the spines your packets are being hashed to (and it\ncould be both, with requests and replies taking different paths), you should\ninsert a set of SRv6 policies that sends the ping packets via the other spine\n(or the spine of your choice).\n\nTo add new SRv6 policies, you should use the `srv6-insert` command.\n\n```\nonos> srv6-insert <device ID> <segment list>\n```\n\nNote: In our topology, the SID for spine1 is `3:201:2::` and the SID for spine2\nis `3:202:2::`.\n\nFor example, to add a policy that forwards traffic between h2 and h4 through\nspine1 and leaf2, you can use the following commands:\n\n* Insert the SRv6 policy from h2 to h4 on leaf1 (through spine1 and leaf2)\n```\nonos> srv6-insert device:leaf1 3:201:2:: 3:102:2:: 2001:2:4::1\nInstalling path on device device:leaf1: 3:201:2::, 3:102:2::, 
2001:2:4::1\n```\n* Insert the SRv6 policy from h4 to h2 on leaf2 (through spine1 and leaf1)\n```\nonos> srv6-insert device:leaf2 3:201:2:: 3:101:2:: 2001:1:2::1\nInstalling path on device device:leaf2: 3:201:2::, 3:101:2::, 2001:1:2::1\n```\n\nThese commands will match on traffic to the last segment on the specified device\n(e.g. match `2001:2:4::1` on `leaf1`). You can update the command to allow for\nmore specific match criteria as extra credit.\n\nYou can confirm that your rule has been added using a variant of the following:\n\n(HINT: Make sure to update the tableId to match the one in your P4 program.)\n\n```\nonos> flows any device:leaf1 | grep tableId=IngressPipeImpl.srv6_transit\n    id=c000006d73f05e, state=ADDED, bytes=0, packets=0, duration=871, liveType=UNKNOWN, priority=10,\n    tableId=IngressPipeImpl.srv6_transit,\n    appId=org.p4.srv6-tutorial,\n    selector=[hdr.ipv6.dst_addr=0x20010002000400000000000000000001/128],\n    treatment=DefaultTrafficTreatment{immediate=[\n        IngressPipeImpl.srv6_t_insert_3(\n            s3=0x20010002000400000000000000000001,\n            s1=0x30201000200000000000000000000,\n            s2=0x30102000200000000000000000000)],\n    deferred=[], transition=None, meter=[], cleared=false, StatTrigger=null, metadata=null}\n```\n\nYou should now return to the ONOS UI to confirm that traffic is flowing through\nthe specified spine.\n\n<img src=\"img/srv6-ping-2.png\" alt=\"SRv6 Ping Test\" width=\"335\"/>\n\n## 5. Debugging and Clean Up\n\nIf you need to remove your SRv6 policies, you can use the `srv6-clear` command\nto clear all SRv6 policies from a specific device. 
For example, to remove flows\nfrom `leaf1`, use this command:\n\n```\nonos> srv6-clear device:leaf1\n```\n\nTo verify that the device inserts the correct SRv6 header, you can use\n**Wireshark** to capture packets from each device port.\n\nFor example, if you want to capture packets from port 1 of spine1, capture\npackets from interface `spine1-eth1`.\n\nNOTE: `spine1-eth1` is connected to leaf1, and `spine1-eth2` is connected to\nleaf2; `spine2` follows the same pattern.\n\n## Congratulations!\n\nYou have completed the sixth exercise! Now your fabric is capable of steering\ntraffic using SRv6.\n"
  },
  {
    "path": "EXERCISE-7.md",
"content": "# Exercise 7: Trellis Basics\n\nThe goal of this exercise is to learn how to set up and configure an emulated\nTrellis environment with a simple 2x2 topology.\n\n## Background\n\nTrellis is a set of built-in ONOS applications that provide the control plane\nfor an IP fabric based on MPLS segment-routing. It is similar in purpose to\nthe app we have been developing in the previous exercises, but instead of using\nIPv6-based routing or SRv6, Trellis uses MPLS labels to forward packets between\nleaf switches and across the spines.\n\nTrellis apps are deployed in Tier-1 carrier networks, and for this reason they\nare deemed production-grade. These apps provide an extensive feature set, such\nas:\n\n* Carrier-oriented networking capabilities: from basic L2 and L3 forwarding, to\n  multicast, QinQ, pseudo-wires, integration with external control planes such\n  as BGP, OSPF, DHCP relay, etc.\n* Fault-tolerance and high-availability: Trellis is designed to take full\n  advantage of the ONOS distributed core, e.g., to withstand controller\n  failures. It also provides dataplane-level resiliency against link failures\n  and switch failures (with paired leaves and dual-homed hosts). See figure\n  below.\n* Single-pane-of-glass monitoring and troubleshooting, with dedicated tools such\n  as T3.\n\n\n![trellis-features](img/trellis-features.png)\n\nTrellis is made of several apps running on top of ONOS; the main one is\n`segmentrouting`, and its implementation can be found in the ONOS source tree:\n[onos/apps/segmentrouting] (open on GitHub)\n\n`segmentrouting` abstracts the leaf and spine switches to make the fabric appear\nas \"one big IP router\", such that operators can program them using APIs similar\nto those of a traditional router (e.g. to configure VLANs, subnets, routes, etc.).\nThe app listens to operator-provided configuration, as well as topology events,\nto program the switches with the necessary forwarding rules. 
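To make this config-driven approach concrete, here is a toy Python sketch of how an app like `segmentrouting` might derive per-device state from an operator-provided interface configuration. This is NOT ONOS code; the data shapes and names are invented for illustration only:

```python
# Toy model (not ONOS code): derive which IPv4 subnets each device is a
# gateway for, starting from a simplified interface configuration that
# loosely mirrors a netcfg "interfaces" section.

INTERFACES = {
    # (device, port): (gateway IP/prefix, VLAN ID)
    ("leaf1", 3): ("172.16.1.254/24", 100),
    ("leaf1", 6): ("172.16.2.254/24", 200),
    ("leaf2", 4): ("172.16.4.254/24", 400),
}

def device_subnets(interfaces):
    """Map each device to the set of subnets it serves as gateway for."""
    subnets = {}
    for (device, _port), (gw_prefix, _vlan) in interfaces.items():
        gw, plen = gw_prefix.split("/")
        # Zero the host octet to get the subnet address (/24 only, for brevity).
        net = ".".join(gw.split(".")[:3]) + ".0/" + plen
        subnets.setdefault(device, set()).add(net)
    return subnets

print({d: sorted(s) for d, s in sorted(device_subnets(INTERFACES).items())})
# {'leaf1': ['172.16.1.0/24', '172.16.2.0/24'], 'leaf2': ['172.16.4.0/24']}
```

The real `segmentrouting` app exposes a similar device-to-subnet view through the `sr-device-subnets` CLI command used later in this exercise.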
Because of this\n\"one big IP router\" abstraction, operators can independently scale the topology\nto add more capacity or ports by adding more leaves and spines.\n\n`segmentrouting` and other Trellis apps use the ONOS FlowObjective API, which\nallows them to be pipeline-agnostic. As a matter of fact, Trellis was initially\ndesigned to work with fixed-function switches exposing an OpenFlow agent (such\nas Broadcom Tomahawk, Trident2, and Qumran via the OF-DPA pipeline). However, in\nrecent years, support for P4-programmable switches was added without changing\nthe Trellis apps, by instead providing a special ONOS pipeconf that brings in a\nP4 program complemented by a set of drivers that, among other things, are\nresponsible for translating flow objectives to the P4 program-specific tables.\n\nThis P4 program is named `fabric.p4`. Its implementation, along with the\ncorresponding pipeconf drivers, can be found in the ONOS source tree:\n[onos/pipelines/fabric] (open on GitHub)\n\nThis pipeconf currently works on the `stratum_bmv2` software switch as well as\non Intel Barefoot Tofino-based switches (the [fabric-tofino] project provides\ninstructions and scripts to create a Tofino-enabled pipeconf).\n\nWe will come back to the details of `fabric.p4` in the next lab; for now, let's\nkeep in mind that instead of building our own custom pipeconf, we will use one\nprovided with ONOS.\n\nThe goal of the exercise is to learn the Trellis basics by writing a\nconfiguration in the form of a netcfg JSON file to set up bridging and\nIPv4 routing of traffic between hosts.\n\nFor a gentle overview of Trellis, please check the online book\n\"Software-Defined Networks: A Systems Approach\":\n<https://sdn.systemsapproach.org/trellis.html>\n\nFinally, the official Trellis documentation is also available online:\n<https://docs.trellisfabric.org/>\n\n### Topology\n\nWe will use a topology similar to that of previous exercises; however, instead of IPv6,\nwe will use IPv4 hosts. 
The topology file is located under\n[mininet/topo-v4.py][topo-v4.py]. While the Trellis apps support IPv6, the P4\nprogram does not, yet. Development of IPv6 support in `fabric.p4` is work in\nprogress.\n\n![topo-v4](img/topo-v4.png)\n\nExactly like in previous exercises, the Mininet script [topo-v4.py] used here\ndefines 4 IPv4 subnets:\n\n* `172.16.1.0/24` with 3 hosts connected to `leaf1` (`h1a`, `h1b`, and `h1c`)\n* `172.16.2.0/24` with 1 host connected to `leaf1` (`h2`)\n* `172.16.3.0/24` with 1 host connected to `leaf2` (`h3`)\n* `172.16.4.0/24` with 1 host connected to `leaf2` (`h4`)\n\n### VLAN tagged vs. untagged ports\n\nAs is usually done in a traditional router, different subnets are associated with\ndifferent VLANs. For this reason, Trellis allows configuring ports with\ndifferent VLANs, either untagged or tagged.\n\nAn **untagged** port expects packets to be received and sent **without** a VLAN\ntag, but internally, the switch processes all packets as belonging to a given\npre-configured VLAN ID. When transmitting packets, the VLAN tag is\nremoved.\n\nFor **tagged** ports, packets are expected to be received **with** a VLAN tag\nwhose ID belongs to a pre-configured set of known VLANs. 
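As an illustration, the ingress side of these two port modes can be modeled in a few lines of Python. This is a simplification for illustration, not Trellis code, and the config shape is invented:

```python
def ingress_vlan(port_cfg, pkt_vlan):
    """Toy model of per-port ingress VLAN handling (not Trellis code).

    port_cfg: {"untagged": 100} for an untagged port, or
              {"tagged": [100, 200]} for a tagged port.
    pkt_vlan: VLAN ID carried by the packet, or None if it has no tag.
    Returns the internal VLAN used to process the packet, or None for drop.
    """
    if "untagged" in port_cfg:
        # Untagged port: packets must arrive without a tag; internally they
        # are processed as belonging to the pre-configured VLAN ID.
        return port_cfg["untagged"] if pkt_vlan is None else None
    # Tagged port: only tags from the pre-configured set are accepted.
    return pkt_vlan if pkt_vlan in port_cfg.get("tagged", ()) else None

# leaf1 port 3 (VLAN 100 untagged) vs. leaf1 port 5 (VLAN 100 tagged):
assert ingress_vlan({"untagged": 100}, None) == 100
assert ingress_vlan({"tagged": [100]}, 100) == 100
assert ingress_vlan({"tagged": [100]}, None) is None   # no tag -> drop
assert ingress_vlan({"tagged": [100]}, 999) is None    # unknown VLAN -> drop
```

On egress, an untagged port would strip the VLAN tag again; this sketch only covers the ingress decision.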
Packets received\nuntagged or with an unknown VLAN ID are dropped.\n\nIn our topology, we want the following VLAN configuration:\n\n* `leaf1` ports `3` and `4`: VLAN `100` untagged (hosts `h1a` and `h1b`)\n* `leaf1` port `5`: VLAN `100` tagged (`h1c`)\n* `leaf1` port `6`: VLAN `200` tagged (`h2`)\n* `leaf2` port `3`: VLAN `300` tagged (`h3`)\n* `leaf2` port `4`: VLAN `400` untagged (`h4`)\n\nIn the Mininet script [topo-v4.py], we use different host Python classes to\ncreate untagged and tagged hosts.\n\nFor example, for `h1a` attached to untagged port `leaf1-3`, we use the\n`IPv4Host` class:\n```\n# Excerpt from mininet/topo-v4.py\nh1a = self.addHost('h1a', cls=IPv4Host, mac=\"00:00:00:00:00:1A\",\n                   ip='172.16.1.1/24', gw='172.16.1.254')\n```\n\nFor `h2`, which instead is attached to tagged port `leaf1-6`, we use the\n`TaggedIPv4Host` class:\n\n```\nh2 = self.addHost('h2', cls=TaggedIPv4Host, mac=\"00:00:00:00:00:20\",\n                  ip='172.16.2.1/24', gw='172.16.2.254', vlan=200)\n```\n\nIn the same Python file, you can find the implementation for both classes. For\n`TaggedIPv4Host`, we use standard Linux commands to create a VLAN tagged\ninterface.\n\n### Configuration via netcfg\n\nThe JSON file in [mininet/netcfg-sr.json][netcfg-sr.json] includes the necessary\nconfiguration for ONOS and the Trellis apps to program switches to forward\ntraffic between hosts of the topology described above.\n\n**NOTE**: this is a similar but different file than the one used in previous\nexercises. Notice the `-sr` suffix, where `sr` stands for `segmentrouting`, as\nthis file contains the necessary configuration for that app to work.\n\nTake a look at both the old file ([netcfg.json]) and the new one\n([netcfg-sr.json]). Can you spot the differences? To help, try answering the\nfollowing questions:\n\n* What is the pipeconf ID used for all 4 switches? Which\n  pipeconf ID did we use before? 
Why is it different?\n* In the new file, each device has a `\"segmentrouting\"` config block (JSON\n  subtree). Do you see any similarities with the `\"fabricDeviceConfig\"` block\n  in the previous file?\n* How come all `\"fabricDeviceConfig\"` blocks are gone in the new file?\n* Look at the `\"interfaces\"` config blocks: what has changed w.r.t. the old\n  file?\n* In the new file, why do the untagged interfaces have only one VLAN ID value,\n  while the tagged ones can take many (JSON array)?\n* Is the `interfaces` block provided for all host-facing ports? Which ports are\n  missing and which hosts are attached to those ports?\n\n\n## 1. Restart ONOS and Mininet with the IPv4 topology\n\nSince we want to use a new topology with IPv4 hosts, we need to reset the\ncurrent environment:\n\n    $ make reset\n\nThis command will stop ONOS and Mininet and remove any state associated with\nthem.\n\nRe-start ONOS and Mininet, this time with the new IPv4 topology:\n\n**IMPORTANT:** please notice the `-v4` suffix!\n\n    $ make start-v4\n\nWait about 1 minute before proceeding with the next steps. This will\ngive ONOS time to start all of its subsystems.\n\n## 2. Load fabric pipeconf and segmentrouting\n\nUnlike in previous exercises, instead of building and installing our own\npipeconf and app, here we use built-in ones.\n\nOpen up the ONOS CLI (`make onos-cli`) and activate the following apps:\n\n    onos> app activate fabric \n    onos> app activate segmentrouting\n\n**NOTE:** The full IDs of the two apps are `org.onosproject.pipelines.fabric` and\n`org.onosproject.segmentrouting`, respectively. For convenience, when activating\nbuilt-in apps using the ONOS CLI, you can specify just the last piece of the\nfull ID (after the last dot).\n\n**NOTE 2:** The `fabric` app has the minimal purpose of registering\npipeconfs in the system. 
Unlike `segmentrouting`, even though we\ncall them both apps, `fabric` does not interact with the network in\nany way.\n\n#### Verify apps\n\nVerify that all apps have been activated successfully:\n\n    onos> apps -s -a\n    *  18 org.onosproject.drivers               2.2.2    Default Drivers\n    *  37 org.onosproject.protocols.grpc        2.2.2    gRPC Protocol Subsystem\n    *  38 org.onosproject.protocols.gnmi        2.2.2    gNMI Protocol Subsystem\n    *  39 org.onosproject.generaldeviceprovider 2.2.2    General Device Provider\n    *  40 org.onosproject.protocols.gnoi        2.2.2    gNOI Protocol Subsystem\n    *  41 org.onosproject.drivers.gnoi          2.2.2    gNOI Drivers\n    *  42 org.onosproject.route-service         2.2.2    Route Service Server\n    *  43 org.onosproject.mcast                 2.2.2    Multicast traffic control\n    *  44 org.onosproject.portloadbalancer      2.2.2    Port Load Balance Service\n    *  45 org.onosproject.segmentrouting        2.2.2    Segment Routing\n    *  53 org.onosproject.hostprovider          2.2.2    Host Location Provider\n    *  54 org.onosproject.lldpprovider          2.2.2    LLDP Link Provider\n    *  64 org.onosproject.protocols.p4runtime   2.2.2    P4Runtime Protocol Subsystem\n    *  65 org.onosproject.p4runtime             2.2.2    P4Runtime Provider\n    *  99 org.onosproject.drivers.gnmi          2.2.2    gNMI Drivers\n    * 100 org.onosproject.drivers.p4runtime     2.2.2    P4Runtime Drivers\n    * 101 org.onosproject.pipelines.basic       2.2.2    Basic Pipelines\n    * 102 org.onosproject.drivers.stratum       2.2.2    Stratum Drivers\n    * 103 org.onosproject.drivers.bmv2          2.2.2    BMv2 Drivers\n    * 111 org.onosproject.pipelines.fabric      2.2.2    Fabric Pipeline\n    * 164 org.onosproject.gui2                  2.2.2    ONOS GUI2\n\nVerify that you have the above 21 apps active in your ONOS instance. 
If you are\nwondering why so many apps, remember from EXERCISE 3 that the ONOS container in\n[docker-compose.yml] is configured to pass the environment variable `ONOS_APPS`,\nwhich defines built-in apps to load during startup.\n\nIn our case, this variable has the value:\n\n    ONOS_APPS=gui2,drivers.bmv2,lldpprovider,hostprovider\n\nMoreover, `segmentrouting` requires other apps as dependencies, such as\n`route-service`, `mcast`, and `portloadbalancer`. The combination of all these\napps (and others that we do not need in this exercise) is what makes Trellis.\n\n#### Verify pipeconfs\n\nVerify that the `fabric` pipeconfs have been registered successfully:\n\n    onos> pipeconfs\n    id=org.onosproject.pipelines.fabric-full, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner, IntProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]\n    id=org.onosproject.pipelines.int, behaviors=[PiPipelineInterpreter, Pipeliner, PortStatisticsDiscovery, IntProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON]\n    id=org.onosproject.pipelines.fabric-spgw-int, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner, IntProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]\n    id=org.onosproject.pipelines.fabric, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]\n    id=org.onosproject.pipelines.fabric-bng, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner, BngProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]\n    id=org.onosproject.pipelines.fabric-spgw, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]\n    id=org.onosproject.pipelines.fabric-int, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner, IntProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]\n    id=org.onosproject.pipelines.basic, behaviors=[PiPipelineInterpreter, 
Pipeliner, PortStatisticsDiscovery], extensions=[P4_INFO_TEXT, BMV2_JSON]\n\nWondering why so many pipeconfs? `fabric.p4` comes in different \"profiles\", used\nto enable different dataplane features in the pipeline. We'll come back\nto the differences between profiles in the next exercise; for now,\nlet's make sure the basic one, `org.onosproject.pipelines.fabric`, is loaded.\nThis is the one we need to program all four switches, as specified in\n[netcfg-sr.json].\n\n#### Increase reconciliation frequency (optional, but recommended)\n\nRun the following commands in the ONOS CLI:\n\n    onos> cfg set org.onosproject.net.flow.impl.FlowRuleManager fallbackFlowPollFrequency 4\n    onos> cfg set org.onosproject.net.group.impl.GroupManager fallbackGroupPollFrequency 3\n\nThese commands tell the ONOS core to modify the period (in seconds) between\nreconciliation checks. Reconciliation is used to verify that switches have the\nexpected forwarding state and to correct any inconsistencies, i.e., writing any\npending flow rules and groups. When running ONOS and the emulated switches on the\nsame machine (especially one with low CPU/memory), it might happen that\nP4Runtime write requests time out because the system is overloaded.\n\nThe default reconciliation period is 30 seconds; the above commands set it to 4\nseconds for flow rules, and 3 seconds for groups.\n\n## 3. Push netcfg-sr.json to ONOS\n\nIn a terminal window, type:\n\n**IMPORTANT**: please notice the `-sr` suffix!\n\n    $ make netcfg-sr\n\nAs we learned in EXERCISE 3, this command will push [netcfg-sr.json] to ONOS,\ntriggering discovery and configuration of the 4 switches. Moreover, since the\nfile specifies a `segmentrouting` config block for each switch, this will\ninstruct the `segmentrouting` app in ONOS to take control of all of them, i.e.,\nthe app will start generating flow objectives that will be translated into flow\nrules for the `fabric.p4` pipeline.\n\nCheck the ONOS log (`make onos-log`). 
You should see numerous messages from\ncomponents such as `TopologyHandler`, `LinkHandler`, `SegmentRoutingManager`,\netc., signaling that switches have been discovered and programmed.\n\nYou should also see warning messages such as:\n\n```\n[ForwardingObjectiveTranslator] Cannot translate DefaultForwardingObjective: unsupported forwarding function type 'PSEUDO_WIRE'...\n```\n\nThis is normal, as not all Trellis features are supported in `fabric.p4`. One\nsuch feature is [pseudo-wire] (L2 tunneling across the L3 fabric). You can\nignore that.\n\nThis error is generated by the Pipeliner driver behavior of the `fabric`\npipeconf, which recognizes that the given flow objective cannot be translated.\n\n#### Check configuration in ONOS\n\nVerify that all interfaces have been configured successfully:\n\n    onos> interfaces\n    leaf1-3: port=device:leaf1/3 ips=[172.16.1.254/24] mac=00:AA:00:00:00:01 vlanUntagged=100\n    leaf1-4: port=device:leaf1/4 ips=[172.16.1.254/24] mac=00:AA:00:00:00:01 vlanUntagged=100\n    leaf1-5: port=device:leaf1/5 ips=[172.16.1.254/24] mac=00:AA:00:00:00:01 vlanTagged=[100]\n    leaf1-6: port=device:leaf1/6 ips=[172.16.2.254/24] mac=00:AA:00:00:00:01 vlanTagged=[200]\n\nYou should see four interfaces in total (for all host-facing ports of `leaf1`),\nconfigured as in the [netcfg-sr.json] file. You will have to add the\nconfiguration for `leaf2`'s ports later in this exercise.\n\nA similar output can be obtained by using a `segmentrouting`-specific command:\n\n    onos> sr-device-subnets\n    device:leaf1\n        172.16.1.0/24\n        172.16.2.0/24\n    device:spine1\n    device:spine2\n    device:leaf2\n\nThis command lists all device-subnet mappings known to `segmentrouting`. 
For a\nlist of other available sr-specific commands, type `sr-` and press\n<kbd>tab</kbd> (as for command auto-completion).\n\nAnother interesting command is `sr-ecmp-spg`, which lists all computed ECMP\nshortest-path graphs:\n\n    onos> sr-ecmp-spg \n    Root Device: device:leaf1 ECMP Paths: \n      Paths from device:leaf1 to device:spine1\n           ==  : device:leaf1/1 -> device:spine1/1\n      Paths from device:leaf1 to device:spine2\n           ==  : device:leaf1/2 -> device:spine2/1\n      Paths from device:leaf1 to device:leaf2\n           ==  : device:leaf1/2 -> device:spine2/1 : device:spine2/2 -> device:leaf2/2\n           ==  : device:leaf1/1 -> device:spine1/1 : device:spine1/2 -> device:leaf2/1\n\n    Root Device: device:spine1 ECMP Paths: \n      Paths from device:spine1 to device:leaf1\n           ==  : device:spine1/1 -> device:leaf1/1\n      Paths from device:spine1 to device:spine2\n           ==  : device:spine1/2 -> device:leaf2/1 : device:leaf2/2 -> device:spine2/2\n           ==  : device:spine1/1 -> device:leaf1/1 : device:leaf1/2 -> device:spine2/1\n      Paths from device:spine1 to device:leaf2\n           ==  : device:spine1/2 -> device:leaf2/1\n\n    Root Device: device:spine2 ECMP Paths: \n      Paths from device:spine2 to device:leaf1\n           ==  : device:spine2/1 -> device:leaf1/2\n      Paths from device:spine2 to device:spine1\n           ==  : device:spine2/1 -> device:leaf1/2 : device:leaf1/1 -> device:spine1/1\n           ==  : device:spine2/2 -> device:leaf2/2 : device:leaf2/1 -> device:spine1/2\n      Paths from device:spine2 to device:leaf2\n           ==  : device:spine2/2 -> device:leaf2/2\n\n    Root Device: device:leaf2 ECMP Paths: \n      Paths from device:leaf2 to device:leaf1\n           ==  : device:leaf2/1 -> device:spine1/2 : device:spine1/1 -> device:leaf1/1\n           ==  : device:leaf2/2 -> device:spine2/2 : device:spine2/1 -> device:leaf1/2\n      Paths from device:leaf2 to device:spine1\n           ==  : 
device:leaf2/1 -> device:spine1/2\n      Paths from device:leaf2 to device:spine2\n           ==  : device:leaf2/2 -> device:spine2/2\n\nThese graphs are used by `segmentrouting` to program flow rules and groups\n(action selectors) in `fabric.p4`, needed to load balance traffic across\nmultiple spines/paths.\n\nVerify that no hosts have been discovered so far:\n\n    onos> hosts\n\nYou should get an empty output.\n\nVerify that all initial flows and groups have been programmed successfully:\n\n    onos> flows -c added\n    deviceId=device:leaf1, flowRuleCount=52\n    deviceId=device:spine1, flowRuleCount=28\n    deviceId=device:spine2, flowRuleCount=28\n    deviceId=device:leaf2, flowRuleCount=36\n    onos> groups -c added\n    deviceId=device:leaf1, groupCount=5\n    deviceId=device:leaf2, groupCount=3\n    deviceId=device:spine1, groupCount=5\n    deviceId=device:spine2, groupCount=5\n\nYou should see the same `flowRuleCount` and `groupCount` in your\noutput. To dump the whole set of flow rules and groups, remove the\n`-c` argument from the command. `added` is used to filter only\nentities that are known to have been written to the switch (i.e., the\nP4Runtime Write RPC was successful).\n\n## 4. Connectivity test\n\n#### Same-subnet hosts (bridging)\n\nOpen up the Mininet CLI (`make mn-cli`). Start by pinging `h1a` and `h1c`,\nwhich are both on the same subnet (VLAN `100`, subnet `172.16.1.0/24`):\n\n    mininet> h1a ping h1c\n    PING 172.16.1.3 (172.16.1.3) 56(84) bytes of data.\n    64 bytes from 172.16.1.3: icmp_seq=1 ttl=63 time=13.7 ms\n    64 bytes from 172.16.1.3: icmp_seq=2 ttl=63 time=3.63 ms\n    64 bytes from 172.16.1.3: icmp_seq=3 ttl=63 time=3.52 ms\n    ...\n\nPing should work. 
Check the ONOS log; you should see an output similar to that\nof exercises 4-5.\n\n    [HostHandler] Host 00:00:00:00:00:1A/None is added at [device:leaf1/3]\n    [HostHandler] Populating bridging entry for host 00:00:00:00:00:1A/None at device:leaf1:3\n    [HostHandler] Populating routing rule for 172.16.1.1 at device:leaf1/3\n    [HostHandler] Host 00:00:00:00:00:1C/100 is added at [device:leaf1/5]\n    [HostHandler] Populating bridging entry for host 00:00:00:00:00:1C/100 at device:leaf1:5\n    [HostHandler] Populating routing rule for 172.16.1.3 at device:leaf1/5\n\nThat's because `segmentrouting` operates in a way that is similar to the custom\napp of previous exercises. Hosts are discovered by the built-in service\n`hostprovider` intercepting packets such as ARP or NDP. For hosts in the same\nsubnet, to support ARP resolution, multicast (ALL) groups are used to replicate\nARP requests to all ports belonging to the same VLAN. `segmentrouting` listens\nfor host events; when a new one is discovered, it installs the necessary\nbridging and routing rules.\n\n#### Hosts on different subnets (routing)\n\nAt the Mininet prompt, start a ping to `h2` from any host in the subnet with\nVLAN `100`, for example, from `h1a`:\n\n    mininet> h1a ping h2\n\nThe **ping should NOT work**, and the reason is that the location of `h2` is not\nknown to ONOS, yet. Usually, Trellis is used in networks where hosts use DHCP\nfor addressing. 
In such a setup, we could use the DHCP relay app in ONOS to learn\nhost locations and addresses when the hosts request an IP address via DHCP.\nHowever, in this simpler topology, we need to manually trigger `h2` to generate\nsome packets to be discovered by ONOS.\n\nWhen using `segmentrouting`, the easiest way to have ONOS discover a host is\nto ping the gateway address that we configured in [netcfg-sr.json], or that you\ncan derive from the ONOS CLI (`onos> interfaces`):\n\n    mininet> h2 ping 172.16.2.254\n    PING 172.16.2.254 (172.16.2.254) 56(84) bytes of data.\n    64 bytes from 172.16.2.254: icmp_seq=1 ttl=64 time=28.9 ms\n    64 bytes from 172.16.2.254: icmp_seq=2 ttl=64 time=12.6 ms\n    64 bytes from 172.16.2.254: icmp_seq=3 ttl=64 time=15.2 ms\n    ...\n\nPing is working, and ONOS should have discovered `h2` by now. But who is\nreplying to our pings?\n\nIf you check the ARP table for h2:\n\n    mininet> h2 arp\n    Address                  HWtype  HWaddress           Flags Mask            Iface\n    172.16.2.254             ether   00:aa:00:00:00:01   C                     h2-eth0.200\n\nYou should recognize MAC address `00:aa:00:00:00:01` as the one associated with\n`leaf1` in [netcfg-sr.json]. That's right: the `segmentrouting` app in ONOS is\nreplying to our ICMP echo request (ping) packets! Ping requests are intercepted\nby means of P4Runtime packet-in, while replies are generated and injected via\nP4Runtime packet-out. 
This is equivalent to pinging the interface of a\ntraditional router.\n\nAt this point, ping from `h1a` to `h2` should work:\n\n    mininet> h1a ping h2\n    PING 172.16.2.1 (172.16.2.1) 56(84) bytes of data.\n    64 bytes from 172.16.2.1: icmp_seq=1 ttl=63 time=6.23 ms\n    64 bytes from 172.16.2.1: icmp_seq=2 ttl=63 time=3.81 ms\n    64 bytes from 172.16.2.1: icmp_seq=3 ttl=63 time=3.84 ms\n    ...\n\nMoreover, you can check that all hosts pinged so far have been discovered by\nONOS:\n\n    onos> hosts -s\n    id=00:00:00:00:00:1A/None, mac=00:00:00:00:00:1A, locations=[device:leaf1/3], vlan=None, ip(s)=[172.16.1.1]\n    id=00:00:00:00:00:1C/100, mac=00:00:00:00:00:1C, locations=[device:leaf1/5], vlan=100, ip(s)=[172.16.1.3]\n    id=00:00:00:00:00:20/200, mac=00:00:00:00:00:20, locations=[device:leaf1/6], vlan=200, ip(s)=[172.16.2.1]\n\n## 5. Dump packets to see VLAN tags (optional)\n\nTODO: detailed instructions for this step are still a work-in-progress.\n\nIf you feel adventurous, start a ping between any two hosts, and use the tool\n[util/mn-pcap](util/mn-pcap) to dump packets to a PCAP file. After dumping\npackets, the tool tries to open the PCAP file in Wireshark (if installed).\n\nFor example, to dump packets out of the `h2` main interface:\n\n    $ util/mn-pcap h2\n\n## 6. Add missing interface config\n\nStart a ping from `h3` to any other host, for example `h2`:\n\n    mininet> h3 ping h2\n    ...\n\nIt should NOT work. Can you explain why?\n\nLet's check the ONOS log (`make onos-log`). You should see the following\nmessages:\n\n    ...\n    INFO  [HostHandler] Host 00:00:00:00:00:30/None is added at [device:leaf2/3]\n    INFO  [HostHandler] Populating bridging entry for host 00:00:00:00:00:30/None at device:leaf2:3\n    WARN  [RoutingRulePopulator] Untagged host 00:00:00:00:00:30/None is not allowed on device:leaf2/3 without untagged or nativevlan config\n    WARN  [RoutingRulePopulator] Fail to build fwd obj for host 00:00:00:00:00:30/None. 
Abort.\n    INFO  [HostHandler] 172.16.3.1 is not included in the subnet config of device:leaf2/3. Ignored.\n\n\n`h3` is discovered because ONOS intercepted the ARP request to resolve `h3`'s\ngateway IP address (`172.16.3.254`), but the rest of the programming fails\nbecause we have not provided a valid Trellis configuration for the switch port\nfacing `h3` (`leaf2/3`). Indeed, if you look at [netcfg-sr.json] you will notice\nthat the `\"ports\"` section includes a config block for all `leaf1` host-facing\nports, but it does NOT provide any for `leaf2`.\n\nAs a matter of fact, if you try to start a ping from `h4` (attached to `leaf2`),\nthat should NOT work either.\n\nIt is your task to modify [netcfg-sr.json] to add the necessary config\nblocks to enable connectivity for `h3` and `h4`:\n\n1. Open up [netcfg-sr.json].\n2. Look for the `\"ports\"` section.\n3. Provide a config for ports `device:leaf2/3` and `device:leaf2/4`. When doing\n   so, look at the config for other ports as a reference, but make sure to\n   provide the right IPv4 gateway address, subnet, and VLAN configuration\n   described at the beginning of this document.\n4. When done, push the updated file to ONOS using `make netcfg-sr`.\n5. Verify that the two new interface configs show up when using the ONOS\n   CLI (`onos> interfaces`).\n6. If you don't see the new interfaces in the ONOS CLI, check the ONOS log\n   (`make onos-log`) for any possible errors and, if needed, go back to step 3.\n7. 
If you struggle to make it work, a solution is available in the\n   `solution/mininet` directory.\n\nLet's try to ping the corresponding gateway address from `h3` and `h4`:\n\n    mininet> h3 ping 172.16.3.254\n    PING 172.16.3.254 (172.16.3.254) 56(84) bytes of data.\n    64 bytes from 172.16.3.254: icmp_seq=1 ttl=64 time=66.5 ms\n    64 bytes from 172.16.3.254: icmp_seq=2 ttl=64 time=19.1 ms\n    64 bytes from 172.16.3.254: icmp_seq=3 ttl=64 time=27.5 ms\n    ...\n\n    mininet> h4 ping 172.16.4.254\n    PING 172.16.4.254 (172.16.4.254) 56(84) bytes of data.\n    64 bytes from 172.16.4.254: icmp_seq=1 ttl=64 time=45.2 ms\n    64 bytes from 172.16.4.254: icmp_seq=2 ttl=64 time=12.7 ms\n    64 bytes from 172.16.4.254: icmp_seq=3 ttl=64 time=22.0 ms\n    ...\n\nAt this point, ping between all hosts should work. You can try that using the\nspecial `pingall` command in the Mininet CLI:\n\n    mininet> pingall\n    *** Ping: testing ping reachability\n    h1a -> h1b h1c h2 h3 h4 \n    h1b -> h1a h1c h2 h3 h4 \n    h1c -> h1a h1b h2 h3 h4 \n    h2 -> h1a h1b h1c h3 h4 \n    h3 -> h1a h1b h1c h2 h4 \n    h4 -> h1a h1b h1c h2 h3 \n    *** Results: 0% dropped (30/30 received)\n\n## Congratulations!\n\nYou have completed the seventh exercise! You were able to use ONOS built-in\nTrellis apps such as `segmentrouting` and the `fabric` pipeconf to configure\nforwarding in a 2x2 leaf-spine fabric of IPv4 hosts.\n\n[topo-v4.py]: mininet/topo-v4.py\n[netcfg-sr.json]: mininet/netcfg-sr.json\n[netcfg.json]: mininet/netcfg.json\n[docker-compose.yml]: docker-compose.yml\n[pseudo-wire]: https://en.wikipedia.org/wiki/Pseudo-wire\n[onos/apps/segmentrouting]: https://github.com/opennetworkinglab/onos/tree/2.2.2/apps/segmentrouting\n[onos/pipelines/fabric]: https://github.com/opennetworkinglab/onos/tree/2.2.2/pipelines/fabric\n[fabric-tofino]: https://github.com/opencord/fabric-tofino\n"
  },
  {
    "path": "EXERCISE-8.md",
    "content": "# Exercise 8: GTP termination with fabric.p4\n\nThe goal of this exercise is to learn how to use Trellis and fabric.p4 to\nencapsulate and route packets using the GPRS Tunnelling Protocol (GTP) header as\nin 4G mobile core networks.\n\n## Background\n\n![Topology GTP](img/topo-gtp.png)\n\nThe topology we will use in this exercise ([topo-gtp.py]) is a very simple one,\nwith the usual 2x2 fabric, but only two hosts. We assume our fabric is used as\npart of a 4G (i.e, LTE) network, connecting base stations to a Packet Data\nNetwork (PDN), such as the Internet. The two hosts in our topology represent:\n\n* An eNodeB, i.e., a base station providing radio connectivity to User\n  Equipments (UEs) such as smartphones;\n* A host on the Packet Data Network (PDN), i.e., any host on the Internet.\n\nTo provide connectivity between the UEs and the Internet, we need to program our\nfabric to act as a Serving and Packet Gateway (SPGW). The SPGW is a complex and\nfeature-rich component of the 4G mobile core architecture that is used by the\nbase stations as a gateway to the PDN. Base stations aggregate UE traffic in GTP\ntunnels (one or more per UE). The SPGW has many functions, among which that of\nterminating such tunnels. In other words, it encapsulates downlink traffic\n(Internet→UE) in an additional IPv4+UDP+GTP-U header, and it removes it for the\nuplink direction (UE→Internet).\n\nIn this exercise you will learn how to:\n\n* Program a switch with the `fabric-spgw` profile;\n* Use Trellis to route traffic from the PDN to the eNodeB;\n* Use the ONOS REST APIs to enable GTP encapsulation of downlink traffic on\n  `leaf1`.\n\n\n## 1. 
Start ONOS and Mininet with GTP topology\n\nSince we want to use a different topology, we need to reset the current\nenvironment (if currently active):\n\n    $ make reset\n\nThis command will stop ONOS and Mininet and remove any state associated with\nthem.\n\nRe-start ONOS and Mininet, this time with the new topology:\n\n**IMPORTANT:** please notice the `-gtp` suffix!\n\n    $ make start-gtp\n\nWait about 1 minute before proceeding with the next steps; this gives ONOS time\nto start all of its subsystems.\n\n## 2. Load additional apps\n\nAs in the previous exercises, let's activate the `segmentrouting` app and the\n`fabric` pipeconf using the ONOS CLI (`make onos-cli`):\n\n    onos> app activate fabric\n    onos> app activate segmentrouting\n\nLet's also activate a third app named `netcfghostprovider`:\n\n    onos> app activate netcfghostprovider\n\nThe `netcfghostprovider` (Network Config Host Provider) is a built-in service\nsimilar to the `hostprovider` (Host Location Provider) seen in the previous\nexercises. It is responsible for registering hosts in the system; however,\nunlike `hostprovider`, it does not listen for ARP or DHCP packet-ins\nto automatically discover hosts. Instead, it uses information in the netcfg JSON\nfile, allowing operators to pre-populate the ONOS host store.\n\nThis is useful for static topologies and to avoid relying on ARP, DHCP, and\nother host-generated traffic. 
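
For illustration, a host entry consumed by this provider generally follows the
sketch below. This is a hedged example based on the `enodeb` values shown later
in this exercise; the exact schema may vary across ONOS versions, so refer to
[netcfg-gtp.json] for the actual config used here:

```
"hosts": {
  "00:00:00:00:00:10/None": {
    "basic": {
      "ips": ["10.0.100.1"],
      "locations": ["device:leaf1/3"]
    }
  }
}
```
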
In this exercise, we use the netcfg JSON to\nconfigure the location of the `enodeb` and `pdn` hosts.\n\n#### Verify apps\n\nThe complete list of apps should look like the following (22 in total):\n\n    onos> apps -s -a\n    *  18 org.onosproject.drivers              2.2.2    Default Drivers\n    *  37 org.onosproject.protocols.grpc       2.2.2    gRPC Protocol Subsystem\n    *  38 org.onosproject.protocols.gnmi       2.2.2    gNMI Protocol Subsystem\n    *  39 org.onosproject.generaldeviceprovider 2.2.2    General Device Provider\n    *  40 org.onosproject.protocols.gnoi       2.2.2    gNOI Protocol Subsystem\n    *  41 org.onosproject.drivers.gnoi         2.2.2    gNOI Drivers\n    *  42 org.onosproject.route-service        2.2.2    Route Service Server\n    *  43 org.onosproject.mcast                2.2.2    Multicast traffic control\n    *  44 org.onosproject.portloadbalancer     2.2.2    Port Load Balance Service\n    *  45 org.onosproject.segmentrouting       2.2.2    Segment Routing\n    *  53 org.onosproject.hostprovider         2.2.2    Host Location Provider\n    *  54 org.onosproject.lldpprovider         2.2.2    LLDP Link Provider\n    *  64 org.onosproject.protocols.p4runtime  2.2.2    P4Runtime Protocol Subsystem\n    *  65 org.onosproject.p4runtime            2.2.2    P4Runtime Provider\n    *  96 org.onosproject.netcfghostprovider   2.2.2    Network Config Host Provider\n    *  99 org.onosproject.drivers.gnmi         2.2.2    gNMI Drivers\n    * 100 org.onosproject.drivers.p4runtime    2.2.2    P4Runtime Drivers\n    * 101 org.onosproject.pipelines.basic      2.2.2    Basic Pipelines\n    * 102 org.onosproject.drivers.stratum      2.2.2    Stratum Drivers\n    * 103 org.onosproject.drivers.bmv2         2.2.2    BMv2 Drivers\n    * 111 org.onosproject.pipelines.fabric     2.2.2    Fabric Pipeline\n    * 164 org.onosproject.gui2                 2.2.2    ONOS GUI2\n\n#### Verify pipeconfs\n\nAll `fabric` pipeconf profiles should have been registered by now. Take note of\nthe full ID of the one with SPGW capabilities (`fabric-spgw`); you will need\nthis ID in the next step.\n\n    onos> pipeconfs\n    id=org.onosproject.pipelines.fabric-full, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner, IntProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]\n    id=org.onosproject.pipelines.int, behaviors=[PiPipelineInterpreter, Pipeliner, PortStatisticsDiscovery, IntProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON]\n    id=org.onosproject.pipelines.fabric-spgw-int, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner, IntProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]\n    id=org.onosproject.pipelines.fabric, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]\n    id=org.onosproject.pipelines.fabric-bng, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner, BngProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]\n    id=org.onosproject.pipelines.fabric-spgw, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]\n    id=org.onosproject.pipelines.fabric-int, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner, IntProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]\n    id=org.onosproject.pipelines.basic, behaviors=[PiPipelineInterpreter, Pipeliner, PortStatisticsDiscovery], extensions=[P4_INFO_TEXT, BMV2_JSON]\n\n## 3. 
Modify and push netcfg to use fabric-spgw profile\n\nUp until now, we have used topologies where all switches were configured with\nthe same pipeconf, and so the same P4 program.\n\nIn this exercise, we want all other switches to run with the basic `fabric`\nprofile, while `leaf1` should act as the SPGW, and so we want it programmed with\nthe `fabric-spgw` profile.\n\n#### Modify netcfg JSON\n\nLet's modify the netcfg JSON to use the `fabric-spgw` profile on switch\n`leaf1`.\n\n1. Open up file [netcfg-gtp.json] and look for the configuration of `leaf1`\n   in the `\"devices\"` block. It should look like this:\n\n    ```\n      \"devices\": {\n        \"device:leaf1\": {\n          \"basic\": {\n            \"managementAddress\": \"grpc://mininet:50001?device_id=1\",\n            \"driver\": \"stratum-bmv2\",\n            \"pipeconf\": \"org.onosproject.pipelines.fabric\",\n    ...\n    ```\n\n2. Modify the `pipeconf` property to use the full ID of the `fabric-spgw`\n   profile obtained in the previous step.\n\n3. 
Save the file.\n\n#### Push netcfg to ONOS\n\nIn a terminal window, type:\n\n**IMPORTANT**: please notice the `-gtp` suffix!\n\n    $ make netcfg-gtp\n\nUse the ONOS CLI (`make onos-cli`) to verify that all 4 switches are connected\nto ONOS and provisioned with the right pipeconf/profile.\n\n    onos> devices -s\n    id=device:leaf1, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.pipelines.fabric-spgw\n    id=device:leaf2, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.pipelines.fabric\n    id=device:spine1, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.pipelines.fabric\n    id=device:spine2, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.pipelines.fabric\n\nMake sure the `leaf1` driver uses the `fabric-spgw` pipeconf, while all other\nswitches use the basic `fabric` pipeconf.\n\n##### Troubleshooting\n\nIf `leaf1` does NOT have `available=true`, it probably means that you have\ninserted the wrong pipeconf ID in [netcfg-gtp.json] and ONOS is not able to\nperform the initial provisioning.\n\nCheck the ONOS log (`make onos-log`) for possible errors. Remember from the\nprevious exercise that some errors are expected (e.g., for unsupported\n`PSEUDO_WIRE` flow objectives). If you see an error like this:\n\n    ERROR [DeviceTaskExecutor] Unable to complete task CONNECTION_SETUP for device:leaf1: pipeconf ... not registered\n\nit means you need to go back to the previous step and correct your pipeconf ID.\nModify the [netcfg-gtp.json] file and push it again using `make netcfg-gtp`. Use\nthe ONOS CLI and log to make sure the issue is fixed before proceeding.\n\n#### Check configuration in ONOS\n\nCheck the interface configuration. 
In this topology we want `segmentrouting` to\nforward traffic based on two IP subnets:\n\n    onos> interfaces\n    leaf1-3: port=device:leaf1/3 ips=[10.0.100.254/24] mac=00:AA:00:00:00:01 vlanUntagged=100\n    leaf2-3: port=device:leaf2/3 ips=[10.0.200.254/24] mac=00:AA:00:00:00:02 vlanUntagged=200\n\nCheck that the `enodeb` and `pdn` hosts have been discovered:\n\n    onos> hosts\n    id=00:00:00:00:00:10/None, mac=00:00:00:00:00:10, locations=[device:leaf1/3], auxLocations=null, vlan=None, ip(s)=[10.0.100.1], ..., name=enodeb, ..., provider=host:org.onosproject.netcfghost, configured=true\n    id=00:00:00:00:00:20/None, mac=00:00:00:00:00:20, locations=[device:leaf2/3], auxLocations=null, vlan=None, ip(s)=[10.0.200.1], ..., name=pdn, ..., provider=host:org.onosproject.netcfghost, configured=true\n\n`provider=host:org.onosproject.netcfghost` and `configured=true` are indications\nthat the host entry was created by `netcfghostprovider`.\n\n## 4. Verify IP connectivity between PDN and eNodeB\n\nSince the two hosts have already been discovered, they should be pingable.\n\nUsing the Mininet CLI (`make mn-cli`) start a ping between `enodeb` and `pdn`:\n\n    mininet> enodeb ping pdn\n    PING 10.0.200.1 (10.0.200.1) 56(84) bytes of data.\n    64 bytes from 10.0.200.1: icmp_seq=1 ttl=62 time=1053 ms\n    64 bytes from 10.0.200.1: icmp_seq=2 ttl=62 time=49.0 ms\n    64 bytes from 10.0.200.1: icmp_seq=3 ttl=62 time=9.63 ms\n    ...\n\n## 5. 
Start PDN and eNodeB processes\n\nWe provide two Python scripts: one emulates the PDN sending downlink traffic to\nthe UEs, and the other emulates the eNodeB, which expects to receive the same\ntraffic GTP-encapsulated.\n\nIn a new terminal window, start the [send-udp.py] script on the `pdn` host:\n\n    $ util/mn-cmd pdn /mininet/send-udp.py\n    Sending 5 UDP packets per second to 17.0.0.1...\n\n[util/mn-cmd] is a convenience script to run commands on Mininet hosts when\nusing multiple terminal windows.\n\n[mininet/send-udp.py][send-udp.py] generates packets with destination\nIPv4 address `17.0.0.1` (UE address). In the rest of the exercise we\nwill configure Trellis to route these packets through switch `leaf1`, and\nwe will insert a flow rule in this switch to perform the GTP\nencapsulation. For now, this traffic will be dropped at `leaf2`.\n\nIn a second terminal window, start the [recv-gtp.py] script on the `enodeb`\nhost:\n\n```\n$ util/mn-cmd enodeb /mininet/recv-gtp.py\nWill print a line for each UDP packet received...\n```\n\n[mininet/recv-gtp.py][recv-gtp.py] simply sniffs received packets and prints\nthem on screen, indicating whether or not each packet is GTP-encapsulated. You\nshould see no packets being printed for the moment.\n\n#### Use ONOS UI to visualize traffic\n\nUsing the ONF Cloud Tutorial Portal, access the ONOS UI.\nIf you are running a VM on your laptop, open up a browser\n(e.g. Firefox) to <http://127.0.0.1:8181/onos/ui>.\n\nWhen asked, use the username `onos` and password `rocks`.\n\nTo show hosts, press <kbd>H</kbd>. To show real-time link utilization, press\n<kbd>A</kbd> multiple times until showing \"Port stats (packets / second)\".\n\nYou should see traffic (5 pps) on the link between the `pdn` host and `leaf2`,\nbut not on other links. **Packets are dropped at switch `leaf2` as this switch\ndoes not know how to route packets with IPv4 destination `17.0.0.1`.**\n\n## 6. 
Install route for UE subnet and debug table entries\n\nUsing the ONOS CLI (`make onos-cli`), type the following command to add a route\nfor the UE subnet (`17.0.0.0/24`) with the `enodeb` (`10.0.100.1`) as next hop:\n\n    onos> route-add 17.0.0.0/24 10.0.100.1\n\nCheck that the new route has been successfully added:\n\n```\nonos> routes\nB: Best route, R: Resolved route\n\nTable: ipv4\nB R  Network            Next Hop        Source (Node)\n> *  17.0.0.0/24        10.0.100.1      STATIC (-)\n   Total: 1\n...\n```\n\nSince `10.0.100.1` is a host known to ONOS, i.e., we know its location in the\ntopology (see the `*` under the `R` column, which stands for \"Resolved route\"),\n`segmentrouting` uses this information to compute paths and install the\nnecessary table entries to forward packets with IPv4 destination address\nmatching `17.0.0.0/24`.\n\nOpen up the terminal window with the [recv-gtp.py] script on the `enodeb` host;\nyou should see the following output:\n\n    [...] 691 bytes: 10.0.200.1 -> 17.0.0.1, is_gtp_encap=False\n        Ether / IP / UDP 10.0.200.1:80 > 17.0.0.1:400 / Raw\n    ....\n\nThese lines indicate that a packet has been received at the eNodeB. The static\nroute is working! However, there's no trace of GTP headers, yet. 
We'll get back\nto this soon, but for now, let's take a look at table entries in `fabric.p4`.\n\nFeel free to also check on the ONOS UI to see packets forwarded across the\nspines, and delivered to the eNodeB (the next hop of our static route).\n\n#### Debug fabric.p4 table entries\n\nYou can verify that the table entries for the static route have been added to\nthe switches by \"grepping\" the output of the ONOS CLI `flows` command, for\nexample for `leaf2`:\n\n    onos> flows -s any device:leaf2 | grep \"17.0.0.0\"\n        ADDED, bytes=0, packets=0, table=FabricIngress.forwarding.routing_v4, priority=48010, selector=[IPV4_DST:17.0.0.0/24], treatment=[immediate=[FabricIngress.forwarding.set_next_id_routing_v4(next_id=0xd)]]\n\nOne entry has been `ADDED` to table `FabricIngress.forwarding.routing_v4` with\n`next_id=0xd`.\n\nLet's grep flow rules for `next_id=0xd`:\n\n    onos> flows -s any device:leaf2 | grep \"next_id=0xd\"\n        ADDED, bytes=0, packets=0, table=FabricIngress.forwarding.routing_v4, priority=48010, selector=[IPV4_DST:17.0.0.0/24], treatment=[immediate=[FabricIngress.forwarding.set_next_id_routing_v4(next_id=0xd)]]\n        ADDED, bytes=0, packets=0, table=FabricIngress.forwarding.routing_v4, priority=48010, selector=[IPV4_DST:10.0.100.0/24], treatment=[immediate=[FabricIngress.forwarding.set_next_id_routing_v4(next_id=0xd)]]\n        ADDED, bytes=1674881, packets=2429, table=FabricIngress.next.hashed, priority=0, selector=[next_id=0xd], treatment=[immediate=[GROUP:0xd]]\n        ADDED, bytes=1674881, packets=2429, table=FabricIngress.next.next_vlan, priority=0, selector=[next_id=0xd], treatment=[immediate=[FabricIngress.next.set_vlan(vlan_id=0xffe)]]\n\nNotice that another route shares the same next ID (`10.0.100.0/24` from\nthe interface config for `leaf1`), and that the next ID points to a group\nwith the same value (`GROUP:0xd`).\n\nLet's take a look at this specific group:\n\n    onos> groups any device:leaf2 | grep \"0xd\"\n       id=0xd, state=ADDED, type=SELECT, bytes=0, packets=0, appId=org.onosproject.segmentrouting, referenceCount=0\n           id=0xd, bucket=1, bytes=0, packets=0, weight=1, actions=[FabricIngress.next.mpls_routing_hashed(dmac=0xbb00000001, port_num=0x1, smac=0xaa00000002, label=0x65)]\n           id=0xd, bucket=2, bytes=0, packets=0, weight=1, actions=[FabricIngress.next.mpls_routing_hashed(dmac=0xbb00000002, port_num=0x2, smac=0xaa00000002, label=0x65)]\n\nThis `SELECT` group is used to hash traffic to the spines (i.e., ECMP) and to\npush an MPLS label with hex value `0x65`, i.e., 101 in decimal.\n\nSpine switches will use this label to forward packets. Can you tell what 101\nidentifies here? Hint: take a look at [netcfg-gtp.json].\n\n## 7. Use ONOS REST APIs to create GTP flow rule\n\nFinally, it is time to instruct `leaf1` to encapsulate traffic with a GTP tunnel\nheader. To do this, we will insert a special table entry in the \"SPGW portion\"\nof the [fabric.p4] pipeline, implemented in file [spgw.p4]. Specifically, we\nwill insert one entry in the [dl_sess_lookup] table, responsible for handling\ndownlink traffic (i.e., with match on the UE IPv4 address) by setting the GTP\ntunnel info which will be used to perform the encapsulation (action\n`set_dl_sess_info`).\n\n**NOTE:** This version of spgw.p4 is from ONOS v2.2.2 (the same version used in\nthis exercise). The P4 code might have changed recently, and you might see\ndifferent tables if you open up the same file in a different branch.\n\nTo insert the flow rule, we will not use an app (which we would have to\nimplement from scratch!), but instead, we will use the ONOS REST APIs. 
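
The action parameters of the flow rule we are about to create (described in the
next step) are byte strings in hexadecimal. As a quick sanity check, this short
Python sketch (illustrative only, not part of the exercise scripts; standard
library only) shows how dotted IPv4 addresses and the TEID map to that hex form:

```python
import ipaddress

def ipv4_to_hex(addr):
    # Pack the dotted-quad address into 4 bytes, then render them as hex.
    return ipaddress.IPv4Address(addr).packed.hex()

print(ipv4_to_hex('10.0.100.1'))    # -> '0a006401'
print(ipv4_to_hex('10.0.100.254'))  # -> '0a0064fe'
print(int('BEEF', 16))              # -> 48879
```

The same module works in the other direction, e.g.
`ipaddress.IPv4Address(bytes.fromhex('0a006401'))` yields `10.0.100.1`.
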
To learn\nmore about the available APIs, use the following URL to open up the\nautomatically generated API docs from your running ONOS instance:\n<http://127.0.0.1:8181/onos/v1/docs/>\n\nThe specific API we will use to create new flow rules is `POST /flows`,\ndescribed here:\n<http://127.0.0.1:8181/onos/v1/docs/#!/flows/post_flows>\n\nThis API takes a JSON request. The file\n[mininet/flowrule-gtp.json][flowrule-gtp.json] specifies the flow rule we\nwant to create. This file is incomplete, and you need to modify it before we can\nsend it via the REST APIs.\n\n1. Open up file [mininet/flowrule-gtp.json][flowrule-gtp.json].\n\n   Look for the `\"selector\"` section that specifies the match fields:\n   ```\n   \"selector\": {\n     \"criteria\": [\n       {\n         \"type\": \"IPV4_DST\",\n         \"ip\": \"<INSERT HERE UE IP ADDR>/32\"\n       }\n     ]\n   },\n   ...\n   ```\n\n2. Modify the `ip` field to match on the IP address of the UE (17.0.0.1).\n\n    Since the `dl_sess_lookup` table performs exact match on the IPv4\n    address, make sure to specify the match field with `/32` prefix length.\n\n    Also, note that the `set_dl_sess_info` action is specified as\n    `PROTOCOL_INDEPENDENT`. This is the ONOS terminology to describe custom\n    flow rule actions. For this reason, the action parameters are specified\n    as byte strings in hexadecimal format:\n\n    * `\"teid\": \"BEEF\"`: GTP tunnel identifier (48879 in decimal)\n    * `\"s1u_enb_addr\": \"0a006401\"`: destination IPv4 address of the\n      GTP tunnel, i.e., outer IPv4 header (10.0.100.1). This is the address of\n      the eNodeB.\n    * `\"s1u_sgw_addr\": \"0a0064fe\"`: source address of the GTP outer IPv4\n      header (10.0.100.254). This is the address of the switch interface\n      configured in Trellis.\n\n3. Save the [flowrule-gtp.json] file.\n\n4. Push the flow rule to ONOS using the REST APIs.\n\n   In a terminal window, type the following command:\n\n   ```\n   $ make flowrule-gtp\n   ```\n\n   This command uses `cURL` to push the flow rule JSON file to the ONOS REST API\n   endpoint. If the flow rule has been created correctly, you should see an\n   output similar to the following one:\n\n   ```\n   *** Pushing flowrule-gtp.json to ONOS...\n   curl --fail -sSL --user onos:rocks --noproxy localhost -X POST -H 'Content-Type:application/json' \\\n                   http://localhost:8181/onos/v1/flows?appId=rest-api -d@./mininet/flowrule-gtp.json\n   {\"flows\":[{\"deviceId\":\"device:leaf1\",\"flowId\":\"54606147878186474\"}]}\n   ```\n\n5. Check the eNodeB process. You should see that the received packets\n   are now GTP-encapsulated!\n\n    ```\n    [...] 727 bytes: 10.0.100.254 -> 10.0.100.1, is_gtp_encap=True\n        Ether / IP / UDP / GTP_U_Header / IP / UDP 10.0.200.1:80 > 17.0.0.1:400 / Raw\n    ```\n\n## Congratulations!\n\nYou have completed the eighth exercise! You were able to use fabric.p4 and\nTrellis to encapsulate traffic in GTP tunnels and to route it across the fabric.\n\n[topo-gtp.py]: mininet/topo-gtp.py\n[netcfg-gtp.json]: mininet/netcfg-gtp.json\n[send-udp.py]: mininet/send-udp.py\n[recv-gtp.py]: mininet/recv-gtp.py\n[util/mn-cmd]: util/mn-cmd\n[fabric.p4]: https://github.com/opennetworkinglab/onos/blob/2.2.2/pipelines/fabric/impl/src/main/resources/fabric.p4\n[spgw.p4]: https://github.com/opennetworkinglab/onos/blob/2.2.2/pipelines/fabric/impl/src/main/resources/include/spgw.p4\n[dl_sess_lookup]: https://github.com/opennetworkinglab/onos/blob/2.2.2/pipelines/fabric/impl/src/main/resources/include/spgw.p4#L70\n[flowrule-gtp.json]: mininet/flowrule-gtp.json\n"
  },
  {
    "path": "LICENSE",
    "content": "                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      
form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright [yyyy] [name of copyright owner]\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n"
  },
  {
    "path": "Makefile",
    "content": "mkfile_path := $(abspath $(lastword $(MAKEFILE_LIST)))\ncurr_dir := $(patsubst %/,%,$(dir $(mkfile_path)))\n\ninclude util/docker/Makefile.vars\n\nonos_url := http://localhost:8181/onos\nonos_curl := curl --fail -sSL --user onos:rocks --noproxy localhost\napp_name := org.onosproject.ngsdn-tutorial\n\nNGSDN_TUTORIAL_SUDO ?=\n\ndefault:\n\t$(error Please specify a make target (see README.md))\n\n_docker_pull_all:\n\tdocker pull ${ONOS_IMG}@${ONOS_SHA}\n\tdocker tag ${ONOS_IMG}@${ONOS_SHA} ${ONOS_IMG}\n\tdocker pull ${P4RT_SH_IMG}@${P4RT_SH_SHA}\n\tdocker tag ${P4RT_SH_IMG}@${P4RT_SH_SHA} ${P4RT_SH_IMG}\n\tdocker pull ${P4C_IMG}@${P4C_SHA}\n\tdocker tag ${P4C_IMG}@${P4C_SHA} ${P4C_IMG}\n\tdocker pull ${STRATUM_BMV2_IMG}@${STRATUM_BMV2_SHA}\n\tdocker tag ${STRATUM_BMV2_IMG}@${STRATUM_BMV2_SHA} ${STRATUM_BMV2_IMG}\n\tdocker pull ${MVN_IMG}@${MVN_SHA}\n\tdocker tag ${MVN_IMG}@${MVN_SHA} ${MVN_IMG}\n\tdocker pull ${GNMI_CLI_IMG}@${GNMI_CLI_SHA}\n\tdocker tag ${GNMI_CLI_IMG}@${GNMI_CLI_SHA} ${GNMI_CLI_IMG}\n\tdocker pull ${YANG_IMG}@${YANG_SHA}\n\tdocker tag ${YANG_IMG}@${YANG_SHA} ${YANG_IMG}\n\tdocker pull ${SSHPASS_IMG}@${SSHPASS_SHA}\n\tdocker tag ${SSHPASS_IMG}@${SSHPASS_SHA} ${SSHPASS_IMG}\n\ndeps: _docker_pull_all\n\n_start:\n\t$(info *** Starting ONOS and Mininet (${NGSDN_TOPO_PY})... )\n\t@mkdir -p tmp/onos\n\t@NGSDN_TOPO_PY=${NGSDN_TOPO_PY} docker-compose up -d\n\nstart: NGSDN_TOPO_PY := topo-v6.py\nstart: _start\n\nstart-v4: NGSDN_TOPO_PY := topo-v4.py\nstart-v4: _start\n\nstart-gtp: NGSDN_TOPO_PY := topo-gtp.py\nstart-gtp: _start\n\nstop:\n\t$(info *** Stopping ONOS and Mininet...)\n\t@NGSDN_TOPO_PY=foo docker-compose down -t0\n\nrestart: reset start\n\nonos-cli:\n\t$(info *** Connecting to the ONOS CLI... 
password: rocks)\n\t$(info *** To exit press Ctrl-D)\n\t@ssh -o \"UserKnownHostsFile=/dev/null\" -o \"StrictHostKeyChecking=no\" -o LogLevel=ERROR -p 8101 onos@localhost\n\nonos-log:\n\tdocker-compose logs -f onos\n\nonos-ui:\n\topen ${onos_url}/ui\n\nmn-cli:\n\t$(info *** Attaching to Mininet CLI...)\n\t$(info *** To detach press Ctrl-D (Mininet will keep running))\n\t-@docker attach --detach-keys \"ctrl-d\" $(shell docker-compose ps -q mininet) || echo \"*** Detached from Mininet CLI\"\n\nmn-log:\n\tdocker logs -f mininet\n\n_netcfg:\n\t$(info *** Pushing ${NGSDN_NETCFG_JSON} to ONOS...)\n\t${onos_curl} -X POST -H 'Content-Type:application/json' \\\n\t\t${onos_url}/v1/network/configuration -d@./mininet/${NGSDN_NETCFG_JSON}\n\t@echo\n\nnetcfg: NGSDN_NETCFG_JSON := netcfg.json\nnetcfg: _netcfg\n\nnetcfg-sr: NGSDN_NETCFG_JSON := netcfg-sr.json\nnetcfg-sr: _netcfg\n\nnetcfg-gtp: NGSDN_NETCFG_JSON := netcfg-gtp.json\nnetcfg-gtp: _netcfg\n\nflowrule-gtp:\n\t$(info *** Pushing flowrule-gtp.json to ONOS...)\n\t${onos_curl} -X POST -H 'Content-Type:application/json' \\\n\t\t${onos_url}/v1/flows?appId=rest-api -d@./mininet/flowrule-gtp.json\n\t@echo\n\nflowrule-clean:\n\t$(info *** Removing all flows installed via REST APIs...)\n\t${onos_curl} -X DELETE -H 'Content-Type:application/json' \\\n\t\t${onos_url}/v1/flows/application/rest-api\n\t@echo\n\nreset: stop\n\t-$(NGSDN_TUTORIAL_SUDO) rm -rf ./tmp\n\nclean:\n\t-$(NGSDN_TUTORIAL_SUDO) rm -rf p4src/build\n\t-$(NGSDN_TUTORIAL_SUDO) rm -rf app/target\n\t-$(NGSDN_TUTORIAL_SUDO) rm -rf app/src/main/resources/bmv2.json\n\t-$(NGSDN_TUTORIAL_SUDO) rm -rf app/src/main/resources/p4info.txt\n\np4-build: p4src/main.p4\n\t$(info *** Building P4 program...)\n\t@mkdir -p p4src/build\n\tdocker run --rm -v ${curr_dir}:/workdir -w /workdir ${P4C_IMG} \\\n\t\tp4c-bm2-ss --arch v1model -o p4src/build/bmv2.json \\\n\t\t--p4runtime-files p4src/build/p4info.txt --Wdisable=unsupported \\\n\t\tp4src/main.p4\n\t@echo \"*** P4 program compiled 
successfully! Output files are in p4src/build\"\n\np4-test:\n\t@cd ptf && PTF_DOCKER_IMG=$(STRATUM_BMV2_IMG) ./run_tests $(TEST)\n\n_copy_p4c_out:\n\t$(info *** Copying p4c outputs to app resources...)\n\t@mkdir -p app/src/main/resources\n\tcp -f p4src/build/p4info.txt app/src/main/resources/\n\tcp -f p4src/build/bmv2.json app/src/main/resources/\n\n_mvn_package:\n\t$(info *** Building ONOS app...)\n\t@mkdir -p app/target\n\t@docker run --rm -v ${curr_dir}/app:/mvn-src -w /mvn-src ${MVN_IMG} mvn -o clean package\n\napp-build: p4-build _copy_p4c_out _mvn_package\n\t$(info *** ONOS app .oar package created successfully)\n\t@ls -1 app/target/*.oar\n\napp-install:\n\t$(info *** Installing and activating app in ONOS...)\n\t${onos_curl} -X POST -HContent-Type:application/octet-stream \\\n\t\t'${onos_url}/v1/applications?activate=true' \\\n\t\t--data-binary @app/target/ngsdn-tutorial-1.0-SNAPSHOT.oar\n\t@echo\n\napp-uninstall:\n\t$(info *** Uninstalling app from ONOS (if present)...)\n\t-${onos_curl} -X DELETE ${onos_url}/v1/applications/${app_name}\n\t@echo\n\napp-reload: app-uninstall app-install\n\nyang-tools:\n\tdocker run --rm -it -v ${curr_dir}/yang/demo-port.yang:/models/demo-port.yang ${YANG_IMG}\n\nsolution-apply:\n\tmkdir working_copy\n\tcp -r app working_copy/app\n\tcp -r p4src working_copy/p4src\n\tcp -r ptf working_copy/ptf\n\tcp -r mininet working_copy/mininet\n\trsync -r solution/ ./\n\nsolution-revert:\n\ttest -d working_copy\n\t$(NGSDN_TUTORIAL_SUDO) rm -rf ./app/*\n\t$(NGSDN_TUTORIAL_SUDO) rm -rf ./p4src/*\n\t$(NGSDN_TUTORIAL_SUDO) rm -rf ./ptf/*\n\t$(NGSDN_TUTORIAL_SUDO) rm -rf ./mininet/*\n\tcp -r working_copy/* ./\n\t$(NGSDN_TUTORIAL_SUDO) rm -rf working_copy/\n\ncheck:\n\tmake reset\n\t# P4 starter code and app should compile\n\tmake p4-build\n\tmake app-build\n\t# Check solution\n\tmake solution-apply\n\tmake start\n\tmake p4-build\n\tmake p4-test\n\tmake app-build\n\tsleep 30\n\tmake app-reload\n\tsleep 10\n\tmake netcfg\n\tsleep 10\n\t# The first 
ping(s) might fail because of a known race condition in the\n\t# L2BridgingComponent. Ping all hosts.\n\t-util/mn-cmd h1a ping -c 1 2001:1:1::b\n\tutil/mn-cmd h1a ping -c 1 2001:1:1::b\n\t-util/mn-cmd h1b ping -c 1 2001:1:1::c\n\tutil/mn-cmd h1b ping -c 1 2001:1:1::c\n\t-util/mn-cmd h2 ping -c 1 2001:1:1::b\n\tutil/mn-cmd h2 ping -c 1 2001:1:1::b\n\tutil/mn-cmd h2 ping -c 1 2001:1:1::a\n\tutil/mn-cmd h2 ping -c 1 2001:1:1::c\n\t-util/mn-cmd h3 ping -c 1 2001:1:2::1\n\tutil/mn-cmd h3 ping -c 1 2001:1:2::1\n\tutil/mn-cmd h3 ping -c 1 2001:1:1::a\n\tutil/mn-cmd h3 ping -c 1 2001:1:1::b\n\tutil/mn-cmd h3 ping -c 1 2001:1:1::c\n\t-util/mn-cmd h4 ping -c 1 2001:1:2::1\n\tutil/mn-cmd h4 ping -c 1 2001:1:2::1\n\tutil/mn-cmd h4 ping -c 1 2001:1:1::a\n\tutil/mn-cmd h4 ping -c 1 2001:1:1::b\n\tutil/mn-cmd h4 ping -c 1 2001:1:1::c\n\tmake stop\n\tmake solution-revert\n\ncheck-sr:\n\tmake reset\n\tmake start-v4\n\tsleep 45\n\tutil/onos-cmd app activate segmentrouting\n\tutil/onos-cmd app activate pipelines.fabric\n\tsleep 15\n\tmake netcfg-sr\n\tsleep 20\n\tutil/mn-cmd h1a ping -c 1 172.16.1.3\n\tutil/mn-cmd h1b ping -c 1 172.16.1.3\n\tutil/mn-cmd h2 ping -c 1 172.16.2.254\n\tsleep 5\n\tutil/mn-cmd h2 ping -c 1 172.16.1.1\n\tutil/mn-cmd h2 ping -c 1 172.16.1.2\n\tutil/mn-cmd h2 ping -c 1 172.16.1.3\n\t# ping from h3 and h4 should not work without the solution\n\t! util/mn-cmd h3 ping -c 1 172.16.3.254\n\t! 
util/mn-cmd h4 ping -c 1 172.16.4.254\n\tmake solution-apply\n\tmake netcfg-sr\n\tsleep 20\n\tutil/mn-cmd h3 ping -c 1 172.16.3.254\n\tutil/mn-cmd h4 ping -c 1 172.16.4.254\n\tsleep 5\n\tutil/mn-cmd h3 ping -c 1 172.16.1.1\n\tutil/mn-cmd h3 ping -c 1 172.16.1.2\n\tutil/mn-cmd h3 ping -c 1 172.16.1.3\n\tutil/mn-cmd h3 ping -c 1 172.16.2.1\n\tutil/mn-cmd h3 ping -c 1 172.16.4.1\n\tutil/mn-cmd h4 ping -c 1 172.16.1.1\n\tutil/mn-cmd h4 ping -c 1 172.16.1.2\n\tutil/mn-cmd h4 ping -c 1 172.16.1.3\n\tutil/mn-cmd h4 ping -c 1 172.16.2.1\n\tmake stop\n\tmake solution-revert\n\ncheck-gtp:\n\tmake reset\n\tmake start-gtp\n\tsleep 45\n\tutil/onos-cmd app activate segmentrouting\n\tutil/onos-cmd app activate pipelines.fabric\n\tutil/onos-cmd app activate netcfghostprovider\n\tsleep 15\n\tmake solution-apply\n\tmake netcfg-gtp\n\tsleep 20\n\tutil/mn-cmd enodeb ping -c 1 10.0.100.254\n\tutil/mn-cmd pdn ping -c 1 10.0.200.254\n\tutil/onos-cmd route-add 17.0.0.0/24 10.0.100.1\n\tmake flowrule-gtp\n\t# util/mn-cmd requires a TTY because it uses docker -it option\n\t# hence we use screen for putting it in the background\n\tscreen -d -m util/mn-cmd pdn /mininet/send-udp.py\n\tutil/mn-cmd enodeb /mininet/recv-gtp.py -e\n\tmake stop\n\tmake solution-revert\n"
  },
  {
    "path": "README.md",
    "content": "# Next-Gen SDN Tutorial (Advanced)\n\nWelcome to the Next-Gen SDN tutorial!\n\nThis tutorial is targeted at students and practitioners who want to learn about\nthe building blocks of the next-generation SDN (NG-SDN) architecture, such as:\n\n* Data plane programming and control via P4 and P4Runtime\n* Configuration via YANG, OpenConfig, and gNMI\n* Stratum switch OS\n* ONOS SDN controller\n\nTutorial sessions are organized around a sequence of hands-on exercises that\nshow how to build a leaf-spine data center fabric based on IPv6, using P4,\nStratum, and ONOS. Exercises assume an intermediate knowledge of the P4\nlanguage, and a basic knowledge of Java and Python. Participants will be\nprovided with a starter P4 program and ONOS app implementation. Exercises will\nfocus on concepts such as:\n\n* Using Stratum APIs (P4Runtime, gNMI, OpenConfig, gNOI)\n* Using ONOS with devices programmed with arbitrary P4 programs\n* Writing ONOS applications to provide the control plane logic\n  (bridging, routing, ECMP, etc.)\n* Testing using BMv2 in Mininet\n* PTF-based P4 unit tests\n\n## Basic vs. advanced version\n\nThis tutorial comes in two versions: basic (`master` branch), and advanced\n(this branch).\n\nThe basic version contains fewer exercises, and it does not assume prior\nknowledge of the P4 language. Instead, it provides a gentle introduction to it.\nCheck the `master` branch of this repo if you're interested in the basic\nversion.\n\nIf you're interested in the advanced version, keep reading.\n\n## Slides\n\nTutorial slides are available online:\n<http://bit.ly/adv-ngsdn-tutorial-slides>\n\nThese slides provide an introduction to the topics covered in the tutorial. We\nsuggest you look at them before starting to work on the exercises.\n\n## System requirements\n\nIf you are taking this tutorial at an event organized by ONF, you should have\nreceived credentials to access the **ONF Cloud Tutorial Platform**, in which\ncase you can skip this section. 
Keep reading if you are interested in working on\nthe exercises on your laptop.\n\nTo facilitate access to the tools required to complete this tutorial, we provide\ntwo options for you to choose from:\n\n1. Download a pre-packaged VM with everything included; **OR**\n2. Manually install Docker and other dependencies.\n\n### Option 1 - Download tutorial VM\n\nUse the following link to download the VM (4 GB):\n* <http://bit.ly/ngsdn-tutorial-ova>\n\nThe VM is in .ova format and has been created using VirtualBox v5.2.32. To run\nthe VM you can use any modern virtualization system, although we recommend using\nVirtualBox. For instructions on how to get VirtualBox and import the VM, use the\nfollowing links:\n\n* <https://www.virtualbox.org/wiki/Downloads>\n* <https://docs.oracle.com/cd/E26217_01/E26796/html/qs-import-vm.html>\n\nAlternatively, you can use the scripts in [util/vm](util/vm) to build a VM on\nyour machine using Vagrant.\n\n**Recommended VM configuration:**\nThe current configuration of the VM is 4 GB of RAM and a 4-core CPU. These are the\nrecommended minimum system requirements to complete the exercises. When\nimported, the VM takes approx. 8 GB of HDD space. For a smooth experience, we\nrecommend running the VM on a host system that has at least double the\nresources.\n\n**VM user credentials:**\nUse credentials `sdn`/`rocks` to log in to the Ubuntu system.\n\n### Option 2 - Manually install Docker and other dependencies\n\nAll exercises can be executed by installing the following dependencies:\n\n* Docker v1.13.0+ (with docker-compose)\n* make\n* Python 3\n* Bash-like Unix shell\n* Wireshark (optional)\n\n**Note for Windows users**: all scripts have been tested on macOS and Ubuntu.\nAlthough we think they should work on Windows, we have not tested it. 
For this\nreason, we advise Windows users to prefer Option 1.\n\n## Get this repo or pull latest changes\n\nTo work on the exercises you will need to clone this repo:\n\n    cd ~\n    git clone -b advanced https://github.com/opennetworkinglab/ngsdn-tutorial\n\nIf the `ngsdn-tutorial` directory is already present, make sure to update its\ncontent:\n\n    cd ~/ngsdn-tutorial\n    git pull origin advanced\n\n## Download / upgrade dependencies\n\nThe VM may have shipped with an older version of the dependencies than we would\nlike to use for the exercises. You can upgrade to the latest version using the\nfollowing command:\n\n    cd ~/ngsdn-tutorial\n    make deps\n\nThis command will download all necessary Docker images (~1.5 GB) allowing you to\nwork off-line. For this reason, we recommend running this step ahead of the\ntutorial, with a reliable Internet connection.\n\n## Using an IDE to work on the exercises\n\nDuring the exercises you will need to write code in multiple languages such as\nP4, Java, and Python. While the exercises do not prescribe the use of any\nspecific IDE or code editor, the **ONF Cloud Tutorial Platform** provides access\nto a web-based version of Visual Studio Code (VS Code).\n\nIf you are using the tutorial VM, you will find the Java IDE [IntelliJ IDEA\nCommunity Edition](https://www.jetbrains.com/idea/), already pre-loaded with\nplugins for P4 syntax highlighting and Python development. 
We suggest using\nIntelliJ IDEA especially when working on the ONOS app, as it provides code\ncompletion for all ONOS APIs.\n\n## Repo structure\n\nThis repo is structured as follows:\n\n * `p4src/` P4 implementation\n * `yang/` YANG model used in exercise 2\n * `app/` custom ONOS app Java implementation\n * `mininet/` Mininet script to emulate a 2x2 leaf-spine fabric topology of\n   `stratum_bmv2` devices\n * `util/` Utility scripts\n * `ptf/` P4 data plane unit tests based on Packet Test Framework (PTF)\n\n## Tutorial commands\n\nTo facilitate working on the exercises, we provide a set of make-based commands\nto control the different aspects of the tutorial. Commands will be introduced in\nthe exercises; here's a quick reference:\n\n| Make command        | Description                                            |\n|---------------------|--------------------------------------------------------|\n| `make deps`         | Pull and build all required dependencies               |\n| `make p4-build`     | Build P4 program                                       |\n| `make p4-test`      | Run PTF tests                                          |\n| `make start`        | Start Mininet and ONOS containers                      |\n| `make stop`         | Stop all containers                                    |\n| `make restart`      | Restart containers clearing any previous state         |\n| `make onos-cli`     | Access the ONOS CLI (password: `rocks`, Ctrl-D to exit)|\n| `make onos-log`     | Show the ONOS log                                      |\n| `make mn-cli`       | Access the Mininet CLI (Ctrl-D to exit)                |\n| `make mn-log`       | Show the Mininet log (i.e., the CLI output)            |\n| `make app-build`    | Build custom ONOS app                                  |\n| `make app-reload`   | Install and activate the ONOS app                      |\n| `make netcfg`       | Push netcfg.json file (network config) to ONOS         |\n\n## Exercises\n\nClick 
on the exercise name to see the instructions:\n\n 1. [P4Runtime basics](./EXERCISE-1.md)\n 2. [Yang, OpenConfig, and gNMI basics](./EXERCISE-2.md)\n 3. [Using ONOS as the control plane](./EXERCISE-3.md)\n 4. [Enabling ONOS built-in services](./EXERCISE-4.md)\n 5. [Implementing IPv6 routing with ECMP](./EXERCISE-5.md)\n 6. [Implementing SRv6](./EXERCISE-6.md)\n 7. [Trellis Basics](./EXERCISE-7.md)\n 8. [GTP termination with fabric.p4](./EXERCISE-8.md)\n\n## Solutions\n\nYou can find solutions for each exercise in the [solution](solution) directory.\nFeel free to compare your solution to the reference one whenever you feel stuck.\n\n[![Build Status](https://travis-ci.org/opennetworkinglab/ngsdn-tutorial.svg?branch=advanced)](https://travis-ci.org/opennetworkinglab/ngsdn-tutorial)\n"
  },
  {
    "path": "app/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!--\n  ~ Copyright 2019 Open Networking Foundation\n  ~\n  ~ Licensed under the Apache License, Version 2.0 (the \"License\");\n  ~ you may not use this file except in compliance with the License.\n  ~ You may obtain a copy of the License at\n  ~\n  ~     http://www.apache.org/licenses/LICENSE-2.0\n  ~\n  ~ Unless required by applicable law or agreed to in writing, software\n  ~ distributed under the License is distributed on an \"AS IS\" BASIS,\n  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n  ~ See the License for the specific language governing permissions and\n  ~ limitations under the License.\n  -->\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n    <modelVersion>4.0.0</modelVersion>\n\n    <parent>\n        <groupId>org.onosproject</groupId>\n        <artifactId>onos-dependencies</artifactId>\n        <version>2.2.2</version>\n    </parent>\n\n    <groupId>org.onosproject</groupId>\n    <artifactId>ngsdn-tutorial</artifactId>\n    <version>1.0-SNAPSHOT</version>\n    <packaging>bundle</packaging>\n\n    <description>NG-SDN tutorial app</description>\n    <url>http://www.onosproject.org</url>\n\n    <properties>\n        <onos.app.name>org.onosproject.ngsdn-tutorial</onos.app.name>\n        <onos.app.title>NG-SDN Tutorial App</onos.app.title>\n        <onos.app.origin>https://www.onosproject.org</onos.app.origin>\n        <onos.app.category>Traffic Steering</onos.app.category>\n        <onos.app.url>https://www.onosproject.org</onos.app.url>\n        <onos.app.readme>\n            Provides IPv6 routing capabilities to a leaf-spine network of\n            Stratum switches\n        </onos.app.readme>\n    </properties>\n\n    <dependencies>\n        <dependency>\n            
<groupId>org.onosproject</groupId>\n            <artifactId>onos-api</artifactId>\n            <version>${onos.version}</version>\n            <scope>provided</scope>\n        </dependency>\n\n        <dependency>\n            <groupId>org.onosproject</groupId>\n            <artifactId>onos-protocols-p4runtime-model</artifactId>\n            <version>${onos.version}</version>\n            <scope>provided</scope>\n        </dependency>\n\n        <dependency>\n            <groupId>org.onosproject</groupId>\n            <artifactId>onos-protocols-p4runtime-api</artifactId>\n            <version>${onos.version}</version>\n            <scope>provided</scope>\n        </dependency>\n\n        <dependency>\n            <groupId>org.onosproject</groupId>\n            <artifactId>onos-protocols-grpc-api</artifactId>\n            <version>${onos.version}</version>\n            <scope>provided</scope>\n        </dependency>\n\n        <dependency>\n            <groupId>org.onosproject</groupId>\n            <artifactId>onlab-osgi</artifactId>\n            <version>${onos.version}</version>\n            <scope>provided</scope>\n        </dependency>\n\n        <dependency>\n            <groupId>org.onosproject</groupId>\n            <artifactId>onlab-misc</artifactId>\n            <version>${onos.version}</version>\n            <scope>provided</scope>\n        </dependency>\n\n        <dependency>\n            <groupId>org.onosproject</groupId>\n            <artifactId>onos-cli</artifactId>\n            <version>${onos.version}</version>\n            <scope>provided</scope>\n        </dependency>\n\n        <dependency>\n            <groupId>org.slf4j</groupId>\n            <artifactId>slf4j-api</artifactId>\n            <scope>provided</scope>\n        </dependency>\n\n        <dependency>\n            <groupId>com.google.guava</groupId>\n            <artifactId>guava</artifactId>\n            <scope>provided</scope>\n        </dependency>\n\n        <dependency>\n           
 <groupId>com.fasterxml.jackson.core</groupId>\n            <artifactId>jackson-databind</artifactId>\n            <scope>provided</scope>\n        </dependency>\n\n        <dependency>\n            <groupId>junit</groupId>\n            <artifactId>junit</artifactId>\n            <scope>test</scope>\n        </dependency>\n\n        <dependency>\n            <groupId>org.onosproject</groupId>\n            <artifactId>onos-api</artifactId>\n            <version>${onos.version}</version>\n            <scope>test</scope>\n            <classifier>tests</classifier>\n        </dependency>\n\n        <dependency>\n            <groupId>org.osgi</groupId>\n            <artifactId>org.osgi.service.component.annotations</artifactId>\n            <scope>provided</scope>\n        </dependency>\n\n        <dependency>\n            <groupId>org.osgi</groupId>\n            <artifactId>org.osgi.core</artifactId>\n            <scope>provided</scope>\n        </dependency>\n\n        <dependency>\n            <groupId>org.apache.karaf.shell</groupId>\n            <artifactId>org.apache.karaf.shell.console</artifactId>\n            <scope>provided</scope>\n        </dependency>\n    </dependencies>\n\n    <build>\n        <plugins>\n            <plugin>\n                <groupId>org.apache.felix</groupId>\n                <artifactId>maven-bundle-plugin</artifactId>\n                <configuration>\n                    <instructions>\n                        <Karaf-Commands>\n                            org.onosproject.ngsdn.tutorial.cli\n                        </Karaf-Commands>\n                    </instructions>\n                </configuration>\n            </plugin>\n            <plugin>\n                <groupId>org.onosproject</groupId>\n                <artifactId>onos-maven-plugin</artifactId>\n            </plugin>\n        </plugins>\n    </build>\n\n</project>\n"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/AppConstants.java",
    "content": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\npackage org.onosproject.ngsdn.tutorial;\n\nimport org.onosproject.net.pi.model.PiPipeconfId;\n\npublic class AppConstants {\n\n    public static final String APP_NAME = \"org.onosproject.ngsdn-tutorial\";\n    public static final PiPipeconfId PIPECONF_ID = new PiPipeconfId(\"org.onosproject.ngsdn-tutorial\");\n\n    public static final int DEFAULT_FLOW_RULE_PRIORITY = 10;\n    public static final int INITIAL_SETUP_DELAY = 2; // Seconds.\n    public static final int CLEAN_UP_DELAY = 2000; // milliseconds\n    public static final int DEFAULT_CLEAN_UP_RETRY_TIMES = 10;\n\n    public static final int CPU_PORT_ID = 255;\n    public static final int CPU_CLONE_SESSION_ID = 99;\n}\n"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/Ipv6RoutingComponent.java",
    "content": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\npackage org.onosproject.ngsdn.tutorial;\n\nimport com.google.common.collect.Lists;\nimport org.onlab.packet.Ip6Address;\nimport org.onlab.packet.Ip6Prefix;\nimport org.onlab.packet.IpAddress;\nimport org.onlab.packet.IpPrefix;\nimport org.onlab.packet.MacAddress;\nimport org.onlab.util.ItemNotFoundException;\nimport org.onosproject.core.ApplicationId;\nimport org.onosproject.mastership.MastershipService;\nimport org.onosproject.net.Device;\nimport org.onosproject.net.DeviceId;\nimport org.onosproject.net.Host;\nimport org.onosproject.net.Link;\nimport org.onosproject.net.PortNumber;\nimport org.onosproject.net.config.NetworkConfigService;\nimport org.onosproject.net.device.DeviceEvent;\nimport org.onosproject.net.device.DeviceListener;\nimport org.onosproject.net.device.DeviceService;\nimport org.onosproject.net.flow.FlowRule;\nimport org.onosproject.net.flow.FlowRuleService;\nimport org.onosproject.net.flow.criteria.PiCriterion;\nimport org.onosproject.net.group.GroupDescription;\nimport org.onosproject.net.group.GroupService;\nimport org.onosproject.net.host.HostEvent;\nimport org.onosproject.net.host.HostListener;\nimport org.onosproject.net.host.HostService;\nimport org.onosproject.net.host.InterfaceIpAddress;\nimport org.onosproject.net.intf.Interface;\nimport org.onosproject.net.intf.InterfaceService;\nimport 
org.onosproject.net.link.LinkEvent;\nimport org.onosproject.net.link.LinkListener;\nimport org.onosproject.net.link.LinkService;\nimport org.onosproject.net.pi.model.PiActionId;\nimport org.onosproject.net.pi.model.PiActionParamId;\nimport org.onosproject.net.pi.model.PiMatchFieldId;\nimport org.onosproject.net.pi.runtime.PiAction;\nimport org.onosproject.net.pi.runtime.PiActionParam;\nimport org.onosproject.net.pi.runtime.PiActionProfileGroupId;\nimport org.onosproject.net.pi.runtime.PiTableAction;\nimport org.osgi.service.component.annotations.Activate;\nimport org.osgi.service.component.annotations.Component;\nimport org.osgi.service.component.annotations.Deactivate;\nimport org.osgi.service.component.annotations.Reference;\nimport org.osgi.service.component.annotations.ReferenceCardinality;\nimport org.onosproject.ngsdn.tutorial.common.FabricDeviceConfig;\nimport org.onosproject.ngsdn.tutorial.common.Utils;\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport java.util.Collection;\nimport java.util.Collections;\nimport java.util.List;\nimport java.util.Optional;\nimport java.util.Set;\nimport java.util.stream.Collectors;\n\nimport static com.google.common.collect.Streams.stream;\nimport static org.onosproject.ngsdn.tutorial.AppConstants.INITIAL_SETUP_DELAY;\n\n/**\n * App component that configures devices to provide IPv6 routing capabilities\n * across the whole fabric.\n */\n@Component(\n        immediate = true,\n        // *** TODO EXERCISE 5\n        // set to true when ready\n        enabled = false\n)\npublic class Ipv6RoutingComponent {\n\n    private static final Logger log = LoggerFactory.getLogger(Ipv6RoutingComponent.class);\n\n    private static final int DEFAULT_ECMP_GROUP_ID = 0xec3b0000;\n    private static final long GROUP_INSERT_DELAY_MILLIS = 200;\n\n    private final HostListener hostListener = new InternalHostListener();\n    private final LinkListener linkListener = new InternalLinkListener();\n    private final 
DeviceListener deviceListener = new InternalDeviceListener();\n\n    private ApplicationId appId;\n\n    //--------------------------------------------------------------------------\n    // ONOS CORE SERVICE BINDING\n    //\n    // These variables are set by the Karaf runtime environment before calling\n    // the activate() method.\n    //--------------------------------------------------------------------------\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private FlowRuleService flowRuleService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private HostService hostService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private MastershipService mastershipService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private GroupService groupService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private DeviceService deviceService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private NetworkConfigService networkConfigService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private InterfaceService interfaceService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private LinkService linkService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private MainComponent mainComponent;\n\n    //--------------------------------------------------------------------------\n    // COMPONENT ACTIVATION.\n    //\n    // When loading/unloading the app the Karaf runtime environment will call\n    // activate()/deactivate().\n    //--------------------------------------------------------------------------\n\n    @Activate\n    protected void activate() {\n        appId = mainComponent.getAppId();\n\n        hostService.addListener(hostListener);\n        linkService.addListener(linkListener);\n        deviceService.addListener(deviceListener);\n\n        // Schedule set up for all 
devices.\n        mainComponent.scheduleTask(this::setUpAllDevices, INITIAL_SETUP_DELAY);\n\n        log.info(\"Started\");\n    }\n\n    @Deactivate\n    protected void deactivate() {\n        hostService.removeListener(hostListener);\n        linkService.removeListener(linkListener);\n        deviceService.removeListener(deviceListener);\n\n        log.info(\"Stopped\");\n    }\n\n    //--------------------------------------------------------------------------\n    // METHODS TO COMPLETE.\n    //\n    // Complete the implementation wherever you see TODO.\n    //--------------------------------------------------------------------------\n\n    /**\n     * Sets up the \"My Station\" table for the given device using the\n     * myStationMac address found in the config.\n     * <p>\n     * This method will be called at component activation for each device\n     * (switch) known by ONOS, and every time a new device-added event is\n     * captured by the InternalDeviceListener defined below.\n     *\n     * @param deviceId the device ID\n     */\n    private void setUpMyStationTable(DeviceId deviceId) {\n\n        log.info(\"Adding My Station rules to {}...\", deviceId);\n\n        final MacAddress myStationMac = getMyStationMac(deviceId);\n\n        // HINT: in our solution, the My Station table matches on the *ethernet\n        // destination* and there is only one action called *NoAction*, which is\n        // used as an indication of \"table hit\" in the control block.\n\n        // *** TODO EXERCISE 5\n        // Modify P4Runtime entity names to match content of P4Info file (look\n        // for the fully qualified name of tables, match fields, and actions.\n        // ---- START SOLUTION ----\n        final String tableId = \"MODIFY ME\";\n\n        final PiCriterion match = PiCriterion.builder()\n                .matchExact(\n                        PiMatchFieldId.of(\"MODIFY ME\"),\n                        myStationMac.toBytes())\n                .build();\n\n   
     // Creates an action that does *NoAction* when hit.\n        final PiTableAction action = PiAction.builder()\n                .withId(PiActionId.of(\"MODIFY ME\"))\n                .build();\n        // ---- END SOLUTION ----\n\n        final FlowRule myStationRule = Utils.buildFlowRule(\n                deviceId, appId, tableId, match, action);\n\n        flowRuleService.applyFlowRules(myStationRule);\n    }\n\n    /**\n     * Creates an ONOS SELECT group for the routing table to provide ECMP\n     * forwarding for the given collection of next hop MAC addresses. ONOS\n     * SELECT groups are equivalent to P4Runtime action selector groups.\n     * <p>\n     * This method will be called by the routing policy methods below to insert\n     * groups in the L3 table.\n     *\n     * @param groupId     the group ID\n     * @param nextHopMacs the collection of MAC addresses of next hops\n     * @param deviceId    the device where the group will be installed\n     * @return a SELECT group\n     */\n    private GroupDescription createNextHopGroup(int groupId,\n                                                Collection<MacAddress> nextHopMacs,\n                                                DeviceId deviceId) {\n\n        String actionProfileId = \"IngressPipeImpl.ecmp_selector\";\n\n        final List<PiAction> actions = Lists.newArrayList();\n\n        // Build one \"set next hop\" action for each next hop.\n        // *** TODO EXERCISE 5\n        // Modify P4Runtime entity names to match content of P4Info file (look\n        // for the fully qualified name of tables, match fields, and actions).\n        // ---- START SOLUTION ----\n        final String tableId = \"MODIFY ME\";\n        for (MacAddress nextHopMac : nextHopMacs) {\n            final PiAction action = PiAction.builder()\n                    .withId(PiActionId.of(\"MODIFY ME\"))\n                    .withParameter(new PiActionParam(\n                            // Action param name.\n                            PiActionParamId.of(\"MODIFY 
ME\"),\n                            // Action param value.\n                            nextHopMac.toBytes()))\n                    .build();\n\n            actions.add(action);\n        }\n        // ---- END SOLUTION ----\n\n        return Utils.buildSelectGroup(\n                deviceId, tableId, actionProfileId, groupId, actions, appId);\n    }\n\n    /**\n     * Creates a routing flow rule that matches on the given IPv6 prefix and\n     * executes the given group ID (created before).\n     *\n     * @param deviceId  the device where flow rule will be installed\n     * @param ip6Prefix the IPv6 prefix\n     * @param groupId   the group ID\n     * @return a flow rule\n     */\n    private FlowRule createRoutingRule(DeviceId deviceId, Ip6Prefix ip6Prefix,\n                                       int groupId) {\n\n        // *** TODO EXERCISE 5\n        // Modify P4Runtime entity names to match content of P4Info file (look\n        // for the fully qualified name of tables, match fields, and actions.\n        // ---- START SOLUTION ----\n        final String tableId = \"MODIFY ME\";\n        final PiCriterion match = PiCriterion.builder()\n                .matchLpm(\n                        PiMatchFieldId.of(\"MODIFY ME\"),\n                        ip6Prefix.address().toOctets(),\n                        ip6Prefix.prefixLength())\n                .build();\n\n        final PiTableAction action = PiActionProfileGroupId.of(groupId);\n        // ---- END SOLUTION ----\n\n        return Utils.buildFlowRule(\n                deviceId, appId, tableId, match, action);\n    }\n\n    /**\n     * Creates a flow rule for the L2 table mapping the given next hop MAC to\n     * the given output port.\n     * <p>\n     * This is called by the routing policy methods below to establish L2-based\n     * forwarding inside the fabric, e.g., when deviceId is a leaf switch and\n     * nextHopMac is the one of a spine switch.\n     *\n     * @param deviceId   the device\n     * @param 
nexthopMac the next hop (destination) mac\n     * @param outPort    the output port\n     */\n    private FlowRule createL2NextHopRule(DeviceId deviceId, MacAddress nexthopMac,\n                                         PortNumber outPort) {\n\n        // *** TODO EXERCISE 5\n        // Modify P4Runtime entity names to match content of P4Info file (look\n        // for the fully qualified name of tables, match fields, and actions.\n        // ---- START SOLUTION ----\n        final String tableId = \"MODIFY ME\";\n        final PiCriterion match = PiCriterion.builder()\n                .matchExact(PiMatchFieldId.of(\"MODIFY ME\"),\n                        nexthopMac.toBytes())\n                .build();\n\n\n        final PiAction action = PiAction.builder()\n                .withId(PiActionId.of(\"MODIFY ME\"))\n                .withParameter(new PiActionParam(\n                        PiActionParamId.of(\"MODIFY ME\"),\n                        outPort.toLong()))\n                .build();\n        // ---- END SOLUTION ----\n\n        return Utils.buildFlowRule(\n                deviceId, appId, tableId, match, action);\n    }\n\n    //--------------------------------------------------------------------------\n    // EVENT LISTENERS\n    //\n    // Events are processed only if isRelevant() returns true.\n    //--------------------------------------------------------------------------\n\n    /**\n     * Listener of host events which triggers configuration of routing rules on\n     * the device where the host is attached.\n     */\n    class InternalHostListener implements HostListener {\n\n        @Override\n        public boolean isRelevant(HostEvent event) {\n            switch (event.type()) {\n                case HOST_ADDED:\n                    break;\n                case HOST_REMOVED:\n                case HOST_UPDATED:\n                case HOST_MOVED:\n                default:\n                    // Ignore other events.\n                    // Food for 
thought:\n                    // how to support host moved/removed events?\n                    return false;\n            }\n            // Process host event only if this controller instance is the master\n            // for the device where this host is attached.\n            final Host host = event.subject();\n            final DeviceId deviceId = host.location().deviceId();\n            return mastershipService.isLocalMaster(deviceId);\n        }\n\n        @Override\n        public void event(HostEvent event) {\n            Host host = event.subject();\n            DeviceId deviceId = host.location().deviceId();\n            mainComponent.getExecutorService().execute(() -> {\n                log.info(\"{} event! host={}, deviceId={}, port={}\",\n                        event.type(), host.id(), deviceId, host.location().port());\n                setUpHostRules(deviceId, host);\n            });\n        }\n    }\n\n    /**\n     * Listener of link events, which triggers configuration of routing rules to\n     * forward packets across the fabric, i.e. from leaves to spines and vice\n     * versa.\n     * <p>\n     * Reacting to link events instead of device ones allows us to make sure\n     * all devices are always configured with a topology view that includes all\n     * links, e.g. modifying an ECMP group as soon as a new link is added. The\n     * downside is that we might be configuring the same device twice for the\n     * same set of links/paths. However, the ONOS core treats these cases as a\n     * no-op when the device is already configured with the desired forwarding\n     * state (i.e. 
flows and groups)\n     */\n    class InternalLinkListener implements LinkListener {\n\n        @Override\n        public boolean isRelevant(LinkEvent event) {\n            switch (event.type()) {\n                case LINK_ADDED:\n                    break;\n                case LINK_UPDATED:\n                case LINK_REMOVED:\n                default:\n                    return false;\n            }\n            DeviceId srcDev = event.subject().src().deviceId();\n            DeviceId dstDev = event.subject().dst().deviceId();\n            return mastershipService.isLocalMaster(srcDev) ||\n                    mastershipService.isLocalMaster(dstDev);\n        }\n\n        @Override\n        public void event(LinkEvent event) {\n            DeviceId srcDev = event.subject().src().deviceId();\n            DeviceId dstDev = event.subject().dst().deviceId();\n\n            if (mastershipService.isLocalMaster(srcDev)) {\n                mainComponent.getExecutorService().execute(() -> {\n                    log.info(\"{} event! Configuring {}... linkSrc={}, linkDst={}\",\n                            event.type(), srcDev, srcDev, dstDev);\n                    setUpFabricRoutes(srcDev);\n                    setUpL2NextHopRules(srcDev);\n                });\n            }\n            if (mastershipService.isLocalMaster(dstDev)) {\n                mainComponent.getExecutorService().execute(() -> {\n                    log.info(\"{} event! Configuring {}... 
linkSrc={}, linkDst={}\",\n                            event.type(), dstDev, srcDev, dstDev);\n                    setUpFabricRoutes(dstDev);\n                    setUpL2NextHopRules(dstDev);\n                });\n            }\n        }\n    }\n\n    /**\n     * Listener of device events which triggers configuration of the My Station\n     * table.\n     */\n    class InternalDeviceListener implements DeviceListener {\n\n        @Override\n        public boolean isRelevant(DeviceEvent event) {\n            switch (event.type()) {\n                case DEVICE_AVAILABILITY_CHANGED:\n                case DEVICE_ADDED:\n                    break;\n                default:\n                    return false;\n            }\n            // Process device event if this controller instance is the master\n            // for the device and the device is available.\n            DeviceId deviceId = event.subject().id();\n            return mastershipService.isLocalMaster(deviceId) &&\n                    deviceService.isAvailable(event.subject().id());\n        }\n\n        @Override\n        public void event(DeviceEvent event) {\n            mainComponent.getExecutorService().execute(() -> {\n                DeviceId deviceId = event.subject().id();\n                log.info(\"{} event! device id={}\", event.type(), deviceId);\n                setUpMyStationTable(deviceId);\n            });\n        }\n    }\n\n    //--------------------------------------------------------------------------\n    // ROUTING POLICY METHODS\n    //\n    // Called by event listeners, these methods implement the actual routing\n    // policy, responsible for computing paths and creating ECMP groups.\n    //--------------------------------------------------------------------------\n\n    /**\n     * Sets up the L2 next hop rules of a device to provide forwarding inside the\n     * fabric, i.e. 
between leaf and spine switches.\n     *\n     * @param deviceId the device ID\n     */\n    private void setUpL2NextHopRules(DeviceId deviceId) {\n\n        Set<Link> egressLinks = linkService.getDeviceEgressLinks(deviceId);\n\n        for (Link link : egressLinks) {\n            // For each other switch directly connected to this one.\n            final DeviceId nextHopDevice = link.dst().deviceId();\n            // Get port of this device connecting to next hop.\n            final PortNumber outPort = link.src().port();\n            // Get next hop MAC address.\n            final MacAddress nextHopMac = getMyStationMac(nextHopDevice);\n\n            final FlowRule nextHopRule = createL2NextHopRule(\n                    deviceId, nextHopMac, outPort);\n\n            flowRuleService.applyFlowRules(nextHopRule);\n        }\n    }\n\n    /**\n     * Sets up the given device with the necessary rules to route packets to the\n     * given host.\n     *\n     * @param deviceId the device ID\n     * @param host     the host\n     */\n    private void setUpHostRules(DeviceId deviceId, Host host) {\n\n        // Get all IPv6 addresses associated with this host. 
In this tutorial we\n        // use hosts with only 1 IPv6 address.\n        final Collection<Ip6Address> hostIpv6Addrs = host.ipAddresses().stream()\n                .filter(IpAddress::isIp6)\n                .map(IpAddress::getIp6Address)\n                .collect(Collectors.toSet());\n\n        if (hostIpv6Addrs.isEmpty()) {\n            // Ignore.\n            log.debug(\"No IPv6 addresses for host {}, ignore\", host.id());\n            return;\n        } else {\n            log.info(\"Adding routes on {} for host {} [{}]\",\n                    deviceId, host.id(), hostIpv6Addrs);\n        }\n\n        // Create an ECMP group with only one member, where the group ID is\n        // derived from the host MAC.\n        final MacAddress hostMac = host.mac();\n        int groupId = macToGroupId(hostMac);\n\n        final GroupDescription group = createNextHopGroup(\n                groupId, Collections.singleton(hostMac), deviceId);\n\n        // Map each host IPv6 address to the corresponding /128 prefix and obtain a\n        // flow rule that points to the group ID. 
In this tutorial we expect\n        // only one flow rule per host.\n        final List<FlowRule> flowRules = hostIpv6Addrs.stream()\n                .map(IpAddress::toIpPrefix)\n                .filter(IpPrefix::isIp6)\n                .map(IpPrefix::getIp6Prefix)\n                .map(prefix -> createRoutingRule(deviceId, prefix, groupId))\n                .collect(Collectors.toList());\n\n        // Helper function to install flows after groups, since here flows\n        // point to the group and P4Runtime enforces this dependency during\n        // write operations.\n        insertInOrder(group, flowRules);\n    }\n\n    /**\n     * Sets up routes on a given device to forward packets across the fabric,\n     * making a distinction between spines and leaves.\n     *\n     * @param deviceId the device ID.\n     */\n    private void setUpFabricRoutes(DeviceId deviceId) {\n        if (isSpine(deviceId)) {\n            setUpSpineRoutes(deviceId);\n        } else {\n            setUpLeafRoutes(deviceId);\n        }\n    }\n\n    /**\n     * Inserts routing rules on the given spine switch, matching on leaf\n     * interface subnets and forwarding packets to the corresponding leaf.\n     *\n     * @param spineId the spine device ID\n     */\n    private void setUpSpineRoutes(DeviceId spineId) {\n\n        log.info(\"Adding spine routes on {}...\", spineId);\n\n        for (Device device : deviceService.getDevices()) {\n\n            if (isSpine(device.id())) {\n                // We only need routes to leaf switches. 
Ignore spines.\n                continue;\n            }\n\n            final DeviceId leafId = device.id();\n            final MacAddress leafMac = getMyStationMac(leafId);\n            final Set<Ip6Prefix> subnetsToRoute = getInterfaceIpv6Prefixes(leafId);\n\n            // Since we're here, we also add a route for SRv6 (Exercise 7), to\n            // forward packets whose IPv6 destination is the SID of a leaf switch.\n            final Ip6Address leafSid = getDeviceSid(leafId);\n            subnetsToRoute.add(Ip6Prefix.valueOf(leafSid, 128));\n\n            // Create a group with only one member.\n            int groupId = macToGroupId(leafMac);\n\n            GroupDescription group = createNextHopGroup(\n                    groupId, Collections.singleton(leafMac), spineId);\n\n            List<FlowRule> flowRules = subnetsToRoute.stream()\n                    .map(subnet -> createRoutingRule(spineId, subnet, groupId))\n                    .collect(Collectors.toList());\n\n            insertInOrder(group, flowRules);\n        }\n    }\n\n    /**\n     * Inserts routing rules on the given leaf switch, matching on interface\n     * subnets associated with other leaves and forwarding packets to the spines\n     * using ECMP.\n     *\n     * @param leafId the leaf device ID\n     */\n    private void setUpLeafRoutes(DeviceId leafId) {\n        log.info(\"Setting up leaf routes: {}\", leafId);\n\n        // Get the set of subnets (interface IPv6 prefixes) associated with other\n        // leaves but not this one.\n        Set<Ip6Prefix> subnetsToRouteViaSpines = stream(deviceService.getDevices())\n                .map(Device::id)\n                .filter(this::isLeaf)\n                .filter(deviceId -> !deviceId.equals(leafId))\n                .map(this::getInterfaceIpv6Prefixes)\n                .flatMap(Collection::stream)\n                .collect(Collectors.toSet());\n\n        // Get the myStationMac address of all spines.\n        Set<MacAddress> spineMacs = 
stream(deviceService.getDevices())\n                .map(Device::id)\n                .filter(this::isSpine)\n                .map(this::getMyStationMac)\n                .collect(Collectors.toSet());\n\n        // Create an ECMP group to distribute traffic across all spines.\n        final int groupId = DEFAULT_ECMP_GROUP_ID;\n        final GroupDescription ecmpGroup = createNextHopGroup(\n                groupId, spineMacs, leafId);\n\n        // Generate a flow rule for each subnet pointing to the ECMP group.\n        List<FlowRule> flowRules = subnetsToRouteViaSpines.stream()\n                .map(subnet -> createRoutingRule(leafId, subnet, groupId))\n                .collect(Collectors.toList());\n\n        insertInOrder(ecmpGroup, flowRules);\n\n        // Since we're here, we also add a route for SRv6 (Exercise 7), to\n        // forward packets with IPv6 dst the SID of a spine switch, in this case\n        // using a single-member group.\n        stream(deviceService.getDevices())\n                .map(Device::id)\n                .filter(this::isSpine)\n                .forEach(spineId -> {\n                    MacAddress spineMac = getMyStationMac(spineId);\n                    Ip6Address spineSid = getDeviceSid(spineId);\n                    int spineGroupId = macToGroupId(spineMac);\n                    GroupDescription group = createNextHopGroup(\n                            spineGroupId, Collections.singleton(spineMac), leafId);\n                    FlowRule routingRule = createRoutingRule(\n                            leafId, Ip6Prefix.valueOf(spineSid, 128),\n                            spineGroupId);\n                    insertInOrder(group, Collections.singleton(routingRule));\n                });\n    }\n\n    //--------------------------------------------------------------------------\n    // UTILITY METHODS\n    //--------------------------------------------------------------------------\n\n    /**\n     * Returns true if the given device has 
isSpine flag set to true in the\n     * config, false otherwise.\n     *\n     * @param deviceId the device ID\n     * @return true if the device is a spine, false otherwise\n     */\n    private boolean isSpine(DeviceId deviceId) {\n        return getDeviceConfig(deviceId).map(FabricDeviceConfig::isSpine)\n                .orElseThrow(() -> new ItemNotFoundException(\n                        \"Missing isSpine config for \" + deviceId));\n    }\n\n    /**\n     * Returns true if the given device is not configured as spine.\n     *\n     * @param deviceId the device ID\n     * @return true if the device is a leaf, false otherwise\n     */\n    private boolean isLeaf(DeviceId deviceId) {\n        return !isSpine(deviceId);\n    }\n\n    /**\n     * Returns the MAC address configured in the \"myStationMac\" property of the\n     * given device config.\n     *\n     * @param deviceId the device ID\n     * @return MyStation MAC address\n     */\n    private MacAddress getMyStationMac(DeviceId deviceId) {\n        return getDeviceConfig(deviceId)\n                .map(FabricDeviceConfig::myStationMac)\n                .orElseThrow(() -> new ItemNotFoundException(\n                        \"Missing myStationMac config for \" + deviceId));\n    }\n\n    /**\n     * Returns the FabricDeviceConfig config object for the given device.\n     *\n     * @param deviceId the device ID\n     * @return FabricDeviceConfig device config\n     */\n    private Optional<FabricDeviceConfig> getDeviceConfig(DeviceId deviceId) {\n        FabricDeviceConfig config = networkConfigService.getConfig(\n                deviceId, FabricDeviceConfig.class);\n        return Optional.ofNullable(config);\n    }\n\n    /**\n     * Returns the set of interface IPv6 subnets (prefixes) configured for the\n     * given device.\n     *\n     * @param deviceId the device ID\n     * @return set of IPv6 prefixes\n     */\n    private Set<Ip6Prefix> getInterfaceIpv6Prefixes(DeviceId deviceId) {\n        return 
interfaceService.getInterfaces().stream()\n                .filter(iface -> iface.connectPoint().deviceId().equals(deviceId))\n                .map(Interface::ipAddressesList)\n                .flatMap(Collection::stream)\n                .map(InterfaceIpAddress::subnetAddress)\n                .filter(IpPrefix::isIp6)\n                .map(IpPrefix::getIp6Prefix)\n                .collect(Collectors.toSet());\n    }\n\n    /**\n     * Returns a 32-bit group ID derived from the given MAC address.\n     *\n     * @param mac the MAC address\n     * @return an integer\n     */\n    private int macToGroupId(MacAddress mac) {\n        return mac.hashCode() & 0x7fffffff;\n    }\n\n    /**\n     * Inserts the given groups and flow rules in order, groups first, then flow\n     * rules. In P4Runtime, when operating on an indirect table (i.e. with\n     * action selectors), groups must be inserted before table entries.\n     *\n     * @param group     the group\n     * @param flowRules the flow rules depending on the group\n     */\n    private void insertInOrder(GroupDescription group, Collection<FlowRule> flowRules) {\n        try {\n            groupService.addGroup(group);\n            // Wait for groups to be inserted.\n            Thread.sleep(GROUP_INSERT_DELAY_MILLIS);\n            flowRules.forEach(flowRuleService::applyFlowRules);\n        } catch (InterruptedException e) {\n            log.error(\"Interrupted!\", e);\n            Thread.currentThread().interrupt();\n        }\n    }\n\n    /**\n     * Gets the SRv6 SID for the given device.\n     *\n     * @param deviceId the device ID\n     * @return SID for the device\n     */\n    private Ip6Address getDeviceSid(DeviceId deviceId) {\n        return getDeviceConfig(deviceId)\n                .map(FabricDeviceConfig::mySid)\n                .orElseThrow(() -> new ItemNotFoundException(\n                        \"Missing mySid config for \" + deviceId));\n    }\n\n    /**\n     * Sets up IPv6 routing on all devices known 
by ONOS and for which this ONOS\n     * node instance is currently master.\n     */\n    private synchronized void setUpAllDevices() {\n        // Set up host routes\n        stream(deviceService.getAvailableDevices())\n                .map(Device::id)\n                .filter(mastershipService::isLocalMaster)\n                .forEach(deviceId -> {\n                    log.info(\"*** IPV6 ROUTING - Starting initial set up for {}...\", deviceId);\n                    setUpMyStationTable(deviceId);\n                    setUpFabricRoutes(deviceId);\n                    setUpL2NextHopRules(deviceId);\n                    hostService.getConnectedHosts(deviceId)\n                            .forEach(host -> setUpHostRules(deviceId, host));\n                });\n    }\n}\n"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/L2BridgingComponent.java",
    "content": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\npackage org.onosproject.ngsdn.tutorial;\n\nimport org.onlab.packet.MacAddress;\nimport org.onosproject.core.ApplicationId;\nimport org.onosproject.mastership.MastershipService;\nimport org.onosproject.net.ConnectPoint;\nimport org.onosproject.net.DeviceId;\nimport org.onosproject.net.Host;\nimport org.onosproject.net.PortNumber;\nimport org.onosproject.net.config.NetworkConfigService;\nimport org.onosproject.net.device.DeviceEvent;\nimport org.onosproject.net.device.DeviceListener;\nimport org.onosproject.net.device.DeviceService;\nimport org.onosproject.net.flow.FlowRule;\nimport org.onosproject.net.flow.FlowRuleService;\nimport org.onosproject.net.flow.criteria.PiCriterion;\nimport org.onosproject.net.group.GroupDescription;\nimport org.onosproject.net.group.GroupService;\nimport org.onosproject.net.host.HostEvent;\nimport org.onosproject.net.host.HostListener;\nimport org.onosproject.net.host.HostService;\nimport org.onosproject.net.intf.Interface;\nimport org.onosproject.net.intf.InterfaceService;\nimport org.onosproject.net.pi.model.PiActionId;\nimport org.onosproject.net.pi.model.PiActionParamId;\nimport org.onosproject.net.pi.model.PiMatchFieldId;\nimport org.onosproject.net.pi.runtime.PiAction;\nimport org.onosproject.net.pi.runtime.PiActionParam;\nimport org.osgi.service.component.annotations.Activate;\nimport 
org.osgi.service.component.annotations.Component;\nimport org.osgi.service.component.annotations.Deactivate;\nimport org.osgi.service.component.annotations.Reference;\nimport org.osgi.service.component.annotations.ReferenceCardinality;\nimport org.onosproject.ngsdn.tutorial.common.FabricDeviceConfig;\nimport org.onosproject.ngsdn.tutorial.common.Utils;\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport java.util.Set;\nimport java.util.stream.Collectors;\n\nimport static org.onosproject.ngsdn.tutorial.AppConstants.INITIAL_SETUP_DELAY;\n\n/**\n * App component that configures devices to provide L2 bridging capabilities.\n */\n@Component(\n        immediate = true,\n        // *** TODO EXERCISE 4\n        // Enable component (enabled = true)\n        enabled = false\n)\npublic class L2BridgingComponent {\n\n    private final Logger log = LoggerFactory.getLogger(getClass());\n\n    private static final int DEFAULT_BROADCAST_GROUP_ID = 255;\n\n    private final DeviceListener deviceListener = new InternalDeviceListener();\n    private final HostListener hostListener = new InternalHostListener();\n\n    private ApplicationId appId;\n\n    //--------------------------------------------------------------------------\n    // ONOS CORE SERVICE BINDING\n    //\n    // These variables are set by the Karaf runtime environment before calling\n    // the activate() method.\n    //--------------------------------------------------------------------------\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private HostService hostService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private DeviceService deviceService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private InterfaceService interfaceService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private NetworkConfigService configService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private 
FlowRuleService flowRuleService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private GroupService groupService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private MastershipService mastershipService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private MainComponent mainComponent;\n\n    //--------------------------------------------------------------------------\n    // COMPONENT ACTIVATION.\n    //\n    // When loading/unloading the app the Karaf runtime environment will call\n    // activate()/deactivate().\n    //--------------------------------------------------------------------------\n\n    @Activate\n    protected void activate() {\n        appId = mainComponent.getAppId();\n\n        // Register listeners to be informed about device and host events.\n        deviceService.addListener(deviceListener);\n        hostService.addListener(hostListener);\n        // Schedule set up of existing devices. Needed when reloading the app.\n        mainComponent.scheduleTask(this::setUpAllDevices, INITIAL_SETUP_DELAY);\n\n        log.info(\"Started\");\n    }\n\n    @Deactivate\n    protected void deactivate() {\n        deviceService.removeListener(deviceListener);\n        hostService.removeListener(hostListener);\n\n        log.info(\"Stopped\");\n    }\n\n    //--------------------------------------------------------------------------\n    // METHODS TO COMPLETE.\n    //\n    // Complete the implementation wherever you see TODO.\n    //--------------------------------------------------------------------------\n\n    /**\n     * Sets up everything necessary to support L2 bridging on the given device.\n     *\n     * @param deviceId the device to set up\n     */\n    private void setUpDevice(DeviceId deviceId) {\n        if (isSpine(deviceId)) {\n            // Stop here. 
We support bridging only on leaf/tor switches.\n            return;\n        }\n        insertMulticastGroup(deviceId);\n        insertMulticastFlowRules(deviceId);\n        // Uncomment the following line after you have implemented the method:\n        // insertUnmatchedBridgingFlowRule(deviceId);\n    }\n\n    /**\n     * Inserts an ALL group in the ONOS core to replicate packets on all host\n     * facing ports. This group will be used to broadcast all ARP/NDP requests.\n     * <p>\n     * ALL groups in ONOS are equivalent to P4Runtime packet replication engine\n     * (PRE) Multicast groups.\n     *\n     * @param deviceId the device where to install the group\n     */\n    private void insertMulticastGroup(DeviceId deviceId) {\n\n        // Replicate packets where we know hosts are attached.\n        Set<PortNumber> ports = getHostFacingPorts(deviceId);\n\n        if (ports.isEmpty()) {\n            // Stop here.\n            log.warn(\"Device {} has 0 host facing ports\", deviceId);\n            return;\n        }\n\n        log.info(\"Adding L2 multicast group with {} ports on {}...\",\n                ports.size(), deviceId);\n\n        // Forge group object.\n        final GroupDescription multicastGroup = Utils.buildMulticastGroup(\n                appId, deviceId, DEFAULT_BROADCAST_GROUP_ID, ports);\n\n        // Insert.\n        groupService.addGroup(multicastGroup);\n    }\n\n    /**\n     * Insert flow rules matching ethernet destination\n     * broadcast/multicast addresses (e.g. ARP requests, NDP Neighbor\n     * Solicitation, etc.). 
Such packets should be processed by the multicast\n     * group created before.\n     * <p>\n     * This method will be called at component activation for each device\n     * (switch) known by ONOS, and every time a new device-added event is\n     * captured by the InternalDeviceListener defined below.\n     *\n     * @param deviceId device ID where to install the rules\n     */\n    private void insertMulticastFlowRules(DeviceId deviceId) {\n\n        log.info(\"Adding L2 multicast rules on {}...\", deviceId);\n\n        // Modify P4Runtime entity names to match content of P4Info file (look\n        // for the fully qualified name of tables, match fields, and actions).\n        // ---- START SOLUTION ----\n        // Match ARP request - Match exactly FF:FF:FF:FF:FF:FF\n        final PiCriterion macBroadcastCriterion = PiCriterion.builder()\n                .matchTernary(\n                        PiMatchFieldId.of(\"hdr.ethernet.dst_addr\"),\n                        MacAddress.valueOf(\"FF:FF:FF:FF:FF:FF\").toBytes(),\n                        MacAddress.valueOf(\"FF:FF:FF:FF:FF:FF\").toBytes())\n                .build();\n\n        // Match NDP NS - Match ternary 33:33:**:**:**:**\n        final PiCriterion ipv6MulticastCriterion = PiCriterion.builder()\n                .matchTernary(\n                        PiMatchFieldId.of(\"hdr.ethernet.dst_addr\"),\n                        MacAddress.valueOf(\"33:33:00:00:00:00\").toBytes(),\n                        MacAddress.valueOf(\"FF:FF:00:00:00:00\").toBytes())\n                .build();\n\n        // Action: set multicast group id\n        final PiAction setMcastGroupAction = PiAction.builder()\n                .withId(PiActionId.of(\"IngressPipeImpl.set_multicast_group\"))\n                .withParameter(new PiActionParam(\n                        PiActionParamId.of(\"gid\"),\n                        DEFAULT_BROADCAST_GROUP_ID))\n                .build();\n\n        //  Build 2 flow rules.\n        final String 
tableId = \"IngressPipeImpl.l2_ternary_table\";\n        // ---- END SOLUTION ----\n\n        final FlowRule rule1 = Utils.buildFlowRule(\n                deviceId, appId, tableId,\n                macBroadcastCriterion, setMcastGroupAction);\n\n        final FlowRule rule2 = Utils.buildFlowRule(\n                deviceId, appId, tableId,\n                ipv6MulticastCriterion, setMcastGroupAction);\n\n        // Insert rules.\n        flowRuleService.applyFlowRules(rule1, rule2);\n    }\n\n    /**\n     * Insert flow rule that matches all unmatched ethernet traffic. This\n     * will implement the traditional bridging behavior that floods all\n     * unmatched traffic.\n     * <p>\n     * This method will be called at component activation for each device\n     * (switch) known by ONOS, and every time a new device-added event is\n     * captured by the InternalDeviceListener defined below.\n     *\n     * @param deviceId device ID where to install the rules\n     */\n    @SuppressWarnings(\"unused\")\n    private void insertUnmatchedBridgingFlowRule(DeviceId deviceId) {\n\n        log.info(\"Adding L2 unmatched bridging rule on {}...\", deviceId);\n\n        // Modify P4Runtime entity names to match content of P4Info file (look\n        // for the fully qualified name of tables, match fields, and actions).\n        // ---- START SOLUTION ----\n\n        // Match unmatched traffic - Match ternary **:**:**:**:**:**\n        final PiCriterion unmatchedTrafficCriterion = PiCriterion.builder()\n                .matchTernary(\n                        PiMatchFieldId.of(\"hdr.ethernet.dst_addr\"),\n                        MacAddress.valueOf(\"00:00:00:00:00:00\").toBytes(),\n                        MacAddress.valueOf(\"00:00:00:00:00:00\").toBytes())\n                .build();\n\n        // Action: set multicast group id\n        final PiAction setMcastGroupAction = PiAction.builder()\n                .withId(PiActionId.of(\"IngressPipeImpl.set_multicast_group\"))\n                
.withParameter(new PiActionParam(\n                        PiActionParamId.of(\"gid\"),\n                        DEFAULT_BROADCAST_GROUP_ID))\n                .build();\n\n        //  Build flow rule.\n        final String tableId = \"IngressPipeImpl.l2_ternary_table\";\n        // ---- END SOLUTION ----\n\n        final FlowRule rule = Utils.buildFlowRule(\n                deviceId, appId, tableId,\n                unmatchedTrafficCriterion, setMcastGroupAction);\n\n        // Insert rule.\n        flowRuleService.applyFlowRules(rule);\n    }\n\n    /**\n     * Insert flow rules to forward packets to a given host located at the given\n     * device and port.\n     * <p>\n     * This method will be called at component activation for each host known by\n     * ONOS, and every time a new host-added event is captured by the\n     * InternalHostListener defined below.\n     *\n     * @param host     host instance\n     * @param deviceId device where the host is located\n     * @param port     port where the host is attached\n     */\n    private void learnHost(Host host, DeviceId deviceId, PortNumber port) {\n\n        log.info(\"Adding L2 unicast rule on {} for host {} (port {})...\",\n                deviceId, host.id(), port);\n\n        // Modify P4Runtime entity names to match content of P4Info file (look\n        // for the fully qualified name of tables, match fields, and actions).\n        // ---- START SOLUTION ----\n        final String tableId = \"IngressPipeImpl.l2_exact_table\";\n        // Match exactly on the host MAC address.\n        final MacAddress hostMac = host.mac();\n        final PiCriterion hostMacCriterion = PiCriterion.builder()\n                .matchExact(PiMatchFieldId.of(\"hdr.ethernet.dst_addr\"),\n                        hostMac.toBytes())\n                .build();\n\n        // Action: set output port\n        final PiAction l2UnicastAction = PiAction.builder()\n                
.withId(PiActionId.of(\"IngressPipeImpl.set_egress_port\"))\n                .withParameter(new PiActionParam(\n                        PiActionParamId.of(\"port_num\"),\n                        port.toLong()))\n                .build();\n        // ---- END SOLUTION ----\n\n        // Forge flow rule.\n        final FlowRule rule = Utils.buildFlowRule(\n                deviceId, appId, tableId, hostMacCriterion, l2UnicastAction);\n\n        // Insert.\n        flowRuleService.applyFlowRules(rule);\n    }\n\n    //--------------------------------------------------------------------------\n    // EVENT LISTENERS\n    //\n    // Events are processed only if isRelevant() returns true.\n    //--------------------------------------------------------------------------\n\n    /**\n     * Listener of device events.\n     */\n    public class InternalDeviceListener implements DeviceListener {\n\n        @Override\n        public boolean isRelevant(DeviceEvent event) {\n            switch (event.type()) {\n                case DEVICE_ADDED:\n                case DEVICE_AVAILABILITY_CHANGED:\n                    break;\n                default:\n                    // Ignore other events.\n                    return false;\n            }\n            // Process only if this controller instance is the master.\n            final DeviceId deviceId = event.subject().id();\n            return mastershipService.isLocalMaster(deviceId);\n        }\n\n        @Override\n        public void event(DeviceEvent event) {\n            final DeviceId deviceId = event.subject().id();\n            if (deviceService.isAvailable(deviceId)) {\n                // A P4Runtime device is considered available in ONOS when there\n                // is a StreamChannel session open and the pipeline\n                // configuration has been set.\n\n                // Events are processed using a thread pool defined in the\n                // MainComponent.\n                
mainComponent.getExecutorService().execute(() -> {\n                    log.info(\"{} event! deviceId={}\", event.type(), deviceId);\n\n                    setUpDevice(deviceId);\n                });\n            }\n        }\n    }\n\n    /**\n     * Listener of host events.\n     */\n    public class InternalHostListener implements HostListener {\n\n        @Override\n        public boolean isRelevant(HostEvent event) {\n            switch (event.type()) {\n                case HOST_ADDED:\n                    // Host added events will be generated by the\n                    // HostLocationProvider by intercepting ARP/NDP packets.\n                    break;\n                case HOST_REMOVED:\n                case HOST_UPDATED:\n                case HOST_MOVED:\n                default:\n                    // Ignore other events.\n                    // Food for thought: how to support host moved/removed?\n                    return false;\n            }\n            // Process host event only if this controller instance is the master\n            // for the device to which this host is attached.\n            final Host host = event.subject();\n            final DeviceId deviceId = host.location().deviceId();\n            return mastershipService.isLocalMaster(deviceId);\n        }\n\n        @Override\n        public void event(HostEvent event) {\n            final Host host = event.subject();\n            // Device and port where the host is located.\n            final DeviceId deviceId = host.location().deviceId();\n            final PortNumber port = host.location().port();\n\n            mainComponent.getExecutorService().execute(() -> {\n                log.info(\"{} event! 
host={}, deviceId={}, port={}\",\n                        event.type(), host.id(), deviceId, port);\n\n                learnHost(host, deviceId, port);\n            });\n        }\n    }\n\n    //--------------------------------------------------------------------------\n    // UTILITY METHODS\n    //--------------------------------------------------------------------------\n\n    /**\n     * Returns a set of ports for the given device that are used to connect\n     * hosts to the fabric.\n     *\n     * @param deviceId device ID\n     * @return set of host facing ports\n     */\n    private Set<PortNumber> getHostFacingPorts(DeviceId deviceId) {\n        // Get all interfaces configured via netcfg for the given device ID and\n        // return the corresponding device port number. Interface configuration\n        // in the netcfg.json looks like this:\n        // \"device:leaf1/3\": {\n        //   \"interfaces\": [\n        //     {\n        //       \"name\": \"leaf1-3\",\n        //       \"ips\": [\"2001:1:1::ff/64\"]\n        //     }\n        //   ]\n        // }\n        return interfaceService.getInterfaces().stream()\n                .map(Interface::connectPoint)\n                .filter(cp -> cp.deviceId().equals(deviceId))\n                .map(ConnectPoint::port)\n                .collect(Collectors.toSet());\n    }\n\n    /**\n     * Returns true if the given device is defined as a spine in the\n     * netcfg.json.\n     *\n     * @param deviceId device ID\n     * @return true if spine, false otherwise\n     */\n    private boolean isSpine(DeviceId deviceId) {\n        // Example netcfg defining a device as spine:\n        // \"devices\": {\n        //   \"device:spine1\": {\n        //     ...\n        //     \"fabricDeviceConfig\": {\n        //       \"myStationMac\": \"...\",\n        //       \"mySid\": \"...\",\n        //       \"isSpine\": true\n        //     }\n        //   },\n        //   ...\n        final FabricDeviceConfig cfg = 
configService.getConfig(\n                deviceId, FabricDeviceConfig.class);\n        return cfg != null && cfg.isSpine();\n    }\n\n    /**\n     * Sets up L2 bridging on all devices known by ONOS and for which this ONOS\n     * node instance is currently master.\n     * <p>\n     * This method is called at component activation.\n     */\n    private void setUpAllDevices() {\n        deviceService.getAvailableDevices().forEach(device -> {\n            if (mastershipService.isLocalMaster(device.id())) {\n                log.info(\"*** L2 BRIDGING - Starting initial set up for {}...\", device.id());\n                setUpDevice(device.id());\n                // For all hosts connected to this device...\n                hostService.getConnectedHosts(device.id()).forEach(\n                        host -> learnHost(host, host.location().deviceId(),\n                                host.location().port()));\n            }\n        });\n    }\n}\n"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/MainComponent.java",
    "content": "package org.onosproject.ngsdn.tutorial;\n\nimport com.google.common.collect.Lists;\nimport org.onlab.util.SharedScheduledExecutors;\nimport org.onosproject.cfg.ComponentConfigService;\nimport org.onosproject.core.ApplicationId;\nimport org.onosproject.core.CoreService;\nimport org.onosproject.net.Device;\nimport org.onosproject.net.DeviceId;\nimport org.onosproject.net.config.ConfigFactory;\nimport org.onosproject.net.config.NetworkConfigRegistry;\nimport org.onosproject.net.config.basics.SubjectFactories;\nimport org.onosproject.net.device.DeviceService;\nimport org.onosproject.net.flow.FlowRule;\nimport org.onosproject.net.flow.FlowRuleService;\nimport org.onosproject.net.group.Group;\nimport org.onosproject.net.group.GroupService;\nimport org.osgi.service.component.annotations.Activate;\nimport org.osgi.service.component.annotations.Component;\nimport org.osgi.service.component.annotations.Deactivate;\nimport org.osgi.service.component.annotations.Reference;\nimport org.osgi.service.component.annotations.ReferenceCardinality;\nimport org.onosproject.ngsdn.tutorial.common.FabricDeviceConfig;\nimport org.onosproject.ngsdn.tutorial.pipeconf.PipeconfLoader;\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport java.util.Collection;\nimport java.util.concurrent.ExecutorService;\nimport java.util.concurrent.Executors;\nimport java.util.concurrent.TimeUnit;\n\nimport static org.onosproject.ngsdn.tutorial.AppConstants.APP_NAME;\nimport static org.onosproject.ngsdn.tutorial.AppConstants.CLEAN_UP_DELAY;\nimport static org.onosproject.ngsdn.tutorial.AppConstants.DEFAULT_CLEAN_UP_RETRY_TIMES;\nimport static org.onosproject.ngsdn.tutorial.common.Utils.sleep;\n\n/**\n * A component which among other things registers the fabricDeviceConfig to the\n * netcfg subsystem.\n */\n@Component(immediate = true, service = MainComponent.class)\npublic class MainComponent {\n\n    private static final Logger log =\n            
LoggerFactory.getLogger(MainComponent.class.getName());\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private CoreService coreService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    // Force activation of this component after the pipeconf has been registered.\n    @SuppressWarnings(\"unused\")\n    protected PipeconfLoader pipeconfLoader;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    protected NetworkConfigRegistry configRegistry;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private GroupService groupService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private DeviceService deviceService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private FlowRuleService flowRuleService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private ComponentConfigService compCfgService;\n\n    private final ConfigFactory<DeviceId, FabricDeviceConfig> fabricConfigFactory =\n            new ConfigFactory<DeviceId, FabricDeviceConfig>(\n                    SubjectFactories.DEVICE_SUBJECT_FACTORY, FabricDeviceConfig.class, FabricDeviceConfig.CONFIG_KEY) {\n                @Override\n                public FabricDeviceConfig createConfig() {\n                    return new FabricDeviceConfig();\n                }\n            };\n\n    private ApplicationId appId;\n\n    // For the sake of simplicity and to facilitate reading logs, use a\n    // single-thread executor to serialize all configuration tasks.\n    private final ExecutorService executorService = Executors.newSingleThreadExecutor();\n\n    @Activate\n    protected void activate() {\n        appId = coreService.registerApplication(APP_NAME);\n\n        // Wait to remove flows and groups from previous executions.\n        waitPreviousCleanup();\n\n        compCfgService.preSetProperty(\"org.onosproject.net.flow.impl.FlowRuleManager\",\n                                      
\"fallbackFlowPollFrequency\", \"4\", false);\n        compCfgService.preSetProperty(\"org.onosproject.net.group.impl.GroupManager\",\n                                      \"fallbackGroupPollFrequency\", \"3\", false);\n        compCfgService.preSetProperty(\"org.onosproject.provider.host.impl.HostLocationProvider\",\n                                      \"requestIpv6ND\", \"true\", false);\n        compCfgService.preSetProperty(\"org.onosproject.provider.lldp.impl.LldpLinkProvider\",\n                                      \"useBddp\", \"false\", false);\n\n        configRegistry.registerConfigFactory(fabricConfigFactory);\n        log.info(\"Started\");\n    }\n\n    @Deactivate\n    protected void deactivate() {\n        configRegistry.unregisterConfigFactory(fabricConfigFactory);\n\n        cleanUp();\n\n        log.info(\"Stopped\");\n    }\n\n    /**\n     * Returns the application ID.\n     *\n     * @return application ID\n     */\n    ApplicationId getAppId() {\n        return appId;\n    }\n\n    /**\n     * Returns the executor service managed by this component.\n     *\n     * @return executor service\n     */\n    public ExecutorService getExecutorService() {\n        return executorService;\n    }\n\n    /**\n     * Schedules a task for the future using the executor service managed by\n     * this component.\n     *\n     * @param task task runnable\n     * @param delaySeconds delay in seconds\n     */\n    public void scheduleTask(Runnable task, int delaySeconds) {\n        SharedScheduledExecutors.newTimeout(\n                () -> executorService.execute(task),\n                delaySeconds, TimeUnit.SECONDS);\n    }\n\n    /**\n     * Triggers clean up of flows and groups from this app, returns false if no\n     * flows or groups were found, true otherwise.\n     *\n     * @return false if no flows or groups were found, true otherwise\n     */\n    private boolean cleanUp() {\n        Collection<FlowRule> flows = Lists.newArrayList(\n             
   flowRuleService.getFlowEntriesById(appId).iterator());\n\n        Collection<Group> groups = Lists.newArrayList();\n        for (Device device : deviceService.getAvailableDevices()) {\n            groupService.getGroups(device.id(), appId).forEach(groups::add);\n        }\n\n        if (flows.isEmpty() && groups.isEmpty()) {\n            return false;\n        }\n\n        flows.forEach(flowRuleService::removeFlowRules);\n        if (!groups.isEmpty()) {\n            // Wait for flows to be removed in case those depend on groups.\n            sleep(1000);\n            groups.forEach(g -> groupService.removeGroup(\n                    g.deviceId(), g.appCookie(), g.appId()));\n        }\n\n        return true;\n    }\n\n    private void waitPreviousCleanup() {\n        int retry = DEFAULT_CLEAN_UP_RETRY_TIMES;\n        while (retry != 0) {\n\n            if (!cleanUp()) {\n                return;\n            }\n\n            log.info(\"Waiting to remove flows and groups from \" +\n                             \"previous execution of {}...\",\n                     appId.name());\n\n            sleep(CLEAN_UP_DELAY);\n\n            --retry;\n        }\n    }\n}\n"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/NdpReplyComponent.java",
    "content": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\npackage org.onosproject.ngsdn.tutorial;\n\nimport org.onlab.packet.Ip6Address;\nimport org.onlab.packet.IpAddress;\nimport org.onlab.packet.MacAddress;\nimport org.onlab.util.ItemNotFoundException;\nimport org.onosproject.core.ApplicationId;\nimport org.onosproject.mastership.MastershipService;\nimport org.onosproject.net.DeviceId;\nimport org.onosproject.net.config.NetworkConfigService;\nimport org.onosproject.net.device.DeviceEvent;\nimport org.onosproject.net.device.DeviceListener;\nimport org.onosproject.net.device.DeviceService;\nimport org.onosproject.net.flow.FlowRule;\nimport org.onosproject.net.flow.FlowRuleOperations;\nimport org.onosproject.net.flow.FlowRuleService;\nimport org.onosproject.net.flow.criteria.PiCriterion;\nimport org.onosproject.net.host.InterfaceIpAddress;\nimport org.onosproject.net.intf.Interface;\nimport org.onosproject.net.intf.InterfaceService;\nimport org.onosproject.net.pi.model.PiActionId;\nimport org.onosproject.net.pi.model.PiActionParamId;\nimport org.onosproject.net.pi.model.PiMatchFieldId;\nimport org.onosproject.net.pi.runtime.PiAction;\nimport org.onosproject.net.pi.runtime.PiActionParam;\nimport org.onosproject.ngsdn.tutorial.common.FabricDeviceConfig;\nimport org.onosproject.ngsdn.tutorial.common.Utils;\nimport org.osgi.service.component.annotations.Activate;\nimport 
org.osgi.service.component.annotations.Component;\nimport org.osgi.service.component.annotations.Deactivate;\nimport org.osgi.service.component.annotations.Reference;\nimport org.osgi.service.component.annotations.ReferenceCardinality;\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport java.util.Collection;\nimport java.util.stream.Collectors;\n\nimport static org.onosproject.ngsdn.tutorial.AppConstants.INITIAL_SETUP_DELAY;\n\n/**\n * App component that configures devices to generate NDP Neighbor Advertisement\n * packets for all interface IPv6 addresses configured in the netcfg.\n */\n@Component(\n        immediate = true,\n        // *** TODO EXERCISE 5\n        // Enable component (enabled = true)\n        enabled = false\n)\npublic class NdpReplyComponent {\n\n    private static final Logger log =\n            LoggerFactory.getLogger(NdpReplyComponent.class.getName());\n\n    //--------------------------------------------------------------------------\n    // ONOS CORE SERVICE BINDING\n    //\n    // These variables are set by the Karaf runtime environment before calling\n    // the activate() method.\n    //--------------------------------------------------------------------------\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    protected NetworkConfigService configService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    protected FlowRuleService flowRuleService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    protected InterfaceService interfaceService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    protected MastershipService mastershipService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    protected DeviceService deviceService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private MainComponent mainComponent;\n\n    private DeviceListener deviceListener = new InternalDeviceListener();\n    private ApplicationId 
appId;\n\n    //--------------------------------------------------------------------------\n    // COMPONENT ACTIVATION.\n    //\n    // When loading/unloading the app the Karaf runtime environment will call\n    // activate()/deactivate().\n    //--------------------------------------------------------------------------\n\n    @Activate\n    public void activate() {\n        appId = mainComponent.getAppId();\n        // Register listeners to be informed about device events.\n        deviceService.addListener(deviceListener);\n        // Schedule set up of existing devices. Needed when reloading the app.\n        mainComponent.scheduleTask(this::setUpAllDevices, INITIAL_SETUP_DELAY);\n        log.info(\"Started\");\n    }\n\n    @Deactivate\n    public void deactivate() {\n        deviceService.removeListener(deviceListener);\n        log.info(\"Stopped\");\n    }\n\n    //--------------------------------------------------------------------------\n    // METHODS TO COMPLETE.\n    //\n    // Complete the implementation wherever you see TODO.\n    //--------------------------------------------------------------------------\n\n    /**\n     * Set up all devices for which this ONOS instance is currently master.\n     */\n    private void setUpAllDevices() {\n        deviceService.getAvailableDevices().forEach(device -> {\n            if (mastershipService.isLocalMaster(device.id())) {\n                log.info(\"*** NDP REPLY - Starting Initial set up for {}...\", device.id());\n                setUpDevice(device.id());\n            }\n        });\n    }\n\n    /**\n     * Performs setup of the given device by creating a flow rule to generate\n     * NDP NA packets for IPv6 addresses associated to the device interfaces.\n     *\n     * @param deviceId device ID\n     */\n    private void setUpDevice(DeviceId deviceId) {\n\n        // Get this device config from netcfg.json.\n        final FabricDeviceConfig config = configService.getConfig(\n                deviceId, 
FabricDeviceConfig.class);\n        if (config == null) {\n            // Config not available yet\n            throw new ItemNotFoundException(\"Missing fabricDeviceConfig for \" + deviceId);\n        }\n\n        // Get this device myStation mac.\n        final MacAddress deviceMac = config.myStationMac();\n\n        // Get all interfaces currently configured for the device\n        final Collection<Interface> interfaces = interfaceService.getInterfaces()\n                .stream()\n                .filter(iface -> iface.connectPoint().deviceId().equals(deviceId))\n                .collect(Collectors.toSet());\n\n        if (interfaces.isEmpty()) {\n            log.info(\"{} does not have any IPv6 interface configured\",\n                     deviceId);\n            return;\n        }\n\n        // Generate and install flow rules.\n        log.info(\"Adding rules to {} to generate NDP NA for {} IPv6 interfaces...\",\n                 deviceId, interfaces.size());\n        final Collection<FlowRule> flowRules = interfaces.stream()\n                .map(this::getIp6Addresses)\n                .flatMap(Collection::stream)\n                .map(ipv6addr -> buildNdpReplyFlowRule(deviceId, ipv6addr, deviceMac))\n                .collect(Collectors.toSet());\n\n        installRules(flowRules);\n    }\n\n    /**\n     * Build a flow rule for the NDP reply table on the given device, for the\n     * given target IPv6 address and MAC address.\n     *\n     * @param deviceId          device ID where to install the flow rules\n     * @param targetIpv6Address target IPv6 address\n     * @param targetMac         target MAC address\n     * @return flow rule object\n     */\n    private FlowRule buildNdpReplyFlowRule(DeviceId deviceId,\n                                           Ip6Address targetIpv6Address,\n                                           MacAddress targetMac) {\n\n        // *** TODO EXERCISE 5\n        // Modify P4Runtime entity names to match content of P4Info 
file (look\n        // for the fully qualified name of tables, match fields, and actions).\n        // ---- START SOLUTION ----\n        // Build match.\n        final PiCriterion match = PiCriterion.builder()\n                .matchExact(PiMatchFieldId.of(\"hdr.ndp.target_ipv6_addr\"), targetIpv6Address.toOctets())\n                .build();\n        // Build action.\n        final PiActionParam targetMacParam = new PiActionParam(\n                PiActionParamId.of(\"target_mac\"), targetMac.toBytes());\n        final PiAction action = PiAction.builder()\n                .withId(PiActionId.of(\"MODIFY ME\"))\n                .withParameter(targetMacParam)\n                .build();\n        // Table ID.\n        final String tableId = \"MODIFY ME\";\n        // ---- END SOLUTION ----\n\n        // Build flow rule.\n        final FlowRule rule = Utils.buildFlowRule(\n                deviceId, appId, tableId, match, action);\n\n        return rule;\n    }\n\n    //--------------------------------------------------------------------------\n    // EVENT LISTENERS\n    //\n    // Events are processed only if isRelevant() returns true.\n    //--------------------------------------------------------------------------\n\n    /**\n     * Listener of device events.\n     */\n    public class InternalDeviceListener implements DeviceListener {\n\n        @Override\n        public boolean isRelevant(DeviceEvent event) {\n            switch (event.type()) {\n                case DEVICE_ADDED:\n                case DEVICE_AVAILABILITY_CHANGED:\n                    break;\n                default:\n                    // Ignore other events.\n                    return false;\n            }\n            // Process only if this controller instance is the master.\n            final DeviceId deviceId = event.subject().id();\n            return mastershipService.isLocalMaster(deviceId);\n        }\n\n        @Override\n        public void event(DeviceEvent event) {\n            final 
DeviceId deviceId = event.subject().id();\n            if (deviceService.isAvailable(deviceId)) {\n                // A P4Runtime device is considered available in ONOS when there\n                // is a StreamChannel session open and the pipeline\n                // configuration has been set.\n\n                // Events are processed using a thread pool defined in the\n                // MainComponent.\n                mainComponent.getExecutorService().execute(() -> {\n                    log.info(\"{} event! deviceId={}\", event.type(), deviceId);\n                    setUpDevice(deviceId);\n                });\n            }\n        }\n    }\n\n    //--------------------------------------------------------------------------\n    // UTILITY METHODS\n    //--------------------------------------------------------------------------\n\n    /**\n     * Returns all IPv6 addresses associated with the given interface.\n     *\n     * @param iface interface instance\n     * @return collection of IPv6 addresses\n     */\n    private Collection<Ip6Address> getIp6Addresses(Interface iface) {\n        return iface.ipAddressesList()\n                .stream()\n                .map(InterfaceIpAddress::ipAddress)\n                .filter(IpAddress::isIp6)\n                .map(IpAddress::getIp6Address)\n                .collect(Collectors.toSet());\n    }\n\n    /**\n     * Install the given flow rules in batch using the flow rule service.\n     *\n     * @param flowRules flow rules to install\n     */\n    private void installRules(Collection<FlowRule> flowRules) {\n        FlowRuleOperations.Builder ops = FlowRuleOperations.builder();\n        flowRules.forEach(ops::add);\n        flowRuleService.apply(ops.build());\n    }\n}\n"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/Srv6Component.java",
    "content": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\npackage org.onosproject.ngsdn.tutorial;\n\nimport com.google.common.collect.Lists;\nimport org.onlab.packet.Ip6Address;\nimport org.onosproject.core.ApplicationId;\nimport org.onosproject.mastership.MastershipService;\nimport org.onosproject.net.Device;\nimport org.onosproject.net.DeviceId;\nimport org.onosproject.net.config.NetworkConfigService;\nimport org.onosproject.net.device.DeviceEvent;\nimport org.onosproject.net.device.DeviceListener;\nimport org.onosproject.net.device.DeviceService;\nimport org.onosproject.net.flow.FlowRule;\nimport org.onosproject.net.flow.FlowRuleOperations;\nimport org.onosproject.net.flow.FlowRuleService;\nimport org.onosproject.net.flow.criteria.PiCriterion;\nimport org.onosproject.net.pi.model.PiActionId;\nimport org.onosproject.net.pi.model.PiActionParamId;\nimport org.onosproject.net.pi.model.PiMatchFieldId;\nimport org.onosproject.net.pi.model.PiTableId;\nimport org.onosproject.net.pi.runtime.PiAction;\nimport org.onosproject.net.pi.runtime.PiActionParam;\nimport org.onosproject.net.pi.runtime.PiTableAction;\nimport org.osgi.service.component.annotations.Activate;\nimport org.osgi.service.component.annotations.Component;\nimport org.osgi.service.component.annotations.Deactivate;\nimport org.osgi.service.component.annotations.Reference;\nimport 
org.osgi.service.component.annotations.ReferenceCardinality;\nimport org.onosproject.ngsdn.tutorial.common.FabricDeviceConfig;\nimport org.onosproject.ngsdn.tutorial.common.Utils;\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport java.util.List;\nimport java.util.Optional;\n\nimport static com.google.common.collect.Streams.stream;\nimport static org.onosproject.ngsdn.tutorial.AppConstants.INITIAL_SETUP_DELAY;\n\n/**\n * Application which handles SRv6 segment routing.\n */\n@Component(\n        immediate = true,\n        // *** TODO EXERCISE 6\n        // set to true when ready\n        enabled = false,\n        service = Srv6Component.class\n)\npublic class Srv6Component {\n\n    private static final Logger log = LoggerFactory.getLogger(Srv6Component.class);\n\n    //--------------------------------------------------------------------------\n    // ONOS CORE SERVICE BINDING\n    //\n    // These variables are set by the Karaf runtime environment before calling\n    // the activate() method.\n    //--------------------------------------------------------------------------\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private FlowRuleService flowRuleService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private MastershipService mastershipService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private DeviceService deviceService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private NetworkConfigService networkConfigService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private MainComponent mainComponent;\n\n    private final DeviceListener deviceListener = new Srv6Component.InternalDeviceListener();\n\n    private ApplicationId appId;\n\n    //--------------------------------------------------------------------------\n    // COMPONENT ACTIVATION.\n    //\n    // When loading/unloading the app the Karaf runtime environment will call\n  
  // activate()/deactivate().\n    //--------------------------------------------------------------------------\n\n    @Activate\n    protected void activate() {\n        appId = mainComponent.getAppId();\n\n        // Register listeners to be informed about device and host events.\n        deviceService.addListener(deviceListener);\n\n        // Schedule set up for all devices.\n        mainComponent.scheduleTask(this::setUpAllDevices, INITIAL_SETUP_DELAY);\n\n        log.info(\"Started\");\n    }\n\n    @Deactivate\n    protected void deactivate() {\n        deviceService.removeListener(deviceListener);\n\n        log.info(\"Stopped\");\n    }\n\n    //--------------------------------------------------------------------------\n    // METHODS TO COMPLETE.\n    //\n    // Complete the implementation wherever you see TODO.\n    //--------------------------------------------------------------------------\n\n    /**\n     * Populate the My SID table from the network configuration for the\n     * specified device.\n     *\n     * @param deviceId the device Id\n     */\n    private void setUpMySidTable(DeviceId deviceId) {\n\n        Ip6Address mySid = getMySid(deviceId);\n\n        log.info(\"Adding mySid rule on {} (sid {})...\", deviceId, mySid);\n\n        // *** TODO EXERCISE 6\n        // Fill in the table ID for the SRv6 my segment identifier table\n        // ---- START SOLUTION ----\n        String tableId = \"MODIFY ME\";\n        // ---- END SOLUTION ----\n\n        // *** TODO EXERCISE 6\n        // Modify the field and action id to match your P4Info\n        // ---- START SOLUTION ----\n        PiCriterion match = PiCriterion.builder()\n                .matchLpm(\n                        PiMatchFieldId.of(\"MODIFY ME\"),\n                        mySid.toOctets(), 128)\n                .build();\n\n        PiTableAction action = PiAction.builder()\n                .withId(PiActionId.of(\"MODIFY ME\"))\n                .build();\n        // ---- END SOLUTION 
----\n\n        FlowRule myStationRule = Utils.buildFlowRule(\n                deviceId, appId, tableId, match, action);\n\n        flowRuleService.applyFlowRules(myStationRule);\n    }\n\n    /**\n     * Insert a SRv6 transit insert policy that will inject an SRv6 header for\n     * packets destined to destIp.\n     *\n     * @param deviceId     device ID\n     * @param destIp       target IP address for the SRv6 policy\n     * @param prefixLength prefix length for the target IP\n     * @param segmentList  list of SRv6 SIDs that make up the path\n     */\n    public void insertSrv6InsertRule(DeviceId deviceId, Ip6Address destIp, int prefixLength,\n                                     List<Ip6Address> segmentList) {\n        if (segmentList.size() < 2 || segmentList.size() > 3) {\n            throw new RuntimeException(\"List of \" + segmentList.size() + \" segments is not supported\");\n        }\n\n        // *** TODO EXERCISE 6\n        // Fill in the table ID for the SRv6 transit table.\n        // ---- START SOLUTION ----\n        String tableId = \"MODIFY ME\";\n        // ---- END SOLUTION ----\n\n        // *** TODO EXERCISE 6\n        // Modify match field, action id, and action parameters to match your P4Info.\n        // ---- START SOLUTION ----\n        PiCriterion match = PiCriterion.builder()\n                .matchLpm(PiMatchFieldId.of(\"MODIFY ME\"), destIp.toOctets(), prefixLength)\n                .build();\n\n        List<PiActionParam> actionParams = Lists.newArrayList();\n\n        for (int i = 0; i < segmentList.size(); i++) {\n            PiActionParamId paramId = PiActionParamId.of(\"s\" + (i + 1));\n            PiActionParam param = new PiActionParam(paramId, segmentList.get(i).toOctets());\n            actionParams.add(param);\n        }\n\n        PiAction action = PiAction.builder()\n                .withId(PiActionId.of(\"IngressPipeImpl.srv6_t_insert_\" + segmentList.size()))\n                .withParameters(actionParams)\n             
   .build();\n        // ---- END SOLUTION ----\n\n        final FlowRule rule = Utils.buildFlowRule(\n                deviceId, appId, tableId, match, action);\n\n        flowRuleService.applyFlowRules(rule);\n    }\n\n    /**\n     * Remove all SRv6 transit insert policies for the specified device.\n     *\n     * @param deviceId device ID\n     */\n    public void clearSrv6InsertRules(DeviceId deviceId) {\n        // *** TODO EXERCISE 6\n        // Fill in the table ID for the SRv6 transit table\n        // ---- START SOLUTION ----\n        String tableId = \"MODIFY ME\";\n        // ---- END SOLUTION ----\n\n        FlowRuleOperations.Builder ops = FlowRuleOperations.builder();\n        stream(flowRuleService.getFlowEntries(deviceId))\n                .filter(fe -> fe.appId() == appId.id())\n                .filter(fe -> fe.table().equals(PiTableId.of(tableId)))\n                .forEach(ops::remove);\n        flowRuleService.apply(ops.build());\n    }\n\n    // ---------- END METHODS TO COMPLETE ----------------\n\n    //--------------------------------------------------------------------------\n    // EVENT LISTENERS\n    //\n    // Events are processed only if isRelevant() returns true.\n    //--------------------------------------------------------------------------\n\n    /**\n     * Listener of device events.\n     */\n    public class InternalDeviceListener implements DeviceListener {\n\n        @Override\n        public boolean isRelevant(DeviceEvent event) {\n            switch (event.type()) {\n                case DEVICE_ADDED:\n                case DEVICE_AVAILABILITY_CHANGED:\n                    break;\n                default:\n                    // Ignore other events.\n                    return false;\n            }\n            // Process only if this controller instance is the master.\n            final DeviceId deviceId = event.subject().id();\n            return mastershipService.isLocalMaster(deviceId);\n        }\n\n        @Override\n   
     public void event(DeviceEvent event) {\n            final DeviceId deviceId = event.subject().id();\n            if (deviceService.isAvailable(deviceId)) {\n                // A P4Runtime device is considered available in ONOS when there\n                // is a StreamChannel session open and the pipeline\n                // configuration has been set.\n                mainComponent.getExecutorService().execute(() -> {\n                    log.info(\"{} event! deviceId={}\", event.type(), deviceId);\n\n                    setUpMySidTable(event.subject().id());\n                });\n            }\n        }\n    }\n\n\n    //--------------------------------------------------------------------------\n    // UTILITY METHODS\n    //--------------------------------------------------------------------------\n\n    /**\n     * Sets up SRv6 My SID table on all devices known by ONOS and for which this\n     * ONOS node instance is currently master.\n     */\n    private synchronized void setUpAllDevices() {\n        // Set up host routes\n        stream(deviceService.getAvailableDevices())\n                .map(Device::id)\n                .filter(mastershipService::isLocalMaster)\n                .forEach(deviceId -> {\n                    log.info(\"*** SRV6 - Starting initial set up for {}...\", deviceId);\n                    this.setUpMySidTable(deviceId);\n                });\n    }\n\n    /**\n     * Returns the SRv6 config for the given device.\n     *\n     * @param deviceId the device ID\n     * @return the SRv6 device config\n     */\n    private Optional<FabricDeviceConfig> getDeviceConfig(DeviceId deviceId) {\n        FabricDeviceConfig config = networkConfigService.getConfig(deviceId, FabricDeviceConfig.class);\n        return Optional.ofNullable(config);\n    }\n\n    /**\n     * Returns the SRv6 SID for the given device.\n     *\n     * @param deviceId the device ID\n     * @return the SID for the device\n     */\n    private Ip6Address getMySid(DeviceId deviceId) 
{\n        return getDeviceConfig(deviceId)\n                .map(FabricDeviceConfig::mySid)\n                .orElseThrow(() -> new RuntimeException(\n                        \"Missing mySid config for \" + deviceId));\n    }\n}\n"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/cli/Srv6ClearCommand.java",
    "content": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\npackage org.onosproject.ngsdn.tutorial.cli;\n\nimport org.apache.karaf.shell.api.action.Argument;\nimport org.apache.karaf.shell.api.action.Command;\nimport org.apache.karaf.shell.api.action.Completion;\nimport org.apache.karaf.shell.api.action.lifecycle.Service;\nimport org.onosproject.cli.AbstractShellCommand;\nimport org.onosproject.cli.net.DeviceIdCompleter;\nimport org.onosproject.net.Device;\nimport org.onosproject.net.DeviceId;\nimport org.onosproject.net.device.DeviceService;\nimport org.onosproject.ngsdn.tutorial.Srv6Component;\n\n/**\n * SRv6 Transit Clear Command\n */\n@Service\n@Command(scope = \"onos\", name = \"srv6-clear\",\n         description = \"Clears all t_insert rules from the SRv6 Transit table\")\npublic class Srv6ClearCommand extends AbstractShellCommand {\n\n    @Argument(index = 0, name = \"uri\", description = \"Device ID\",\n              required = true, multiValued = false)\n    @Completion(DeviceIdCompleter.class)\n    String uri = null;\n\n    @Override\n    protected void doExecute() {\n        DeviceService deviceService = get(DeviceService.class);\n        Srv6Component app = get(Srv6Component.class);\n\n        Device device = deviceService.getDevice(DeviceId.deviceId(uri));\n        if (device == null) {\n            print(\"Device \\\"%s\\\" is not found\", uri);\n            return;\n  
      }\n        app.clearSrv6InsertRules(device.id());\n    }\n\n}\n"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/cli/Srv6InsertCommand.java",
    "content": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\npackage org.onosproject.ngsdn.tutorial.cli;\n\nimport org.apache.karaf.shell.api.action.Argument;\nimport org.apache.karaf.shell.api.action.Command;\nimport org.apache.karaf.shell.api.action.Completion;\nimport org.apache.karaf.shell.api.action.lifecycle.Service;\nimport org.onlab.packet.Ip6Address;\nimport org.onlab.packet.IpAddress;\nimport org.onosproject.cli.AbstractShellCommand;\nimport org.onosproject.cli.net.DeviceIdCompleter;\nimport org.onosproject.net.Device;\nimport org.onosproject.net.DeviceId;\nimport org.onosproject.net.device.DeviceService;\nimport org.onosproject.ngsdn.tutorial.Srv6Component;\n\nimport java.util.List;\nimport java.util.stream.Collectors;\n\n/**\n * SRv6 Transit Insert Command\n */\n@Service\n@Command(scope = \"onos\", name = \"srv6-insert\",\n         description = \"Insert a t_insert rule into the SRv6 Transit table\")\npublic class Srv6InsertCommand extends AbstractShellCommand {\n\n    @Argument(index = 0, name = \"uri\", description = \"Device ID\",\n              required = true, multiValued = false)\n    @Completion(DeviceIdCompleter.class)\n    String uri = null;\n\n    @Argument(index = 1, name = \"segments\",\n            description = \"SRv6 Segments (space separated list); last segment is target IP address\",\n            required = false, multiValued = true)\n    
@Completion(Srv6SidCompleter.class)\n    List<String> segments = null;\n\n    @Override\n    protected void doExecute() {\n        DeviceService deviceService = get(DeviceService.class);\n        Srv6Component app = get(Srv6Component.class);\n\n        Device device = deviceService.getDevice(DeviceId.deviceId(uri));\n        if (device == null) {\n            print(\"Device \\\"%s\\\" is not found\", uri);\n            return;\n        }\n        // The segments argument is optional (required = false), so guard\n        // against null as well as an empty list.\n        if (segments == null || segments.isEmpty()) {\n            print(\"No segments listed\");\n            return;\n        }\n        List<Ip6Address> sids = segments.stream()\n                .map(Ip6Address::valueOf)\n                .collect(Collectors.toList());\n        Ip6Address destIp = sids.get(sids.size() - 1);\n\n        print(\"Installing path on device %s: %s\",\n                uri, sids.stream()\n                         .map(IpAddress::toString)\n                         .collect(Collectors.joining(\", \")));\n        app.insertSrv6InsertRule(device.id(), destIp, 128, sids);\n\n    }\n\n}\n"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/cli/Srv6SidCompleter.java",
    "content": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\npackage org.onosproject.ngsdn.tutorial.cli;\n\nimport org.apache.karaf.shell.api.action.lifecycle.Service;\nimport org.apache.karaf.shell.api.console.CommandLine;\nimport org.apache.karaf.shell.api.console.Completer;\nimport org.apache.karaf.shell.api.console.Session;\nimport org.apache.karaf.shell.support.completers.StringsCompleter;\nimport org.onosproject.cli.AbstractShellCommand;\nimport org.onosproject.net.config.NetworkConfigService;\nimport org.onosproject.net.device.DeviceService;\nimport org.onosproject.ngsdn.tutorial.common.FabricDeviceConfig;\n\nimport java.util.List;\nimport java.util.Objects;\nimport java.util.SortedSet;\n\nimport static com.google.common.collect.Streams.stream;\n\n/**\n * Completer for SIDs based on device config.\n */\n@Service\npublic class Srv6SidCompleter implements Completer {\n\n    @Override\n    public int complete(Session session, CommandLine commandLine, List<String> candidates) {\n        DeviceService deviceService = AbstractShellCommand.get(DeviceService.class);\n        NetworkConfigService netCfgService = AbstractShellCommand.get(NetworkConfigService.class);\n\n        // Delegate string completer\n        StringsCompleter delegate = new StringsCompleter();\n        SortedSet<String> strings = delegate.getStrings();\n\n        stream(deviceService.getDevices())\n                
.map(d -> netCfgService.getConfig(d.id(), FabricDeviceConfig.class))\n                .filter(Objects::nonNull)\n                .map(FabricDeviceConfig::mySid)\n                .filter(Objects::nonNull)\n                .forEach(sid -> strings.add(sid.toString()));\n\n        // Now let the completer do the work for figuring out what to offer.\n        return delegate.complete(session, commandLine, candidates);\n    }\n}\n"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/cli/package-info.java",
    "content": "package org.onosproject.ngsdn.tutorial.cli;"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/common/FabricDeviceConfig.java",
    "content": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\npackage org.onosproject.ngsdn.tutorial.common;\n\nimport org.onlab.packet.Ip6Address;\nimport org.onlab.packet.MacAddress;\nimport org.onosproject.net.DeviceId;\nimport org.onosproject.net.config.Config;\n\n/**\n * Device configuration object for the IPv6 fabric tutorial application.\n */\npublic class FabricDeviceConfig extends Config<DeviceId> {\n\n    public static final String CONFIG_KEY = \"fabricDeviceConfig\";\n    private static final String MY_STATION_MAC = \"myStationMac\";\n    private static final String MY_SID = \"mySid\";\n    private static final String IS_SPINE = \"isSpine\";\n\n    @Override\n    public boolean isValid() {\n        return hasOnlyFields(MY_STATION_MAC, MY_SID, IS_SPINE) &&\n                myStationMac() != null &&\n                mySid() != null;\n    }\n\n    /**\n     * Gets the MAC address of the switch.\n     *\n     * @return MAC address of the switch. Or null if not configured.\n     */\n    public MacAddress myStationMac() {\n        String mac = get(MY_STATION_MAC, null);\n        return mac != null ? MacAddress.valueOf(mac) : null;\n    }\n\n    /**\n     * Gets the SRv6 segment ID (SID) of the switch.\n     *\n     * @return IP address of the router. 
Or null if not configured.\n     */\n    public Ip6Address mySid() {\n        String ip = get(MY_SID, null);\n        return ip != null ? Ip6Address.valueOf(ip) : null;\n    }\n\n    /**\n     * Checks if the switch is a spine switch.\n     *\n     * @return true if the switch is a spine switch. false if the switch is not\n     * a spine switch, or if the value is not configured.\n     */\n    public boolean isSpine() {\n        String isSpine = get(IS_SPINE, null);\n        return isSpine != null && Boolean.valueOf(isSpine);\n    }\n}\n"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/common/Utils.java",
    "content": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\npackage org.onosproject.ngsdn.tutorial.common;\n\nimport org.onosproject.core.ApplicationId;\nimport org.onosproject.net.DeviceId;\nimport org.onosproject.net.PortNumber;\nimport org.onosproject.net.flow.DefaultFlowRule;\nimport org.onosproject.net.flow.DefaultTrafficSelector;\nimport org.onosproject.net.flow.DefaultTrafficTreatment;\nimport org.onosproject.net.flow.FlowRule;\nimport org.onosproject.net.flow.criteria.PiCriterion;\nimport org.onosproject.net.group.DefaultGroupBucket;\nimport org.onosproject.net.group.DefaultGroupDescription;\nimport org.onosproject.net.group.DefaultGroupKey;\nimport org.onosproject.net.group.GroupBucket;\nimport org.onosproject.net.group.GroupBuckets;\nimport org.onosproject.net.group.GroupDescription;\nimport org.onosproject.net.group.GroupKey;\nimport org.onosproject.net.pi.model.PiActionProfileId;\nimport org.onosproject.net.pi.model.PiTableId;\nimport org.onosproject.net.pi.runtime.PiAction;\nimport org.onosproject.net.pi.runtime.PiGroupKey;\nimport org.onosproject.net.pi.runtime.PiTableAction;\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport java.nio.ByteBuffer;\nimport java.util.Collection;\nimport java.util.List;\nimport java.util.stream.Collectors;\n\nimport static com.google.common.base.Preconditions.checkArgument;\nimport static 
com.google.common.base.Preconditions.checkNotNull;\nimport static org.onosproject.net.group.DefaultGroupBucket.createAllGroupBucket;\nimport static org.onosproject.net.group.DefaultGroupBucket.createCloneGroupBucket;\nimport static org.onosproject.ngsdn.tutorial.AppConstants.DEFAULT_FLOW_RULE_PRIORITY;\n\npublic final class Utils {\n\n    private static final Logger log = LoggerFactory.getLogger(Utils.class);\n\n    public static GroupDescription buildMulticastGroup(\n            ApplicationId appId,\n            DeviceId deviceId,\n            int groupId,\n            Collection<PortNumber> ports) {\n        return buildReplicationGroup(appId, deviceId, groupId, ports, false);\n    }\n\n    public static GroupDescription buildCloneGroup(\n            ApplicationId appId,\n            DeviceId deviceId,\n            int groupId,\n            Collection<PortNumber> ports) {\n        return buildReplicationGroup(appId, deviceId, groupId, ports, true);\n    }\n\n    private static GroupDescription buildReplicationGroup(\n            ApplicationId appId,\n            DeviceId deviceId,\n            int groupId,\n            Collection<PortNumber> ports,\n            boolean isClone) {\n\n        checkNotNull(deviceId);\n        checkNotNull(appId);\n        checkArgument(!ports.isEmpty());\n\n        final GroupKey groupKey = new DefaultGroupKey(\n                ByteBuffer.allocate(4).putInt(groupId).array());\n\n        final List<GroupBucket> bucketList = ports.stream()\n                .map(p -> DefaultTrafficTreatment.builder()\n                        .setOutput(p).build())\n                .map(t -> isClone ? createCloneGroupBucket(t)\n                        : createAllGroupBucket(t))\n                .collect(Collectors.toList());\n\n        return new DefaultGroupDescription(\n                deviceId,\n                isClone ? 
GroupDescription.Type.CLONE : GroupDescription.Type.ALL,\n                new GroupBuckets(bucketList),\n                groupKey, groupId, appId);\n    }\n\n    public static FlowRule buildFlowRule(DeviceId switchId, ApplicationId appId,\n                                         String tableId, PiCriterion piCriterion,\n                                         PiTableAction piAction) {\n        return DefaultFlowRule.builder()\n                .forDevice(switchId)\n                .forTable(PiTableId.of(tableId))\n                .fromApp(appId)\n                .withPriority(DEFAULT_FLOW_RULE_PRIORITY)\n                .makePermanent()\n                .withSelector(DefaultTrafficSelector.builder()\n                                      .matchPi(piCriterion).build())\n                .withTreatment(DefaultTrafficTreatment.builder()\n                                       .piTableAction(piAction).build())\n                .build();\n    }\n\n    public static GroupDescription buildSelectGroup(DeviceId deviceId,\n                                                    String tableId,\n                                                    String actionProfileId,\n                                                    int groupId,\n                                                    Collection<PiAction> actions,\n                                                    ApplicationId appId) {\n\n        final GroupKey groupKey = new PiGroupKey(\n                PiTableId.of(tableId), PiActionProfileId.of(actionProfileId), groupId);\n        final List<GroupBucket> buckets = actions.stream()\n                .map(action -> DefaultTrafficTreatment.builder()\n                        .piTableAction(action).build())\n                .map(DefaultGroupBucket::createSelectGroupBucket)\n                .collect(Collectors.toList());\n        return new DefaultGroupDescription(\n                deviceId,\n                GroupDescription.Type.SELECT,\n                new 
GroupBuckets(buckets),\n                groupKey,\n                groupId,\n                appId);\n    }\n\n    public static void sleep(int millis) {\n        try {\n            Thread.sleep(millis);\n        } catch (InterruptedException e) {\n            log.error(\"Interrupted!\", e);\n            Thread.currentThread().interrupt();\n        }\n    }\n}\n"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/InterpreterImpl.java",
    "content": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\npackage org.onosproject.ngsdn.tutorial.pipeconf;\n\nimport com.google.common.collect.ImmutableList;\nimport com.google.common.collect.ImmutableMap;\nimport org.onlab.packet.DeserializationException;\nimport org.onlab.packet.Ethernet;\nimport org.onlab.util.ImmutableByteSequence;\nimport org.onosproject.net.ConnectPoint;\nimport org.onosproject.net.DeviceId;\nimport org.onosproject.net.Port;\nimport org.onosproject.net.PortNumber;\nimport org.onosproject.net.device.DeviceService;\nimport org.onosproject.net.driver.AbstractHandlerBehaviour;\nimport org.onosproject.net.flow.TrafficTreatment;\nimport org.onosproject.net.flow.criteria.Criterion;\nimport org.onosproject.net.packet.DefaultInboundPacket;\nimport org.onosproject.net.packet.InboundPacket;\nimport org.onosproject.net.packet.OutboundPacket;\nimport org.onosproject.net.pi.model.PiMatchFieldId;\nimport org.onosproject.net.pi.model.PiPacketMetadataId;\nimport org.onosproject.net.pi.model.PiPipelineInterpreter;\nimport org.onosproject.net.pi.model.PiTableId;\nimport org.onosproject.net.pi.runtime.PiAction;\nimport org.onosproject.net.pi.runtime.PiPacketMetadata;\nimport org.onosproject.net.pi.runtime.PiPacketOperation;\n\nimport java.nio.ByteBuffer;\nimport java.util.Collection;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.Optional;\n\nimport static 
java.lang.String.format;\nimport static java.util.stream.Collectors.toList;\nimport static org.onlab.util.ImmutableByteSequence.copyFrom;\nimport static org.onosproject.net.PortNumber.CONTROLLER;\nimport static org.onosproject.net.PortNumber.FLOOD;\nimport static org.onosproject.net.flow.instructions.Instruction.Type.OUTPUT;\nimport static org.onosproject.net.flow.instructions.Instructions.OutputInstruction;\nimport static org.onosproject.net.pi.model.PiPacketOperationType.PACKET_OUT;\nimport static org.onosproject.ngsdn.tutorial.AppConstants.CPU_PORT_ID;\n\n\n/**\n * Interpreter implementation.\n */\npublic class InterpreterImpl extends AbstractHandlerBehaviour\n        implements PiPipelineInterpreter {\n\n\n    // From v1model.p4\n    private static final int V1MODEL_PORT_BITWIDTH = 9;\n\n    // From P4Info.\n    private static final Map<Criterion.Type, String> CRITERION_MAP =\n            new ImmutableMap.Builder<Criterion.Type, String>()\n                    .put(Criterion.Type.IN_PORT, \"standard_metadata.ingress_port\")\n                    .put(Criterion.Type.ETH_DST, \"hdr.ethernet.dst_addr\")\n                    .put(Criterion.Type.ETH_SRC, \"hdr.ethernet.src_addr\")\n                    .put(Criterion.Type.ETH_TYPE, \"hdr.ethernet.ether_type\")\n                    .put(Criterion.Type.IPV6_DST, \"hdr.ipv6.dst_addr\")\n                    .put(Criterion.Type.IP_PROTO, \"local_metadata.ip_proto\")\n                    .put(Criterion.Type.ICMPV4_TYPE, \"local_metadata.icmp_type\")\n                    .put(Criterion.Type.ICMPV6_TYPE, \"local_metadata.icmp_type\")\n                    .build();\n\n    /**\n     * Returns a collection of PI packet operations populated with metadata\n     * specific for this pipeconf and equivalent to the given ONOS\n     * OutboundPacket instance.\n     *\n     * @param packet ONOS OutboundPacket\n     * @return collection of PI packet operations\n     * @throws PiInterpreterException if the packet treatments cannot be\n     
*                                executed by this pipeline\n     */\n    @Override\n    public Collection<PiPacketOperation> mapOutboundPacket(OutboundPacket packet)\n            throws PiInterpreterException {\n        TrafficTreatment treatment = packet.treatment();\n\n        // Packet-out in main.p4 supports only setting the output port,\n        // i.e. we only understand OUTPUT instructions.\n        List<OutputInstruction> outInstructions = treatment\n                .allInstructions()\n                .stream()\n                .filter(i -> i.type().equals(OUTPUT))\n                .map(i -> (OutputInstruction) i)\n                .collect(toList());\n\n        if (treatment.allInstructions().size() != outInstructions.size()) {\n            // There are other instructions that are not of type OUTPUT.\n            throw new PiInterpreterException(\"Treatment not supported: \" + treatment);\n        }\n\n        ImmutableList.Builder<PiPacketOperation> builder = ImmutableList.builder();\n        for (OutputInstruction outInst : outInstructions) {\n            if (outInst.port().isLogical() && !outInst.port().equals(FLOOD)) {\n                throw new PiInterpreterException(format(\n                        \"Packet-out on logical port '%s' not supported\",\n                        outInst.port()));\n            } else if (outInst.port().equals(FLOOD)) {\n                // To emulate flooding, we create a packet-out operation for\n                // each switch port.\n                final DeviceService deviceService = handler().get(DeviceService.class);\n                for (Port port : deviceService.getPorts(packet.sendThrough())) {\n                    builder.add(buildPacketOut(packet.data(), port.number().toLong()));\n                }\n            } else {\n                // Create only one packet-out for the given OUTPUT instruction.\n                builder.add(buildPacketOut(packet.data(), outInst.port().toLong()));\n            }\n        }\n       
 return builder.build();\n    }\n\n    /**\n     * Builds a pipeconf-specific packet-out instance with the given payload and\n     * egress port.\n     *\n     * @param pktData    packet payload\n     * @param portNumber egress port\n     * @return packet-out\n     * @throws PiInterpreterException if packet-out cannot be built\n     */\n    private PiPacketOperation buildPacketOut(ByteBuffer pktData, long portNumber)\n            throws PiInterpreterException {\n\n        // Make sure port number can fit in v1model port metadata bitwidth.\n        final ImmutableByteSequence portBytes;\n        try {\n            portBytes = copyFrom(portNumber).fit(V1MODEL_PORT_BITWIDTH);\n        } catch (ImmutableByteSequence.ByteSequenceTrimException e) {\n            throw new PiInterpreterException(format(\n                    \"Port number %d too big, %s\", portNumber, e.getMessage()));\n        }\n\n        // Create metadata instance for egress port.\n        // *** TODO EXERCISE 4: modify metadata names to match P4 program\n        // ---- START SOLUTION ----\n        final String outPortMetadataName = \"ADD HERE METADATA NAME FOR THE EGRESS PORT\";\n        // ---- END SOLUTION ----\n        final PiPacketMetadata outPortMetadata = PiPacketMetadata.builder()\n                .withId(PiPacketMetadataId.of(outPortMetadataName))\n                .withValue(portBytes)\n                .build();\n\n        // Build packet out.\n        return PiPacketOperation.builder()\n                .withType(PACKET_OUT)\n                .withData(copyFrom(pktData))\n                .withMetadata(outPortMetadata)\n                .build();\n    }\n\n    /**\n     * Returns an ONS InboundPacket equivalent to the given pipeconf-specific\n     * packet-in operation.\n     *\n     * @param packetIn packet operation\n     * @param deviceId ID of the device that originated the packet-in\n     * @return inbound packet\n     * @throws PiInterpreterException if the packet operation cannot be 
mapped\n     *                                to an inbound packet\n     */\n    @Override\n    public InboundPacket mapInboundPacket(PiPacketOperation packetIn, DeviceId deviceId)\n            throws PiInterpreterException {\n\n        // Find the ingress_port metadata.\n        // *** TODO EXERCISE 4: modify metadata names to match P4Info\n        // ---- START SOLUTION ----\n        final String inportMetadataName = \"ADD HERE METADATA NAME FOR THE INGRESS PORT\";\n        // ---- END SOLUTION ----\n        Optional<PiPacketMetadata> inportMetadata = packetIn.metadatas()\n                .stream()\n                .filter(meta -> meta.id().id().equals(inportMetadataName))\n                .findFirst();\n\n        if (!inportMetadata.isPresent()) {\n            throw new PiInterpreterException(format(\n                    \"Missing metadata '%s' in packet-in received from '%s': %s\",\n                    inportMetadataName, deviceId, packetIn));\n        }\n\n        // Build ONOS InboundPacket instance with the given ingress port.\n\n        // 1. Parse packet-in object into Ethernet packet instance.\n        final byte[] payloadBytes = packetIn.data().asArray();\n        final ByteBuffer rawData = ByteBuffer.wrap(payloadBytes);\n        final Ethernet ethPkt;\n        try {\n            ethPkt = Ethernet.deserializer().deserialize(\n                    payloadBytes, 0, packetIn.data().size());\n        } catch (DeserializationException dex) {\n            throw new PiInterpreterException(dex.getMessage());\n        }\n\n        // 2. 
Get ingress port\n        final ImmutableByteSequence portBytes = inportMetadata.get().value();\n        final short portNum = portBytes.asReadOnlyBuffer().getShort();\n        final ConnectPoint receivedFrom = new ConnectPoint(\n                deviceId, PortNumber.portNumber(portNum));\n\n        return new DefaultInboundPacket(receivedFrom, ethPkt, rawData);\n    }\n\n    @Override\n    public Optional<Integer> mapLogicalPortNumber(PortNumber port) {\n        if (CONTROLLER.equals(port)) {\n            return Optional.of(CPU_PORT_ID);\n        } else {\n            return Optional.empty();\n        }\n    }\n\n    @Override\n    public Optional<PiMatchFieldId> mapCriterionType(Criterion.Type type) {\n        if (CRITERION_MAP.containsKey(type)) {\n            return Optional.of(PiMatchFieldId.of(CRITERION_MAP.get(type)));\n        } else {\n            return Optional.empty();\n        }\n    }\n\n    @Override\n    public PiAction mapTreatment(TrafficTreatment treatment, PiTableId piTableId)\n            throws PiInterpreterException {\n        throw new PiInterpreterException(\"Treatment mapping not supported\");\n    }\n\n    @Override\n    public Optional<PiTableId> mapFlowRuleTableId(int flowRuleTableId) {\n        return Optional.empty();\n    }\n}\n"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/PipeconfLoader.java",
    "content": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\npackage org.onosproject.ngsdn.tutorial.pipeconf;\n\nimport org.onosproject.net.behaviour.Pipeliner;\nimport org.onosproject.net.driver.DriverAdminService;\nimport org.onosproject.net.driver.DriverProvider;\nimport org.onosproject.net.pi.model.DefaultPiPipeconf;\nimport org.onosproject.net.pi.model.PiPipeconf;\nimport org.onosproject.net.pi.model.PiPipelineInterpreter;\nimport org.onosproject.net.pi.model.PiPipelineModel;\nimport org.onosproject.net.pi.service.PiPipeconfService;\nimport org.onosproject.p4runtime.model.P4InfoParser;\nimport org.onosproject.p4runtime.model.P4InfoParserException;\nimport org.osgi.service.component.annotations.Activate;\nimport org.osgi.service.component.annotations.Component;\nimport org.osgi.service.component.annotations.Deactivate;\nimport org.osgi.service.component.annotations.Reference;\nimport org.osgi.service.component.annotations.ReferenceCardinality;\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport java.net.URL;\nimport java.util.List;\nimport java.util.stream.Collectors;\n\nimport static org.onosproject.net.pi.model.PiPipeconf.ExtensionType.BMV2_JSON;\nimport static org.onosproject.net.pi.model.PiPipeconf.ExtensionType.P4_INFO_TEXT;\nimport static org.onosproject.ngsdn.tutorial.AppConstants.PIPECONF_ID;\n\n/**\n * Component that builds and register the pipeconf at 
app activation.\n */\n@Component(immediate = true, service = PipeconfLoader.class)\npublic final class PipeconfLoader {\n\n    private final Logger log = LoggerFactory.getLogger(getClass());\n\n    private static final String P4INFO_PATH = \"/p4info.txt\";\n    private static final String BMV2_JSON_PATH = \"/bmv2.json\";\n\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private PiPipeconfService pipeconfService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private DriverAdminService driverAdminService;\n\n    @Activate\n    public void activate() {\n        // Registers the pipeconf at component activation.\n        if (pipeconfService.getPipeconf(PIPECONF_ID).isPresent()) {\n            // Remove first if already registered, to support reloading of the\n            // pipeconf during the tutorial.\n            pipeconfService.unregister(PIPECONF_ID);\n        }\n        removePipeconfDrivers();\n        try {\n            pipeconfService.register(buildPipeconf());\n        } catch (P4InfoParserException e) {\n            log.error(\"Unable to register \" + PIPECONF_ID, e);\n        }\n    }\n\n    @Deactivate\n    public void deactivate() {\n        // Do nothing.\n    }\n\n    private PiPipeconf buildPipeconf() throws P4InfoParserException {\n\n        final URL p4InfoUrl = PipeconfLoader.class.getResource(P4INFO_PATH);\n        final URL bmv2JsonUrlUrl = PipeconfLoader.class.getResource(BMV2_JSON_PATH);\n        final PiPipelineModel pipelineModel = P4InfoParser.parse(p4InfoUrl);\n\n        return DefaultPiPipeconf.builder()\n                .withId(PIPECONF_ID)\n                .withPipelineModel(pipelineModel)\n                .addBehaviour(PiPipelineInterpreter.class, InterpreterImpl.class)\n                .addBehaviour(Pipeliner.class, PipelinerImpl.class)\n                .addExtension(P4_INFO_TEXT, p4InfoUrl)\n                .addExtension(BMV2_JSON, bmv2JsonUrlUrl)\n                .build();\n    }\n\n    
private void removePipeconfDrivers() {\n        List<DriverProvider> driverProvidersToRemove = driverAdminService\n                .getProviders().stream()\n                .filter(p -> p.getDrivers().stream()\n                        .anyMatch(d -> d.name().endsWith(PIPECONF_ID.id())))\n                .collect(Collectors.toList());\n\n        if (driverProvidersToRemove.isEmpty()) {\n            return;\n        }\n\n        log.info(\"Found {} outdated drivers for pipeconf '{}', removing...\",\n                 driverProvidersToRemove.size(), PIPECONF_ID);\n\n        driverProvidersToRemove.forEach(driverAdminService::unregisterProvider);\n    }\n}\n"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/PipelinerImpl.java",
    "content": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\npackage org.onosproject.ngsdn.tutorial.pipeconf;\n\nimport org.onosproject.net.DeviceId;\nimport org.onosproject.net.PortNumber;\nimport org.onosproject.net.behaviour.NextGroup;\nimport org.onosproject.net.behaviour.Pipeliner;\nimport org.onosproject.net.behaviour.PipelinerContext;\nimport org.onosproject.net.driver.AbstractHandlerBehaviour;\nimport org.onosproject.net.flow.DefaultFlowRule;\nimport org.onosproject.net.flow.DefaultTrafficTreatment;\nimport org.onosproject.net.flow.FlowRule;\nimport org.onosproject.net.flow.FlowRuleService;\nimport org.onosproject.net.flow.instructions.Instructions;\nimport org.onosproject.net.flowobjective.FilteringObjective;\nimport org.onosproject.net.flowobjective.ForwardingObjective;\nimport org.onosproject.net.flowobjective.NextObjective;\nimport org.onosproject.net.flowobjective.ObjectiveError;\nimport org.onosproject.net.group.GroupDescription;\nimport org.onosproject.net.group.GroupService;\nimport org.onosproject.net.pi.model.PiActionId;\nimport org.onosproject.net.pi.model.PiTableId;\nimport org.onosproject.net.pi.runtime.PiAction;\nimport org.onosproject.ngsdn.tutorial.common.Utils;\nimport org.slf4j.Logger;\n\nimport java.util.Collections;\nimport java.util.List;\n\nimport static org.onosproject.net.flow.instructions.Instruction.Type.OUTPUT;\nimport static 
org.onosproject.ngsdn.tutorial.AppConstants.CPU_CLONE_SESSION_ID;\nimport static org.slf4j.LoggerFactory.getLogger;\n\n/**\n * Pipeliner implementation that maps all forwarding objectives to the ACL\n * table. All other types of objectives are not supported.\n */\npublic class PipelinerImpl extends AbstractHandlerBehaviour implements Pipeliner {\n\n    // From the P4Info file\n    private static final String ACL_TABLE = \"IngressPipeImpl.acl_table\";\n    private static final String CLONE_TO_CPU = \"IngressPipeImpl.clone_to_cpu\";\n\n    private final Logger log = getLogger(getClass());\n\n    private FlowRuleService flowRuleService;\n    private GroupService groupService;\n    private DeviceId deviceId;\n\n\n    @Override\n    public void init(DeviceId deviceId, PipelinerContext context) {\n        this.deviceId = deviceId;\n        this.flowRuleService = context.directory().get(FlowRuleService.class);\n        this.groupService = context.directory().get(GroupService.class);\n    }\n\n    @Override\n    public void filter(FilteringObjective obj) {\n        obj.context().ifPresent(c -> c.onError(obj, ObjectiveError.UNSUPPORTED));\n    }\n\n    @Override\n    public void forward(ForwardingObjective obj) {\n        if (obj.treatment() == null) {\n            // Without a treatment we cannot inspect the instructions below;\n            // report the error and stop here.\n            obj.context().ifPresent(c -> c.onError(obj, ObjectiveError.UNSUPPORTED));\n            return;\n        }\n\n        // Whether this objective specifies an OUTPUT:CONTROLLER instruction.\n        final boolean hasCloneToCpuAction = obj.treatment()\n                .allInstructions().stream()\n                .filter(i -> i.type().equals(OUTPUT))\n                .map(i -> (Instructions.OutputInstruction) i)\n                .anyMatch(i -> i.port().equals(PortNumber.CONTROLLER));\n\n        if (!hasCloneToCpuAction) {\n            // We support only objectives for clone to CPU behaviours (e.g. for\n            // host and link discovery). Stop here, otherwise we would install\n            // a clone_to_cpu rule for an unsupported objective.\n            obj.context().ifPresent(c -> c.onError(obj, ObjectiveError.UNSUPPORTED));\n            return;\n        }\n\n        // Create an equivalent FlowRule with same selector and clone_to_cpu action.\n        final PiAction cloneToCpuAction = PiAction.builder()\n                .withId(PiActionId.of(CLONE_TO_CPU))\n                .build();\n\n        final FlowRule.Builder ruleBuilder = DefaultFlowRule.builder()\n                .forTable(PiTableId.of(ACL_TABLE))\n                .forDevice(deviceId)\n                .withSelector(obj.selector())\n                .fromApp(obj.appId())\n                .withPriority(obj.priority())\n                .withTreatment(DefaultTrafficTreatment.builder()\n                                       .piTableAction(cloneToCpuAction).build());\n\n        if (obj.permanent()) {\n            ruleBuilder.makePermanent();\n        } else {\n            ruleBuilder.makeTemporary(obj.timeout());\n        }\n\n        final GroupDescription cloneGroup = Utils.buildCloneGroup(\n                obj.appId(),\n                deviceId,\n                CPU_CLONE_SESSION_ID,\n                // Ports where to clone the packet.\n                // Just controller in this case.\n                Collections.singleton(PortNumber.CONTROLLER));\n\n        switch (obj.op()) {\n            case ADD:\n                flowRuleService.applyFlowRules(ruleBuilder.build());\n                groupService.addGroup(cloneGroup);\n                break;\n            case REMOVE:\n                flowRuleService.removeFlowRules(ruleBuilder.build());\n                // Do not remove the clone group as other flow rules might be\n                // pointing to it.\n                break;\n            default:\n                log.warn(\"Unknown operation {}\", obj.op());\n        }\n\n        obj.context().ifPresent(c -> c.onSuccess(obj));\n    }\n\n    @Override\n    public void next(NextObjective obj) {\n        obj.context().ifPresent(c -> c.onError(obj, ObjectiveError.UNSUPPORTED));\n    }\n\n    @Override\n    public List<String> getNextMappings(NextGroup nextGroup) {\n        // We do not use nextObjectives or groups.\n        return Collections.emptyList();\n    }\n}\n"
  },
  {
    "path": "docker-compose.yml",
    "content": "version: \"3\"\n\nservices:\n  mininet:\n    image: opennetworking/ngsdn-tutorial:stratum_bmv2\n    hostname: mininet\n    container_name: mininet\n    privileged: true\n    tty: true\n    stdin_open: true\n    restart: always\n    volumes:\n      - ./tmp:/tmp\n      - ./mininet:/mininet\n    ports:\n      - \"50001:50001\"\n      - \"50002:50002\"\n      - \"50003:50003\"\n      - \"50004:50004\"\n    # NGSDN_TOPO_PY is a Python-based Mininet script defining the topology. Its\n    # value is passed to docker-compose as an environment variable, defined in\n    # the Makefile.\n    entrypoint: \"/mininet/${NGSDN_TOPO_PY}\"\n  onos:\n    image: onosproject/onos:2.2.2\n    hostname: onos\n    container_name: onos\n    ports:\n      - \"8181:8181\" # HTTP\n      - \"8101:8101\" # SSH (CLI)\n    volumes:\n      - ./tmp/onos:/root/onos/apache-karaf-4.2.8/data/tmp\n    environment:\n      - ONOS_APPS=gui2,drivers.bmv2,lldpprovider,hostprovider\n    links:\n      - mininet\n"
  },
  {
    "path": "mininet/flowrule-gtp.json",
    "content": "{\n  \"flows\": [\n    {\n      \"deviceId\": \"device:leaf1\",\n      \"tableId\": \"FabricIngress.spgw_ingress.dl_sess_lookup\",\n      \"priority\": 10,\n      \"timeout\": 0,\n      \"isPermanent\": true,\n      \"selector\": {\n        \"criteria\": [\n          {\n            \"type\": \"IPV4_DST\",\n            \"ip\": \"<INSERT UE IP ADDRESS HERE>/32\"\n          }\n        ]\n      },\n      \"treatment\": {\n        \"instructions\": [\n          {\n            \"type\": \"PROTOCOL_INDEPENDENT\",\n            \"subtype\": \"ACTION\",\n            \"actionId\": \"FabricIngress.spgw_ingress.set_dl_sess_info\",\n            \"actionParams\": {\n              \"teid\": \"BEEF\",\n              \"s1u_enb_addr\": \"0a006401\",\n              \"s1u_sgw_addr\": \"0a0064fe\"\n            }\n          }\n        ]\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "mininet/host-cmd",
    "content": "#!/bin/bash\n\n# Attach to a Mininet host and run a command\n\nif [ -z $1 ]; then\n  echo \"usage: $0 host cmd [args...]\"\n  exit 1\nelse\n  host=$1\nfi\n\npid=`ps ax | grep \"mininet:$host$\" | grep bash | grep -v mnexec | awk '{print $1};'`\n\nif echo $pid | grep -q ' '; then\n  echo \"Error: found multiple mininet:$host processes\"\n  exit 2\nfi\n\nif [ \"$pid\" == \"\" ]; then\n  echo \"Could not find Mininet host $host\"\n  exit 3\nfi\n\nif [ -z $2 ]; then\n  cmd='bash'\nelse\n  shift\n  cmd=$*\nfi\n\ncgroup=/sys/fs/cgroup/cpu/$host\nif [ -d \"$cgroup\" ]; then\n  cg=\"-g $host\"\nfi\n\n# Check whether host should be running in a chroot dir\nrootdir=\"/var/run/mn/$host/root\"\nif [ -d $rootdir -a -x $rootdir/bin/bash ]; then\n    cmd=\"'cd `pwd`; exec $cmd'\"\n    cmd=\"chroot $rootdir /bin/bash -c $cmd\"\nfi\n\nmnexec $cg -a $pid hostname $host\ncmd=\"exec mnexec $cg -a $pid $cmd\"\neval $cmd"
  },
  {
    "path": "mininet/netcfg-gtp.json",
    "content": "{\n  \"devices\": {\n    \"device:leaf1\": {\n      \"basic\": {\n        \"managementAddress\": \"grpc://mininet:50001?device_id=1\",\n        \"driver\": \"stratum-bmv2\",\n        \"pipeconf\": \"org.onosproject.pipelines.fabric\",\n        \"locType\": \"grid\",\n        \"gridX\": 200,\n        \"gridY\": 600\n      },\n      \"segmentrouting\": {\n        \"name\": \"leaf1\",\n        \"ipv4NodeSid\": 101,\n        \"ipv4Loopback\": \"192.168.1.1\",\n        \"routerMac\": \"00:AA:00:00:00:01\",\n        \"isEdgeRouter\": true,\n        \"adjacencySids\": []\n      }\n    },\n    \"device:leaf2\": {\n      \"basic\": {\n        \"managementAddress\": \"grpc://mininet:50002?device_id=1\",\n        \"driver\": \"stratum-bmv2\",\n        \"pipeconf\": \"org.onosproject.pipelines.fabric\",\n        \"locType\": \"grid\",\n        \"gridX\": 800,\n        \"gridY\": 600\n      },\n      \"segmentrouting\": {\n        \"name\": \"leaf2\",\n        \"ipv4NodeSid\": 102,\n        \"ipv4Loopback\": \"192.168.1.2\",\n        \"routerMac\": \"00:AA:00:00:00:02\",\n        \"isEdgeRouter\": true,\n        \"adjacencySids\": []\n      }\n    },\n    \"device:spine1\": {\n      \"basic\": {\n        \"managementAddress\": \"grpc://mininet:50003?device_id=1\",\n        \"driver\": \"stratum-bmv2\",\n        \"pipeconf\": \"org.onosproject.pipelines.fabric\",\n        \"locType\": \"grid\",\n        \"gridX\": 400,\n        \"gridY\": 400\n      },\n      \"segmentrouting\": {\n        \"name\": \"spine1\",\n        \"ipv4NodeSid\": 201,\n        \"ipv4Loopback\": \"192.168.2.1\",\n        \"routerMac\": \"00:BB:00:00:00:01\",\n        \"isEdgeRouter\": false,\n        \"adjacencySids\": []\n      }\n    },\n    \"device:spine2\": {\n      \"basic\": {\n        \"managementAddress\": \"grpc://mininet:50004?device_id=1\",\n        \"driver\": \"stratum-bmv2\",\n        \"pipeconf\": \"org.onosproject.pipelines.fabric\",\n        \"locType\": \"grid\",\n        
\"gridX\": 600,\n        \"gridY\": 400\n      },\n      \"segmentrouting\": {\n        \"name\": \"spine2\",\n        \"ipv4NodeSid\": 202,\n        \"ipv4Loopback\": \"192.168.2.2\",\n        \"routerMac\": \"00:BB:00:00:00:02\",\n        \"isEdgeRouter\": false,\n        \"adjacencySids\": []\n      }\n    }\n  },\n  \"ports\": {\n    \"device:leaf1/3\": {\n      \"interfaces\": [\n        {\n          \"name\": \"leaf1-3\",\n          \"ips\": [\n            \"10.0.100.254/24\"\n          ],\n          \"vlan-untagged\": 100\n        }\n      ]\n    },\n    \"device:leaf2/3\": {\n      \"interfaces\": [\n        {\n          \"name\": \"leaf2-3\",\n          \"ips\": [\n            \"10.0.200.254/24\"\n          ],\n          \"vlan-untagged\": 200\n        }\n      ]\n    }\n  },\n  \"hosts\": {\n    \"00:00:00:00:00:10/None\": {\n      \"basic\": {\n        \"name\": \"enodeb\",\n        \"gridX\": 100,\n        \"gridY\": 700,\n        \"locType\": \"grid\",\n        \"ips\": [\n          \"10.0.100.1\"\n        ],\n        \"locations\": [\n          \"device:leaf1/3\"\n        ]\n      }\n    },\n    \"00:00:00:00:00:20/None\": {\n      \"basic\": {\n        \"name\": \"pdn\",\n        \"gridX\": 850,\n        \"gridY\": 700,\n        \"locType\": \"grid\",\n        \"ips\": [\n          \"10.0.200.1\"\n        ],\n        \"locations\": [\n          \"device:leaf2/3\"\n        ]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "mininet/netcfg-sr.json",
    "content": "{\n  \"devices\": {\n    \"device:leaf1\": {\n      \"basic\": {\n        \"managementAddress\": \"grpc://mininet:50001?device_id=1\",\n        \"driver\": \"stratum-bmv2\",\n        \"pipeconf\": \"org.onosproject.pipelines.fabric\",\n        \"locType\": \"grid\",\n        \"gridX\": 200,\n        \"gridY\": 600\n      },\n      \"segmentrouting\": {\n        \"name\": \"leaf1\",\n        \"ipv4NodeSid\": 101,\n        \"ipv4Loopback\": \"192.168.1.1\",\n        \"routerMac\": \"00:AA:00:00:00:01\",\n        \"isEdgeRouter\": true,\n        \"adjacencySids\": []\n      }\n    },\n    \"device:leaf2\": {\n      \"basic\": {\n        \"managementAddress\": \"grpc://mininet:50002?device_id=1\",\n        \"driver\": \"stratum-bmv2\",\n        \"pipeconf\": \"org.onosproject.pipelines.fabric\",\n        \"locType\": \"grid\",\n        \"gridX\": 800,\n        \"gridY\": 600\n      },\n      \"segmentrouting\": {\n        \"name\": \"leaf2\",\n        \"ipv4NodeSid\": 102,\n        \"ipv4Loopback\": \"192.168.1.2\",\n        \"routerMac\": \"00:AA:00:00:00:02\",\n        \"isEdgeRouter\": true,\n        \"adjacencySids\": []\n      }\n    },\n    \"device:spine1\": {\n      \"basic\": {\n        \"managementAddress\": \"grpc://mininet:50003?device_id=1\",\n        \"driver\": \"stratum-bmv2\",\n        \"pipeconf\": \"org.onosproject.pipelines.fabric\",\n        \"locType\": \"grid\",\n        \"gridX\": 400,\n        \"gridY\": 400\n      },\n      \"segmentrouting\": {\n        \"name\": \"spine1\",\n        \"ipv4NodeSid\": 201,\n        \"ipv4Loopback\": \"192.168.2.1\",\n        \"routerMac\": \"00:BB:00:00:00:01\",\n        \"isEdgeRouter\": false,\n        \"adjacencySids\": []\n      }\n    },\n    \"device:spine2\": {\n      \"basic\": {\n        \"managementAddress\": \"grpc://mininet:50004?device_id=1\",\n        \"driver\": \"stratum-bmv2\",\n        \"pipeconf\": \"org.onosproject.pipelines.fabric\",\n        \"locType\": \"grid\",\n        
\"gridX\": 600,\n        \"gridY\": 400\n      },\n      \"segmentrouting\": {\n        \"name\": \"spine2\",\n        \"ipv4NodeSid\": 202,\n        \"ipv4Loopback\": \"192.168.2.2\",\n        \"routerMac\": \"00:BB:00:00:00:02\",\n        \"isEdgeRouter\": false,\n        \"adjacencySids\": []\n      }\n    }\n  },\n  \"ports\": {\n    \"device:leaf1/3\": {\n      \"interfaces\": [\n        {\n          \"name\": \"leaf1-3\",\n          \"ips\": [\n            \"172.16.1.254/24\"\n          ],\n          \"vlan-untagged\": 100\n        }\n      ]\n    },\n    \"device:leaf1/4\": {\n      \"interfaces\": [\n        {\n          \"name\": \"leaf1-4\",\n          \"ips\": [\n            \"172.16.1.254/24\"\n          ],\n          \"vlan-untagged\": 100\n        }\n      ]\n    },\n    \"device:leaf1/5\": {\n      \"interfaces\": [\n        {\n          \"name\": \"leaf1-5\",\n          \"ips\": [\n            \"172.16.1.254/24\"\n          ],\n          \"vlan-tagged\": [\n            100\n          ]\n        }\n      ]\n    },\n    \"device:leaf1/6\": {\n      \"interfaces\": [\n        {\n          \"name\": \"leaf1-6\",\n          \"ips\": [\n            \"172.16.2.254/24\"\n          ],\n          \"vlan-tagged\": [\n            200\n          ]\n        }\n      ]\n    }\n  },\n  \"hosts\": {\n    \"00:00:00:00:00:1A/None\": {\n      \"basic\": {\n        \"name\": \"h1a\",\n        \"locType\": \"grid\",\n        \"gridX\": 100,\n        \"gridY\": 700\n      }\n    },\n    \"00:00:00:00:00:1B/None\": {\n      \"basic\": {\n        \"name\": \"h1b\",\n        \"locType\": \"grid\",\n        \"gridX\": 100,\n        \"gridY\": 800\n      }\n    },\n    \"00:00:00:00:00:1C/100\": {\n      \"basic\": {\n        \"name\": \"h1c\",\n        \"locType\": \"grid\",\n        \"gridX\": 250,\n        \"gridY\": 800\n      }\n    },\n    \"00:00:00:00:00:20/200\": {\n      \"basic\": {\n        \"name\": \"h2\",\n        \"locType\": \"grid\",\n        \"gridX\": 
400,\n        \"gridY\": 700\n      }\n    },\n    \"00:00:00:00:00:30/300\": {\n      \"basic\": {\n        \"name\": \"h3\",\n        \"locType\": \"grid\",\n        \"gridX\": 750,\n        \"gridY\": 700\n      }\n    },\n    \"00:00:00:00:00:40/None\": {\n      \"basic\": {\n        \"name\": \"h4\",\n        \"locType\": \"grid\",\n        \"gridX\": 850,\n        \"gridY\": 700\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "mininet/netcfg.json",
    "content": "{\n  \"devices\": {\n    \"device:leaf1\": {\n      \"basic\": {\n        \"managementAddress\": \"grpc://mininet:50001?device_id=1\",\n        \"driver\": \"stratum-bmv2\",\n        \"pipeconf\": \"org.onosproject.ngsdn-tutorial\",\n        \"locType\": \"grid\",\n        \"gridX\": 200,\n        \"gridY\": 600\n      },\n      \"fabricDeviceConfig\": {\n        \"myStationMac\": \"00:aa:00:00:00:01\",\n        \"mySid\": \"3:101:2::\",\n        \"isSpine\": false\n      }\n    },\n    \"device:leaf2\": {\n      \"basic\": {\n        \"managementAddress\": \"grpc://mininet:50002?device_id=1\",\n        \"driver\": \"stratum-bmv2\",\n        \"pipeconf\": \"org.onosproject.ngsdn-tutorial\",\n        \"locType\": \"grid\",\n        \"gridX\": 800,\n        \"gridY\": 600\n      },\n      \"fabricDeviceConfig\": {\n        \"myStationMac\": \"00:aa:00:00:00:02\",\n        \"mySid\": \"3:102:2::\",\n        \"isSpine\": false\n      }\n    },\n    \"device:spine1\": {\n      \"basic\": {\n        \"managementAddress\": \"grpc://mininet:50003?device_id=1\",\n        \"driver\": \"stratum-bmv2\",\n        \"pipeconf\": \"org.onosproject.ngsdn-tutorial\",\n        \"locType\": \"grid\",\n        \"gridX\": 400,\n        \"gridY\": 400\n      },\n      \"fabricDeviceConfig\": {\n        \"myStationMac\": \"00:bb:00:00:00:01\",\n        \"mySid\": \"3:201:2::\",\n        \"isSpine\": true\n      }\n    },\n    \"device:spine2\": {\n      \"basic\": {\n        \"managementAddress\": \"grpc://mininet:50004?device_id=1\",\n        \"driver\": \"stratum-bmv2\",\n        \"pipeconf\": \"org.onosproject.ngsdn-tutorial\",\n        \"locType\": \"grid\",\n        \"gridX\": 600,\n        \"gridY\": 400\n      },\n      \"fabricDeviceConfig\": {\n        \"myStationMac\": \"00:bb:00:00:00:02\",\n        \"mySid\": \"3:202:2::\",\n        \"isSpine\": true\n      }\n    }\n  },\n  \"ports\": {\n    \"device:leaf1/3\": {\n      \"interfaces\": [\n        {\n          
\"name\": \"leaf1-3\",\n          \"ips\": [\"2001:1:1::ff/64\"]\n        }\n      ]\n    },\n    \"device:leaf1/4\": {\n      \"interfaces\": [\n        {\n          \"name\": \"leaf1-4\",\n          \"ips\": [\"2001:1:1::ff/64\"]\n        }\n      ]\n    },\n    \"device:leaf1/5\": {\n      \"interfaces\": [\n        {\n          \"name\": \"leaf1-5\",\n          \"ips\": [\"2001:1:1::ff/64\"]\n        }\n      ]\n    },\n    \"device:leaf1/6\": {\n      \"interfaces\": [\n        {\n          \"name\": \"leaf1-6\",\n          \"ips\": [\"2001:1:2::ff/64\"]\n        }\n      ]\n    },\n    \"device:leaf2/3\": {\n      \"interfaces\": [\n        {\n          \"name\": \"leaf2-3\",\n          \"ips\": [\"2001:2:3::ff/64\"]\n        }\n      ]\n    },\n    \"device:leaf2/4\": {\n      \"interfaces\": [\n        {\n          \"name\": \"leaf2-4\",\n          \"ips\": [\"2001:2:4::ff/64\"]\n        }\n      ]\n    }\n  },\n  \"hosts\": {\n    \"00:00:00:00:00:1A/None\": {\n      \"basic\": {\n        \"name\": \"h1a\",\n        \"locType\": \"grid\",\n        \"gridX\": 100,\n        \"gridY\": 700\n      }\n    },\n    \"00:00:00:00:00:1B/None\": {\n      \"basic\": {\n        \"name\": \"h1b\",\n        \"locType\": \"grid\",\n        \"gridX\": 100,\n        \"gridY\": 800\n      }\n    },\n    \"00:00:00:00:00:1C/None\": {\n      \"basic\": {\n        \"name\": \"h1c\",\n        \"locType\": \"grid\",\n        \"gridX\": 250,\n        \"gridY\": 800\n      }\n    },\n    \"00:00:00:00:00:20/None\": {\n      \"basic\": {\n        \"name\": \"h2\",\n        \"locType\": \"grid\",\n        \"gridX\": 400,\n        \"gridY\": 700\n      }\n    },\n    \"00:00:00:00:00:30/None\": {\n      \"basic\": {\n        \"name\": \"h3\",\n        \"locType\": \"grid\",\n        \"gridX\": 750,\n        \"gridY\": 700\n      }\n    },\n    \"00:00:00:00:00:40/None\": {\n      \"basic\": {\n        \"name\": \"h4\",\n        \"locType\": \"grid\",\n        \"gridX\": 850,\n        
\"gridY\": 700\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "mininet/recv-gtp.py",
    "content": "#!/usr/bin/python\n\n# Script used in Exercise 8 that sniffs packets and prints on screen whether\n# they are GTP encapsulated or not.\n\nimport signal\nimport sys\n\nfrom ptf.packet import IP\nfrom scapy.contrib import gtp\nfrom scapy.sendrecv import sniff\n\npkt_count = 0\n\n\ndef handle_pkt(pkt, ex):\n    global pkt_count\n    pkt_count = pkt_count + 1\n    if gtp.GTP_U_Header in pkt:\n        is_gtp_encap = True\n    else:\n        is_gtp_encap = False\n\n    print \"[%d] %d bytes: %s -> %s, is_gtp_encap=%s\\n\\t%s\" \\\n          % (pkt_count, len(pkt), pkt[IP].src, pkt[IP].dst,\n             is_gtp_encap, pkt.summary())\n\n    if is_gtp_encap and ex:\n        exit()\n\n\nprint \"Will print a line for each UDP packet received...\"\n\n\ndef handle_timeout(signum, frame):\n    print \"Timeout! Did not receive any GTP packet\"\n    exit(1)\n\n\nexitOnSuccess = False\nif len(sys.argv) > 1 and sys.argv[1] == \"-e\":\n    # wait max 10 seconds or exit\n    signal.signal(signal.SIGALRM, handle_timeout)\n    signal.alarm(10)\n    exitOnSuccess = True\n\nsniff(count=0, store=False, filter=\"udp\",\n      prn=lambda x: handle_pkt(x, exitOnSuccess))\n"
  },
  {
    "path": "mininet/send-udp.py",
    "content": "#!/usr/bin/python\n\n# Script used in Exercise 8.\n# Send downlink packets to UE address.\n\nfrom scapy.layers.inet import IP, UDP\nfrom scapy.sendrecv import send\n\nUE_ADDR = '17.0.0.1'\nRATE = 5  # packets per second\nPAYLOAD = ' '.join(['P4 is great!'] * 50)\n\nprint \"Sending %d UDP packets per second to %s...\" % (RATE, UE_ADDR)\n\npkt = IP(dst=UE_ADDR) / UDP(sport=80, dport=400) / PAYLOAD\nsend(pkt, inter=1.0 / RATE, loop=True, verbose=True)\n"
  },
  {
    "path": "mininet/topo-gtp.py",
    "content": "#!/usr/bin/python\n\n#  Copyright 2019-present Open Networking Foundation\n#\n#  Licensed under the Apache License, Version 2.0 (the \"License\");\n#  you may not use this file except in compliance with the License.\n#  You may obtain a copy of the License at\n#\n#      http://www.apache.org/licenses/LICENSE-2.0\n#\n#  Unless required by applicable law or agreed to in writing, software\n#  distributed under the License is distributed on an \"AS IS\" BASIS,\n#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#  See the License for the specific language governing permissions and\n#  limitations under the License.\n\nimport argparse\n\nfrom mininet.cli import CLI\nfrom mininet.log import setLogLevel\nfrom mininet.net import Mininet\nfrom mininet.node import Host\nfrom mininet.topo import Topo\nfrom stratum import StratumBmv2Switch\n\nCPU_PORT = 255\n\n\nclass IPv4Host(Host):\n    \"\"\"Host that can be configured with an IPv4 gateway (default route).\n    \"\"\"\n\n    def config(self, mac=None, ip=None, defaultRoute=None, lo='up', gw=None,\n               **_params):\n        super(IPv4Host, self).config(mac, ip, defaultRoute, lo, **_params)\n        self.cmd('ip -4 addr flush dev %s' % self.defaultIntf())\n        self.cmd('ip -6 addr flush dev %s' % self.defaultIntf())\n        self.cmd('sysctl -w net.ipv4.ip_forward=0')\n        self.cmd('ip -4 link set up %s' % self.defaultIntf())\n        self.cmd('ip -4 addr add %s dev %s' % (ip, self.defaultIntf()))\n        if gw:\n            self.cmd('ip -4 route add default via %s' % gw)\n        # Disable offload\n        for attr in [\"rx\", \"tx\", \"sg\"]:\n            cmd = \"/sbin/ethtool --offload %s %s off\" % (\n                self.defaultIntf(), attr)\n            self.cmd(cmd)\n\n        def updateIP():\n            return ip.split('/')[0]\n\n        self.defaultIntf().updateIP = updateIP\n\n\nclass TutorialTopo(Topo):\n    \"\"\"2x2 fabric topology for GTP encap 
exercise with 2 IPv4 hosts emulating an\n       eNodeB (base station) and a gateway to a Packet Data Network (PDN)\n    \"\"\"\n\n    def __init__(self, *args, **kwargs):\n        Topo.__init__(self, *args, **kwargs)\n\n        # Leaves\n        # gRPC port 50001\n        leaf1 = self.addSwitch('leaf1', cls=StratumBmv2Switch, cpuport=CPU_PORT)\n        # gRPC port 50002\n        leaf2 = self.addSwitch('leaf2', cls=StratumBmv2Switch, cpuport=CPU_PORT)\n\n        # Spines\n        # gRPC port 50003\n        spine1 = self.addSwitch('spine1', cls=StratumBmv2Switch, cpuport=CPU_PORT)\n        # gRPC port 50004\n        spine2 = self.addSwitch('spine2', cls=StratumBmv2Switch, cpuport=CPU_PORT)\n\n        # Switch Links\n        self.addLink(spine1, leaf1)\n        self.addLink(spine1, leaf2)\n        self.addLink(spine2, leaf1)\n        self.addLink(spine2, leaf2)\n\n        # IPv4 hosts attached to leaf 1\n        enodeb = self.addHost('enodeb', cls=IPv4Host, mac='00:00:00:00:00:10',\n                              ip='10.0.100.1/24', gw='10.0.100.254')\n        self.addLink(enodeb, leaf1)  # port 3\n\n        # IPv4 hosts attached to leaf 2\n        pdn = self.addHost('pdn', cls=IPv4Host, mac='00:00:00:00:00:20',\n                           ip='10.0.200.1/24', gw='10.0.200.254')\n        self.addLink(pdn, leaf2)  # port 3\n\n\ndef main():\n    net = Mininet(topo=TutorialTopo(), controller=None)\n    net.start()\n    CLI(net)\n    net.stop()\n    print '#' * 80\n    print 'ATTENTION: Mininet was stopped! 
Perhaps accidentally?'\n    print 'No worries, it will restart automatically in a few seconds...'\n    print 'To access again the Mininet CLI, use `make mn-cli`'\n    print 'To detach from the CLI (without stopping), press Ctrl-D'\n    print 'To permanently quit Mininet, use `make stop`'\n    print '#' * 80\n\n\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser(\n        description='Mininet topology script for 2x2 fabric with stratum_bmv2 and IPv4 hosts')\n    args = parser.parse_args()\n    setLogLevel('info')\n\n    main()\n"
  },
  {
    "path": "mininet/topo-v4.py",
    "content": "#!/usr/bin/python\n\n#  Copyright 2019-present Open Networking Foundation\n#\n#  Licensed under the Apache License, Version 2.0 (the \"License\");\n#  you may not use this file except in compliance with the License.\n#  You may obtain a copy of the License at\n#\n#      http://www.apache.org/licenses/LICENSE-2.0\n#\n#  Unless required by applicable law or agreed to in writing, software\n#  distributed under the License is distributed on an \"AS IS\" BASIS,\n#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#  See the License for the specific language governing permissions and\n#  limitations under the License.\n\nimport argparse\n\nfrom mininet.cli import CLI\nfrom mininet.log import setLogLevel\nfrom mininet.net import Mininet\nfrom mininet.node import Host\nfrom mininet.topo import Topo\nfrom stratum import StratumBmv2Switch\n\nCPU_PORT = 255\n\n\nclass IPv4Host(Host):\n    \"\"\"Host that can be configured with an IPv4 gateway (default route).\n    \"\"\"\n\n    def config(self, mac=None, ip=None, defaultRoute=None, lo='up', gw=None,\n               **_params):\n        super(IPv4Host, self).config(mac, ip, defaultRoute, lo, **_params)\n        self.cmd('ip -4 addr flush dev %s' % self.defaultIntf())\n        self.cmd('ip -6 addr flush dev %s' % self.defaultIntf())\n        self.cmd('ip -4 link set up %s' % self.defaultIntf())\n        self.cmd('ip -4 addr add %s dev %s' % (ip, self.defaultIntf()))\n        if gw:\n            self.cmd('ip -4 route add default via %s' % gw)\n        # Disable offload\n        for attr in [\"rx\", \"tx\", \"sg\"]:\n            cmd = \"/sbin/ethtool --offload %s %s off\" % (\n                self.defaultIntf(), attr)\n            self.cmd(cmd)\n\n        def updateIP():\n            return ip.split('/')[0]\n\n        self.defaultIntf().updateIP = updateIP\n\n\nclass TaggedIPv4Host(Host):\n    \"\"\"VLAN-tagged host that can be configured with an IPv4 gateway\n    (default route).\n    
\"\"\"\n    vlanIntf = None\n\n    def config(self, mac=None, ip=None, defaultRoute=None, lo='up', gw=None,\n               vlan=None, **_params):\n        super(TaggedIPv4Host, self).config(mac, ip, defaultRoute, lo, **_params)\n        self.vlanIntf = \"%s.%s\" % (self.defaultIntf(), vlan)\n        # Replace default interface with a tagged one\n        self.cmd('ip -4 addr flush dev %s' % self.defaultIntf())\n        self.cmd('ip -6 addr flush dev %s' % self.defaultIntf())\n        self.cmd('ip -4 link add link %s name %s type vlan id %s' % (\n            self.defaultIntf(), self.vlanIntf, vlan))\n        self.cmd('ip -4 link set up %s' % self.vlanIntf)\n        self.cmd('ip -4 addr add %s dev %s' % (ip, self.vlanIntf))\n        if gw:\n            self.cmd('ip -4 route add default via %s' % gw)\n\n        self.defaultIntf().name = self.vlanIntf\n        self.nameToIntf[self.vlanIntf] = self.defaultIntf()\n\n        # Disable offload\n        for attr in [\"rx\", \"tx\", \"sg\"]:\n            cmd = \"/sbin/ethtool --offload %s %s off\" % (\n                self.defaultIntf(), attr)\n            self.cmd(cmd)\n\n        def updateIP():\n            return ip.split('/')[0]\n\n        self.defaultIntf().updateIP = updateIP\n\n    def terminate(self):\n        self.cmd('ip -4 link remove link %s' % self.vlanIntf)\n        super(TaggedIPv4Host, self).terminate()\n\n\nclass TutorialTopo(Topo):\n    \"\"\"2x2 fabric topology with IPv4 hosts\"\"\"\n\n    def __init__(self, *args, **kwargs):\n        Topo.__init__(self, *args, **kwargs)\n\n        # Leaves\n        # gRPC port 50001\n        leaf1 = self.addSwitch('leaf1', cls=StratumBmv2Switch, cpuport=CPU_PORT)\n        # gRPC port 50002\n        leaf2 = self.addSwitch('leaf2', cls=StratumBmv2Switch, cpuport=CPU_PORT)\n\n        # Spines\n        # gRPC port 50003\n        spine1 = self.addSwitch('spine1', cls=StratumBmv2Switch, cpuport=CPU_PORT)\n        # gRPC port 50004\n        spine2 = self.addSwitch('spine2', 
cls=StratumBmv2Switch, cpuport=CPU_PORT)\n\n        # Switch Links\n        self.addLink(spine1, leaf1)\n        self.addLink(spine1, leaf2)\n        self.addLink(spine2, leaf1)\n        self.addLink(spine2, leaf2)\n\n        # IPv4 hosts attached to leaf 1\n        h1a = self.addHost('h1a', cls=IPv4Host, mac=\"00:00:00:00:00:1A\",\n                           ip='172.16.1.1/24', gw='172.16.1.254')\n        h1b = self.addHost('h1b', cls=IPv4Host, mac=\"00:00:00:00:00:1B\",\n                           ip='172.16.1.2/24', gw='172.16.1.254')\n        h1c = self.addHost('h1c', cls=TaggedIPv4Host, mac=\"00:00:00:00:00:1C\",\n                           ip='172.16.1.3/24', gw='172.16.1.254', vlan=100)\n        h2 = self.addHost('h2', cls=TaggedIPv4Host, mac=\"00:00:00:00:00:20\",\n                          ip='172.16.2.1/24', gw='172.16.2.254', vlan=200)\n        self.addLink(h1a, leaf1)  # port 3\n        self.addLink(h1b, leaf1)  # port 4\n        self.addLink(h1c, leaf1)  # port 5\n        self.addLink(h2, leaf1)  # port 6\n\n        # IPv4 hosts attached to leaf 2\n        h3 = self.addHost('h3', cls=TaggedIPv4Host, mac=\"00:00:00:00:00:30\",\n                          ip='172.16.3.1/24', gw='172.16.3.254', vlan=300)\n        h4 = self.addHost('h4', cls=IPv4Host, mac=\"00:00:00:00:00:40\",\n                          ip='172.16.4.1/24', gw='172.16.4.254')\n        self.addLink(h3, leaf2)  # port 3\n        self.addLink(h4, leaf2)  # port 4\n\n\ndef main():\n    net = Mininet(topo=TutorialTopo(), controller=None)\n    net.start()\n    CLI(net)\n    net.stop()\n    print '#' * 80\n    print 'ATTENTION: Mininet was stopped! 
Perhaps accidentally?'\n    print 'No worries, it will restart automatically in a few seconds...'\n    print 'To access again the Mininet CLI, use `make mn-cli`'\n    print 'To detach from the CLI (without stopping), press Ctrl-D'\n    print 'To permanently quit Mininet, use `make stop`'\n    print '#' * 80\n\n\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser(\n        description='Mininet topology script for 2x2 fabric with stratum_bmv2 and IPv4 hosts')\n    args = parser.parse_args()\n    setLogLevel('info')\n\n    main()\n"
  },
  {
    "path": "mininet/topo-v6.py",
    "content": "#!/usr/bin/python\n\n#  Copyright 2019-present Open Networking Foundation\n#\n#  Licensed under the Apache License, Version 2.0 (the \"License\");\n#  you may not use this file except in compliance with the License.\n#  You may obtain a copy of the License at\n#\n#      http://www.apache.org/licenses/LICENSE-2.0\n#\n#  Unless required by applicable law or agreed to in writing, software\n#  distributed under the License is distributed on an \"AS IS\" BASIS,\n#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#  See the License for the specific language governing permissions and\n#  limitations under the License.\n\nimport argparse\n\nfrom mininet.cli import CLI\nfrom mininet.log import setLogLevel\nfrom mininet.net import Mininet\nfrom mininet.node import Host\nfrom mininet.topo import Topo\nfrom stratum import StratumBmv2Switch\n\nCPU_PORT = 255\n\n\nclass IPv6Host(Host):\n    \"\"\"Host that can be configured with an IPv6 gateway (default route).\n    \"\"\"\n\n    def config(self, ipv6, ipv6_gw=None, **params):\n        super(IPv6Host, self).config(**params)\n        self.cmd('ip -4 addr flush dev %s' % self.defaultIntf())\n        self.cmd('ip -6 addr flush dev %s' % self.defaultIntf())\n        self.cmd('ip -6 addr add %s dev %s' % (ipv6, self.defaultIntf()))\n        if ipv6_gw:\n            self.cmd('ip -6 route add default via %s' % ipv6_gw)\n        # Disable offload\n        for attr in [\"rx\", \"tx\", \"sg\"]:\n            cmd = \"/sbin/ethtool --offload %s %s off\" % (self.defaultIntf(), attr)\n            self.cmd(cmd)\n\n        def updateIP():\n            return ipv6.split('/')[0]\n\n        self.defaultIntf().updateIP = updateIP\n\n    def terminate(self):\n        super(IPv6Host, self).terminate()\n\n\nclass TutorialTopo(Topo):\n    \"\"\"2x2 fabric topology with IPv6 hosts\"\"\"\n\n    def __init__(self, *args, **kwargs):\n        Topo.__init__(self, *args, **kwargs)\n\n        # Leaves\n        # gRPC 
port 50001\n        leaf1 = self.addSwitch('leaf1', cls=StratumBmv2Switch, cpuport=CPU_PORT)\n        # gRPC port 50002\n        leaf2 = self.addSwitch('leaf2', cls=StratumBmv2Switch, cpuport=CPU_PORT)\n\n        # Spines\n        # gRPC port 50003\n        spine1 = self.addSwitch('spine1', cls=StratumBmv2Switch, cpuport=CPU_PORT)\n        # gRPC port 50004\n        spine2 = self.addSwitch('spine2', cls=StratumBmv2Switch, cpuport=CPU_PORT)\n\n        # Switch Links\n        self.addLink(spine1, leaf1)\n        self.addLink(spine1, leaf2)\n        self.addLink(spine2, leaf1)\n        self.addLink(spine2, leaf2)\n\n        # IPv6 hosts attached to leaf 1\n        h1a = self.addHost('h1a', cls=IPv6Host, mac=\"00:00:00:00:00:1A\",\n                           ipv6='2001:1:1::a/64', ipv6_gw='2001:1:1::ff')\n        h1b = self.addHost('h1b', cls=IPv6Host, mac=\"00:00:00:00:00:1B\",\n                           ipv6='2001:1:1::b/64', ipv6_gw='2001:1:1::ff')\n        h1c = self.addHost('h1c', cls=IPv6Host, mac=\"00:00:00:00:00:1C\",\n                           ipv6='2001:1:1::c/64', ipv6_gw='2001:1:1::ff')\n        h2 = self.addHost('h2', cls=IPv6Host, mac=\"00:00:00:00:00:20\",\n                          ipv6='2001:1:2::1/64', ipv6_gw='2001:1:2::ff')\n        self.addLink(h1a, leaf1)  # port 3\n        self.addLink(h1b, leaf1)  # port 4\n        self.addLink(h1c, leaf1)  # port 5\n        self.addLink(h2, leaf1)  # port 6\n\n        # IPv6 hosts attached to leaf 2\n        h3 = self.addHost('h3', cls=IPv6Host, mac=\"00:00:00:00:00:30\",\n                          ipv6='2001:2:3::1/64', ipv6_gw='2001:2:3::ff')\n        h4 = self.addHost('h4', cls=IPv6Host, mac=\"00:00:00:00:00:40\",\n                          ipv6='2001:2:4::1/64', ipv6_gw='2001:2:4::ff')\n        self.addLink(h3, leaf2)  # port 3\n        self.addLink(h4, leaf2)  # port 4\n\n\ndef main():\n    net = Mininet(topo=TutorialTopo(), controller=None)\n    net.start()\n    CLI(net)\n    net.stop()\n    print '#' * 
80\n    print 'ATTENTION: Mininet was stopped! Perhaps accidentally?'\n    print 'No worries, it will restart automatically in a few seconds...'\n    print 'To access again the Mininet CLI, use `make mn-cli`'\n    print 'To detach from the CLI (without stopping), press Ctrl-D'\n    print 'To permanently quit Mininet, use `make stop`'\n    print '#' * 80\n\n\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser(\n        description='Mininet topology script for 2x2 fabric with stratum_bmv2 and IPv6 hosts')\n    args = parser.parse_args()\n    setLogLevel('info')\n\n    main()\n"
  },
  {
    "path": "p4src/main.p4",
    "content": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n\n#include <core.p4>\n#include <v1model.p4>\n\n// CPU_PORT specifies the P4 port number associated to controller packet-in and\n// packet-out. All packets forwarded via this port will be delivered to the\n// controller as P4Runtime PacketIn messages. Similarly, PacketOut messages from\n// the controller will be seen by the P4 pipeline as coming from the CPU_PORT.\n#define CPU_PORT 255\n\n// CPU_CLONE_SESSION_ID specifies the mirroring session for packets to be cloned\n// to the CPU port. Packets associated with this session ID will be cloned to\n// the CPU_PORT as well as being transmitted via their egress port (set by the\n// bridging/routing/acl table). 
For cloning to work, the P4Runtime controller\n// needs first to insert a CloneSessionEntry that maps this session ID to the\n// CPU_PORT.\n#define CPU_CLONE_SESSION_ID 99\n\n// Maximum number of hops supported when using SRv6.\n// Required for Exercise 7.\n#define SRV6_MAX_HOPS 4\n\ntypedef bit<9>   port_num_t;\ntypedef bit<48>  mac_addr_t;\ntypedef bit<16>  mcast_group_id_t;\ntypedef bit<32>  ipv4_addr_t;\ntypedef bit<128> ipv6_addr_t;\ntypedef bit<16>  l4_port_t;\n\nconst bit<16> ETHERTYPE_IPV4 = 0x0800;\nconst bit<16> ETHERTYPE_IPV6 = 0x86dd;\n\nconst bit<8> IP_PROTO_ICMP   = 1;\nconst bit<8> IP_PROTO_TCP    = 6;\nconst bit<8> IP_PROTO_UDP    = 17;\nconst bit<8> IP_PROTO_SRV6   = 43;\nconst bit<8> IP_PROTO_ICMPV6 = 58;\n\nconst mac_addr_t IPV6_MCAST_01 = 0x33_33_00_00_00_01;\n\nconst bit<8> ICMP6_TYPE_NS = 135;\nconst bit<8> ICMP6_TYPE_NA = 136;\n\nconst bit<8> NDP_OPT_TARGET_LL_ADDR = 2;\n\nconst bit<32> NDP_FLAG_ROUTER    = 0x80000000;\nconst bit<32> NDP_FLAG_SOLICITED = 0x40000000;\nconst bit<32> NDP_FLAG_OVERRIDE  = 0x20000000;\n\n\n//------------------------------------------------------------------------------\n// HEADER DEFINITIONS\n//------------------------------------------------------------------------------\n\nheader ethernet_t {\n    mac_addr_t  dst_addr;\n    mac_addr_t  src_addr;\n    bit<16>     ether_type;\n}\n\nheader ipv4_t {\n    bit<4>   version;\n    bit<4>   ihl;\n    bit<6>   dscp;\n    bit<2>   ecn;\n    bit<16>  total_len;\n    bit<16>  identification;\n    bit<3>   flags;\n    bit<13>  frag_offset;\n    bit<8>   ttl;\n    bit<8>   protocol;\n    bit<16>  hdr_checksum;\n    bit<32>  src_addr;\n    bit<32>  dst_addr;\n}\n\nheader ipv6_t {\n    bit<4>    version;\n    bit<8>    traffic_class;\n    bit<20>   flow_label;\n    bit<16>   payload_len;\n    bit<8>    next_hdr;\n    bit<8>    hop_limit;\n    bit<128>  src_addr;\n    bit<128>  dst_addr;\n}\n\nheader srv6h_t {\n    bit<8>   next_hdr;\n    bit<8>   hdr_ext_len;\n    bit<8>   
routing_type;\n    bit<8>   segment_left;\n    bit<8>   last_entry;\n    bit<8>   flags;\n    bit<16>  tag;\n}\n\nheader srv6_list_t {\n    bit<128>  segment_id;\n}\n\nheader tcp_t {\n    bit<16>  src_port;\n    bit<16>  dst_port;\n    bit<32>  seq_no;\n    bit<32>  ack_no;\n    bit<4>   data_offset;\n    bit<3>   res;\n    bit<3>   ecn;\n    bit<6>   ctrl;\n    bit<16>  window;\n    bit<16>  checksum;\n    bit<16>  urgent_ptr;\n}\n\nheader udp_t {\n    bit<16> src_port;\n    bit<16> dst_port;\n    bit<16> len;\n    bit<16> checksum;\n}\n\nheader icmp_t {\n    bit<8>   type;\n    bit<8>   icmp_code;\n    bit<16>  checksum;\n    bit<16>  identifier;\n    bit<16>  sequence_number;\n    bit<64>  timestamp;\n}\n\nheader icmpv6_t {\n    bit<8>   type;\n    bit<8>   code;\n    bit<16>  checksum;\n}\n\nheader ndp_t {\n    bit<32>      flags;\n    ipv6_addr_t  target_ipv6_addr;\n    // NDP option.\n    bit<8>       type;\n    bit<8>       length;\n    bit<48>      target_mac_addr;\n}\n\n// Packet-in header. Prepended to packets sent to the CPU_PORT and used by the\n// P4Runtime server (Stratum) to populate the PacketIn message metadata fields.\n// Here we use it to carry the original ingress port where the packet was\n// received.\n@controller_header(\"packet_in\")\nheader cpu_in_header_t {\n    port_num_t  ingress_port;\n    bit<7>      _pad;\n}\n\n// Packet-out header. Prepended to packets received from the CPU_PORT. Fields of\n// this header are populated by the P4Runtime server based on the P4Runtime\n// PacketOut metadata fields. 
Here we use it to inform the P4 pipeline on which\n// port this packet-out should be transmitted.\n@controller_header(\"packet_out\")\nheader cpu_out_header_t {\n    port_num_t  egress_port;\n    bit<7>      _pad;\n}\n\nstruct parsed_headers_t {\n    cpu_out_header_t cpu_out;\n    cpu_in_header_t cpu_in;\n    ethernet_t ethernet;\n    ipv4_t ipv4;\n    ipv6_t ipv6;\n    srv6h_t srv6h;\n    srv6_list_t[SRV6_MAX_HOPS] srv6_list;\n    tcp_t tcp;\n    udp_t udp;\n    icmp_t icmp;\n    icmpv6_t icmpv6;\n    ndp_t ndp;\n}\n\nstruct local_metadata_t {\n    l4_port_t   l4_src_port;\n    l4_port_t   l4_dst_port;\n    bool        is_multicast;\n    ipv6_addr_t next_srv6_sid;\n    bit<8>      ip_proto;\n    bit<8>      icmp_type;\n}\n\n\n//------------------------------------------------------------------------------\n// INGRESS PIPELINE\n//------------------------------------------------------------------------------\n\nparser ParserImpl (packet_in packet,\n                   out parsed_headers_t hdr,\n                   inout local_metadata_t local_metadata,\n                   inout standard_metadata_t standard_metadata)\n{\n    state start {\n        transition select(standard_metadata.ingress_port) {\n            CPU_PORT: parse_packet_out;\n            default: parse_ethernet;\n        }\n    }\n\n    state parse_packet_out {\n        packet.extract(hdr.cpu_out);\n        transition parse_ethernet;\n    }\n\n    state parse_ethernet {\n        packet.extract(hdr.ethernet);\n        transition select(hdr.ethernet.ether_type){\n            ETHERTYPE_IPV4: parse_ipv4;\n            ETHERTYPE_IPV6: parse_ipv6;\n            default: accept;\n        }\n    }\n\n    state parse_ipv4 {\n        packet.extract(hdr.ipv4);\n        local_metadata.ip_proto = hdr.ipv4.protocol;\n        transition select(hdr.ipv4.protocol) {\n            IP_PROTO_TCP: parse_tcp;\n            IP_PROTO_UDP: parse_udp;\n            IP_PROTO_ICMP: parse_icmp;\n            default: accept;\n        }\n   
 }\n\n    state parse_ipv6 {\n        packet.extract(hdr.ipv6);\n        local_metadata.ip_proto = hdr.ipv6.next_hdr;\n        transition select(hdr.ipv6.next_hdr) {\n            IP_PROTO_TCP: parse_tcp;\n            IP_PROTO_UDP: parse_udp;\n            IP_PROTO_ICMPV6: parse_icmpv6;\n            IP_PROTO_SRV6: parse_srv6;\n            default: accept;\n        }\n    }\n\n    state parse_tcp {\n        packet.extract(hdr.tcp);\n        local_metadata.l4_src_port = hdr.tcp.src_port;\n        local_metadata.l4_dst_port = hdr.tcp.dst_port;\n        transition accept;\n    }\n\n    state parse_udp {\n        packet.extract(hdr.udp);\n        local_metadata.l4_src_port = hdr.udp.src_port;\n        local_metadata.l4_dst_port = hdr.udp.dst_port;\n        transition accept;\n    }\n\n    state parse_icmp {\n        packet.extract(hdr.icmp);\n        local_metadata.icmp_type = hdr.icmp.type;\n        transition accept;\n    }\n\n    state parse_icmpv6 {\n        packet.extract(hdr.icmpv6);\n        local_metadata.icmp_type = hdr.icmpv6.type;\n        transition select(hdr.icmpv6.type) {\n            ICMP6_TYPE_NS: parse_ndp;\n            ICMP6_TYPE_NA: parse_ndp;\n            default: accept;\n        }\n    }\n\n    state parse_ndp {\n        packet.extract(hdr.ndp);\n        transition accept;\n    }\n\n    state parse_srv6 {\n        packet.extract(hdr.srv6h);\n        transition parse_srv6_list;\n    }\n\n    state parse_srv6_list {\n        packet.extract(hdr.srv6_list.next);\n        bool next_segment = (bit<32>)hdr.srv6h.segment_left - 1 == (bit<32>)hdr.srv6_list.lastIndex;\n        transition select(next_segment) {\n            true: mark_current_srv6;\n            default: check_last_srv6;\n        }\n    }\n\n    state mark_current_srv6 {\n        local_metadata.next_srv6_sid = hdr.srv6_list.last.segment_id;\n        transition check_last_srv6;\n    }\n\n    state check_last_srv6 {\n        // working with bit<8> and int<32> which cannot be cast directly; 
using\n        // bit<32> as common intermediate type for comparison\n        bool last_segment = (bit<32>)hdr.srv6h.last_entry == (bit<32>)hdr.srv6_list.lastIndex;\n        transition select(last_segment) {\n           true: parse_srv6_next_hdr;\n           false: parse_srv6_list;\n        }\n    }\n\n    state parse_srv6_next_hdr {\n        transition select(hdr.srv6h.next_hdr) {\n            IP_PROTO_TCP: parse_tcp;\n            IP_PROTO_UDP: parse_udp;\n            IP_PROTO_ICMPV6: parse_icmpv6;\n            default: accept;\n        }\n    }\n}\n\n\ncontrol VerifyChecksumImpl(inout parsed_headers_t hdr,\n                           inout local_metadata_t meta)\n{\n    // Not used here. We assume all packets have a valid checksum; if not, we\n    // let the end hosts detect errors.\n    apply { /* EMPTY */ }\n}\n\n\ncontrol IngressPipeImpl (inout parsed_headers_t    hdr,\n                         inout local_metadata_t    local_metadata,\n                         inout standard_metadata_t standard_metadata) {\n\n    // Drop action shared by many tables.\n    action drop() {\n        mark_to_drop(standard_metadata);\n    }\n\n\n    // *** L2 BRIDGING\n    //\n    // Here we define tables to forward packets based on their Ethernet\n    // destination address. There are two types of L2 entries that we\n    // need to support:\n    //\n    // 1. Unicast entries: which will be filled in by the control plane when the\n    //    location (port) of new hosts is learned.\n    // 2. Broadcast/multicast entries: used to replicate NDP Neighbor\n    //    Solicitation (NS) messages to all host-facing ports;\n    //\n    // For (2), unlike ARP messages in IPv4 which are broadcast to the Ethernet\n    // destination address FF:FF:FF:FF:FF:FF, NDP messages are sent to special\n    // Ethernet addresses specified by RFC2464. These addresses are prefixed\n    // with 33:33 and the last four octets are the last four octets of the IPv6\n    // destination multicast address. 
The most straightforward way of matching\n    // on such IPv6 broadcast/multicast packets, without digging in the details\n    // of RFC2464, is to use a ternary match on 33:33:**:**:**:**, where * means\n    // \"don't care\".\n    //\n    // For this reason, our solution defines two tables. One that matches in an\n    // exact fashion (easier to scale on switch ASIC memory) and one that uses\n    // ternary matching (which requires more expensive TCAM memories, usually\n    // much smaller).\n\n    // --- l2_exact_table (for unicast entries) --------------------------------\n\n    action set_egress_port(port_num_t port_num) {\n        standard_metadata.egress_spec = port_num;\n    }\n\n    table l2_exact_table {\n        key = {\n            hdr.ethernet.dst_addr: exact;\n        }\n        actions = {\n            set_egress_port;\n            @defaultonly drop;\n        }\n        const default_action = drop;\n        // The @name annotation is used here to provide a name to this table\n        // counter, as it will be needed by the compiler to generate the\n        // corresponding P4Info entity.\n        @name(\"l2_exact_table_counter\")\n        counters = direct_counter(CounterType.packets_and_bytes);\n    }\n\n    // --- l2_ternary_table (for broadcast/multicast entries) ------------------\n\n    action set_multicast_group(mcast_group_id_t gid) {\n        // gid will be used by the Packet Replication Engine (PRE) in the\n        // Traffic Manager--located right after the ingress pipeline, to\n        // replicate a packet to multiple egress ports, specified by the control\n        // plane by means of P4Runtime MulticastGroupEntry messages.\n        standard_metadata.mcast_grp = gid;\n        local_metadata.is_multicast = true;\n    }\n\n    table l2_ternary_table {\n        key = {\n            hdr.ethernet.dst_addr: ternary;\n        }\n        actions = {\n            set_multicast_group;\n            @defaultonly drop;\n        }\n        const 
default_action = drop;\n        @name(\"l2_ternary_table_counter\")\n        counters = direct_counter(CounterType.packets_and_bytes);\n    }\n\n\n    // *** TODO EXERCISE 5 (IPV6 ROUTING)\n    //\n    // 1. Create a table to handle NDP messages to resolve the MAC address of\n    //    the switch. This table should:\n    //    - match on hdr.ndp.target_ipv6_addr (exact match)\n    //    - provide action \"ndp_ns_to_na\" (look in snippets.p4)\n    //    - default_action should be \"NoAction\"\n    //\n    // 2. To support IPv6 routing, create an L2 my station table (hit when the\n    //    Ethernet destination address is the switch address). This table\n    //    should not do anything to the packet (i.e., NoAction), but the control\n    //    block below should use the result (table.hit) to decide how to process\n    //    the packet.\n    //\n    // 3. Create a table for IPv6 routing. An action selector should be used to\n    //    pick a next hop MAC address according to a hash of packet header\n    //    fields (IPv6 source/destination address and the flow label). Look in\n    //    snippets.p4 for an example of an action selector and table using it.\n    //\n    // You can name your tables whatever you like. You will need to fill\n    // the name in elsewhere in this exercise.\n\n\n    // *** TODO EXERCISE 6 (SRV6)\n    //\n    // Implement tables to provide SRV6 logic.\n\n\n    // *** ACL\n    //\n    // Provides ways to override a previous forwarding decision, for example\n    // requiring that a packet is cloned/sent to the CPU, or dropped.\n    //\n    // We use this table to clone all NDP packets to the control plane, so as to\n    // enable host discovery. 
When the location of a new host is discovered, the\n    // controller is expected to update the L2 and L3 tables with the\n    // corresponding bridging and routing entries.\n\n    action send_to_cpu() {\n        standard_metadata.egress_spec = CPU_PORT;\n    }\n\n    action clone_to_cpu() {\n        // Cloning is achieved by using a v1model-specific primitive. Here we\n        // set the type of clone operation (ingress-to-egress pipeline), the\n        // clone session ID (the CPU one), and the metadata fields we want to\n        // preserve for the cloned packet replica.\n        clone3(CloneType.I2E, CPU_CLONE_SESSION_ID, { standard_metadata.ingress_port });\n    }\n\n    table acl_table {\n        key = {\n            standard_metadata.ingress_port: ternary;\n            hdr.ethernet.dst_addr:          ternary;\n            hdr.ethernet.src_addr:          ternary;\n            hdr.ethernet.ether_type:        ternary;\n            local_metadata.ip_proto:        ternary;\n            local_metadata.icmp_type:       ternary;\n            local_metadata.l4_src_port:     ternary;\n            local_metadata.l4_dst_port:     ternary;\n        }\n        actions = {\n            send_to_cpu;\n            clone_to_cpu;\n            drop;\n        }\n        @name(\"acl_table_counter\")\n        counters = direct_counter(CounterType.packets_and_bytes);\n    }\n\n    apply {\n\n        if (hdr.cpu_out.isValid()) {\n            // *** TODO EXERCISE 4\n            // Implement logic such that if this is a packet-out from the\n            // controller:\n            // 1. Set the packet egress port to that found in the cpu_out header\n            // 2. Remove (set invalid) the cpu_out header\n            // 3. 
Exit the pipeline here (no need to go through other tables)\n        }\n\n        bool do_l3_l2 = true;\n\n        if (hdr.icmpv6.isValid() && hdr.icmpv6.type == ICMP6_TYPE_NS) {\n            // *** TODO EXERCISE 5\n            // Insert logic to handle NDP messages to resolve the MAC address of\n            // the switch. You should apply the NDP reply table created before.\n            // If this is an NDP NS packet, i.e., if a matching entry is found,\n            // unset the \"do_l3_l2\" flag to skip the L3 and L2 tables, as the\n            // \"ndp_ns_to_na\" action already set an egress port.\n        }\n\n        if (do_l3_l2) {\n\n            // *** TODO EXERCISE 5\n            // Insert logic to match the My Station table and, upon hit, the\n            // routing table. You should also add a conditional to drop the\n            // packet if the hop_limit reaches 0.\n\n            // *** TODO EXERCISE 6\n            // Insert logic to match the SRv6 My SID and Transit tables as well\n            // as logic to perform PSP behavior. HINT: This logic belongs\n            // somewhere between checking the switch's my station table and\n            // applying the routing table.\n\n            // L2 bridging logic. 
Apply the exact table first...\n            if (!l2_exact_table.apply().hit) {\n                // ...if an entry is NOT found, apply the ternary one in case\n                // this is a multicast/broadcast NDP NS packet.\n                l2_ternary_table.apply();\n            }\n        }\n\n        // Lastly, apply the ACL table.\n        acl_table.apply();\n    }\n}\n\n\ncontrol EgressPipeImpl (inout parsed_headers_t hdr,\n                        inout local_metadata_t local_metadata,\n                        inout standard_metadata_t standard_metadata) {\n    apply {\n\n        if (standard_metadata.egress_port == CPU_PORT) {\n            // *** TODO EXERCISE 4\n            // Implement logic such that if the packet is to be forwarded to the\n            // CPU port, e.g., if in ingress we matched on the ACL table with\n            // action send/clone_to_cpu...\n            // 1. Set cpu_in header as valid\n            // 2. Set the cpu_in.ingress_port field to the original packet's\n            //    ingress port (standard_metadata.ingress_port).\n        }\n\n        // If this is a multicast packet (flag set by l2_ternary_table), make\n        // sure we are not replicating the packet on the same port where it was\n        // received. 
This is useful to avoid broadcasting NDP requests on the\n        // ingress port.\n        if (local_metadata.is_multicast == true &&\n              standard_metadata.ingress_port == standard_metadata.egress_port) {\n            mark_to_drop(standard_metadata);\n        }\n    }\n}\n\n\ncontrol ComputeChecksumImpl(inout parsed_headers_t hdr,\n                            inout local_metadata_t local_metadata)\n{\n    apply {\n        // The following is used to update the ICMPv6 checksum of NDP\n        // NA packets generated by the ndp reply table in the ingress pipeline.\n        // This function is executed only if the NDP header is present.\n        update_checksum(hdr.ndp.isValid(),\n            {\n                hdr.ipv6.src_addr,\n                hdr.ipv6.dst_addr,\n                hdr.ipv6.payload_len,\n                8w0,\n                hdr.ipv6.next_hdr,\n                hdr.icmpv6.type,\n                hdr.icmpv6.code,\n                hdr.ndp.flags,\n                hdr.ndp.target_ipv6_addr,\n                hdr.ndp.type,\n                hdr.ndp.length,\n                hdr.ndp.target_mac_addr\n            },\n            hdr.icmpv6.checksum,\n            HashAlgorithm.csum16\n        );\n    }\n}\n\n\ncontrol DeparserImpl(packet_out packet, in parsed_headers_t hdr) {\n    apply {\n        packet.emit(hdr.cpu_in);\n        packet.emit(hdr.ethernet);\n        packet.emit(hdr.ipv4);\n        packet.emit(hdr.ipv6);\n        packet.emit(hdr.srv6h);\n        packet.emit(hdr.srv6_list);\n        packet.emit(hdr.tcp);\n        packet.emit(hdr.udp);\n        packet.emit(hdr.icmp);\n        packet.emit(hdr.icmpv6);\n        packet.emit(hdr.ndp);\n    }\n}\n\n\nV1Switch(\n    ParserImpl(),\n    VerifyChecksumImpl(),\n    IngressPipeImpl(),\n    EgressPipeImpl(),\n    ComputeChecksumImpl(),\n    DeparserImpl()\n) main;\n"
  },
  {
    "path": "p4src/snippets.p4",
    "content": "//------------------------------------------------------------------------------\n// SNIPPETS FOR EXERCISE 5 (IPV6 ROUTING)\n//------------------------------------------------------------------------------\n\n// Action that transforms an NDP NS packet into an NDP NA one for the given\n// target MAC address. The action also sets the egress port to the ingress\n// one where the NDP NS packet was received.\naction ndp_ns_to_na(mac_addr_t target_mac) {\n    hdr.ethernet.src_addr = target_mac;\n    hdr.ethernet.dst_addr = IPV6_MCAST_01;\n    ipv6_addr_t host_ipv6_tmp = hdr.ipv6.src_addr;\n    hdr.ipv6.src_addr = hdr.ndp.target_ipv6_addr;\n    hdr.ipv6.dst_addr = host_ipv6_tmp;\n    hdr.ipv6.next_hdr = IP_PROTO_ICMPV6;\n    hdr.icmpv6.type = ICMP6_TYPE_NA;\n    hdr.ndp.flags = NDP_FLAG_ROUTER | NDP_FLAG_OVERRIDE;\n    hdr.ndp.type = NDP_OPT_TARGET_LL_ADDR;\n    hdr.ndp.length = 1;\n    hdr.ndp.target_mac_addr = target_mac;\n    standard_metadata.egress_spec = standard_metadata.ingress_port;\n}\n\n// ECMP action selector definition:\naction_selector(HashAlgorithm.crc16, 32w1024, 32w16) ecmp_selector;\n\n// Example indirect table that uses the ecmp_selector. \"Selector\" match fields\n// are used as input to the action selector hash function.\n// table table_with_action_selector {\n//   key = {\n//       hdr_field_1: lpm / exact / ternary;\n//       hdr_field_2: selector;\n//       hdr_field_3: selector;\n//       ...\n//   }\n//   actions = { ... 
}\n//   implementation = ecmp_selector;\n//   ...\n// }\n\n//------------------------------------------------------------------------------\n// SNIPPETS FOR EXERCISE 6 (SRV6)\n//------------------------------------------------------------------------------\n\naction insert_srv6h_header(bit<8> num_segments) {\n    hdr.srv6h.setValid();\n    hdr.srv6h.next_hdr = hdr.ipv6.next_hdr;\n    hdr.srv6h.hdr_ext_len =  num_segments * 2;\n    hdr.srv6h.routing_type = 4;\n    hdr.srv6h.segment_left = num_segments - 1;\n    hdr.srv6h.last_entry = num_segments - 1;\n    hdr.srv6h.flags = 0;\n    hdr.srv6h.tag = 0;\n    hdr.ipv6.next_hdr = IP_PROTO_SRV6;\n}\n\naction srv6_t_insert_2(ipv6_addr_t s1, ipv6_addr_t s2) {\n    hdr.ipv6.dst_addr = s1;\n    hdr.ipv6.payload_len = hdr.ipv6.payload_len + 40;\n    insert_srv6h_header(2);\n    hdr.srv6_list[0].setValid();\n    hdr.srv6_list[0].segment_id = s2;\n    hdr.srv6_list[1].setValid();\n    hdr.srv6_list[1].segment_id = s1;\n}\n\naction srv6_t_insert_3(ipv6_addr_t s1, ipv6_addr_t s2, ipv6_addr_t s3) {\n    hdr.ipv6.dst_addr = s1;\n    hdr.ipv6.payload_len = hdr.ipv6.payload_len + 56;\n    insert_srv6h_header(3);\n    hdr.srv6_list[0].setValid();\n    hdr.srv6_list[0].segment_id = s3;\n    hdr.srv6_list[1].setValid();\n    hdr.srv6_list[1].segment_id = s2;\n    hdr.srv6_list[2].setValid();\n    hdr.srv6_list[2].segment_id = s1;\n}\n\ntable srv6_transit {\n  key = {\n      // TODO: Add match fields for SRv6 transit rules; we'll start with the\n      //  destination IP address.\n  }\n  actions = {\n      // Note: Single segment header doesn't make sense given PSP\n      // i.e. 
we will pop the SRv6 header when segments_left reaches 0\n      srv6_t_insert_2;\n      srv6_t_insert_3;\n      // Extra credit: set a metadata field, then push label stack in egress\n  }\n  @name(\"srv6_transit_table_counter\")\n  counters = direct_counter(CounterType.packets_and_bytes);\n}\n\naction srv6_pop() {\n  hdr.ipv6.next_hdr = hdr.srv6h.next_hdr;\n  // SRv6 header is 8 bytes\n  // SRv6 list entry is 16 bytes each\n  // (((bit<16>)hdr.srv6h.last_entry + 1) * 16) + 8;\n  bit<16> srv6h_size = (((bit<16>)hdr.srv6h.last_entry + 1) << 4) + 8;\n  hdr.ipv6.payload_len = hdr.ipv6.payload_len - srv6h_size;\n\n  hdr.srv6h.setInvalid();\n  // Need to set MAX_HOPS headers invalid\n  hdr.srv6_list[0].setInvalid();\n  hdr.srv6_list[1].setInvalid();\n  hdr.srv6_list[2].setInvalid();\n}\n"
  },
  {
    "path": "ptf/lib/__init__.py",
    "content": ""
  },
  {
    "path": "ptf/lib/base_test.py",
    "content": "# Copyright 2013-present Barefoot Networks, Inc.\n# Copyright 2018-present Open Networking Foundation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n#\n# Antonin Bas (antonin@barefootnetworks.com)\n# Carmelo Cascone (carmelo@opennetworking.org)\n#\n\nimport logging\n# https://stackoverflow.com/questions/24812604/hide-scapy-warning-message-ipv6\nlogging.getLogger(\"scapy.runtime\").setLevel(logging.ERROR)\n\nimport itertools\nimport Queue\nimport sys\nimport threading\nimport time\nfrom StringIO import StringIO\nfrom functools import wraps, partial\nfrom unittest import SkipTest\n\nimport google.protobuf.text_format\nimport grpc\nimport ptf\nimport scapy.packet\nimport scapy.utils\nfrom google.protobuf import text_format\nfrom google.rpc import status_pb2, code_pb2\nfrom ipaddress import ip_address\nfrom p4.config.v1 import p4info_pb2\nfrom p4.v1 import p4runtime_pb2, p4runtime_pb2_grpc\nfrom ptf import config\nfrom ptf import testutils as testutils\nfrom ptf.base_tests import BaseTest\nfrom ptf.dataplane import match_exp_pkt\nfrom ptf.packet import IPv6\nfrom scapy.layers.inet6 import *\nfrom scapy.layers.l2 import Ether\nfrom scapy.pton_ntop import inet_pton, inet_ntop\nfrom scapy.utils6 import in6_getnsma, in6_getnsmac\n\nfrom helper import P4InfoHelper\n\nDEFAULT_PRIORITY = 10\n\nIPV6_MCAST_MAC_1 = \"33:33:00:00:00:01\"\n\nSWITCH1_MAC = \"00:00:00:00:aa:01\"\nSWITCH2_MAC = \"00:00:00:00:aa:02\"\nSWITCH3_MAC = 
\"00:00:00:00:aa:03\"\nHOST1_MAC = \"00:00:00:00:00:01\"\nHOST2_MAC = \"00:00:00:00:00:02\"\n\nMAC_BROADCAST = \"FF:FF:FF:FF:FF:FF\"\nMAC_FULL_MASK = \"FF:FF:FF:FF:FF:FF\"\nMAC_MULTICAST = \"33:33:00:00:00:00\"\nMAC_MULTICAST_MASK = \"FF:FF:00:00:00:00\"\n\nSWITCH1_IPV6 = \"2001:0:1::1\"\nSWITCH2_IPV6 = \"2001:0:2::1\"\nSWITCH3_IPV6 = \"2001:0:3::1\"\nSWITCH4_IPV6 = \"2001:0:4::1\"\nHOST1_IPV6 = \"2001:0000:85a3::8a2e:370:1111\"\nHOST2_IPV6 = \"2001:0000:85a3::8a2e:370:2222\"\nIPV6_MASK_ALL = \"FFFF:FFFF:FFFF:FFFF:FFFF:FFFF:FFFF:FFFF\"\n\nARP_ETH_TYPE = 0x0806\nIPV6_ETH_TYPE = 0x86DD\n\nICMPV6_IP_PROTO = 58\nNS_ICMPV6_TYPE = 135\nNA_ICMPV6_TYPE = 136\n\n# FIXME: this should be removed, use generic packet in test\nPACKET_IN_INGRESS_PORT_META_ID = 1\n\n\ndef print_inline(text):\n    sys.stdout.write(text)\n    sys.stdout.flush()\n\n\n# See https://gist.github.com/carymrobbins/8940382\n# functools.partialmethod is introduced in Python 3.4\nclass partialmethod(partial):\n    def __get__(self, instance, owner):\n        if instance is None:\n            return self\n        return partial(self.func, instance,\n                       *(self.args or ()), **(self.keywords or {}))\n\n\n# Convert integer (with length) to binary byte string\n# Equivalent to Python 3.2 int.to_bytes\n# See\n# https://stackoverflow.com/questions/16022556/has-python-3-to-bytes-been-back-ported-to-python-2-7\ndef stringify(n, length):\n    h = '%x' % n\n    s = ('0' * (len(h) % 2) + h).zfill(length * 2).decode('hex')\n    return s\n\n\ndef ipv4_to_binary(addr):\n    bytes_ = [int(b, 10) for b in addr.split('.')]\n    return \"\".join(chr(b) for b in bytes_)\n\n\ndef ipv6_to_binary(addr):\n    ip = ip_address(addr.decode(\"utf-8\"))\n    return ip.packed\n\n\ndef mac_to_binary(addr):\n    bytes_ = [int(b, 16) for b in addr.split(':')]\n    return \"\".join(chr(b) for b in bytes_)\n\n\ndef format_pkt_match(received_pkt, expected_pkt):\n    # Taken from PTF dataplane class\n    stdout_save = 
sys.stdout\n    try:\n        # The scapy packet dissection methods print directly to stdout,\n        # so we have to redirect stdout to a string.\n        sys.stdout = StringIO()\n\n        print \"========== EXPECTED ==========\"\n        if isinstance(expected_pkt, scapy.packet.Packet):\n            scapy.packet.ls(expected_pkt)\n            print '--'\n        scapy.utils.hexdump(expected_pkt)\n        print \"========== RECEIVED ==========\"\n        if isinstance(received_pkt, scapy.packet.Packet):\n            scapy.packet.ls(received_pkt)\n            print '--'\n        scapy.utils.hexdump(received_pkt)\n        print \"==============================\"\n\n        return sys.stdout.getvalue()\n    finally:\n        sys.stdout.close()\n        sys.stdout = stdout_save  # Restore the original stdout.\n\n\ndef format_pb_msg_match(received_msg, expected_msg):\n    result = StringIO()\n    result.write(\"========== EXPECTED PROTO ==========\\n\")\n    result.write(text_format.MessageToString(expected_msg))\n    result.write(\"========== RECEIVED PROTO ==========\\n\")\n    result.write(text_format.MessageToString(received_msg))\n    result.write(\"==============================\\n\")\n    val = result.getvalue()\n    result.close()\n    return val\n\n\ndef pkt_mac_swap(pkt):\n    orig_dst = pkt[Ether].dst\n    pkt[Ether].dst = pkt[Ether].src\n    pkt[Ether].src = orig_dst\n    return pkt\n\n\ndef pkt_route(pkt, mac_dst):\n    pkt[Ether].src = pkt[Ether].dst\n    pkt[Ether].dst = mac_dst\n    return pkt\n\n\ndef pkt_decrement_ttl(pkt):\n    if IP in pkt:\n        pkt[IP].ttl -= 1\n    elif IPv6 in pkt:\n        pkt[IPv6].hlim -= 1\n    return pkt\n\n\ndef genNdpNsPkt(target_ip, src_mac=HOST1_MAC, src_ip=HOST1_IPV6):\n    nsma = in6_getnsma(inet_pton(socket.AF_INET6, target_ip))\n    d = inet_ntop(socket.AF_INET6, nsma)\n    dm = in6_getnsmac(nsma)\n    p = Ether(dst=dm) / IPv6(dst=d, src=src_ip, hlim=255)\n    p /= ICMPv6ND_NS(tgt=target_ip)\n    p /= 
ICMPv6NDOptSrcLLAddr(lladdr=src_mac)\n    return p\n\n\ndef genNdpNaPkt(target_ip, target_mac,\n                src_mac=SWITCH1_MAC, dst_mac=IPV6_MCAST_MAC_1,\n                src_ip=SWITCH1_IPV6, dst_ip=HOST1_IPV6):\n    p = Ether(src=src_mac, dst=dst_mac)\n    p /= IPv6(dst=dst_ip, src=src_ip, hlim=255)\n    p /= ICMPv6ND_NA(tgt=target_ip)\n    p /= ICMPv6NDOptDstLLAddr(lladdr=target_mac)\n    return p\n\n\nclass P4RuntimeErrorFormatException(Exception):\n    \"\"\"Used to indicate that the gRPC error Status object returned by the server has\n    an incorrect format.\n    \"\"\"\n\n    def __init__(self, message):\n        super(P4RuntimeErrorFormatException, self).__init__(message)\n\n\n# Used to iterate over the p4.Error messages in a gRPC error Status object\nclass P4RuntimeErrorIterator:\n    def __init__(self, grpc_error):\n        assert (grpc_error.code() == grpc.StatusCode.UNKNOWN)\n        self.grpc_error = grpc_error\n\n        error = None\n        # The gRPC Python package does not have a convenient way to access the\n        # binary details for the error: they are treated as trailing metadata.\n        for meta in itertools.chain(self.grpc_error.initial_metadata(),\n                                    self.grpc_error.trailing_metadata()):\n            if meta[0] == \"grpc-status-details-bin\":\n                error = status_pb2.Status()\n                error.ParseFromString(meta[1])\n                break\n        if error is None:\n            raise P4RuntimeErrorFormatException(\"No binary details field\")\n\n        if len(error.details) == 0:\n            raise P4RuntimeErrorFormatException(\n                \"Binary details field has empty Any details repeated field\")\n        self.errors = error.details\n        self.idx = 0\n\n    def __iter__(self):\n        return self\n\n    def next(self):\n        while self.idx < len(self.errors):\n            p4_error = p4runtime_pb2.Error()\n            one_error_any = self.errors[self.idx]\n       
     if not one_error_any.Unpack(p4_error):\n                raise P4RuntimeErrorFormatException(\n                    \"Cannot convert Any message to p4.Error\")\n            if p4_error.canonical_code == code_pb2.OK:\n                # Skip OK codes, but advance the index first to avoid\n                # looping forever on the same entry.\n                self.idx += 1\n                continue\n            v = self.idx, p4_error\n            self.idx += 1\n            return v\n        raise StopIteration\n\n\n# P4Runtime uses a 3-level message in case of an error during the processing of\n# a write batch. This means that if we do not wrap the grpc.RpcError inside a\n# custom exception, we can end up with a non-helpful exception message in case\n# of failure, as only the first level will be printed. In this custom exception\n# class, we extract the nested error messages (one for each operation included\n# in the batch) in order to print the error code + user-facing message. See the\n# P4Runtime documentation for more details on error reporting.\nclass P4RuntimeWriteException(Exception):\n    def __init__(self, grpc_error):\n        assert (grpc_error.code() == grpc.StatusCode.UNKNOWN)\n        super(P4RuntimeWriteException, self).__init__()\n        self.errors = []\n        try:\n            error_iterator = P4RuntimeErrorIterator(grpc_error)\n            for error_tuple in error_iterator:\n                self.errors.append(error_tuple)\n        except P4RuntimeErrorFormatException:\n            raise  # just propagate exception for now\n\n    def __str__(self):\n        message = \"Error(s) during Write:\\n\"\n        for idx, p4_error in self.errors:\n            code_name = code_pb2._CODE.values_by_number[\n                p4_error.canonical_code].name\n            message += \"\\t* At index {}: {}, '{}'\\n\".format(\n                idx, code_name, p4_error.message)\n        return message\n\n\n# This code is common to all tests. 
setUp() is invoked at the beginning of the\n# test and tearDown is called at the end, no matter whether the test passed /\n# failed / errored.\n# noinspection PyUnresolvedReferences\nclass P4RuntimeTest(BaseTest):\n    def setUp(self):\n        BaseTest.setUp(self)\n\n        # Setting up PTF dataplane\n        self.dataplane = ptf.dataplane_instance\n        self.dataplane.flush()\n\n        self._swports = []\n        for device, port, ifname in config[\"interfaces\"]:\n            self._swports.append(port)\n\n        self.port1 = self.swports(0)\n        self.port2 = self.swports(1)\n        self.port3 = self.swports(2)\n\n        grpc_addr = testutils.test_param_get(\"grpcaddr\")\n        if grpc_addr is None:\n            grpc_addr = 'localhost:50051'\n\n        # Check that the test params are set before converting them to int,\n        # otherwise int(None) raises a TypeError instead of a helpful failure.\n        device_id = testutils.test_param_get(\"device_id\")\n        if device_id is None:\n            self.fail(\"Device ID is not set\")\n        self.device_id = int(device_id)\n\n        cpu_port = testutils.test_param_get(\"cpu_port\")\n        if cpu_port is None:\n            self.fail(\"CPU port is not set\")\n        self.cpu_port = int(cpu_port)\n\n        pltfm = testutils.test_param_get(\"pltfm\")\n        if pltfm is not None and pltfm == 'hw' and getattr(self, \"_skip_on_hw\",\n                                                           False):\n            raise SkipTest(\"Skipping test in HW\")\n\n        self.channel = grpc.insecure_channel(grpc_addr)\n        self.stub = p4runtime_pb2_grpc.P4RuntimeStub(self.channel)\n\n        proto_txt_path = testutils.test_param_get(\"p4info\")\n        # print \"Importing p4info proto from\", proto_txt_path\n        self.p4info = p4info_pb2.P4Info()\n        with open(proto_txt_path, \"rb\") as fin:\n            google.protobuf.text_format.Merge(fin.read(), self.p4info)\n\n        self.helper = P4InfoHelper(proto_txt_path)\n\n        # used to store write requests sent to the P4Runtime server, useful for\n        # autocleanup of tests (see definition of autocleanup decorator below)\n        self.reqs = 
[]\n\n        self.election_id = 1\n        self.set_up_stream()\n\n    def set_up_stream(self):\n        self.stream_out_q = Queue.Queue()\n        self.stream_in_q = Queue.Queue()\n\n        def stream_req_iterator():\n            while True:\n                p = self.stream_out_q.get()\n                if p is None:\n                    break\n                yield p\n\n        def stream_recv(stream):\n            for p in stream:\n                self.stream_in_q.put(p)\n\n        self.stream = self.stub.StreamChannel(stream_req_iterator())\n        self.stream_recv_thread = threading.Thread(\n            target=stream_recv, args=(self.stream,))\n        self.stream_recv_thread.start()\n\n        self.handshake()\n\n    def handshake(self):\n        req = p4runtime_pb2.StreamMessageRequest()\n        arbitration = req.arbitration\n        arbitration.device_id = self.device_id\n        election_id = arbitration.election_id\n        election_id.high = 0\n        election_id.low = self.election_id\n        self.stream_out_q.put(req)\n\n        rep = self.get_stream_packet(\"arbitration\", timeout=2)\n        if rep is None:\n            self.fail(\"Failed to establish handshake\")\n\n    def tearDown(self):\n        self.tear_down_stream()\n        BaseTest.tearDown(self)\n\n    def tear_down_stream(self):\n        self.stream_out_q.put(None)\n        self.stream_recv_thread.join()\n\n    def get_packet_in(self, timeout=2):\n        msg = self.get_stream_packet(\"packet\", timeout)\n        if msg is None:\n            self.fail(\"PacketIn message not received\")\n        else:\n            return msg.packet\n\n    def verify_packet_in(self, exp_packet_in_msg, timeout=2):\n        rx_packet_in_msg = self.get_packet_in(timeout=timeout)\n\n        # Check payload first, then metadata\n        rx_pkt = Ether(rx_packet_in_msg.payload)\n        exp_pkt = exp_packet_in_msg.payload\n        if not match_exp_pkt(exp_pkt, rx_pkt):\n            self.fail(\"Received 
PacketIn.payload is not the expected one\\n\"\n                      + format_pkt_match(rx_pkt, exp_pkt))\n\n        rx_meta_dict = {m.metadata_id: m.value\n                        for m in rx_packet_in_msg.metadata}\n        exp_meta_dict = {m.metadata_id: m.value\n                         for m in exp_packet_in_msg.metadata}\n        shared_meta = {mid: rx_meta_dict[mid] for mid in rx_meta_dict\n                       if mid in exp_meta_dict\n                       and rx_meta_dict[mid] == exp_meta_dict[mid]}\n\n        # Compare lengths with !=, not \"is not\": identity comparison of ints\n        # is unreliable and not what is meant here.\n        if len(rx_meta_dict) != len(exp_meta_dict) \\\n                or len(shared_meta) != len(exp_meta_dict):\n            self.fail(\"Received PacketIn.metadata is not the expected one\\n\"\n                      + format_pb_msg_match(rx_packet_in_msg,\n                                            exp_packet_in_msg))\n\n    def get_stream_packet(self, type_, timeout=1):\n        start = time.time()\n        try:\n            while True:\n                remaining = timeout - (time.time() - start)\n                if remaining < 0:\n                    break\n                msg = self.stream_in_q.get(timeout=remaining)\n                if not msg.HasField(type_):\n                    continue\n                return msg\n        except Queue.Empty:  # timeout expired\n            pass\n        return None\n\n    def send_packet_out(self, packet):\n        packet_out_req = p4runtime_pb2.StreamMessageRequest()\n        packet_out_req.packet.CopyFrom(packet)\n        self.stream_out_q.put(packet_out_req)\n\n    def swports(self, idx):\n        if idx >= len(self._swports):\n            self.fail(\"Index {} is out-of-bound of port map\".format(idx))\n        return self._swports[idx]\n\n    def _write(self, req):\n        try:\n            return self.stub.Write(req)\n        except grpc.RpcError as e:\n            if e.code() != grpc.StatusCode.UNKNOWN:\n                raise e\n            raise P4RuntimeWriteException(e)\n\n    def write_request(self, req, 
store=True):\n        rep = self._write(req)\n        if store:\n            self.reqs.append(req)\n        return rep\n\n    def insert(self, entity):\n        if isinstance(entity, (list, tuple)):\n            for e in entity:\n                self.insert(e)\n            return\n        req = self.get_new_write_request()\n        update = req.updates.add()\n        update.type = p4runtime_pb2.Update.INSERT\n        if isinstance(entity, p4runtime_pb2.TableEntry):\n            msg_entity = update.entity.table_entry\n        elif isinstance(entity, p4runtime_pb2.ActionProfileGroup):\n            msg_entity = update.entity.action_profile_group\n        elif isinstance(entity, p4runtime_pb2.ActionProfileMember):\n            msg_entity = update.entity.action_profile_member\n        else:\n            # Instances don't have __name__; report the class name instead.\n            self.fail(\"Entity %s not supported\" % type(entity).__name__)\n        msg_entity.CopyFrom(entity)\n        self.write_request(req)\n\n    def get_new_write_request(self):\n        req = p4runtime_pb2.WriteRequest()\n        req.device_id = self.device_id\n        election_id = req.election_id\n        election_id.high = 0\n        election_id.low = self.election_id\n        return req\n\n    def insert_pre_multicast_group(self, group_id, ports):\n        req = self.get_new_write_request()\n        update = req.updates.add()\n        update.type = p4runtime_pb2.Update.INSERT\n        pre_entry = update.entity.packet_replication_engine_entry\n        mg_entry = pre_entry.multicast_group_entry\n        mg_entry.multicast_group_id = group_id\n        for port in ports:\n            replica = mg_entry.replicas.add()\n            replica.egress_port = port\n            replica.instance = 0\n        return req, self.write_request(req)\n\n    def insert_pre_clone_session(self, session_id, ports, cos=0,\n                                 packet_length_bytes=0):\n        req = self.get_new_write_request()\n        update = req.updates.add()\n        update.type = 
p4runtime_pb2.Update.INSERT\n        pre_entry = update.entity.packet_replication_engine_entry\n        clone_entry = pre_entry.clone_session_entry\n        clone_entry.session_id = session_id\n        clone_entry.class_of_service = cos\n        clone_entry.packet_length_bytes = packet_length_bytes\n        for port in ports:\n            replica = clone_entry.replicas.add()\n            replica.egress_port = port\n            replica.instance = 1\n        return req, self.write_request(req)\n\n    # iterates over all requests in reverse order; if they are INSERT updates,\n    # replay them as DELETE updates; this is a convenient way to clean-up a lot\n    # of switch state\n    def undo_write_requests(self, reqs):\n        updates = []\n        for req in reversed(reqs):\n            for update in reversed(req.updates):\n                if update.type == p4runtime_pb2.Update.INSERT:\n                    updates.append(update)\n        new_req = self.get_new_write_request()\n        for update in updates:\n            update.type = p4runtime_pb2.Update.DELETE\n            new_req.updates.add().CopyFrom(update)\n        self._write(new_req)\n\n\n# this decorator can be used on the runTest method of P4Runtime PTF tests\n# when it is used, the undo_write_requests will be called at the end of the test\n# (irrespective of whether the test was a failure, a success, or an exception\n# was raised). When this is used, all write requests must be performed through\n# one of the send_request_* convenience functions, or by calling write_request;\n# do not use stub.Write directly!\n# most of the time, it is a great idea to use this decorator, as it makes the\n# tests less verbose. In some circumstances, it is difficult to use it, in\n# particular when the test itself issues DELETE request to remove some\n# objects. 
In this case you will want to do the cleanup yourself (in the\n# tearDown function for example); you can still use undo_write_requests, which\n# should make things easier.\n# because the PTF test writer needs to choose whether or not to use autocleanup,\n# it seems more appropriate to define a decorator for this rather than do it\n# unconditionally in the P4RuntimeTest tearDown method.\ndef autocleanup(f):\n    @wraps(f)\n    def handle(*args, **kwargs):\n        test = args[0]\n        assert (isinstance(test, P4RuntimeTest))\n        try:\n            return f(*args, **kwargs)\n        finally:\n            test.undo_write_requests(test.reqs)\n\n    return handle\n\n\ndef skip_on_hw(cls):\n    cls._skip_on_hw = True\n    return cls\n"
  },
  {
    "path": "ptf/lib/chassis_config.pb.txt",
    "content": "description: \"Config for PTF tests using virtual interfaces\"\nchassis {\n  platform: PLT_P4_SOFT_SWITCH\n  name: \"bmv2-simple_switch\"\n}\nnodes {\n  id: 1\n  slot: 1\n  index: 1\n}\nsingleton_ports {\n  id: 1\n  name: \"veth0\"\n  slot: 1\n  port: 1\n  channel: 1\n  speed_bps: 100000000000\n  config_params {\n    admin_state: ADMIN_STATE_ENABLED\n  }\n  node: 1\n}\nsingleton_ports {\n  id: 2\n  name: \"veth2\"\n  slot: 1\n  port: 2\n  channel: 1\n  speed_bps: 100000000000\n  config_params {\n    admin_state: ADMIN_STATE_ENABLED\n  }\n  node: 1\n}\nsingleton_ports {\n  id: 3\n  name: \"veth4\"\n  slot: 1\n  port: 3\n  channel: 1\n  speed_bps: 100000000000\n  config_params {\n    admin_state: ADMIN_STATE_ENABLED\n  }\n  node: 1\n}\nsingleton_ports {\n  id: 4\n  name: \"veth6\"\n  slot: 1\n  port: 4\n  channel: 1\n  speed_bps: 100000000000\n  config_params {\n    admin_state: ADMIN_STATE_ENABLED\n  }\n  node: 1\n}\nsingleton_ports {\n  id: 5\n  name: \"veth8\"\n  slot: 1\n  port: 5\n  channel: 1\n  speed_bps: 100000000000\n  config_params {\n    admin_state: ADMIN_STATE_ENABLED\n  }\n  node: 1\n}\nsingleton_ports {\n  id: 6\n  name: \"veth10\"\n  slot: 1\n  port: 6\n  channel: 1\n  speed_bps: 100000000000\n  config_params {\n    admin_state: ADMIN_STATE_ENABLED\n  }\n  node: 1\n}\nsingleton_ports {\n  id: 7\n  name: \"veth12\"\n  slot: 1\n  port: 7\n  channel: 1\n  speed_bps: 100000000000\n  config_params {\n    admin_state: ADMIN_STATE_ENABLED\n  }\n  node: 1\n}\nsingleton_ports {\n  id: 8\n  name: \"veth14\"\n  slot: 1\n  port: 8\n  channel: 1\n  speed_bps: 100000000000\n  config_params {\n    admin_state: ADMIN_STATE_ENABLED\n  }\n  node: 1\n}"
  },
  {
    "path": "ptf/lib/convert.py",
    "content": "# Copyright 2017-present Open Networking Foundation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#    http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport math\nimport re\nimport socket\n\nimport ipaddress\n\n\"\"\"\nThis package contains several helper functions for encoding to and decoding from\nbyte strings:\n- integers\n- IPv4 address strings\n- IPv6 address strings\n- Ethernet address strings\n\"\"\"\n\nmac_pattern = re.compile(r'^([\\da-fA-F]{2}:){5}([\\da-fA-F]{2})$')\n\n\ndef matchesMac(mac_addr_string):\n    return mac_pattern.match(mac_addr_string) is not None\n\n\ndef encodeMac(mac_addr_string):\n    return mac_addr_string.replace(':', '').decode('hex')\n\n\ndef decodeMac(encoded_mac_addr):\n    return ':'.join(s.encode('hex') for s in encoded_mac_addr)\n\n\nip_pattern = re.compile(r'^(\\d{1,3}\\.){3}(\\d{1,3})$')\n\n\ndef matchesIPv4(ip_addr_string):\n    return ip_pattern.match(ip_addr_string) is not None\n\n\ndef encodeIPv4(ip_addr_string):\n    return socket.inet_aton(ip_addr_string)\n\n\ndef decodeIPv4(encoded_ip_addr):\n    return socket.inet_ntoa(encoded_ip_addr)\n\n\ndef matchesIPv6(ip_addr_string):\n    try:\n        addr = ipaddress.ip_address(unicode(ip_addr_string, \"utf-8\"))\n        return isinstance(addr, ipaddress.IPv6Address)\n    except ValueError:\n        return False\n\n\ndef encodeIPv6(ip_addr_string):\n    return socket.inet_pton(socket.AF_INET6, ip_addr_string)\n\n\ndef bitwidthToBytes(bitwidth):\n    return int(math.ceil(bitwidth / 8.0))\n\n\ndef 
encodeNum(number, bitwidth):\n    byte_len = bitwidthToBytes(bitwidth)\n    num_str = '%x' % number\n    if number >= 2 ** bitwidth:\n        raise Exception(\n            \"Number, %d, does not fit in %d bits\" % (number, bitwidth))\n    return ('0' * (byte_len * 2 - len(num_str)) + num_str).decode('hex')\n\n\ndef decodeNum(encoded_number):\n    return int(encoded_number.encode('hex'), 16)\n\n\ndef encode(x, bitwidth):\n    'Tries to infer the type of `x` and encode it'\n    byte_len = bitwidthToBytes(bitwidth)\n    if (type(x) == list or type(x) == tuple) and len(x) == 1:\n        x = x[0]\n    encoded_bytes = None\n    if type(x) == str:\n        if matchesMac(x):\n            encoded_bytes = encodeMac(x)\n        elif matchesIPv4(x):\n            encoded_bytes = encodeIPv4(x)\n        elif matchesIPv6(x):\n            encoded_bytes = encodeIPv6(x)\n        else:\n            # Assume that the string is already encoded\n            encoded_bytes = x\n    elif type(x) == int:\n        encoded_bytes = encodeNum(x, bitwidth)\n    else:\n        raise Exception(\"Encoding objects of %r is not supported\" % type(x))\n    assert (len(encoded_bytes) == byte_len)\n    return encoded_bytes\n\n\ndef test():\n    # TODO These tests should be moved out of main eventually\n    mac = \"aa:bb:cc:dd:ee:ff\"\n    enc_mac = encodeMac(mac)\n    assert (enc_mac == '\\xaa\\xbb\\xcc\\xdd\\xee\\xff')\n    dec_mac = decodeMac(enc_mac)\n    assert (mac == dec_mac)\n\n    ip = \"10.0.0.1\"\n    enc_ip = encodeIPv4(ip)\n    assert (enc_ip == '\\x0a\\x00\\x00\\x01')\n    dec_ip = decodeIPv4(enc_ip)\n    assert (ip == dec_ip)\n\n    num = 1337\n    byte_len = 5\n    enc_num = encodeNum(num, byte_len * 8)\n    assert (enc_num == '\\x00\\x00\\x00\\x05\\x39')\n    dec_num = decodeNum(enc_num)\n    assert (num == dec_num)\n\n    assert (matchesIPv4('10.0.0.1'))\n    assert (not matchesIPv4('10.0.0.1.5'))\n    assert (not matchesIPv4('1000.0.0.1'))\n    assert (not matchesIPv4('10001'))\n\n    
assert (matchesIPv6('::1'))\n    assert (encode('1:2:3:4:5:6:7:8', 128) == '\\x00\\x01\\x00\\x02\\x00\\x03\\x00\\x04\\x00\\x05\\x00\\x06\\x00\\x07\\x00\\x08')\n    assert (matchesIPv6('2001:0000:85a3::8a2e:370:1111'))\n    assert (not matchesIPv6('10.0.0.1'))\n\n    assert (encode(mac, 6 * 8) == enc_mac)\n    assert (encode(ip, 4 * 8) == enc_ip)\n    assert (encode(num, 5 * 8) == enc_num)\n    assert (encode((num,), 5 * 8) == enc_num)\n    assert (encode([num], 5 * 8) == enc_num)\n\n    num = 256\n    try:\n        encodeNum(num, 8)\n        raise Exception(\"expected exception\")\n    except Exception as e:\n        print e\n\n\nif __name__ == '__main__':\n    test()\n"
  },
  {
    "path": "ptf/lib/helper.py",
    "content": "# Copyright 2017-present Open Networking Foundation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#    http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport re\n\nimport google.protobuf.text_format\nfrom p4.config.v1 import p4info_pb2\nfrom p4.v1 import p4runtime_pb2\n\nfrom convert import encode\n\n\ndef get_match_field_value(match_field):\n    match_type = match_field.WhichOneof(\"field_match_type\")\n    if match_type == 'valid':\n        return match_field.valid.value\n    elif match_type == 'exact':\n        return match_field.exact.value\n    elif match_type == 'lpm':\n        return match_field.lpm.value, match_field.lpm.prefix_len\n    elif match_type == 'ternary':\n        return match_field.ternary.value, match_field.ternary.mask\n    elif match_type == 'range':\n        return match_field.range.low, match_field.range.high\n    else:\n        raise Exception(\"Unsupported match type with type %r\" % match_type)\n\n\nclass P4InfoHelper(object):\n    def __init__(self, p4_info_filepath):\n        p4info = p4info_pb2.P4Info()\n        # Load the p4info file into a skeleton P4Info object\n        with open(p4_info_filepath) as p4info_f:\n            google.protobuf.text_format.Merge(p4info_f.read(), p4info)\n        self.p4info = p4info\n\n        self.next_mbr_id = 1\n        self.next_grp_id = 1\n\n    def get_next_mbr_id(self):\n        mbr_id = self.next_mbr_id\n        self.next_mbr_id = self.next_mbr_id + 1\n        return mbr_id\n\n    def get_next_grp_id(self):\n        grp_id 
= self.next_grp_id\n        self.next_grp_id = self.next_grp_id + 1\n        return grp_id\n\n    def get(self, entity_type, name=None, id=None):\n        if name is not None and id is not None:\n            raise AssertionError(\"name or id must be None\")\n\n        for o in getattr(self.p4info, entity_type):\n            pre = o.preamble\n            if name:\n                if pre.name == name:\n                    return o\n            else:\n                if pre.id == id:\n                    return o\n\n        if name:\n            raise AttributeError(\"Could not find %r of type %s\"\n                                 % (name, entity_type))\n        else:\n            raise AttributeError(\"Could not find id %r of type %s\"\n                                 % (id, entity_type))\n\n    def get_id(self, entity_type, name):\n        return self.get(entity_type, name=name).preamble.id\n\n    def get_name(self, entity_type, id):\n        return self.get(entity_type, id=id).preamble.name\n\n    def __getattr__(self, attr):\n        # Synthesize convenience functions for name to id lookups for top-level\n        # entities e.g. get_tables_id(name_string) or\n        # get_actions_id(name_string)\n        m = re.search(r\"^get_(\\w+)_id$\", attr)\n        if m:\n            primitive = m.group(1)\n            return lambda name: self.get_id(primitive, name)\n\n        # Synthesize convenience functions for id to name lookups\n        # e.g. 
get_tables_name(id) or get_actions_name(id)\n        m = re.search(r\"^get_(\\w+)_name$\", attr)\n        if m:\n            primitive = m.group(1)\n            return lambda x: self.get_name(primitive, x)\n\n        raise AttributeError(\n            \"%r object has no attribute %r (check your P4Info)\"\n            % (self.__class__, attr))\n\n    def get_match_field(self, table_name, name=None, id=None):\n        t = None\n        for t in self.p4info.tables:\n            if t.preamble.name == table_name:\n                break\n        if not t:\n            raise AttributeError(\"No such table %r in P4Info\" % table_name)\n        for mf in t.match_fields:\n            if name is not None:\n                if mf.name == name:\n                    return mf\n            elif id is not None:\n                if mf.id == id:\n                    return mf\n        raise AttributeError(\"%r has no match field %r (check your P4Info)\"\n                             % (table_name, name if name is not None else id))\n\n    def get_packet_metadata(self, meta_type, name=None, id=None):\n        for t in self.p4info.controller_packet_metadata:\n            pre = t.preamble\n            if pre.name == meta_type:\n                for m in t.metadata:\n                    if name is not None:\n                        if m.name == name:\n                            return m\n                    elif id is not None:\n                        if m.id == id:\n                            return m\n        raise AttributeError(\n            \"ControllerPacketMetadata %r has no metadata %r (check your P4Info)\"\n            % (meta_type, name if name is not None else id))\n\n    def get_match_field_id(self, table_name, match_field_name):\n        return self.get_match_field(table_name, name=match_field_name).id\n\n    def get_match_field_name(self, table_name, match_field_id):\n        return self.get_match_field(table_name, id=match_field_id).name\n\n    def 
get_match_field_pb(self, table_name, match_field_name, value):\n        p4info_match = self.get_match_field(table_name, match_field_name)\n        bitwidth = p4info_match.bitwidth\n        p4runtime_match = p4runtime_pb2.FieldMatch()\n        p4runtime_match.field_id = p4info_match.id\n        match_type = p4info_match.match_type\n        if match_type == p4info_pb2.MatchField.EXACT:\n            exact = p4runtime_match.exact\n            exact.value = encode(value, bitwidth)\n        elif match_type == p4info_pb2.MatchField.LPM:\n            lpm = p4runtime_match.lpm\n            lpm.value = encode(value[0], bitwidth)\n            lpm.prefix_len = value[1]\n        elif match_type == p4info_pb2.MatchField.TERNARY:\n            ternary = p4runtime_match.ternary\n            ternary.value = encode(value[0], bitwidth)\n            ternary.mask = encode(value[1], bitwidth)\n        elif match_type == p4info_pb2.MatchField.RANGE:\n            range_ = p4runtime_match.range\n            range_.low = encode(value[0], bitwidth)\n            range_.high = encode(value[1], bitwidth)\n        else:\n            raise Exception(\"Unsupported match type %r\" % match_type)\n        return p4runtime_match\n\n    def get_action_param(self, action_name, name=None, id=None):\n        for a in self.p4info.actions:\n            pre = a.preamble\n            if pre.name == action_name:\n                for p in a.params:\n                    if name is not None:\n                        if p.name == name:\n                            return p\n                    elif id is not None:\n                        if p.id == id:\n                            return p\n        raise AttributeError(\n            \"Action %r has no param %r (check your P4Info)\"\n            % (action_name, name if name is not None else id))\n\n    def get_action_param_id(self, action_name, param_name):\n        return self.get_action_param(action_name, name=param_name).id\n\n    def get_action_param_name(self, 
action_name, param_id):\n        return self.get_action_param(action_name, id=param_id).name\n\n    def get_action_param_pb(self, action_name, param_name, value):\n        p4info_param = self.get_action_param(action_name, param_name)\n        p4runtime_param = p4runtime_pb2.Action.Param()\n        p4runtime_param.param_id = p4info_param.id\n        p4runtime_param.value = encode(value, p4info_param.bitwidth)\n        return p4runtime_param\n\n    def build_table_entry(self,\n                          table_name,\n                          match_fields=None,\n                          default_action=False,\n                          action_name=None,\n                          action_params=None,\n                          group_id=None,\n                          priority=None):\n        table_entry = p4runtime_pb2.TableEntry()\n        table_entry.table_id = self.get_tables_id(table_name)\n\n        if priority is not None:\n            table_entry.priority = priority\n\n        if match_fields:\n            table_entry.match.extend([\n                self.get_match_field_pb(table_name, match_field_name, value)\n                for match_field_name, value in match_fields.iteritems()\n            ])\n\n        if default_action:\n            table_entry.is_default_action = True\n\n        if action_name:\n            action = table_entry.action.action\n            action.CopyFrom(self.build_action(action_name, action_params))\n\n        if group_id:\n            table_entry.action.action_profile_group_id = group_id\n\n        return table_entry\n\n    def build_action(self, action_name, action_params=None):\n        action = p4runtime_pb2.Action()\n        action.action_id = self.get_actions_id(action_name)\n        if action_params:\n            action.params.extend([\n                self.get_action_param_pb(action_name, field_name, value)\n                for field_name, value in action_params.iteritems()\n            ])\n        return action\n\n    def 
build_act_prof_member(self, act_prof_name,\n                              action_name, action_params=None,\n                              member_id=None):\n        member = p4runtime_pb2.ActionProfileMember()\n        member.action_profile_id = self.get_action_profiles_id(act_prof_name)\n        member.member_id = member_id if member_id else self.get_next_mbr_id()\n        member.action.CopyFrom(self.build_action(action_name, action_params))\n        return member\n\n    def build_act_prof_group(self, act_prof_name, group_id, actions=()):\n        messages = []\n        group = p4runtime_pb2.ActionProfileGroup()\n        group.action_profile_id = self.get_action_profiles_id(act_prof_name)\n        group.group_id = group_id\n        for action in actions:\n            action_name = action[0]\n            if len(action) > 1:\n                action_params = action[1]\n            else:\n                action_params = None\n            member = self.build_act_prof_member(\n                act_prof_name, action_name, action_params)\n            messages.extend([member])\n            group_member = p4runtime_pb2.ActionProfileGroup.Member()\n            group_member.member_id = member.member_id\n            group_member.weight = 1\n            group.members.extend([group_member])\n        messages.append(group)\n        return messages\n\n    def build_packet_out(self, payload, metadata=None):\n        packet_out = p4runtime_pb2.PacketOut()\n        packet_out.payload = payload\n        if not metadata:\n            return packet_out\n        for name, value in metadata.items():\n            p4info_meta = self.get_packet_metadata(\"packet_out\", name)\n            meta = packet_out.metadata.add()\n            meta.metadata_id = p4info_meta.id\n            meta.value = encode(value, p4info_meta.bitwidth)\n        return packet_out\n\n    def build_packet_in(self, payload, metadata=None):\n        packet_in = p4runtime_pb2.PacketIn()\n        packet_in.payload = payload\n 
       if not metadata:\n            return packet_in\n        for name, value in metadata.items():\n            p4info_meta = self.get_packet_metadata(\"packet_in\", name)\n            meta = packet_in.metadata.add()\n            meta.metadata_id = p4info_meta.id\n            meta.value = encode(value, p4info_meta.bitwidth)\n        return packet_in\n"
  },
  {
    "path": "ptf/lib/port_map.json",
    "content": "[\n    {\n        \"ptf_port\": 0,\n        \"p4_port\": 1,\n        \"iface_name\": \"veth1\"\n    },\n    {\n        \"ptf_port\": 1,\n        \"p4_port\": 2,\n        \"iface_name\": \"veth3\"\n    },\n    {\n        \"ptf_port\": 2,\n        \"p4_port\": 3,\n        \"iface_name\": \"veth5\"\n    },\n    {\n        \"ptf_port\": 3,\n        \"p4_port\": 4,\n        \"iface_name\": \"veth7\"\n    },\n    {\n        \"ptf_port\": 4,\n        \"p4_port\": 5,\n        \"iface_name\": \"veth9\"\n    },\n    {\n        \"ptf_port\": 5,\n        \"p4_port\": 6,\n        \"iface_name\": \"veth11\"\n    },\n    {\n        \"ptf_port\": 6,\n        \"p4_port\": 7,\n        \"iface_name\": \"veth13\"\n    },\n    {\n        \"ptf_port\": 7,\n        \"p4_port\": 8,\n        \"iface_name\": \"veth15\"\n    }\n]\n"
  },
  {
    "path": "ptf/lib/runner.py",
    "content": "#!/usr/bin/env python2\n\n# Copyright 2013-present Barefoot Networks, Inc.\n# Copyright 2018-present Open Networking Foundation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport Queue\nimport argparse\nimport json\nimport logging\nimport os\nimport re\nimport subprocess\nimport sys\nimport threading\nimport time\nfrom collections import OrderedDict\n\nimport google.protobuf.text_format\nimport grpc\nfrom p4.v1 import p4runtime_pb2, p4runtime_pb2_grpc\n\nPTF_ROOT = os.path.dirname(os.path.realpath(__file__))\n\nlogging.basicConfig(level=logging.INFO)\nlogger = logging.getLogger(\"PTF runner\")\n\n\ndef error(msg, *args, **kwargs):\n    logger.error(msg, *args, **kwargs)\n\n\ndef warn(msg, *args, **kwargs):\n    logger.warn(msg, *args, **kwargs)\n\n\ndef info(msg, *args, **kwargs):\n    logger.info(msg, *args, **kwargs)\n\n\ndef debug(msg, *args, **kwargs):\n    logger.debug(msg, *args, **kwargs)\n\n\ndef check_ifaces(ifaces):\n    \"\"\"\n    Checks that required interfaces exist.\n    \"\"\"\n    ifconfig_out = subprocess.check_output(['ifconfig'])\n    iface_list = re.findall(r'^([a-zA-Z0-9]+)', ifconfig_out, re.S | re.M)\n    present_ifaces = set(iface_list)\n    ifaces = set(ifaces)\n    return ifaces <= present_ifaces\n\n\ndef build_bmv2_config(bmv2_json_path):\n    \"\"\"\n    Builds the device config for BMv2\n    \"\"\"\n    with open(bmv2_json_path) as f:\n        return f.read()\n\n\ndef update_config(p4info_path, bmv2_json_path, 
grpc_addr, device_id):\n    \"\"\"\n    Performs a SetForwardingPipelineConfig on the device\n    \"\"\"\n    channel = grpc.insecure_channel(grpc_addr)\n    stub = p4runtime_pb2_grpc.P4RuntimeStub(channel)\n\n    debug(\"Sending P4 config\")\n\n    # Send master arbitration via stream channel\n    # This should go in library, to be re-used also by base_test.py.\n    stream_out_q = Queue.Queue()\n    stream_in_q = Queue.Queue()\n\n    def stream_req_iterator():\n        while True:\n            p = stream_out_q.get()\n            if p is None:\n                break\n            yield p\n\n    def stream_recv(stream):\n        for p in stream:\n            stream_in_q.put(p)\n\n    def get_stream_packet(type_, timeout=1):\n        start = time.time()\n        try:\n            while True:\n                remaining = timeout - (time.time() - start)\n                if remaining < 0:\n                    break\n                msg = stream_in_q.get(timeout=remaining)\n                if not msg.HasField(type_):\n                    continue\n                return msg\n        except:  # timeout expired\n            pass\n        return None\n\n    stream = stub.StreamChannel(stream_req_iterator())\n    stream_recv_thread = threading.Thread(target=stream_recv, args=(stream,))\n    stream_recv_thread.start()\n\n    req = p4runtime_pb2.StreamMessageRequest()\n    arbitration = req.arbitration\n    arbitration.device_id = device_id\n    election_id = arbitration.election_id\n    election_id.high = 0\n    election_id.low = 1\n    stream_out_q.put(req)\n\n    rep = get_stream_packet(\"arbitration\", timeout=5)\n    if rep is None:\n        error(\"Failed to establish handshake\")\n        return False\n\n    try:\n        # Set pipeline config.\n        request = p4runtime_pb2.SetForwardingPipelineConfigRequest()\n        request.device_id = device_id\n        election_id = request.election_id\n        election_id.high = 0\n        election_id.low = 1\n        config = 
request.config\n        with open(p4info_path, 'r') as p4info_f:\n            google.protobuf.text_format.Merge(p4info_f.read(), config.p4info)\n        config.p4_device_config = build_bmv2_config(bmv2_json_path)\n        request.action = p4runtime_pb2.SetForwardingPipelineConfigRequest.VERIFY_AND_COMMIT\n        try:\n            stub.SetForwardingPipelineConfig(request)\n        except Exception as e:\n            error(\"Error during SetForwardingPipelineConfig\")\n            error(str(e))\n            return False\n        return True\n    finally:\n        stream_out_q.put(None)\n        stream_recv_thread.join()\n\n\ndef run_test(p4info_path, grpc_addr, device_id, cpu_port, ptfdir, port_map_path,\n             extra_args=()):\n    \"\"\"\n    Runs PTF tests included in the provided directory.\n    Device must be running and configured with the appropriate P4 program.\n    \"\"\"\n    # TODO: check schema?\n    # \"ptf_port\" is ignored for now, we assume that ports are provided by\n    # increasing values of ptf_port, in the range [0, NUM_IFACES).\n    port_map = OrderedDict()\n    with open(port_map_path, 'r') as port_map_f:\n        port_list = json.load(port_map_f)\n        for entry in port_list:\n            p4_port = entry[\"p4_port\"]\n            iface_name = entry[\"iface_name\"]\n            port_map[p4_port] = iface_name\n\n    if not check_ifaces(port_map.values()):\n        error(\"Some interfaces are missing\")\n        return False\n\n    ifaces = []\n    # FIXME\n    # find base_test.py\n    pypath = os.path.dirname(os.path.abspath(__file__))\n    if 'PYTHONPATH' in os.environ:\n        os.environ['PYTHONPATH'] += \":\" + pypath\n    else:\n        os.environ['PYTHONPATH'] = pypath\n    for iface_idx, iface_name in port_map.items():\n        ifaces.extend(['-i', '{}@{}'.format(iface_idx, iface_name)])\n    cmd = ['ptf']\n    cmd.extend(['--test-dir', ptfdir])\n    cmd.extend(ifaces)\n    test_params = 'p4info=\\'{}\\''.format(p4info_path)\n    
test_params += ';grpcaddr=\\'{}\\''.format(grpc_addr)\n    test_params += ';device_id=\\'{}\\''.format(device_id)\n    test_params += ';cpu_port=\\'{}\\''.format(cpu_port)\n    cmd.append('--test-params={}'.format(test_params))\n    cmd.extend(extra_args)\n    debug(\"Executing PTF command: {}\".format(' '.join(cmd)))\n\n    try:\n        # we want the ptf output to be sent to stdout\n        p = subprocess.Popen(cmd)\n        p.wait()\n    except:\n        error(\"Error when running PTF tests\")\n        return False\n    return p.returncode == 0\n\n\ndef check_ptf():\n    try:\n        with open(os.devnull, 'w') as devnull:\n            subprocess.check_call(['ptf', '--version'],\n                                  stdout=devnull, stderr=devnull)\n        return True\n    except subprocess.CalledProcessError:\n        return True\n    except OSError:  # PTF not found\n        return False\n\n\n# noinspection PyTypeChecker\ndef main():\n    parser = argparse.ArgumentParser(\n        description=\"Compile the provided P4 program and run PTF tests on it\")\n    parser.add_argument('--p4info',\n                        help='Location of p4info proto in text format',\n                        type=str, action=\"store\", required=True)\n    parser.add_argument('--bmv2-json',\n                        help='Location BMv2 JSON output from p4c (if target is bmv2)',\n                        type=str, action=\"store\", required=False)\n    parser.add_argument('--grpc-addr',\n                        help='Address to use to connect to P4 Runtime server',\n                        type=str, default='localhost:50051')\n    parser.add_argument('--device-id',\n                        help='Device id for device under test',\n                        type=int, default=1)\n    parser.add_argument('--cpu-port',\n                        help='CPU port ID of device under test',\n                        type=int, required=True)\n    parser.add_argument('--ptf-dir',\n                        
help='Directory containing PTF tests',\n                        type=str, required=True)\n    parser.add_argument('--port-map',\n                        help='Path to JSON port mapping',\n                        type=str, required=True)\n    args, unknown_args = parser.parse_known_args()\n\n    if not check_ptf():\n        error(\"Cannot find PTF executable\")\n        sys.exit(1)\n\n    if not os.path.exists(args.p4info):\n        error(\"P4Info file {} not found\".format(args.p4info))\n        sys.exit(1)\n    if not os.path.exists(args.bmv2_json):\n        error(\"BMv2 JSON file {} not found\".format(args.bmv2_json))\n        sys.exit(1)\n    if not os.path.exists(args.port_map):\n        error(\"Port map path '{}' does not exist\".format(args.port_map))\n        sys.exit(1)\n\n    try:\n        success = update_config(p4info_path=args.p4info,\n                                bmv2_json_path=args.bmv2_json,\n                                grpc_addr=args.grpc_addr,\n                                device_id=args.device_id)\n        if not success:\n            sys.exit(2)\n\n        success = run_test(p4info_path=args.p4info,\n                           device_id=args.device_id,\n                           grpc_addr=args.grpc_addr,\n                           cpu_port=args.cpu_port,\n                           ptfdir=args.ptf_dir,\n                           port_map_path=args.port_map,\n                           extra_args=unknown_args)\n\n        if not success:\n            sys.exit(3)\n\n    except Exception:\n        raise\n\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "ptf/lib/start_bmv2.sh",
"content": "#!/usr/bin/env bash\n\nset -xe\n\nDIR=\"$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )\" >/dev/null 2>&1 && pwd )\"\n\nCPU_PORT=255\nGRPC_PORT=28000\n\n# Create veths\nfor idx in 0 1 2 3 4 5 6 7; do\n    intf0=\"veth$(($idx*2))\"\n    intf1=\"veth$(($idx*2+1))\"\n    if ! ip link show $intf0 &> /dev/null; then\n        ip link add name $intf0 type veth peer name $intf1\n        ip link set dev $intf0 up\n        ip link set dev $intf1 up\n\n        # Set the MTU of these interfaces to be larger than the default of\n        # 1500 bytes, so that P4 behavioral-model testing can be done\n        # on jumbo frames.\n        ip link set $intf0 mtu 9500\n        ip link set $intf1 mtu 9500\n\n        # Disable IPv6 on the interfaces, so that the Linux kernel\n        # will not automatically send IPv6 MDNS, Router Solicitation,\n        # and Multicast Listener Report packets on the interface,\n        # which can make P4 program debugging more confusing.\n        #\n        # Testing indicates that we can still send IPv6 packets across\n        # such interfaces, both from scapy to simple_switch, and from\n        # simple_switch out to scapy sniffing.\n        #\n        # https://superuser.com/questions/356286/how-can-i-switch-off-ipv6-nd-ra-transmissions-in-linux\n        sysctl net.ipv6.conf.${intf0}.disable_ipv6=1\n        sysctl net.ipv6.conf.${intf1}.disable_ipv6=1\n    fi\ndone\n\n# shellcheck disable=SC2086\nstratum_bmv2 \\\n    --external_stratum_urls=0.0.0.0:${GRPC_PORT} \\\n    --persistent_config_dir=/tmp \\\n    --forwarding_pipeline_configs_file=/dev/null \\\n    --chassis_config_file=\"${DIR}\"/chassis_config.pb.txt \\\n    --write_req_log_file=p4rt_write.log \\\n    --initial_pipeline=/root/dummy.json \\\n    --bmv2_log_level=trace \\\n    --cpu_port ${CPU_PORT} \\\n    > stratum_bmv2.log 2>&1\n"
  },
  {
    "path": "ptf/run_tests",
    "content": "#!/usr/bin/env bash\n\nset -e\n\nPTF_DIR=\"$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )\" >/dev/null 2>&1 && pwd )\"\nP4SRC_DIR=${PTF_DIR}/../p4src\nP4C_OUT=${P4SRC_DIR}/build\nPTF_DOCKER_IMG=${PTF_DOCKER_IMG:-undefined}\n\nrunName=ptf-${RANDOM}\n\nfunction stop() {\n        echo \"Stopping container ${runName}...\"\n        docker stop -t0 \"${runName}\" > /dev/null\n}\ntrap stop INT\n\n# Start container. Entrypoint starts stratum_bmv2. We put that in the background\n# and execute the PTF scripts separately.\necho \"*** Starting stratum_bmv2 in Docker (${runName})...\"\ndocker run --name \"${runName}\" -d --privileged --rm \\\n    -v \"${PTF_DIR}\":/ptf -w /ptf \\\n    -v \"${P4C_OUT}\":/p4c-out \\\n    \"${PTF_DOCKER_IMG}\" \\\n    ./lib/start_bmv2.sh > /dev/null\n\nsleep 2\n\nset +e\n\nprintf \"*** Starting tests...\\n\"\ndocker exec \"${runName}\" ./lib/runner.py \\\n    --bmv2-json /p4c-out/bmv2.json \\\n    --p4info /p4c-out/p4info.txt \\\n    --grpc-addr localhost:28000 \\\n    --device-id 1 \\\n    --ptf-dir ./tests \\\n    --cpu-port 255 \\\n    --port-map /ptf/lib/port_map.json \"${@}\"\n\nstop\n"
  },
  {
    "path": "ptf/tests/bridging.py",
    "content": "# Copyright 2013-present Barefoot Networks, Inc.\n# Copyright 2018-present Open Networking Foundation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n# ------------------------------------------------------------------------------\n# BRIDGING TESTS\n#\n# To run all tests in this file:\n#     make p4-test TEST=bridging\n#\n# To run a specific test case:\n#     make p4-test TEST=bridging.<TEST CLASS NAME>\n#\n# For example:\n#     make p4-test TEST=bridging.BridgingTest\n# ------------------------------------------------------------------------------\n\n# ------------------------------------------------------------------------------\n# Modify everywhere you see TODO\n#\n# When providing your solution, make sure to use the same names for P4Runtime\n# entities as specified in your P4Info file.\n#\n# Test cases are based on the P4 program design suggested in the exercises\n# README. 
Make sure to modify the test cases accordingly if you decide to\n# implement the pipeline differently.\n# ------------------------------------------------------------------------------\n\nfrom ptf.testutils import group\n\nfrom base_test import *\n\n# From the P4 program.\nCPU_CLONE_SESSION_ID = 99\n\n\n@group(\"bridging\")\nclass ArpNdpRequestWithCloneTest(P4RuntimeTest):\n    \"\"\"Tests ability to broadcast ARP requests and NDP Neighbor Solicitation\n    (NS) messages as well as cloning to CPU (controller) for host\n    discovery.\n    \"\"\"\n\n    def runTest(self):\n        # Test with both ARP and NDP NS packets...\n        print_inline(\"ARP request ... \")\n        arp_pkt = testutils.simple_arp_packet()\n        self.testPacket(arp_pkt)\n\n        print_inline(\"NDP NS ... \")\n        ndp_pkt = genNdpNsPkt(src_mac=HOST1_MAC, src_ip=HOST1_IPV6,\n                              target_ip=HOST2_IPV6)\n        self.testPacket(ndp_pkt)\n\n    @autocleanup\n    def testPacket(self, pkt):\n        mcast_group_id = 10\n        mcast_ports = [self.port1, self.port2, self.port3]\n\n        # Add multicast group.\n        self.insert_pre_multicast_group(\n            group_id=mcast_group_id,\n            ports=mcast_ports)\n\n        # Match eth dst: FF:FF:FF:FF:FF:FF (MAC broadcast for ARP requests)\n        # Modify names to match content of P4Info file (look for the fully\n        # qualified name of tables, match fields, and actions).\n        # ---- START SOLUTION ----\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.l2_ternary_table\",\n            match_fields={\n                # Ternary match.\n                \"hdr.ethernet.dst_addr\": (\n                    \"FF:FF:FF:FF:FF:FF\",\n                    \"FF:FF:FF:FF:FF:FF\")\n            },\n            action_name=\"IngressPipeImpl.set_multicast_group\",\n            action_params={\n                \"gid\": mcast_group_id\n            },\n            
priority=DEFAULT_PRIORITY\n        ))\n        # ---- END SOLUTION ----\n\n        # Match eth dst: 33:33:**:**:**:** (IPv6 multicast for NDP requests)\n        # Modify names to match content of P4Info file (look for the fully\n        # qualified name of tables, match fields, and actions).\n        # ---- START SOLUTION ----\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.l2_ternary_table\",\n            match_fields={\n                # Ternary match (value, mask)\n                \"hdr.ethernet.dst_addr\": (\n                    \"33:33:00:00:00:00\",\n                    \"FF:FF:00:00:00:00\")\n            },\n            action_name=\"IngressPipeImpl.set_multicast_group\",\n            action_params={\n                \"gid\": mcast_group_id\n            },\n            priority=DEFAULT_PRIORITY\n        ))\n        # ---- END SOLUTION ----\n\n        # Insert CPU clone session.\n        self.insert_pre_clone_session(\n            session_id=CPU_CLONE_SESSION_ID,\n            ports=[self.cpu_port])\n\n        # ACL entry to clone ARPs\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.acl_table\",\n            match_fields={\n                # Ternary match.\n                \"hdr.ethernet.ether_type\": (ARP_ETH_TYPE, 0xffff)\n            },\n            action_name=\"IngressPipeImpl.clone_to_cpu\",\n            priority=DEFAULT_PRIORITY\n        ))\n\n        # ACL entry to clone NDP Neighbor Solicitation\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.acl_table\",\n            match_fields={\n                # Ternary match.\n                \"hdr.ethernet.ether_type\": (IPV6_ETH_TYPE, 0xffff),\n                \"local_metadata.ip_proto\": (ICMPV6_IP_PROTO, 0xff),\n                \"local_metadata.icmp_type\": (NS_ICMPV6_TYPE, 0xff)\n            },\n            action_name=\"IngressPipeImpl.clone_to_cpu\",\n            
priority=DEFAULT_PRIORITY\n        ))\n\n        for inport in mcast_ports:\n\n            # Send packet...\n            testutils.send_packet(self, inport, str(pkt))\n\n            # Pkt should be received on CPU via PacketIn...\n            # Expected P4Runtime PacketIn message.\n            exp_packet_in_msg = self.helper.build_packet_in(\n                payload=str(pkt),\n                metadata={\n                    \"ingress_port\": inport,\n                    \"_pad\": 0\n                })\n            self.verify_packet_in(exp_packet_in_msg)\n\n            # ...and on all ports except the ingress one.\n            verify_ports = set(mcast_ports)\n            verify_ports.discard(inport)\n            for port in verify_ports:\n                testutils.verify_packet(self, pkt, port)\n\n        testutils.verify_no_other_packets(self)\n\n\n@group(\"bridging\")\nclass ArpNdpReplyWithCloneTest(P4RuntimeTest):\n    \"\"\"Tests ability to clone ARP replies and NDP Neighbor Advertisement\n    (NA) messages as well as unicast forwarding to requesting host.\n    \"\"\"\n\n    def runTest(self):\n        # Test with both ARP reply and NDP NA packets...\n        print_inline(\"ARP reply ... \")\n        # op=1 request, op=2 reply\n        arp_pkt = testutils.simple_arp_packet(\n            eth_src=HOST1_MAC, eth_dst=HOST2_MAC, arp_op=2)\n        self.testPacket(arp_pkt)\n\n        print_inline(\"NDP NA ... 
\")\n        ndp_pkt = genNdpNaPkt(target_ip=HOST1_IPV6, target_mac=HOST1_MAC)\n        self.testPacket(ndp_pkt)\n\n    @autocleanup\n    def testPacket(self, pkt):\n\n        # L2 unicast entry, match on pkt's eth dst address.\n        # Modify names to match content of P4Info file (look for the fully\n        # qualified name of tables, match fields, and actions.\n        # ---- START SOLUTION ----\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.l2_exact_table\",\n            match_fields={\n                # Exact match.\n                \"hdr.ethernet.dst_addr\": pkt[Ether].dst\n            },\n            action_name=\"IngressPipeImpl.set_egress_port\",\n            action_params={\n                \"port_num\": self.port2\n            }\n        ))\n        # ---- END SOLUTION ----\n\n        # CPU clone session.\n        self.insert_pre_clone_session(\n            session_id=CPU_CLONE_SESSION_ID,\n            ports=[self.cpu_port])\n\n        # ACL entry to clone ARPs\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.acl_table\",\n            match_fields={\n                # Ternary match.\n                \"hdr.ethernet.ether_type\": (ARP_ETH_TYPE, 0xffff)\n            },\n            action_name=\"IngressPipeImpl.clone_to_cpu\",\n            priority=DEFAULT_PRIORITY\n        ))\n\n        # ACL entry to clone NDP Neighbor Solicitation\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.acl_table\",\n            match_fields={\n                # Ternary match.\n                \"hdr.ethernet.ether_type\": (IPV6_ETH_TYPE, 0xffff),\n                \"local_metadata.ip_proto\": (ICMPV6_IP_PROTO, 0xff),\n                \"local_metadata.icmp_type\": (NA_ICMPV6_TYPE, 0xff)\n            },\n            action_name=\"IngressPipeImpl.clone_to_cpu\",\n            priority=DEFAULT_PRIORITY\n        ))\n\n        
testutils.send_packet(self, self.port1, str(pkt))\n\n        # Pkt should be received on CPU via PacketIn...\n        # Expected P4Runtime PacketIn message.\n        exp_packet_in_msg = self.helper.build_packet_in(\n            payload=str(pkt),\n            metadata={\n                \"ingress_port\": self.port1,\n                \"_pad\": 0\n            })\n        self.verify_packet_in(exp_packet_in_msg)\n\n        # ...and on port2 as indicated by the L2 unicast rule.\n        testutils.verify_packet(self, pkt, self.port2)\n\n\n@group(\"bridging\")\nclass BridgingTest(P4RuntimeTest):\n    \"\"\"Tests basic L2 unicast forwarding\"\"\"\n\n    def runTest(self):\n        # Test with different types of packets.\n        for pkt_type in [\"tcp\", \"udp\", \"icmp\", \"tcpv6\", \"udpv6\", \"icmpv6\"]:\n            print_inline(\"%s ... \" % pkt_type)\n            pkt = getattr(testutils, \"simple_%s_packet\" % pkt_type)(pktlen=120)\n            self.testPacket(pkt)\n\n    @autocleanup\n    def testPacket(self, pkt):\n\n        # Insert L2 unicast entry, match on pkt's eth dst address.\n        # Modify names to match content of P4Info file (look for the fully\n        # qualified name of tables, match fields, and actions).\n        # ---- START SOLUTION ----\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.l2_exact_table\",\n            match_fields={\n                # Exact match.\n                \"hdr.ethernet.dst_addr\": pkt[Ether].dst\n            },\n            action_name=\"IngressPipeImpl.set_egress_port\",\n            action_params={\n                \"port_num\": self.port2\n            }\n        ))\n        # ---- END SOLUTION ----\n\n        # Test bidirectional forwarding by swapping MAC addresses on the pkt\n        pkt2 = pkt_mac_swap(pkt.copy())\n\n        # Insert L2 unicast entry for pkt2.\n        # Modify names to match content of P4Info file (look for the fully\n        # qualified name of tables, match fields, and actions).\n        # ---- START SOLUTION ----\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.l2_exact_table\",\n            match_fields={\n                # Exact match.\n                \"hdr.ethernet.dst_addr\": pkt2[Ether].dst\n            },\n            action_name=\"IngressPipeImpl.set_egress_port\",\n            action_params={\n                \"port_num\": self.port1\n            }\n        ))\n        # ---- END SOLUTION ----\n\n        # Send and verify.\n        testutils.send_packet(self, self.port1, str(pkt))\n        testutils.send_packet(self, self.port2, str(pkt2))\n\n        testutils.verify_each_packet_on_each_port(\n            self, [pkt, pkt2], [self.port2, self.port1])\n"
  },
  {
    "path": "ptf/tests/packetio.py",
    "content": "# Copyright 2019-present Open Networking Foundation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n# ------------------------------------------------------------------------------\n# CONTROLLER PACKET-IN/OUT TESTS\n#\n# To run all tests in this file:\n#     make p4-test TEST=packetio\n#\n# To run a specific test case:\n#     make p4-test TEST=packetio.<TEST CLASS NAME>\n#\n# For example:\n#     make p4-test TEST=packetio.PacketOutTest\n# ------------------------------------------------------------------------------\n\n# ------------------------------------------------------------------------------\n# Modify everywhere you see TODO\n#\n# When providing your solution, make sure to use the same names for P4Runtime\n# entities as specified in your P4Info file.\n#\n# Test cases are based on the P4 program design suggested in the exercises\n# README. 
Make sure to modify the test cases accordingly if you decide to\n# implement the pipeline differently.\n# ------------------------------------------------------------------------------\n\nfrom ptf.testutils import group\n\nfrom base_test import *\n\nCPU_CLONE_SESSION_ID = 99\n\n\n@group(\"packetio\")\nclass PacketOutTest(P4RuntimeTest):\n    \"\"\"Tests controller packet-out capability by sending PacketOut messages and\n    expecting a corresponding packet on the output port set in the PacketOut\n    metadata.\n    \"\"\"\n\n    def runTest(self):\n        for pkt_type in [\"tcp\", \"udp\", \"icmp\", \"arp\", \"tcpv6\", \"udpv6\",\n                         \"icmpv6\"]:\n            print_inline(\"%s ... \" % pkt_type)\n            pkt = getattr(testutils, \"simple_%s_packet\" % pkt_type)()\n            self.testPacket(pkt)\n\n    def testPacket(self, pkt):\n        for outport in [self.port1, self.port2]:\n            # Build PacketOut message.\n            # TODO EXERCISE 4\n            # Modify metadata names to match the content of your P4Info file\n            # ---- START SOLUTION ----\n            packet_out_msg = self.helper.build_packet_out(\n                payload=str(pkt),\n                metadata={\n                    \"MODIFY ME\": outport,\n                    \"_pad\": 0\n                })\n            # ---- END SOLUTION ----\n\n            # Send message and expect packet on the given data plane port.\n            self.send_packet_out(packet_out_msg)\n\n            testutils.verify_packet(self, pkt, outport)\n\n        # Make sure packet was forwarded only on the specified ports\n        testutils.verify_no_other_packets(self)\n\n\n@group(\"packetio\")\nclass PacketInTest(P4RuntimeTest):\n    \"\"\"Tests controller packet-in capability by matching on the packet EtherType\n    and cloning to the CPU port.\n    \"\"\"\n\n    def runTest(self):\n        for pkt_type in [\"tcp\", \"udp\", \"icmp\", \"arp\", \"tcpv6\", \"udpv6\",\n                         \"icmpv6\"]:\n            print_inline(\"%s ... \" % pkt_type)\n            pkt = getattr(testutils, \"simple_%s_packet\" % pkt_type)()\n            self.testPacket(pkt)\n\n    @autocleanup\n    def testPacket(self, pkt):\n\n        # Insert clone to CPU session\n        self.insert_pre_clone_session(\n            session_id=CPU_CLONE_SESSION_ID,\n            ports=[self.cpu_port])\n\n        # Insert ACL entry to match on the given eth_type and clone to CPU.\n        eth_type = pkt[Ether].type\n        # TODO EXERCISE 4\n        # Modify names to match content of P4Info file (look for the fully\n        # qualified name of the ACL table, EtherType match field, and\n        # clone_to_cpu action).\n        # ---- START SOLUTION ----\n        self.insert(self.helper.build_table_entry(\n            table_name=\"MODIFY ME\",\n            match_fields={\n                # Ternary match.\n                \"MODIFY ME\": (eth_type, 0xffff)\n            },\n            action_name=\"MODIFY ME\",\n            priority=DEFAULT_PRIORITY\n        ))\n        # ---- END SOLUTION ----\n\n        for inport in [self.port1, self.port2, self.port3]:\n            # TODO EXERCISE 4\n            # Modify metadata names to match the content of your P4Info file\n            # ---- START SOLUTION ----\n            # Expected P4Runtime PacketIn message.\n            exp_packet_in_msg = self.helper.build_packet_in(\n                payload=str(pkt),\n                metadata={\n                    \"MODIFY ME\": inport,\n                    \"_pad\": 0\n                })\n            # ---- END SOLUTION ----\n\n            # Send packet to given switch ingress port and expect P4Runtime\n            # PacketIn message.\n            testutils.send_packet(self, inport, str(pkt))\n            self.verify_packet_in(exp_packet_in_msg)\n"
  },
  {
    "path": "ptf/tests/routing.py",
    "content": "# Copyright 2019-present Open Networking Foundation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n# ------------------------------------------------------------------------------\n# IPV6 ROUTING TESTS\n#\n# To run all tests:\n#     make p4-test TEST=routing\n#\n# To run a specific test case:\n#     make p4-test TEST=routing.<TEST CLASS NAME>\n#\n# For example:\n#     make p4-test TEST=routing.IPv6RoutingTest\n# ------------------------------------------------------------------------------\n\n# ------------------------------------------------------------------------------\n# Modify everywhere you see TODO\n#\n# When providing your solution, make sure to use the same names for P4Runtime\n# entities as specified in your P4Info file.\n#\n# Test cases are based on the P4 program design suggested in the exercises\n# README. Make sure to modify the test cases accordingly if you decide to\n# implement the pipeline differently.\n# ------------------------------------------------------------------------------\n\nfrom ptf.testutils import group\n\nfrom base_test import *\n\n\n@group(\"routing\")\nclass IPv6RoutingTest(P4RuntimeTest):\n    \"\"\"Tests basic IPv6 routing\"\"\"\n\n    def runTest(self):\n        # Test with different type of packets.\n        for pkt_type in [\"tcpv6\", \"udpv6\", \"icmpv6\"]:\n            print_inline(\"%s ... 
\" % pkt_type)\n            pkt = getattr(testutils, \"simple_%s_packet\" % pkt_type)()\n            self.testPacket(pkt)\n\n    @autocleanup\n    def testPacket(self, pkt):\n        next_hop_mac = SWITCH2_MAC\n\n        # Add entry to \"My Station\" table. Consider the given pkt's eth dst addr\n        # as myStationMac address.\n        # *** TODO EXERCISE 5\n        # Modify names to match content of P4Info file (look for the fully\n        # qualified name of tables, match fields, and actions.\n        # ---- START SOLUTION ----\n        self.insert(self.helper.build_table_entry(\n            table_name=\"MODIFY ME\",\n            match_fields={\n                # Exact match.\n                \"MODIFY ME\": pkt[Ether].dst\n            },\n            action_name=\"NoAction\"\n        ))\n        # ---- END SOLUTION ----\n\n        # Insert ECMP group with only one member (next_hop_mac)\n        # *** TODO EXERCISE 5\n        # Modify names to match content of P4Info file (look for the fully\n        # qualified name of tables, match fields, and actions.\n        # ---- START SOLUTION ----\n        self.insert(self.helper.build_act_prof_group(\n            act_prof_name=\"MODIFY ME\",\n            group_id=1,\n            actions=[\n                # List of tuples (action name, action param dict)\n                (\"MODIFY ME\", {\"MODIFY ME\": next_hop_mac}),\n            ]\n        ))\n        # ---- END SOLUTION ----\n\n        # Insert L3 routing entry to map pkt's IPv6 dst addr to group\n        # *** TODO EXERCISE 5\n        # Modify names to match content of P4Info file (look for the fully\n        # qualified name of tables, match fields, and actions.\n        # ---- START SOLUTION ----\n        self.insert(self.helper.build_table_entry(\n            table_name=\"MODIFY ME\",\n            match_fields={\n                # LPM match (value, prefix)\n                \"MODIFY ME\": (pkt[IPv6].dst, 128)\n            },\n            group_id=1\n        ))\n 
       # ---- END SOLUTION ----\n\n        # Insert L3 entry to map next_hop_mac to output port 2.\n        # *** TODO EXERCISE 5\n        # Modify names to match content of P4Info file (look for the fully\n        # qualified name of tables, match fields, and actions).\n        # ---- START SOLUTION ----\n        self.insert(self.helper.build_table_entry(\n            table_name=\"MODIFY ME\",\n            match_fields={\n                # Exact match\n                \"MODIFY ME\": next_hop_mac\n            },\n            action_name=\"MODIFY ME\",\n            action_params={\n                \"MODIFY ME\": self.port2\n            }\n        ))\n        # ---- END SOLUTION ----\n\n        # Expected pkt should have routed MAC addresses and decremented hop\n        # limit (TTL).\n        exp_pkt = pkt.copy()\n        pkt_route(exp_pkt, next_hop_mac)\n        pkt_decrement_ttl(exp_pkt)\n\n        testutils.send_packet(self, self.port1, str(pkt))\n        testutils.verify_packet(self, exp_pkt, self.port2)\n\n\n@group(\"routing\")\nclass NdpReplyGenTest(P4RuntimeTest):\n    \"\"\"Tests automatic generation of NDP Neighbor Advertisement for IPv6\n    addresses associated with the switch interface.\n    \"\"\"\n\n    @autocleanup\n    def runTest(self):\n        switch_ip = SWITCH1_IPV6\n        target_mac = SWITCH1_MAC\n\n        # Insert entry to transform NDP NS packets for the given target address\n        # (match), to NDP NA packets with the given target MAC address (action).\n        # *** TODO EXERCISE 5\n        # Modify names to match content of P4Info file (look for the fully\n        # qualified name of tables, match fields, and actions).\n        # ---- START SOLUTION ----\n        self.insert(self.helper.build_table_entry(\n            table_name=\"MODIFY ME\",\n            match_fields={\n                # Exact match.\n                \"MODIFY ME\": switch_ip\n            },\n            action_name=\"MODIFY ME\",\n            action_params={\n                \"MODIFY ME\": target_mac\n            }\n        ))\n        # ---- END SOLUTION ----\n\n        # NDP Neighbor Solicitation packet\n        pkt = genNdpNsPkt(target_ip=switch_ip)\n\n        # NDP Neighbor Advertisement packet\n        exp_pkt = genNdpNaPkt(target_ip=switch_ip,\n                              target_mac=target_mac,\n                              src_mac=target_mac,\n                              src_ip=switch_ip,\n                              dst_ip=pkt[IPv6].src)\n\n        # Send NDP NS, expect NDP NA from the same port.\n        testutils.send_packet(self, self.port1, str(pkt))\n        testutils.verify_packet(self, exp_pkt, self.port1)"
  },
  {
    "path": "ptf/tests/srv6.py",
    "content": "# Copyright 2019-present Open Networking Foundation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n# ------------------------------------------------------------------------------\n# SRV6 TESTS\n#\n# To run all tests:\n#     make p4-test TEST=srv6\n#\n# To run a specific test case:\n#     make p4-test TEST=srv6.<TEST CLASS NAME>\n#\n# For example:\n#     make p4-test TEST=srv6.Srv6InsertTest\n# ------------------------------------------------------------------------------\n\n# ------------------------------------------------------------------------------\n# Modify everywhere you see TODO\n#\n# When providing your solution, make sure to use the same names for P4Runtime\n# entities as specified in your P4Info file.\n#\n# Test cases are based on the P4 program design suggested in the exercises\n# README. 
Make sure to modify the test cases accordingly if you decide to\n# implement the pipeline differently.\n# ------------------------------------------------------------------------------\n\nfrom ptf.testutils import group\n\nfrom base_test import *\n\n\ndef insert_srv6_header(pkt, sid_list):\n    \"\"\"Applies SRv6 insert transformation to the given packet.\n    \"\"\"\n    # Set IPv6 dst to first SID...\n    pkt[IPv6].dst = sid_list[0]\n    # Insert SRv6 header between IPv6 header and payload\n    sid_len = len(sid_list)\n    srv6_hdr = IPv6ExtHdrSegmentRouting(\n        nh=pkt[IPv6].nh,\n        addresses=sid_list[::-1],\n        len=sid_len * 2,\n        segleft=sid_len - 1,\n        lastentry=sid_len - 1)\n    pkt[IPv6].nh = 43  # next IPv6 header is SR header\n    pkt[IPv6].payload = srv6_hdr / pkt[IPv6].payload\n    return pkt\n\n\ndef pop_srv6_header(pkt):\n    \"\"\"Removes SRv6 header from the given packet.\n    \"\"\"\n    pkt[IPv6].nh = pkt[IPv6ExtHdrSegmentRouting].nh\n    pkt[IPv6].payload = pkt[IPv6ExtHdrSegmentRouting].payload\n\n\ndef set_cksum(pkt, cksum):\n    if TCP in pkt:\n        pkt[TCP].chksum = cksum\n    if UDP in pkt:\n        pkt[UDP].chksum = cksum\n    if ICMPv6Unknown in pkt:\n        pkt[ICMPv6Unknown].cksum = cksum\n\n\n@group(\"srv6\")\nclass Srv6InsertTest(P4RuntimeTest):\n    \"\"\"Tests SRv6 insert behavior, where the switch receives an IPv6 packet and\n    inserts the SRv6 header\n    \"\"\"\n\n    def runTest(self):\n        sid_lists = (\n            [SWITCH2_IPV6, SWITCH3_IPV6, HOST2_IPV6],\n            [SWITCH2_IPV6, HOST2_IPV6],\n        )\n        next_hop_mac = SWITCH2_MAC\n\n        for sid_list in sid_lists:\n            for pkt_type in [\"tcpv6\", \"udpv6\", \"icmpv6\"]:\n                print_inline(\"%s %d SIDs ... 
\" % (pkt_type, len(sid_list)))\n\n                pkt = getattr(testutils, \"simple_%s_packet\" % pkt_type)()\n\n                self.testPacket(pkt, sid_list, next_hop_mac)\n\n    @autocleanup\n    def testPacket(self, pkt, sid_list, next_hop_mac):\n\n        # *** TODO EXERCISE 6\n        # Modify names to match content of P4Info file (look for the fully\n        # qualified name of tables, match fields, and actions).\n        # ---- START SOLUTION ----\n\n        # Add entry to \"My Station\" table. Consider the given pkt's eth dst addr\n        # as myStationMac address.\n        self.insert(self.helper.build_table_entry(\n            table_name=\"MODIFY ME\",\n            match_fields={\n                # Exact match.\n                \"MODIFY ME\": pkt[Ether].dst\n            },\n            action_name=\"NoAction\"\n        ))\n\n        # Insert SRv6 header when matching the pkt's IPv6 dst addr.\n        # Action name and params are generated based on the number of SIDs given.\n        # For example, with 2 SIDs:\n        # action_name = IngressPipeImpl.srv6_t_insert_2\n        # action_params = {\n        #     \"s1\": sid[0],\n        #     \"s2\": sid[1]\n        # }\n        sid_len = len(sid_list)\n\n        action_name = \"IngressPipeImpl.srv6_t_insert_%d\" % sid_len\n        actions_params = {\"s%d\" % (x + 1): sid_list[x] for x in range(sid_len)}\n\n        self.insert(self.helper.build_table_entry(\n            table_name=\"MODIFY ME\",\n            match_fields={\n                # LPM match (value, prefix)\n                \"MODIFY ME\": (pkt[IPv6].dst, 128)\n            },\n            action_name=action_name,\n            action_params=actions_params\n        ))\n\n        # Insert ECMP group with only one member (next_hop_mac)\n        self.insert(self.helper.build_act_prof_group(\n            act_prof_name=\"MODIFY ME\",\n            group_id=1,\n            actions=[\n                # List of tuples (action name, {action param: value})\n     
           (\"MODIFY ME\", {\"MODIFY ME\": next_hop_mac}),\n            ]\n        ))\n\n        # Now that we inserted the SRv6 header, we expect the pkt's IPv6 dst\n        # addr to be the first on the SID list.\n        # Match on L3 routing table.\n        first_sid = sid_list[0]\n        self.insert(self.helper.build_table_entry(\n            table_name=\"MODIFY ME\",\n            match_fields={\n                # LPM match (value, prefix)\n                \"hdr.ipv6.dst_addr\": (first_sid, 128)\n            },\n            group_id=1\n        ))\n\n        # Map next_hop_mac to output port\n        self.insert(self.helper.build_table_entry(\n            table_name=\"MODIFY ME\",\n            match_fields={\n                # Exact match\n                \"MODIFY ME\": next_hop_mac\n            },\n            action_name=\"MODIFY ME\",\n            action_params={\n                \"MODIFY ME\": self.port2\n            }\n        ))\n\n        # ---- END SOLUTION ----\n\n        # Build expected packet from the given one...\n        exp_pkt = insert_srv6_header(pkt.copy(), sid_list)\n\n        # Route and decrement TTL\n        pkt_route(exp_pkt, next_hop_mac)\n        pkt_decrement_ttl(exp_pkt)\n\n        # Bonus: update P4 program to calculate correct checksum\n        set_cksum(pkt, 1)\n        set_cksum(exp_pkt, 1)\n\n        testutils.send_packet(self, self.port1, str(pkt))\n        testutils.verify_packet(self, exp_pkt, self.port2)\n\n\n@group(\"srv6\")\nclass Srv6TransitTest(P4RuntimeTest):\n    \"\"\"Tests SRv6 transit behavior, where the switch ignores the SRv6 header\n    and routes the packet normally, without applying any SRv6-related\n    modifications.\n    \"\"\"\n\n    def runTest(self):\n        my_sid = SWITCH1_IPV6\n        sid_lists = (\n            [SWITCH2_IPV6, SWITCH3_IPV6, HOST2_IPV6],\n            [SWITCH2_IPV6, HOST2_IPV6],\n        )\n        next_hop_mac = SWITCH2_MAC\n\n        for sid_list in sid_lists:\n            for 
pkt_type in [\"tcpv6\", \"udpv6\", \"icmpv6\"]:\n                print_inline(\"%s %d SIDs ... \" % (pkt_type, len(sid_list)))\n\n                pkt = getattr(testutils, \"simple_%s_packet\" % pkt_type)()\n                pkt = insert_srv6_header(pkt, sid_list)\n\n                self.testPacket(pkt, next_hop_mac, my_sid)\n\n    @autocleanup\n    def testPacket(self, pkt, next_hop_mac, my_sid):\n\n        # *** TODO EXERCISE 6\n        # Modify names to match content of P4Info file (look for the fully\n        # qualified name of tables, match fields, and actions.\n        # ---- START SOLUTION ----\n\n        # Add entry to \"My Station\" table. Consider the given pkt's eth dst addr\n        # as myStationMac address.\n        self.insert(self.helper.build_table_entry(\n            table_name=\"MODIFY ME\",\n            match_fields={\n                # Exact match.\n                \"MODIFY ME\": pkt[Ether].dst\n            },\n            action_name=\"NoAction\"\n        ))\n\n        # This should be missed, this is plain IPv6 routing.\n        self.insert(self.helper.build_table_entry(\n            table_name=\"MODIFY ME\",\n            match_fields={\n                # Longest prefix match (value, prefix length)\n                \"MODIFY ME\": (my_sid, 128)\n            },\n            action_name=\"MODIFY ME\"\n        ))\n\n        # Insert ECMP group with only one member (next_hop_mac)\n        self.insert(self.helper.build_act_prof_group(\n            act_prof_name=\"MODIFY ME\",\n            group_id=1,\n            actions=[\n                # List of tuples (action name, {action param: value})\n                (\"MODIFY ME\", {\"MODIFY ME\": next_hop_mac}),\n            ]\n        ))\n\n        # Map pkt's IPv6 dst addr to group\n        self.insert(self.helper.build_table_entry(\n            table_name=\"MODIFY ME\",\n            match_fields={\n                # LPM match (value, prefix)\n                \"hdr.ipv6.dst_addr\": (pkt[IPv6].dst, 
128)\n            },\n            group_id=1\n        ))\n\n        # Map next_hop_mac to output port\n        self.insert(self.helper.build_table_entry(\n            table_name=\"MODIFY ME\",\n            match_fields={\n                # Exact match\n                \"MODIFY ME\": next_hop_mac\n            },\n            action_name=\"MODIFY ME\",\n            action_params={\n                \"MODIFY ME\": self.port2\n            }\n        ))\n\n        # ---- END SOLUTION ----\n\n        # Build expected packet from the given one...\n        exp_pkt = pkt.copy()\n\n        # Route and decrement TTL\n        pkt_route(exp_pkt, next_hop_mac)\n        pkt_decrement_ttl(exp_pkt)\n\n        # Bonus: update P4 program to calculate correct checksum\n        set_cksum(pkt, 1)\n        set_cksum(exp_pkt, 1)\n\n        testutils.send_packet(self, self.port1, str(pkt))\n        testutils.verify_packet(self, exp_pkt, self.port2)\n\n\n@group(\"srv6\")\nclass Srv6EndTest(P4RuntimeTest):\n    \"\"\"Tests SRv6 end behavior (without pop), where the switch forwards the\n    packet to the next SID found in the SRv6 header.\n    \"\"\"\n\n    def runTest(self):\n        my_sid = SWITCH2_IPV6\n        sid_lists = (\n            [SWITCH2_IPV6, SWITCH3_IPV6, HOST2_IPV6],\n            [SWITCH2_IPV6, SWITCH3_IPV6, SWITCH4_IPV6, HOST2_IPV6],\n        )\n        next_hop_mac = SWITCH3_MAC\n\n        for sid_list in sid_lists:\n            for pkt_type in [\"tcpv6\", \"udpv6\", \"icmpv6\"]:\n                print_inline(\"%s %d SIDs ... 
\" % (pkt_type, len(sid_list)))\n                pkt = getattr(testutils, \"simple_%s_packet\" % pkt_type)()\n\n                pkt = insert_srv6_header(pkt, sid_list)\n                self.testPacket(pkt, sid_list, next_hop_mac, my_sid)\n\n    @autocleanup\n    def testPacket(self, pkt, sid_list, next_hop_mac, my_sid):\n\n        # *** TODO EXERCISE 6\n        # Modify names to match content of P4Info file (look for the fully\n        # qualified name of tables, match fields, and actions).\n        # ---- START SOLUTION ----\n\n        # Add entry to \"My Station\" table. Consider the given pkt's eth dst addr\n        # as myStationMac address.\n        self.insert(self.helper.build_table_entry(\n            table_name=\"MODIFY ME\",\n            match_fields={\n                # Exact match.\n                \"MODIFY ME\": pkt[Ether].dst\n            },\n            action_name=\"NoAction\"\n        ))\n\n        # This should be matched, we want SRv6 end behavior to be applied.\n        self.insert(self.helper.build_table_entry(\n            table_name=\"MODIFY ME\",\n            match_fields={\n                # Longest prefix match (value, prefix length)\n                \"MODIFY ME\": (my_sid, 128)\n            },\n            action_name=\"MODIFY ME\"\n        ))\n\n        # Insert ECMP group with only one member (next_hop_mac)\n        self.insert(self.helper.build_act_prof_group(\n            act_prof_name=\"MODIFY ME\",\n            group_id=1,\n            actions=[\n                # List of tuples (action name, {action param: value})\n                (\"MODIFY ME\", {\"MODIFY ME\": next_hop_mac}),\n            ]\n        ))\n\n        # After applying the srv6_end action, we expect the IPv6 dst to be the\n        # next SID in the list, and we should route based on that.\n        next_sid = sid_list[1]\n        self.insert(self.helper.build_table_entry(\n            table_name=\"MODIFY ME\",\n            match_fields={\n                # LPM match (value, prefix)\n                \"hdr.ipv6.dst_addr\": (next_sid, 128)\n            },\n            group_id=1\n        ))\n\n        # Map next_hop_mac to output port\n        self.insert(self.helper.build_table_entry(\n            table_name=\"MODIFY ME\",\n            match_fields={\n                # Exact match\n                \"MODIFY ME\": next_hop_mac\n            },\n            action_name=\"MODIFY ME\",\n            action_params={\n                \"MODIFY ME\": self.port2\n            }\n        ))\n\n        # ---- END SOLUTION ----\n\n        # Build expected packet from the given one...\n        exp_pkt = pkt.copy()\n\n        # Set IPv6 dst to next SID and decrement segleft...\n        exp_pkt[IPv6].dst = next_sid\n        exp_pkt[IPv6ExtHdrSegmentRouting].segleft -= 1\n\n        # Route and decrement TTL...\n        pkt_route(exp_pkt, next_hop_mac)\n        pkt_decrement_ttl(exp_pkt)\n\n        # Bonus: update P4 program to calculate correct checksum\n        set_cksum(pkt, 1)\n        set_cksum(exp_pkt, 1)\n\n        testutils.send_packet(self, self.port1, str(pkt))\n        testutils.verify_packet(self, exp_pkt, self.port2)\n\n\n@group(\"srv6\")\nclass Srv6EndPspTest(P4RuntimeTest):\n    \"\"\"Tests SRv6 End with Penultimate Segment Pop (PSP) behavior, where the\n    switch SID is the penultimate in the SID list and the switch removes the\n    SRv6 header before routing the packet to its final destination (last SID in\n    the list).\n    \"\"\"\n\n    def runTest(self):\n        my_sid = SWITCH3_IPV6\n        sid_lists = (\n            [SWITCH3_IPV6, HOST2_IPV6],\n        )\n        next_hop_mac = HOST2_MAC\n\n        for sid_list in sid_lists:\n            for pkt_type in [\"tcpv6\", \"udpv6\", \"icmpv6\"]:\n                print_inline(\"%s %d SIDs ... 
\" % (pkt_type, len(sid_list)))\n                pkt = getattr(testutils, \"simple_%s_packet\" % pkt_type)()\n                pkt = insert_srv6_header(pkt, sid_list)\n                self.testPacket(pkt, sid_list, next_hop_mac, my_sid)\n\n    @autocleanup\n    def testPacket(self, pkt, sid_list, next_hop_mac, my_sid):\n\n        # *** TODO EXERCISE 6\n        # Modify names to match content of P4Info file (look for the fully\n        # qualified name of tables, match fields, and actions.\n        # ---- START SOLUTION ----\n\n        # Add entry to \"My Station\" table. Consider the given pkt's eth dst addr\n        # as myStationMac address.\n        self.insert(self.helper.build_table_entry(\n            table_name=\"MODIFY ME\",\n            match_fields={\n                # Exact match.\n                \"MODIFY ME\": pkt[Ether].dst\n            },\n            action_name=\"NoAction\"\n        ))\n\n        # This should be matched, we want SRv6 end behavior to be applied.\n        self.insert(self.helper.build_table_entry(\n            table_name=\"MODIFY ME\",\n            match_fields={\n                # Longest prefix match (value, prefix length)\n                \"MODIFY ME\": (my_sid, 128)\n            },\n            action_name=\"MODIFY ME\"\n        ))\n\n        # Insert ECMP group with only one member (next_hop_mac)\n        self.insert(self.helper.build_act_prof_group(\n            act_prof_name=\"MODIFY ME\",\n            group_id=1,\n            actions=[\n                # List of tuples (action name, {action param: value})\n                (\"MODIFY ME\", {\"MODIFY ME\": next_hop_mac}),\n            ]\n        ))\n\n        # Map pkt's IPv6 dst addr to group\n        next_sid = sid_list[1]\n        self.insert(self.helper.build_table_entry(\n            table_name=\"MODIFY ME\",\n            match_fields={\n                # LPM match (value, prefix)\n                \"hdr.ipv6.dst_addr\": (next_sid, 128)\n            },\n            
group_id=1\n        ))\n\n        # Map next_hop_mac to output port\n        self.insert(self.helper.build_table_entry(\n            table_name=\"MODIFY ME\",\n            match_fields={\n                # Exact match\n                \"MODIFY ME\": next_hop_mac\n            },\n            action_name=\"MODIFY ME\",\n            action_params={\n                \"MODIFY ME\": self.port2\n            }\n        ))\n\n        # ---- END SOLUTION ----\n\n        # Build expected packet from the given one...\n        exp_pkt = pkt.copy()\n\n        # Expect IPv6 dst to be the next SID...\n        exp_pkt[IPv6].dst = next_sid\n        # Remove SRv6 header since we are performing PSP.\n        pop_srv6_header(exp_pkt)\n\n        # Route and decrement TTL\n        pkt_route(exp_pkt, next_hop_mac)\n        pkt_decrement_ttl(exp_pkt)\n\n        # Bonus: update P4 program to calculate correct checksum\n        set_cksum(pkt, 1)\n        set_cksum(exp_pkt, 1)\n\n        testutils.send_packet(self, self.port1, str(pkt))\n        testutils.verify_packet(self, exp_pkt, self.port2)\n"
  },
  {
    "path": "solution/app/src/main/java/org/onosproject/ngsdn/tutorial/Ipv6RoutingComponent.java",
    "content": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\npackage org.onosproject.ngsdn.tutorial;\n\nimport com.google.common.collect.Lists;\nimport org.onlab.packet.Ip6Address;\nimport org.onlab.packet.Ip6Prefix;\nimport org.onlab.packet.IpAddress;\nimport org.onlab.packet.IpPrefix;\nimport org.onlab.packet.MacAddress;\nimport org.onlab.util.ItemNotFoundException;\nimport org.onosproject.core.ApplicationId;\nimport org.onosproject.mastership.MastershipService;\nimport org.onosproject.net.Device;\nimport org.onosproject.net.DeviceId;\nimport org.onosproject.net.Host;\nimport org.onosproject.net.Link;\nimport org.onosproject.net.PortNumber;\nimport org.onosproject.net.config.NetworkConfigService;\nimport org.onosproject.net.device.DeviceEvent;\nimport org.onosproject.net.device.DeviceListener;\nimport org.onosproject.net.device.DeviceService;\nimport org.onosproject.net.flow.FlowRule;\nimport org.onosproject.net.flow.FlowRuleService;\nimport org.onosproject.net.flow.criteria.PiCriterion;\nimport org.onosproject.net.group.GroupDescription;\nimport org.onosproject.net.group.GroupService;\nimport org.onosproject.net.host.HostEvent;\nimport org.onosproject.net.host.HostListener;\nimport org.onosproject.net.host.HostService;\nimport org.onosproject.net.host.InterfaceIpAddress;\nimport org.onosproject.net.intf.Interface;\nimport org.onosproject.net.intf.InterfaceService;\nimport 
org.onosproject.net.link.LinkEvent;\nimport org.onosproject.net.link.LinkListener;\nimport org.onosproject.net.link.LinkService;\nimport org.onosproject.net.pi.model.PiActionId;\nimport org.onosproject.net.pi.model.PiActionParamId;\nimport org.onosproject.net.pi.model.PiMatchFieldId;\nimport org.onosproject.net.pi.runtime.PiAction;\nimport org.onosproject.net.pi.runtime.PiActionParam;\nimport org.onosproject.net.pi.runtime.PiActionProfileGroupId;\nimport org.onosproject.net.pi.runtime.PiTableAction;\nimport org.osgi.service.component.annotations.Activate;\nimport org.osgi.service.component.annotations.Component;\nimport org.osgi.service.component.annotations.Deactivate;\nimport org.osgi.service.component.annotations.Reference;\nimport org.osgi.service.component.annotations.ReferenceCardinality;\nimport org.onosproject.ngsdn.tutorial.common.FabricDeviceConfig;\nimport org.onosproject.ngsdn.tutorial.common.Utils;\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport java.util.Collection;\nimport java.util.Collections;\nimport java.util.List;\nimport java.util.Optional;\nimport java.util.Set;\nimport java.util.stream.Collectors;\n\nimport static com.google.common.collect.Streams.stream;\nimport static org.onosproject.ngsdn.tutorial.AppConstants.INITIAL_SETUP_DELAY;\n\n/**\n * App component that configures devices to provide IPv6 routing capabilities\n * across the whole fabric.\n */\n@Component(\n        immediate = true,\n        // *** TODO EXERCISE 5\n        // set to true when ready\n        enabled = true\n)\npublic class Ipv6RoutingComponent {\n\n    private static final Logger log = LoggerFactory.getLogger(Ipv6RoutingComponent.class);\n\n    private static final int DEFAULT_ECMP_GROUP_ID = 0xec3b0000;\n    private static final long GROUP_INSERT_DELAY_MILLIS = 200;\n\n    private final HostListener hostListener = new InternalHostListener();\n    private final LinkListener linkListener = new InternalLinkListener();\n    private final 
DeviceListener deviceListener = new InternalDeviceListener();\n\n    private ApplicationId appId;\n\n    //--------------------------------------------------------------------------\n    // ONOS CORE SERVICE BINDING\n    //\n    // These variables are set by the Karaf runtime environment before calling\n    // the activate() method.\n    //--------------------------------------------------------------------------\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private FlowRuleService flowRuleService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private HostService hostService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private MastershipService mastershipService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private GroupService groupService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private DeviceService deviceService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private NetworkConfigService networkConfigService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private InterfaceService interfaceService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private LinkService linkService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private MainComponent mainComponent;\n\n    //--------------------------------------------------------------------------\n    // COMPONENT ACTIVATION.\n    //\n    // When loading/unloading the app the Karaf runtime environment will call\n    // activate()/deactivate().\n    //--------------------------------------------------------------------------\n\n    @Activate\n    protected void activate() {\n        appId = mainComponent.getAppId();\n\n        hostService.addListener(hostListener);\n        linkService.addListener(linkListener);\n        deviceService.addListener(deviceListener);\n\n        // Schedule set up for all 
devices.\n        mainComponent.scheduleTask(this::setUpAllDevices, INITIAL_SETUP_DELAY);\n\n        log.info(\"Started\");\n    }\n\n    @Deactivate\n    protected void deactivate() {\n        hostService.removeListener(hostListener);\n        linkService.removeListener(linkListener);\n        deviceService.removeListener(deviceListener);\n\n        log.info(\"Stopped\");\n    }\n\n    //--------------------------------------------------------------------------\n    // METHODS TO COMPLETE.\n    //\n    // Complete the implementation wherever you see TODO.\n    //--------------------------------------------------------------------------\n\n    /**\n     * Sets up the \"My Station\" table for the given device using the\n     * myStationMac address found in the config.\n     * <p>\n     * This method will be called at component activation for each device\n     * (switch) known by ONOS, and every time a new device-added event is\n     * captured by the InternalDeviceListener defined below.\n     *\n     * @param deviceId the device ID\n     */\n    private void setUpMyStationTable(DeviceId deviceId) {\n\n        log.info(\"Adding My Station rules to {}...\", deviceId);\n\n        final MacAddress myStationMac = getMyStationMac(deviceId);\n\n        // HINT: in our solution, the My Station table matches on the *ethernet\n        // destination* and there is only one action called *NoAction*, which is\n        // used as an indication of \"table hit\" in the control block.\n\n        // *** TODO EXERCISE 5\n        // Modify P4Runtime entity names to match content of P4Info file (look\n        // for the fully qualified name of tables, match fields, and actions.\n        // ---- START SOLUTION ----\n        final String tableId = \"IngressPipeImpl.my_station_table\";\n\n        final PiCriterion match = PiCriterion.builder()\n                .matchExact(\n                        PiMatchFieldId.of(\"hdr.ethernet.dst_addr\"),\n                        
myStationMac.toBytes())\n                .build();\n\n        // Creates an action that performs *NoAction* on hit.\n        final PiTableAction action = PiAction.builder()\n                .withId(PiActionId.of(\"NoAction\"))\n                .build();\n        // ---- END SOLUTION ----\n\n        final FlowRule myStationRule = Utils.buildFlowRule(\n                deviceId, appId, tableId, match, action);\n\n        flowRuleService.applyFlowRules(myStationRule);\n    }\n\n    /**\n     * Creates an ONOS SELECT group for the routing table to provide ECMP\n     * forwarding for the given collection of next hop MAC addresses. ONOS\n     * SELECT groups are equivalent to P4Runtime action selector groups.\n     * <p>\n     * This method will be called by the routing policy methods below to insert\n     * groups in the L3 table.\n     *\n     * @param groupId     the group ID\n     * @param nextHopMacs the collection of mac addresses of next hops\n     * @param deviceId    the device where the group will be installed\n     * @return a SELECT group\n     */\n    private GroupDescription createNextHopGroup(int groupId,\n                                                Collection<MacAddress> nextHopMacs,\n                                                DeviceId deviceId) {\n\n        String actionProfileId = \"IngressPipeImpl.ecmp_selector\";\n\n        final List<PiAction> actions = Lists.newArrayList();\n\n        // Build one \"set next hop\" action for each next hop\n        // *** TODO EXERCISE 5\n        // Modify P4Runtime entity names to match content of P4Info file (look\n        // for the fully qualified name of tables, match fields, and actions).\n        // ---- START SOLUTION ----\n        final String tableId = \"IngressPipeImpl.routing_v6_table\";\n        for (MacAddress nextHopMac : nextHopMacs) {\n            final PiAction action = PiAction.builder()\n                    .withId(PiActionId.of(\"IngressPipeImpl.set_next_hop\"))\n                    .withParameter(new PiActionParam(\n                
            // Action param name.\n                            PiActionParamId.of(\"dmac\"),\n                            // Action param value.\n                            nextHopMac.toBytes()))\n                    .build();\n\n            actions.add(action);\n        }\n        // ---- END SOLUTION ----\n\n        return Utils.buildSelectGroup(\n                deviceId, tableId, actionProfileId, groupId, actions, appId);\n    }\n\n    /**\n     * Creates a routing flow rule that matches on the given IPv6 prefix and\n     * executes the given group ID (created before).\n     *\n     * @param deviceId  the device where flow rule will be installed\n     * @param ip6Prefix the IPv6 prefix\n     * @param groupId   the group ID\n     * @return a flow rule\n     */\n    private FlowRule createRoutingRule(DeviceId deviceId, Ip6Prefix ip6Prefix,\n                                       int groupId) {\n\n        // *** TODO EXERCISE 5\n        // Modify P4Runtime entity names to match content of P4Info file (look\n        // for the fully qualified name of tables, match fields, and actions.\n        // ---- START SOLUTION ----\n        final String tableId = \"IngressPipeImpl.routing_v6_table\";\n        final PiCriterion match = PiCriterion.builder()\n                .matchLpm(\n                        PiMatchFieldId.of(\"hdr.ipv6.dst_addr\"),\n                        ip6Prefix.address().toOctets(),\n                        ip6Prefix.prefixLength())\n                .build();\n\n        final PiTableAction action = PiActionProfileGroupId.of(groupId);\n        // ---- END SOLUTION ----\n\n        return Utils.buildFlowRule(\n                deviceId, appId, tableId, match, action);\n    }\n\n    /**\n     * Creates a flow rule for the L2 table mapping the given next hop MAC to\n     * the given output port.\n     * <p>\n     * This is called by the routing policy methods below to establish L2-based\n     * forwarding inside the fabric, e.g., when deviceId is a leaf 
switch and\n     * nextHopMac is that of a spine switch.\n     *\n     * @param deviceId   the device\n     * @param nexthopMac the next hop (destination) mac\n     * @param outPort    the output port\n     */\n    private FlowRule createL2NextHopRule(DeviceId deviceId, MacAddress nexthopMac,\n                                         PortNumber outPort) {\n\n        // *** TODO EXERCISE 5\n        // Modify P4Runtime entity names to match content of P4Info file (look\n        // for the fully qualified name of tables, match fields, and actions).\n        // ---- START SOLUTION ----\n        final String tableId = \"IngressPipeImpl.l2_exact_table\";\n        final PiCriterion match = PiCriterion.builder()\n                .matchExact(PiMatchFieldId.of(\"hdr.ethernet.dst_addr\"),\n                        nexthopMac.toBytes())\n                .build();\n\n        final PiAction action = PiAction.builder()\n                .withId(PiActionId.of(\"IngressPipeImpl.set_egress_port\"))\n                .withParameter(new PiActionParam(\n                        PiActionParamId.of(\"port_num\"),\n                        outPort.toLong()))\n                .build();\n        // ---- END SOLUTION ----\n\n        return Utils.buildFlowRule(\n                deviceId, appId, tableId, match, action);\n    }\n\n    //--------------------------------------------------------------------------\n    // EVENT LISTENERS\n    //\n    // Events are processed only if isRelevant() returns true.\n    //--------------------------------------------------------------------------\n\n    /**\n     * Listener of host events which triggers configuration of routing rules on\n     * the device where the host is attached.\n     */\n    class InternalHostListener implements HostListener {\n\n        @Override\n        public boolean isRelevant(HostEvent event) {\n            switch (event.type()) {\n                case HOST_ADDED:\n                    break;\n                case HOST_REMOVED:\n  
              case HOST_UPDATED:\n                case HOST_MOVED:\n                default:\n                    // Ignore other events.\n                    // Food for thought:\n                    // how to support host moved/removed events?\n                    return false;\n            }\n            // Process host event only if this controller instance is the master\n            // for the device where this host is attached.\n            final Host host = event.subject();\n            final DeviceId deviceId = host.location().deviceId();\n            return mastershipService.isLocalMaster(deviceId);\n        }\n\n        @Override\n        public void event(HostEvent event) {\n            Host host = event.subject();\n            DeviceId deviceId = host.location().deviceId();\n            mainComponent.getExecutorService().execute(() -> {\n                log.info(\"{} event! host={}, deviceId={}, port={}\",\n                        event.type(), host.id(), deviceId, host.location().port());\n                setUpHostRules(deviceId, host);\n            });\n        }\n    }\n\n    /**\n     * Listener of link events, which triggers configuration of routing rules to\n     * forward packets across the fabric, i.e. from leaves to spines and vice\n     * versa.\n     * <p>\n     * Reacting to link events instead of device ones allows us to make sure\n     * all devices are always configured with a topology view that includes all\n     * links, e.g. modifying an ECMP group as soon as a new link is added. The\n     * downside is that we might be configuring the same device twice for the\n     * same set of links/paths. However, the ONOS core treats these cases as a\n     * no-op when the device is already configured with the desired forwarding\n     * state (i.e. 
flows and groups)\n     */\n    class InternalLinkListener implements LinkListener {\n\n        @Override\n        public boolean isRelevant(LinkEvent event) {\n            switch (event.type()) {\n                case LINK_ADDED:\n                    break;\n                case LINK_UPDATED:\n                case LINK_REMOVED:\n                default:\n                    return false;\n            }\n            DeviceId srcDev = event.subject().src().deviceId();\n            DeviceId dstDev = event.subject().dst().deviceId();\n            return mastershipService.isLocalMaster(srcDev) ||\n                    mastershipService.isLocalMaster(dstDev);\n        }\n\n        @Override\n        public void event(LinkEvent event) {\n            DeviceId srcDev = event.subject().src().deviceId();\n            DeviceId dstDev = event.subject().dst().deviceId();\n\n            if (mastershipService.isLocalMaster(srcDev)) {\n                mainComponent.getExecutorService().execute(() -> {\n                    log.info(\"{} event! Configuring {}... linkSrc={}, linkDst={}\",\n                            event.type(), srcDev, srcDev, dstDev);\n                    setUpFabricRoutes(srcDev);\n                    setUpL2NextHopRules(srcDev);\n                });\n            }\n            if (mastershipService.isLocalMaster(dstDev)) {\n                mainComponent.getExecutorService().execute(() -> {\n                    log.info(\"{} event! Configuring {}... 
linkSrc={}, linkDst={}\",\n                            event.type(), dstDev, srcDev, dstDev);\n                    setUpFabricRoutes(dstDev);\n                    setUpL2NextHopRules(dstDev);\n                });\n            }\n        }\n    }\n\n    /**\n     * Listener of device events which triggers configuration of the My Station\n     * table.\n     */\n    class InternalDeviceListener implements DeviceListener {\n\n        @Override\n        public boolean isRelevant(DeviceEvent event) {\n            switch (event.type()) {\n                case DEVICE_AVAILABILITY_CHANGED:\n                case DEVICE_ADDED:\n                    break;\n                default:\n                    return false;\n            }\n            // Process device event if this controller instance is the master\n            // for the device and the device is available.\n            DeviceId deviceId = event.subject().id();\n            return mastershipService.isLocalMaster(deviceId) &&\n                    deviceService.isAvailable(event.subject().id());\n        }\n\n        @Override\n        public void event(DeviceEvent event) {\n            mainComponent.getExecutorService().execute(() -> {\n                DeviceId deviceId = event.subject().id();\n                log.info(\"{} event! device id={}\", event.type(), deviceId);\n                setUpMyStationTable(deviceId);\n            });\n        }\n    }\n\n    //--------------------------------------------------------------------------\n    // ROUTING POLICY METHODS\n    //\n    // Called by event listeners, these methods implement the actual routing\n    // policy, responsible for computing paths and creating ECMP groups.\n    //--------------------------------------------------------------------------\n\n    /**\n     * Set up L2 nexthop rules of a device to provide forwarding inside the\n     * fabric, i.e. 
between leaf and spine switches.\n     *\n     * @param deviceId the device ID\n     */\n    private void setUpL2NextHopRules(DeviceId deviceId) {\n\n        Set<Link> egressLinks = linkService.getDeviceEgressLinks(deviceId);\n\n        for (Link link : egressLinks) {\n            // For each other switch directly connected to this one.\n            final DeviceId nextHopDevice = link.dst().deviceId();\n            // Get port of this device connecting to next hop.\n            final PortNumber outPort = link.src().port();\n            // Get next hop MAC address.\n            final MacAddress nextHopMac = getMyStationMac(nextHopDevice);\n\n            final FlowRule nextHopRule = createL2NextHopRule(\n                    deviceId, nextHopMac, outPort);\n\n            flowRuleService.applyFlowRules(nextHopRule);\n        }\n    }\n\n    /**\n     * Sets up the given device with the necessary rules to route packets to the\n     * given host.\n     *\n     * @param deviceId the device ID\n     * @param host     the host\n     */\n    private void setUpHostRules(DeviceId deviceId, Host host) {\n\n        // Get all IPv6 addresses associated to this host. 
In this tutorial we\n        // use hosts with only 1 IPv6 address.\n        final Collection<Ip6Address> hostIpv6Addrs = host.ipAddresses().stream()\n                .filter(IpAddress::isIp6)\n                .map(IpAddress::getIp6Address)\n                .collect(Collectors.toSet());\n\n        if (hostIpv6Addrs.isEmpty()) {\n            // Ignore.\n            log.debug(\"No IPv6 addresses for host {}, ignore\", host.id());\n            return;\n        } else {\n            log.info(\"Adding routes on {} for host {} [{}]\",\n                    deviceId, host.id(), hostIpv6Addrs);\n        }\n\n        // Create an ECMP group with only one member, where the group ID is\n        // derived from the host MAC.\n        final MacAddress hostMac = host.mac();\n        int groupId = macToGroupId(hostMac);\n\n        final GroupDescription group = createNextHopGroup(\n                groupId, Collections.singleton(hostMac), deviceId);\n\n        // Map each host IPV6 address to corresponding /128 prefix and obtain a\n        // flow rule that points to the group ID. 
In this tutorial we expect\n        // only one flow rule per host.\n        final List<FlowRule> flowRules = hostIpv6Addrs.stream()\n                .map(IpAddress::toIpPrefix)\n                .filter(IpPrefix::isIp6)\n                .map(IpPrefix::getIp6Prefix)\n                .map(prefix -> createRoutingRule(deviceId, prefix, groupId))\n                .collect(Collectors.toList());\n\n        // Helper function to install flows after groups, since here the flows\n        // point to the group and P4Runtime enforces this dependency during\n        // write operations.\n        insertInOrder(group, flowRules);\n    }\n\n    /**\n     * Set up routes on a given device to forward packets across the fabric,\n     * making a distinction between spines and leaves.\n     *\n     * @param deviceId the device ID.\n     */\n    private void setUpFabricRoutes(DeviceId deviceId) {\n        if (isSpine(deviceId)) {\n            setUpSpineRoutes(deviceId);\n        } else {\n            setUpLeafRoutes(deviceId);\n        }\n    }\n\n    /**\n     * Insert routing rules on the given spine switch, matching on leaf\n     * interface subnets and forwarding packets to the corresponding leaf.\n     *\n     * @param spineId the spine device ID\n     */\n    private void setUpSpineRoutes(DeviceId spineId) {\n\n        log.info(\"Adding spine routes on {}...\", spineId);\n\n        for (Device device : deviceService.getDevices()) {\n\n            if (isSpine(device.id())) {\n                // We only need routes to leaf switches. 
Ignore spines.\n                continue;\n            }\n\n            final DeviceId leafId = device.id();\n            final MacAddress leafMac = getMyStationMac(leafId);\n            final Set<Ip6Prefix> subnetsToRoute = getInterfaceIpv6Prefixes(leafId);\n\n            // Since we're here, we also add a route for SRv6 (Exercise 7), to\n            // forward packets with IPv6 dst set to the SID of a leaf switch.\n            final Ip6Address leafSid = getDeviceSid(leafId);\n            subnetsToRoute.add(Ip6Prefix.valueOf(leafSid, 128));\n\n            // Create a group with only one member.\n            int groupId = macToGroupId(leafMac);\n\n            GroupDescription group = createNextHopGroup(\n                    groupId, Collections.singleton(leafMac), spineId);\n\n            List<FlowRule> flowRules = subnetsToRoute.stream()\n                    .map(subnet -> createRoutingRule(spineId, subnet, groupId))\n                    .collect(Collectors.toList());\n\n            insertInOrder(group, flowRules);\n        }\n    }\n\n    /**\n     * Insert routing rules on the given leaf switch, matching on interface\n     * subnets associated to other leaves and forwarding packets to the spines\n     * using ECMP.\n     *\n     * @param leafId the leaf device ID\n     */\n    private void setUpLeafRoutes(DeviceId leafId) {\n        log.info(\"Setting up leaf routes: {}\", leafId);\n\n        // Get the set of subnets (interface IPv6 prefixes) associated to other\n        // leaves but not this one.\n        Set<Ip6Prefix> subnetsToRouteViaSpines = stream(deviceService.getDevices())\n                .map(Device::id)\n                .filter(this::isLeaf)\n                .filter(deviceId -> !deviceId.equals(leafId))\n                .map(this::getInterfaceIpv6Prefixes)\n                .flatMap(Collection::stream)\n                .collect(Collectors.toSet());\n\n        // Get myStationMac address of all spines.\n        Set<MacAddress> spineMacs = 
stream(deviceService.getDevices())\n                .map(Device::id)\n                .filter(this::isSpine)\n                .map(this::getMyStationMac)\n                .collect(Collectors.toSet());\n\n        // Create an ECMP group to distribute traffic across all spines.\n        final int groupId = DEFAULT_ECMP_GROUP_ID;\n        final GroupDescription ecmpGroup = createNextHopGroup(\n                groupId, spineMacs, leafId);\n\n        // Generate a flow rule for each subnet pointing to the ECMP group.\n        List<FlowRule> flowRules = subnetsToRouteViaSpines.stream()\n                .map(subnet -> createRoutingRule(leafId, subnet, groupId))\n                .collect(Collectors.toList());\n\n        insertInOrder(ecmpGroup, flowRules);\n\n        // Since we're here, we also add a route for SRv6 (Exercise 7), to\n        // forward packets with IPv6 dst set to the SID of a spine switch, in\n        // this case using a single-member group.\n        stream(deviceService.getDevices())\n                .map(Device::id)\n                .filter(this::isSpine)\n                .forEach(spineId -> {\n                    MacAddress spineMac = getMyStationMac(spineId);\n                    Ip6Address spineSid = getDeviceSid(spineId);\n                    int spineGroupId = macToGroupId(spineMac);\n                    GroupDescription group = createNextHopGroup(\n                            spineGroupId, Collections.singleton(spineMac), leafId);\n                    FlowRule routingRule = createRoutingRule(\n                            leafId, Ip6Prefix.valueOf(spineSid, 128),\n                            spineGroupId);\n                    insertInOrder(group, Collections.singleton(routingRule));\n                });\n    }\n\n    //--------------------------------------------------------------------------\n    // UTILITY METHODS\n    //--------------------------------------------------------------------------\n\n    /**\n     * Returns true if the given device has 
isSpine flag set to true in the\n     * config, false otherwise.\n     *\n     * @param deviceId the device ID\n     * @return true if the device is a spine, false otherwise\n     */\n    private boolean isSpine(DeviceId deviceId) {\n        return getDeviceConfig(deviceId).map(FabricDeviceConfig::isSpine)\n                .orElseThrow(() -> new ItemNotFoundException(\n                        \"Missing isSpine config for \" + deviceId));\n    }\n\n    /**\n     * Returns true if the given device is not configured as spine.\n     *\n     * @param deviceId the device ID\n     * @return true if the device is a leaf, false otherwise\n     */\n    private boolean isLeaf(DeviceId deviceId) {\n        return !isSpine(deviceId);\n    }\n\n    /**\n     * Returns the MAC address configured in the \"myStationMac\" property of the\n     * given device config.\n     *\n     * @param deviceId the device ID\n     * @return MyStation MAC address\n     */\n    private MacAddress getMyStationMac(DeviceId deviceId) {\n        return getDeviceConfig(deviceId)\n                .map(FabricDeviceConfig::myStationMac)\n                .orElseThrow(() -> new ItemNotFoundException(\n                        \"Missing myStationMac config for \" + deviceId));\n    }\n\n    /**\n     * Returns the FabricDeviceConfig config object for the given device.\n     *\n     * @param deviceId the device ID\n     * @return FabricDeviceConfig device config\n     */\n    private Optional<FabricDeviceConfig> getDeviceConfig(DeviceId deviceId) {\n        FabricDeviceConfig config = networkConfigService.getConfig(\n                deviceId, FabricDeviceConfig.class);\n        return Optional.ofNullable(config);\n    }\n\n    /**\n     * Returns the set of interface IPv6 subnets (prefixes) configured for the\n     * given device.\n     *\n     * @param deviceId the device ID\n     * @return set of IPv6 prefixes\n     */\n    private Set<Ip6Prefix> getInterfaceIpv6Prefixes(DeviceId deviceId) {\n        return 
interfaceService.getInterfaces().stream()\n                .filter(iface -> iface.connectPoint().deviceId().equals(deviceId))\n                .map(Interface::ipAddressesList)\n                .flatMap(Collection::stream)\n                .map(InterfaceIpAddress::subnetAddress)\n                .filter(IpPrefix::isIp6)\n                .map(IpPrefix::getIp6Prefix)\n                .collect(Collectors.toSet());\n    }\n\n    /**\n     * Returns a 32-bit group ID from the given MAC address.\n     *\n     * @param mac the MAC address\n     * @return an integer\n     */\n    private int macToGroupId(MacAddress mac) {\n        return mac.hashCode() & 0x7fffffff;\n    }\n\n    /**\n     * Inserts the given groups and flow rules in order, groups first, then flow\n     * rules. In P4Runtime, when operating on an indirect table (i.e. with\n     * action selectors), groups must be inserted before table entries.\n     *\n     * @param group     the group\n     * @param flowRules the flow rules depending on the group\n     */\n    private void insertInOrder(GroupDescription group, Collection<FlowRule> flowRules) {\n        try {\n            groupService.addGroup(group);\n            // Wait for groups to be inserted.\n            Thread.sleep(GROUP_INSERT_DELAY_MILLIS);\n            flowRules.forEach(flowRuleService::applyFlowRules);\n        } catch (InterruptedException e) {\n            log.error(\"Interrupted!\", e);\n            Thread.currentThread().interrupt();\n        }\n    }\n\n    /**\n     * Gets the SRv6 SID for the given device.\n     *\n     * @param deviceId the device ID\n     * @return SID for the device\n     */\n    private Ip6Address getDeviceSid(DeviceId deviceId) {\n        return getDeviceConfig(deviceId)\n                .map(FabricDeviceConfig::mySid)\n                .orElseThrow(() -> new ItemNotFoundException(\n                        \"Missing mySid config for \" + deviceId));\n    }\n\n    /**\n     * Sets up IPv6 routing on all devices known 
by ONOS and for which this ONOS\n     * node instance is currently master.\n     */\n    private synchronized void setUpAllDevices() {\n        // Set up my-station, routing, next-hop, and host rules on each device.\n        stream(deviceService.getAvailableDevices())\n                .map(Device::id)\n                .filter(mastershipService::isLocalMaster)\n                .forEach(deviceId -> {\n                    log.info(\"*** IPV6 ROUTING - Starting initial set up for {}...\", deviceId);\n                    setUpMyStationTable(deviceId);\n                    setUpFabricRoutes(deviceId);\n                    setUpL2NextHopRules(deviceId);\n                    hostService.getConnectedHosts(deviceId)\n                            .forEach(host -> setUpHostRules(deviceId, host));\n                });\n    }\n}\n
  },
  {
    "path": "solution/app/src/main/java/org/onosproject/ngsdn/tutorial/L2BridgingComponent.java",
    "content": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\npackage org.onosproject.ngsdn.tutorial;\n\nimport org.onlab.packet.MacAddress;\nimport org.onosproject.core.ApplicationId;\nimport org.onosproject.mastership.MastershipService;\nimport org.onosproject.net.ConnectPoint;\nimport org.onosproject.net.DeviceId;\nimport org.onosproject.net.Host;\nimport org.onosproject.net.PortNumber;\nimport org.onosproject.net.config.NetworkConfigService;\nimport org.onosproject.net.device.DeviceEvent;\nimport org.onosproject.net.device.DeviceListener;\nimport org.onosproject.net.device.DeviceService;\nimport org.onosproject.net.flow.FlowRule;\nimport org.onosproject.net.flow.FlowRuleService;\nimport org.onosproject.net.flow.criteria.PiCriterion;\nimport org.onosproject.net.group.GroupDescription;\nimport org.onosproject.net.group.GroupService;\nimport org.onosproject.net.host.HostEvent;\nimport org.onosproject.net.host.HostListener;\nimport org.onosproject.net.host.HostService;\nimport org.onosproject.net.intf.Interface;\nimport org.onosproject.net.intf.InterfaceService;\nimport org.onosproject.net.pi.model.PiActionId;\nimport org.onosproject.net.pi.model.PiActionParamId;\nimport org.onosproject.net.pi.model.PiMatchFieldId;\nimport org.onosproject.net.pi.runtime.PiAction;\nimport org.onosproject.net.pi.runtime.PiActionParam;\nimport org.osgi.service.component.annotations.Activate;\nimport 
org.osgi.service.component.annotations.Component;\nimport org.osgi.service.component.annotations.Deactivate;\nimport org.osgi.service.component.annotations.Reference;\nimport org.osgi.service.component.annotations.ReferenceCardinality;\nimport org.onosproject.ngsdn.tutorial.common.FabricDeviceConfig;\nimport org.onosproject.ngsdn.tutorial.common.Utils;\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport java.util.Set;\nimport java.util.stream.Collectors;\n\nimport static org.onosproject.ngsdn.tutorial.AppConstants.INITIAL_SETUP_DELAY;\n\n/**\n * App component that configures devices to provide L2 bridging capabilities.\n */\n@Component(\n        immediate = true,\n        // *** TODO EXERCISE 4\n        // Enable component (enabled = true)\n        enabled = true\n)\npublic class L2BridgingComponent {\n\n    private final Logger log = LoggerFactory.getLogger(getClass());\n\n    private static final int DEFAULT_BROADCAST_GROUP_ID = 255;\n\n    private final DeviceListener deviceListener = new InternalDeviceListener();\n    private final HostListener hostListener = new InternalHostListener();\n\n    private ApplicationId appId;\n\n    //--------------------------------------------------------------------------\n    // ONOS CORE SERVICE BINDING\n    //\n    // These variables are set by the Karaf runtime environment before calling\n    // the activate() method.\n    //--------------------------------------------------------------------------\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private HostService hostService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private DeviceService deviceService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private InterfaceService interfaceService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private NetworkConfigService configService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private 
FlowRuleService flowRuleService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private GroupService groupService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private MastershipService mastershipService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private MainComponent mainComponent;\n\n    //--------------------------------------------------------------------------\n    // COMPONENT ACTIVATION.\n    //\n    // When loading/unloading the app the Karaf runtime environment will call\n    // activate()/deactivate().\n    //--------------------------------------------------------------------------\n\n    @Activate\n    protected void activate() {\n        appId = mainComponent.getAppId();\n\n        // Register listeners to be informed about device and host events.\n        deviceService.addListener(deviceListener);\n        hostService.addListener(hostListener);\n        // Schedule set up of existing devices. Needed when reloading the app.\n        mainComponent.scheduleTask(this::setUpAllDevices, INITIAL_SETUP_DELAY);\n\n        log.info(\"Started\");\n    }\n\n    @Deactivate\n    protected void deactivate() {\n        deviceService.removeListener(deviceListener);\n        hostService.removeListener(hostListener);\n\n        log.info(\"Stopped\");\n    }\n\n    //--------------------------------------------------------------------------\n    // METHODS TO COMPLETE.\n    //\n    // Complete the implementation wherever you see TODO.\n    //--------------------------------------------------------------------------\n\n    /**\n     * Sets up everything necessary to support L2 bridging on the given device.\n     *\n     * @param deviceId the device to set up\n     */\n    private void setUpDevice(DeviceId deviceId) {\n        if (isSpine(deviceId)) {\n            // Stop here. 
We support bridging only on leaf/tor switches.\n            return;\n        }\n        insertMulticastGroup(deviceId);\n        insertMulticastFlowRules(deviceId);\n        // Insert rule to flood all unmatched traffic.\n        insertUnmatchedBridgingFlowRule(deviceId);\n    }\n\n    /**\n     * Inserts an ALL group in the ONOS core to replicate packets on all host\n     * facing ports. This group will be used to broadcast all ARP/NDP requests.\n     * <p>\n     * ALL groups in ONOS are equivalent to P4Runtime packet replication engine\n     * (PRE) Multicast groups.\n     *\n     * @param deviceId the device where to install the group\n     */\n    private void insertMulticastGroup(DeviceId deviceId) {\n\n        // Replicate packets where we know hosts are attached.\n        Set<PortNumber> ports = getHostFacingPorts(deviceId);\n\n        if (ports.isEmpty()) {\n            // Stop here.\n            log.warn(\"Device {} has 0 host facing ports\", deviceId);\n            return;\n        }\n\n        log.info(\"Adding L2 multicast group with {} ports on {}...\",\n                ports.size(), deviceId);\n\n        // Forge group object.\n        final GroupDescription multicastGroup = Utils.buildMulticastGroup(\n                appId, deviceId, DEFAULT_BROADCAST_GROUP_ID, ports);\n\n        // Insert.\n        groupService.addGroup(multicastGroup);\n    }\n\n    /**\n     * Insert flow rules matching ethernet destination\n     * broadcast/multicast addresses (e.g. ARP requests, NDP Neighbor\n     * Solicitation, etc.). 
Such packets should be processed by the multicast\n     * group created above.\n     * <p>\n     * This method will be called at component activation for each device\n     * (switch) known by ONOS, and every time a new device-added event is\n     * captured by the InternalDeviceListener defined below.\n     *\n     * @param deviceId device ID where to install the rules\n     */\n    private void insertMulticastFlowRules(DeviceId deviceId) {\n\n        log.info(\"Adding L2 multicast rules on {}...\", deviceId);\n\n        // Modify P4Runtime entity names to match content of P4Info file (look\n        // for the fully qualified name of tables, match fields, and actions).\n        // ---- START SOLUTION ----\n        // Match ARP request - Match exactly FF:FF:FF:FF:FF:FF\n        final PiCriterion macBroadcastCriterion = PiCriterion.builder()\n                .matchTernary(\n                        PiMatchFieldId.of(\"hdr.ethernet.dst_addr\"),\n                        MacAddress.valueOf(\"FF:FF:FF:FF:FF:FF\").toBytes(),\n                        MacAddress.valueOf(\"FF:FF:FF:FF:FF:FF\").toBytes())\n                .build();\n\n        // Match NDP NS - Match ternary 33:33:**:**:**:**\n        final PiCriterion ipv6MulticastCriterion = PiCriterion.builder()\n                .matchTernary(\n                        PiMatchFieldId.of(\"hdr.ethernet.dst_addr\"),\n                        MacAddress.valueOf(\"33:33:00:00:00:00\").toBytes(),\n                        MacAddress.valueOf(\"FF:FF:00:00:00:00\").toBytes())\n                .build();\n\n        // Action: set multicast group id\n        final PiAction setMcastGroupAction = PiAction.builder()\n                .withId(PiActionId.of(\"IngressPipeImpl.set_multicast_group\"))\n                .withParameter(new PiActionParam(\n                        PiActionParamId.of(\"gid\"),\n                        DEFAULT_BROADCAST_GROUP_ID))\n                .build();\n\n        //  Build 2 flow rules.\n        final String 
tableId = \"IngressPipeImpl.l2_ternary_table\";\n        // ---- END SOLUTION ----\n\n        final FlowRule rule1 = Utils.buildFlowRule(\n                deviceId, appId, tableId,\n                macBroadcastCriterion, setMcastGroupAction);\n\n        final FlowRule rule2 = Utils.buildFlowRule(\n                deviceId, appId, tableId,\n                ipv6MulticastCriterion, setMcastGroupAction);\n\n        // Insert rules.\n        flowRuleService.applyFlowRules(rule1, rule2);\n    }\n\n    /**\n     * Insert flow rule that matches all unmatched ethernet traffic. This\n     * will implement the traditional bridging behavior that floods all\n     * unmatched traffic.\n     * <p>\n     * This method will be called at component activation for each device\n     * (switch) known by ONOS, and every time a new device-added event is\n     * captured by the InternalDeviceListener defined below.\n     *\n     * @param deviceId device ID where to install the rules\n     */\n    @SuppressWarnings(\"unused\")\n    private void insertUnmatchedBridgingFlowRule(DeviceId deviceId) {\n\n        log.info(\"Adding L2 unmatched bridging rule on {}...\", deviceId);\n\n        // Modify P4Runtime entity names to match content of P4Info file (look\n        // for the fully qualified name of tables, match fields, and actions).\n        // ---- START SOLUTION ----\n\n        // Match unmatched traffic - Match ternary **:**:**:**:**:**\n        final PiCriterion unmatchedTrafficCriterion = PiCriterion.builder()\n                .matchTernary(\n                        PiMatchFieldId.of(\"hdr.ethernet.dst_addr\"),\n                        MacAddress.valueOf(\"00:00:00:00:00:00\").toBytes(),\n                        MacAddress.valueOf(\"00:00:00:00:00:00\").toBytes())\n                .build();\n\n        // Action: set multicast group id\n        final PiAction setMcastGroupAction = PiAction.builder()\n                .withId(PiActionId.of(\"IngressPipeImpl.set_multicast_group\"))\n                
.withParameter(new PiActionParam(\n                        PiActionParamId.of(\"gid\"),\n                        DEFAULT_BROADCAST_GROUP_ID))\n                .build();\n\n        //  Build flow rule.\n        final String tableId = \"IngressPipeImpl.l2_ternary_table\";\n        // ---- END SOLUTION ----\n\n        final FlowRule rule = Utils.buildFlowRule(\n                deviceId, appId, tableId,\n                unmatchedTrafficCriterion, setMcastGroupAction);\n\n        // Insert rule.\n        flowRuleService.applyFlowRules(rule);\n    }\n\n    /**\n     * Insert a flow rule to forward packets to a given host located at the\n     * given device and port.\n     * <p>\n     * This method will be called at component activation for each host known by\n     * ONOS, and every time a new host-added event is captured by the\n     * InternalHostListener defined below.\n     *\n     * @param host     host instance\n     * @param deviceId device where the host is located\n     * @param port     port where the host is attached\n     */\n    private void learnHost(Host host, DeviceId deviceId, PortNumber port) {\n\n        log.info(\"Adding L2 unicast rule on {} for host {} (port {})...\",\n                deviceId, host.id(), port);\n\n        // Modify P4Runtime entity names to match content of P4Info file (look\n        // for the fully qualified name of tables, match fields, and actions).\n        // ---- START SOLUTION ----\n        final String tableId = \"IngressPipeImpl.l2_exact_table\";\n        // Match exactly on the host MAC address.\n        final MacAddress hostMac = host.mac();\n        final PiCriterion hostMacCriterion = PiCriterion.builder()\n                .matchExact(PiMatchFieldId.of(\"hdr.ethernet.dst_addr\"),\n                        hostMac.toBytes())\n                .build();\n\n        // Action: set output port\n        final PiAction l2UnicastAction = PiAction.builder()\n                
.withId(PiActionId.of(\"IngressPipeImpl.set_egress_port\"))\n                .withParameter(new PiActionParam(\n                        PiActionParamId.of(\"port_num\"),\n                        port.toLong()))\n                .build();\n        // ---- END SOLUTION ----\n\n        // Forge flow rule.\n        final FlowRule rule = Utils.buildFlowRule(\n                deviceId, appId, tableId, hostMacCriterion, l2UnicastAction);\n\n        // Insert.\n        flowRuleService.applyFlowRules(rule);\n    }\n\n    //--------------------------------------------------------------------------\n    // EVENT LISTENERS\n    //\n    // Events are processed only if isRelevant() returns true.\n    //--------------------------------------------------------------------------\n\n    /**\n     * Listener of device events.\n     */\n    public class InternalDeviceListener implements DeviceListener {\n\n        @Override\n        public boolean isRelevant(DeviceEvent event) {\n            switch (event.type()) {\n                case DEVICE_ADDED:\n                case DEVICE_AVAILABILITY_CHANGED:\n                    break;\n                default:\n                    // Ignore other events.\n                    return false;\n            }\n            // Process only if this controller instance is the master.\n            final DeviceId deviceId = event.subject().id();\n            return mastershipService.isLocalMaster(deviceId);\n        }\n\n        @Override\n        public void event(DeviceEvent event) {\n            final DeviceId deviceId = event.subject().id();\n            if (deviceService.isAvailable(deviceId)) {\n                // A P4Runtime device is considered available in ONOS when there\n                // is a StreamChannel session open and the pipeline\n                // configuration has been set.\n\n                // Events are processed using a thread pool defined in the\n                // MainComponent.\n                
mainComponent.getExecutorService().execute(() -> {\n                    log.info(\"{} event! deviceId={}\", event.type(), deviceId);\n\n                    setUpDevice(deviceId);\n                });\n            }\n        }\n    }\n\n    /**\n     * Listener of host events.\n     */\n    public class InternalHostListener implements HostListener {\n\n        @Override\n        public boolean isRelevant(HostEvent event) {\n            switch (event.type()) {\n                case HOST_ADDED:\n                    // Host added events will be generated by the\n                    // HostLocationProvider by intercepting ARP/NDP packets.\n                    break;\n                case HOST_REMOVED:\n                case HOST_UPDATED:\n                case HOST_MOVED:\n                default:\n                    // Ignore other events.\n                    // Food for thought: how to support host moved/removed?\n                    return false;\n            }\n            // Process host event only if this controller instance is the master\n            // for the device to which this host is attached.\n            final Host host = event.subject();\n            final DeviceId deviceId = host.location().deviceId();\n            return mastershipService.isLocalMaster(deviceId);\n        }\n\n        @Override\n        public void event(HostEvent event) {\n            final Host host = event.subject();\n            // Device and port where the host is located.\n            final DeviceId deviceId = host.location().deviceId();\n            final PortNumber port = host.location().port();\n\n            mainComponent.getExecutorService().execute(() -> {\n                log.info(\"{} event! 
host={}, deviceId={}, port={}\",\n                        event.type(), host.id(), deviceId, port);\n\n                learnHost(host, deviceId, port);\n            });\n        }\n    }\n\n    //--------------------------------------------------------------------------\n    // UTILITY METHODS\n    //--------------------------------------------------------------------------\n\n    /**\n     * Returns a set of ports for the given device that are used to connect\n     * hosts to the fabric.\n     *\n     * @param deviceId device ID\n     * @return set of host facing ports\n     */\n    private Set<PortNumber> getHostFacingPorts(DeviceId deviceId) {\n        // Get all interfaces configured via netcfg for the given device ID and\n        // return the corresponding device port number. Interface configuration\n        // in the netcfg.json looks like this:\n        // \"device:leaf1/3\": {\n        //   \"interfaces\": [\n        //     {\n        //       \"name\": \"leaf1-3\",\n        //       \"ips\": [\"2001:1:1::ff/64\"]\n        //     }\n        //   ]\n        // }\n        return interfaceService.getInterfaces().stream()\n                .map(Interface::connectPoint)\n                .filter(cp -> cp.deviceId().equals(deviceId))\n                .map(ConnectPoint::port)\n                .collect(Collectors.toSet());\n    }\n\n    /**\n     * Returns true if the given device is defined as a spine in the\n     * netcfg.json.\n     *\n     * @param deviceId device ID\n     * @return true if spine, false otherwise\n     */\n    private boolean isSpine(DeviceId deviceId) {\n        // Example netcfg defining a device as spine:\n        // \"devices\": {\n        //   \"device:spine1\": {\n        //     ...\n        //     \"fabricDeviceConfig\": {\n        //       \"myStationMac\": \"...\",\n        //       \"mySid\": \"...\",\n        //       \"isSpine\": true\n        //     }\n        //   },\n        //   ...\n        final FabricDeviceConfig cfg = 
configService.getConfig(\n                deviceId, FabricDeviceConfig.class);\n        return cfg != null && cfg.isSpine();\n    }\n\n    /**\n     * Sets up L2 bridging on all devices known by ONOS and for which this ONOS\n     * node instance is currently master.\n     * <p>\n     * This method is called at component activation.\n     */\n    private void setUpAllDevices() {\n        deviceService.getAvailableDevices().forEach(device -> {\n            if (mastershipService.isLocalMaster(device.id())) {\n                log.info(\"*** L2 BRIDGING - Starting initial set up for {}...\", device.id());\n                setUpDevice(device.id());\n                // For all hosts connected to this device...\n                hostService.getConnectedHosts(device.id()).forEach(\n                        host -> learnHost(host, host.location().deviceId(),\n                                host.location().port()));\n            }\n        });\n    }\n}\n"
  },
  {
    "path": "solution/app/src/main/java/org/onosproject/ngsdn/tutorial/NdpReplyComponent.java",
    "content": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\npackage org.onosproject.ngsdn.tutorial;\n\nimport org.onlab.packet.Ip6Address;\nimport org.onlab.packet.IpAddress;\nimport org.onlab.packet.MacAddress;\nimport org.onlab.util.ItemNotFoundException;\nimport org.onosproject.core.ApplicationId;\nimport org.onosproject.mastership.MastershipService;\nimport org.onosproject.net.DeviceId;\nimport org.onosproject.net.config.NetworkConfigService;\nimport org.onosproject.net.device.DeviceEvent;\nimport org.onosproject.net.device.DeviceListener;\nimport org.onosproject.net.device.DeviceService;\nimport org.onosproject.net.flow.FlowRule;\nimport org.onosproject.net.flow.FlowRuleOperations;\nimport org.onosproject.net.flow.FlowRuleService;\nimport org.onosproject.net.flow.criteria.PiCriterion;\nimport org.onosproject.net.host.InterfaceIpAddress;\nimport org.onosproject.net.intf.Interface;\nimport org.onosproject.net.intf.InterfaceService;\nimport org.onosproject.net.pi.model.PiActionId;\nimport org.onosproject.net.pi.model.PiActionParamId;\nimport org.onosproject.net.pi.model.PiMatchFieldId;\nimport org.onosproject.net.pi.runtime.PiAction;\nimport org.onosproject.net.pi.runtime.PiActionParam;\nimport org.onosproject.ngsdn.tutorial.common.FabricDeviceConfig;\nimport org.onosproject.ngsdn.tutorial.common.Utils;\nimport org.osgi.service.component.annotations.Activate;\nimport 
org.osgi.service.component.annotations.Component;\nimport org.osgi.service.component.annotations.Deactivate;\nimport org.osgi.service.component.annotations.Reference;\nimport org.osgi.service.component.annotations.ReferenceCardinality;\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport java.util.Collection;\nimport java.util.stream.Collectors;\n\nimport static org.onosproject.ngsdn.tutorial.AppConstants.INITIAL_SETUP_DELAY;\n\n/**\n * App component that configures devices to generate NDP Neighbor Advertisement\n * packets for all interface IPv6 addresses configured in the netcfg.\n */\n@Component(\n        immediate = true,\n        // *** TODO EXERCISE 5\n        // Enable component (enabled = true)\n        enabled = true\n)\npublic class NdpReplyComponent {\n\n    private static final Logger log =\n            LoggerFactory.getLogger(NdpReplyComponent.class.getName());\n\n    //--------------------------------------------------------------------------\n    // ONOS CORE SERVICE BINDING\n    //\n    // These variables are set by the Karaf runtime environment before calling\n    // the activate() method.\n    //--------------------------------------------------------------------------\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    protected NetworkConfigService configService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    protected FlowRuleService flowRuleService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    protected InterfaceService interfaceService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    protected MastershipService mastershipService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    protected DeviceService deviceService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private MainComponent mainComponent;\n\n    private DeviceListener deviceListener = new InternalDeviceListener();\n    private ApplicationId 
appId;\n\n    //--------------------------------------------------------------------------\n    // COMPONENT ACTIVATION.\n    //\n    // When loading/unloading the app the Karaf runtime environment will call\n    // activate()/deactivate().\n    //--------------------------------------------------------------------------\n\n    @Activate\n    public void activate() {\n        appId = mainComponent.getAppId();\n        // Register listeners to be informed about device events.\n        deviceService.addListener(deviceListener);\n        // Schedule set up of existing devices. Needed when reloading the app.\n        mainComponent.scheduleTask(this::setUpAllDevices, INITIAL_SETUP_DELAY);\n        log.info(\"Started\");\n    }\n\n    @Deactivate\n    public void deactivate() {\n        deviceService.removeListener(deviceListener);\n        log.info(\"Stopped\");\n    }\n\n    //--------------------------------------------------------------------------\n    // METHODS TO COMPLETE.\n    //\n    // Complete the implementation wherever you see TODO.\n    //--------------------------------------------------------------------------\n\n    /**\n     * Set up all devices for which this ONOS instance is currently master.\n     */\n    private void setUpAllDevices() {\n        deviceService.getAvailableDevices().forEach(device -> {\n            if (mastershipService.isLocalMaster(device.id())) {\n                log.info(\"*** NDP REPLY - Starting initial set up for {}...\", device.id());\n                setUpDevice(device.id());\n            }\n        });\n    }\n\n    /**\n     * Performs setup of the given device by creating a flow rule to generate\n     * NDP NA packets for IPv6 addresses associated with the device interfaces.\n     *\n     * @param deviceId device ID\n     */\n    private void setUpDevice(DeviceId deviceId) {\n\n        // Get this device config from netcfg.json.\n        final FabricDeviceConfig config = configService.getConfig(\n                deviceId, 
FabricDeviceConfig.class);\n        if (config == null) {\n            // Config not available yet\n            throw new ItemNotFoundException(\"Missing fabricDeviceConfig for \" + deviceId);\n        }\n\n        // Get this device myStation mac.\n        final MacAddress deviceMac = config.myStationMac();\n\n        // Get all interfaces currently configured for the device\n        final Collection<Interface> interfaces = interfaceService.getInterfaces()\n                .stream()\n                .filter(iface -> iface.connectPoint().deviceId().equals(deviceId))\n                .collect(Collectors.toSet());\n\n        if (interfaces.isEmpty()) {\n            log.info(\"{} does not have any IPv6 interface configured\",\n                     deviceId);\n            return;\n        }\n\n        // Generate and install flow rules.\n        log.info(\"Adding rules to {} to generate NDP NA for {} IPv6 interfaces...\",\n                 deviceId, interfaces.size());\n        final Collection<FlowRule> flowRules = interfaces.stream()\n                .map(this::getIp6Addresses)\n                .flatMap(Collection::stream)\n                .map(ipv6addr -> buildNdpReplyFlowRule(deviceId, ipv6addr, deviceMac))\n                .collect(Collectors.toSet());\n\n        installRules(flowRules);\n    }\n\n    /**\n     * Build a flow rule for the NDP reply table on the given device, for the\n     * given target IPv6 address and MAC address.\n     *\n     * @param deviceId          device ID where to install the flow rules\n     * @param targetIpv6Address target IPv6 address\n     * @param targetMac         target MAC address\n     * @return flow rule object\n     */\n    private FlowRule buildNdpReplyFlowRule(DeviceId deviceId,\n                                           Ip6Address targetIpv6Address,\n                                           MacAddress targetMac) {\n\n        // *** TODO EXERCISE 5\n        // Modify P4Runtime entity names to match content of P4Info 
file (look\n        // for the fully qualified name of tables, match fields, and actions).\n        // ---- START SOLUTION ----\n        // Build match.\n        final PiCriterion match = PiCriterion.builder()\n                .matchExact(PiMatchFieldId.of(\"hdr.ndp.target_ipv6_addr\"), targetIpv6Address.toOctets())\n                .build();\n        // Build action.\n        final PiActionParam targetMacParam = new PiActionParam(\n                PiActionParamId.of(\"target_mac\"), targetMac.toBytes());\n        final PiAction action = PiAction.builder()\n                .withId(PiActionId.of(\"IngressPipeImpl.ndp_ns_to_na\"))\n                .withParameter(targetMacParam)\n                .build();\n        // Table ID.\n        final String tableId = \"IngressPipeImpl.ndp_reply_table\";\n        // ---- END SOLUTION ----\n\n        // Build flow rule.\n        final FlowRule rule = Utils.buildFlowRule(\n                deviceId, appId, tableId, match, action);\n\n        return rule;\n    }\n\n    //--------------------------------------------------------------------------\n    // EVENT LISTENERS\n    //\n    // Events are processed only if isRelevant() returns true.\n    //--------------------------------------------------------------------------\n\n    /**\n     * Listener of device events.\n     */\n    public class InternalDeviceListener implements DeviceListener {\n\n        @Override\n        public boolean isRelevant(DeviceEvent event) {\n            switch (event.type()) {\n                case DEVICE_ADDED:\n                case DEVICE_AVAILABILITY_CHANGED:\n                    break;\n                default:\n                    // Ignore other events.\n                    return false;\n            }\n            // Process only if this controller instance is the master.\n            final DeviceId deviceId = event.subject().id();\n            return mastershipService.isLocalMaster(deviceId);\n        }\n\n        @Override\n        public void 
event(DeviceEvent event) {\n            final DeviceId deviceId = event.subject().id();\n            if (deviceService.isAvailable(deviceId)) {\n                // A P4Runtime device is considered available in ONOS when there\n                // is a StreamChannel session open and the pipeline\n                // configuration has been set.\n\n                // Events are processed using a thread pool defined in the\n                // MainComponent.\n                mainComponent.getExecutorService().execute(() -> {\n                    log.info(\"{} event! deviceId={}\", event.type(), deviceId);\n                    setUpDevice(deviceId);\n                });\n            }\n        }\n    }\n\n    //--------------------------------------------------------------------------\n    // UTILITY METHODS\n    //--------------------------------------------------------------------------\n\n    /**\n     * Returns all IPv6 addresses associated with the given interface.\n     *\n     * @param iface interface instance\n     * @return collection of IPv6 addresses\n     */\n    private Collection<Ip6Address> getIp6Addresses(Interface iface) {\n        return iface.ipAddressesList()\n                .stream()\n                .map(InterfaceIpAddress::ipAddress)\n                .filter(IpAddress::isIp6)\n                .map(IpAddress::getIp6Address)\n                .collect(Collectors.toSet());\n    }\n\n    /**\n     * Install the given flow rules in batch using the flow rule service.\n     *\n     * @param flowRules flow rules to install\n     */\n    private void installRules(Collection<FlowRule> flowRules) {\n        FlowRuleOperations.Builder ops = FlowRuleOperations.builder();\n        flowRules.forEach(ops::add);\n        flowRuleService.apply(ops.build());\n    }\n}\n"
  },
  {
    "path": "solution/app/src/main/java/org/onosproject/ngsdn/tutorial/Srv6Component.java",
    "content": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\npackage org.onosproject.ngsdn.tutorial;\n\nimport com.google.common.collect.Lists;\nimport org.onlab.packet.Ip6Address;\nimport org.onosproject.core.ApplicationId;\nimport org.onosproject.mastership.MastershipService;\nimport org.onosproject.net.Device;\nimport org.onosproject.net.DeviceId;\nimport org.onosproject.net.config.NetworkConfigService;\nimport org.onosproject.net.device.DeviceEvent;\nimport org.onosproject.net.device.DeviceListener;\nimport org.onosproject.net.device.DeviceService;\nimport org.onosproject.net.flow.FlowRule;\nimport org.onosproject.net.flow.FlowRuleOperations;\nimport org.onosproject.net.flow.FlowRuleService;\nimport org.onosproject.net.flow.criteria.PiCriterion;\nimport org.onosproject.net.pi.model.PiActionId;\nimport org.onosproject.net.pi.model.PiActionParamId;\nimport org.onosproject.net.pi.model.PiMatchFieldId;\nimport org.onosproject.net.pi.model.PiTableId;\nimport org.onosproject.net.pi.runtime.PiAction;\nimport org.onosproject.net.pi.runtime.PiActionParam;\nimport org.onosproject.net.pi.runtime.PiTableAction;\nimport org.osgi.service.component.annotations.Activate;\nimport org.osgi.service.component.annotations.Component;\nimport org.osgi.service.component.annotations.Deactivate;\nimport org.osgi.service.component.annotations.Reference;\nimport 
org.osgi.service.component.annotations.ReferenceCardinality;\nimport org.onosproject.ngsdn.tutorial.common.FabricDeviceConfig;\nimport org.onosproject.ngsdn.tutorial.common.Utils;\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport java.util.List;\nimport java.util.Optional;\n\nimport static com.google.common.collect.Streams.stream;\nimport static org.onosproject.ngsdn.tutorial.AppConstants.INITIAL_SETUP_DELAY;\n\n/**\n * Application which handles SRv6 segment routing.\n */\n@Component(\n        immediate = true,\n        // *** TODO EXERCISE 6\n        // set to true when ready\n        enabled = true,\n        service = Srv6Component.class\n)\npublic class Srv6Component {\n\n    private static final Logger log = LoggerFactory.getLogger(Srv6Component.class);\n\n    //--------------------------------------------------------------------------\n    // ONOS CORE SERVICE BINDING\n    //\n    // These variables are set by the Karaf runtime environment before calling\n    // the activate() method.\n    //--------------------------------------------------------------------------\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private FlowRuleService flowRuleService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private MastershipService mastershipService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private DeviceService deviceService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private NetworkConfigService networkConfigService;\n\n    @Reference(cardinality = ReferenceCardinality.MANDATORY)\n    private MainComponent mainComponent;\n\n    private final DeviceListener deviceListener = new Srv6Component.InternalDeviceListener();\n\n    private ApplicationId appId;\n\n    //--------------------------------------------------------------------------\n    // COMPONENT ACTIVATION.\n    //\n    // When loading/unloading the app the Karaf runtime environment will call\n   
 // activate()/deactivate().\n    //--------------------------------------------------------------------------\n\n    @Activate\n    protected void activate() {\n        appId = mainComponent.getAppId();\n\n        // Register listeners to be informed about device and host events.\n        deviceService.addListener(deviceListener);\n\n        // Schedule set up for all devices.\n        mainComponent.scheduleTask(this::setUpAllDevices, INITIAL_SETUP_DELAY);\n\n        log.info(\"Started\");\n    }\n\n    @Deactivate\n    protected void deactivate() {\n        deviceService.removeListener(deviceListener);\n\n        log.info(\"Stopped\");\n    }\n\n    //--------------------------------------------------------------------------\n    // METHODS TO COMPLETE.\n    //\n    // Complete the implementation wherever you see TODO.\n    //--------------------------------------------------------------------------\n\n    /**\n     * Populate the My SID table from the network configuration for the\n     * specified device.\n     *\n     * @param deviceId the device ID\n     */\n    private void setUpMySidTable(DeviceId deviceId) {\n\n        Ip6Address mySid = getMySid(deviceId);\n\n        log.info(\"Adding mySid rule on {} (sid {})...\", deviceId, mySid);\n\n        // *** TODO EXERCISE 6\n        // Fill in the table ID for the SRv6 my segment identifier table\n        // ---- START SOLUTION ----\n        String tableId = \"IngressPipeImpl.srv6_my_sid\";\n        // ---- END SOLUTION ----\n\n        // *** TODO EXERCISE 6\n        // Modify the field and action id to match your P4Info\n        // ---- START SOLUTION ----\n        PiCriterion match = PiCriterion.builder()\n                .matchLpm(\n                        PiMatchFieldId.of(\"hdr.ipv6.dst_addr\"),\n                        mySid.toOctets(), 128)\n                .build();\n\n        PiTableAction action = PiAction.builder()\n                .withId(PiActionId.of(\"IngressPipeImpl.srv6_end\"))\n                
.build();\n        // ---- END SOLUTION ----\n\n        FlowRule myStationRule = Utils.buildFlowRule(\n                deviceId, appId, tableId, match, action);\n\n        flowRuleService.applyFlowRules(myStationRule);\n    }\n\n    /**\n     * Insert an SRv6 transit insert policy that will inject an SRv6 header for\n     * packets destined to destIp.\n     *\n     * @param deviceId     device ID\n     * @param destIp       target IP address for the SRv6 policy\n     * @param prefixLength prefix length for the target IP\n     * @param segmentList  list of SRv6 SIDs that make up the path\n     */\n    public void insertSrv6InsertRule(DeviceId deviceId, Ip6Address destIp, int prefixLength,\n                                     List<Ip6Address> segmentList) {\n        if (segmentList.size() < 2 || segmentList.size() > 3) {\n            throw new RuntimeException(\"List of \" + segmentList.size() + \" segments is not supported\");\n        }\n\n        // *** TODO EXERCISE 6\n        // Fill in the table ID for the SRv6 transit table.\n        // ---- START SOLUTION ----\n        String tableId = \"IngressPipeImpl.srv6_transit\";\n        // ---- END SOLUTION ----\n\n        // *** TODO EXERCISE 6\n        // Modify match field, action id, and action parameters to match your P4Info.\n        // ---- START SOLUTION ----\n        PiCriterion match = PiCriterion.builder()\n                .matchLpm(PiMatchFieldId.of(\"hdr.ipv6.dst_addr\"), destIp.toOctets(), prefixLength)\n                .build();\n\n        List<PiActionParam> actionParams = Lists.newArrayList();\n\n        for (int i = 0; i < segmentList.size(); i++) {\n            PiActionParamId paramId = PiActionParamId.of(\"s\" + (i + 1));\n            PiActionParam param = new PiActionParam(paramId, segmentList.get(i).toOctets());\n            actionParams.add(param);\n        }\n\n        PiAction action = PiAction.builder()\n                .withId(PiActionId.of(\"IngressPipeImpl.srv6_t_insert_\" + 
segmentList.size()))\n                .withParameters(actionParams)\n                .build();\n        // ---- END SOLUTION ----\n\n        final FlowRule rule = Utils.buildFlowRule(\n                deviceId, appId, tableId, match, action);\n\n        flowRuleService.applyFlowRules(rule);\n    }\n\n    /**\n     * Remove all SRv6 transit insert policies for the specified device.\n     *\n     * @param deviceId device ID\n     */\n    public void clearSrv6InsertRules(DeviceId deviceId) {\n        // *** TODO EXERCISE 6\n        // Fill in the table ID for the SRv6 transit table\n        // ---- START SOLUTION ----\n        String tableId = \"IngressPipeImpl.srv6_transit\";\n        // ---- END SOLUTION ----\n\n        FlowRuleOperations.Builder ops = FlowRuleOperations.builder();\n        stream(flowRuleService.getFlowEntries(deviceId))\n                .filter(fe -> fe.appId() == appId.id())\n                .filter(fe -> fe.table().equals(PiTableId.of(tableId)))\n                .forEach(ops::remove);\n        flowRuleService.apply(ops.build());\n    }\n\n    // ---------- END METHODS TO COMPLETE ----------------\n\n    //--------------------------------------------------------------------------\n    // EVENT LISTENERS\n    //\n    // Events are processed only if isRelevant() returns true.\n    //--------------------------------------------------------------------------\n\n    /**\n     * Listener of device events.\n     */\n    public class InternalDeviceListener implements DeviceListener {\n\n        @Override\n        public boolean isRelevant(DeviceEvent event) {\n            switch (event.type()) {\n                case DEVICE_ADDED:\n                case DEVICE_AVAILABILITY_CHANGED:\n                    break;\n                default:\n                    // Ignore other events.\n                    return false;\n            }\n            // Process only if this controller instance is the master.\n            final DeviceId deviceId = 
event.subject().id();\n            return mastershipService.isLocalMaster(deviceId);\n        }\n\n        @Override\n        public void event(DeviceEvent event) {\n            final DeviceId deviceId = event.subject().id();\n            if (deviceService.isAvailable(deviceId)) {\n                // A P4Runtime device is considered available in ONOS when there\n                // is a StreamChannel session open and the pipeline\n                // configuration has been set.\n                mainComponent.getExecutorService().execute(() -> {\n                    log.info(\"{} event! deviceId={}\", event.type(), deviceId);\n\n                    setUpMySidTable(event.subject().id());\n                });\n            }\n        }\n    }\n\n\n    //--------------------------------------------------------------------------\n    // UTILITY METHODS\n    //--------------------------------------------------------------------------\n\n    /**\n     * Sets up the SRv6 My SID table on all devices known by ONOS and for which\n     * this ONOS node instance is currently master.\n     */\n    private synchronized void setUpAllDevices() {\n        // Set up host routes\n        stream(deviceService.getAvailableDevices())\n                .map(Device::id)\n                .filter(mastershipService::isLocalMaster)\n                .forEach(deviceId -> {\n                    log.info(\"*** SRV6 - Starting initial set up for {}...\", deviceId);\n                    this.setUpMySidTable(deviceId);\n                });\n    }\n\n    /**\n     * Returns the SRv6 config for the given device.\n     *\n     * @param deviceId the device ID\n     * @return SRv6 device config\n     */\n    private Optional<FabricDeviceConfig> getDeviceConfig(DeviceId deviceId) {\n        FabricDeviceConfig config = networkConfigService.getConfig(deviceId, FabricDeviceConfig.class);\n        return Optional.ofNullable(config);\n    }\n\n    /**\n     * Returns the SRv6 SID for the given device.\n     *\n     * 
@param deviceId the device ID\n     * @return SID for the device\n     */\n    private Ip6Address getMySid(DeviceId deviceId) {\n        return getDeviceConfig(deviceId)\n                .map(FabricDeviceConfig::mySid)\n                .orElseThrow(() -> new RuntimeException(\n                        \"Missing mySid config for \" + deviceId));\n    }\n}\n"
  },
  {
    "path": "solution/app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/InterpreterImpl.java",
    "content": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\npackage org.onosproject.ngsdn.tutorial.pipeconf;\n\nimport com.google.common.collect.ImmutableList;\nimport com.google.common.collect.ImmutableMap;\nimport org.onlab.packet.DeserializationException;\nimport org.onlab.packet.Ethernet;\nimport org.onlab.util.ImmutableByteSequence;\nimport org.onosproject.net.ConnectPoint;\nimport org.onosproject.net.DeviceId;\nimport org.onosproject.net.Port;\nimport org.onosproject.net.PortNumber;\nimport org.onosproject.net.device.DeviceService;\nimport org.onosproject.net.driver.AbstractHandlerBehaviour;\nimport org.onosproject.net.flow.TrafficTreatment;\nimport org.onosproject.net.flow.criteria.Criterion;\nimport org.onosproject.net.packet.DefaultInboundPacket;\nimport org.onosproject.net.packet.InboundPacket;\nimport org.onosproject.net.packet.OutboundPacket;\nimport org.onosproject.net.pi.model.PiMatchFieldId;\nimport org.onosproject.net.pi.model.PiPacketMetadataId;\nimport org.onosproject.net.pi.model.PiPipelineInterpreter;\nimport org.onosproject.net.pi.model.PiTableId;\nimport org.onosproject.net.pi.runtime.PiAction;\nimport org.onosproject.net.pi.runtime.PiPacketMetadata;\nimport org.onosproject.net.pi.runtime.PiPacketOperation;\n\nimport java.nio.ByteBuffer;\nimport java.util.Collection;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.Optional;\n\nimport static 
java.lang.String.format;\nimport static java.util.stream.Collectors.toList;\nimport static org.onlab.util.ImmutableByteSequence.copyFrom;\nimport static org.onosproject.net.PortNumber.CONTROLLER;\nimport static org.onosproject.net.PortNumber.FLOOD;\nimport static org.onosproject.net.flow.instructions.Instruction.Type.OUTPUT;\nimport static org.onosproject.net.flow.instructions.Instructions.OutputInstruction;\nimport static org.onosproject.net.pi.model.PiPacketOperationType.PACKET_OUT;\nimport static org.onosproject.ngsdn.tutorial.AppConstants.CPU_PORT_ID;\n\n\n/**\n * Interpreter implementation.\n */\npublic class InterpreterImpl extends AbstractHandlerBehaviour\n        implements PiPipelineInterpreter {\n\n\n    // From v1model.p4\n    private static final int V1MODEL_PORT_BITWIDTH = 9;\n\n    // From P4Info.\n    private static final Map<Criterion.Type, String> CRITERION_MAP =\n            new ImmutableMap.Builder<Criterion.Type, String>()\n                    .put(Criterion.Type.IN_PORT, \"standard_metadata.ingress_port\")\n                    .put(Criterion.Type.ETH_DST, \"hdr.ethernet.dst_addr\")\n                    .put(Criterion.Type.ETH_SRC, \"hdr.ethernet.src_addr\")\n                    .put(Criterion.Type.ETH_TYPE, \"hdr.ethernet.ether_type\")\n                    .put(Criterion.Type.IPV6_DST, \"hdr.ipv6.dst_addr\")\n                    .put(Criterion.Type.IP_PROTO, \"local_metadata.ip_proto\")\n                    .put(Criterion.Type.ICMPV4_TYPE, \"local_metadata.icmp_type\")\n                    .put(Criterion.Type.ICMPV6_TYPE, \"local_metadata.icmp_type\")\n                    .build();\n\n    /**\n     * Returns a collection of PI packet operations populated with metadata\n     * specific for this pipeconf and equivalent to the given ONOS\n     * OutboundPacket instance.\n     *\n     * @param packet ONOS OutboundPacket\n     * @return collection of PI packet operations\n     * @throws PiInterpreterException if the packet treatments cannot be\n     
*                                executed by this pipeline\n     */\n    @Override\n    public Collection<PiPacketOperation> mapOutboundPacket(OutboundPacket packet)\n            throws PiInterpreterException {\n        TrafficTreatment treatment = packet.treatment();\n\n        // Packet-out in main.p4 supports only setting the output port,\n        // i.e. we only understand OUTPUT instructions.\n        List<OutputInstruction> outInstructions = treatment\n                .allInstructions()\n                .stream()\n                .filter(i -> i.type().equals(OUTPUT))\n                .map(i -> (OutputInstruction) i)\n                .collect(toList());\n\n        if (treatment.allInstructions().size() != outInstructions.size()) {\n            // There are other instructions that are not of type OUTPUT.\n            throw new PiInterpreterException(\"Treatment not supported: \" + treatment);\n        }\n\n        ImmutableList.Builder<PiPacketOperation> builder = ImmutableList.builder();\n        for (OutputInstruction outInst : outInstructions) {\n            if (outInst.port().isLogical() && !outInst.port().equals(FLOOD)) {\n                throw new PiInterpreterException(format(\n                        \"Packet-out on logical port '%s' not supported\",\n                        outInst.port()));\n            } else if (outInst.port().equals(FLOOD)) {\n                // To emulate flooding, we create a packet-out operation for\n                // each switch port.\n                final DeviceService deviceService = handler().get(DeviceService.class);\n                for (Port port : deviceService.getPorts(packet.sendThrough())) {\n                    builder.add(buildPacketOut(packet.data(), port.number().toLong()));\n                }\n            } else {\n                // Create only one packet-out for the given OUTPUT instruction.\n                builder.add(buildPacketOut(packet.data(), outInst.port().toLong()));\n            }\n        }\n       
 return builder.build();\n    }\n\n    /**\n     * Builds a pipeconf-specific packet-out instance with the given payload and\n     * egress port.\n     *\n     * @param pktData    packet payload\n     * @param portNumber egress port\n     * @return packet-out\n     * @throws PiInterpreterException if packet-out cannot be built\n     */\n    private PiPacketOperation buildPacketOut(ByteBuffer pktData, long portNumber)\n            throws PiInterpreterException {\n\n        // Make sure port number can fit in v1model port metadata bitwidth.\n        final ImmutableByteSequence portBytes;\n        try {\n            portBytes = copyFrom(portNumber).fit(V1MODEL_PORT_BITWIDTH);\n        } catch (ImmutableByteSequence.ByteSequenceTrimException e) {\n            throw new PiInterpreterException(format(\n                    \"Port number %d too big, %s\", portNumber, e.getMessage()));\n        }\n\n        // Create metadata instance for egress port.\n        // *** TODO EXERCISE 4: modify metadata names to match P4 program\n        // ---- START SOLUTION ----\n        final String outPortMetadataName = \"egress_port\";\n        // ---- END SOLUTION ----\n        final PiPacketMetadata outPortMetadata = PiPacketMetadata.builder()\n                .withId(PiPacketMetadataId.of(outPortMetadataName))\n                .withValue(portBytes)\n                .build();\n\n        // Build packet out.\n        return PiPacketOperation.builder()\n                .withType(PACKET_OUT)\n                .withData(copyFrom(pktData))\n                .withMetadata(outPortMetadata)\n                .build();\n    }\n\n    /**\n     * Returns an ONOS InboundPacket equivalent to the given pipeconf-specific\n     * packet-in operation.\n     *\n     * @param packetIn packet operation\n     * @param deviceId ID of the device that originated the packet-in\n     * @return inbound packet\n     * @throws PiInterpreterException if the packet operation cannot be mapped\n     *                       
         to an inbound packet\n     */\n    @Override\n    public InboundPacket mapInboundPacket(PiPacketOperation packetIn, DeviceId deviceId)\n            throws PiInterpreterException {\n\n        // Find the ingress_port metadata.\n        // *** TODO EXERCISE 4: modify metadata names to match P4Info\n        // ---- START SOLUTION ----\n        final String inportMetadataName = \"ingress_port\";\n        // ---- END SOLUTION ----\n        Optional<PiPacketMetadata> inportMetadata = packetIn.metadatas()\n                .stream()\n                .filter(meta -> meta.id().id().equals(inportMetadataName))\n                .findFirst();\n\n        if (!inportMetadata.isPresent()) {\n            throw new PiInterpreterException(format(\n                    \"Missing metadata '%s' in packet-in received from '%s': %s\",\n                    inportMetadataName, deviceId, packetIn));\n        }\n\n        // Build ONOS InboundPacket instance with the given ingress port.\n\n        // 1. Parse packet-in object into Ethernet packet instance.\n        final byte[] payloadBytes = packetIn.data().asArray();\n        final ByteBuffer rawData = ByteBuffer.wrap(payloadBytes);\n        final Ethernet ethPkt;\n        try {\n            ethPkt = Ethernet.deserializer().deserialize(\n                    payloadBytes, 0, packetIn.data().size());\n        } catch (DeserializationException dex) {\n            throw new PiInterpreterException(dex.getMessage());\n        }\n\n        // 2. 
Get ingress port\n        final ImmutableByteSequence portBytes = inportMetadata.get().value();\n        final short portNum = portBytes.asReadOnlyBuffer().getShort();\n        final ConnectPoint receivedFrom = new ConnectPoint(\n                deviceId, PortNumber.portNumber(portNum));\n\n        return new DefaultInboundPacket(receivedFrom, ethPkt, rawData);\n    }\n\n    @Override\n    public Optional<Integer> mapLogicalPortNumber(PortNumber port) {\n        if (CONTROLLER.equals(port)) {\n            return Optional.of(CPU_PORT_ID);\n        } else {\n            return Optional.empty();\n        }\n    }\n\n    @Override\n    public Optional<PiMatchFieldId> mapCriterionType(Criterion.Type type) {\n        if (CRITERION_MAP.containsKey(type)) {\n            return Optional.of(PiMatchFieldId.of(CRITERION_MAP.get(type)));\n        } else {\n            return Optional.empty();\n        }\n    }\n\n    @Override\n    public PiAction mapTreatment(TrafficTreatment treatment, PiTableId piTableId)\n            throws PiInterpreterException {\n        throw new PiInterpreterException(\"Treatment mapping not supported\");\n    }\n\n    @Override\n    public Optional<PiTableId> mapFlowRuleTableId(int flowRuleTableId) {\n        return Optional.empty();\n    }\n}\n"
  },
  {
    "path": "solution/mininet/flowrule-gtp.json",
    "content": "{\n  \"flows\": [\n    {\n      \"deviceId\": \"device:leaf1\",\n      \"tableId\": \"FabricIngress.spgw_ingress.dl_sess_lookup\",\n      \"priority\": 10,\n      \"timeout\": 0,\n      \"isPermanent\": true,\n      \"selector\": {\n        \"criteria\": [\n          {\n            \"type\": \"IPV4_DST\",\n            \"ip\": \"17.0.0.1/32\"\n          }\n        ]\n      },\n      \"treatment\": {\n        \"instructions\": [\n          {\n            \"type\": \"PROTOCOL_INDEPENDENT\",\n            \"subtype\": \"ACTION\",\n            \"actionId\": \"FabricIngress.spgw_ingress.set_dl_sess_info\",\n            \"actionParams\": {\n              \"teid\": \"BEEF\",\n              \"s1u_enb_addr\": \"0a006401\",\n              \"s1u_sgw_addr\": \"0a0064fe\"\n            }\n          }\n        ]\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "solution/mininet/netcfg-gtp.json",
    "content": "{\n  \"devices\": {\n    \"device:leaf1\": {\n      \"basic\": {\n        \"managementAddress\": \"grpc://mininet:50001?device_id=1\",\n        \"driver\": \"stratum-bmv2\",\n        \"pipeconf\": \"org.onosproject.pipelines.fabric-spgw\",\n        \"locType\": \"grid\",\n        \"gridX\": 200,\n        \"gridY\": 600\n      },\n      \"segmentrouting\": {\n        \"name\": \"leaf1\",\n        \"ipv4NodeSid\": 101,\n        \"ipv4Loopback\": \"192.168.1.1\",\n        \"routerMac\": \"00:AA:00:00:00:01\",\n        \"isEdgeRouter\": true,\n        \"adjacencySids\": []\n      }\n    },\n    \"device:leaf2\": {\n      \"basic\": {\n        \"managementAddress\": \"grpc://mininet:50002?device_id=1\",\n        \"driver\": \"stratum-bmv2\",\n        \"pipeconf\": \"org.onosproject.pipelines.fabric\",\n        \"locType\": \"grid\",\n        \"gridX\": 800,\n        \"gridY\": 600\n      },\n      \"segmentrouting\": {\n        \"name\": \"leaf2\",\n        \"ipv4NodeSid\": 102,\n        \"ipv4Loopback\": \"192.168.1.2\",\n        \"routerMac\": \"00:AA:00:00:00:02\",\n        \"isEdgeRouter\": true,\n        \"adjacencySids\": []\n      }\n    },\n    \"device:spine1\": {\n      \"basic\": {\n        \"managementAddress\": \"grpc://mininet:50003?device_id=1\",\n        \"driver\": \"stratum-bmv2\",\n        \"pipeconf\": \"org.onosproject.pipelines.fabric\",\n        \"locType\": \"grid\",\n        \"gridX\": 400,\n        \"gridY\": 400\n      },\n      \"segmentrouting\": {\n        \"name\": \"spine1\",\n        \"ipv4NodeSid\": 201,\n        \"ipv4Loopback\": \"192.168.2.1\",\n        \"routerMac\": \"00:BB:00:00:00:01\",\n        \"isEdgeRouter\": false,\n        \"adjacencySids\": []\n      }\n    },\n    \"device:spine2\": {\n      \"basic\": {\n        \"managementAddress\": \"grpc://mininet:50004?device_id=1\",\n        \"driver\": \"stratum-bmv2\",\n        \"pipeconf\": \"org.onosproject.pipelines.fabric\",\n        \"locType\": \"grid\",\n   
     \"gridX\": 600,\n        \"gridY\": 400\n      },\n      \"segmentrouting\": {\n        \"name\": \"spine2\",\n        \"ipv4NodeSid\": 202,\n        \"ipv4Loopback\": \"192.168.2.2\",\n        \"routerMac\": \"00:BB:00:00:00:02\",\n        \"isEdgeRouter\": false,\n        \"adjacencySids\": []\n      }\n    }\n  },\n  \"ports\": {\n    \"device:leaf1/3\": {\n      \"interfaces\": [\n        {\n          \"name\": \"leaf1-3\",\n          \"ips\": [\n            \"10.0.100.254/24\"\n          ],\n          \"vlan-untagged\": 100\n        }\n      ]\n    },\n    \"device:leaf2/3\": {\n      \"interfaces\": [\n        {\n          \"name\": \"leaf2-3\",\n          \"ips\": [\n            \"10.0.200.254/24\"\n          ],\n          \"vlan-untagged\": 200\n        }\n      ]\n    }\n  },\n  \"hosts\": {\n    \"00:00:00:00:00:10/None\": {\n      \"basic\": {\n        \"name\": \"enodeb\",\n        \"gridX\": 100,\n        \"gridY\": 700,\n        \"locType\": \"grid\",\n        \"ips\": [\n          \"10.0.100.1\"\n        ],\n        \"locations\": [\n          \"device:leaf1/3\"\n        ]\n      }\n    },\n    \"00:00:00:00:00:20/None\": {\n      \"basic\": {\n        \"name\": \"pdn\",\n        \"gridX\": 850,\n        \"gridY\": 700,\n        \"locType\": \"grid\",\n        \"ips\": [\n          \"10.0.200.1\"\n        ],\n        \"locations\": [\n          \"device:leaf2/3\"\n        ]\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "solution/mininet/netcfg-sr.json",
    "content": "{\n  \"devices\": {\n    \"device:leaf1\": {\n      \"basic\": {\n        \"managementAddress\": \"grpc://mininet:50001?device_id=1\",\n        \"driver\": \"stratum-bmv2\",\n        \"pipeconf\": \"org.onosproject.pipelines.fabric\",\n        \"locType\": \"grid\",\n        \"gridX\": 200,\n        \"gridY\": 600\n      },\n      \"segmentrouting\": {\n        \"name\": \"leaf1\",\n        \"ipv4NodeSid\": 101,\n        \"ipv4Loopback\": \"192.168.1.1\",\n        \"routerMac\": \"00:AA:00:00:00:01\",\n        \"isEdgeRouter\": true,\n        \"adjacencySids\": []\n      }\n    },\n    \"device:leaf2\": {\n      \"basic\": {\n        \"managementAddress\": \"grpc://mininet:50002?device_id=1\",\n        \"driver\": \"stratum-bmv2\",\n        \"pipeconf\": \"org.onosproject.pipelines.fabric\",\n        \"locType\": \"grid\",\n        \"gridX\": 800,\n        \"gridY\": 600\n      },\n      \"segmentrouting\": {\n        \"name\": \"leaf2\",\n        \"ipv4NodeSid\": 102,\n        \"ipv4Loopback\": \"192.168.1.2\",\n        \"routerMac\": \"00:AA:00:00:00:02\",\n        \"isEdgeRouter\": true,\n        \"adjacencySids\": []\n      }\n    },\n    \"device:spine1\": {\n      \"basic\": {\n        \"managementAddress\": \"grpc://mininet:50003?device_id=1\",\n        \"driver\": \"stratum-bmv2\",\n        \"pipeconf\": \"org.onosproject.pipelines.fabric\",\n        \"locType\": \"grid\",\n        \"gridX\": 400,\n        \"gridY\": 400\n      },\n      \"segmentrouting\": {\n        \"name\": \"spine1\",\n        \"ipv4NodeSid\": 201,\n        \"ipv4Loopback\": \"192.168.2.1\",\n        \"routerMac\": \"00:BB:00:00:00:01\",\n        \"isEdgeRouter\": false,\n        \"adjacencySids\": []\n      }\n    },\n    \"device:spine2\": {\n      \"basic\": {\n        \"managementAddress\": \"grpc://mininet:50004?device_id=1\",\n        \"driver\": \"stratum-bmv2\",\n        \"pipeconf\": \"org.onosproject.pipelines.fabric\",\n        \"locType\": \"grid\",\n        
\"gridX\": 600,\n        \"gridY\": 400\n      },\n      \"segmentrouting\": {\n        \"name\": \"spine2\",\n        \"ipv4NodeSid\": 202,\n        \"ipv4Loopback\": \"192.168.2.2\",\n        \"routerMac\": \"00:BB:00:00:00:02\",\n        \"isEdgeRouter\": false,\n        \"adjacencySids\": []\n      }\n    }\n  },\n  \"ports\": {\n    \"device:leaf1/3\": {\n      \"interfaces\": [\n        {\n          \"name\": \"leaf1-3\",\n          \"ips\": [\n            \"172.16.1.254/24\"\n          ],\n          \"vlan-untagged\": 100\n        }\n      ]\n    },\n    \"device:leaf1/4\": {\n      \"interfaces\": [\n        {\n          \"name\": \"leaf1-4\",\n          \"ips\": [\n            \"172.16.1.254/24\"\n          ],\n          \"vlan-untagged\": 100\n        }\n      ]\n    },\n    \"device:leaf1/5\": {\n      \"interfaces\": [\n        {\n          \"name\": \"leaf1-5\",\n          \"ips\": [\n            \"172.16.1.254/24\"\n          ],\n          \"vlan-tagged\": [\n            100\n          ]\n        }\n      ]\n    },\n    \"device:leaf1/6\": {\n      \"interfaces\": [\n        {\n          \"name\": \"leaf1-6\",\n          \"ips\": [\n            \"172.16.2.254/24\"\n          ],\n          \"vlan-tagged\": [\n            200\n          ]\n        }\n      ]\n    },\n    \"device:leaf2/3\": {\n      \"interfaces\": [\n        {\n          \"name\": \"leaf2-3\",\n          \"ips\": [\n            \"172.16.3.254/24\"\n          ],\n          \"vlan-tagged\": [\n            300\n          ]\n        }\n      ]\n    },\n    \"device:leaf2/4\": {\n      \"interfaces\": [\n        {\n          \"name\": \"leaf2-4\",\n          \"ips\": [\n            \"172.16.4.254/24\"\n          ],\n          \"vlan-untagged\": 400\n        }\n      ]\n    }\n  },\n  \"hosts\": {\n    \"00:00:00:00:00:1A/None\": {\n      \"basic\": {\n        \"name\": \"h1a\",\n        \"locType\": \"grid\",\n        \"gridX\": 100,\n        \"gridY\": 700\n      }\n    },\n    
\"00:00:00:00:00:1B/None\": {\n      \"basic\": {\n        \"name\": \"h1b\",\n        \"locType\": \"grid\",\n        \"gridX\": 100,\n        \"gridY\": 800\n      }\n    },\n    \"00:00:00:00:00:1C/100\": {\n      \"basic\": {\n        \"name\": \"h1c\",\n        \"locType\": \"grid\",\n        \"gridX\": 250,\n        \"gridY\": 800\n      }\n    },\n    \"00:00:00:00:00:20/200\": {\n      \"basic\": {\n        \"name\": \"h2\",\n        \"locType\": \"grid\",\n        \"gridX\": 400,\n        \"gridY\": 700\n      }\n    },\n    \"00:00:00:00:00:30/300\": {\n      \"basic\": {\n        \"name\": \"h3\",\n        \"locType\": \"grid\",\n        \"gridX\": 750,\n        \"gridY\": 700\n      }\n    },\n    \"00:00:00:00:00:40/None\": {\n      \"basic\": {\n        \"name\": \"h4\",\n        \"locType\": \"grid\",\n        \"gridX\": 850,\n        \"gridY\": 700\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "solution/p4src/main.p4",
    "content": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n\n#include <core.p4>\n#include <v1model.p4>\n\n// CPU_PORT specifies the P4 port number associated to controller packet-in and\n// packet-out. All packets forwarded via this port will be delivered to the\n// controller as P4Runtime PacketIn messages. Similarly, PacketOut messages from\n// the controller will be seen by the P4 pipeline as coming from the CPU_PORT.\n#define CPU_PORT 255\n\n// CPU_CLONE_SESSION_ID specifies the mirroring session for packets to be cloned\n// to the CPU port. Packets associated with this session ID will be cloned to\n// the CPU_PORT as well as being transmitted via their egress port (set by the\n// bridging/routing/acl table). 
For cloning to work, the P4Runtime controller\n// needs first to insert a CloneSessionEntry that maps this session ID to the\n// CPU_PORT.\n#define CPU_CLONE_SESSION_ID 99\n\n// Maximum number of hops supported when using SRv6.\n// Required for Exercise 7.\n#define SRV6_MAX_HOPS 4\n\ntypedef bit<9>   port_num_t;\ntypedef bit<48>  mac_addr_t;\ntypedef bit<16>  mcast_group_id_t;\ntypedef bit<32>  ipv4_addr_t;\ntypedef bit<128> ipv6_addr_t;\ntypedef bit<16>  l4_port_t;\n\nconst bit<16> ETHERTYPE_IPV4 = 0x0800;\nconst bit<16> ETHERTYPE_IPV6 = 0x86dd;\n\nconst bit<8> IP_PROTO_ICMP   = 1;\nconst bit<8> IP_PROTO_TCP    = 6;\nconst bit<8> IP_PROTO_UDP    = 17;\nconst bit<8> IP_PROTO_SRV6   = 43;\nconst bit<8> IP_PROTO_ICMPV6 = 58;\n\nconst mac_addr_t IPV6_MCAST_01 = 0x33_33_00_00_00_01;\n\nconst bit<8> ICMP6_TYPE_NS = 135;\nconst bit<8> ICMP6_TYPE_NA = 136;\n\nconst bit<8> NDP_OPT_TARGET_LL_ADDR = 2;\n\nconst bit<32> NDP_FLAG_ROUTER    = 0x80000000;\nconst bit<32> NDP_FLAG_SOLICITED = 0x40000000;\nconst bit<32> NDP_FLAG_OVERRIDE  = 0x20000000;\n\n\n//------------------------------------------------------------------------------\n// HEADER DEFINITIONS\n//------------------------------------------------------------------------------\n\nheader ethernet_t {\n    mac_addr_t  dst_addr;\n    mac_addr_t  src_addr;\n    bit<16>     ether_type;\n}\n\nheader ipv4_t {\n    bit<4>   version;\n    bit<4>   ihl;\n    bit<6>   dscp;\n    bit<2>   ecn;\n    bit<16>  total_len;\n    bit<16>  identification;\n    bit<3>   flags;\n    bit<13>  frag_offset;\n    bit<8>   ttl;\n    bit<8>   protocol;\n    bit<16>  hdr_checksum;\n    bit<32>  src_addr;\n    bit<32>  dst_addr;\n}\n\nheader ipv6_t {\n    bit<4>    version;\n    bit<8>    traffic_class;\n    bit<20>   flow_label;\n    bit<16>   payload_len;\n    bit<8>    next_hdr;\n    bit<8>    hop_limit;\n    bit<128>  src_addr;\n    bit<128>  dst_addr;\n}\n\nheader srv6h_t {\n    bit<8>   next_hdr;\n    bit<8>   hdr_ext_len;\n    bit<8>   
routing_type;\n    bit<8>   segment_left;\n    bit<8>   last_entry;\n    bit<8>   flags;\n    bit<16>  tag;\n}\n\nheader srv6_list_t {\n    bit<128>  segment_id;\n}\n\nheader tcp_t {\n    bit<16>  src_port;\n    bit<16>  dst_port;\n    bit<32>  seq_no;\n    bit<32>  ack_no;\n    bit<4>   data_offset;\n    bit<3>   res;\n    bit<3>   ecn;\n    bit<6>   ctrl;\n    bit<16>  window;\n    bit<16>  checksum;\n    bit<16>  urgent_ptr;\n}\n\nheader udp_t {\n    bit<16> src_port;\n    bit<16> dst_port;\n    bit<16> len;\n    bit<16> checksum;\n}\n\nheader icmp_t {\n    bit<8>   type;\n    bit<8>   icmp_code;\n    bit<16>  checksum;\n    bit<16>  identifier;\n    bit<16>  sequence_number;\n    bit<64>  timestamp;\n}\n\nheader icmpv6_t {\n    bit<8>   type;\n    bit<8>   code;\n    bit<16>  checksum;\n}\n\nheader ndp_t {\n    bit<32>      flags;\n    ipv6_addr_t  target_ipv6_addr;\n    // NDP option.\n    bit<8>       type;\n    bit<8>       length;\n    bit<48>      target_mac_addr;\n}\n\n// Packet-in header. Prepended to packets sent to the CPU_PORT and used by the\n// P4Runtime server (Stratum) to populate the PacketIn message metadata fields.\n// Here we use it to carry the original ingress port where the packet was\n// received.\n@controller_header(\"packet_in\")\nheader cpu_in_header_t {\n    port_num_t  ingress_port;\n    bit<7>      _pad;\n}\n\n// Packet-out header. Prepended to packets received from the CPU_PORT. Fields of\n// this header are populated by the P4Runtime server based on the P4Runtime\n// PacketOut metadata fields. 
Here we use it to inform the P4 pipeline on which\n// port this packet-out should be transmitted.\n@controller_header(\"packet_out\")\nheader cpu_out_header_t {\n    port_num_t  egress_port;\n    bit<7>      _pad;\n}\n\nstruct parsed_headers_t {\n    cpu_out_header_t cpu_out;\n    cpu_in_header_t cpu_in;\n    ethernet_t ethernet;\n    ipv4_t ipv4;\n    ipv6_t ipv6;\n    srv6h_t srv6h;\n    srv6_list_t[SRV6_MAX_HOPS] srv6_list;\n    tcp_t tcp;\n    udp_t udp;\n    icmp_t icmp;\n    icmpv6_t icmpv6;\n    ndp_t ndp;\n}\n\nstruct local_metadata_t {\n    l4_port_t   l4_src_port;\n    l4_port_t   l4_dst_port;\n    bool        is_multicast;\n    ipv6_addr_t next_srv6_sid;\n    bit<8>      ip_proto;\n    bit<8>      icmp_type;\n}\n\n\n//------------------------------------------------------------------------------\n// INGRESS PIPELINE\n//------------------------------------------------------------------------------\n\nparser ParserImpl (packet_in packet,\n                   out parsed_headers_t hdr,\n                   inout local_metadata_t local_metadata,\n                   inout standard_metadata_t standard_metadata)\n{\n    state start {\n        transition select(standard_metadata.ingress_port) {\n            CPU_PORT: parse_packet_out;\n            default: parse_ethernet;\n        }\n    }\n\n    state parse_packet_out {\n        packet.extract(hdr.cpu_out);\n        transition parse_ethernet;\n    }\n\n    state parse_ethernet {\n        packet.extract(hdr.ethernet);\n        transition select(hdr.ethernet.ether_type){\n            ETHERTYPE_IPV4: parse_ipv4;\n            ETHERTYPE_IPV6: parse_ipv6;\n            default: accept;\n        }\n    }\n\n    state parse_ipv4 {\n        packet.extract(hdr.ipv4);\n        local_metadata.ip_proto = hdr.ipv4.protocol;\n        transition select(hdr.ipv4.protocol) {\n            IP_PROTO_TCP: parse_tcp;\n            IP_PROTO_UDP: parse_udp;\n            IP_PROTO_ICMP: parse_icmp;\n            default: accept;\n        }\n   
 }\n\n    state parse_ipv6 {\n        packet.extract(hdr.ipv6);\n        local_metadata.ip_proto = hdr.ipv6.next_hdr;\n        transition select(hdr.ipv6.next_hdr) {\n            IP_PROTO_TCP: parse_tcp;\n            IP_PROTO_UDP: parse_udp;\n            IP_PROTO_ICMPV6: parse_icmpv6;\n            IP_PROTO_SRV6: parse_srv6;\n            default: accept;\n        }\n    }\n\n    state parse_tcp {\n        packet.extract(hdr.tcp);\n        local_metadata.l4_src_port = hdr.tcp.src_port;\n        local_metadata.l4_dst_port = hdr.tcp.dst_port;\n        transition accept;\n    }\n\n    state parse_udp {\n        packet.extract(hdr.udp);\n        local_metadata.l4_src_port = hdr.udp.src_port;\n        local_metadata.l4_dst_port = hdr.udp.dst_port;\n        transition accept;\n    }\n\n    state parse_icmp {\n        packet.extract(hdr.icmp);\n        local_metadata.icmp_type = hdr.icmp.type;\n        transition accept;\n    }\n\n    state parse_icmpv6 {\n        packet.extract(hdr.icmpv6);\n        local_metadata.icmp_type = hdr.icmpv6.type;\n        transition select(hdr.icmpv6.type) {\n            ICMP6_TYPE_NS: parse_ndp;\n            ICMP6_TYPE_NA: parse_ndp;\n            default: accept;\n        }\n    }\n\n    state parse_ndp {\n        packet.extract(hdr.ndp);\n        transition accept;\n    }\n\n    state parse_srv6 {\n        packet.extract(hdr.srv6h);\n        transition parse_srv6_list;\n    }\n\n    state parse_srv6_list {\n        packet.extract(hdr.srv6_list.next);\n        bool next_segment = (bit<32>)hdr.srv6h.segment_left - 1 == (bit<32>)hdr.srv6_list.lastIndex;\n        transition select(next_segment) {\n            true: mark_current_srv6;\n            default: check_last_srv6;\n        }\n    }\n\n    state mark_current_srv6 {\n        local_metadata.next_srv6_sid = hdr.srv6_list.last.segment_id;\n        transition check_last_srv6;\n    }\n\n    state check_last_srv6 {\n        // working with bit<8> and int<32> which cannot be cast directly; 
using\n        // bit<32> as common intermediate type for comparison\n        bool last_segment = (bit<32>)hdr.srv6h.last_entry == (bit<32>)hdr.srv6_list.lastIndex;\n        transition select(last_segment) {\n           true: parse_srv6_next_hdr;\n           false: parse_srv6_list;\n        }\n    }\n\n    state parse_srv6_next_hdr {\n        transition select(hdr.srv6h.next_hdr) {\n            IP_PROTO_TCP: parse_tcp;\n            IP_PROTO_UDP: parse_udp;\n            IP_PROTO_ICMPV6: parse_icmpv6;\n            default: accept;\n        }\n    }\n}\n\n\ncontrol VerifyChecksumImpl(inout parsed_headers_t hdr,\n                           inout local_metadata_t meta)\n{\n    // Not used here. We assume all packets have valid checksums; if not, we let\n    // the end hosts detect errors.\n    apply { /* EMPTY */ }\n}\n\n\ncontrol IngressPipeImpl (inout parsed_headers_t    hdr,\n                         inout local_metadata_t    local_metadata,\n                         inout standard_metadata_t standard_metadata) {\n\n    // Drop action shared by many tables.\n    action drop() {\n        mark_to_drop(standard_metadata);\n    }\n\n\n    // *** L2 BRIDGING\n    //\n    // Here we define tables to forward packets based on their Ethernet\n    // destination address. There are two types of L2 entries that we\n    // need to support:\n    //\n    // 1. Unicast entries: which will be filled in by the control plane when the\n    //    location (port) of new hosts is learned.\n    // 2. Broadcast/multicast entries: used to replicate NDP Neighbor\n    //    Solicitation (NS) messages to all host-facing ports.\n    //\n    // For (2), unlike ARP messages in IPv4, which are broadcast to Ethernet\n    // destination address FF:FF:FF:FF:FF:FF, NDP messages are sent to special\n    // Ethernet addresses specified by RFC2464. These addresses are prefixed\n    // with 33:33 and the last four octets are the last four octets of the IPv6\n    // destination multicast address. 
The most straightforward way of matching\n    // on such IPv6 broadcast/multicast packets, without digging into the details\n    // of RFC2464, is to use a ternary match on 33:33:**:**:**:**, where * means\n    // \"don't care\".\n    //\n    // For this reason, our solution defines two tables. One that matches in an\n    // exact fashion (easier to scale on switch ASIC memory) and one that uses\n    // ternary matching (which requires more expensive TCAM memories, usually\n    // much smaller).\n\n    // --- l2_exact_table (for unicast entries) --------------------------------\n\n    action set_egress_port(port_num_t port_num) {\n        standard_metadata.egress_spec = port_num;\n    }\n\n    table l2_exact_table {\n        key = {\n            hdr.ethernet.dst_addr: exact;\n        }\n        actions = {\n            set_egress_port;\n            @defaultonly drop;\n        }\n        const default_action = drop;\n        // The @name annotation is used here to provide a name to this table\n        // counter, as it will be needed by the compiler to generate the\n        // corresponding P4Info entity.\n        @name(\"l2_exact_table_counter\")\n        counters = direct_counter(CounterType.packets_and_bytes);\n    }\n\n    // --- l2_ternary_table (for broadcast/multicast entries) ------------------\n\n    action set_multicast_group(mcast_group_id_t gid) {\n        // gid will be used by the Packet Replication Engine (PRE) in the\n        // Traffic Manager (located right after the ingress pipeline) to\n        // replicate a packet to multiple egress ports, specified by the control\n        // plane by means of P4Runtime MulticastGroupEntry messages.\n        standard_metadata.mcast_grp = gid;\n        local_metadata.is_multicast = true;\n    }\n\n    table l2_ternary_table {\n        key = {\n            hdr.ethernet.dst_addr: ternary;\n        }\n        actions = {\n            set_multicast_group;\n            @defaultonly drop;\n        }\n        const 
default_action = drop;\n        @name(\"l2_ternary_table_counter\")\n        counters = direct_counter(CounterType.packets_and_bytes);\n    }\n\n\n    // *** TODO EXERCISE 5 (IPV6 ROUTING)\n    //\n    // 1. Create a table to handle NDP messages to resolve the MAC address of\n    //    the switch. This table should:\n    //    - match on hdr.ndp.target_ipv6_addr (exact match)\n    //    - provide action \"ndp_ns_to_na\" (look in snippets.p4)\n    //    - default_action should be \"NoAction\"\n    //\n    // 2. Create an L2 my station table to support IPv6 routing (hit when the\n    //    Ethernet destination address is the switch address). This table\n    //    should not do anything to the packet (i.e., NoAction), but the control\n    //    block below should use the result (table.hit) to decide how to process\n    //    the packet.\n    //\n    // 3. Create a table for IPv6 routing. An action selector should be used to\n    //    pick a next hop MAC address according to a hash of packet header\n    //    fields (IPv6 source/destination address and the flow label). Look in\n    //    snippets.p4 for an example of an action selector and table using it.\n    //\n    // You can name your tables whatever you like. 
You will need to fill\n    // the name in elsewhere in this exercise.\n\n    // --- ndp_reply_table -----------------------------------------------------\n\n    action ndp_ns_to_na(mac_addr_t target_mac) {\n        hdr.ethernet.src_addr = target_mac;\n        hdr.ethernet.dst_addr = IPV6_MCAST_01;\n        ipv6_addr_t host_ipv6_tmp = hdr.ipv6.src_addr;\n        hdr.ipv6.src_addr = hdr.ndp.target_ipv6_addr;\n        hdr.ipv6.dst_addr = host_ipv6_tmp;\n        hdr.ipv6.next_hdr = IP_PROTO_ICMPV6;\n        hdr.icmpv6.type = ICMP6_TYPE_NA;\n        hdr.ndp.flags = NDP_FLAG_ROUTER | NDP_FLAG_OVERRIDE;\n        hdr.ndp.type = NDP_OPT_TARGET_LL_ADDR;\n        hdr.ndp.length = 1;\n        hdr.ndp.target_mac_addr = target_mac;\n        standard_metadata.egress_spec = standard_metadata.ingress_port;\n    }\n\n    table ndp_reply_table {\n        key = {\n            hdr.ndp.target_ipv6_addr: exact;\n        }\n        actions = {\n            ndp_ns_to_na;\n        }\n        @name(\"ndp_reply_table_counter\")\n        counters = direct_counter(CounterType.packets_and_bytes);\n    }\n\n    // --- my_station_table ---------------------------------------------------\n\n    table my_station_table {\n        key = {\n            hdr.ethernet.dst_addr: exact;\n        }\n        actions = { NoAction; }\n        @name(\"my_station_table_counter\")\n        counters = direct_counter(CounterType.packets_and_bytes);\n    }\n\n    // --- routing_v6_table ----------------------------------------------------\n\n    action_selector(HashAlgorithm.crc16, 32w1024, 32w16) ecmp_selector;\n\n    action set_next_hop(mac_addr_t dmac) {\n        hdr.ethernet.src_addr = hdr.ethernet.dst_addr;\n        hdr.ethernet.dst_addr = dmac;\n        // Decrement TTL\n        hdr.ipv6.hop_limit = hdr.ipv6.hop_limit - 1;\n    }\n    table routing_v6_table {\n      key = {\n          hdr.ipv6.dst_addr:          lpm;\n          // The following fields are not used for matching, but as input to the\n          // 
ecmp_selector hash function.\n          hdr.ipv6.dst_addr:          selector;\n          hdr.ipv6.src_addr:          selector;\n          hdr.ipv6.flow_label:        selector;\n          // The rest of the 5-tuple is optional per RFC6438\n          hdr.ipv6.next_hdr:          selector;\n          local_metadata.l4_src_port: selector;\n          local_metadata.l4_dst_port: selector;\n      }\n      actions = {\n          set_next_hop;\n      }\n      implementation = ecmp_selector;\n      @name(\"routing_v6_table_counter\")\n      counters = direct_counter(CounterType.packets_and_bytes);\n    }\n\n    // *** TODO EXERCISE 6 (SRV6)\n    //\n    // Implement tables to provide SRv6 logic.\n\n    // --- srv6_my_sid ---------------------------------------------------------\n\n    // Process the packet if the destination IP is the segment ID (SID) of this\n    // device. This table will decrement the \"segment_left\" field of the SRv6\n    // header and set the destination IP address to the next segment.\n\n    action srv6_end() {\n        hdr.srv6h.segment_left = hdr.srv6h.segment_left - 1;\n        hdr.ipv6.dst_addr = local_metadata.next_srv6_sid;\n    }\n\n    direct_counter(CounterType.packets_and_bytes) srv6_my_sid_table_counter;\n    table srv6_my_sid {\n      key = {\n          hdr.ipv6.dst_addr: lpm;\n      }\n      actions = {\n          srv6_end;\n      }\n      counters = srv6_my_sid_table_counter;\n    }\n\n    // --- srv6_transit --------------------------------------------------------\n\n    // Inserts the SRv6 header into the IPv6 packet based on the destination IP\n    // address.\n\n\n    action insert_srv6h_header(bit<8> num_segments) {\n        hdr.srv6h.setValid();\n        hdr.srv6h.next_hdr = hdr.ipv6.next_hdr;\n        hdr.srv6h.hdr_ext_len = num_segments * 2;\n        hdr.srv6h.routing_type = 4;\n        hdr.srv6h.segment_left = num_segments - 1;\n        hdr.srv6h.last_entry = num_segments - 1;\n        hdr.srv6h.flags = 0;\n        
hdr.srv6h.tag = 0;\n        hdr.ipv6.next_hdr = IP_PROTO_SRV6;\n    }\n\n    /*\n       Single segment header doesn't make sense given PSP\n       i.e. we will pop the SRv6 header when segments_left reaches 0\n     */\n\n    action srv6_t_insert_2(ipv6_addr_t s1, ipv6_addr_t s2) {\n        hdr.ipv6.dst_addr = s1;\n        hdr.ipv6.payload_len = hdr.ipv6.payload_len + 40;\n        insert_srv6h_header(2);\n        hdr.srv6_list[0].setValid();\n        hdr.srv6_list[0].segment_id = s2;\n        hdr.srv6_list[1].setValid();\n        hdr.srv6_list[1].segment_id = s1;\n    }\n\n    action srv6_t_insert_3(ipv6_addr_t s1, ipv6_addr_t s2, ipv6_addr_t s3) {\n        hdr.ipv6.dst_addr = s1;\n        hdr.ipv6.payload_len = hdr.ipv6.payload_len + 56;\n        insert_srv6h_header(3);\n        hdr.srv6_list[0].setValid();\n        hdr.srv6_list[0].segment_id = s3;\n        hdr.srv6_list[1].setValid();\n        hdr.srv6_list[1].segment_id = s2;\n        hdr.srv6_list[2].setValid();\n        hdr.srv6_list[2].segment_id = s1;\n    }\n\n    direct_counter(CounterType.packets_and_bytes) srv6_transit_table_counter;\n    table srv6_transit {\n      key = {\n          hdr.ipv6.dst_addr: lpm;\n          // TODO: what other fields do we want to match?\n      }\n      actions = {\n          srv6_t_insert_2;\n          srv6_t_insert_3;\n          // Extra credit: set a metadata field, then push label stack in egress\n      }\n      counters = srv6_transit_table_counter;\n    }\n\n    // Called directly in the apply block.\n    action srv6_pop() {\n      hdr.ipv6.next_hdr = hdr.srv6h.next_hdr;\n      // SRv6 header is 8 bytes\n      // SRv6 list entry is 16 bytes each\n      // (((bit<16>)hdr.srv6h.last_entry + 1) * 16) + 8;\n      bit<16> srv6h_size = (((bit<16>)hdr.srv6h.last_entry + 1) << 4) + 8;\n      hdr.ipv6.payload_len = hdr.ipv6.payload_len - srv6h_size;\n\n      hdr.srv6h.setInvalid();\n      // Need to set MAX_HOPS headers invalid\n      hdr.srv6_list[0].setInvalid();\n      
hdr.srv6_list[1].setInvalid();\n      hdr.srv6_list[2].setInvalid();\n    }\n\n    // *** ACL\n    //\n    // Provides ways to override a previous forwarding decision, for example\n    // requiring that a packet is cloned/sent to the CPU, or dropped.\n    //\n    // We use this table to clone all NDP packets to the control plane, so as to\n    // enable host discovery. When the location of a new host is discovered, the\n    // controller is expected to update the L2 and L3 tables with the\n    // corresponding bridging and routing entries.\n\n    action send_to_cpu() {\n        standard_metadata.egress_spec = CPU_PORT;\n    }\n\n    action clone_to_cpu() {\n        // Cloning is achieved by using a v1model-specific primitive. Here we\n        // set the type of clone operation (ingress-to-egress pipeline), the\n        // clone session ID (the CPU one), and the metadata fields we want to\n        // preserve for the cloned packet replica.\n        clone3(CloneType.I2E, CPU_CLONE_SESSION_ID, { standard_metadata.ingress_port });\n    }\n\n    table acl_table {\n        key = {\n            standard_metadata.ingress_port: ternary;\n            hdr.ethernet.dst_addr:          ternary;\n            hdr.ethernet.src_addr:          ternary;\n            hdr.ethernet.ether_type:        ternary;\n            local_metadata.ip_proto:        ternary;\n            local_metadata.icmp_type:       ternary;\n            local_metadata.l4_src_port:     ternary;\n            local_metadata.l4_dst_port:     ternary;\n        }\n        actions = {\n            send_to_cpu;\n            clone_to_cpu;\n            drop;\n        }\n        @name(\"acl_table_counter\")\n        counters = direct_counter(CounterType.packets_and_bytes);\n    }\n\n    apply {\n\n        if (hdr.cpu_out.isValid()) {\n            // *** TODO EXERCISE 4\n            // Implement logic such that if this is a packet-out from the\n            // controller:\n            // 1. 
Set the packet egress port to that found in the cpu_out header\n            // 2. Remove (set invalid) the cpu_out header\n            // 3. Exit the pipeline here (no need to go through other tables)\n\n            standard_metadata.egress_spec = hdr.cpu_out.egress_port;\n            hdr.cpu_out.setInvalid();\n            exit;\n        }\n\n        bool do_l3_l2 = true;\n\n        if (hdr.icmpv6.isValid() && hdr.icmpv6.type == ICMP6_TYPE_NS) {\n            // *** TODO EXERCISE 5\n            // Insert logic to handle NDP messages to resolve the MAC address of the\n            // switch. You should apply the NDP reply table created before.\n            // If this is an NDP NS packet, i.e., if a matching entry is found,\n            // unset the \"do_l3_l2\" flag to skip the L3 and L2 tables, as the\n            // \"ndp_ns_to_na\" action already set an egress port.\n\n            if (ndp_reply_table.apply().hit) {\n                do_l3_l2 = false;\n            }\n        }\n\n        if (do_l3_l2) {\n\n            // *** TODO EXERCISE 5\n            // Insert logic to match the My Station table and, upon hit, the\n            // routing table. You should also add a conditional to drop the\n            // packet if the hop_limit reaches 0.\n\n            // *** TODO EXERCISE 6\n            // Insert logic to match the SRv6 My SID and Transit tables as well\n            // as logic to perform PSP behavior. 
HINT: This logic belongs\n            // somewhere between checking the switch's my station table and\n            // applying the routing table.\n\n            if (hdr.ipv6.isValid() && my_station_table.apply().hit) {\n\n                if (srv6_my_sid.apply().hit) {\n                    // PSP logic -- enabled for all packets\n                    if (hdr.srv6h.isValid() && hdr.srv6h.segment_left == 0) {\n                        srv6_pop();\n                    }\n                } else {\n                    srv6_transit.apply();\n                }\n\n                routing_v6_table.apply();\n                // Check TTL, drop packet if necessary to avoid loops.\n                if(hdr.ipv6.hop_limit == 0) { drop(); }\n            }\n\n            // L2 bridging logic. Apply the exact table first...\n            if (!l2_exact_table.apply().hit) {\n                // ...if an entry is NOT found, apply the ternary one in case\n                // this is a multicast/broadcast NDP NS packet.\n                l2_ternary_table.apply();\n            }\n        }\n\n        // Lastly, apply the ACL table.\n        acl_table.apply();\n    }\n}\n\n\ncontrol EgressPipeImpl (inout parsed_headers_t hdr,\n                        inout local_metadata_t local_metadata,\n                        inout standard_metadata_t standard_metadata) {\n    apply {\n\n        if (standard_metadata.egress_port == CPU_PORT) {\n            // *** TODO EXERCISE 4\n            // Implement logic such that if the packet is to be forwarded to the\n            // CPU port, e.g., if in ingress we matched on the ACL table with\n            // action send/clone_to_cpu...\n            // 1. Set cpu_in header as valid\n            // 2. 
Set the cpu_in.ingress_port field to the original packet's\n            //    ingress port (standard_metadata.ingress_port).\n\n            hdr.cpu_in.setValid();\n            hdr.cpu_in.ingress_port = standard_metadata.ingress_port;\n            exit;\n        }\n\n        // If this is a multicast packet (flag set by l2_ternary_table), make\n        // sure we are not replicating the packet on the same port where it was\n        // received. This is useful to avoid broadcasting NDP requests on the\n        // ingress port.\n        if (local_metadata.is_multicast == true &&\n              standard_metadata.ingress_port == standard_metadata.egress_port) {\n            mark_to_drop(standard_metadata);\n        }\n    }\n}\n\n\ncontrol ComputeChecksumImpl(inout parsed_headers_t hdr,\n                            inout local_metadata_t local_metadata)\n{\n    apply {\n        // The following is used to update the ICMPv6 checksum of NDP\n        // NA packets generated by the ndp reply table in the ingress pipeline.\n        // This function is executed only if the NDP header is present.\n        update_checksum(hdr.ndp.isValid(),\n            {\n                hdr.ipv6.src_addr,\n                hdr.ipv6.dst_addr,\n                hdr.ipv6.payload_len,\n                8w0,\n                hdr.ipv6.next_hdr,\n                hdr.icmpv6.type,\n                hdr.icmpv6.code,\n                hdr.ndp.flags,\n                hdr.ndp.target_ipv6_addr,\n                hdr.ndp.type,\n                hdr.ndp.length,\n                hdr.ndp.target_mac_addr\n            },\n            hdr.icmpv6.checksum,\n            HashAlgorithm.csum16\n        );\n    }\n}\n\n\ncontrol DeparserImpl(packet_out packet, in parsed_headers_t hdr) {\n    apply {\n        packet.emit(hdr.cpu_in);\n        packet.emit(hdr.ethernet);\n        packet.emit(hdr.ipv4);\n        packet.emit(hdr.ipv6);\n        packet.emit(hdr.srv6h);\n        packet.emit(hdr.srv6_list);\n        
packet.emit(hdr.tcp);\n        packet.emit(hdr.udp);\n        packet.emit(hdr.icmp);\n        packet.emit(hdr.icmpv6);\n        packet.emit(hdr.ndp);\n    }\n}\n\n\nV1Switch(\n    ParserImpl(),\n    VerifyChecksumImpl(),\n    IngressPipeImpl(),\n    EgressPipeImpl(),\n    ComputeChecksumImpl(),\n    DeparserImpl()\n) main;\n"
  },
  {
    "path": "solution/ptf/tests/bridging.py",
    "content": "# Copyright 2013-present Barefoot Networks, Inc.\n# Copyright 2018-present Open Networking Foundation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n# ------------------------------------------------------------------------------\n# BRIDGING TESTS\n#\n# To run all tests in this file:\n#     make p4-test TEST=bridging\n#\n# To run a specific test case:\n#     make p4-test TEST=bridging.<TEST CLASS NAME>\n#\n# For example:\n#     make p4-test TEST=bridging.BridgingTest\n# ------------------------------------------------------------------------------\n\n# ------------------------------------------------------------------------------\n# Modify everywhere you see TODO\n#\n# When providing your solution, make sure to use the same names for P4Runtime\n# entities as specified in your P4Info file.\n#\n# Test cases are based on the P4 program design suggested in the exercises\n# README. 
Make sure to modify the test cases accordingly if you decide to\n# implement the pipeline differently.\n# ------------------------------------------------------------------------------\n\nfrom ptf.testutils import group\n\nfrom base_test import *\n\n# From the P4 program.\nCPU_CLONE_SESSION_ID = 99\n\n\n@group(\"bridging\")\nclass ArpNdpRequestWithCloneTest(P4RuntimeTest):\n    \"\"\"Tests ability to broadcast ARP requests and NDP Neighbor Solicitation\n    (NS) messages as well as cloning to CPU (controller) for host\n    discovery.\n    \"\"\"\n\n    def runTest(self):\n        # Test with both ARP and NDP NS packets...\n        print_inline(\"ARP request ... \")\n        arp_pkt = testutils.simple_arp_packet()\n        self.testPacket(arp_pkt)\n\n        print_inline(\"NDP NS ... \")\n        ndp_pkt = genNdpNsPkt(src_mac=HOST1_MAC, src_ip=HOST1_IPV6,\n                              target_ip=HOST2_IPV6)\n        self.testPacket(ndp_pkt)\n\n    @autocleanup\n    def testPacket(self, pkt):\n        mcast_group_id = 10\n        mcast_ports = [self.port1, self.port2, self.port3]\n\n        # Add multicast group.\n        self.insert_pre_multicast_group(\n            group_id=mcast_group_id,\n            ports=mcast_ports)\n\n        # Match eth dst: FF:FF:FF:FF:FF:FF (MAC broadcast for ARP requests)\n        # Modify names to match the content of the P4Info file (look for the\n        # fully qualified names of tables, match fields, and actions).\n        # ---- START SOLUTION ----\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.l2_ternary_table\",\n            match_fields={\n                # Ternary match.\n                \"hdr.ethernet.dst_addr\": (\n                    \"FF:FF:FF:FF:FF:FF\",\n                    \"FF:FF:FF:FF:FF:FF\")\n            },\n            action_name=\"IngressPipeImpl.set_multicast_group\",\n            action_params={\n                \"gid\": mcast_group_id\n            },\n            
priority=DEFAULT_PRIORITY\n        ))\n        # ---- END SOLUTION ----\n\n        # Match eth dst: 33:33:**:**:**:** (IPv6 multicast for NDP requests)\n        # Modify names to match the content of the P4Info file (look for the\n        # fully qualified names of tables, match fields, and actions).\n        # ---- START SOLUTION ----\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.l2_ternary_table\",\n            match_fields={\n                # Ternary match (value, mask)\n                \"hdr.ethernet.dst_addr\": (\n                    \"33:33:00:00:00:00\",\n                    \"FF:FF:00:00:00:00\")\n            },\n            action_name=\"IngressPipeImpl.set_multicast_group\",\n            action_params={\n                \"gid\": mcast_group_id\n            },\n            priority=DEFAULT_PRIORITY\n        ))\n        # ---- END SOLUTION ----\n\n        # Insert CPU clone session.\n        self.insert_pre_clone_session(\n            session_id=CPU_CLONE_SESSION_ID,\n            ports=[self.cpu_port])\n\n        # ACL entry to clone ARPs\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.acl_table\",\n            match_fields={\n                # Ternary match.\n                \"hdr.ethernet.ether_type\": (ARP_ETH_TYPE, 0xffff)\n            },\n            action_name=\"IngressPipeImpl.clone_to_cpu\",\n            priority=DEFAULT_PRIORITY\n        ))\n\n        # ACL entry to clone NDP Neighbor Solicitation\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.acl_table\",\n            match_fields={\n                # Ternary match.\n                \"hdr.ethernet.ether_type\": (IPV6_ETH_TYPE, 0xffff),\n                \"local_metadata.ip_proto\": (ICMPV6_IP_PROTO, 0xff),\n                \"local_metadata.icmp_type\": (NS_ICMPV6_TYPE, 0xff)\n            },\n            action_name=\"IngressPipeImpl.clone_to_cpu\",\n            
priority=DEFAULT_PRIORITY\n        ))\n\n        for inport in mcast_ports:\n\n            # Send packet...\n            testutils.send_packet(self, inport, str(pkt))\n\n            # Pkt should be received on CPU via PacketIn...\n            # Expected P4Runtime PacketIn message.\n            exp_packet_in_msg = self.helper.build_packet_in(\n                payload=str(pkt),\n                metadata={\n                    \"ingress_port\": inport,\n                    \"_pad\": 0\n                })\n            self.verify_packet_in(exp_packet_in_msg)\n\n            # ...and on all ports except the ingress one.\n            verify_ports = set(mcast_ports)\n            verify_ports.discard(inport)\n            for port in verify_ports:\n                testutils.verify_packet(self, pkt, port)\n\n        testutils.verify_no_other_packets(self)\n\n\n@group(\"bridging\")\nclass ArpNdpReplyWithCloneTest(P4RuntimeTest):\n    \"\"\"Tests ability to clone ARP replies and NDP Neighbor Advertisement\n    (NA) messages as well as unicast forwarding to requesting host.\n    \"\"\"\n\n    def runTest(self):\n        # Test with both ARP reply and NDP NA packets...\n        print_inline(\"ARP reply ... \")\n        # op=1 request, op=2 reply\n        arp_pkt = testutils.simple_arp_packet(\n            eth_src=HOST1_MAC, eth_dst=HOST2_MAC, arp_op=2)\n        self.testPacket(arp_pkt)\n\n        print_inline(\"NDP NA ... 
\")\n        ndp_pkt = genNdpNaPkt(target_ip=HOST1_IPV6, target_mac=HOST1_MAC)\n        self.testPacket(ndp_pkt)\n\n    @autocleanup\n    def testPacket(self, pkt):\n\n        # L2 unicast entry, match on pkt's eth dst address.\n        # Modify names to match content of P4Info file (look for the fully\n        # qualified name of tables, match fields, and actions).\n        # ---- START SOLUTION ----\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.l2_exact_table\",\n            match_fields={\n                # Exact match.\n                \"hdr.ethernet.dst_addr\": pkt[Ether].dst\n            },\n            action_name=\"IngressPipeImpl.set_egress_port\",\n            action_params={\n                \"port_num\": self.port2\n            }\n        ))\n        # ---- END SOLUTION ----\n\n        # CPU clone session.\n        self.insert_pre_clone_session(\n            session_id=CPU_CLONE_SESSION_ID,\n            ports=[self.cpu_port])\n\n        # ACL entry to clone ARPs\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.acl_table\",\n            match_fields={\n                # Ternary match.\n                \"hdr.ethernet.ether_type\": (ARP_ETH_TYPE, 0xffff)\n            },\n            action_name=\"IngressPipeImpl.clone_to_cpu\",\n            priority=DEFAULT_PRIORITY\n        ))\n\n        # ACL entry to clone NDP Neighbor Advertisement\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.acl_table\",\n            match_fields={\n                # Ternary match.\n                \"hdr.ethernet.ether_type\": (IPV6_ETH_TYPE, 0xffff),\n                \"local_metadata.ip_proto\": (ICMPV6_IP_PROTO, 0xff),\n                \"local_metadata.icmp_type\": (NA_ICMPV6_TYPE, 0xff)\n            },\n            action_name=\"IngressPipeImpl.clone_to_cpu\",\n            priority=DEFAULT_PRIORITY\n        ))\n\n        
testutils.send_packet(self, self.port1, str(pkt))\n\n        # Pkt should be received on CPU via PacketIn...\n        # Expected P4Runtime PacketIn message.\n        exp_packet_in_msg = self.helper.build_packet_in(\n            payload=str(pkt),\n            metadata={\n                \"ingress_port\": self.port1,\n                \"_pad\": 0\n            })\n        self.verify_packet_in(exp_packet_in_msg)\n\n        # ...and on port2 as indicated by the L2 unicast rule.\n        testutils.verify_packet(self, pkt, self.port2)\n\n\n@group(\"bridging\")\nclass BridgingTest(P4RuntimeTest):\n    \"\"\"Tests basic L2 unicast forwarding\"\"\"\n\n    def runTest(self):\n        # Test with different types of packets.\n        for pkt_type in [\"tcp\", \"udp\", \"icmp\", \"tcpv6\", \"udpv6\", \"icmpv6\"]:\n            print_inline(\"%s ... \" % pkt_type)\n            pkt = getattr(testutils, \"simple_%s_packet\" % pkt_type)(pktlen=120)\n            self.testPacket(pkt)\n\n    @autocleanup\n    def testPacket(self, pkt):\n\n        # Insert L2 unicast entry, match on pkt's eth dst address.\n        # Modify names to match content of P4Info file (look for the fully\n        # qualified name of tables, match fields, and actions).\n        # ---- START SOLUTION ----\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.l2_exact_table\",\n            match_fields={\n                # Exact match.\n                \"hdr.ethernet.dst_addr\": pkt[Ether].dst\n            },\n            action_name=\"IngressPipeImpl.set_egress_port\",\n            action_params={\n                \"port_num\": self.port2\n            }\n        ))\n        # ---- END SOLUTION ----\n\n        # Test bidirectional forwarding by swapping MAC addresses on the pkt\n        pkt2 = pkt_mac_swap(pkt.copy())\n\n        # Insert L2 unicast entry for pkt2.\n        # Modify names to match content of P4Info file (look for the fully\n        # qualified name of tables, 
match fields, and actions).\n        # ---- START SOLUTION ----\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.l2_exact_table\",\n            match_fields={\n                # Exact match.\n                \"hdr.ethernet.dst_addr\": pkt2[Ether].dst\n            },\n            action_name=\"IngressPipeImpl.set_egress_port\",\n            action_params={\n                \"port_num\": self.port1\n            }\n        ))\n        # ---- END SOLUTION ----\n\n        # Send and verify.\n        testutils.send_packet(self, self.port1, str(pkt))\n        testutils.send_packet(self, self.port2, str(pkt2))\n\n        testutils.verify_each_packet_on_each_port(\n            self, [pkt, pkt2], [self.port2, self.port1])\n"
  },
  {
    "path": "solution/ptf/tests/packetio.py",
    "content": "# Copyright 2019-present Open Networking Foundation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n# ------------------------------------------------------------------------------\n# CONTROLLER PACKET-IN/OUT TESTS\n#\n# To run all tests in this file:\n#     make p4-test TEST=packetio\n#\n# To run a specific test case:\n#     make p4-test TEST=packetio.<TEST CLASS NAME>\n#\n# For example:\n#     make p4-test TEST=packetio.PacketOutTest\n# ------------------------------------------------------------------------------\n\n# ------------------------------------------------------------------------------\n# Modify everywhere you see TODO\n#\n# When providing your solution, make sure to use the same names for P4Runtime\n# entities as specified in your P4Info file.\n#\n# Test cases are based on the P4 program design suggested in the exercises\n# README. 
Make sure to modify the test cases accordingly if you decide to\n# implement the pipeline differently.\n# ------------------------------------------------------------------------------\n\nfrom ptf.testutils import group\n\nfrom base_test import *\n\nCPU_CLONE_SESSION_ID = 99\n\n\n@group(\"packetio\")\nclass PacketOutTest(P4RuntimeTest):\n    \"\"\"Tests controller packet-out capability by sending PacketOut messages and\n    expecting a corresponding packet on the output port set in the PacketOut\n    metadata.\n    \"\"\"\n\n    def runTest(self):\n        for pkt_type in [\"tcp\", \"udp\", \"icmp\", \"arp\", \"tcpv6\", \"udpv6\",\n                         \"icmpv6\"]:\n            print_inline(\"%s ... \" % pkt_type)\n            pkt = getattr(testutils, \"simple_%s_packet\" % pkt_type)()\n            self.testPacket(pkt)\n\n    def testPacket(self, pkt):\n        for outport in [self.port1, self.port2]:\n            # Build PacketOut message.\n            # TODO EXERCISE 4\n            # Modify metadata names to match the content of your P4Info file\n            # ---- START SOLUTION ----\n            packet_out_msg = self.helper.build_packet_out(\n                payload=str(pkt),\n                metadata={\n                    \"egress_port\": outport,\n                    \"_pad\": 0\n                })\n            # ---- END SOLUTION ----\n\n            # Send message and expect packet on the given data plane port.\n            self.send_packet_out(packet_out_msg)\n\n            testutils.verify_packet(self, pkt, outport)\n\n        # Make sure packet was forwarded only on the specified ports\n        testutils.verify_no_other_packets(self)\n\n\n@group(\"packetio\")\nclass PacketInTest(P4RuntimeTest):\n    \"\"\"Tests controller packet-in capability by matching on the packet EtherType\n    and cloning to the CPU port.\n    \"\"\"\n\n    def runTest(self):\n        for pkt_type in [\"tcp\", \"udp\", \"icmp\", \"arp\", \"tcpv6\", \"udpv6\",\n                  
       \"icmpv6\"]:\n            print_inline(\"%s ... \" % pkt_type)\n            pkt = getattr(testutils, \"simple_%s_packet\" % pkt_type)()\n            self.testPacket(pkt)\n\n    @autocleanup\n    def testPacket(self, pkt):\n\n        # Insert clone to CPU session\n        self.insert_pre_clone_session(\n            session_id=CPU_CLONE_SESSION_ID,\n            ports=[self.cpu_port])\n\n        # Insert ACL entry to match on the given eth_type and clone to CPU.\n        eth_type = pkt[Ether].type\n        # TODO EXERCISE 4\n        # Modify names to match content of P4Info file (look for the fully\n        # qualified name of the ACL table, EtherType match field, and\n        # clone_to_cpu action).\n        # ---- START SOLUTION ----\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.acl_table\",\n            match_fields={\n                # Ternary match.\n                \"hdr.ethernet.ether_type\": (eth_type, 0xffff)\n            },\n            action_name=\"IngressPipeImpl.clone_to_cpu\",\n            priority=DEFAULT_PRIORITY\n        ))\n        # ---- END SOLUTION ----\n\n        for inport in [self.port1, self.port2, self.port3]:\n            # TODO EXERCISE 4\n            # Modify metadata names to match the content of your P4Info file\n            # ---- START SOLUTION ----\n            # Expected P4Runtime PacketIn message.\n            exp_packet_in_msg = self.helper.build_packet_in(\n                payload=str(pkt),\n                metadata={\n                    \"ingress_port\": inport,\n                    \"_pad\": 0\n                })\n            # ---- END SOLUTION ----\n\n            # Send packet to given switch ingress port and expect P4Runtime\n            # PacketIn message.\n            testutils.send_packet(self, inport, str(pkt))\n            self.verify_packet_in(exp_packet_in_msg)\n"
  },
  {
    "path": "solution/ptf/tests/routing.py",
    "content": "# Copyright 2019-present Open Networking Foundation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n# ------------------------------------------------------------------------------\n# IPV6 ROUTING TESTS\n#\n# To run all tests:\n#     make p4-test TEST=routing\n#\n# To run a specific test case:\n#     make p4-test TEST=routing.<TEST CLASS NAME>\n#\n# For example:\n#     make p4-test TEST=routing.IPv6RoutingTest\n# ------------------------------------------------------------------------------\n\n# ------------------------------------------------------------------------------\n# Modify everywhere you see TODO\n#\n# When providing your solution, make sure to use the same names for P4Runtime\n# entities as specified in your P4Info file.\n#\n# Test cases are based on the P4 program design suggested in the exercises\n# README. Make sure to modify the test cases accordingly if you decide to\n# implement the pipeline differently.\n# ------------------------------------------------------------------------------\n\nfrom ptf.testutils import group\n\nfrom base_test import *\n\n\n@group(\"routing\")\nclass IPv6RoutingTest(P4RuntimeTest):\n    \"\"\"Tests basic IPv6 routing\"\"\"\n\n    def runTest(self):\n        # Test with different type of packets.\n        for pkt_type in [\"tcpv6\", \"udpv6\", \"icmpv6\"]:\n            print_inline(\"%s ... 
\" % pkt_type)\n            pkt = getattr(testutils, \"simple_%s_packet\" % pkt_type)()\n            self.testPacket(pkt)\n\n    @autocleanup\n    def testPacket(self, pkt):\n        next_hop_mac = SWITCH2_MAC\n\n        # Add entry to \"My Station\" table. Consider the given pkt's eth dst addr\n        # as myStationMac address.\n        # *** TODO EXERCISE 5\n        # Modify names to match content of P4Info file (look for the fully\n        # qualified name of tables, match fields, and actions).\n        # ---- START SOLUTION ----\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.my_station_table\",\n            match_fields={\n                # Exact match.\n                \"hdr.ethernet.dst_addr\": pkt[Ether].dst\n            },\n            action_name=\"NoAction\"\n        ))\n        # ---- END SOLUTION ----\n\n        # Insert ECMP group with only one member (next_hop_mac)\n        # *** TODO EXERCISE 5\n        # Modify names to match content of P4Info file (look for the fully\n        # qualified name of tables, match fields, and actions).\n        # ---- START SOLUTION ----\n        self.insert(self.helper.build_act_prof_group(\n            act_prof_name=\"IngressPipeImpl.ecmp_selector\",\n            group_id=1,\n            actions=[\n                # List of tuples (action name, action param dict)\n                (\"IngressPipeImpl.set_next_hop\", {\"dmac\": next_hop_mac}),\n            ]\n        ))\n        # ---- END SOLUTION ----\n\n        # Insert L3 entry to map pkt's IPv6 dst addr to group\n        # *** TODO EXERCISE 5\n        # Modify names to match content of P4Info file (look for the fully\n        # qualified name of tables, match fields, and actions).\n        # ---- START SOLUTION ----\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.routing_v6_table\",\n            match_fields={\n                # LPM match (value, prefix)\n                
\"hdr.ipv6.dst_addr\": (pkt[IPv6].dst, 128)\n            },\n            group_id=1\n        ))\n        # ---- END SOLUTION ----\n\n        # Insert L2 entry to map next_hop_mac to output port 2.\n        # *** TODO EXERCISE 5\n        # Modify names to match content of P4Info file (look for the fully\n        # qualified name of tables, match fields, and actions).\n        # ---- START SOLUTION ----\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.l2_exact_table\",\n            match_fields={\n                # Exact match\n                \"hdr.ethernet.dst_addr\": next_hop_mac\n            },\n            action_name=\"IngressPipeImpl.set_egress_port\",\n            action_params={\n                \"port_num\": self.port2\n            }\n        ))\n        # ---- END SOLUTION ----\n\n        # Expected pkt should have routed MAC addresses and decremented hop\n        # limit (TTL).\n        exp_pkt = pkt.copy()\n        pkt_route(exp_pkt, next_hop_mac)\n        pkt_decrement_ttl(exp_pkt)\n\n        testutils.send_packet(self, self.port1, str(pkt))\n        testutils.verify_packet(self, exp_pkt, self.port2)\n\n\n@group(\"routing\")\nclass NdpReplyGenTest(P4RuntimeTest):\n    \"\"\"Tests automatic generation of NDP Neighbor Advertisement for IPv6\n    addresses associated with the switch interface.\n    \"\"\"\n\n    @autocleanup\n    def runTest(self):\n        switch_ip = SWITCH1_IPV6\n        target_mac = SWITCH1_MAC\n\n        # Insert entry to transform NDP NS packets for the given target address\n        # (match), to NDP NA packets with the given target MAC address (action).\n        # *** TODO EXERCISE 5\n        # Modify names to match content of P4Info file (look for the fully\n        # qualified name of tables, match fields, and actions).\n        # ---- START SOLUTION ----\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.ndp_reply_table\",\n            
match_fields={\n                # Exact match.\n                \"hdr.ndp.target_ipv6_addr\": switch_ip\n            },\n            action_name=\"IngressPipeImpl.ndp_ns_to_na\",\n            action_params={\n                \"target_mac\": target_mac\n            }\n        ))\n        # ---- END SOLUTION ----\n\n        # NDP Neighbor Solicitation packet\n        pkt = genNdpNsPkt(target_ip=switch_ip)\n\n        # NDP Neighbor Advertisement packet\n        exp_pkt = genNdpNaPkt(target_ip=switch_ip,\n                              target_mac=target_mac,\n                              src_mac=target_mac,\n                              src_ip=switch_ip,\n                              dst_ip=pkt[IPv6].src)\n\n        # Send NDP NS, expect NDP NA from the same port.\n        testutils.send_packet(self, self.port1, str(pkt))\n        testutils.verify_packet(self, exp_pkt, self.port1)"
  },
  {
    "path": "solution/ptf/tests/srv6.py",
    "content": "# Copyright 2019-present Open Networking Foundation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n# ------------------------------------------------------------------------------\n# SRV6 TESTS\n#\n# To run all tests:\n#     make p4-test TEST=srv6\n#\n# To run a specific test case:\n#     make p4-test TEST=srv6.<TEST CLASS NAME>\n#\n# For example:\n#     make p4-test TEST=srv6.Srv6InsertTest\n# ------------------------------------------------------------------------------\n\n# ------------------------------------------------------------------------------\n# Modify everywhere you see TODO\n#\n# When providing your solution, make sure to use the same names for P4Runtime\n# entities as specified in your P4Info file.\n#\n# Test cases are based on the P4 program design suggested in the exercises\n# README. 
Make sure to modify the test cases accordingly if you decide to\n# implement the pipeline differently.\n# ------------------------------------------------------------------------------\n\nfrom ptf.testutils import group\n\nfrom base_test import *\n\n\ndef insert_srv6_header(pkt, sid_list):\n    \"\"\"Applies SRv6 insert transformation to the given packet.\n    \"\"\"\n    # Set IPv6 dst to first SID...\n    pkt[IPv6].dst = sid_list[0]\n    # Insert SRv6 header between IPv6 header and payload\n    sid_len = len(sid_list)\n    srv6_hdr = IPv6ExtHdrSegmentRouting(\n        nh=pkt[IPv6].nh,\n        addresses=sid_list[::-1],\n        len=sid_len * 2,\n        segleft=sid_len - 1,\n        lastentry=sid_len - 1)\n    pkt[IPv6].nh = 43  # next IPv6 header is SR header\n    pkt[IPv6].payload = srv6_hdr / pkt[IPv6].payload\n    return pkt\n\n\ndef pop_srv6_header(pkt):\n    \"\"\"Removes SRv6 header from the given packet.\n    \"\"\"\n    pkt[IPv6].nh = pkt[IPv6ExtHdrSegmentRouting].nh\n    pkt[IPv6].payload = pkt[IPv6ExtHdrSegmentRouting].payload\n\n\ndef set_cksum(pkt, cksum):\n    if TCP in pkt:\n        pkt[TCP].chksum = cksum\n    if UDP in pkt:\n        pkt[UDP].chksum = cksum\n    if ICMPv6Unknown in pkt:\n        pkt[ICMPv6Unknown].cksum = cksum\n\n\n@group(\"srv6\")\nclass Srv6InsertTest(P4RuntimeTest):\n    \"\"\"Tests SRv6 insert behavior, where the switch receives an IPv6 packet and\n    inserts the SRv6 header\n    \"\"\"\n\n    def runTest(self):\n        sid_lists = (\n            [SWITCH2_IPV6, SWITCH3_IPV6, HOST2_IPV6],\n            [SWITCH2_IPV6, HOST2_IPV6],\n        )\n        next_hop_mac = SWITCH2_MAC\n\n        for sid_list in sid_lists:\n            for pkt_type in [\"tcpv6\", \"udpv6\", \"icmpv6\"]:\n                print_inline(\"%s %d SIDs ... 
\" % (pkt_type, len(sid_list)))\n\n                pkt = getattr(testutils, \"simple_%s_packet\" % pkt_type)()\n\n                self.testPacket(pkt, sid_list, next_hop_mac)\n\n    @autocleanup\n    def testPacket(self, pkt, sid_list, next_hop_mac):\n\n        # *** TODO EXERCISE 6\n        # Modify names to match content of P4Info file (look for the fully\n        # qualified name of tables, match fields, and actions).\n        # ---- START SOLUTION ----\n\n        # Add entry to \"My Station\" table. Consider the given pkt's eth dst addr\n        # as myStationMac address.\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.my_station_table\",\n            match_fields={\n                # Exact match.\n                \"hdr.ethernet.dst_addr\": pkt[Ether].dst\n            },\n            action_name=\"NoAction\"\n        ))\n\n        # Insert SRv6 header when matching the pkt's IPv6 dst addr.\n        # Action name and params are generated based on the number of SIDs given.\n        # For example, with 2 SIDs:\n        # action_name = IngressPipeImpl.srv6_t_insert_2\n        # action_params = {\n        #     \"s1\": sid[0],\n        #     \"s2\": sid[1]\n        # }\n        sid_len = len(sid_list)\n\n        action_name = \"IngressPipeImpl.srv6_t_insert_%d\" % sid_len\n        actions_params = {\"s%d\" % (x + 1): sid_list[x] for x in range(sid_len)}\n\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.srv6_transit\",\n            match_fields={\n                # LPM match (value, prefix)\n                \"hdr.ipv6.dst_addr\": (pkt[IPv6].dst, 128)\n            },\n            action_name=action_name,\n            action_params=actions_params\n        ))\n\n        # Insert ECMP group with only one member (next_hop_mac)\n        self.insert(self.helper.build_act_prof_group(\n            act_prof_name=\"IngressPipeImpl.ecmp_selector\",\n            group_id=1,\n            
actions=[\n                # List of tuples (action name, {action param: value})\n                (\"IngressPipeImpl.set_next_hop\", {\"dmac\": next_hop_mac}),\n            ]\n        ))\n\n        # Now that we inserted the SRv6 header, we expect the pkt's IPv6 dst\n        # addr to be the first in the SID list.\n        # Match on L3 routing table.\n        first_sid = sid_list[0]\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.routing_v6_table\",\n            match_fields={\n                # LPM match (value, prefix)\n                \"hdr.ipv6.dst_addr\": (first_sid, 128)\n            },\n            group_id=1\n        ))\n\n        # Map next_hop_mac to output port\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.l2_exact_table\",\n            match_fields={\n                # Exact match.\n                \"hdr.ethernet.dst_addr\": next_hop_mac\n            },\n            action_name=\"IngressPipeImpl.set_egress_port\",\n            action_params={\n                \"port_num\": self.port2\n            }\n        ))\n\n        # ---- END SOLUTION ----\n\n        # Build expected packet from the given one...\n        exp_pkt = insert_srv6_header(pkt.copy(), sid_list)\n\n        # Route and decrement TTL\n        pkt_route(exp_pkt, next_hop_mac)\n        pkt_decrement_ttl(exp_pkt)\n\n        # Bonus: update P4 program to calculate correct checksum\n        set_cksum(pkt, 1)\n        set_cksum(exp_pkt, 1)\n\n        testutils.send_packet(self, self.port1, str(pkt))\n        testutils.verify_packet(self, exp_pkt, self.port2)\n\n\n@group(\"srv6\")\nclass Srv6TransitTest(P4RuntimeTest):\n    \"\"\"Tests SRv6 transit behavior, where the switch ignores the SRv6 header\n    and routes the packet normally, without applying any SRv6-related\n    modifications.\n    \"\"\"\n\n    def runTest(self):\n        my_sid = SWITCH1_IPV6\n        sid_lists = (\n            
[SWITCH2_IPV6, SWITCH3_IPV6, HOST2_IPV6],\n            [SWITCH2_IPV6, HOST2_IPV6],\n        )\n        next_hop_mac = SWITCH2_MAC\n\n        for sid_list in sid_lists:\n            for pkt_type in [\"tcpv6\", \"udpv6\", \"icmpv6\"]:\n                print_inline(\"%s %d SIDs ... \" % (pkt_type, len(sid_list)))\n\n                pkt = getattr(testutils, \"simple_%s_packet\" % pkt_type)()\n                pkt = insert_srv6_header(pkt, sid_list)\n\n                self.testPacket(pkt, next_hop_mac, my_sid)\n\n    @autocleanup\n    def testPacket(self, pkt, next_hop_mac, my_sid):\n\n        # *** TODO EXERCISE 6\n        # Modify names to match content of P4Info file (look for the fully\n        # qualified name of tables, match fields, and actions).\n        # ---- START SOLUTION ----\n\n        # Add entry to \"My Station\" table. Consider the given pkt's eth dst addr\n        # as myStationMac address.\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.my_station_table\",\n            match_fields={\n                # Exact match.\n                \"hdr.ethernet.dst_addr\": pkt[Ether].dst\n            },\n            action_name=\"NoAction\"\n        ))\n\n        # This entry should not be matched; this is plain IPv6 routing.\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.srv6_my_sid\",\n            match_fields={\n                # Longest prefix match (value, prefix length)\n                \"hdr.ipv6.dst_addr\": (my_sid, 128)\n            },\n            action_name=\"IngressPipeImpl.srv6_end\"\n        ))\n\n        # Insert ECMP group with only one member (next_hop_mac)\n        self.insert(self.helper.build_act_prof_group(\n            act_prof_name=\"IngressPipeImpl.ecmp_selector\",\n            group_id=1,\n            actions=[\n                # List of tuples (action name, {action param: value})\n                (\"IngressPipeImpl.set_next_hop\", {\"dmac\": next_hop_mac}),\n   
         ]\n        ))\n\n        # Map pkt's IPv6 dst addr to group\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.routing_v6_table\",\n            match_fields={\n                # LPM match (value, prefix)\n                \"hdr.ipv6.dst_addr\": (pkt[IPv6].dst, 128)\n            },\n            group_id=1\n        ))\n\n        # Map next_hop_mac to output port\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.l2_exact_table\",\n            match_fields={\n                # Exact match.\n                \"hdr.ethernet.dst_addr\": next_hop_mac\n            },\n            action_name=\"IngressPipeImpl.set_egress_port\",\n            action_params={\n                \"port_num\": self.port2\n            }\n        ))\n\n        # ---- END SOLUTION ----\n\n        # Build expected packet from the given one...\n        exp_pkt = pkt.copy()\n\n        # Route and decrement TTL\n        pkt_route(exp_pkt, next_hop_mac)\n        pkt_decrement_ttl(exp_pkt)\n\n        # Bonus: update P4 program to calculate correct checksum\n        set_cksum(pkt, 1)\n        set_cksum(exp_pkt, 1)\n\n        testutils.send_packet(self, self.port1, str(pkt))\n        testutils.verify_packet(self, exp_pkt, self.port2)\n\n\n@group(\"srv6\")\nclass Srv6EndTest(P4RuntimeTest):\n    \"\"\"Tests SRv6 end behavior (without pop), where the switch forwards the\n    packet to the next SID found in the SRv6 header.\n    \"\"\"\n\n    def runTest(self):\n        my_sid = SWITCH2_IPV6\n        sid_lists = (\n            [SWITCH2_IPV6, SWITCH3_IPV6, HOST2_IPV6],\n            [SWITCH2_IPV6, SWITCH3_IPV6, SWITCH4_IPV6, HOST2_IPV6],\n        )\n        next_hop_mac = SWITCH3_MAC\n\n        for sid_list in sid_lists:\n            for pkt_type in [\"tcpv6\", \"udpv6\", \"icmpv6\"]:\n                print_inline(\"%s %d SIDs ... 
\" % (pkt_type, len(sid_list)))\n                pkt = getattr(testutils, \"simple_%s_packet\" % pkt_type)()\n\n                pkt = insert_srv6_header(pkt, sid_list)\n                self.testPacket(pkt, sid_list, next_hop_mac, my_sid)\n\n    @autocleanup\n    def testPacket(self, pkt, sid_list, next_hop_mac, my_sid):\n\n        # *** TODO EXERCISE 6\n        # Modify names to match content of P4Info file (look for the fully\n        # qualified name of tables, match fields, and actions).\n        # ---- START SOLUTION ----\n\n        # Add entry to \"My Station\" table. Consider the given pkt's eth dst addr\n        # as myStationMac address.\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.my_station_table\",\n            match_fields={\n                # Exact match.\n                \"hdr.ethernet.dst_addr\": pkt[Ether].dst\n            },\n            action_name=\"NoAction\"\n        ))\n\n        # This should be matched; we want SRv6 end behavior to be applied.\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.srv6_my_sid\",\n            match_fields={\n                # Longest prefix match (value, prefix length)\n                \"hdr.ipv6.dst_addr\": (my_sid, 128)\n            },\n            action_name=\"IngressPipeImpl.srv6_end\"\n        ))\n\n        # Insert ECMP group with only one member (next_hop_mac)\n        self.insert(self.helper.build_act_prof_group(\n            act_prof_name=\"IngressPipeImpl.ecmp_selector\",\n            group_id=1,\n            actions=[\n                # List of tuples (action name, {action param: value})\n                (\"IngressPipeImpl.set_next_hop\", {\"dmac\": next_hop_mac}),\n            ]\n        ))\n\n        # After applying the srv6_end action, we expect the IPv6 dst to be the\n        # next SID in the list, and we should route based on that.\n        next_sid = sid_list[1]\n        
self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.routing_v6_table\",\n            match_fields={\n                # LPM match (value, prefix)\n                \"hdr.ipv6.dst_addr\": (next_sid, 128)\n            },\n            group_id=1\n        ))\n\n        # Map next_hop_mac to output port\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.l2_exact_table\",\n            match_fields={\n                # Exact match.\n                \"hdr.ethernet.dst_addr\": next_hop_mac\n            },\n            action_name=\"IngressPipeImpl.set_egress_port\",\n            action_params={\n                \"port_num\": self.port2\n            }\n        ))\n\n        # ---- END SOLUTION ----\n\n        # Build expected packet from the given one...\n        exp_pkt = pkt.copy()\n\n        # Set IPv6 dst to next SID and decrement segleft...\n        exp_pkt[IPv6].dst = next_sid\n        exp_pkt[IPv6ExtHdrSegmentRouting].segleft -= 1\n\n        # Route and decrement TTL...\n        pkt_route(exp_pkt, next_hop_mac)\n        pkt_decrement_ttl(exp_pkt)\n\n        # Bonus: update P4 program to calculate correct checksum\n        set_cksum(pkt, 1)\n        set_cksum(exp_pkt, 1)\n\n        testutils.send_packet(self, self.port1, str(pkt))\n        testutils.verify_packet(self, exp_pkt, self.port2)\n\n\n@group(\"srv6\")\nclass Srv6EndPspTest(P4RuntimeTest):\n    \"\"\"Tests SRv6 End with Penultimate Segment Pop (PSP) behavior, where the\n    switch SID is the penultimate in the SID list and the switch removes the\n    SRv6 header before routing the packet to its final destination (last SID in\n    the list).\n    \"\"\"\n\n    def runTest(self):\n        my_sid = SWITCH3_IPV6\n        sid_lists = (\n            [SWITCH3_IPV6, HOST2_IPV6],\n        )\n        next_hop_mac = HOST2_MAC\n\n        for sid_list in sid_lists:\n            for pkt_type in [\"tcpv6\", \"udpv6\", \"icmpv6\"]:\n                
print_inline(\"%s %d SIDs ... \" % (pkt_type, len(sid_list)))\n                pkt = getattr(testutils, \"simple_%s_packet\" % pkt_type)()\n                pkt = insert_srv6_header(pkt, sid_list)\n                self.testPacket(pkt, sid_list, next_hop_mac, my_sid)\n\n    @autocleanup\n    def testPacket(self, pkt, sid_list, next_hop_mac, my_sid):\n\n        # *** TODO EXERCISE 6\n        # Modify names to match content of P4Info file (look for the fully\n        # qualified name of tables, match fields, and actions.\n        # ---- START SOLUTION ----\n\n        # Add entry to \"My Station\" table. Consider the given pkt's eth dst addr\n        # as myStationMac address.\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.my_station_table\",\n            match_fields={\n                # Exact match.\n                \"hdr.ethernet.dst_addr\": pkt[Ether].dst\n            },\n            action_name=\"NoAction\"\n        ))\n\n        # This should be matched, we want SRv6 end behavior to be applied.\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.srv6_my_sid\",\n            match_fields={\n                # Longest prefix match (value, prefix length)\n                \"hdr.ipv6.dst_addr\": (my_sid, 128)\n            },\n            action_name=\"IngressPipeImpl.srv6_end\"\n        ))\n\n        # Insert ECMP group with only one member (next_hop_mac)\n        self.insert(self.helper.build_act_prof_group(\n            act_prof_name=\"IngressPipeImpl.ecmp_selector\",\n            group_id=1,\n            actions=[\n                # List of tuples (action name, {action param: value})\n                (\"IngressPipeImpl.set_next_hop\", {\"dmac\": next_hop_mac}),\n            ]\n        ))\n\n        # Map pkt's IPv6 dst addr to group\n        next_sid = sid_list[1]\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.routing_v6_table\",\n    
        match_fields={\n                # LPM match (value, prefix)\n                \"hdr.ipv6.dst_addr\": (next_sid, 128)\n            },\n            group_id=1\n        ))\n\n        # Map next_hop_mac to output port\n        self.insert(self.helper.build_table_entry(\n            table_name=\"IngressPipeImpl.l2_exact_table\",\n            match_fields={\n                # Exact match.\n                \"hdr.ethernet.dst_addr\": next_hop_mac\n            },\n            action_name=\"IngressPipeImpl.set_egress_port\",\n            action_params={\n                \"port_num\": self.port2\n            }\n        ))\n\n        # ---- END SOLUTION ----\n\n        # Build expected packet from the given one...\n        exp_pkt = pkt.copy()\n\n        # Expect IPv6 dst to be the next SID...\n        exp_pkt[IPv6].dst = next_sid\n        # Remove SRv6 header since we are performing PSP.\n        pop_srv6_header(exp_pkt)\n\n        # Route and decrement TTL\n        pkt_route(exp_pkt, next_hop_mac)\n        pkt_decrement_ttl(exp_pkt)\n\n        # Bonus: update P4 program to calculate correct checksum\n        set_cksum(pkt, 1)\n        set_cksum(exp_pkt, 1)\n\n        testutils.send_packet(self, self.port1, str(pkt))\n        testutils.verify_packet(self, exp_pkt, self.port2)\n"
  },
  {
    "path": "util/docker/Makefile",
    "content": "include Makefile.vars\n\nbuild: build-stratum_bmv2 build-mvn\npush: push-stratum_bmv2 push-mvn\n\nbuild-stratum_bmv2:\n\tcd stratum_bmv2 && docker build -t ${STRATUM_BMV2_IMG} .\n\nbuild-mvn:\n\tcd ../../app && docker build --squash -f ../util/docker/mvn/Dockerfile \\\n\t\t-t ${MVN_IMG} .\n\npush-stratum_bmv2:\n\t# Remember to update Makefile.vars with the new image sha\n\tdocker push ${STRATUM_BMV2_IMG}\n\npush-mvn:\n\t# Remember to update Makefile.vars with the new image sha\n\tdocker push ${MVN_IMG}\n"
  },
  {
    "path": "util/docker/Makefile.vars",
    "content": "ONOS_IMG := onosproject/onos:2.2.2\nP4RT_SH_IMG := p4lang/p4runtime-sh:latest\nP4C_IMG := opennetworking/p4c:stable\nSTRATUM_BMV2_IMG := opennetworking/ngsdn-tutorial:stratum_bmv2\nMVN_IMG := opennetworking/ngsdn-tutorial:mvn\nGNMI_CLI_IMG := bocon/gnmi-cli:latest\nYANG_IMG := bocon/yang-tools:latest\nSSHPASS_IMG := ictu/sshpass\n\nONOS_SHA := sha256:438815ab20300cd7a31702b7dea635152c4c4b5b2fed9b14970bd2939a139d2a\nP4RT_SH_SHA := sha256:6ae50afb5bde620acb9473ce6cd7b990ff6cc63fe4113cf5584c8e38fe42176c\nP4C_SHA := sha256:8f9d27a6edf446c3801db621359fec5de993ebdebc6844d8b1292e369be5dfea\nSTRATUM_BMV2_SHA := sha256:f31faa5e83abbb2d9cf39d28b3578f6e113225641337ec7d16d867b0667524ef\nMVN_SHA := sha256:d85eb93ac909a90f49b16b33cb872620f9b4f640e7a6451859aec704b21f9243\nGNMI_CLI_SHA := sha256:6f1590c35e71c07406539d0e1e288e87e1e520ef58de25293441c3b9c81dffc0\nYANG_SHA := sha256:feb2dc322af113fc52f17b5735454abfbe017972c867e522ba53ea44e8386fd2\nSSHPASS_SHA := sha256:6e3d0d7564b259ef9612843d220cc390e52aab28b0ff9adaec800c72a051f41c\n"
  },
  {
    "path": "util/docker/mvn/Dockerfile",
    "content": "# Copyright 2019-present Open Networking Foundation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Docker image to build the ONOS app.\n# Provides pre-populated maven repo cache to allow offline builds.\n\nFROM maven:3.6.1-jdk-11-slim\n\nCOPY . /mvn-src\nWORKDIR /mvn-src\n\nRUN mvn clean package && rm -rf ./*\n"
  },
  {
    "path": "util/docker/stratum_bmv2/Dockerfile",
    "content": "# Copyright 2019-present Open Networking Foundation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Docker image that extends opennetworking/mn-stratum with other dependencies\n# required by this tutorial. opennetworking/mn-stratum is the official image\n# from the Stratum project which contains stratum_bmv2 and the Mininet\n# libraries. We extend that with PTF, scapy, etc.\n\nARG MN_STRATUM_SHA=\"sha256:1bba2e2c06460c73b0133ae22829937786217e5f20f8f80fcc3063dcf6707ebe\"\n\nFROM bitnami/minideb:stretch as builder\n\nENV BUILD_DEPS \\\n    python-pip \\\n    python-setuptools \\\n    git\nRUN install_packages $BUILD_DEPS\n\nRUN mkdir -p /output\n\nENV PIP_DEPS \\\n    scapy==2.4.3 \\\n    git+https://github.com/p4lang/ptf.git \\\n    googleapis-common-protos==1.6.0 \\\n    ipaddress\nRUN pip install --no-cache-dir --root /output $PIP_DEPS\n\nFROM opennetworking/mn-stratum:latest@$MN_STRATUM_SHA as runtime\n\nENV RUNTIME_DEPS \\\n    make\nRUN install_packages $RUNTIME_DEPS\n\nCOPY --from=builder /output /\n\nENV DOCKER_RUN true\n\nENTRYPOINT []\n"
  },
  {
    "path": "util/gnmi-cli",
    "content": "#!/bin/bash\ndocker run --rm -it --network host bocon/gnmi-cli:latest \"$@\"\n"
  },
  {
    "path": "util/mn-cmd",
    "content": "#!/bin/bash\n\nif [ -z \"$1\" ]; then\n  echo \"usage: $0 host cmd [args...]\"\n  exit 1\nfi\n\ndocker exec -it mininet /mininet/host-cmd \"$@\"\n"
  },
  {
    "path": "util/mn-pcap",
    "content": "#!/bin/bash\n\nDIR=\"$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )\" >/dev/null 2>&1 && pwd )\"\n\nif [ -z \"$1\" ]; then\n  echo \"usage: $0 host\"\n  exit 1\nfi\n\niface=$1-eth0\nfile=${iface}.pcap\n\nset -e\n\necho \"*** Starting tcpdump on ${iface}... Ctrl-c to stop capture\"\necho \"*** Pcap file will be written in ngsdn-tutorial/tmp/${file}\"\ndocker exec -it mininet /mininet/host-cmd $1 tcpdump -i \"${iface}\" -w /tmp/\"${file}\"\n\nif [ -x \"$(command -v wireshark)\" ]; then\n  echo \"*** Opening wireshark... Ctrl-c to quit\"\n  wireshark \"${DIR}/../tmp/${file}\"\nfi\n"
  },
  {
    "path": "util/oc-pb-decoder",
    "content": "#!/bin/bash\ndocker run --rm -i bocon/yang-tools:latest oc-pb-decoder\n"
  },
  {
    "path": "util/onos-cmd",
    "content": "#!/bin/bash\n\nif [ -z \"$1\" ]; then\n  echo \"usage: $0 cmd [args...]\"\n  exit 1\nfi\n\n# Use sshpass to skip the password prompt\ndocker run -it --rm --network host ictu/sshpass \\\n  -procks ssh -o \"UserKnownHostsFile=/dev/null\" \\\n  -o \"StrictHostKeyChecking=no\" -o LogLevel=ERROR -p 8101 onos@localhost \"$@\"\n"
  },
  {
    "path": "util/p4rt-sh",
    "content": "#!/usr/bin/env python3\n\n# Copyright 2019 Barefoot Networks, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n\"\"\"\nP4Runtime shell docker wrapper\nFrom: https://github.com/p4lang/p4runtime-shell/blob/master/p4runtime-sh-docker\n\"\"\"\n\nimport argparse\nfrom collections import namedtuple\nimport logging\nimport os.path\nimport sys\nimport shutil\nimport subprocess\n\nDOCKER_IMAGE = 'p4lang/p4runtime-sh'\nTMP_DIR = os.path.dirname(os.path.abspath(__file__)) + '/.pipe_cfg'\n\n\ndef main():\n    FwdPipeConfig = namedtuple('FwdPipeConfig', ['p4info', 'bin'])\n\n    def pipe_config(arg):\n        try:\n            paths = FwdPipeConfig(*[x for x in arg.split(',')])\n            if len(paths) != 2:\n                raise ValueError\n            return paths\n        except Exception:\n            raise argparse.ArgumentTypeError(\n                \"Invalid pipeline config, expected <p4info path>,<binary config path>\")\n\n    parser = argparse.ArgumentParser(description='P4Runtime shell docker wrapper', add_help=False)\n    parser.add_argument('--grpc-addr',\n                        help='P4Runtime gRPC server address',\n                        metavar='<host>:<port>',\n                        type=str, action='store', default=\"localhost:50001\")\n    parser.add_argument('--config',\n                        help='If you want the shell to push a pipeline config to the server first',\n                        metavar='<p4info path (text)>,<binary config path>',\n                        type=pipe_config, action='store', default=None)\n    parser.add_argument('-v', '--verbose', help='Increase output verbosity',\n                        action='store_true')\n    args, unknown_args = parser.parse_known_args()\n\n    docker_args = []\n    new_args = []\n\n    if args.verbose:\n        logging.basicConfig(level=logging.DEBUG)\n        new_args.append('--verbose')\n\n    if args.grpc_addr is not None:\n        print(\"*** Connecting to P4Runtime server at {} ...\".format(args.grpc_addr))\n        new_args.extend([\"--grpc-addr\", args.grpc_addr])\n\n    if args.config is not None:\n        if not os.path.isfile(args.config.p4info):\n            logging.critical(\"'{}' is not a valid file\".format(args.config.p4info))\n            sys.exit(1)\n        if not os.path.isfile(args.config.bin):\n            logging.critical(\"'{}' is not a valid file\".format(args.config.bin))\n            sys.exit(1)\n\n        mount_path = \"/fwd_pipe_config\"\n        fname_p4info = \"p4info.pb.txt\"\n        fname_bin = \"config.bin\"\n\n        os.makedirs(TMP_DIR, exist_ok=True)\n        logging.debug(\n            \"Created temporary directory '{}', it will be mounted in the docker as '{}'\".format(\n                TMP_DIR, mount_path))\n        shutil.copy(args.config.p4info, os.path.join(TMP_DIR, fname_p4info))\n        shutil.copy(args.config.bin, os.path.join(TMP_DIR, fname_bin))\n\n        docker_args.extend([\"-v\", \"{}:{}\".format(TMP_DIR, mount_path)])\n        new_args.extend([\"--config\", \"{},{}\".format(\n            os.path.join(mount_path, fname_p4info), os.path.join(mount_path, fname_bin))])\n\n    cmd = [\"docker\", \"run\", \"-ti\", \"--network\", \"host\"]\n    cmd.extend(docker_args)\n    cmd.append(DOCKER_IMAGE)\n    cmd.extend(new_args)\n    cmd.extend(unknown_args)\n    logging.debug(\"Running cmd: {}\".format(\" \".join(cmd)))\n\n    subprocess.run(cmd)\n\n    if args.config is not None:\n        logging.debug(\"Cleaning up...\")\n        try:\n            shutil.rmtree(TMP_DIR)\n        except Exception:\n            logging.error(\"Error when removing temporary directory '{}'\".format(TMP_DIR))\n\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "util/vm/.gitignore",
    "content": "*.log\n.vagrant\n*.ova\n"
  },
  {
    "path": "util/vm/README.md",
    "content": "# Scripts to build the tutorial VM\n\n## Requirements\n\n- [Vagrant](https://www.vagrantup.com/) (tested with v2.2.5)\n- [VirtualBox](https://www.virtualbox.org/wiki/Downloads) (tested with v5.2.32)\n\n## Steps to build\n\nIf you want to provision and use the VM locally on your machine:\n\n    cd util/vm\n    vagrant up\n\nOtherwise, if you want to export the VM in `.ova` format for distribution to\ntutorial attendees:\n\n    cd util/vm\n    ./build-vm.sh\n\nThis script will:\n\n1. provision the VM using Vagrant;\n2. reduce VM disk size;\n3. generate a file named `ngsdn-tutorial.ova`.\n\nUse credentials `sdn`/`rocks` to log in to the Ubuntu system.\n\n**Note on IntelliJ IDEA plugins:** plugins need to be installed manually. We\nrecommend installing the following ones:\n\n* https://plugins.jetbrains.com/plugin/10620-p4-plugin\n* https://plugins.jetbrains.com/plugin/7322-python-community-edition"
  },
  {
    "path": "util/vm/Vagrantfile",
    "content": "REQUIRED_PLUGINS = %w( vagrant-vbguest vagrant-reload vagrant-disksize )\n\nVagrant.configure(2) do |config|\n\n  # Install plugins if missing...\n  _retry = false\n  REQUIRED_PLUGINS.each do |plugin|\n    unless Vagrant.has_plugin? plugin\n      system \"vagrant plugin install #{plugin}\"\n      _retry = true\n    end\n  end\n\n  if (_retry)\n    exec \"vagrant \" + ARGV.join(' ')\n  end\n\n  # Common config.\n  config.vm.box = \"lasp/ubuntu16.04-desktop\"\n  config.vbguest.auto_update = true\n  config.disksize.size = '50GB'\n  config.vm.synced_folder \".\", \"/vagrant\", disabled: false, type: \"virtualbox\"\n  config.vm.network \"private_network\", :type => 'dhcp', :adapter => 2\n\n  config.vm.define \"default\" do |d|\n    d.vm.hostname = \"tutorial-vm\"\n    d.vm.provider \"virtualbox\" do |vb|\n      vb.name = \"ONF NG-SDN Tutorial \" + Time.now.strftime(\"(%Y-%m-%d)\")\n      vb.gui = true\n      vb.cpus = 8\n      vb.memory = 8192\n      vb.customize ['modifyvm', :id, '--clipboard', 'bidirectional']\n      vb.customize [\"modifyvm\", :id, \"--accelerate3d\", \"on\"]\n      vb.customize [\"modifyvm\", :id, \"--graphicscontroller\", \"vboxvga\"]\n      vb.customize [\"modifyvm\", :id, \"--vram\", \"128\"]\n    end\n    d.vm.provision \"shell\", path: \"root-bootstrap.sh\"\n    d.vm.provision \"shell\", inline: \"su sdn '/vagrant/user-bootstrap.sh'\"\n  end\nend\n"
  },
  {
    "path": "util/vm/build-vm.sh",
    "content": "#!/usr/bin/env bash\n\nset -xe\n\nfunction wait_vm_shutdown {\n    set +x\n    while vboxmanage showvminfo $1 | grep -c \"running (since\"; do\n      echo \"Waiting for VM to shutdown...\"\n      sleep 1\n    done\n    sleep 2\n    set -x\n}\n\n# Provision\nvagrant up\n\n# Cleanup\nVB_UUID=$(cat .vagrant/machines/default/virtualbox/id)\nvagrant ssh -c 'bash /vagrant/cleanup.sh'\nsleep 5\nvboxmanage controlvm \"${VB_UUID}\" acpipowerbutton\nwait_vm_shutdown \"${VB_UUID}\"\n# Remove vagrant shared folder\nvboxmanage sharedfolder remove \"${VB_UUID}\" -name \"vagrant\"\n\n# Export\nrm -f ngsdn-tutorial.ova\nvboxmanage export \"${VB_UUID}\" -o ngsdn-tutorial.ova\n"
  },
  {
    "path": "util/vm/cleanup.sh",
    "content": "#!/bin/bash\nset -ex\n\nsudo apt-get clean\nsudo apt-get -y autoremove\n\nsudo rm -rf /tmp/*\n\nhistory -c\nrm -f ~/.bash_history\n\n# Zerofill virtual hd to save space when exporting\ntime sudo dd if=/dev/zero of=/tmp/zero bs=1M || true\nsync ; sleep 1 ; sync ; sudo rm -f /tmp/zero\n\n# Delete vagrant user\nsudo userdel -r -f vagrant\n"
  },
  {
    "path": "util/vm/root-bootstrap.sh",
    "content": "#!/usr/bin/env bash\n\nset -xe\n\n# Create user sdn\nuseradd -m -d /home/sdn -s /bin/bash sdn\nusermod -aG sudo sdn\nusermod -aG vboxsf sdn\necho \"sdn:rocks\" | chpasswd\necho \"sdn ALL=(ALL) NOPASSWD:ALL\" > /etc/sudoers.d/99_sdn\nchmod 440 /etc/sudoers.d/99_sdn\nupdate-locale LC_ALL=\"en_US.UTF-8\"\n\napt-get update\n\napt-get install -y --no-install-recommends apt-transport-https ca-certificates\ncurl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -\nadd-apt-repository \"deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable\"\napt-get update\n\n# Required packages\nDEBIAN_FRONTEND=noninteractive apt-get -y --no-install-recommends install \\\n    avahi-daemon \\\n    git \\\n    bash-completion \\\n    htop \\\n    python \\\n    zip unzip \\\n    make \\\n    wget \\\n    curl \\\n    vim nano emacs \\\n    docker-ce\n\n# Enable Docker at startup\nsystemctl start docker\nsystemctl enable docker\n# Add sdn user to docker group\nusermod -a -G docker sdn\n\n# Install pip\ncurl https://bootstrap.pypa.io/get-pip.py -o get-pip.py\npython get-pip.py --force-reinstall\nrm -f get-pip.py\n\n# Bash autocompletion\necho \"source /etc/profile.d/bash_completion.sh\" >> ~/.bashrc\n\n# Fix SSH server config\ntee -a /etc/ssh/sshd_config <<EOF\n\nUseDNS no\nEOF\nsed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config\n\n# IntelliJ\nsnap install intellij-idea-community --classic\n"
  },
  {
    "path": "util/vm/user-bootstrap.sh",
    "content": "#!/bin/bash\nset -xe\n\ncd /home/sdn\n\ncp /etc/skel/.bashrc ~/\ncp /etc/skel/.profile ~/\ncp /etc/skel/.bash_logout ~/\n\n#  With Ubuntu 18.04 sometimes .cache is owned by root...\nmkdir -p ~/.cache\nsudo chown -hR sdn:sdn ~/.cache\n\ngit clone https://github.com/opennetworkinglab/ngsdn-tutorial.git\ncd ngsdn-tutorial\nmake deps\n"
  },
  {
    "path": "yang/demo-port.yang",
    "content": "// A module is a self-contained tree of nodes\nmodule demo-port {\n\n    // YANG Boilerplate\n    yang-version \"1\";\n    namespace \"https://opennetworking.org/yang/demo\";\n    prefix \"demo-port\";\n    description \"Demo model for managing ports\";\n    revision \"2019-09-10\" {\n      description \"Initial version\";\n      reference \"1.0.0\";\n    }\n\n    // Identities and Typedefs\n    identity SPEED {\n      description \"base type for port speeds\";\n    }\n\n    identity SPEED_10GB {\n      base SPEED;\n      description \"10 Gbps port speed\";\n    }\n\n    typedef port-number {\n      type uint16 {\n        range 1..32;\n      }\n      description \"New type for port number that ensures the number is between 1 and 32, inclusive\";\n    }\n\n\n    // Reusable groupings for port config and state\n    grouping port-config {\n        description \"Set of configurable attributes / leaves\";\n        leaf speed {\n            type identityref {\n                base demo-port:SPEED;\n            }\n            description \"Configurable speed of a switch port\";\n        }\n    }\n\n    grouping port-state {\n        description \"Set of read-only state\";\n        leaf status {\n            type boolean;\n            description \"Operational status of the port (true = up)\";\n        }\n    }\n\n    // Top-level model definition\n    container ports {\n        description \"The root container for port configuration and state\";\n        list port {\n            key \"port-number\";\n            description \"List of ports on a switch\";\n\n            leaf port-number {\n                type port-number;\n                description \"Port number (maps to the front panel port of a switch); also the key for the port list\";\n            }\n\n            // each individual port will have the elements defined in the grouping\n            container config {\n                description \"Configuration data for a port\";\n                uses port-config; // reference to grouping above\n            }\n            container state {\n                config false; // makes child nodes read-only\n                description \"Read-only state for a port\";\n                uses port-state; // reference to grouping above\n            }\n        }\n    }\n}\n"
  }
]