Repository: opennetworkinglab/ngsdn-tutorial
Branch: advanced
Commit: 62705c3cb64f
Files: 86
Total size: 603.7 KB

Directory structure:
gitextract_7ohb8phc/
├── .gitignore
├── .travis.yml
├── EXERCISE-1.md
├── EXERCISE-2.md
├── EXERCISE-3.md
├── EXERCISE-4.md
├── EXERCISE-5.md
├── EXERCISE-6.md
├── EXERCISE-7.md
├── EXERCISE-8.md
├── LICENSE
├── Makefile
├── README.md
├── app/
│   ├── pom.xml
│   └── src/
│       └── main/
│           └── java/
│               └── org/
│                   └── onosproject/
│                       └── ngsdn/
│                           └── tutorial/
│                               ├── AppConstants.java
│                               ├── Ipv6RoutingComponent.java
│                               ├── L2BridgingComponent.java
│                               ├── MainComponent.java
│                               ├── NdpReplyComponent.java
│                               ├── Srv6Component.java
│                               ├── cli/
│                               │   ├── Srv6ClearCommand.java
│                               │   ├── Srv6InsertCommand.java
│                               │   ├── Srv6SidCompleter.java
│                               │   └── package-info.java
│                               ├── common/
│                               │   ├── FabricDeviceConfig.java
│                               │   └── Utils.java
│                               └── pipeconf/
│                                   ├── InterpreterImpl.java
│                                   ├── PipeconfLoader.java
│                                   └── PipelinerImpl.java
├── docker-compose.yml
├── mininet/
│   ├── flowrule-gtp.json
│   ├── host-cmd
│   ├── netcfg-gtp.json
│   ├── netcfg-sr.json
│   ├── netcfg.json
│   ├── recv-gtp.py
│   ├── send-udp.py
│   ├── topo-gtp.py
│   ├── topo-v4.py
│   └── topo-v6.py
├── p4src/
│   ├── main.p4
│   └── snippets.p4
├── ptf/
│   ├── lib/
│   │   ├── __init__.py
│   │   ├── base_test.py
│   │   ├── chassis_config.pb.txt
│   │   ├── convert.py
│   │   ├── helper.py
│   │   ├── port_map.json
│   │   ├── runner.py
│   │   └── start_bmv2.sh
│   ├── run_tests
│   └── tests/
│       ├── bridging.py
│       ├── packetio.py
│       ├── routing.py
│       └── srv6.py
├── solution/
│   ├── app/
│   │   └── src/
│   │       └── main/
│   │           └── java/
│   │               └── org/
│   │                   └── onosproject/
│   │                       └── ngsdn/
│   │                           └── tutorial/
│   │                               ├── Ipv6RoutingComponent.java
│   │                               ├── L2BridgingComponent.java
│   │                               ├── NdpReplyComponent.java
│   │                               ├── Srv6Component.java
│   │                               └── pipeconf/
│   │                                   └── InterpreterImpl.java
│   ├── mininet/
│   │   ├── flowrule-gtp.json
│   │   ├── netcfg-gtp.json
│   │   └── netcfg-sr.json
│   ├── p4src/
│   │   └── main.p4
│   └── ptf/
│       └── tests/
│           ├── bridging.py
│           ├── packetio.py
│           ├── routing.py
│           └── 
srv6.py
├── util/
│   ├── docker/
│   │   ├── Makefile
│   │   ├── Makefile.vars
│   │   ├── mvn/
│   │   │   └── Dockerfile
│   │   └── stratum_bmv2/
│   │       └── Dockerfile
│   ├── gnmi-cli
│   ├── mn-cmd
│   ├── mn-pcap
│   ├── oc-pb-decoder
│   ├── onos-cmd
│   ├── p4rt-sh
│   └── vm/
│       ├── .gitignore
│       ├── README.md
│       ├── Vagrantfile
│       ├── build-vm.sh
│       ├── cleanup.sh
│       ├── root-bootstrap.sh
│       └── user-bootstrap.sh
└── yang/
    └── demo-port.yang

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
.idea/
tmp/
p4src/build
app/target
app/src/main/resources/p4info.txt
app/src/main/resources/bmv2.json
ptf/stratum_bmv2.log
ptf/p4rt_write.log
ptf/ptf.log
ptf/ptf.pcap
**/*.iml
**/*.pyc
**/*.bak
**/.classpath
**/.project
**/.settings
**/.factorypath
util/.pipe_cfg

================================================
FILE: .travis.yml
================================================
dist: xenial
language: python
services:
  - docker
python:
  - "3.5"
install:
  - make deps
script:
  - make check check-sr check-gtp NGSDN_TUTORIAL_SUDO=sudo

================================================
FILE: EXERCISE-1.md
================================================
# Exercise 1: P4Runtime Basics

This exercise provides a hands-on introduction to the P4Runtime API. You will
be asked to:

1. Look at the P4 starter code
2. Compile it for the BMv2 software switch and understand the output (P4Info
   and BMv2 JSON files)
3. Start Mininet with a 2x2 topology of `stratum_bmv2` switches
4. Use the P4Runtime Shell to manually insert table entries in one of the
   switches to provide connectivity between hosts

## 1. Look at the P4 program

To get started, let's have a look at the P4 program:
[p4src/main.p4](p4src/main.p4)

In the rest of the exercises, you will be asked to build a leaf-spine data
center fabric based on IPv6.
To make things easier, we provide a starter P4 program which contains:

* Header definitions
* Parser implementation
* Ingress and egress pipeline implementation (incomplete)
* Checksum verification/update

The implementation already provides logic for L2 bridging and ACL behaviors.

We suggest you start by taking a **quick look** at the whole program to
understand its structure. When you're done, try answering the following
questions, while referring to the P4 program to understand the different
parts in more detail.

**Parser**

* List all the protocol headers that can be extracted from a packet.
* Which header is expected to be the first one when parsing a new packet?

**Ingress pipeline**

* For the L2 bridging case, which table is used to replicate NDP requests to
  all host-facing ports? What type of match is used in that table?
* In the ACL table, what's the difference between the `send_to_cpu` and
  `clone_to_cpu` actions?
* In the apply block, what is the first table applied to a packet? Are
  P4Runtime packet-outs treated differently?

**Egress pipeline**

* Can multicast packets be replicated to the ingress port?

**Deparser**

* What is the first header to be serialized on the wire, and in which case?

## 2. Compile P4 program

The next step is to compile the P4 program for the BMv2 `simple_switch`
target. For this, we will use the open source P4_16 compiler ([p4c][p4c]),
which includes a backend for this specific target, named `p4c-bm2-ss`.

To compile the program, open a terminal window in the exercise VM and type
the following command:

```
make p4-build
```

You should see the following output:

```
*** Building P4 program...
docker run --rm -v /home/sdn/ngsdn-tutorial:/workdir -w /workdir opennetworking/p4c:stable \
p4c-bm2-ss --arch v1model -o p4src/build/bmv2.json \
--p4runtime-files p4src/build/p4info.txt --Wdisable=unsupported \
p4src/main.p4
*** P4 program compiled successfully!
Output files are in p4src/build
```

We have instrumented the Makefile to use a containerized version of the
`p4c-bm2-ss` compiler. If you look at the arguments when calling
`p4c-bm2-ss`, you will notice that we are asking the compiler to:

* Compile for the v1model architecture (`--arch` argument);
* Put the main output in `p4src/build/bmv2.json` (`-o`);
* Generate a P4Info file in `p4src/build/p4info.txt` (`--p4runtime-files`);
* Ignore some warnings about unsupported features (`--Wdisable=unsupported`).
  It's ok to ignore such warnings here, as they are generated because of a
  bug in p4c.

### Compiler output

#### bmv2.json

This file defines a configuration for the BMv2 `simple_switch` target in JSON
format. When `simple_switch` receives a new packet, it uses this
configuration to process the packet in a way that is consistent with the P4
program. This is quite a big file, but don't worry, there's no need to
understand its content for the sake of this exercise. If you want to learn
more, a specification of the BMv2 JSON format is provided here:

#### p4info.txt

This file contains an instance of a P4Info schema for our P4 program,
expressed using the Protobuf text format. Take a look at this file and try to
answer the following questions:

1. What is the fully qualified name of the `l2_exact_table`? What is its
   numeric ID?
2. To which P4 entity does the ID `16812802` belong? A table, an action, or
   something else? What is the corresponding fully qualified name?
3. For the `IngressPipeImpl.set_egress_port` action, how many parameters are
   defined? What is the bitwidth of the parameter named `port_num`?
4. At the end of the file, look for the definition of the
   `controller_packet_metadata` message with name `packet_out`. Now look at
   the definition of `header cpu_out_header_t` in the P4 program. Do you see
   any relationship between the two?

## 3. Start Mininet topology

It's now time to start an emulated network of `stratum_bmv2` switches. We
will program one of the switches using the compiler output obtained in the
previous step.

To start the topology, use the following command:

```
make start
```

This command will start two Docker containers, one for Mininet and one for
ONOS. You can ignore the ONOS one for now; we will use it in exercises 3
and 4.

To make sure the container is started without errors, you can use the
`make mn-log` command to show the Mininet log. Verify that you see the
following output (press Ctrl-C to exit):

```
$ make mn-log
docker-compose logs -f mininet
Attaching to mininet
mininet | *** Error setting resource limits. Mininet's performance may be affected.
mininet | *** Creating network
mininet | *** Adding hosts:
mininet | h1a h1b h1c h2 h3 h4
mininet | *** Adding switches:
mininet | leaf1 leaf2 spine1 spine2
mininet | *** Adding links:
mininet | (h1a, leaf1) (h1b, leaf1) (h1c, leaf1) (h2, leaf1) (h3, leaf2) (h4, leaf2) (spine1, leaf1) (spine1, leaf2) (spine2, leaf1) (spine2, leaf2)
mininet | *** Configuring hosts
mininet | h1a h1b h1c h2 h3 h4
mininet | *** Starting controller
mininet |
mininet | *** Starting 4 switches
mininet | leaf1 stratum_bmv2 @ 50001
mininet | leaf2 stratum_bmv2 @ 50002
mininet | spine1 stratum_bmv2 @ 50003
mininet | spine2 stratum_bmv2 @ 50004
mininet |
mininet | *** Starting CLI:
```

You can ignore the "*** Error setting resource limits..." message. The
parameters to start the Mininet container are specified in
[docker-compose.yml](docker-compose.yml). The container is configured to
execute the topology script defined in
[mininet/topo-v6.py](mininet/topo-v6.py).

The topology includes 4 switches, arranged in a 2x2 fabric topology, as well
as 6 hosts attached to the leaf switches. The 3 hosts `h1a`, `h1b`, and `h1c`
are configured to be part of the same IPv6 subnet.
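The Mininet log above lists the fabric links explicitly: every spine connects to every leaf, plus one link per host. As a quick illustration (ours, not code from `mininet/topo-v6.py`), the same spine-leaf link list can be generated programmatically:

```python
from itertools import product

# Host attachments, as shown in the "Adding links" log line above.
host_links = [("h1a", "leaf1"), ("h1b", "leaf1"), ("h1c", "leaf1"),
              ("h2", "leaf1"), ("h3", "leaf2"), ("h4", "leaf2")]

# 2x2 fabric: a full mesh between spines and leaves.
spines = ["spine1", "spine2"]
leaves = ["leaf1", "leaf2"]
fabric_links = list(product(spines, leaves))

print(fabric_links)
# [('spine1', 'leaf1'), ('spine1', 'leaf2'), ('spine2', 'leaf1'), ('spine2', 'leaf2')]
```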
In the next step you will be asked to use P4Runtime to insert table entries
to be able to ping between two hosts of this subnet.

![topo-v6](img/topo-v6.png)

### stratum_bmv2 temporary files

When starting the Mininet container, a set of files related to the execution
of each `stratum_bmv2` instance is generated in the `tmp` directory. Examples
include:

* `tmp/leaf1/stratum_bmv2.log`: contains the stratum_bmv2 log for switch
  `leaf1`;
* `tmp/leaf1/chassis-config.txt`: the Stratum "chassis config" file used to
  specify the initial port configuration to use at switch startup. This file
  is automatically generated by the `StratumBmv2Switch` class invoked by
  [mininet/topo-v6.py](mininet/topo-v6.py);
* `tmp/leaf1/write-reqs.txt`: a log of all P4Runtime write requests processed
  by the switch (the file might not exist if the switch has not received any
  write request).

## 4. Program leaf1 using P4Runtime

For this part we will use the [P4Runtime Shell][p4runtime-sh], an interactive
Python CLI that can be used to connect to a P4Runtime server and run
P4Runtime commands. For example, it can be used to create, read, update, and
delete flow table entries.

The shell can be started in two modes, with or without a P4 pipeline config.
In the first case, the shell will take care of pushing the given pipeline
config to the switch using the P4Runtime `SetPipelineConfig` RPC. In the
second case, the shell will try to retrieve the P4Info that is currently
configured on the switch.

In both cases, the shell makes use of the P4Info file to:

* allow specifying runtime entities such as table entries using P4Info names
  rather than numeric IDs (much easier to remember and read);
* provide autocompletion;
* validate the CLI commands.

Finally, when connecting to a P4Runtime server, the specification mandates
that we provide a mastership election ID to be able to write state, such as
the pipeline config and table entries.
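To see why the P4Info file makes name-based commands possible, here is a minimal sketch of the kind of name-to-ID lookup the shell performs internally. The fragment and the numeric IDs below are made up for illustration; the real file is `p4src/build/p4info.txt`:

```python
import re

# A tiny fragment in the style of a p4info.txt preamble (IDs are made up).
P4INFO = """
tables {
  preamble {
    id: 12345678
    name: "IngressPipeImpl.l2_exact_table"
  }
}
actions {
  preamble {
    id: 87654321
    name: "IngressPipeImpl.set_egress_port"
  }
}
"""

# Map fully qualified names to numeric IDs, roughly what the shell
# does when you type a P4Info name instead of an ID.
preambles = re.findall(r'id:\s*(\d+)\s*name:\s*"([^"]+)"', P4INFO)
name_to_id = {name: int(pid) for pid, name in preambles}

print(name_to_id["IngressPipeImpl.l2_exact_table"])  # 12345678
```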
To connect the P4Runtime Shell to `leaf1` and push the pipeline configuration
obtained before, use the following command:

```
util/p4rt-sh --grpc-addr localhost:50001 --config p4src/build/p4info.txt,p4src/build/bmv2.json --election-id 0,1
```

`util/p4rt-sh` is a simple Python script that invokes the P4Runtime Shell
Docker container with the given arguments. For a list of arguments, type
`util/p4rt-sh --help`.

**Note:** we use `--grpc-addr localhost:50001` as the Mininet container is
executed locally, and `50001` is the TCP port associated with the gRPC server
exposed by `leaf1`.

If the shell started successfully, you should see the following output:

```
*** Connecting to P4Runtime server at host.docker.internal:50001 ...
*** Welcome to the IPython shell for P4Runtime ***
P4Runtime sh >>>
```

#### Available commands

Use commands like `tables`, `actions`, `action_profiles`, `counters`,
`direct_counters`, and others named after the P4Info message fields, to query
information about P4Info objects.

Commands such as `table_entry`, `action_profile_member`,
`action_profile_group`, `counter_entry`, `direct_counter_entry`,
`meter_entry`, `direct_meter_entry`, `multicast_group_entry`, and
`clone_session_entry` can be used to read/write the corresponding P4Runtime
entities.

Type the command name followed by `?` for information on each command, e.g.
`table_entry?`.

For more information on P4Runtime Shell, check the official documentation at:

The shell supports autocompletion when pressing `tab`. For example:

```
tables["IngressPipeImpl.
```

will show all tables defined inside the `IngressPipeImpl` block.

### Bridging connectivity test

Use the following steps to verify connectivity on leaf1 after inserting the
required P4Runtime table entries. For this part, you will need to use the
Mininet CLI.

On a new terminal window, attach to the Mininet CLI using `make mn-cli`. You
should see the following output:

```
*** Attaching to Mininet CLI...
*** To detach press Ctrl-D (Mininet will keep running)
mininet>
```

### Insert static NDP entries

To be able to ping two IPv6 hosts in the same subnet, the hosts first need to
resolve their respective MAC addresses using the Neighbor Discovery Protocol
(NDP). This is the equivalent of ARP in IPv4 networks. For example, when
trying to ping `h1b` from `h1a`, `h1a` will first generate an NDP Neighbor
Solicitation (NS) message to resolve the MAC address of `h1b`. Once `h1b`
receives the NDP NS message, it should reply with an NDP Neighbor
Advertisement (NA) carrying its own MAC address. Now both hosts are aware of
each other's MAC address and the ping packets can be exchanged.

As you might have noted by looking at the P4 program before, the switch
should be able to handle NDP packets if correctly programmed using P4Runtime
(see `l2_ternary_table`). However, **to keep things simple for now, let's
insert two static NDP entries in our hosts.**

Add an NDP entry to `h1a`, mapping `h1b`'s IPv6 address (`2001:1:1::b`) to
its MAC address (`00:00:00:00:00:1B`):

```
mininet> h1a ip -6 neigh replace 2001:1:1::B lladdr 00:00:00:00:00:1B dev h1a-eth0
```

And vice versa, add an NDP entry to `h1b` to resolve `h1a`'s address:

```
mininet> h1b ip -6 neigh replace 2001:1:1::A lladdr 00:00:00:00:00:1A dev h1b-eth0
```

### Start ping

Start a ping between `h1a` and `h1b`. It should not work, as we have not yet
inserted any P4Runtime table entry to forward these packets.

```
mininet> h1a ping h1b
```

You should see no output from the ping command. You can leave that command
running for now.

### Insert P4Runtime table entries

To be able to forward ping packets, we need to add two table entries on the
`l2_exact_table` in `leaf1` -- one that matches on the destination MAC
address of `h1b` and forwards traffic to port 4 (where `h1b` is attached),
and vice versa (`h1a` is attached to port 3).

Let's use the P4Runtime shell to create and insert such entries.
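Before creating the entries, it helps to know what happens to the values you type: the shell converts human-readable MAC addresses and port numbers into Protobuf byte strings. A rough sketch of that conversion (our own illustration, not the shell's actual code; we assume the 9-bit BMv2 port bitwidth, padded to whole bytes):

```python
def mac_to_bytes(mac: str) -> bytes:
    # "00:00:00:00:00:1B" -> b'\x00\x00\x00\x00\x00\x1b'
    return bytes.fromhex(mac.replace(":", ""))

def port_to_bytes(port: int, bitwidth: int = 9) -> bytes:
    # Values are encoded big-endian, padded to the field's bitwidth
    # (assumed 9 bits for BMv2 port numbers, so 2 bytes).
    nbytes = (bitwidth + 7) // 8
    return port.to_bytes(nbytes, "big")

print(mac_to_bytes("00:00:00:00:00:1B"))  # b'\x00\x00\x00\x00\x00\x1b'
print(port_to_bytes(4))                   # b'\x00\x04'
```

Whether leading zero bytes are kept or trimmed depends on the P4Runtime byte-string rules in use; the point is only that the shell handles this translation for you.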
Looking at the P4Info file, use the commands below to insert the following
two entries in the `l2_exact_table`:

| Match (Ethernet dest) | Egress port number |
|-----------------------|--------------------|
| `00:00:00:00:00:1B`   | 4                  |
| `00:00:00:00:00:1A`   | 3                  |

To create a table entry object:

```
P4Runtime sh >>> te = table_entry["P4INFO-TABLE-NAME"](action = "P4INFO-ACTION-NAME")
```

Make sure to use the fully qualified name for each entity, e.g.
`IngressPipeImpl.l2_exact_table`, `IngressPipeImpl.set_egress_port`, etc.

To specify a match field:

```
P4Runtime sh >>> te.match["P4INFO-MATCH-FIELD-NAME"] = ("VALUE")
```

`VALUE` can be a MAC address expressed in colon-hexadecimal notation (e.g.,
`00:11:22:AA:BB:CC`), an IP address in dot notation, or an arbitrary string.
Based on the information contained in the P4Info, P4Runtime Shell will
internally convert that value to a Protobuf byte string.

To specify the values for the table entry action parameters:

```
P4Runtime sh >>> te.action["P4INFO-ACTION-PARAM-NAME"] = ("VALUE")
```

You can show the table entry object in Protobuf text format, using the
`print` command:

```
P4Runtime sh >>> print(te)
```

The shell internally takes care of populating the fields of the corresponding
Protobuf message by using the content of the P4Info file.

To insert the entry (this will issue a P4Runtime Write RPC to the switch):

```
P4Runtime sh >>> te.insert()
```

To read table entries from the switch (this will issue a P4Runtime Read RPC):

```
P4Runtime sh >>> for te in table_entry["P4INFO-TABLE-NAME"].read():
            ...:     print(te)
            ...:
```

After inserting the two entries, the ping should work. Go back to the Mininet
CLI terminal with the ping command running and verify that you see an output
similar to this:

```
mininet> h1a ping h1b
PING 2001:1:1::b(2001:1:1::b) 56 data bytes
64 bytes from 2001:1:1::b: icmp_seq=956 ttl=64 time=1.65 ms
64 bytes from 2001:1:1::b: icmp_seq=957 ttl=64 time=1.28 ms
64 bytes from 2001:1:1::b: icmp_seq=958 ttl=64 time=1.69 ms
...
```

## Congratulations!

You have completed the first exercise! Leave Mininet running, as you will
need it for the following exercises.

[p4c]: https://github.com/p4lang/p4c
[p4runtime-sh]: https://github.com/p4lang/p4runtime-shell

================================================
FILE: EXERCISE-2.md
================================================
# Exercise 2: YANG, OpenConfig, and gNMI Basics

This exercise is designed to give you more exposure to YANG, OpenConfig, and
gNMI. It includes:

1. Understanding the YANG language
2. Understanding YANG encoding
3. Understanding YANG-enabled transport protocols (using gNMI)

## 1. Understanding the YANG language

We start with a simple YANG module called `demo-port` in
[`yang/demo-port.yang`](./yang/demo-port.yang)

Take a look at the model and try to derive its structure. What are valid
values for each of the leaf nodes?

This model is self-contained, so it isn't too difficult to work out. However,
most YANG models are defined over many files, which makes it very complicated
to work out the overall structure. To make this easier, we can use a tool
called `pyang` to visualize the structure of the model.

Start by entering the yang-tools Docker container:

```
$ make yang-tools
bash-4.4#
```

Next, run `pyang` on the `demo-port.yang` model:

```
bash-4.4# pyang -f tree demo-port.yang
```

You should see a tree representation of the `demo-port` module. Does this
match your expectations?

------

*Extra Credit:* Try to add a new leaf node to the `port-config` or
`port-state` grouping, then rerun `pyang` and see where your new leaf was
added.

------

We can also use `pyang` to visualize a more complicated set of models, like
the set of OpenConfig models that Stratum uses. These models have already
been loaded into the `yang-tools` container in the `/models` directory.
```
bash-4.4# pyang -f tree \
    -p ietf \
    -p openconfig \
    -p hercules \
    openconfig/interfaces/openconfig-interfaces.yang \
    openconfig/interfaces/openconfig-if-ethernet.yang \
    openconfig/platform/* \
    openconfig/qos/* \
    openconfig/system/openconfig-system.yang \
    hercules/openconfig-hercules-*.yang | less
```

You should see a tree structure of the models displayed in `less`. You can
use the arrow keys or `j`/`k` to scroll up and down. Type `q` to quit.

In the interface model, we can see the path to enable or disable an
interface: `interfaces/interface[name]/config/enabled`

What is the path to read the number of incoming packets (`in-pkts`) on an
interface?

------

*Extra Credit:* Take a look at the models in the `/models` directory or
browse them on GitHub: Try to find the description of the `enabled` or
`in-pkts` leaf nodes. *Hint:* Take a look at the
`openconfig-interfaces.yang` file.

------

## 2. Understand YANG encoding

There is no specific YANG data encoding, but data adhering to YANG models can
be encoded into XML, JSON, or Protobuf (among other formats). Each of these
formats has its own schema format.

First, we can look at YANG's first and canonical representation format, XML.
To see an empty skeleton of data encoded in XML, run:

```
bash-4.4# pyang -f sample-xml-skeleton demo-port.yang
```

This skeleton should match the tree representation we saw in part 1.

We can also use `pyang` to generate a DSDL schema based on the YANG model:

```
bash-4.4# pyang -f dsdl demo-port.yang | xmllint --format -
```

The first part of the schema describes the tree structure, and the second
part describes the value constraints for the leaf nodes.

*Extra credit:* Try adding a new speed identity (e.g. `SPEED_100G`) or
changing the range of `port-number` values in `demo-port.yang`, then rerun
`pyang -f dsdl`. Do you see your changes reflected in the DSDL schema?

------

Next, we will look at encoding data using Protocol Buffers (protobuf).
The protobuf encoding is a more compact binary encoding than XML, and
libraries can be automatically generated for dozens of languages. We can use
[`ygot`](https://github.com/openconfig/ygot)'s `proto_generator` to generate
protobuf messages from our YANG model.

```
bash-4.4# proto_generator -output_dir=/proto -package_name=tutorial demo-port.yang
```

`proto_generator` will generate two files:

* `/proto/tutorial/demo_port/demo_port.proto`
* `/proto/tutorial/enums/enums.proto`

Open `demo_port.proto` using `less`:

```
bash-4.4# less /proto/tutorial/demo_port/demo_port.proto
```

This file contains a top-level `Ports` message that matches the structure
defined in the YANG model. You can see that `proto_generator` also adds a
`yext.schemapath` custom option to each protobuf message field that
explicitly maps to the YANG leaf path.

Enums (like `tutorial.enums.DemoPortSPEED`) aren't included in this file, but
`proto_generator` puts them in a separate file: `enums.proto`

Open `enums.proto` using `less`:

```
bash-4.4# less /proto/tutorial/enums/enums.proto
```

You should see an enum for the 10GB speed, along with any other speeds that
you added if you completed the extra credit above.
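As noted above, protobuf is a much more compact encoding than XML. A toy illustration of the protobuf wire format (the field number and leaf name here are made up, not taken from the generated `demo_port.proto`): a single small integer leaf encodes to just two bytes, versus a much longer XML element:

```python
# Hand-encode one uint32 leaf (field number 1, value 42) in protobuf
# wire format: key byte = (field_number << 3) | wire_type.
field_number, wire_type = 1, 0          # wire type 0 = varint
key = (field_number << 3) | wire_type   # 0x08
encoded = bytes([key, 42])              # 42 fits in a single varint byte

# The same leaf as an XML element (hypothetical leaf name).
xml = "<port-number>42</port-number>"

print(len(encoded), len(xml))  # 2 29
```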
We can also use `proto_generator` to build the protobuf messages for the
OpenConfig models that Stratum uses:

```
bash-4.4# proto_generator \
    -generate_fakeroot \
    -output_dir=/proto \
    -package_name=openconfig \
    -exclude_modules=ietf-interfaces \
    -compress_paths \
    -base_import_path= \
    -path=ietf,openconfig,hercules \
    openconfig/interfaces/openconfig-interfaces.yang \
    openconfig/interfaces/openconfig-if-ip.yang \
    openconfig/lacp/openconfig-lacp.yang \
    openconfig/platform/openconfig-platform-linecard.yang \
    openconfig/platform/openconfig-platform-port.yang \
    openconfig/platform/openconfig-platform-transceiver.yang \
    openconfig/platform/openconfig-platform.yang \
    openconfig/system/openconfig-system.yang \
    openconfig/vlan/openconfig-vlan.yang \
    hercules/openconfig-hercules-interfaces.yang \
    hercules/openconfig-hercules-platform-chassis.yang \
    hercules/openconfig-hercules-platform-linecard.yang \
    hercules/openconfig-hercules-platform-node.yang \
    hercules/openconfig-hercules-platform-port.yang \
    hercules/openconfig-hercules-platform.yang \
    hercules/openconfig-hercules-qos.yang \
    hercules/openconfig-hercules.yang
```

You will find `openconfig.proto` and `enums.proto` in the
`/proto/openconfig` directory.

------

*Extra Credit:* Try to find the protobuf message fields used to enable a port
or to get the ingress packet counter. *Hint:* Searching by schemapath might
help.

------

`ygot` can also be used to generate Go structs that adhere to the YANG model
and that are capable of validating the structure, type, and values of data.

------

*Extra Credit:* If you have extra time or are interested in using YANG and Go
together, try generating Go code for the `demo-port` module.

```
bash-4.4# mkdir -p /goSrc
bash-4.4# generator -output_dir=/goSrc -package_name=tutorial demo-port.yang
```

Take a look at the Go files in `/goSrc`.

------

You can now quit out of the container (using `Ctrl-D` or `exit`).

## 3. Understanding YANG-enabled transport protocols

There are several YANG-model-agnostic protocols that can be used to get or
set data that adheres to a model, like NETCONF, RESTCONF, and gNMI. This part
focuses on using the protobuf encoding over gNMI.

First, make sure your Mininet container is still running:

```
$ make start
docker-compose up -d
mininet is up-to-date
onos is up-to-date
```

If you see the following output, then Mininet was not running:

```
Starting mininet ... done
Starting onos ... done
```

In that case, you will need to go back to Exercise 1 and install the
forwarding rules to re-establish pings between `h1a` and `h1b` for later
parts of this exercise. If you could not complete Exercise 1, you can use the
following P4Runtime-sh commands to enable connectivity:

```python
te = table_entry['IngressPipeImpl.l2_exact_table'](action='IngressPipeImpl.set_egress_port')
te.match['hdr.ethernet.dst_addr'] = '00:00:00:00:00:1A'
te.action['port_num'] = '3'
te.insert()
te = table_entry['IngressPipeImpl.l2_exact_table'](action='IngressPipeImpl.set_egress_port')
te.match['hdr.ethernet.dst_addr'] = '00:00:00:00:00:1B'
te.action['port_num'] = '4'
te.insert()
```

Next, we will use a
[gNMI client CLI](https://github.com/Yi-Tseng/Yi-s-gNMI-tool) to read all of
the configuration from the Stratum switch `leaf1` in our Mininet network:

```
$ util/gnmi-cli --grpc-addr localhost:50001 get /
```

The first part of the output shows the request that was made by the CLI:

```
REQUEST
path {
}
type: CONFIG
encoding: PROTO
```

The path being requested is the empty path (which means the root of the
config tree), the type of data is just the config tree, and the requested
encoding for the response is protobuf.

The second part of the output shows the response from Stratum:

```
RESPONSE
notification {
  update {
    path {
    }
    val {
      any_val {
        type_url: "type.googleapis.com/openconfig.Device"
        value: \252\221\231\304\001\...
TRUNCATED
      }
    }
  }
}
```

You can see that Stratum provides a response of type `openconfig.Device`,
which is the top-level message defined in `openconfig.proto`. The response is
the binary encoding of the data based on the protobuf message. The value is
not human readable, but we can translate the reply using a utility that
converts between the binary and textual representations of the protobuf
message.

We can rerun the command, but this time pipe the output through the converter
utility (then pipe that output to `less` to make scrolling easier):

```
$ util/gnmi-cli --grpc-addr localhost:50001 get / | util/oc-pb-decoder | less
```

The contents of the response should now be easier to read. Scroll down to the
first `interface`. Is the interface enabled? What is the speed of the port?

------

*Extra credit:* Can you find `in-pkts`? If not, why do you think they are
missing?

------

One of the benefits of gNMI is its "schema-less" encoding, which allows
clients or devices to update only the paths that need to be updated. This is
particularly useful for subscriptions.

First, let's try out the schema-less representation by requesting the
configuration of the port between `leaf1` and `h1a`:

```
$ util/gnmi-cli --grpc-addr localhost:50001 get \
    /interfaces/interface[name=leaf1-eth3]/config
```

You should see this response containing 2 leaves under config -- **enabled**
and **health-indicator**:

```
RESPONSE
notification {
  update {
    path {
      elem { name: "interfaces" }
      elem { name: "interface" key { key: "name" value: "leaf1-eth3" } }
      elem { name: "config" }
      elem { name: "enabled" }
    }
    val { bool_val: true }
  }
}
notification {
  update {
    path {
      elem { name: "interfaces" }
      elem { name: "interface" key { key: "name" value: "leaf1-eth3" } }
      elem { name: "config" }
      elem { name: "health-indicator" }
    }
    val { string_val: "GOOD" }
  }
}
```

The schema-less representation provides an `update` for each leaf, containing
both the path and the value of the leaf.
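Each `update` above carries the path as a structured list of `elem`s. Flattening that structure into the familiar XPath-like string is straightforward; a sketch, assuming a simple `(name, keys)` tuple representation of each path element:

```python
def path_to_str(elems):
    """Flatten gNMI-style path elements into an XPath-like string.

    elems: list of (name, {key: value}) tuples, mirroring the
    name/key structure of the `elem` messages shown above.
    """
    parts = []
    for name, keys in elems:
        part = name + "".join(f"[{k}={v}]" for k, v in keys.items())
        parts.append(part)
    return "/" + "/".join(parts)

path = [("interfaces", {}),
        ("interface", {"name": "leaf1-eth3"}),
        ("config", {}),
        ("enabled", {})]
print(path_to_str(path))
# /interfaces/interface[name=leaf1-eth3]/config/enabled
```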
You can confirm that the interface is enabled (set to `true`).

Next, we will subscribe to the ingress unicast packet counters for the
interface on `leaf1` attached to `h1a` (port 3):

```
$ util/gnmi-cli --grpc-addr localhost:50001 \
    --interval 1000 sub-sample \
    /interfaces/interface[name=leaf1-eth3]/state/counters/in-unicast-pkts
```

The first part of the output shows the request being made by the CLI:

```
REQUEST
subscribe {
  subscription {
    path {
      elem { name: "interfaces" }
      elem { name: "interface" key { key: "name" value: "leaf1-eth3" } }
      elem { name: "state" }
      elem { name: "counters" }
      elem { name: "in-unicast-pkts" }
    }
    mode: SAMPLE
    sample_interval: 1000
  }
  updates_only: true
}
```

We have the subscription path, the type of subscription (sampling), and the
sampling rate (every 1000 ms, or 1 s).

The second part of the output is a stream of responses:

```
RESPONSE
update {
  timestamp: 1567895852136043891
  update {
    path {
      elem { name: "interfaces" }
      elem { name: "interface" key { key: "name" value: "leaf1-eth3" } }
      elem { name: "state" }
      elem { name: "counters" }
      elem { name: "in-unicast-pkts" }
    }
    val { uint_val: 1592 }
  }
}
```

Each response has a timestamp, path, and new value. Because we are sampling,
you should see a new update printed every second.

Leave this running while we generate some traffic. In another window, open
the Mininet CLI and start a ping:

```
$ make mn-cli
*** Attaching to Mininet CLI...
*** To detach press Ctrl-D (Mininet will keep running)
mininet> h1a ping h1b
```

In the first window, you should see the `uint_val` increase by 1 every second
while your ping is still running. (If it's not exactly 1, then there could be
other traffic like NDP messages contributing to the increase.)

You can stop the gNMI subscription using `Ctrl-C`.

Finally, we will monitor link events using gNMI's on-change subscriptions.
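An aside on the sampled subscription above: since each response carries a nanosecond timestamp and a counter value, a client can turn successive samples into a rate. A minimal sketch (the sample values mirror the output shown above):

```python
def rate_pps(samples):
    """Packets per second from the last two (timestamp_ns, count) samples."""
    (t0, v0), (t1, v1) = samples[-2], samples[-1]
    return (v1 - v0) / ((t1 - t0) / 1e9)

samples = [(1567895852136043891, 1592),
           (1567895853136043891, 1593)]  # one second later, one more packet
print(rate_pps(samples))  # 1.0
```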
Start a subscription for the operational status of the `leaf1` port facing
`h1a`:

```
$ util/gnmi-cli --grpc-addr localhost:50001 sub-onchange \
    /interfaces/interface[name=leaf1-eth3]/state/oper-status
```

You should immediately see a response indicating that the port is `UP`:

```
RESPONSE
update {
  timestamp: 1567896668419430407
  update {
    path {
      elem { name: "interfaces" }
      elem { name: "interface" key { key: "name" value: "leaf1-eth3" } }
      elem { name: "state" }
      elem { name: "oper-status" }
    }
    val { string_val: "UP" }
  }
}
```

In the shell running the Mininet CLI, let's take down the interface on
`leaf1` connected to `h1a`:

```
mininet> sh ifconfig leaf1-eth3 down
```

You should see a response in your gNMI CLI window showing that the interface
on `leaf1` connected to `h1a` is `DOWN`:

```
RESPONSE
update {
  timestamp: 1567896891549363399
  update {
    path {
      elem { name: "interfaces" }
      elem { name: "interface" key { key: "name" value: "leaf1-eth3" } }
      elem { name: "state" }
      elem { name: "oper-status" }
    }
    val { string_val: "DOWN" }
  }
}
```

We can bring the interface back using the following Mininet command:

```
mininet> sh ifconfig leaf1-eth3 up
```

You should see another response in your gNMI CLI window indicating that the
interface is `UP`.

------

*Extra credit:* We can also use gNMI to disable or enable an interface. Leave
your gNMI subscription for operational status changes running. In the Mininet
CLI, start a ping between two hosts:

```
mininet> h1a ping h1b
```

You should see replies being shown in the Mininet CLI. In a third window, we
will use the gNMI CLI to change the configuration value of the `enabled` leaf
from `true` to `false`.
```
$ util/gnmi-cli --grpc-addr localhost:50001 set \
    /interfaces/interface[name=leaf1-eth3]/config/enabled \
    --bool-val false
```

In the gNMI set window, you should see a request indicating the new value for
the `enabled` leaf:

```
REQUEST
update {
  path {
    elem { name: "interfaces" }
    elem { name: "interface" key { key: "name" value: "leaf1-eth3" } }
    elem { name: "config" }
    elem { name: "enabled" }
  }
  val { bool_val: false }
}
```

In the gNMI subscription window, you should see a new response indicating
that the operational status of `leaf1-eth3` is `DOWN`:

```
RESPONSE
update {
  timestamp: 1567896891549363399
  update {
    path {
      elem { name: "interfaces" }
      elem { name: "interface" key { key: "name" value: "leaf1-eth3" } }
      elem { name: "state" }
      elem { name: "oper-status" }
    }
    val { string_val: "DOWN" }
  }
}
```

And in the Mininet CLI window, you should observe that the ping has stopped
working.

Next, we can re-enable the port:

```
$ util/gnmi-cli --grpc-addr localhost:50001 set \
    /interfaces/interface[name=leaf1-eth3]/config/enabled \
    --bool-val true
```

You should see another update in the gNMI subscription window indicating the
interface is `UP`, and the ping should resume in the Mininet CLI window.

## Congratulations!

You have completed the second exercise!

================================================
FILE: EXERCISE-3.md
================================================
# Exercise 3: Using ONOS as the Control Plane

This exercise provides a hands-on introduction to ONOS, where you will learn
how to:

1. Start ONOS along with a set of built-in apps for basic services such as
   topology discovery
2. Load a custom ONOS app and pipeconf
3. Push a configuration file to ONOS to discover and control the
   `stratum_bmv2` switches using P4Runtime and gNMI
4. Access the ONOS CLI and UI to verify that all `stratum_bmv2` switches have
   been discovered and configured correctly

## 1. Start ONOS

In a terminal window, type:

```
$ make restart
```

This command will restart the ONOS and Mininet containers, in case those were
running from previous exercises, clearing any previous state.

The parameters to start the ONOS container are specified in
[docker-compose.yml](docker-compose.yml). The container is configured to pass
the environment variable `ONOS_APPS`, used to define the built-in apps to
load during startup. In our case, this variable has value:

```
ONOS_APPS=gui2,drivers.bmv2,lldpprovider,hostprovider
```

requesting ONOS to pre-load the following built-in apps:

* `gui2`: ONOS web user interface (available at )
* `drivers.bmv2`: BMv2/Stratum drivers based on P4Runtime, gNMI, and gNOI
* `lldpprovider`: LLDP-based link discovery application (used in Exercise 4)
* `hostprovider`: Host discovery application (used in Exercise 4)

Once ONOS has started, you can check its log using the `make onos-log`
command.

To **verify that all required apps have been activated**, run the following
command in a new terminal window to access the ONOS CLI. Use password `rocks`
when prompted:

```
$ make onos-cli
```

If you see the following error, then ONOS is still starting; wait a minute
and try again.

```
ssh_exchange_identification: Connection closed by remote host
make: *** [onos-cli] Error 255
```

When you see the password prompt, type the default password: `rocks`.
Then type the following command in the ONOS CLI to show the list of running apps:

```
onos> apps -a -s
```

Make sure you see the following list of apps displayed:

```
*   5 org.onosproject.protocols.grpc        2.2.2  gRPC Protocol Subsystem
*   6 org.onosproject.protocols.gnmi        2.2.2  gNMI Protocol Subsystem
*  29 org.onosproject.drivers               2.2.2  Default Drivers
*  34 org.onosproject.generaldeviceprovider 2.2.2  General Device Provider
*  35 org.onosproject.protocols.p4runtime   2.2.2  P4Runtime Protocol Subsystem
*  36 org.onosproject.p4runtime             2.2.2  P4Runtime Provider
*  37 org.onosproject.drivers.p4runtime     2.2.2  P4Runtime Drivers
*  42 org.onosproject.protocols.gnoi        2.2.2  gNOI Protocol Subsystem
*  52 org.onosproject.hostprovider          2.2.2  Host Location Provider
*  53 org.onosproject.lldpprovider          2.2.2  LLDP Link Provider
*  66 org.onosproject.drivers.gnoi          2.2.2  gNOI Drivers
*  70 org.onosproject.drivers.gnmi          2.2.2  gNMI Drivers
*  71 org.onosproject.pipelines.basic       2.2.2  Basic Pipelines
*  72 org.onosproject.drivers.stratum       2.2.2  Stratum Drivers
* 161 org.onosproject.gui2                  2.2.2  ONOS GUI2
* 181 org.onosproject.drivers.bmv2          2.2.2  BMv2 Drivers
```

There are more apps here than those defined in `$ONOS_APPS`. That's because each app in ONOS can define other apps as dependencies: when loading an app, ONOS automatically resolves dependencies and loads all other required apps.

#### Disable link discovery service

Link discovery will be the focus of the next exercise. For now, this service lacks support in the P4 program, so we suggest you deactivate it for the rest of this exercise to avoid running into issues. Use the following ONOS CLI command to deactivate the link discovery service:

```
onos> app deactivate lldpprovider
```

To exit the ONOS CLI, use `Ctrl-D`. This will stop the CLI process but will not affect ONOS itself.

#### Restart ONOS in case of errors

If anything goes wrong and you need to kill ONOS, you can use the command `make restart` to restart both Mininet and ONOS.

## 2. Build app and register pipeconf

Inside the [app/](./app) directory you will find a starter implementation of an ONOS app that includes a pipeconf. The pipeconf-related files are the following:

* [PipeconfLoader.java][PipeconfLoader.java]: a component that registers the pipeconf at app activation;
* [InterpreterImpl.java][InterpreterImpl.java]: an implementation of the `PipelineInterpreter` driver behavior;
* [PipelinerImpl.java][PipelinerImpl.java]: an implementation of the `Pipeliner` driver behavior.

To build the ONOS app (including the pipeconf), run the following command in the second terminal window:

```
$ make app-build
```

This will produce a binary file `app/target/ngsdn-tutorial-1.0-SNAPSHOT.oar` that we will use to install the application in the running ONOS instance. Use the following command to load the app into ONOS and activate it:

```
$ make app-reload
```

After the app has been activated, you should see the following messages in the ONOS log (`make onos-log`), signaling that the pipeconf has been registered and the different app components have been started:

```
INFO  [PiPipeconfManager] New pipeconf registered: org.onosproject.ngsdn-tutorial (fingerprint=...)
INFO  [MainComponent] Started
```

Alternatively, you can show the list of registered pipeconfs using the ONOS CLI (`make onos-cli`) command:

```
onos> pipeconfs
```

## 3. Push netcfg to ONOS

Now that ONOS and Mininet are running, it's time to let ONOS know how to reach the four switches and control them.
We do this by using a configuration file located at [mininet/netcfg.json](mininet/netcfg.json), which contains information such as:

* the gRPC address and port associated with each Stratum device;
* the ONOS driver to use for each device, `stratum-bmv2` in this case;
* the pipeconf to use for each device, `org.onosproject.ngsdn-tutorial` in this case, as defined in [PipeconfLoader.java][PipeconfLoader.java];
* configuration specific to our custom app (`fabricDeviceConfig`).

This file also contains information related to the IPv6 configuration associated with each switch interface. We will discuss this information in more detail in the next exercises.

In a terminal window, type:

```
$ make netcfg
```

This command will push `netcfg.json` to ONOS, triggering discovery and configuration of the 4 switches. Check the ONOS log (`make onos-log`); you should see messages like:

```
INFO  [GrpcChannelControllerImpl] Creating new gRPC channel grpc:///mininet:50001?device_id=1...
...
INFO  [StreamClientImpl] Setting mastership on device:leaf1...
...
INFO  [PipelineConfigClientImpl] Setting pipeline config for device:leaf1 to org.onosproject.ngsdn-tutorial...
...
INFO  [GnmiDeviceStateSubscriber] Started gNMI subscription for 6 ports on device:leaf1
...
INFO  [DeviceManager] Device device:leaf1 port [leaf1-eth1](1) status changed (enabled=true)
INFO  [DeviceManager] Device device:leaf1 port [leaf1-eth2](2) status changed (enabled=true)
INFO  [DeviceManager] Device device:leaf1 port [leaf1-eth3](3) status changed (enabled=true)
INFO  [DeviceManager] Device device:leaf1 port [leaf1-eth4](4) status changed (enabled=true)
INFO  [DeviceManager] Device device:leaf1 port [leaf1-eth5](5) status changed (enabled=true)
INFO  [DeviceManager] Device device:leaf1 port [leaf1-eth6](6) status changed (enabled=true)
```

## 4. Use the ONOS CLI to verify the network configuration

Access the ONOS CLI using `make onos-cli`.
Enter the following command to verify the network config pushed before:

```
onos> netcfg
```

#### Devices

Verify that all 4 devices have been discovered and are connected:

```
onos> devices -s
id=device:leaf1, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.ngsdn-tutorial
id=device:leaf2, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.ngsdn-tutorial
id=device:spine1, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.ngsdn-tutorial
id=device:spine2, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.ngsdn-tutorial
```

Make sure you see `available=true` for all devices. That means ONOS has a gRPC channel open to the device and the pipeline configuration has been pushed.

#### Ports

Check port information, obtained by ONOS by performing a gNMI Get RPC for the OpenConfig Interfaces model:

```
onos> ports -s device:spine1
id=device:spine1, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.ngsdn-tutorial
  port=[spine1-eth1](1), state=enabled, type=copper, speed=10000, ...
  port=[spine1-eth2](2), state=enabled, type=copper, speed=10000, ...
```

Check port statistics, also obtained by querying the OpenConfig Interfaces model via gNMI:

```
onos> portstats device:spine1
deviceId=device:spine1
  port=[spine1-eth1](1), pktRx=114, pktTx=114, bytesRx=14022, bytesTx=14136, pktRxDrp=0, pktTxDrp=0, Dur=173
  port=[spine1-eth2](2), pktRx=114, pktTx=114, bytesRx=14022, bytesTx=14136, pktRxDrp=0, pktTxDrp=0, Dur=173
```

#### Flow rules and groups

Check the ONOS flow rules. You should see three flow rules for each device.
For example, to show all flow rules installed so far on device `leaf1`:

```
onos> flows -s any device:leaf1
deviceId=device:leaf1, flowRuleCount=3
    ADDED, bytes=0, packets=0, table=IngressPipeImpl.acl_table, priority=40000, selector=[ETH_TYPE:arp], treatment=[immediate=[IngressPipeImpl.clone_to_cpu()]]
    ADDED, bytes=0, packets=0, table=IngressPipeImpl.acl_table, priority=40000, selector=[ETH_TYPE:ipv6, IP_PROTO:58, ICMPV6_TYPE:136], treatment=[immediate=[IngressPipeImpl.clone_to_cpu()]]
    ADDED, bytes=0, packets=0, table=IngressPipeImpl.acl_table, priority=40000, selector=[ETH_TYPE:ipv6, IP_PROTO:58, ICMPV6_TYPE:135], treatment=[immediate=[IngressPipeImpl.clone_to_cpu()]]
```

This list includes flow rules installed by ONOS built-in services such as `hostprovider`. We'll talk more about these services in the next exercise.

To show all groups installed so far, you can use the `groups` command. For example, to show groups on `leaf1`:

```
onos> groups any device:leaf1
deviceId=device:leaf1, groupCount=1
   id=0x63, state=ADDED, type=CLONE, bytes=0, packets=0, appId=org.onosproject.core, referenceCount=0
       id=0x63, bucket=1, bytes=0, packets=0, weight=-1, actions=[OUTPUT:CONTROLLER]
```

"Group" is an ONOS northbound abstraction that is mapped internally to different types of P4Runtime entities. In this case, you should see 1 group of type `CLONE`, internally mapped to a P4Runtime `CloneSessionEntry`, here used to clone packets to the controller via packet-in. We'll talk more about controller packet-in/out in the next session.

## 5. Visualize the topology on the ONOS web UI

Using the ONF Cloud Tutorial Portal, access the ONOS UI. If you are running the VM on your laptop, open up a browser (e.g. Firefox) to . When asked, use the username `onos` and password `rocks`.

You should see 4 devices in the topology view, corresponding to the 4 switches of our 2x2 fabric. Press `L` to show device labels.
Because link discovery is not enabled, the ONOS UI will not show any links between the devices.

While here, feel free to interact with and discover the ONOS UI. For more information on how to use the ONOS web UI, please refer to this guide: 

There is a way to show the pipeconf details for a given device. Can you find it?

#### Pipeconf UI

In the ONOS topology view, click on one of the switches (e.g. `device:leaf1`) and the Device Details panel will appear. In that panel, click on the Pipeconf icon (the last one) to open the Pipeconf view for that device.

![device-leaf1-details-panel](img/device-leaf1-details-panel.png)

Here you will find info on the pipeconf currently used by the specific device, including details of the P4 tables.

![onos-gui-pipeconf-leaf1](img/onos-gui-pipeconf-leaf1.png)

Clicking a table row brings up the details panel, showing details of the match fields, actions, action parameter bit widths, etc.

## Congratulations!

You have completed the third exercise! If you're feeling ambitious, you can do the extra credit steps below.

### Extra Credit: Inspect stratum_bmv2 internal state

You can use the P4Runtime shell to dump all table entries currently installed on the switch by ONOS. In a separate terminal window, start a P4Runtime shell for `leaf1`:

```
$ util/p4rt-sh --grpc-addr localhost:50001 --election-id 0,1
```

At the shell prompt, type the following command to dump all entries from the ACL table:

```
P4Runtime sh >>> for te in table_entry["IngressPipeImpl.acl_table"].read():
            ...:     print(te)
            ...:
```

You should see exactly three entries, each one corresponding to a flow rule in ONOS.
For example, the flow rule matching on NDP NS packets should look like this in the P4Runtime shell:

```
table_id: 33557865 ("IngressPipeImpl.acl_table")
match {
  field_id: 4 ("hdr.ethernet.ether_type")
  ternary {
    value: "\x86\xdd"
    mask: "\xff\xff"
  }
}
match {
  field_id: 5 ("hdr.ipv6.next_hdr")
  ternary {
    value: "\x3a"
    mask: "\xff"
  }
}
match {
  field_id: 6 ("hdr.icmpv6.type")
  ternary {
    value: "\x87"
    mask: "\xff"
  }
}
action {
  action {
    action_id: 16782152 ("IngressPipeImpl.clone_to_cpu")
  }
}
priority: 40001
```

### Extra Credit: Show ONOS gRPC log

ONOS provides a debugging feature that dumps all gRPC messages exchanged with a device to a file. To enable this feature, type the following command in the ONOS CLI (`make onos-cli`):

```
onos> cfg set org.onosproject.grpc.ctl.GrpcChannelControllerImpl enableMessageLog true
```

Check the content of directory `tmp/onos` in the `ngsdn-tutorial` root. You should see many files, some of which start with the name `grpc___mininet_`. You should see four such files, one per device, named after the gRPC port used to establish the gRPC channel.

Check the content of one of these files; you should see a dump of the gRPC messages in Protobuf Text format, for messages like:

* P4Runtime `PacketIn` and `PacketOut`;
* P4Runtime Read RPCs used to periodically dump table entries and read counters;
* gNMI Get RPCs to read port counters.
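A side note on the byte strings appearing in these Protobuf Text dumps, such as the `value: "\x86\xdd"` in the table entry shown in the first extra credit: P4Runtime encodes ternary match values and masks as big-endian byte strings sized to the field's bit width. A quick sanity check in plain Python (a sketch, independent of the tutorial tooling):

```python
# Constants matched by the ACL entry for NDP NS packets: the P4Runtime
# byte strings are big-endian encodings of well-known protocol numbers.
ETH_TYPE_IPV6 = 0x86DD    # hdr.ethernet.ether_type (16 bits -> 2 bytes)
IP_PROTO_ICMPV6 = 58      # hdr.ipv6.next_hdr (8 bits -> 1 byte)
ICMPV6_TYPE_NS = 135      # ICMPv6 Neighbor Solicitation

eth_type = ETH_TYPE_IPV6.to_bytes(2, "big")     # b"\x86\xdd"
next_hdr = IP_PROTO_ICMPV6.to_bytes(1, "big")   # b"\x3a"
ns_type = ICMPV6_TYPE_NS.to_bytes(1, "big")     # b"\x87"
```

The all-ones masks (`\xff...`) simply make each ternary match behave as an exact match on those fields.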
Remember to disable gRPC message logging in ONOS when you're done, to avoid affecting performance:

```
onos> cfg set org.onosproject.grpc.ctl.GrpcChannelControllerImpl enableMessageLog false
```

[PipeconfLoader.java]: app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/PipeconfLoader.java
[InterpreterImpl.java]: app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/InterpreterImpl.java
[PipelinerImpl.java]: app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/PipelinerImpl.java

================================================
FILE: EXERCISE-4.md
================================================

# Exercise 4: Enabling ONOS Built-in Services

In this exercise, you will integrate ONOS built-in services for link and host discovery with your P4 program. Such built-in services are based on the ability of switches to send data plane packets to the controller (packet-in) and vice versa (packet-out).

To make this work with your P4 program, you will need to apply simple changes to the starter P4 code, validate the P4 changes using PTF-based data plane unit tests, and finally, apply changes to the pipeconf Java implementation to enable ONOS's built-in apps to use packet-in/out via P4Runtime.

The exercise has two parts:

1. Enable packet I/O and verify link discovery
2. Host discovery & L2 bridging

## Part 1: Enable packet I/O and verify link discovery

We start by reviewing how controller packet I/O works with P4Runtime.

### Background: Controller packet I/O with P4Runtime

The P4 program under [p4src/main.p4](p4src/main.p4) provides support for carrying arbitrary metadata in P4Runtime `PacketIn` and `PacketOut` messages.
Two special headers are defined and annotated with the standard P4 annotation `@controller_header`:

```p4
@controller_header("packet_in")
header cpu_in_header_t {
    port_num_t  ingress_port;
    bit<7>      _pad;
}

@controller_header("packet_out")
header cpu_out_header_t {
    port_num_t  egress_port;
    bit<7>      _pad;
}
```

These headers are used to carry the original switch ingress port of a packet-in, and to specify the intended output port for a packet-out.

When the P4Runtime agent in Stratum receives a packet from the switch CPU port, it expects to find the `cpu_in_header_t` header as the first one in the frame. Indeed, it looks at the `controller_packet_metadata` part of the P4Info file to determine the number of bits to strip at the beginning of the frame and to populate the corresponding metadata fields of the `PacketIn` message, including the ingress port as in this case.

Similarly, when Stratum receives a P4Runtime `PacketOut` message, it uses the values found in the `PacketOut`'s metadata fields to serialize and prepend a `cpu_out_header_t` to the frame before feeding it to the pipeline parser.

### 1. Modify P4 program

The P4 starter code already provides support for the following capabilities:

* Parse the `cpu_out` header (if the ingress port is the CPU one)
* Emit the `cpu_in` header as the first one in the deparser
* Provide an ACL table with ternary match fields and an action to send or clone packets to the CPU port (used to generate packet-ins)

Something is missing to provide complete packet-in/out support, and you have to modify the P4 program to implement it:

1. Open `p4src/main.p4`;
2. Modify the code where requested (look for `TODO EXERCISE 4`);
3. Compile the modified P4 program using the `make p4-build` command. Make sure to address any compiler errors before continuing.

At this point, our P4 pipeline should be ready for testing.

### 2. Run PTF tests

Before starting ONOS, let's make sure the P4 changes work as expected by running some PTF tests.
But first, you need to apply a few simple changes to the test case implementation.

Open file `ptf/tests/packetio.py` and modify it wherever requested (look for `TODO EXERCISE 4`). This test file provides two test cases: one for packet-in and one for packet-out. In both test cases, you will have to modify the implementation to use the same names for P4Runtime entities as specified in the P4Info file obtained after compiling the P4 program (`p4src/build/p4info.txt`).

To run all the tests for this exercise:

```
make p4-test TEST=packetio
```

This command will run all tests in the `packetio` group (i.e. the content of `ptf/tests/packetio.py`). To run a specific test case you can use:

```
make p4-test TEST=<MODULE>.<TEST CLASS NAME>
```

For example:

```
make p4-test TEST=packetio.PacketOutTest
```

#### Check for regressions

To make sure the new changes are not breaking other features, run the tests for L2 bridging support:

```
make p4-test TEST=bridging
```

If all tests succeed, congratulations! You can move to the next step.

#### How to debug failing tests?

When running PTF tests, multiple files are produced that you can use to spot bugs:

* `ptf/stratum_bmv2.log`: BMv2 log with trace level (showing tables matched and other info for each packet)
* `ptf/p4rt_write.log`: log of all P4Runtime Write requests
* `ptf/ptf.pcap`: PCAP file with all packets sent and received during tests (the tutorial VM comes with Wireshark for easier visualization)
* `ptf/ptf.log`: PTF log of all packet operations (sent and received)

### 3. Modify ONOS pipeline interpreter

The `PipelineInterpreter` is the ONOS driver behavior used to map the ONOS representation of packet-in/out to one that is consistent with your P4 pipeline (along with other similar mappings).

Specifically, to use services like link and host discovery, ONOS built-in apps need to be able to set the output port of a packet-out and access the original ingress port of a packet-in.
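To make the byte layout concrete, here is a minimal Python sketch (not part of the tutorial code) of how the 2-byte controller headers described above can be packed and unpacked, assuming `port_num_t` is `bit<9>` as in the starter P4 code, so each header is a 9-bit port number followed by 7 bits of padding:

```python
def pack_cpu_out(egress_port: int, frame: bytes) -> bytes:
    """Prepend a 2-byte cpu_out header carrying the egress port."""
    assert 0 <= egress_port < 2 ** 9  # port must fit in 9 bits
    # Shift the port into the top 9 bits; the low 7 bits are the _pad.
    return (egress_port << 7).to_bytes(2, "big") + frame

def unpack_cpu_in(frame: bytes):
    """Strip a 2-byte cpu_in header; return (ingress_port, payload)."""
    hdr = int.from_bytes(frame[:2], "big")
    return hdr >> 7, frame[2:]

# Round-trip: what Stratum conceptually does on packet-out/packet-in.
port, payload = unpack_cpu_in(pack_cpu_out(3, b"\x01\x02"))
```

Stratum derives the actual bit widths from the `controller_packet_metadata` section of the P4Info file, so it never hard-codes them as this sketch does.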
In the following, you will be asked to apply a few simple changes to the `PipelineInterpreter` implementation:

1. Open file: `app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/InterpreterImpl.java`
2. Modify wherever requested (look for `TODO EXERCISE 4`), specifically:
    * Look for a method named `buildPacketOut`; modify the implementation to use the same name of the **egress port** metadata field for the `packet_out` header as specified in the P4Info file.
    * Look for method `mapInboundPacket`; modify the implementation to use the same name of the **ingress port** metadata field for the `packet_in` header as specified in the P4Info file.
3. Build the ONOS app (including the pipeconf) with the command `make app-build`.

The P4 compiler outputs (`bmv2.json` and `p4info.txt`) are copied into the app resource folder (`app/src/main/resources`) and will be included in the ONOS app binary. The copy that gets included in the ONOS app will be the one that gets deployed by ONOS to the device after the connection is initiated.

### 4. Restart ONOS

**Note:** ONOS should already be running, and in theory, there should be no need to restart it. However, while ONOS supports reloading the pipeconf with a modified one (e.g., with updated `bmv2.json` and `p4info.txt`), the version of ONOS used in this tutorial (2.2.0, the most recent at the time of writing) does not support reloading the pipeconf behavior classes, in which case the old classes will still be used. For this reason, to reload a modified version of `InterpreterImpl.java`, you need to kill ONOS first.

In a terminal window, type:

```
$ make restart
```

This command will restart all containers, removing any state from previous executions, including ONOS. Wait approximately 20 seconds for ONOS to complete booting, or check the ONOS log (`make onos-log`) until no more messages are shown.

### 5. Load updated app and register pipeconf

In a terminal window, type:

```
$ make app-reload
```

This command will upload to ONOS and activate the app binary previously built (located at `app/target/ngsdn-tutorial-1.0-SNAPSHOT.oar`).

### 6. Push netcfg to ONOS to trigger device and link discovery

In a terminal window, type:

```
$ make netcfg
```

Use the ONOS CLI to verify that all devices have been discovered:

```
onos> devices -s
id=device:leaf1, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.ngsdn-tutorial
id=device:leaf2, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.ngsdn-tutorial
id=device:spine1, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.ngsdn-tutorial
id=device:spine2, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.ngsdn-tutorial
```

Verify that all links have been discovered. You should see 8 links in total, each one representing a direction of the 4 bidirectional links of our Mininet topology:

```
onos> links
src=device:leaf1/1, dst=device:spine1/1, type=DIRECT, state=ACTIVE, expected=false
src=device:leaf1/2, dst=device:spine2/1, type=DIRECT, state=ACTIVE, expected=false
src=device:leaf2/1, dst=device:spine1/2, type=DIRECT, state=ACTIVE, expected=false
src=device:leaf2/2, dst=device:spine2/2, type=DIRECT, state=ACTIVE, expected=false
src=device:spine1/1, dst=device:leaf1/1, type=DIRECT, state=ACTIVE, expected=false
src=device:spine1/2, dst=device:leaf2/1, type=DIRECT, state=ACTIVE, expected=false
src=device:spine2/1, dst=device:leaf1/2, type=DIRECT, state=ACTIVE, expected=false
src=device:spine2/2, dst=device:leaf2/2, type=DIRECT, state=ACTIVE, expected=false
```

**If you don't see a link**, check the ONOS log (`make onos-log`) for any errors with packet-in/out handling. In case of errors, it's possible that you have not modified `InterpreterImpl.java` correctly. In this case, go back to step 3.
You should see 5 flow rules for each device. For example, to show all flow rules installed so far on device `leaf1`:

```
onos> flows -s any device:leaf1
deviceId=device:leaf1, flowRuleCount=5
    ADDED, ..., table=IngressPipeImpl.acl_table, priority=40000, selector=[ETH_TYPE:lldp], treatment=[immediate=[IngressPipeImpl.clone_to_cpu()]]
    ADDED, ..., table=IngressPipeImpl.acl_table, priority=40000, selector=[ETH_TYPE:bddp], treatment=[immediate=[IngressPipeImpl.clone_to_cpu()]]
    ADDED, ..., table=IngressPipeImpl.acl_table, priority=40000, selector=[ETH_TYPE:arp], treatment=[immediate=[IngressPipeImpl.clone_to_cpu()]]
    ADDED, ..., table=IngressPipeImpl.acl_table, priority=40000, selector=[ETH_TYPE:ipv6, IP_PROTO:58, ICMPV6_TYPE:136], treatment=[immediate=[IngressPipeImpl.clone_to_cpu()]]
    ADDED, ..., table=IngressPipeImpl.acl_table, priority=40000, selector=[ETH_TYPE:ipv6, IP_PROTO:58, ICMPV6_TYPE:135], treatment=[immediate=[IngressPipeImpl.clone_to_cpu()]]
...
```

These flow rules are the result of the translation of flow objectives generated by the `hostprovider` and `lldpprovider` built-in apps. Flow objectives are translated by the pipeconf, which provides a `Pipeliner` behavior implementation ([PipelinerImpl.java][PipelinerImpl.java]).

These flow rules specify a match key using ONOS standard/known header fields, such as `ETH_TYPE`, `ICMPV6_TYPE`, etc. These types are mapped to P4Info-specific match fields by the pipeline interpreter ([InterpreterImpl.java][InterpreterImpl.java]; look for method `mapCriterionType`).

The `hostprovider` app provides host discovery capabilities by intercepting ARP (`selector=[ETH_TYPE:arp]`) and NDP packets (`selector=[ETH_TYPE:ipv6, IP_PROTO:58, ICMPV6_TYPE:...]`), which are cloned to the controller (`treatment=[immediate=[IngressPipeImpl.clone_to_cpu()]]`).
Similarly, `lldpprovider` generates flow objectives to intercept LLDP and BDDP packets (`selector=[ETH_TYPE:lldp]` and `selector=[ETH_TYPE:bddp]`), periodically emitted on all devices' ports as P4Runtime packet-outs, allowing automatic link discovery.

All flow rules refer to the P4 action `clone_to_cpu()`, which invokes a v1model-specific primitive to set the clone session ID:

```p4
action clone_to_cpu() {
    clone3(CloneType.I2E, CPU_CLONE_SESSION_ID, ...);
}
```

To actually generate P4Runtime packet-in messages for matched packets, the pipeconf's pipeliner generates a `CLONE` *group*, internally translated into a P4Runtime `CloneSessionEntry`, that maps `CPU_CLONE_SESSION_ID` to a set of ports, just the CPU one in this case.

To show all groups installed in ONOS, you can use the `groups` command. For example, to show groups on `leaf1`:

```
onos> groups any device:leaf1
deviceId=device:leaf1, groupCount=1
   id=0x63, state=ADDED, type=CLONE, ..., appId=org.onosproject.core, referenceCount=0
       id=0x63, bucket=1, ..., weight=-1, actions=[OUTPUT:CONTROLLER]
```

### 7. Visualize links on the ONOS UI

Using the ONF Cloud Tutorial Portal, access the ONOS UI. If you are running the VM on your laptop, open up a browser (e.g. Firefox) to .

On the same page where the ONOS topology view is shown:

* Press `L` to show device labels;
* Press `A` multiple times until you see link stats, in either packets/second (pps) or bits/second.

Link stats are derived by ONOS by periodically obtaining the port counters for each device. ONOS internally uses gNMI to read port information, including counters. In this case, you should see approximately 1 packet/s, as that's the rate of packet-outs generated by the `lldpprovider` app.

## Part 2: Host discovery & L2 bridging

By fixing packet I/O support in the pipeline interpreter, we not only got link discovery, but also enabled the built-in `hostprovider` app to perform *host* discovery.
This service is required by our tutorial app to populate the bridging tables of our P4 pipeline, to forward packets based on the Ethernet destination address. Indeed, the `hostprovider` app works by snooping incoming ARP/NDP packets on the switch and deducing where a host is connected from the packet-in message metadata. Other apps in ONOS, like our tutorial app, can then listen for host-related events and access information about their addresses (IP, MAC) and location.

In the following, you will be asked to enable the app's `L2BridgingComponent`, and to verify that host discovery works by pinging hosts on Mininet. But first, it's useful to review how the starter code implements L2 bridging.

### Background: Our implementation of L2 bridging

To make things easier, the starter code assumes that hosts of a given subnet are all connected to the same leaf, and that two interfaces of two different leaves cannot be configured with the same IPv6 subnet. In other words, L2 bridging is allowed only for hosts connected to the same leaf.

The Mininet script [topo-v6.py](mininet/topo-v6.py) used in this tutorial defines 4 subnets:

* `2001:1:1::/64` with 3 hosts connected to `leaf1` (`h1a`, `h1b`, and `h1c`)
* `2001:1:2::/64` with 1 host connected to `leaf1` (`h2`)
* `2001:2:3::/64` with 1 host connected to `leaf2` (`h3`)
* `2001:2:4::/64` with 1 host connected to `leaf2` (`h4`)

The same IPv6 prefixes are defined in the [netcfg.json](mininet/netcfg.json) file and are used to provide interface configuration to ONOS.

#### Data plane

The P4 code defines tables to forward packets based on the Ethernet address, precisely, two distinct tables, to handle two different types of L2 entries:

1. Unicast entries: filled in by the control plane when the location (port) of new hosts is learned.
2. Broadcast/multicast entries: used to replicate NDP Neighbor Solicitation (NS) messages to all host-facing ports.

For (2), unlike ARP messages in IPv4, which are broadcast to the Ethernet destination address FF:FF:FF:FF:FF:FF, NDP messages are sent to special Ethernet addresses specified by RFC 2464. These addresses are prefixed with 33:33, and the last four octets are the last four octets of the IPv6 destination multicast address. The most straightforward way of matching such IPv6 multicast packets, without digging into the details of RFC 2464, is to use a ternary match on `33:33:**:**:**:**`, where `*` means "don't care".

For this reason, our solution defines two tables: one that uses exact matching, `l2_exact_table` (easier to scale on switch ASIC memory), and one that uses ternary matching, `l2_ternary_table` (which requires more expensive TCAM memories, usually much smaller). These tables are applied to packets in the order defined in the `apply` block of the ingress pipeline (`IngressPipeImpl`):

```p4
if (!l2_exact_table.apply().hit) {
    l2_ternary_table.apply();
}
```

The ternary table has lower priority, and it's applied only if a matching entry is not found in the exact one.

**Note**: we won't be using VLANs to segment our L2 domains. As such, packets matching the `l2_ternary_table` will be broadcast to ALL host-facing ports.

#### Control plane (L2BridgingComponent)

We already provide an ONOS app component controlling the L2 bridging tables of the P4 program: [L2BridgingComponent.java][L2BridgingComponent.java]

This app component defines two event listeners, located at the bottom of the `L2BridgingComponent` class: `InternalDeviceListener` for device events (e.g. connection of a new switch) and `InternalHostListener` for host events (e.g. new host discovered).
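As an aside, the RFC 2464 address mapping described in the data plane section above (the one matched by the `33:33` ternary rule) can be sketched in a few lines of Python; this snippet is for illustration only and is not part of the app code:

```python
import ipaddress

def ipv6_mcast_mac(ipv6_addr: str) -> str:
    """RFC 2464 mapping: 33:33 followed by the last four octets of the
    IPv6 multicast destination address."""
    last4 = ipaddress.IPv6Address(ipv6_addr).packed[-4:]
    return "33:33:" + ":".join(f"{b:02x}" for b in last4)

# Solicited-node multicast address targeted by an NDP NS for h1b
# (2001:1:1::b) is ff02::1:ff00:b, giving a 33:33-prefixed MAC:
mac = ipv6_mcast_mac("ff02::1:ff00:b")
```

Any MAC produced this way starts with `33:33`, which is why the single ternary entry on `33:33:**:**:**:**` is enough to catch all NDP multicast traffic.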
These listeners in turn call methods like: * `setUpDevice()`: responsible for creating multicast groups for all host-facing ports and inserting flow rules for the `l2_ternary_table` pointing to such groups. * `learnHost()`: responsible for inserting unicast L2 entries based on the discovered host location. To support reloading the app implementation, these methods are also called at component activation for all devices and hosts known by ONOS at the time of activation (look for methods `activate()` and `setUpAllDevices()`). To keep things simple, our broadcast domain will be restricted to a single device, i.e. we allow packet replication only for ports of the same leaf switch. As such, we can exclude ports going to the spines from the multicast group. To determine whether a port is expected to be facing hosts or not, we look at the interface configuration in [netcfg.json](mininet/netcfg.json) file (look for the `ports` section of the JSON file). ### 1. Enable L2BridgingComponent and reload the app Before starting, you need to enable the app's L2BridgingComponent, which is currently disabled. 1. Open file: `app/src/main/java/org/onosproject/ngsdn/tutorial/L2BridgingComponent.java` 2. Look for the class definition at the top and enable the component by setting the `enabled` flag to `true` ```java @Component( immediate = true, enabled = true ) public class L2BridgingComponent { ``` 3. Build the ONOS app with `make app-build` 4. Re-load the app to apply the changes with `make app-reload` After reloading the app, you should see the following messages in the ONOS log (`make onos-log`): ``` INFO [L2BridgingComponent] Started ... INFO [L2BridgingComponent] *** L2 BRIDGING - Starting initial set up for device:leaf1... INFO [L2BridgingComponent] Adding L2 multicast group with 4 ports on device:leaf1... INFO [L2BridgingComponent] Adding L2 multicast rules on device:leaf1... ... INFO [L2BridgingComponent] *** L2 BRIDGING - Starting initial set up for device:leaf2... 
INFO [L2BridgingComponent] Adding L2 multicast group with 2 ports on device:leaf2... INFO [L2BridgingComponent] Adding L2 multicast rules on device:leaf2... ... ``` ### 2. Examine flow rules and groups Check the ONOS flow rules; you should see 2 new flow rules for the `l2_ternary_table` installed by L2BridgingComponent. For example, to show all flow rules installed so far on device `leaf1`: ``` onos> flows -s any device:leaf1 deviceId=device:leaf1, flowRuleCount=... ... ADDED, ..., table=IngressPipeImpl.l2_ternary_table, priority=10, selector=[hdr.ethernet.dst_addr=0x333300000000&&&0xffff00000000], treatment=[immediate=[IngressPipeImpl.set_multicast_group(gid=0xff)]] ADDED, ..., table=IngressPipeImpl.l2_ternary_table, priority=10, selector=[hdr.ethernet.dst_addr=0xffffffffffff&&&0xffffffffffff], treatment=[immediate=[IngressPipeImpl.set_multicast_group(gid=0xff)]] ... ``` To also show the multicast groups, you can use the `groups` command. For example, to show groups on `leaf1`: ``` onos> groups any device:leaf1 deviceId=device:leaf1, groupCount=2 id=0x63, state=ADDED, type=CLONE, ..., appId=org.onosproject.core, referenceCount=0 id=0x63, bucket=1, ..., weight=-1, actions=[OUTPUT:CONTROLLER] id=0xff, state=ADDED, type=ALL, ..., appId=org.onosproject.ngsdn-tutorial, referenceCount=0 id=0xff, bucket=1, ..., weight=-1, actions=[OUTPUT:3] id=0xff, bucket=2, ..., weight=-1, actions=[OUTPUT:4] id=0xff, bucket=3, ..., weight=-1, actions=[OUTPUT:5] id=0xff, bucket=4, ..., weight=-1, actions=[OUTPUT:6] ``` The `ALL` group is a new one, created by our app (`appId=org.onosproject.ngsdn-tutorial`). Groups of type `ALL` in ONOS map to P4Runtime `MulticastGroupEntry`, in this case used to broadcast NDP NS packets to all host-facing ports. ### 3.
Test L2 bridging on Mininet To verify that L2 bridging works as intended, send a ping between hosts in the same subnet: ``` mininet> h1a ping h1b PING 2001:1:1::b(2001:1:1::b) 56 data bytes 64 bytes from 2001:1:1::b: icmp_seq=2 ttl=64 time=0.580 ms 64 bytes from 2001:1:1::b: icmp_seq=3 ttl=64 time=0.483 ms 64 bytes from 2001:1:1::b: icmp_seq=4 ttl=64 time=0.484 ms ... ``` In contrast to Exercise 1, here we have NOT set any NDP static entries. Instead, NDP NS and NA packets are handled by the data plane thanks to the `ALL` group and `l2_ternary_table`'s flow rule described above. Moreover, given the ACL flow rules to clone NDP packets to the controller, hosts can be discovered by ONOS. Host discovery events are used by `L2BridgingComponent.java` to insert entries in the P4 `l2_exact_table`. Check the ONOS log, you should see messages related to the discovery of hosts `h1a` and `h1b`: ``` INFO [L2BridgingComponent] HOST_ADDED event! host=00:00:00:00:00:1A/None, deviceId=device:leaf1, port=3 INFO [L2BridgingComponent] Adding L2 unicast rule on device:leaf1 for host 00:00:00:00:00:1A/None (port 3)... INFO [L2BridgingComponent] HOST_ADDED event! host=00:00:00:00:00:1B/None, deviceId=device:leaf1, port=4 INFO [L2BridgingComponent] Adding L2 unicast rule on device:leaf1 for host 00:00:00:00:00:1B/None (port 4). ``` ### 4. Visualize hosts on the ONOS CLI and web UI You should see exactly two hosts in the ONOS CLI (`make onos-cli`): ``` onos> hosts -s id=00:00:00:00:00:1A/None, mac=00:00:00:00:00:1A, locations=[device:leaf1/3], vlan=None, ip(s)=[2001:1:1::a] id=00:00:00:00:00:1B/None, mac=00:00:00:00:00:1B, locations=[device:leaf1/4], vlan=None, ip(s)=[2001:1:1::b] ``` Using the ONF Cloud Tutorial Portal, access the ONOS UI. If you are running the VM on your laptop, open up a browser (e.g. Firefox) to . To toggle showing hosts on the topology view, press `H` on your keyboard. ## Congratulations! You have completed the fourth exercise! 
[PipeconfLoader.java]: app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/PipeconfLoader.java [InterpreterImpl.java]: app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/InterpreterImpl.java [PipelinerImpl.java]: app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/PipelinerImpl.java [L2BridgingComponent.java]: app/src/main/java/org/onosproject/ngsdn/tutorial/L2BridgingComponent.java ================================================ FILE: EXERCISE-5.md ================================================ # Exercise 5: IPv6 Routing In this exercise, you will be modifying the P4 program and ONOS app to add support for IPv6-based (L3) routing between all hosts connected to the fabric, with support for ECMP to balance traffic flows across multiple spines. ## Background ### Requirements At this stage, we want our fabric to behave like a standard IP fabric, with switches functioning as routers. As such, the following requirements should be satisfied: * Leaf interfaces should be assigned an IPv6 address (the gateway address) and a MAC address that we will call `myStationMac`; * Leaf switches should be able to handle NDP Neighbor Solicitation (NS) messages -- sent by hosts to resolve the MAC address associated with the switch interface/gateway IPv6 addresses -- by replying with NDP Neighbor Advertisements (NA) advertising their `myStationMac` address; * Packets received with Ethernet destination `myStationMac` should be processed through the routing tables (traffic that is not dropped can then be processed through the bridging tables); * When routing, the P4 program should look at the IPv6 destination address. If a matching entry is found, the packet should be forwarded to a given next hop and the packet's Ethernet addresses should be modified accordingly (source set to `myStationMac` and destination to the next hop's); * When routing packets to a different leaf across the spines, leaf switches should be able to use ECMP to distribute traffic.
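The per-packet transformation implied by these requirements can be sketched in plain Python. The names below are illustrative only; the real logic lives in the P4 tables you are about to write:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Pkt:
    eth_src: str
    eth_dst: str
    hop_limit: int

def route(pkt: Pkt, my_station_mac: str, next_hop_mac: str) -> Optional[Pkt]:
    """Sketch of the L3 rewrite performed by a leaf when routing a packet."""
    if pkt.eth_dst != my_station_mac:
        return None  # not addressed to the router: skip routing ("My Station" check)
    pkt.eth_src = my_station_mac   # source becomes the router MAC
    pkt.eth_dst = next_hop_mac     # destination becomes the next hop MAC
    pkt.hop_limit -= 1
    return pkt if pkt.hop_limit > 0 else None  # drop on expired hop limit
```

The mapping from next hop MAC to output port is then resolved by the bridging tables, which is why the L3 rewrite only touches Ethernet addresses and the hop limit.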
### Configuration The [netcfg.json](mininet/netcfg.json) file includes a special configuration for each device named `fabricDeviceConfig`. This block defines 3 values: * `myStationMac`: MAC address associated with each device, i.e., the router MAC address; * `mySid`: the SRv6 segment ID of the device, used in the next exercise; * `isSpine`: a boolean flag, indicating whether the device should be considered as a spine switch. Moreover, the [netcfg.json](mininet/netcfg.json) file also includes a list of interfaces with an IPv6 prefix assigned to them (look under the `ports` section of the file). The same IPv6 addresses are used in the Mininet topology script [topo-v6.py](mininet/topo-v6.py). ### Try pinging hosts in different subnets Similarly to the previous exercise, let's start by using Mininet to verify that pinging between hosts on different subnets does NOT work. It will be your task to make it work. On the Mininet CLI: ``` mininet> h2 ping h3 PING 2001:2:3::1(2001:2:3::1) 56 data bytes From 2001:1:2::1 icmp_seq=1 Destination unreachable: Address unreachable From 2001:1:2::1 icmp_seq=2 Destination unreachable: Address unreachable From 2001:1:2::1 icmp_seq=3 Destination unreachable: Address unreachable ... ``` If you check the ONOS log, you will notice that `h2` has been discovered: ``` INFO [L2BridgingComponent] HOST_ADDED event! host=00:00:00:00:00:20/None, deviceId=device:leaf1, port=6 INFO [L2BridgingComponent] Adding L2 unicast rule on device:leaf1 for host 00:00:00:00:00:20/None (port 6)... ``` That's because `h2` sends NDP NS messages to resolve the MAC address of its gateway (`2001:1:2::ff` as configured in [topo-v6.py](mininet/topo-v6.py)). We can check the IPv6 neighbor table for `h2` to see that the resolution has failed: ``` mininet> h2 ip -6 n 2001:1:2::ff dev h2-eth0 FAILED ``` ## 1. Modify P4 program The first step will be to add new tables to `main.p4`.
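Before diving into the P4 changes, it may help to see the shape of the configuration the app components read. Below is a sketch of extracting the `fabricDeviceConfig` block from a netcfg-style JSON document; the values shown are illustrative, not copied from the tutorial's netcfg.json:

```python
import json

# Hypothetical excerpt mirroring the shape of mininet/netcfg.json.
NETCFG = json.loads("""
{
  "devices": {
    "device:leaf1": {
      "fabricDeviceConfig": {
        "myStationMac": "00:aa:00:00:00:01",
        "mySid": "3:101:2::",
        "isSpine": false
      }
    }
  }
}
""")

def fabric_config(netcfg: dict, device_id: str) -> dict:
    """Return the fabricDeviceConfig block for the given device."""
    return netcfg["devices"][device_id]["fabricDeviceConfig"]

cfg = fabric_config(NETCFG, "device:leaf1")
print(cfg["myStationMac"], cfg["isSpine"])
```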
#### P4-based generation of NDP messages We already provide ways to handle NDP NS and NA exchanged by hosts connected to the same subnet (see `l2_ternary_table`). For hosts, the Linux networking stack takes care of generating an NDP NA reply. However, for the switches in our fabric, there's no traditional networking stack associated with them. There are multiple solutions to this problem: * we can configure hosts with static NDP entries, removing the need for the switch to reply to NDP NS packets; * we can intercept NDP NS via packet-in, generate a corresponding NDP NA reply in ONOS, and send it back via packet-out; or * we can instruct the switch to generate NDP NA replies using P4 (i.e., we write P4 code that takes care of replying to NDP requests without any intervention from the control plane). **Note:** The rest of the exercise assumes you will decide to implement the last option. You can decide to go with a different one, but you should keep in mind that there will be less starter code for you to re-use. The idea is simple: NDP NA packets have the same header structure as NDP NS ones. They are both ICMPv6 packets with different header field values, such as a different ICMPv6 type, different Ethernet addresses, etc. A switch that knows the MAC address of a given IPv6 target address found in an NDP NS request can transform the same packet to an NDP NA reply by modifying some of its fields. To implement P4-based generation of NDP NA messages, look in [p4src/snippets.p4](p4src/snippets.p4): we already provide an action named `ndp_ns_to_na` to transform an NDP NS packet into an NDP NA one. Your task is to implement a table that uses such action. This table should define a mapping between the interface IPv6 addresses provided in [netcfg.json](mininet/netcfg.json) and the `myStationMac` associated with each switch (also defined in netcfg.json).
When an NDP NS packet is received, asking to resolve one of such IPv6 addresses, the `ndp_ns_to_na` action should be invoked with the given `myStationMac` as parameter. The ONOS app already provides a component, [NdpReplyComponent.java](app/src/main/java/org/onosproject/ngsdn/tutorial/NdpReplyComponent.java), responsible for inserting entries in this table according to the content of netcfg.json. The component is currently disabled. You will need to enable and modify it in the next steps, but for now, let's focus on the P4 program. #### LPM IPv6 routing table The main table for this exercise will be an L3 table that matches on the destination IPv6 address. You should create a table that performs longest prefix match (LPM) on the destination address and performs the required packet transformations: 1. Replace the source Ethernet address with the destination one, expected to be `myStationMac` (see next section on "My Station" table). 2. Set the destination Ethernet address to the next hop's address (passed as an action argument). 3. Decrement the IPv6 `hop_limit`. This L3 table and action should provide a mapping between a given IPv6 prefix and a next hop MAC address. In our solution (and hence in the PTF starter code and ONOS app), we re-use the L2 table defined in Exercise 2 to provide a mapping between the next hop MAC address and an output port. If you want to apply the same solution, make sure to call the L3 table before the L2 one in the `apply` block. Moreover, we will want to drop the packet when the IPv6 hop limit reaches 0. This can be accomplished by inserting logic in the `apply` block that inspects the field after applying your L3 table. At this point, your pipeline should properly match, transform, and forward IPv6 packets. **Note:** For simplicity, we are using a global routing table. If you would like to segment your routing table into virtual ones (i.e.
using a VRF ID), you can tackle this as extra credit. #### "My Station" table You may realize that at this point the switch will perform IPv6 routing indiscriminately, which is technically incorrect. The switch should only route Ethernet frames that are destined for the router's Ethernet address (`myStationMac`). To address this issue, you will need to create a table that matches the destination Ethernet address and marks the packet for routing if there is a match. We call this the "My Station" table. You are free to use a specific action or metadata to carry this information, or for simplicity, you can use `NoAction` and check for a hit in this table in your `apply` block. Remember to update your `apply` block after creating this table. #### Adding support for ECMP with action selectors The last modification you will make to the pipeline is to add an `action_selector` that hashes traffic between the different possible next hops. In our leaf-spine topology, we have an equal-cost path through each spine for every leaf pair, and we want to be able to take advantage of that. We have already defined the P4 `ecmp_selector` in [p4src/snippets.p4](p4src/snippets.p4), but you will need to add the selector to your L3 table. You will also need to add the selector fields as match keys. For IPv6 traffic, you will need to include the source and destination IPv6 addresses as well as the IPv6 flow label as part of the ECMP hash, but you are free to include other parts of the packet header if you would like. For example, you could include the rest of the 5-tuple (i.e. L4 proto and ports); the L4 ports are parsed into `local_metadata` if you would like to use them. For more details on the required fields for hashing IPv6 traffic, see RFC 6438. You can compile the program using `make p4-build`. Make sure to address any compiler errors before continuing. At this point, our P4 pipeline should be ready for testing. ## 2.
Run PTF tests Tests for the IPv6 routing behavior are located in `ptf/tests/routing.py`. Open that file up and modify wherever requested (look for `TODO EXERCISE 5`). To run all the tests for this exercise: make p4-test TEST=routing This command will run all tests in the `routing` group (i.e. the content of `ptf/tests/routing.py`). To run a specific test case you can use: make p4-test TEST=<test group>.<test name> For example: make p4-test TEST=routing.NdpReplyGenTest #### Check for regressions To make sure the new changes are not breaking other features, run the tests of the previous exercises as well. make p4-test TEST=packetio make p4-test TEST=bridging make p4-test TEST=routing If all tests succeed, congratulations! You can move to the next step. ## 3. Modify ONOS app The last part of the exercise is to update the starter code for the routing components of our ONOS app, located here: * `app/src/main/java/org/onosproject/ngsdn/tutorial/Ipv6RoutingComponent.java` * `app/src/main/java/org/onosproject/ngsdn/tutorial/NdpReplyComponent.java` Open those files and modify wherever requested (look for `TODO EXERCISE 5`). #### Ipv6RoutingComponent.java The starter code already provides an implementation for event listeners and the routing policy (i.e., methods triggered as a consequence of topology events), for example to compute ECMP groups based on the available links between leaves and the spine. You are asked to modify the implementation of four methods. * `setUpMyStationTable()`: to insert flow rules for the "My Station" table; * `createNextHopGroup()`: responsible for creating the ONOS equivalent of a P4Runtime action profile group for the ECMP selector of the routing table; * `createRoutingRule()`: to create a flow rule for the IPv6 routing table; * `createL2NextHopRule()`: to create flow rules mapping next hop MAC addresses (used in the ECMP groups) to output ports. You can find a similar method in the `L2BridgingComponent` (see `learnHost()` method).
This one is called to create L2 rules between switches, e.g. to forward packets between leaves and spines. There's no need to handle L2 rules for hosts since those are inserted by the `L2BridgingComponent`. #### NdpReplyComponent.java This component listens to device events. Each time a new device is added in ONOS, it uses the content of `netcfg.json` to populate the NDP reply table. You are asked to modify the implementation of method `buildNdpReplyFlowRule()`, to insert the name of the table and action to generate NDP replies. #### Enable the routing components Once you are confident your solution to the previous step works, before building and reloading the app, remember to enable the routing-related components by setting the `enabled` flag to `true` at the top of the class definition. For IPv6 routing to work, you should enable the following components: * `Ipv6RoutingComponent.java` * `NdpReplyComponent.java` #### Build and reload the app Use the following command to build and reload your app while ONOS is running: ``` $ make app-build app-reload ``` When building the app, the modified P4 compiler outputs (`bmv2.json` and `p4info.txt`) will be packaged together along with the Java classes. After reloading the app, you should see messages in the ONOS log signaling that a new pipeline configuration has been set and the `Ipv6RoutingComponent` and `NdpReplyComponent` have been activated. Also check the log for potentially harmful messages (`make onos-log`). If needed, take a look at section **Appendix A: Understanding ONOS error logs** at the end of this exercise. ## 4. Test IPv6 routing on Mininet #### Verify ping Type the following commands in the Mininet CLI, in order: ``` mininet> h2 ping h3 mininet> h3 ping h2 PING 2001:1:2::1(2001:1:2::1) 56 data bytes 64 bytes from 2001:1:2::1: icmp_seq=2 ttl=61 time=2.39 ms 64 bytes from 2001:1:2::1: icmp_seq=3 ttl=61 time=2.29 ms 64 bytes from 2001:1:2::1: icmp_seq=4 ttl=61 time=2.71 ms ... 
``` Pinging between `h3` and `h2` should work now. If ping does NOT work, check section **Appendix B: Troubleshooting** at the end of this exercise. The ONOS log should show messages such as: ``` INFO [Ipv6RoutingComponent] HOST_ADDED event! host=00:00:00:00:00:20/None, deviceId=device:leaf1, port=6 INFO [Ipv6RoutingComponent] Adding routes on device:leaf1 for host 00:00:00:00:00:20/None [[2001:1:2::1]] ... INFO [Ipv6RoutingComponent] HOST_ADDED event! host=00:00:00:00:00:30/None, deviceId=device:leaf2, port=3 INFO [Ipv6RoutingComponent] Adding routes on device:leaf2 for host 00:00:00:00:00:30/None [[2001:2:3::1]] ... ``` If you don't see messages about the discovery of `h2` (`00:00:00:00:00:20`), it's because ONOS has already discovered that host when you tried to ping at the beginning of the exercise. **Note:** we need to start the ping first from `h2` and then from `h3` to let ONOS discover the location of both hosts before ping packets can be forwarded. That's because the current implementation requires hosts to generate NDP NS packets to be discovered by ONOS. To avoid having to manually generate NDP NS messages, a possible solution could be: * Configure IPv6 hosts in Mininet to periodically and automatically generate a different type of NDP message, named Router Solicitation (RS). * Insert a flow rule in the ACL table to clone NDP RS packets to the CPU. This would require matching on an ICMPv6 type other than those used for NDP NA and NS. * Modify the `hostprovider` built-in app implementation to learn host location from NDP RS messages (it currently uses only NDP NA and NS). #### Verify P4-based NDP NA generation To verify that the P4-based generation of NDP NA replies by the switch is working, you can check the neighbor table of `h2` or `h3`.
It should show something similar to this: ``` mininet> h3 ip -6 n 2001:2:3::ff dev h3-eth0 lladdr 00:aa:00:00:00:02 router REACHABLE ``` where `2001:2:3::ff` is the IPv6 gateway address defined in `netcfg.json` and `topo-v6.py`, and `00:aa:00:00:00:02` is the `myStationMac` defined for `leaf2` in `netcfg.json`. #### Visualize ECMP using the ONOS web UI To verify that ECMP is working, let's start multiple parallel traffic flows from `h2` to `h3` using iperf. In the Mininet command prompt, type: ``` mininet> h2 iperf -c h3 -u -V -P5 -b1M -t600 -i1 ``` This command starts an iperf client on `h2`, sending UDP packets (`-u`) over IPv6 (`-V`) to `h3` (`-c`). In doing this, we generate 5 distinct flows (`-P5`), each one capped at 1Mbit/s (`-b1M`), running for 10 minutes (`-t600`) and reporting stats every 1 second (`-i1`). Since we are generating UDP traffic, there's no need to start an iperf server on `h3`. Using the ONF Cloud Tutorial Portal, access the ONOS UI. If you are using the tutorial VM, open up a browser (e.g. Firefox) to . When asked, use the username `onos` and password `rocks`. On the same page showing the ONOS topology: * Press `H` on your keyboard to show hosts; * Press `L` to show device labels; * Press `A` multiple times until you see port/link stats, in either packets/second (pps) or bits/second. If you completed the P4 and app implementation correctly, and ECMP is working, you should see traffic being forwarded to both spines as in the screenshot below: ECMP Test ## Congratulations! You have completed the fifth exercise! Now your fabric is capable of forwarding IPv6 traffic between any host. ## Appendix A: Understanding ONOS error logs There are two main types of errors that you might see when reloading the app: 1. Write errors, such as removing a nonexistent entity or inserting one that already exists: ``` WARN [WriteResponseImpl] Unable to DELETE PRE entry on device...: NOT_FOUND Multicast group does not exist ...
WARN [WriteResponseImpl] Unable to INSERT table entry on device...: ALREADY_EXIST Match entry exists, use MODIFY if you wish to change action ... ``` These are usually transient errors and **you should not worry about them**. They describe a temporary inconsistency of the ONOS-internal device state, which should be soon recovered by a periodic reconciliation mechanism. The ONOS core periodically polls the device state to make sure its internal representation is accurate, while writing any pending modifications to the device, solving these errors. Otherwise, if you see them appearing periodically (every 3-4 seconds), it means the reconciliation process is not working and something else is wrong. Try re-loading the app (`make app-reload`); if that doesn't resolve the warnings, check with the instructors. 2. Translation errors, signifying that ONOS is not able to translate the flow rules (or groups) generated by apps, to a representation that is compatible with your P4Info. For example: ``` WARN [P4RuntimeFlowRuleProgrammable] Unable to translate flow rule for pipeconf 'org.onosproject.ngsdn-tutorial':... ``` **Carefully read the error message and make changes to the app as needed.** Chances are that you are using a table, match field, or action name that does not exist in your P4Info. Check your P4Info file, modify, and reload the app (`make app-build app-reload`). ## Appendix B: Troubleshooting If ping is not working, here are few steps you can take to troubleshoot your network: 1. **Check that all flow rules and groups have been written successfully to the device.** Using ONOS CLI commands such as `flows -s any device:leaf1` and `groups any device:leaf1`, verify that all flows and groups are in state `ADDED`. If you see other states such as `PENDING_ADD`, check the ONOS log for possible errors with writing those entries to the device. You can also use the ONOS web UI to check flows and group state. 2. Check `netcfg` in ONOS CLI. 
If network config is missing, run `make netcfg` again to configure devices and hosts. 3. **Use table counters to verify that tables are being hit as expected.** If you don't already have direct counters defined for your table(s), modify the P4 program to add some, build and reload the app (`make app-build app-reload`). ONOS should automatically detect that and poll counters every 3-4 seconds (the same period for the reconciliation process). To check their values, you can either use the ONOS CLI (`flows -s any device:leaf1`) or the web UI. 4. **Double check the PTF tests** and make sure you are creating similar flow rules in the `Ipv6RoutingComponent.java` and `NdpReplyComponent.java`. Do you notice any difference? 5. **Look at the BMv2 logs for possible errors.** Check file `/tmp/leaf1/stratum_bmv2.log`. 6. If you get here and it is still not working, **reach out to one of the instructors for assistance.** ================================================ FILE: EXERCISE-6.md ================================================ # Exercise 6: Segment Routing v6 (SRv6) In this exercise, you will be implementing a simplified version of segment routing, a source routing method that steers traffic through a specified set of nodes. ## Background This exercise is based on an IETF draft specification called SRv6, which uses IPv6 packets to frame traffic that follows an SRv6 policy. SRv6 packets use the IPv6 routing header, and they can either encapsulate IPv6 (or IPv4) packets entirely or they can just inject an IPv6 routing header into an existing IPv6 packet.
The IPv6 routing header looks as follows:

```
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|  Next Header  |  Hdr Ext Len  | Routing Type  | Segments Left |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|  Last Entry   |     Flags     |              Tag              |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
|            Segment List[0] (128 bits IPv6 address)            |
|                                                               |
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
|                              ...                              |
|                                                               |
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                                                               |
|            Segment List[n] (128 bits IPv6 address)            |
|                                                               |
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
```

The **Next Header** field is the type of either the next IPv6 header or the payload. For SRv6, the **Routing Type** is 4. **Segments Left** is the index of the current segment in the segment list. In properly formed SRv6 packets, the IPv6 destination address equals `Segment List[Segments Left]`. In our exercise, the original IPv6 destination address should be `Segment List[0]`, so that traffic is eventually routed to the correct destination. **Last Entry** is the index of the last entry in the segment list; note that this means it is one less than the length of the list (in the example above, the list has `n+1` entries and Last Entry is `n`). Finally, the **Segment List** is a reverse-sorted list of the IPv6 addresses to be traversed by a specific SRv6 policy. The last entry in the list is the first segment of the SRv6 policy. The list is not typically mutated; the entire header is inserted or removed as a whole. To keep things simple, and because we are already using IPv6, your solution will just add the routing header to the existing IPv6 packet. (We won't be encapsulating entire packets inside new IPv6 packets with an SRv6 policy, although the spec allows it and there are valid use cases for doing so.)
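The invariants just described (Last Entry, Segments Left, and the IPv6 destination address) can be captured in a short Python model. This is illustrative only; the segment values are borrowed from this tutorial's topology:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Srv6Header:
    segment_list: List[str]  # segment_list[0] is the final destination
    segments_left: int

    @property
    def last_entry(self) -> int:
        # Last Entry is one less than the length of the segment list.
        return len(self.segment_list) - 1

    def expected_ipv6_dst(self) -> str:
        # In a well-formed packet, the IPv6 destination address
        # equals Segment List[Segments Left].
        return self.segment_list[self.segments_left]

# A freshly inserted policy points at the first segment, i.e. the LAST
# list entry, since the list is stored in reverse order.
srh = Srv6Header(["2001:1:2::1", "3:101:2::", "3:201:2::"], segments_left=2)
print(srh.last_entry, srh.expected_ipv6_dst())  # 2 3:201:2::
```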
As you may have already noticed, SRv6 uses IPv6 addresses to identify segments in a policy. While the format of the addresses is the same as IPv6, the address space is typically different from the space used for the switches' internal IPv6 addresses. The format of the address also differs. A typical IPv6 unicast address is broken into a network prefix and host identifier pieces, and a subnet mask is used to delineate the boundary between the two. A typical SRv6 segment identifier (SID) is broken into a locator, a function identifier, and optionally, function arguments. The locator must be routable, which enables both SRv6-enabled and SRv6-unaware nodes to participate in forwarding. HINT: Due to optional arguments, longest prefix match on the 128-bit SID is preferred to exact match. There are three types of nodes of interest in a segment routed network: 1. Source Node - the node (either host or switch) that injects the SRv6 policy. 2. Transit Node - a node that forwards an SRv6 packet, but is not the destination for the traffic. 3. Endpoint Node - a participating waypoint in an SRv6 policy that will modify the SRv6 header and perform a specified function. In our implementation, we simplify these types into two roles: * Endpoint Node - for traffic to the switch's SID, update the SRv6 header (decrement segments left), set the IPv6 destination address to the next segment, and forward the packets ("End" behavior). For simplicity, we will always remove the SRv6 header on the penultimate segment in the policy (called Penultimate Segment Pop or PSP in the spec). * Transit Node - by default, forward traffic normally if it is not destined for the switch's IP address or its SID ("T" behavior). Allow the control plane to add rules to inject SRv6 policy for traffic destined to specific IPv6 addresses ("T.Insert" behavior). For more details, you can read the draft specification here: https://tools.ietf.org/id/draft-filsfils-spring-srv6-network-programming-06.html ## 1.
Adding tables for SRv6 We have already defined the SRv6 header as well as included the logic for parsing the header in `main.p4`. The next step is to add two tables, one for each of the two roles specified above. In addition to the tables, you will also need to write the action for the endpoint node table (otherwise called the "My SID" table); in `snippets.p4`, we have provided the `t_insert` actions for policies of length 2 and 3, which should be sufficient to get you started. Once you've finished that, you will need to apply the tables in the `apply` block at the bottom of your `IngressPipeImpl` section. You will want to apply the tables after checking that the L2 destination address matches the switch's, and before the L3 table is applied (because you'll want to use the same routing entries to forward traffic after the SRv6 policy is applied). You can also apply the PSP behavior as part of your `apply` logic because we will always be applying it if we are the penultimate SID. ## 2. Testing the pipeline with Packet Test Framework (PTF) In this exercise, you will be modifying tests in [srv6.py](ptf/tests/srv6.py) to verify the SRv6 behavior of the pipeline. There are four tests in `srv6.py`: * Srv6InsertTest: Tests SRv6 insert behavior, where the switch receives an IPv6 packet and inserts the SRv6 header. * Srv6TransitTest: Tests SRv6 transit behavior, where the switch ignores the SRv6 header and routes the packet normally, without applying any SRv6-related modifications. * Srv6EndTest: Tests SRv6 end behavior (without pop), where the switch forwards the packet to the next SID found in the SRv6 header. * Srv6EndPspTest: Tests SRv6 End with Penultimate Segment Pop (PSP) behavior, where the switch SID is the penultimate in the SID list and the switch removes the SRv6 header before routing the packet to its final destination (last SID in the list). You should be able to find `TODO EXERCISE 6` in [srv6.py](ptf/tests/srv6.py) with some hints.
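The "End" behavior (with the always-on PSP simplification described above) can be summarized with a small Python sketch. This is a hypothetical model of the forwarding logic, not the P4 code itself:

```python
from typing import List, Optional, Tuple

def srv6_end(segment_list: List[str], segments_left: int
             ) -> Tuple[str, Optional[int]]:
    """Sketch of SRv6 "End" behavior with Penultimate Segment Pop (PSP).

    Returns the new IPv6 destination address and the new Segments Left
    value, or None for Segments Left when the SRv6 header is popped.
    """
    assert segments_left > 0, "already at the last segment"
    segments_left -= 1                      # End: decrement Segments Left
    new_dst = segment_list[segments_left]   # next segment becomes IPv6 dst
    if segments_left == 0:
        return new_dst, None                # PSP: remove the SRv6 header
    return new_dst, segments_left
```

For example, with the reverse-ordered list `["2001:1:2::1", "3:101:2::", "3:201:2::"]`, spine1 (at Segments Left 2) forwards towards `3:101:2::`, and leaf1 (at Segments Left 1) pops the header and routes to `2001:1:2::1`.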
To run all the tests for this exercise: make p4-test TEST=srv6 This command will run all tests in the `srv6` group (i.e. the content of `ptf/tests/srv6.py`). To run a specific test case you can use: make p4-test TEST=<test group>.<test name> For example: make p4-test TEST=srv6.Srv6InsertTest **Check for regressions** At this point, our P4 program should be complete. We can check to make sure that we haven't broken anything from the previous exercises by running all tests from the `ptf/tests` directory: ``` $ make p4-test ``` Now we have shown that we can install basic rules and pass SRv6 traffic using BMv2. ## 3. Building the ONOS App For the ONOS application, you will need to update `Srv6Component.java` in the following ways: * Complete the `setUpMySidTable` method, which will insert an entry into the My SID table that matches the specified device's SID and performs the `end` action. This function is called whenever a new device is connected. * Complete the `insertSrv6InsertRule` function, which creates a `t_insert` rule for the provided SRv6 policy. This function is called by the `srv6-insert` CLI command. * Complete the `clearSrv6InsertRules` method, which is called by the `srv6-clear` CLI command. Once you are finished, you should rebuild and reload your app. This will also rebuild and republish any changes to your P4 code and the ONOS pipeconf. Don't forget to enable your Srv6Component at the top of the file. As with previous exercises, you can use the following command to build and reload the app: ``` $ make app-build app-reload ``` ## 4. Inserting SRv6 policies The next step is to show that traffic can be steered using an SRv6 policy. You should start a ping between `h2` and `h4`: ``` mininet> h2 ping h4 ``` Using the ONOS UI, you can observe which paths are being used for the ping packets.
- Press `a` until you see "Port stats (packets/second)"
- Press `l` to show device labels

(Figure: Ping Test)

Once you determine which of the spines your packets are being hashed to (it could be both, with requests and replies taking different paths), you should insert a set of SRv6 policies that sends the ping packets via the other spine (or the spine of your choice).

To add new SRv6 policies, use the `srv6-insert` command:

```
onos> srv6-insert <device ID> <segment list>
```

Note: In our topology, the SID for `spine1` is `3:201:2::` and the SID for `spine2` is `3:202:2::`.

For example, to add a policy that forwards traffic between `h2` and `h4` through `spine1` and `leaf2`, you can use the following commands:

* Insert the SRv6 policy from `h2` to `h4` on `leaf1` (through `spine1` and `leaf2`):

```
onos> srv6-insert device:leaf1 3:201:2:: 3:102:2:: 2001:2:4::1
Installing path on device device:leaf1: 3:201:2::, 3:102:2::, 2001:2:4::1
```

* Insert the SRv6 policy from `h4` to `h2` on `leaf2` (through `spine1` and `leaf1`):

```
onos> srv6-insert device:leaf2 3:201:2:: 3:101:2:: 2001:1:2::1
Installing path on device device:leaf2: 3:201:2::, 3:101:2::, 2001:1:2::1
```

These commands will match on traffic destined to the last segment on the specified device (e.g. match `2001:2:4::1` on `leaf1`). As extra credit, you can update the command to allow for more specific match criteria.

You can confirm that your rule has been added using a variant of the following: (HINT: make sure to update the `tableId` to match the one in your P4 program.)
```
onos> flows any device:leaf1 | grep tableId=IngressPipeImpl.srv6_transit
    id=c000006d73f05e, state=ADDED, bytes=0, packets=0, duration=871, liveType=UNKNOWN,
    priority=10, tableId=IngressPipeImpl.srv6_transit, appId=org.p4.srv6-tutorial,
    selector=[hdr.ipv6.dst_addr=0x20010001000400000000000000000001/128],
    treatment=DefaultTrafficTreatment{immediate=[IngressPipeImpl.srv6_t_insert_3(
    s3=0x20010001000400000000000000000001, s1=0x30201000200000000000000000000,
    s2=0x30102000200000000000000000000)], deferred=[], transition=None, meter=[],
    cleared=false, StatTrigger=null, metadata=null}
```

You should now return to the ONOS UI to confirm that traffic is flowing through the specified spine.

(Figure: SRv6 Ping Test)

## 5. Debugging and Clean Up

If you need to remove your SRv6 policies, you can use the `srv6-clear` command to clear all SRv6 policies from a specific device. For example, to remove the flows from `leaf1`:

```
onos> srv6-clear device:leaf1
```

To verify that the device inserts the correct SRv6 header, you can use **Wireshark** to capture packets from each device port. For example, if you want to capture packets from port 1 of `spine1`, capture packets on interface `spine1-eth1`.

NOTE: `spine1-eth1` is connected to `leaf1`, and `spine1-eth2` is connected to `leaf2`; `spine2` follows the same pattern.

## Congratulations!

You have completed the sixth exercise! Your fabric is now capable of steering traffic using SRv6.

================================================
FILE: EXERCISE-7.md
================================================

# Exercise 7: Trellis Basics

The goal of this exercise is to learn how to set up and configure an emulated Trellis environment with a simple 2x2 topology.

## Background

Trellis is a set of built-in ONOS applications that provide the control plane for an IP fabric based on MPLS segment-routing.
It is similar in purpose to the app we have been developing in the previous exercises, but instead of using IPv6-based routing or SRv6, Trellis uses MPLS labels to forward packets between leaf switches and across the spines. Trellis apps are deployed in Tier-1 carrier networks, and for this reason they are deemed production-grade. These apps provide an extensive feature set, such as:

* Carrier-oriented networking capabilities: from basic L2 and L3 forwarding to multicast, QinQ, pseudo-wires, and integration with external control planes such as BGP, OSPF, and DHCP relay.
* Fault-tolerance and high availability: Trellis is designed to take full advantage of the ONOS distributed core, e.g., to withstand controller failures. It also provides dataplane-level resiliency against link failures and switch failures (with paired leaves and dual-homed hosts). See figure below.
* Single-pane-of-glass monitoring and troubleshooting, with dedicated tools such as T3.

![trellis-features](img/trellis-features.png)

Trellis is made of several apps running on top of ONOS. The main one is `segmentrouting`, and its implementation can be found in the ONOS source tree: [onos/apps/segmentrouting] (open on GitHub)

`segmentrouting` abstracts the leaf and spine switches to make the fabric appear as "one big IP router", such that operators can program it using APIs similar to those of a traditional router (e.g. to configure VLANs, subnets, routes, etc.). The app listens to operator-provided configuration, as well as topology events, to program the switches with the necessary forwarding rules. Because of this "one big IP router" abstraction, operators can independently scale the topology to add more capacity or ports by adding more leaves and spines.

`segmentrouting` and other Trellis apps use the ONOS FlowObjective API, which allows them to be pipeline-agnostic.
As a matter of fact, Trellis was initially designed to work with fixed-function switches exposing an OpenFlow agent (such as Broadcom Tomahawk, Trident2, and Qumran via the OF-DPA pipeline). In recent years, however, support for P4-programmable switches was enabled without changing the Trellis apps, by providing a special ONOS pipeconf that brings in a P4 program complemented by a set of drivers responsible, among other things, for translating flow objectives into entries for the P4 program-specific tables.

This P4 program is named `fabric.p4`. Its implementation, along with the corresponding pipeconf drivers, can be found in the ONOS source tree: [onos/pipelines/fabric] (open on GitHub)

This pipeconf currently works on the `stratum_bmv2` software switch as well as on Intel Barefoot Tofino-based switches (the [fabric-tofino] project provides instructions and scripts to create a Tofino-enabled pipeconf). We will come back to the details of `fabric.p4` in the next lab; for now, let's keep in mind that instead of building our own custom pipeconf, we will use one provided with ONOS.

The goal of the exercise is to learn the Trellis basics by writing a configuration in the form of a netcfg JSON file to set up bridging and IPv4 routing of traffic between hosts.

For a gentle overview of Trellis, please check the online book "Software-Defined Networks: A Systems Approach". The official Trellis documentation is also available online.

### Topology

We will use a topology similar to previous exercises; however, instead of IPv6, we will use IPv4 hosts. The topology file is located under [mininet/topo-v4.py][topo-v4.py]. While the Trellis apps support IPv6, the P4 program does not, yet. Development of IPv6 support in `fabric.p4` is work in progress.
![topo-v4](img/topo-v4.png)

Exactly like in previous exercises, the Mininet script [topo-v4.py] used here defines 4 IPv4 subnets:

* `172.16.1.0/24` with 3 hosts connected to `leaf1` (`h1a`, `h1b`, and `h1c`)
* `172.16.2.0/24` with 1 host connected to `leaf1` (`h2`)
* `172.16.3.0/24` with 1 host connected to `leaf2` (`h3`)
* `172.16.4.0/24` with 1 host connected to `leaf2` (`h4`)

### VLAN tagged vs. untagged ports

As in a traditional router, different subnets are associated with different VLANs. For this reason, Trellis allows configuring ports with different VLANs, either untagged or tagged.

An **untagged** port expects packets to be received and sent **without** a VLAN tag, but internally the switch processes all packets as belonging to a given pre-configured VLAN ID. Similarly, when transmitting packets, the VLAN tag is removed.

For **tagged** ports, packets are expected to be received **with** a VLAN tag whose ID belongs to a pre-configured set of known ones. Packets received untagged, or with an unknown VLAN ID, are dropped.

In our topology, we want the following VLAN configuration:

* `leaf1` ports `3` and `4`: VLAN `100` untagged (hosts `h1a` and `h1b`)
* `leaf1` port `5`: VLAN `100` tagged (`h1c`)
* `leaf1` port `6`: VLAN `200` tagged (`h2`)
* `leaf2` port `3`: VLAN `300` tagged (`h3`)
* `leaf2` port `4`: VLAN `400` untagged (`h4`)

In the Mininet script [topo-v4.py], we use different host Python classes to create untagged and tagged hosts.
For example, for `h1a`, attached to untagged port `leaf1-3`, we use the `IPv4Host` class:

```
# Excerpt from mininet/topo-v4.py
h1a = self.addHost('h1a', cls=IPv4Host, mac="00:00:00:00:00:1A",
                   ip='172.16.1.1/24', gw='172.16.1.254')
```

For `h2`, which is instead attached to tagged port `leaf1-6`, we use the `TaggedIPv4Host` class:

```
h2 = self.addHost('h2', cls=TaggedIPv4Host, mac="00:00:00:00:00:20",
                  ip='172.16.2.1/24', gw='172.16.2.254', vlan=200)
```

In the same Python file, you can find the implementation of both classes. For `TaggedIPv4Host`, we use standard Linux commands to create a VLAN tagged interface.

### Configuration via netcfg

The JSON file in [mininet/netcfg-sr.json][netcfg-sr.json] includes the necessary configuration for ONOS and the Trellis apps to program the switches to forward traffic between the hosts of the topology described above.

**NOTE**: this is a similar but different file than the one used in previous exercises. Notice the `-sr` suffix, where `sr` stands for `segmentrouting`, as this file contains the necessary configuration for that app to work.

Take a look at both the old file ([netcfg.json]) and the new one ([netcfg-sr.json]): can you spot the differences? To help, try answering the following questions:

* What is the pipeconf ID used for all 4 switches? Which pipeconf ID did we use before? Why is it different?
* In the new file, each device has a `"segmentrouting"` config block (JSON subtree). Do you see any similarities with the previous file and the `"fabricDeviceConfig"` block?
* How come all `"fabricDeviceConfig"` blocks are gone in the new file?
* Look at the `"interfaces"` config blocks: what has changed w.r.t. the old file?
* In the new file, why do the untagged interfaces have only one VLAN ID value, while the tagged ones can take many (JSON array)?
* Is the `interfaces` block provided for all host-facing ports? Which ports are missing, and which hosts are attached to those ports?

## 1. Restart ONOS and Mininet with the IPv4 topology

Since we want to use a new topology with IPv4 hosts, we need to reset the current environment:

```
$ make reset
```

This command will stop ONOS and Mininet and remove any state associated with them.

Re-start ONOS and Mininet, this time with the new IPv4 topology:

**IMPORTANT:** please notice the `-v4` suffix!

```
$ make start-v4
```

Wait about 1 minute before proceeding with the next steps. This will give ONOS time to start all of its subsystems.

## 2. Load fabric pipeconf and segmentrouting

Differently from previous exercises, instead of building and installing our own pipeconf and app, here we use built-in ones. Open up the ONOS CLI (`make onos-cli`) and activate the following apps:

```
onos> app activate fabric
onos> app activate segmentrouting
```

**NOTE:** The full IDs of the two apps are `org.onosproject.pipelines.fabric` and `org.onosproject.segmentrouting`, respectively. For convenience, when activating built-in apps using the ONOS CLI, you can specify just the last piece of the full ID (after the last dot).

**NOTE 2:** The `fabric` app has the minimal purpose of registering pipeconfs in the system. Differently from `segmentrouting`, even if we call them both apps, `fabric` does not interact with the network in any way.
#### Verify apps

Verify that all apps have been activated successfully:

```
onos> apps -s -a
*  18 org.onosproject.drivers               2.2.2  Default Drivers
*  37 org.onosproject.protocols.grpc        2.2.2  gRPC Protocol Subsystem
*  38 org.onosproject.protocols.gnmi        2.2.2  gNMI Protocol Subsystem
*  39 org.onosproject.generaldeviceprovider 2.2.2  General Device Provider
*  40 org.onosproject.protocols.gnoi        2.2.2  gNOI Protocol Subsystem
*  41 org.onosproject.drivers.gnoi          2.2.2  gNOI Drivers
*  42 org.onosproject.route-service         2.2.2  Route Service Server
*  43 org.onosproject.mcast                 2.2.2  Multicast traffic control
*  44 org.onosproject.portloadbalancer      2.2.2  Port Load Balance Service
*  45 org.onosproject.segmentrouting        2.2.2  Segment Routing
*  53 org.onosproject.hostprovider          2.2.2  Host Location Provider
*  54 org.onosproject.lldpprovider          2.2.2  LLDP Link Provider
*  64 org.onosproject.protocols.p4runtime   2.2.2  P4Runtime Protocol Subsystem
*  65 org.onosproject.p4runtime             2.2.2  P4Runtime Provider
*  99 org.onosproject.drivers.gnmi          2.2.2  gNMI Drivers
* 100 org.onosproject.drivers.p4runtime     2.2.2  P4Runtime Drivers
* 101 org.onosproject.pipelines.basic       2.2.2  Basic Pipelines
* 102 org.onosproject.drivers.stratum       2.2.2  Stratum Drivers
* 103 org.onosproject.drivers.bmv2          2.2.2  BMv2 Drivers
* 111 org.onosproject.pipelines.fabric      2.2.2  Fabric Pipeline
* 164 org.onosproject.gui2                  2.2.2  ONOS GUI2
```

Verify that you have the above apps active in your ONOS instance.

If you are wondering why so many apps, remember from EXERCISE 3 that the ONOS container in [docker-compose.yml] is configured to pass the environment variable `ONOS_APPS`, which defines the built-in apps to load during startup. In our case this variable has value:

```
ONOS_APPS=gui2,drivers.bmv2,lldpprovider,hostprovider
```

Moreover, `segmentrouting` requires other apps as dependencies, such as `route-service`, `mcast`, and `portloadbalancer`. The combination of all these apps (and others that we do not need in this exercise) is what makes Trellis.
#### Verify pipeconfs

Verify that the `fabric` pipeconfs have been registered successfully:

```
onos> pipeconfs
id=org.onosproject.pipelines.fabric-full, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner, IntProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]
id=org.onosproject.pipelines.int, behaviors=[PiPipelineInterpreter, Pipeliner, PortStatisticsDiscovery, IntProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON]
id=org.onosproject.pipelines.fabric-spgw-int, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner, IntProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]
id=org.onosproject.pipelines.fabric, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]
id=org.onosproject.pipelines.fabric-bng, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner, BngProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]
id=org.onosproject.pipelines.fabric-spgw, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]
id=org.onosproject.pipelines.fabric-int, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner, IntProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]
id=org.onosproject.pipelines.basic, behaviors=[PiPipelineInterpreter, Pipeliner, PortStatisticsDiscovery], extensions=[P4_INFO_TEXT, BMV2_JSON]
```

Wondering why so many pipeconfs? `fabric.p4` comes in different "profiles", used to enable different dataplane features in the pipeline. We'll come back to the differences between profiles in the next exercise; for now, let's make sure the basic one, `org.onosproject.pipelines.fabric`, is loaded. This is the one we need to program all four switches, as specified in [netcfg-sr.json].
#### Increase reconciliation frequency (optional, but recommended)

Run the following commands in the ONOS CLI:

```
onos> cfg set org.onosproject.net.flow.impl.FlowRuleManager fallbackFlowPollFrequency 4
onos> cfg set org.onosproject.net.group.impl.GroupManager fallbackGroupPollFrequency 3
```

These commands tell the ONOS core to modify the period (in seconds) between reconciliation checks. Reconciliation is used to verify that switches have the expected forwarding state and to correct any inconsistencies, i.e., by writing any pending flow rules and groups. When running ONOS and the emulated switches on the same machine (especially one with low CPU/memory), it might happen that P4Runtime write requests time out because the system is overloaded. The default reconciliation period is 30 seconds; the above commands set it to 4 seconds for flow rules and 3 seconds for groups.

## 3. Push netcfg-sr.json to ONOS

In a terminal window, type:

**IMPORTANT**: please notice the `-sr` suffix!

```
$ make netcfg-sr
```

As we learned in EXERCISE 3, this command will push [netcfg-sr.json] to ONOS, triggering discovery and configuration of the 4 switches. Moreover, since the file specifies a `segmentrouting` config block for each switch, this will instruct the `segmentrouting` app in ONOS to take control of all of them, i.e., the app will start generating flow objectives that will be translated into flow rules for the `fabric.p4` pipeline.

Check the ONOS log (`make onos-log`). You should see numerous messages from components such as `TopologyHandler`, `LinkHandler`, `SegmentRoutingManager`, etc., signaling that switches have been discovered and programmed. You should also see warning messages such as:

```
[ForwardingObjectiveTranslator] Cannot translate DefaultForwardingObjective: unsupported forwarding function type 'PSEUDO_WIRE'...
```

This is normal, as not all Trellis features are supported in `fabric.p4`. One such feature is [pseudo-wire] (L2 tunneling across the L3 fabric). You can ignore that.
This error is generated by the Pipeliner driver behavior of the `fabric` pipeconf, which recognizes that the given flow objective cannot be translated.

#### Check configuration in ONOS

Verify that all interfaces have been configured successfully:

```
onos> interfaces
leaf1-3: port=device:leaf1/3 ips=[172.16.1.254/24] mac=00:AA:00:00:00:01 vlanUntagged=100
leaf1-4: port=device:leaf1/4 ips=[172.16.1.254/24] mac=00:AA:00:00:00:01 vlanUntagged=100
leaf1-5: port=device:leaf1/5 ips=[172.16.1.254/24] mac=00:AA:00:00:00:01 vlanTagged=[100]
leaf1-6: port=device:leaf1/6 ips=[172.16.2.254/24] mac=00:AA:00:00:00:01 vlanTagged=[200]
```

You should see four interfaces in total (one for each host-facing port of `leaf1`), configured as in the [netcfg-sr.json] file. You will have to add the configuration for `leaf2`'s ports later in this exercise.

A similar output can be obtained using a `segmentrouting`-specific command:

```
onos> sr-device-subnets
device:leaf1
    172.16.1.0/24
    172.16.2.0/24
device:spine1
device:spine2
device:leaf2
```

This command lists all device-subnet mappings known to `segmentrouting`. For a list of other available sr-specific commands, type `sr-` and press tab (as for command auto-completion).
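For reference, the interface entries shown above come from netcfg `"ports"` blocks of roughly the following shape. This is a hand-written approximation, sketched here as a Python dict (field names follow ONOS's interface configuration schema: `ips`, `mac`, `vlan-untagged`, `vlan-tagged`); treat [netcfg-sr.json] as the authoritative syntax:

```python
import json

# Approximate shape of two netcfg "ports" entries: one untagged interface
# and one tagged interface. Verify against mininet/netcfg-sr.json.
ports = {
    "device:leaf1/3": {
        "interfaces": [{
            "name": "leaf1-3",
            "ips": ["172.16.1.254/24"],
            "mac": "00:AA:00:00:00:01",
            "vlan-untagged": 100,           # single VLAN ID
        }]
    },
    "device:leaf1/5": {
        "interfaces": [{
            "name": "leaf1-5",
            "ips": ["172.16.1.254/24"],
            "mac": "00:AA:00:00:00:01",
            "vlan-tagged": [100],           # tagged ports take a list of VLAN IDs
        }]
    },
}
print(json.dumps(ports, indent=2))
```

Note how the untagged entry takes a single VLAN ID while the tagged one takes a JSON array, matching one of the netcfg questions asked earlier.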
Another interesting command is `sr-ecmp-spg`, which lists all computed ECMP shortest-path graphs:

```
onos> sr-ecmp-spg
Root Device: device:leaf1 ECMP Paths:
  Paths from device:leaf1 to device:spine1
    ==  : device:leaf1/1 -> device:spine1/1
  Paths from device:leaf1 to device:spine2
    ==  : device:leaf1/2 -> device:spine2/1
  Paths from device:leaf1 to device:leaf2
    ==  : device:leaf1/2 -> device:spine2/1 : device:spine2/2 -> device:leaf2/2
    ==  : device:leaf1/1 -> device:spine1/1 : device:spine1/2 -> device:leaf2/1
Root Device: device:spine1 ECMP Paths:
  Paths from device:spine1 to device:leaf1
    ==  : device:spine1/1 -> device:leaf1/1
  Paths from device:spine1 to device:spine2
    ==  : device:spine1/2 -> device:leaf2/1 : device:leaf2/2 -> device:spine2/2
    ==  : device:spine1/1 -> device:leaf1/1 : device:leaf1/2 -> device:spine2/1
  Paths from device:spine1 to device:leaf2
    ==  : device:spine1/2 -> device:leaf2/1
Root Device: device:spine2 ECMP Paths:
  Paths from device:spine2 to device:leaf1
    ==  : device:spine2/1 -> device:leaf1/2
  Paths from device:spine2 to device:spine1
    ==  : device:spine2/1 -> device:leaf1/2 : device:leaf1/1 -> device:spine1/1
    ==  : device:spine2/2 -> device:leaf2/2 : device:leaf2/1 -> device:spine1/2
  Paths from device:spine2 to device:leaf2
    ==  : device:spine2/2 -> device:leaf2/2
Root Device: device:leaf2 ECMP Paths:
  Paths from device:leaf2 to device:leaf1
    ==  : device:leaf2/1 -> device:spine1/2 : device:spine1/1 -> device:leaf1/1
    ==  : device:leaf2/2 -> device:spine2/2 : device:spine2/1 -> device:leaf1/2
  Paths from device:leaf2 to device:spine1
    ==  : device:leaf2/1 -> device:spine1/2
  Paths from device:leaf2 to device:spine2
    ==  : device:leaf2/2 -> device:spine2/2
```

These graphs are used by `segmentrouting` to program flow rules and groups (action selectors) in `fabric.p4`, needed to load-balance traffic across multiple spines/paths.

Verify that no hosts have been discovered so far:

```
onos> hosts
```

You should get an empty output.
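To build an intuition for how traffic is spread across these equal-cost paths, here is a toy sketch of ECMP member selection: hash a flow's 5-tuple and take it modulo the number of paths. This is purely illustrative (it is not the actual hash used by `fabric.p4`), but it captures the key property that packets of the same flow always take the same path:

```python
import zlib

def ecmp_select(src_ip, dst_ip, proto, sport, dport, num_paths):
    """Toy ECMP member selection: hash the flow 5-tuple and pick one of
    the equal-cost paths. Different flows (e.g. requests vs. replies,
    which swap src/dst) may hash to different spines."""
    key = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}".encode()
    return zlib.crc32(key) % num_paths

# Two equal-cost paths from leaf1 to leaf2 (one per spine):
path = ecmp_select("172.16.1.1", "172.16.2.1", 6, 1234, 80, num_paths=2)
assert path in (0, 1)
```

This is why, earlier in these exercises, ping requests and replies between two hosts could be observed traversing different spines.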
Verify that all initial flows and groups have been programmed successfully:

```
onos> flows -c added
deviceId=device:leaf1, flowRuleCount=52
deviceId=device:spine1, flowRuleCount=28
deviceId=device:spine2, flowRuleCount=28
deviceId=device:leaf2, flowRuleCount=36
onos> groups -c added
deviceId=device:leaf1, groupCount=5
deviceId=device:leaf2, groupCount=3
deviceId=device:spine1, groupCount=5
deviceId=device:spine2, groupCount=5
```

You should see the same `flowRuleCount` and `groupCount` in your output. To dump the whole set of flow rules and groups, remove the `-c` argument from the command. `added` is used to filter only entities that are known to have been written to the switch (i.e., the P4Runtime Write RPC was successful).

## 4. Connectivity test

#### Same-subnet hosts (bridging)

Open up the Mininet CLI (`make mn-cli`). Start by pinging `h1a` and `h1c`, which are both on the same subnet (VLAN `100`, `172.16.1.0/24`):

```
mininet> h1a ping h1c
PING 172.16.1.3 (172.16.1.3) 56(84) bytes of data.
64 bytes from 172.16.1.3: icmp_seq=1 ttl=63 time=13.7 ms
64 bytes from 172.16.1.3: icmp_seq=2 ttl=63 time=3.63 ms
64 bytes from 172.16.1.3: icmp_seq=3 ttl=63 time=3.52 ms
...
```

Ping should work. Check the ONOS log; you should see output similar to that of exercises 4-5:

```
[HostHandler] Host 00:00:00:00:00:1A/None is added at [device:leaf1/3]
[HostHandler] Populating bridging entry for host 00:00:00:00:00:1A/None at device:leaf1:3
[HostHandler] Populating routing rule for 172.16.1.1 at device:leaf1/3
[HostHandler] Host 00:00:00:00:00:1C/100 is added at [device:leaf1/5]
[HostHandler] Populating bridging entry for host 00:00:00:00:00:1C/100 at device:leaf1:5
[HostHandler] Populating routing rule for 172.16.1.3 at device:leaf1/5
```

That's because `segmentrouting` operates in a way similar to the custom app of the previous exercises. Hosts are discovered by the built-in `hostprovider` service intercepting packets such as ARP or NDP.
For hosts in the same subnet, to support ARP resolution, multicast (ALL) groups are used to replicate ARP requests to all ports belonging to the same VLAN. `segmentrouting` listens for host events: when a new host is discovered, it installs the necessary bridging and routing rules.

#### Hosts on different subnets (routing)

At the Mininet prompt, start a ping to `h2` from any host in the VLAN `100` subnet, for example from `h1a`:

```
mininet> h1a ping h2
```

The **ping should NOT work**, and the reason is that the location of `h2` is not yet known to ONOS. Usually, Trellis is used in networks where hosts use DHCP for addressing. In such a setup, we could use the DHCP relay app in ONOS to learn host locations and addresses when the hosts request an IP address via DHCP. However, in this simpler topology, we need to manually trigger `h2` to generate some packets so it can be discovered by ONOS. When using `segmentrouting`, the easiest way to have ONOS discover a host is to ping the gateway address that we configured in [netcfg-sr.json], or that you can derive from the ONOS CLI (`onos> interfaces`):

```
mininet> h2 ping 172.16.2.254
PING 172.16.2.254 (172.16.2.254) 56(84) bytes of data.
64 bytes from 172.16.2.254: icmp_seq=1 ttl=64 time=28.9 ms
64 bytes from 172.16.2.254: icmp_seq=2 ttl=64 time=12.6 ms
64 bytes from 172.16.2.254: icmp_seq=3 ttl=64 time=15.2 ms
...
```

Ping is working, and ONOS should have discovered `h2` by now. But who is replying to our pings? Check the ARP table of `h2`:

```
mininet> h2 arp
Address                  HWtype  HWaddress           Flags Mask            Iface
172.16.2.254             ether   00:aa:00:00:00:01   C                     h2-eth0.200
```

You should recognize MAC address `00:aa:00:00:00:01` as the one associated with `leaf1` in [netcfg-sr.json]. That's it: the `segmentrouting` app in ONOS is replying to our ICMP echo request (ping) packets! Ping requests are intercepted by means of P4Runtime packet-in, while replies are generated and injected via P4Runtime packet-out.
This is equivalent to pinging the interface of a traditional router.

At this point, ping from `h1a` to `h2` should work:

```
mininet> h1a ping h2
PING 172.16.2.1 (172.16.2.1) 56(84) bytes of data.
64 bytes from 172.16.2.1: icmp_seq=1 ttl=63 time=6.23 ms
64 bytes from 172.16.2.1: icmp_seq=2 ttl=63 time=3.81 ms
64 bytes from 172.16.2.1: icmp_seq=3 ttl=63 time=3.84 ms
...
```

Moreover, you can check that all hosts pinged so far have been discovered by ONOS:

```
onos> hosts -s
id=00:00:00:00:00:1A/None, mac=00:00:00:00:00:1A, locations=[device:leaf1/3], vlan=None, ip(s)=[172.16.1.1]
id=00:00:00:00:00:1C/100, mac=00:00:00:00:00:1C, locations=[device:leaf1/5], vlan=100, ip(s)=[172.16.1.3]
id=00:00:00:00:00:20/200, mac=00:00:00:00:00:20, locations=[device:leaf1/6], vlan=200, ip(s)=[172.16.2.1]
```

## 5. Dump packets to see VLAN tags (optional)

TODO: detailed instructions for this step are still a work in progress.

If you feel adventurous, start a ping between any two hosts and use the tool [util/mn-pcap](util/mn-pcap) to dump packets to a PCAP file. After dumping packets, the tool tries to open the PCAP file in Wireshark (if installed). For example, to dump packets out of the `h2` main interface:

```
$ util/mn-pcap h2
```

## 6. Add missing interface config

Start a ping from `h3` to any other host, for example `h2`:

```
mininet> h3 ping h2
...
```

It should NOT work. Can you explain why? Let's check the ONOS log (`make onos-log`). You should see the following messages:

```
INFO  [HostHandler] Host 00:00:00:00:00:30/None is added at [device:leaf2/3]
INFO  [HostHandler] Populating bridging entry for host 00:00:00:00:00:30/None at device:leaf2:3
WARN  [RoutingRulePopulator] Untagged host 00:00:00:00:00:30/None is not allowed on device:leaf2/3 without untagged or nativevlan config
WARN  [RoutingRulePopulator] Fail to build fwd obj for host 00:00:00:00:00:30/None. Abort.
INFO  [HostHandler] 172.16.3.1 is not included in the subnet config of device:leaf2/3. Ignored.
```
`h3` is discovered because ONOS intercepts the ARP request used to resolve `h3`'s gateway IP address (`172.16.3.254`), but the rest of the programming fails because we have not provided a valid Trellis configuration for the switch port facing `h3` (`leaf2/3`). Indeed, if you look at [netcfg-sr.json], you will notice that the `"ports"` section includes a config block for all `leaf1` host-facing ports, but it does NOT provide any for `leaf2`. As a matter of fact, if you try to start a ping from `h4` (attached to `leaf2`), that should NOT work either.

It is your task to modify [netcfg-sr.json] to add the necessary config blocks to enable connectivity for `h3` and `h4`:

1. Open up [netcfg-sr.json].
2. Look for the `"ports"` section.
3. Provide a config for ports `device:leaf2/3` and `device:leaf2/4`. When doing so, look at the config for other ports as a reference, but make sure to provide the right IPv4 gateway address, subnet, and VLAN configuration as described at the beginning of this document.
4. When done, push the updated file to ONOS using `make netcfg-sr`.
5. Verify that the two new interface configs show up in the ONOS CLI (`onos> interfaces`).
6. If you don't see the new interfaces in the ONOS CLI, check the ONOS log (`make onos-log`) for any possible error, and if needed go back to step 3.
7. If you struggle to make it work, a solution is available in the `solution/mininet` directory.

Let's try to ping the corresponding gateway address from `h3` and `h4`:

```
mininet> h3 ping 172.16.3.254
PING 172.16.3.254 (172.16.3.254) 56(84) bytes of data.
64 bytes from 172.16.3.254: icmp_seq=1 ttl=64 time=66.5 ms
64 bytes from 172.16.3.254: icmp_seq=2 ttl=64 time=19.1 ms
64 bytes from 172.16.3.254: icmp_seq=3 ttl=64 time=27.5 ms
...
mininet> h4 ping 172.16.4.254
PING 172.16.4.254 (172.16.4.254) 56(84) bytes of data.
64 bytes from 172.16.4.254: icmp_seq=1 ttl=64 time=45.2 ms
64 bytes from 172.16.4.254: icmp_seq=2 ttl=64 time=12.7 ms
64 bytes from 172.16.4.254: icmp_seq=3 ttl=64 time=22.0 ms
...
```

At this point, ping between all hosts should work. You can try that using the special `pingall` command in the Mininet CLI:

```
mininet> pingall
*** Ping: testing ping reachability
h1a -> h1b h1c h2 h3 h4
h1b -> h1a h1c h2 h3 h4
h1c -> h1a h1b h2 h3 h4
h2 -> h1a h1b h1c h3 h4
h3 -> h1a h1b h1c h2 h4
h4 -> h1a h1b h1c h2 h3
*** Results: 0% dropped (30/30 received)
```

## Congratulations!

You have completed the seventh exercise! You were able to use ONOS built-in Trellis apps such as `segmentrouting` and the `fabric` pipeconf to configure forwarding in a 2x2 leaf-spine fabric with IPv4 hosts.

[topo-v4.py]: mininet/topo-v4.py
[netcfg-sr.json]: mininet/netcfg-sr.json
[netcfg.json]: mininet/netcfg.json
[docker-compose.yml]: docker-compose.yml
[pseudo-wire]: https://en.wikipedia.org/wiki/Pseudo-wire
[onos/apps/segmentrouting]: https://github.com/opennetworkinglab/onos/tree/2.2.2/apps/segmentrouting
[onos/pipelines/fabric]: https://github.com/opennetworkinglab/onos/tree/2.2.2/pipelines/fabric
[fabric-tofino]: https://github.com/opencord/fabric-tofino

================================================
FILE: EXERCISE-8.md
================================================

# Exercise 8: GTP termination with fabric.p4

The goal of this exercise is to learn how to use Trellis and fabric.p4 to encapsulate and route packets using the GPRS Tunnelling Protocol (GTP) header, as in 4G mobile core networks.

## Background

![Topology GTP](img/topo-gtp.png)

The topology we will use in this exercise ([topo-gtp.py]) is a very simple one, with the usual 2x2 fabric, but only two hosts. We assume our fabric is used as part of a 4G (i.e., LTE) network, connecting base stations to a Packet Data Network (PDN), such as the Internet.
The two hosts in our topology represent:

* An eNodeB, i.e., a base station providing radio connectivity to User Equipments (UEs) such as smartphones;
* A host on the Packet Data Network (PDN), i.e., any host on the Internet.

To provide connectivity between the UEs and the Internet, we need to program our fabric to act as a Serving and Packet Gateway (SPGW). The SPGW is a complex and feature-rich component of the 4G mobile core architecture that is used by the base stations as a gateway to the PDN. Base stations aggregate UE traffic in GTP tunnels (one or more per UE). The SPGW has many functions, among which that of terminating such tunnels: it encapsulates downlink traffic (Internet→UE) in an additional IPv4+UDP+GTP-U header, and it removes that header in the uplink direction (UE→Internet).

In this exercise you will learn how to:

* Program a switch with the `fabric-spgw` profile;
* Use Trellis to route traffic from the PDN to the eNodeB;
* Use the ONOS REST APIs to enable GTP encapsulation of downlink traffic on `leaf1`.

## 1. Start ONOS and Mininet with GTP topology

Since we want to use a different topology, we need to reset the current environment (if currently active):

```
$ make reset
```

This command stops ONOS and Mininet and removes any state associated with them.

Re-start ONOS and Mininet, this time with the new topology:

**IMPORTANT:** notice the `-gtp` suffix!

```
$ make start-gtp
```

Wait about 1 minute before proceeding with the next steps; this gives ONOS time to start all of its subsystems.

## 2. Load additional apps

As in the previous exercises, let's activate the `segmentrouting` and `fabric` pipeconf apps using the ONOS CLI (`make onos-cli`):

```
onos> app activate fabric
onos> app activate segmentrouting
```

Let's also activate a third app named `netcfghostprovider`:

```
onos> app activate netcfghostprovider
```

The `netcfghostprovider` (Network Config Host Provider) is a built-in service similar to the `hostprovider` (Host Location Provider) seen in the previous exercises. It is responsible for registering hosts in the system; however, unlike `hostprovider`, it does not listen for ARP or DHCP packet-ins to automatically discover hosts. Instead, it uses information in the netcfg JSON file, allowing operators to pre-populate the ONOS host store. This is useful for static topologies and to avoid relying on ARP, DHCP, and other host-generated traffic. In this exercise, we use the netcfg JSON to configure the location of the `enodeb` and `pdn` hosts.

#### Verify apps

The complete list of apps should look like the following (21 in total):

```
onos> apps -s -a
*  18 org.onosproject.drivers               2.2.2   Default Drivers
*  37 org.onosproject.protocols.grpc        2.2.2   gRPC Protocol Subsystem
*  38 org.onosproject.protocols.gnmi        2.2.2   gNMI Protocol Subsystem
*  39 org.onosproject.generaldeviceprovider 2.2.2   General Device Provider
*  40 org.onosproject.protocols.gnoi        2.2.2   gNOI Protocol Subsystem
*  41 org.onosproject.drivers.gnoi          2.2.2   gNOI Drivers
*  42 org.onosproject.route-service         2.2.2   Route Service Server
*  43 org.onosproject.mcast                 2.2.2   Multicast traffic control
*  44 org.onosproject.portloadbalancer      2.2.2   Port Load Balance Service
*  45 org.onosproject.segmentrouting        2.2.2   Segment Routing
*  53 org.onosproject.hostprovider          2.2.2   Host Location Provider
*  54 org.onosproject.lldpprovider          2.2.2   LLDP Link Provider
*  64 org.onosproject.protocols.p4runtime   2.2.2   P4Runtime Protocol Subsystem
*  65 org.onosproject.p4runtime             2.2.2   P4Runtime Provider
*  96 org.onosproject.netcfghostprovider    2.2.2   Network Config Host Provider
*  99 org.onosproject.drivers.gnmi          2.2.2   gNMI Drivers
* 100 org.onosproject.drivers.p4runtime     2.2.2   P4Runtime Drivers
* 101 org.onosproject.pipelines.basic       2.2.2   Basic Pipelines
* 102 org.onosproject.drivers.stratum       2.2.2   Stratum Drivers
* 103 org.onosproject.drivers.bmv2          2.2.2   BMv2 Drivers
* 111 org.onosproject.pipelines.fabric      2.2.2   Fabric Pipeline
* 164 org.onosproject.gui2                  2.2.2   ONOS GUI2
```

#### Verify pipeconfs

All `fabric` pipeconf profiles should have been registered by now. Take note of the full ID of the one with SPGW capabilities (`fabric-spgw`); you will need this ID in the next step.

```
onos> pipeconfs
id=org.onosproject.pipelines.fabric-full, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner, IntProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]
id=org.onosproject.pipelines.int, behaviors=[PiPipelineInterpreter, Pipeliner, PortStatisticsDiscovery, IntProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON]
id=org.onosproject.pipelines.fabric-spgw-int, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner, IntProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]
id=org.onosproject.pipelines.fabric, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]
id=org.onosproject.pipelines.fabric-bng, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner, BngProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]
id=org.onosproject.pipelines.fabric-spgw, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]
id=org.onosproject.pipelines.fabric-int, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner, IntProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]
id=org.onosproject.pipelines.basic, behaviors=[PiPipelineInterpreter, Pipeliner, PortStatisticsDiscovery], extensions=[P4_INFO_TEXT, BMV2_JSON]
```
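Before moving on, it helps to know what the encapsulation performed by the `fabric-spgw` pipeline looks like on the wire. The mandatory part of a GTP-U header is only 8 bytes: flags, message type, payload length, and tunnel ID (TEID). A standard-library Python sketch, for illustration only (not part of the tutorial code):

```python
import struct

def gtpu_header(teid: int, payload_len: int) -> bytes:
    """Build a minimal 8-byte GTP-U header (version=1, PT=1, no options)."""
    flags = 0x30      # version=1 (bits 5-7), protocol type=1 (bit 4)
    msg_type = 0xFF   # G-PDU, i.e., encapsulated user payload
    return struct.pack("!BBHI", flags, msg_type, payload_len, teid)

# The SPGW prepends IPv4 + UDP + this header to each downlink packet.
hdr = gtpu_header(teid=0xBEEF, payload_len=100)
assert len(hdr) == 8
assert hdr.hex() == "30ff00640000beef"
```

On downlink, `leaf1` will prepend an outer IPv4 header, a UDP header, and this 8-byte header to each packet destined to a UE; on uplink, it strips them.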
## 3. Modify and push netcfg to use fabric-spgw profile

Up until now, we have used topologies where all switches were configured with the same pipeconf, and so the same P4 program. In this exercise, we want all switches to run the basic `fabric` profile, while `leaf1` should act as the SPGW, and so we want it programmed with the `fabric-spgw` profile.

#### Modify netcfg JSON

Let's modify the netcfg JSON to use the `fabric-spgw` profile on switch `leaf1`.

1. Open up file [netcfg-gtp.json] and look for the configuration of `leaf1` in the `"devices"` block. It should look like this:

    ```
    "devices": {
      "device:leaf1": {
        "basic": {
          "managementAddress": "grpc://mininet:50001?device_id=1",
          "driver": "stratum-bmv2",
          "pipeconf": "org.onosproject.pipelines.fabric",
          ...
    ```

2. Modify the `pipeconf` property to use the full ID of the `fabric-spgw` profile obtained in the previous step.

3. Save the file.

#### Push netcfg to ONOS

On a terminal window, type:

**IMPORTANT**: notice the `-gtp` suffix!

```
$ make netcfg-gtp
```

Use the ONOS CLI (`make onos-cli`) to verify that all 4 switches are connected to ONOS and provisioned with the right pipeconf/profile:

```
onos> devices -s
id=device:leaf1, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.pipelines.fabric-spgw
id=device:leaf2, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.pipelines.fabric
id=device:spine1, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.pipelines.fabric
id=device:spine2, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.pipelines.fabric
```

Make sure `leaf1` has a driver with the `fabric-spgw` pipeconf, while all other switches have the basic `fabric` pipeconf.

##### Troubleshooting

If `leaf1` does NOT have `available=true`, it probably means that you have inserted the wrong pipeconf ID in [netcfg-gtp.json] and ONOS is not able to perform the initial provisioning.
Check the ONOS log (`make onos-log`) for possible errors. Remember from the previous exercise that some errors are expected (e.g., for unsupported `PSEUDO_WIRE` flow objectives). If you see an error like this:

```
ERROR [DeviceTaskExecutor] Unable to complete task CONNECTION_SETUP for device:leaf1: pipeconf ... not registered
```

it means you have to go back to the previous step and correct your pipeconf ID. Modify the [netcfg-gtp.json] file and push it again using `make netcfg-gtp`. Use the ONOS CLI and log to make sure the issue is fixed before proceeding.

#### Check configuration in ONOS

Check the interface configuration. In this topology we want `segmentrouting` to forward traffic based on two IP subnets:

```
onos> interfaces
leaf1-3: port=device:leaf1/3 ips=[10.0.100.254/24] mac=00:AA:00:00:00:01 vlanUntagged=100
leaf2-3: port=device:leaf2/3 ips=[10.0.200.254/24] mac=00:AA:00:00:00:02 vlanUntagged=200
```

Check that the `enodeb` and `pdn` hosts have been discovered:

```
onos> hosts
id=00:00:00:00:00:10/None, mac=00:00:00:00:00:10, locations=[device:leaf1/3], auxLocations=null, vlan=None, ip(s)=[10.0.100.1], ..., name=enodeb, ..., provider=host:org.onosproject.netcfghost, configured=true
id=00:00:00:00:00:20/None, mac=00:00:00:00:00:20, locations=[device:leaf2/3], auxLocations=null, vlan=None, ip(s)=[10.0.200.1], ..., name=pdn, ..., provider=host:org.onosproject.netcfghost, configured=true
```

`provider=host:org.onosproject.netcfghost` and `configured=true` are indications that the host entry was created by `netcfghostprovider`.

## 4. Verify IP connectivity between PDN and eNodeB

Since the two hosts have already been discovered, they should be pingable. Using the Mininet CLI (`make mn-cli`), start a ping between `enodeb` and `pdn`:

```
mininet> enodeb ping pdn
PING 10.0.200.1 (10.0.200.1) 56(84) bytes of data.
64 bytes from 10.0.200.1: icmp_seq=1 ttl=62 time=1053 ms
64 bytes from 10.0.200.1: icmp_seq=2 ttl=62 time=49.0 ms
64 bytes from 10.0.200.1: icmp_seq=3 ttl=62 time=9.63 ms
...
```

## 5. Start PDN and eNodeB processes

We have created two Python scripts to emulate the PDN sending downlink traffic to the UEs, and the eNodeB, expecting to receive the same traffic but GTP-encapsulated.

In a new terminal window, start the [send-udp.py] script on the `pdn` host:

```
$ util/mn-cmd pdn /mininet/send-udp.py
Sending 5 UDP packets per second to 17.0.0.1...
```

[util/mn-cmd] is a convenience script to run commands on Mininet hosts when using multiple terminal windows.

[mininet/send-udp.py][send-udp.py] generates packets with destination IPv4 address `17.0.0.1` (a UE address). In the rest of the exercise we will configure Trellis to route these packets through switch `leaf1`, and we will insert a flow rule in this switch to perform the GTP encapsulation. For now, this traffic will be dropped at `leaf2`.

On a second terminal window, start the [recv-gtp.py] script on the `enodeb` host:

```
$ util/mn-cmd enodeb /mininet/recv-gtp.py
Will print a line for each UDP packet received...
```

[mininet/recv-gtp.py][recv-gtp.py] simply sniffs received packets and prints them on screen, reporting whether each packet is GTP-encapsulated or not. You should see no packets being printed for the moment.

#### Use ONOS UI to visualize traffic

Using the ONF Cloud Tutorial Portal, access the ONOS UI. If you are running a VM on your laptop, open up a browser (e.g., Firefox) to <http://localhost:8181/onos/ui>. When asked, use the username `onos` and password `rocks`.

To show hosts, press H. To show real-time link utilization, press A multiple times until showing "Port stats (packets / second)".

You should see traffic (5 pps) on the link between the `pdn` host and `leaf2`, but not on other links. **Packets are dropped at switch `leaf2`, as this switch does not know how to route packets with IPv4 destination `17.0.0.1`.**

## 6. Install route for UE subnet and debug table entries

Using the ONOS CLI (`make onos-cli`), type the following command to add a route for the UE subnet (`17.0.0.0/24`) with next hop the `enodeb` (`10.0.100.1`):

```
onos> route-add 17.0.0.0/24 10.0.100.1
```

Check that the new route has been successfully added:

```
onos> routes
B: Best route, R: Resolved route
Table: ipv4
B R  Network            Next Hop        Source (Node)
> *  17.0.0.0/24        10.0.100.1      STATIC (-)
   Total: 1
...
```

Since `10.0.100.1` is a host known to ONOS, i.e., we know its location in the topology (see the `*` under the `R` column, which stands for "Resolved route"), `segmentrouting` uses this information to compute paths and install the necessary table entries to forward packets with IPv4 destination address matching `17.0.0.0/24`.

Open up the terminal window with the [recv-gtp.py] script on the `enodeb` host; you should see the following output:

```
[...] 691 bytes: 10.0.200.1 -> 17.0.0.1, is_gtp_encap=False
    Ether / IP / UDP 10.0.200.1:80 > 17.0.0.1:400 / Raw
```

These lines indicate that a packet has been received at the eNodeB. The static route is working! However, there's no trace of GTP headers yet. We'll get back to this soon, but for now, let's take a look at table entries in `fabric.p4`. Feel free to also check on the ONOS UI to see packets forwarded across the spines and delivered to the eNodeB (the next hop of our static route).

#### Debug fabric.p4 table entries

You can verify that the table entries for the static route have been added to the switches by "grepping" the output of the ONOS CLI `flows` command, for example for `leaf2`:

```
onos> flows -s any device:leaf2 | grep "17.0.0.0"
ADDED, bytes=0, packets=0, table=FabricIngress.forwarding.routing_v4, priority=48010, selector=[IPV4_DST:17.0.0.0/24], treatment=[immediate=[FabricIngress.forwarding.set_next_id_routing_v4(next_id=0xd)]]
```

One entry has been `ADDED` to table `FabricIngress.forwarding.routing_v4` with `next_id=0xd`.
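The `next_id` is an opaque handle that decouples the routing decision from the next-hop treatment: several routes can point to the same `next_id`, which the `next` tables then resolve to concrete actions. A toy Python model of this indirection (the entries are illustrative, not real ONOS state):

```python
# Conceptual model of fabric.p4's forwarding -> next indirection.
routing_v4 = {            # forwarding.routing_v4: IPv4 prefix -> next_id
    "17.0.0.0/24": 0xd,
    "10.0.100.0/24": 0xd,  # a second route can reuse the same next_id
}
hashed = {0xd: "GROUP:0xd"}  # next.hashed: next_id -> ECMP group

def next_hop_group(prefix: str) -> str:
    """Resolve a route to its next-hop group via the next_id handle."""
    return hashed[routing_v4[prefix]]

assert next_hop_group("17.0.0.0/24") == "GROUP:0xd"
```

Keeping the routing table small and pointing into shared next-hop state is what lets `segmentrouting` update next hops without touching every route.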
Let's grep flow rules for `next_id=0xd`:

```
onos> flows -s any device:leaf2 | grep "next_id=0xd"
ADDED, bytes=0, packets=0, table=FabricIngress.forwarding.routing_v4, priority=48010, selector=[IPV4_DST:17.0.0.0/24], treatment=[immediate=[FabricIngress.forwarding.set_next_id_routing_v4(next_id=0xd)]]
ADDED, bytes=0, packets=0, table=FabricIngress.forwarding.routing_v4, priority=48010, selector=[IPV4_DST:10.0.100.0/24], treatment=[immediate=[FabricIngress.forwarding.set_next_id_routing_v4(next_id=0xd)]]
ADDED, bytes=1674881, packets=2429, table=FabricIngress.next.hashed, priority=0, selector=[next_id=0xd], treatment=[immediate=[GROUP:0xd]]
ADDED, bytes=1674881, packets=2429, table=FabricIngress.next.next_vlan, priority=0, selector=[next_id=0xd], treatment=[immediate=[FabricIngress.next.set_vlan(vlan_id=0xffe)]]
```

Notice that another route shares the same next ID (`10.0.100.0/24`, from the interface config for `leaf1`), and that the next ID points to a group with the same value (`GROUP:0xd`). Let's take a look at this specific group:

```
onos> groups any device:leaf2 | grep "0xd"
id=0xd, state=ADDED, type=SELECT, bytes=0, packets=0, appId=org.onosproject.segmentrouting, referenceCount=0
    id=0xd, bucket=1, bytes=0, packets=0, weight=1, actions=[FabricIngress.next.mpls_routing_hashed(dmac=0xbb00000001, port_num=0x1, smac=0xaa00000002, label=0x65)]
    id=0xd, bucket=2, bytes=0, packets=0, weight=1, actions=[FabricIngress.next.mpls_routing_hashed(dmac=0xbb00000002, port_num=0x2, smac=0xaa00000002, label=0x65)]
```

This `SELECT` group is used to hash traffic to the spines (i.e., ECMP) and to push an MPLS label with hex value `0x65`, or 101 in base 10. Spine switches will use this label to forward packets. Can you tell what 101 identifies here? Hint: take a look at [netcfg-gtp.json].

## 7. Use ONOS REST APIs to create GTP flow rule

Finally, it is time to instruct `leaf1` to encapsulate traffic with a GTP tunnel header.
To do this, we will insert a special table entry in the "SPGW portion" of the [fabric.p4] pipeline, implemented in file [spgw.p4]. Specifically, we will insert one entry in the [dl_sess_lookup] table, which is responsible for handling downlink traffic (i.e., matching on the UE IPv4 address) by setting the GTP tunnel info used to perform the encapsulation (action `set_dl_sess_info`).

**NOTE:** this version of spgw.p4 is from ONOS v2.2.2 (the same used in this exercise). The P4 code might have changed recently, and you might see different tables if you open up the same file in a different branch.

To insert the flow rule, we will not use an app (which we would have to implement from scratch!); instead, we will use the ONOS REST APIs. To learn more about the available APIs, open up the automatically generated API docs from your running ONOS instance. The specific API we will use to create new flow rules is `POST /flows`.

This API takes a JSON request. The file [mininet/flowrule-gtp.json][flowrule-gtp.json] specifies the flow rule we want to create. This file is incomplete, and you need to modify it before we can send it via the REST APIs.

1. Open up file [mininet/flowrule-gtp.json][flowrule-gtp.json]. Look for the `"selector"` section that specifies the match fields:

    ```
    "selector": {
      "criteria": [
        {
          "type": "IPV4_DST",
          "ip": "/32"
        }
      ]
    },
    ...
    ```

2. Modify the `ip` field to match on the IP address of the UE (`17.0.0.1`). Since the `dl_sess_lookup` table performs an exact match on the IPv4 address, make sure to specify the match field with a `/32` prefix length.

Also, note that the `set_dl_sess_info` action is specified as `PROTOCOL_INDEPENDENT`. This is the ONOS terminology to describe custom flow rule actions.
For this reason, the action parameters are specified as byte strings in hexadecimal format:

* `"teid": "BEEF"`: GTP tunnel identifier (48879 in decimal);
* `"s1u_enb_addr": "0a006401"`: destination IPv4 address of the GTP tunnel, i.e., the outer IPv4 header (10.0.100.1). This is the address of the eNodeB;
* `"s1u_sgw_addr": "0a0064fe"`: source address of the GTP outer IPv4 header (10.0.100.254). This is the address of the switch interface configured in Trellis.

3. Save the [flowrule-gtp.json] file.

4. Push the flow rule to ONOS using the REST APIs. On a terminal window, type:

    ```
    $ make flowrule-gtp
    ```

    This command uses `cURL` to push the flow rule JSON file to the ONOS REST API endpoint. If the flow rule has been created correctly, you should see an output similar to the following:

    ```
    *** Pushing flowrule-gtp.json to ONOS...
    curl --fail -sSL --user onos:rocks --noproxy localhost -X POST -H 'Content-Type:application/json' \
        http://localhost:8181/onos/v1/flows?appId=rest-api -d@./mininet/flowrule-gtp.json
    {"flows":[{"deviceId":"device:leaf1","flowId":"54606147878186474"}]}
    ```

5. Check the eNodeB process. You should see that the received packets are now GTP-encapsulated!

    ```
    [...] 727 bytes: 10.0.100.254 -> 10.0.100.1, is_gtp_encap=True
        Ether / IP / UDP / GTP_U_Header / IP / UDP 10.0.200.1:80 > 17.0.0.1:400 / Raw
    ```

## Congratulations!

You have completed the eighth exercise! You were able to use fabric.p4 and Trellis to encapsulate traffic in GTP tunnels and to route it across the fabric.
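As an aside, the hexadecimal byte strings used for the `set_dl_sess_info` action parameters can be reproduced with a few lines of standard-library Python, which is handy when crafting similar flow rules for other UEs:

```python
import socket

def ipv4_param(addr: str) -> str:
    """Encode an IPv4 address as the hex byte string ONOS expects."""
    return socket.inet_aton(addr).hex()

# Values used in flowrule-gtp.json:
assert ipv4_param("10.0.100.1") == "0a006401"    # s1u_enb_addr (eNodeB)
assert ipv4_param("10.0.100.254") == "0a0064fe"  # s1u_sgw_addr (switch interface)
assert int("BEEF", 16) == 48879                  # teid in decimal
```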
[topo-gtp.py]: mininet/topo-gtp.py [netcfg-gtp.json]: mininet/netcfg-gtp.json [send-udp.py]: mininet/send-udp.py [recv-gtp.py]: mininet/recv-gtp.py [util/mn-cmd]: util/mn-cmd [fabric.p4]: https://github.com/opennetworkinglab/onos/blob/2.2.2/pipelines/fabric/impl/src/main/resources/fabric.p4 [spgw.p4]: https://github.com/opennetworkinglab/onos/blob/2.2.2/pipelines/fabric/impl/src/main/resources/include/spgw.p4 [dl_sess_lookup]: https://github.com/opennetworkinglab/onos/blob/2.2.2/pipelines/fabric/impl/src/main/resources/include/spgw.p4#L70 [flowrule-gtp.json]: mininet/flowrule-gtp.json ================================================ FILE: LICENSE ================================================ Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. 
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. 
Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. 
Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. 
Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ================================================ FILE: Makefile ================================================ mkfile_path := $(abspath $(lastword $(MAKEFILE_LIST))) curr_dir := $(patsubst %/,%,$(dir $(mkfile_path))) include util/docker/Makefile.vars onos_url := http://localhost:8181/onos onos_curl := curl --fail -sSL --user onos:rocks --noproxy localhost app_name := org.onosproject.ngsdn-tutorial NGSDN_TUTORIAL_SUDO ?= default: $(error Please specify a make target (see README.md)) _docker_pull_all: docker pull ${ONOS_IMG}@${ONOS_SHA} docker tag ${ONOS_IMG}@${ONOS_SHA} ${ONOS_IMG} docker pull ${P4RT_SH_IMG}@${P4RT_SH_SHA} docker tag ${P4RT_SH_IMG}@${P4RT_SH_SHA} ${P4RT_SH_IMG} docker pull ${P4C_IMG}@${P4C_SHA} docker tag ${P4C_IMG}@${P4C_SHA} ${P4C_IMG} docker pull ${STRATUM_BMV2_IMG}@${STRATUM_BMV2_SHA} docker tag ${STRATUM_BMV2_IMG}@${STRATUM_BMV2_SHA} ${STRATUM_BMV2_IMG} docker pull ${MVN_IMG}@${MVN_SHA} docker tag ${MVN_IMG}@${MVN_SHA} ${MVN_IMG} docker pull ${GNMI_CLI_IMG}@${GNMI_CLI_SHA} docker tag ${GNMI_CLI_IMG}@${GNMI_CLI_SHA} ${GNMI_CLI_IMG} docker pull ${YANG_IMG}@${YANG_SHA} docker tag ${YANG_IMG}@${YANG_SHA} ${YANG_IMG} docker pull ${SSHPASS_IMG}@${SSHPASS_SHA} docker tag ${SSHPASS_IMG}@${SSHPASS_SHA} ${SSHPASS_IMG} deps: _docker_pull_all _start: $(info *** Starting ONOS and Mininet (${NGSDN_TOPO_PY})... 
) @mkdir -p tmp/onos @NGSDN_TOPO_PY=${NGSDN_TOPO_PY} docker-compose up -d start: NGSDN_TOPO_PY := topo-v6.py start: _start start-v4: NGSDN_TOPO_PY := topo-v4.py start-v4: _start start-gtp: NGSDN_TOPO_PY := topo-gtp.py start-gtp: _start stop: $(info *** Stopping ONOS and Mininet...) @NGSDN_TOPO_PY=foo docker-compose down -t0 restart: reset start onos-cli: $(info *** Connecting to the ONOS CLI... password: rocks) $(info *** Top exit press Ctrl-D) @ssh -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" -o LogLevel=ERROR -p 8101 onos@localhost onos-log: docker-compose logs -f onos onos-ui: open ${onos_url}/ui mn-cli: $(info *** Attaching to Mininet CLI...) $(info *** To detach press Ctrl-D (Mininet will keep running)) -@docker attach --detach-keys "ctrl-d" $(shell docker-compose ps -q mininet) || echo "*** Detached from Mininet CLI" mn-log: docker logs -f mininet _netcfg: $(info *** Pushing ${NGSDN_NETCFG_JSON} to ONOS...) ${onos_curl} -X POST -H 'Content-Type:application/json' \ ${onos_url}/v1/network/configuration -d@./mininet/${NGSDN_NETCFG_JSON} @echo netcfg: NGSDN_NETCFG_JSON := netcfg.json netcfg: _netcfg netcfg-sr: NGSDN_NETCFG_JSON := netcfg-sr.json netcfg-sr: _netcfg netcfg-gtp: NGSDN_NETCFG_JSON := netcfg-gtp.json netcfg-gtp: _netcfg flowrule-gtp: $(info *** Pushing flowrule-gtp.json to ONOS...) ${onos_curl} -X POST -H 'Content-Type:application/json' \ ${onos_url}/v1/flows?appId=rest-api -d@./mininet/flowrule-gtp.json @echo flowrule-clean: $(info *** Removing all flows installed via REST APIs...) ${onos_curl} -X DELETE -H 'Content-Type:application/json' \ ${onos_url}/v1/flows/application/rest-api @echo reset: stop -$(NGSDN_TUTORIAL_SUDO) rm -rf ./tmp clean: -$(NGSDN_TUTORIAL_SUDO) rm -rf p4src/build -$(NGSDN_TUTORIAL_SUDO) rm -rf app/target -$(NGSDN_TUTORIAL_SUDO) rm -rf app/src/main/resources/bmv2.json -$(NGSDN_TUTORIAL_SUDO) rm -rf app/src/main/resources/p4info.txt p4-build: p4src/main.p4 $(info *** Building P4 program...) 
@mkdir -p p4src/build docker run --rm -v ${curr_dir}:/workdir -w /workdir ${P4C_IMG} \ p4c-bm2-ss --arch v1model -o p4src/build/bmv2.json \ --p4runtime-files p4src/build/p4info.txt --Wdisable=unsupported \ p4src/main.p4 @echo "*** P4 program compiled successfully! Output files are in p4src/build" p4-test: @cd ptf && PTF_DOCKER_IMG=$(STRATUM_BMV2_IMG) ./run_tests $(TEST) _copy_p4c_out: $(info *** Copying p4c outputs to app resources...) @mkdir -p app/src/main/resources cp -f p4src/build/p4info.txt app/src/main/resources/ cp -f p4src/build/bmv2.json app/src/main/resources/ _mvn_package: $(info *** Building ONOS app...) @mkdir -p app/target @docker run --rm -v ${curr_dir}/app:/mvn-src -w /mvn-src ${MVN_IMG} mvn -o clean package app-build: p4-build _copy_p4c_out _mvn_package $(info *** ONOS app .oar package created successfully) @ls -1 app/target/*.oar app-install: $(info *** Installing and activating app in ONOS...) ${onos_curl} -X POST -HContent-Type:application/octet-stream \ '${onos_url}/v1/applications?activate=true' \ --data-binary @app/target/ngsdn-tutorial-1.0-SNAPSHOT.oar @echo app-uninstall: $(info *** Uninstalling app from ONOS (if present)...)
	-${onos_curl} -X DELETE ${onos_url}/v1/applications/${app_name}
	@echo

app-reload: app-uninstall app-install

yang-tools:
	docker run --rm -it -v ${curr_dir}/yang/demo-port.yang:/models/demo-port.yang ${YANG_IMG}

solution-apply:
	mkdir working_copy
	cp -r app working_copy/app
	cp -r p4src working_copy/p4src
	cp -r ptf working_copy/ptf
	cp -r mininet working_copy/mininet
	rsync -r solution/ ./

solution-revert:
	test -d working_copy
	$(NGSDN_TUTORIAL_SUDO) rm -rf ./app/*
	$(NGSDN_TUTORIAL_SUDO) rm -rf ./p4src/*
	$(NGSDN_TUTORIAL_SUDO) rm -rf ./ptf/*
	$(NGSDN_TUTORIAL_SUDO) rm -rf ./mininet/*
	cp -r working_copy/* ./
	$(NGSDN_TUTORIAL_SUDO) rm -rf working_copy/

check:
	make reset
	# P4 starter code and app should compile
	make p4-build
	make app-build
	# Check solution
	make solution-apply
	make start
	make p4-build
	make p4-test
	make app-build
	sleep 30
	make app-reload
	sleep 10
	make netcfg
	sleep 10
	# The first ping(s) might fail because of a known race condition in the
	# L2BridgingComponent. Ping all hosts.
	-util/mn-cmd h1a ping -c 1 2001:1:1::b
	util/mn-cmd h1a ping -c 1 2001:1:1::b
	-util/mn-cmd h1b ping -c 1 2001:1:1::c
	util/mn-cmd h1b ping -c 1 2001:1:1::c
	-util/mn-cmd h2 ping -c 1 2001:1:1::b
	util/mn-cmd h2 ping -c 1 2001:1:1::b
	util/mn-cmd h2 ping -c 1 2001:1:1::a
	util/mn-cmd h2 ping -c 1 2001:1:1::c
	-util/mn-cmd h3 ping -c 1 2001:1:2::1
	util/mn-cmd h3 ping -c 1 2001:1:2::1
	util/mn-cmd h3 ping -c 1 2001:1:1::a
	util/mn-cmd h3 ping -c 1 2001:1:1::b
	util/mn-cmd h3 ping -c 1 2001:1:1::c
	-util/mn-cmd h4 ping -c 1 2001:1:2::1
	util/mn-cmd h4 ping -c 1 2001:1:2::1
	util/mn-cmd h4 ping -c 1 2001:1:1::a
	util/mn-cmd h4 ping -c 1 2001:1:1::b
	util/mn-cmd h4 ping -c 1 2001:1:1::c
	make stop
	make solution-revert

check-sr:
	make reset
	make start-v4
	sleep 45
	util/onos-cmd app activate segmentrouting
	util/onos-cmd app activate pipelines.fabric
	sleep 15
	make netcfg-sr
	sleep 20
	util/mn-cmd h1a ping -c 1 172.16.1.3
	util/mn-cmd h1b ping -c 1 172.16.1.3
	util/mn-cmd h2 ping -c 1 172.16.2.254
	sleep 5
	util/mn-cmd h2 ping -c 1 172.16.1.1
	util/mn-cmd h2 ping -c 1 172.16.1.2
	util/mn-cmd h2 ping -c 1 172.16.1.3
	# ping from h3 and h4 should not work without the solution
	! util/mn-cmd h3 ping -c 1 172.16.3.254
	! util/mn-cmd h4 ping -c 1 172.16.4.254
	make solution-apply
	make netcfg-sr
	sleep 20
	util/mn-cmd h3 ping -c 1 172.16.3.254
	util/mn-cmd h4 ping -c 1 172.16.4.254
	sleep 5
	util/mn-cmd h3 ping -c 1 172.16.1.1
	util/mn-cmd h3 ping -c 1 172.16.1.2
	util/mn-cmd h3 ping -c 1 172.16.1.3
	util/mn-cmd h3 ping -c 1 172.16.2.1
	util/mn-cmd h3 ping -c 1 172.16.4.1
	util/mn-cmd h4 ping -c 1 172.16.1.1
	util/mn-cmd h4 ping -c 1 172.16.1.2
	util/mn-cmd h4 ping -c 1 172.16.1.3
	util/mn-cmd h4 ping -c 1 172.16.2.1
	make stop
	make solution-revert

check-gtp:
	make reset
	make start-gtp
	sleep 45
	util/onos-cmd app activate segmentrouting
	util/onos-cmd app activate pipelines.fabric
	util/onos-cmd app activate netcfghostprovider
	sleep 15
	make solution-apply
	make netcfg-gtp
	sleep 20
	util/mn-cmd enodeb ping -c 1 10.0.100.254
	util/mn-cmd pdn ping -c 1 10.0.200.254
	util/onos-cmd route-add 17.0.0.0/24 10.0.100.1
	make flowrule-gtp
	# util/mn-cmd requires a TTY because it uses the docker -it option,
	# hence we use screen to put it in the background
	screen -d -m util/mn-cmd pdn /mininet/send-udp.py
	util/mn-cmd enodeb /mininet/recv-gtp.py -e
	make stop
	make solution-revert

================================================
FILE: README.md
================================================
# Next-Gen SDN Tutorial (Advanced)

Welcome to the Next-Gen SDN tutorial! This tutorial is targeted at students
and practitioners who want to learn about the building blocks of the
next-generation SDN (NG-SDN) architecture, such as:

* Data plane programming and control via P4 and P4Runtime
* Configuration via YANG, OpenConfig, and gNMI
* Stratum switch OS
* ONOS SDN controller

Tutorial sessions are organized around a sequence of hands-on exercises that
show how to build a leaf-spine data center fabric based on IPv6, using P4,
Stratum, and ONOS.
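The "leaf-spine" wiring that all exercises build on can be summarized in a few lines of code. The sketch below is NOT the tutorial's actual Mininet script (that lives in `mininet/topo-v6.py`); it is only a minimal illustration of the link pattern, using hypothetical helper and switch names:

```python
# Minimal illustration of a leaf-spine wiring: every leaf connects to
# every spine, and there are no leaf-leaf or spine-spine links.
# Hypothetical helper, not part of the tutorial code base.
from itertools import product


def leaf_spine_links(num_leaves=2, num_spines=2):
    leaves = ["leaf%d" % (i + 1) for i in range(num_leaves)]
    spines = ["spine%d" % (i + 1) for i in range(num_spines)]
    # Full mesh between the two tiers.
    return list(product(leaves, spines))


# The 2x2 topology used in the exercises yields 4 bidirectional links.
print(leaf_spine_links())
```

With 2 leaves and 2 spines this gives 4 links, which is exactly the 2x2 fabric emulated by the Mininet scripts in this repo.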
Exercises assume an intermediate knowledge of the P4 language, and a basic
knowledge of Java and Python. Participants will be provided with a starter P4
program and ONOS app implementation. Exercises will focus on concepts such as:

* Using Stratum APIs (P4Runtime, gNMI, OpenConfig, gNOI)
* Using ONOS with devices programmed with arbitrary P4 programs
* Writing ONOS applications to provide the control plane logic (bridging,
  routing, ECMP, etc.)
* Testing using bmv2 in Mininet
* PTF-based P4 unit tests

## Basic vs. advanced version

This tutorial comes in two versions: basic (`master` branch), and advanced
(this branch). The basic version contains fewer exercises, and it does not
assume prior knowledge of the P4 language. Instead, it provides a gentle
introduction to it.

Check the `master` branch of this repo if you're interested in the basic
version. If you're interested in the advanced version, keep reading.

## Slides

Tutorial slides are available online:

These slides provide an introduction to the topics covered in the tutorial. We
suggest you look at them before starting to work on the exercises.

## System requirements

If you are taking this tutorial at an event organized by ONF, you should have
received credentials to access the **ONF Cloud Tutorial Platform**, in which
case you can skip this section. Keep reading if you are interested in working
on the exercises on your laptop.

To facilitate access to the tools required to complete this tutorial, we
provide two options for you to choose from:

1. Download a pre-packaged VM with everything included; **OR**
2. Manually install Docker and other dependencies.

### Option 1 - Download tutorial VM

Use the following link to download the VM (4 GB):

*

The VM is in .ova format and has been created using VirtualBox v5.2.32. To run
the VM you can use any modern virtualization system, although we recommend
using VirtualBox.

For instructions on how to get VirtualBox and import the VM, use the following
links:

*
*

Alternatively, you can use the scripts in [util/vm](util/vm) to build a VM on
your machine using Vagrant.

**Recommended VM configuration:** The current configuration of the VM is 4 GB
of RAM and a 4-core CPU. These are the recommended minimum system requirements
to complete the exercises. When imported, the VM takes approx. 8 GB of HDD
space. For a smooth experience, we recommend running the VM on a host system
that has at least double those resources.

**VM user credentials:** Use credentials `sdn`/`rocks` to log in to the Ubuntu
system.

### Option 2 - Manually install Docker and other dependencies

All exercises can be executed by installing the following dependencies:

* Docker v1.13.0+ (with docker-compose)
* make
* Python 3
* Bash-like Unix shell
* Wireshark (optional)

**Note for Windows users**: all scripts have been tested on macOS and Ubuntu.
Although we think they should work on Windows, we have not tested it. For this
reason, we advise Windows users to prefer Option 1.

## Get this repo or pull latest changes

To work on the exercises you will need to clone this repo:

    cd ~
    git clone -b advanced https://github.com/opennetworkinglab/ngsdn-tutorial

If the `ngsdn-tutorial` directory is already present, make sure to update its
content:

    cd ~/ngsdn-tutorial
    git pull origin advanced

## Download / upgrade dependencies

The VM may have shipped with an older version of the dependencies than we
would like to use for the exercises. You can upgrade to the latest version
using the following command:

    cd ~/ngsdn-tutorial
    make deps

This command will download all necessary Docker images (~1.5 GB) allowing you
to work off-line. For this reason, we recommend running this step ahead of the
tutorial, with a reliable Internet connection.

## Using an IDE to work on the exercises

During the exercises you will need to write code in multiple languages such as
P4, Java, and Python.
While the exercises do not prescribe the use of any specific IDE or code
editor, the **ONF Cloud Tutorial Platform** provides access to a web-based
version of Visual Studio Code (VS Code).

If you are using the tutorial VM, you will find the Java IDE [IntelliJ IDEA
Community Edition](https://www.jetbrains.com/idea/), already pre-loaded with
plugins for P4 syntax highlighting and Python development. We suggest using
IntelliJ IDEA especially when working on the ONOS app, as it provides code
completion for all ONOS APIs.

## Repo structure

This repo is structured as follows:

* `p4src/` P4 implementation
* `yang/` YANG model used in exercise 2
* `app/` custom ONOS app Java implementation
* `mininet/` Mininet script to emulate a 2x2 leaf-spine fabric topology of
  `stratum_bmv2` devices
* `util/` Utility scripts
* `ptf/` P4 data plane unit tests based on Packet Test Framework (PTF)

## Tutorial commands

To facilitate working on the exercises, we provide a set of make-based
commands to control the different aspects of the tutorial. Commands will be
introduced in the exercises, here's a quick reference:

| Make command        | Description                                            |
|---------------------|--------------------------------------------------------|
| `make deps`         | Pull and build all required dependencies               |
| `make p4-build`     | Build P4 program                                       |
| `make p4-test`      | Run PTF tests                                          |
| `make start`        | Start Mininet and ONOS containers                      |
| `make stop`         | Stop all containers                                    |
| `make restart`      | Restart containers clearing any previous state         |
| `make onos-cli`     | Access the ONOS CLI (password: `rocks`, Ctrl-D to exit)|
| `make onos-log`     | Show the ONOS log                                      |
| `make mn-cli`       | Access the Mininet CLI (Ctrl-D to exit)                |
| `make mn-log`       | Show the Mininet log (i.e., the CLI output)            |
| `make app-build`    | Build custom ONOS app                                  |
| `make app-reload`   | Install and activate the ONOS app                      |
| `make netcfg`       | Push netcfg.json file (network config) to ONOS         |

## Exercises

Click on the exercise name to see the instructions:

1. [P4Runtime basics](./EXERCISE-1.md)
2. [Yang, OpenConfig, and gNMI basics](./EXERCISE-2.md)
3. [Using ONOS as the control plane](./EXERCISE-3.md)
4. [Enabling ONOS built-in services](./EXERCISE-4.md)
5. [Implementing IPv6 routing with ECMP](./EXERCISE-5.md)
6. [Implementing SRv6](./EXERCISE-6.md)
7. [Trellis Basics](./EXERCISE-7.md)
8. [GTP termination with fabric.p4](./EXERCISE-8.md)

## Solutions

You can find solutions for each exercise in the [solution](solution)
directory. Feel free to compare your solution to the reference one whenever
you feel stuck.

[![Build Status](https://travis-ci.org/opennetworkinglab/ngsdn-tutorial.svg?branch=advanced)](https://travis-ci.org/opennetworkinglab/ngsdn-tutorial)

================================================
FILE: app/pom.xml
================================================
4.0.0 org.onosproject onos-dependencies 2.2.2 org.onosproject ngsdn-tutorial 1.0-SNAPSHOT bundle NG-SDN tutorial app http://www.onosproject.org org.onosproject.ngsdn-tutorial NG-SDN Tutorial App https://www.onosproject.org Traffic Steering https://www.onosproject.org Provides IPv6 routing capabilities to a leaf-spine network of Stratum switches org.onosproject onos-api ${onos.version} provided org.onosproject onos-protocols-p4runtime-model ${onos.version} provided org.onosproject onos-protocols-p4runtime-api ${onos.version} provided org.onosproject onos-protocols-grpc-api ${onos.version} provided org.onosproject onlab-osgi ${onos.version} provided org.onosproject onlab-misc ${onos.version} provided org.onosproject onos-cli ${onos.version} provided org.slf4j slf4j-api provided com.google.guava guava provided com.fasterxml.jackson.core jackson-databind provided junit junit test org.onosproject onos-api ${onos.version} test tests org.osgi org.osgi.service.component.annotations provided org.osgi org.osgi.core provided org.apache.karaf.shell org.apache.karaf.shell.console provided org.apache.felix maven-bundle-plugin org.onosproject.ngsdn.tutorial.cli
org.onosproject onos-maven-plugin ================================================ FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/AppConstants.java ================================================ /* * Copyright 2019-present Open Networking Foundation * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.onosproject.ngsdn.tutorial; import org.onosproject.net.pi.model.PiPipeconfId; public class AppConstants { public static final String APP_NAME = "org.onosproject.ngsdn-tutorial"; public static final PiPipeconfId PIPECONF_ID = new PiPipeconfId("org.onosproject.ngsdn-tutorial"); public static final int DEFAULT_FLOW_RULE_PRIORITY = 10; public static final int INITIAL_SETUP_DELAY = 2; // Seconds. public static final int CLEAN_UP_DELAY = 2000; // milliseconds public static final int DEFAULT_CLEAN_UP_RETRY_TIMES = 10; public static final int CPU_PORT_ID = 255; public static final int CPU_CLONE_SESSION_ID = 99; } ================================================ FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/Ipv6RoutingComponent.java ================================================ /* * Copyright 2019-present Open Networking Foundation * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.onosproject.ngsdn.tutorial; import com.google.common.collect.Lists; import org.onlab.packet.Ip6Address; import org.onlab.packet.Ip6Prefix; import org.onlab.packet.IpAddress; import org.onlab.packet.IpPrefix; import org.onlab.packet.MacAddress; import org.onlab.util.ItemNotFoundException; import org.onosproject.core.ApplicationId; import org.onosproject.mastership.MastershipService; import org.onosproject.net.Device; import org.onosproject.net.DeviceId; import org.onosproject.net.Host; import org.onosproject.net.Link; import org.onosproject.net.PortNumber; import org.onosproject.net.config.NetworkConfigService; import org.onosproject.net.device.DeviceEvent; import org.onosproject.net.device.DeviceListener; import org.onosproject.net.device.DeviceService; import org.onosproject.net.flow.FlowRule; import org.onosproject.net.flow.FlowRuleService; import org.onosproject.net.flow.criteria.PiCriterion; import org.onosproject.net.group.GroupDescription; import org.onosproject.net.group.GroupService; import org.onosproject.net.host.HostEvent; import org.onosproject.net.host.HostListener; import org.onosproject.net.host.HostService; import org.onosproject.net.host.InterfaceIpAddress; import org.onosproject.net.intf.Interface; import org.onosproject.net.intf.InterfaceService; import org.onosproject.net.link.LinkEvent; import org.onosproject.net.link.LinkListener; import org.onosproject.net.link.LinkService; import org.onosproject.net.pi.model.PiActionId; import org.onosproject.net.pi.model.PiActionParamId; import org.onosproject.net.pi.model.PiMatchFieldId; 
import org.onosproject.net.pi.runtime.PiAction; import org.onosproject.net.pi.runtime.PiActionParam; import org.onosproject.net.pi.runtime.PiActionProfileGroupId; import org.onosproject.net.pi.runtime.PiTableAction; import org.osgi.service.component.annotations.Activate; import org.osgi.service.component.annotations.Component; import org.osgi.service.component.annotations.Deactivate; import org.osgi.service.component.annotations.Reference; import org.osgi.service.component.annotations.ReferenceCardinality; import org.onosproject.ngsdn.tutorial.common.FabricDeviceConfig; import org.onosproject.ngsdn.tutorial.common.Utils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import java.util.Collection; import java.util.Collections; import java.util.List; import java.util.Optional; import java.util.Set; import java.util.stream.Collectors; import static com.google.common.collect.Streams.stream; import static org.onosproject.ngsdn.tutorial.AppConstants.INITIAL_SETUP_DELAY; /** * App component that configures devices to provide IPv6 routing capabilities * across the whole fabric. */ @Component( immediate = true, // *** TODO EXERCISE 5 // set to true when ready enabled = false ) public class Ipv6RoutingComponent { private static final Logger log = LoggerFactory.getLogger(Ipv6RoutingComponent.class); private static final int DEFAULT_ECMP_GROUP_ID = 0xec3b0000; private static final long GROUP_INSERT_DELAY_MILLIS = 200; private final HostListener hostListener = new InternalHostListener(); private final LinkListener linkListener = new InternalLinkListener(); private final DeviceListener deviceListener = new InternalDeviceListener(); private ApplicationId appId; //-------------------------------------------------------------------------- // ONOS CORE SERVICE BINDING // // These variables are set by the Karaf runtime environment before calling // the activate() method. 
//-------------------------------------------------------------------------- @Reference(cardinality = ReferenceCardinality.MANDATORY) private FlowRuleService flowRuleService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private HostService hostService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private MastershipService mastershipService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private GroupService groupService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private DeviceService deviceService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private NetworkConfigService networkConfigService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private InterfaceService interfaceService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private LinkService linkService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private MainComponent mainComponent; //-------------------------------------------------------------------------- // COMPONENT ACTIVATION. // // When loading/unloading the app the Karaf runtime environment will call // activate()/deactivate(). //-------------------------------------------------------------------------- @Activate protected void activate() { appId = mainComponent.getAppId(); hostService.addListener(hostListener); linkService.addListener(linkListener); deviceService.addListener(deviceListener); // Schedule set up for all devices. mainComponent.scheduleTask(this::setUpAllDevices, INITIAL_SETUP_DELAY); log.info("Started"); } @Deactivate protected void deactivate() { hostService.removeListener(hostListener); linkService.removeListener(linkListener); deviceService.removeListener(deviceListener); log.info("Stopped"); } //-------------------------------------------------------------------------- // METHODS TO COMPLETE. // // Complete the implementation wherever you see TODO. 
    //--------------------------------------------------------------------------

    /**
     * Sets up the "My Station" table for the given device using the
     * myStationMac address found in the config.
     * <p>
     * This method will be called at component activation for each device
     * (switch) known by ONOS, and every time a new device-added event is
     * captured by the InternalDeviceListener defined below.
     *
     * @param deviceId the device ID
     */
    private void setUpMyStationTable(DeviceId deviceId) {

        log.info("Adding My Station rules to {}...", deviceId);

        final MacAddress myStationMac = getMyStationMac(deviceId);

        // HINT: in our solution, the My Station table matches on the *ethernet
        // destination* and there is only one action called *NoAction*, which is
        // used as an indication of "table hit" in the control block.

        // *** TODO EXERCISE 5
        // Modify P4Runtime entity names to match content of P4Info file (look
        // for the fully qualified name of tables, match fields, and actions).
        // ---- START SOLUTION ----
        final String tableId = "MODIFY ME";

        final PiCriterion match = PiCriterion.builder()
                .matchExact(
                        PiMatchFieldId.of("MODIFY ME"),
                        myStationMac.toBytes())
                .build();

        // Creates an action that does *NoAction* when hit.
        final PiTableAction action = PiAction.builder()
                .withId(PiActionId.of("MODIFY ME"))
                .build();
        // ---- END SOLUTION ----

        final FlowRule myStationRule = Utils.buildFlowRule(
                deviceId, appId, tableId, match, action);

        flowRuleService.applyFlowRules(myStationRule);
    }

    /**
     * Creates an ONOS SELECT group for the routing table to provide ECMP
     * forwarding for the given collection of next hop MAC addresses. ONOS
     * SELECT groups are equivalent to P4Runtime action selector groups.
     * <p>
     * This method will be called by the routing policy methods below to insert
     * groups in the L3 table.
     *
     * @param nextHopMacs the collection of MAC addresses of next hops
     * @param deviceId    the device where the group will be installed
     * @return a SELECT group
     */
    private GroupDescription createNextHopGroup(int groupId,
                                                Collection<MacAddress> nextHopMacs,
                                                DeviceId deviceId) {

        String actionProfileId = "IngressPipeImpl.ecmp_selector";

        final List<PiAction> actions = Lists.newArrayList();

        // Build one "set next hop" action for each next hop.
        // *** TODO EXERCISE 5
        // Modify P4Runtime entity names to match content of P4Info file (look
        // for the fully qualified name of tables, match fields, and actions).
        // ---- START SOLUTION ----
        final String tableId = "MODIFY ME";

        for (MacAddress nextHopMac : nextHopMacs) {
            final PiAction action = PiAction.builder()
                    .withId(PiActionId.of("MODIFY ME"))
                    .withParameter(new PiActionParam(
                            // Action param name.
                            PiActionParamId.of("MODIFY ME"),
                            // Action param value.
                            nextHopMac.toBytes()))
                    .build();
            actions.add(action);
        }
        // ---- END SOLUTION ----

        return Utils.buildSelectGroup(
                deviceId, tableId, actionProfileId, groupId, actions, appId);
    }

    /**
     * Creates a routing flow rule that matches on the given IPv6 prefix and
     * executes the given group ID (created before).
     *
     * @param deviceId  the device where the flow rule will be installed
     * @param ip6Prefix the IPv6 prefix
     * @param groupId   the group ID
     * @return a flow rule
     */
    private FlowRule createRoutingRule(DeviceId deviceId, Ip6Prefix ip6Prefix,
                                       int groupId) {
        // *** TODO EXERCISE 5
        // Modify P4Runtime entity names to match content of P4Info file (look
        // for the fully qualified name of tables, match fields, and actions).
        // ---- START SOLUTION ----
        final String tableId = "MODIFY ME";

        final PiCriterion match = PiCriterion.builder()
                .matchLpm(
                        PiMatchFieldId.of("MODIFY ME"),
                        ip6Prefix.address().toOctets(),
                        ip6Prefix.prefixLength())
                .build();

        final PiTableAction action = PiActionProfileGroupId.of(groupId);
        // ---- END SOLUTION ----

        return Utils.buildFlowRule(
                deviceId, appId, tableId, match, action);
    }

    /**
     * Creates a flow rule for the L2 table mapping the given next hop MAC to
     * the given output port.
     * <p>
     * This is called by the routing policy methods below to establish L2-based
     * forwarding inside the fabric, e.g., when deviceId is a leaf switch and
     * nextHopMac is that of a spine switch.
     *
     * @param deviceId   the device
     * @param nexthopMac the next hop (destination) MAC
     * @param outPort    the output port
     */
    private FlowRule createL2NextHopRule(DeviceId deviceId, MacAddress nexthopMac,
                                         PortNumber outPort) {
        // *** TODO EXERCISE 5
        // Modify P4Runtime entity names to match content of P4Info file (look
        // for the fully qualified name of tables, match fields, and actions).
        // ---- START SOLUTION ----
        final String tableId = "MODIFY ME";

        final PiCriterion match = PiCriterion.builder()
                .matchExact(PiMatchFieldId.of("MODIFY ME"),
                        nexthopMac.toBytes())
                .build();

        final PiAction action = PiAction.builder()
                .withId(PiActionId.of("MODIFY ME"))
                .withParameter(new PiActionParam(
                        PiActionParamId.of("MODIFY ME"),
                        outPort.toLong()))
                .build();
        // ---- END SOLUTION ----

        return Utils.buildFlowRule(
                deviceId, appId, tableId, match, action);
    }

    //--------------------------------------------------------------------------
    // EVENT LISTENERS
    //
    // Events are processed only if isRelevant() returns true.
    //--------------------------------------------------------------------------

    /**
     * Listener of host events which triggers configuration of routing rules on
     * the device where the host is attached.
     */
    class InternalHostListener implements HostListener {

        @Override
        public boolean isRelevant(HostEvent event) {
            switch (event.type()) {
                case HOST_ADDED:
                    break;
                case HOST_REMOVED:
                case HOST_UPDATED:
                case HOST_MOVED:
                default:
                    // Ignore other events.
                    // Food for thought:
                    // how to support host moved/removed events?
                    return false;
            }
            // Process host event only if this controller instance is the master
            // for the device where this host is attached.
final Host host = event.subject(); final DeviceId deviceId = host.location().deviceId(); return mastershipService.isLocalMaster(deviceId); } @Override public void event(HostEvent event) { Host host = event.subject(); DeviceId deviceId = host.location().deviceId(); mainComponent.getExecutorService().execute(() -> { log.info("{} event! host={}, deviceId={}, port={}", event.type(), host.id(), deviceId, host.location().port()); setUpHostRules(deviceId, host); }); } } /** * Listener of link events, which triggers configuration of routing rules to * forward packets across the fabric, i.e. from leaves to spines and vice * versa. *
     * <p>
     * Reacting to link events instead of device ones allows us to make sure
     * all devices are always configured with a topology view that includes all
     * links, e.g. modifying an ECMP group as soon as a new link is added. The
     * downside is that we might be configuring the same device twice for the
     * same set of links/paths. However, the ONOS core treats these cases as a
     * no-op when the device is already configured with the desired forwarding
     * state (i.e. flows and groups).
     */
    class InternalLinkListener implements LinkListener {

        @Override
        public boolean isRelevant(LinkEvent event) {
            switch (event.type()) {
                case LINK_ADDED:
                    break;
                case LINK_UPDATED:
                case LINK_REMOVED:
                default:
                    return false;
            }
            DeviceId srcDev = event.subject().src().deviceId();
            DeviceId dstDev = event.subject().dst().deviceId();
            return mastershipService.isLocalMaster(srcDev) ||
                    mastershipService.isLocalMaster(dstDev);
        }

        @Override
        public void event(LinkEvent event) {
            DeviceId srcDev = event.subject().src().deviceId();
            DeviceId dstDev = event.subject().dst().deviceId();
            if (mastershipService.isLocalMaster(srcDev)) {
                mainComponent.getExecutorService().execute(() -> {
                    log.info("{} event! Configuring {}... linkSrc={}, linkDst={}",
                            event.type(), srcDev, srcDev, dstDev);
                    setUpFabricRoutes(srcDev);
                    setUpL2NextHopRules(srcDev);
                });
            }
            if (mastershipService.isLocalMaster(dstDev)) {
                mainComponent.getExecutorService().execute(() -> {
                    log.info("{} event! Configuring {}... linkSrc={}, linkDst={}",
                            event.type(), dstDev, srcDev, dstDev);
                    setUpFabricRoutes(dstDev);
                    setUpL2NextHopRules(dstDev);
                });
            }
        }
    }

    /**
     * Listener of device events which triggers configuration of the My Station
     * table.
     */
    class InternalDeviceListener implements DeviceListener {

        @Override
        public boolean isRelevant(DeviceEvent event) {
            switch (event.type()) {
                case DEVICE_AVAILABILITY_CHANGED:
                case DEVICE_ADDED:
                    break;
                default:
                    return false;
            }
            // Process device event if this controller instance is the master
            // for the device and the device is available.
            DeviceId deviceId = event.subject().id();
            return mastershipService.isLocalMaster(deviceId) &&
                    deviceService.isAvailable(event.subject().id());
        }

        @Override
        public void event(DeviceEvent event) {
            mainComponent.getExecutorService().execute(() -> {
                DeviceId deviceId = event.subject().id();
                log.info("{} event! device id={}", event.type(), deviceId);
                setUpMyStationTable(deviceId);
            });
        }
    }

    //--------------------------------------------------------------------------
    // ROUTING POLICY METHODS
    //
    // Called by event listeners, these methods implement the actual routing
    // policy, responsible for computing paths and creating ECMP groups.
    //--------------------------------------------------------------------------

    /**
     * Sets up the L2 next hop rules of a device to provide forwarding inside
     * the fabric, i.e. between leaf and spine switches.
     *
     * @param deviceId the device ID
     */
    private void setUpL2NextHopRules(DeviceId deviceId) {

        Set<Link> egressLinks = linkService.getDeviceEgressLinks(deviceId);

        for (Link link : egressLinks) {
            // For each other switch directly connected to this.
            final DeviceId nextHopDevice = link.dst().deviceId();
            // Get port of this device connecting to next hop.
            final PortNumber outPort = link.src().port();
            // Get next hop MAC address.
            final MacAddress nextHopMac = getMyStationMac(nextHopDevice);

            final FlowRule nextHopRule = createL2NextHopRule(
                    deviceId, nextHopMac, outPort);

            flowRuleService.applyFlowRules(nextHopRule);
        }
    }

    /**
     * Sets up the given device with the necessary rules to route packets to the
     * given host.
     *
     * @param deviceId the device ID
     * @param host     the host
     */
    private void setUpHostRules(DeviceId deviceId, Host host) {

        // Get all IPv6 addresses associated to this host. In this tutorial we
        // use hosts with only 1 IPv6 address.
        final Collection<Ip6Address> hostIpv6Addrs = host.ipAddresses().stream()
                .filter(IpAddress::isIp6)
                .map(IpAddress::getIp6Address)
                .collect(Collectors.toSet());

        if (hostIpv6Addrs.isEmpty()) {
            // Ignore.
            log.debug("No IPv6 addresses for host {}, ignore", host.id());
            return;
        } else {
            log.info("Adding routes on {} for host {} [{}]",
                    deviceId, host.id(), hostIpv6Addrs);
        }

        // Create an ECMP group with only one member, where the group ID is
        // derived from the host MAC.
        final MacAddress hostMac = host.mac();
        int groupId = macToGroupId(hostMac);
        final GroupDescription group = createNextHopGroup(
                groupId, Collections.singleton(hostMac), deviceId);

        // Map each host IPv6 address to the corresponding /128 prefix and
        // obtain a flow rule that points to the group ID. In this tutorial we
        // expect only one flow rule per host.
        final List<FlowRule> flowRules = hostIpv6Addrs.stream()
                .map(IpAddress::toIpPrefix)
                .filter(IpPrefix::isIp6)
                .map(IpPrefix::getIp6Prefix)
                .map(prefix -> createRoutingRule(deviceId, prefix, groupId))
                .collect(Collectors.toList());

        // Helper function to install flows after groups, since here flow rules
        // point to the group and P4Runtime enforces this dependency during
        // write operations.
        insertInOrder(group, flowRules);
    }

    /**
     * Sets up routes on a given device to forward packets across the fabric,
     * making a distinction between spines and leaves.
     *
     * @param deviceId the device ID
     */
    private void setUpFabricRoutes(DeviceId deviceId) {
        if (isSpine(deviceId)) {
            setUpSpineRoutes(deviceId);
        } else {
            setUpLeafRoutes(deviceId);
        }
    }

    /**
     * Insert routing rules on the given spine switch, matching on leaf
     * interface subnets and forwarding packets to the corresponding leaf.
     *
     * @param spineId the spine device ID
     */
    private void setUpSpineRoutes(DeviceId spineId) {

        log.info("Adding up spine routes on {}...", spineId);

        for (Device device : deviceService.getDevices()) {

            if (isSpine(device.id())) {
                // We only need routes to leaf switches. Ignore spines.
                continue;
            }

            final DeviceId leafId = device.id();
            final MacAddress leafMac = getMyStationMac(leafId);
            final Set<Ip6Prefix> subnetsToRoute = getInterfaceIpv6Prefixes(leafId);

            // Since we're here, we also add a route for SRv6 (Exercise 7), to
            // forward packets with IPv6 dst the SID of a leaf switch.
            final Ip6Address leafSid = getDeviceSid(leafId);
            subnetsToRoute.add(Ip6Prefix.valueOf(leafSid, 128));

            // Create a group with only one member.
            int groupId = macToGroupId(leafMac);

            GroupDescription group = createNextHopGroup(
                    groupId, Collections.singleton(leafMac), spineId);

            List<FlowRule> flowRules = subnetsToRoute.stream()
                    .map(subnet -> createRoutingRule(spineId, subnet, groupId))
                    .collect(Collectors.toList());

            insertInOrder(group, flowRules);
        }
    }

    /**
     * Insert routing rules on the given leaf switch, matching on interface
     * subnets associated to other leaves and forwarding packets to the spines
     * using ECMP.
     *
     * @param leafId the leaf device ID
     */
    private void setUpLeafRoutes(DeviceId leafId) {
        log.info("Setting up leaf routes: {}", leafId);

        // Get the set of subnets (interface IPv6 prefixes) associated to other
        // leaves but not this one.
        Set<Ip6Prefix> subnetsToRouteViaSpines = stream(deviceService.getDevices())
                .map(Device::id)
                .filter(this::isLeaf)
                .filter(deviceId -> !deviceId.equals(leafId))
                .map(this::getInterfaceIpv6Prefixes)
                .flatMap(Collection::stream)
                .collect(Collectors.toSet());

        // Get the myStationMac address of all spines.
        Set<MacAddress> spineMacs = stream(deviceService.getDevices())
                .map(Device::id)
                .filter(this::isSpine)
                .map(this::getMyStationMac)
                .collect(Collectors.toSet());

        // Create an ECMP group to distribute traffic across all spines.
        final int groupId = DEFAULT_ECMP_GROUP_ID;
        final GroupDescription ecmpGroup = createNextHopGroup(
                groupId, spineMacs, leafId);

        // Generate a flow rule for each subnet pointing to the ECMP group.
List flowRules = subnetsToRouteViaSpines.stream() .map(subnet -> createRoutingRule(leafId, subnet, groupId)) .collect(Collectors.toList()); insertInOrder(ecmpGroup, flowRules); // Since we're here, we also add a route for SRv6 (Exercise 7), to // forward packets with IPv6 dst the SID of a spine switch, in this case // using a single-member group. stream(deviceService.getDevices()) .map(Device::id) .filter(this::isSpine) .forEach(spineId -> { MacAddress spineMac = getMyStationMac(spineId); Ip6Address spineSid = getDeviceSid(spineId); int spineGroupId = macToGroupId(spineMac); GroupDescription group = createNextHopGroup( spineGroupId, Collections.singleton(spineMac), leafId); FlowRule routingRule = createRoutingRule( leafId, Ip6Prefix.valueOf(spineSid, 128), spineGroupId); insertInOrder(group, Collections.singleton(routingRule)); }); } //-------------------------------------------------------------------------- // UTILITY METHODS //-------------------------------------------------------------------------- /** * Returns true if the given device has isSpine flag set to true in the * config, false otherwise. * * @param deviceId the device ID * @return true if the device is a spine, false otherwise */ private boolean isSpine(DeviceId deviceId) { return getDeviceConfig(deviceId).map(FabricDeviceConfig::isSpine) .orElseThrow(() -> new ItemNotFoundException( "Missing isSpine config for " + deviceId)); } /** * Returns true if the given device is not configured as spine. * * @param deviceId the device ID * @return true if the device is a leaf, false otherwise */ private boolean isLeaf(DeviceId deviceId) { return !isSpine(deviceId); } /** * Returns the MAC address configured in the "myStationMac" property of the * given device config. 
     *
     * @param deviceId the device ID
     * @return MyStation MAC address
     */
    private MacAddress getMyStationMac(DeviceId deviceId) {
        return getDeviceConfig(deviceId)
                .map(FabricDeviceConfig::myStationMac)
                .orElseThrow(() -> new ItemNotFoundException(
                        "Missing myStationMac config for " + deviceId));
    }

    /**
     * Returns the FabricDeviceConfig config object for the given device.
     *
     * @param deviceId the device ID
     * @return FabricDeviceConfig device config
     */
    private Optional<FabricDeviceConfig> getDeviceConfig(DeviceId deviceId) {
        FabricDeviceConfig config = networkConfigService.getConfig(
                deviceId, FabricDeviceConfig.class);
        return Optional.ofNullable(config);
    }

    /**
     * Returns the set of interface IPv6 subnets (prefixes) configured for the
     * given device.
     *
     * @param deviceId the device ID
     * @return set of IPv6 prefixes
     */
    private Set<Ip6Prefix> getInterfaceIpv6Prefixes(DeviceId deviceId) {
        return interfaceService.getInterfaces().stream()
                .filter(iface -> iface.connectPoint().deviceId().equals(deviceId))
                .map(Interface::ipAddressesList)
                .flatMap(Collection::stream)
                .map(InterfaceIpAddress::subnetAddress)
                .filter(IpPrefix::isIp6)
                .map(IpPrefix::getIp6Prefix)
                .collect(Collectors.toSet());
    }

    /**
     * Returns a 32-bit group ID from the given MAC address.
     *
     * @param mac the MAC address
     * @return an integer
     */
    private int macToGroupId(MacAddress mac) {
        return mac.hashCode() & 0x7fffffff;
    }

    /**
     * Inserts the given groups and flow rules in order, groups first, then flow
     * rules. In P4Runtime, when operating on an indirect table (i.e. with
     * action selectors), groups must be inserted before table entries.
     *
     * @param group the group
     * @param flowRules the flow rules depending on the group
     */
    private void insertInOrder(GroupDescription group,
                               Collection<FlowRule> flowRules) {
        try {
            groupService.addGroup(group);
            // Wait for groups to be inserted.
Thread.sleep(GROUP_INSERT_DELAY_MILLIS); flowRules.forEach(flowRuleService::applyFlowRules); } catch (InterruptedException e) { log.error("Interrupted!", e); Thread.currentThread().interrupt(); } } /** * Gets Srv6 SID for the given device. * * @param deviceId the device ID * @return SID for the device */ private Ip6Address getDeviceSid(DeviceId deviceId) { return getDeviceConfig(deviceId) .map(FabricDeviceConfig::mySid) .orElseThrow(() -> new ItemNotFoundException( "Missing mySid config for " + deviceId)); } /** * Sets up IPv6 routing on all devices known by ONOS and for which this ONOS * node instance is currently master. */ private synchronized void setUpAllDevices() { // Set up host routes stream(deviceService.getAvailableDevices()) .map(Device::id) .filter(mastershipService::isLocalMaster) .forEach(deviceId -> { log.info("*** IPV6 ROUTING - Starting initial set up for {}...", deviceId); setUpMyStationTable(deviceId); setUpFabricRoutes(deviceId); setUpL2NextHopRules(deviceId); hostService.getConnectedHosts(deviceId) .forEach(host -> setUpHostRules(deviceId, host)); }); } } ================================================ FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/L2BridgingComponent.java ================================================ /* * Copyright 2019-present Open Networking Foundation * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.onosproject.ngsdn.tutorial; import org.onlab.packet.MacAddress; import org.onosproject.core.ApplicationId; import org.onosproject.mastership.MastershipService; import org.onosproject.net.ConnectPoint; import org.onosproject.net.DeviceId; import org.onosproject.net.Host; import org.onosproject.net.PortNumber; import org.onosproject.net.config.NetworkConfigService; import org.onosproject.net.device.DeviceEvent; import org.onosproject.net.device.DeviceListener; import org.onosproject.net.device.DeviceService; import org.onosproject.net.flow.FlowRule; import org.onosproject.net.flow.FlowRuleService; import org.onosproject.net.flow.criteria.PiCriterion; import org.onosproject.net.group.GroupDescription; import org.onosproject.net.group.GroupService; import org.onosproject.net.host.HostEvent; import org.onosproject.net.host.HostListener; import org.onosproject.net.host.HostService; import org.onosproject.net.intf.Interface; import org.onosproject.net.intf.InterfaceService; import org.onosproject.net.pi.model.PiActionId; import org.onosproject.net.pi.model.PiActionParamId; import org.onosproject.net.pi.model.PiMatchFieldId; import org.onosproject.net.pi.runtime.PiAction; import org.onosproject.net.pi.runtime.PiActionParam; import org.osgi.service.component.annotations.Activate; import org.osgi.service.component.annotations.Component; import org.osgi.service.component.annotations.Deactivate; import org.osgi.service.component.annotations.Reference; import org.osgi.service.component.annotations.ReferenceCardinality; import org.onosproject.ngsdn.tutorial.common.FabricDeviceConfig; import org.onosproject.ngsdn.tutorial.common.Utils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import java.util.Set; import java.util.stream.Collectors; import static org.onosproject.ngsdn.tutorial.AppConstants.INITIAL_SETUP_DELAY; /** * App component that configures devices to provide L2 bridging capabilities. 
*/ @Component( immediate = true, // *** TODO EXERCISE 4 // Enable component (enabled = true) enabled = false ) public class L2BridgingComponent { private final Logger log = LoggerFactory.getLogger(getClass()); private static final int DEFAULT_BROADCAST_GROUP_ID = 255; private final DeviceListener deviceListener = new InternalDeviceListener(); private final HostListener hostListener = new InternalHostListener(); private ApplicationId appId; //-------------------------------------------------------------------------- // ONOS CORE SERVICE BINDING // // These variables are set by the Karaf runtime environment before calling // the activate() method. //-------------------------------------------------------------------------- @Reference(cardinality = ReferenceCardinality.MANDATORY) private HostService hostService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private DeviceService deviceService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private InterfaceService interfaceService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private NetworkConfigService configService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private FlowRuleService flowRuleService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private GroupService groupService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private MastershipService mastershipService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private MainComponent mainComponent; //-------------------------------------------------------------------------- // COMPONENT ACTIVATION. // // When loading/unloading the app the Karaf runtime environment will call // activate()/deactivate(). //-------------------------------------------------------------------------- @Activate protected void activate() { appId = mainComponent.getAppId(); // Register listeners to be informed about device and host events. 
deviceService.addListener(deviceListener); hostService.addListener(hostListener); // Schedule set up of existing devices. Needed when reloading the app. mainComponent.scheduleTask(this::setUpAllDevices, INITIAL_SETUP_DELAY); log.info("Started"); } @Deactivate protected void deactivate() { deviceService.removeListener(deviceListener); hostService.removeListener(hostListener); log.info("Stopped"); } //-------------------------------------------------------------------------- // METHODS TO COMPLETE. // // Complete the implementation wherever you see TODO. //-------------------------------------------------------------------------- /** * Sets up everything necessary to support L2 bridging on the given device. * * @param deviceId the device to set up */ private void setUpDevice(DeviceId deviceId) { if (isSpine(deviceId)) { // Stop here. We support bridging only on leaf/tor switches. return; } insertMulticastGroup(deviceId); insertMulticastFlowRules(deviceId); // Uncomment the following line after you have implemented the method: // insertUnmatchedBridgingFlowRule(deviceId); } /** * Inserts an ALL group in the ONOS core to replicate packets on all host * facing ports. This group will be used to broadcast all ARP/NDP requests. *

     * ALL groups in ONOS are equivalent to P4Runtime packet replication engine
     * (PRE) Multicast groups.
     *
     * @param deviceId the device where to install the group
     */
    private void insertMulticastGroup(DeviceId deviceId) {

        // Replicate packets where we know hosts are attached.
        Set<PortNumber> ports = getHostFacingPorts(deviceId);

        if (ports.isEmpty()) {
            // Stop here.
            log.warn("Device {} has 0 host facing ports", deviceId);
            return;
        }

        log.info("Adding L2 multicast group with {} ports on {}...",
                 ports.size(), deviceId);

        // Forge group object.
        final GroupDescription multicastGroup = Utils.buildMulticastGroup(
                appId, deviceId, DEFAULT_BROADCAST_GROUP_ID, ports);

        // Insert.
        groupService.addGroup(multicastGroup);
    }

    /**
     * Insert flow rules matching Ethernet destination
     * broadcast/multicast addresses (e.g. ARP requests, NDP Neighbor
     * Solicitation, etc.). Such packets should be processed by the multicast
     * group created before.
     *

     * This method will be called at component activation for each device
     * (switch) known by ONOS, and every time a new device-added event is
     * captured by the InternalDeviceListener defined below.
     *
     * @param deviceId device ID where to install the rules
     */
    private void insertMulticastFlowRules(DeviceId deviceId) {
        log.info("Adding L2 multicast rules on {}...", deviceId);

        // Modify P4Runtime entity names to match content of P4Info file (look
        // for the fully qualified name of tables, match fields, and actions).
        // ---- START SOLUTION ----
        // Match ARP request - Match exactly FF:FF:FF:FF:FF:FF
        final PiCriterion macBroadcastCriterion = PiCriterion.builder()
                .matchTernary(
                        PiMatchFieldId.of("hdr.ethernet.dst_addr"),
                        MacAddress.valueOf("FF:FF:FF:FF:FF:FF").toBytes(),
                        MacAddress.valueOf("FF:FF:FF:FF:FF:FF").toBytes())
                .build();

        // Match NDP NS - Match ternary 33:33:**:**:**:**
        final PiCriterion ipv6MulticastCriterion = PiCriterion.builder()
                .matchTernary(
                        PiMatchFieldId.of("hdr.ethernet.dst_addr"),
                        MacAddress.valueOf("33:33:00:00:00:00").toBytes(),
                        MacAddress.valueOf("FF:FF:00:00:00:00").toBytes())
                .build();

        // Action: set multicast group id
        final PiAction setMcastGroupAction = PiAction.builder()
                .withId(PiActionId.of("IngressPipeImpl.set_multicast_group"))
                .withParameter(new PiActionParam(
                        PiActionParamId.of("gid"),
                        DEFAULT_BROADCAST_GROUP_ID))
                .build();

        // Build 2 flow rules.
        final String tableId = "IngressPipeImpl.l2_ternary_table";
        // ---- END SOLUTION ----

        final FlowRule rule1 = Utils.buildFlowRule(
                deviceId, appId, tableId,
                macBroadcastCriterion, setMcastGroupAction);
        final FlowRule rule2 = Utils.buildFlowRule(
                deviceId, appId, tableId,
                ipv6MulticastCriterion, setMcastGroupAction);

        // Insert rules.
        flowRuleService.applyFlowRules(rule1, rule2);
    }

    /**
     * Insert flow rule that matches all unmatched Ethernet traffic. This
     * will implement the traditional bridging behavior that floods all
     * unmatched traffic.
     *

     * This method will be called at component activation for each device
     * (switch) known by ONOS, and every time a new device-added event is
     * captured by the InternalDeviceListener defined below.
     *
     * @param deviceId device ID where to install the rules
     */
    @SuppressWarnings("unused")
    private void insertUnmatchedBridgingFlowRule(DeviceId deviceId) {
        log.info("Adding L2 flooding rule on {}...", deviceId);

        // Modify P4Runtime entity names to match content of P4Info file (look
        // for the fully qualified name of tables, match fields, and actions).
        // ---- START SOLUTION ----
        // Match unmatched traffic - Match ternary **:**:**:**:**:**
        final PiCriterion unmatchedTrafficCriterion = PiCriterion.builder()
                .matchTernary(
                        PiMatchFieldId.of("hdr.ethernet.dst_addr"),
                        MacAddress.valueOf("00:00:00:00:00:00").toBytes(),
                        MacAddress.valueOf("00:00:00:00:00:00").toBytes())
                .build();

        // Action: set multicast group id
        final PiAction setMcastGroupAction = PiAction.builder()
                .withId(PiActionId.of("IngressPipeImpl.set_multicast_group"))
                .withParameter(new PiActionParam(
                        PiActionParamId.of("gid"),
                        DEFAULT_BROADCAST_GROUP_ID))
                .build();

        // Build flow rule.
        final String tableId = "IngressPipeImpl.l2_ternary_table";
        // ---- END SOLUTION ----

        final FlowRule rule = Utils.buildFlowRule(
                deviceId, appId, tableId,
                unmatchedTrafficCriterion, setMcastGroupAction);

        // Insert rules.
        flowRuleService.applyFlowRules(rule);
    }

    /**
     * Insert flow rules to forward packets to a given host located at the given
     * device and port.
     *

     * This method will be called at component activation for each host known by
     * ONOS, and every time a new host-added event is captured by the
     * InternalHostListener defined below.
     *
     * @param host host instance
     * @param deviceId device where the host is located
     * @param port port where the host is attached
     */
    private void learnHost(Host host, DeviceId deviceId, PortNumber port) {

        log.info("Adding L2 unicast rule on {} for host {} (port {})...",
                 deviceId, host.id(), port);

        // Modify P4Runtime entity names to match content of P4Info file (look
        // for the fully qualified name of tables, match fields, and actions).
        // ---- START SOLUTION ----
        final String tableId = "IngressPipeImpl.l2_exact_table";

        // Match exactly on the host MAC address.
        final MacAddress hostMac = host.mac();
        final PiCriterion hostMacCriterion = PiCriterion.builder()
                .matchExact(PiMatchFieldId.of("hdr.ethernet.dst_addr"),
                            hostMac.toBytes())
                .build();

        // Action: set output port
        final PiAction l2UnicastAction = PiAction.builder()
                .withId(PiActionId.of("IngressPipeImpl.set_egress_port"))
                .withParameter(new PiActionParam(
                        PiActionParamId.of("port_num"),
                        port.toLong()))
                .build();
        // ---- END SOLUTION ----

        // Forge flow rule.
        final FlowRule rule = Utils.buildFlowRule(
                deviceId, appId, tableId, hostMacCriterion, l2UnicastAction);

        // Insert.
        flowRuleService.applyFlowRules(rule);
    }

    //--------------------------------------------------------------------------
    // EVENT LISTENERS
    //
    // Events are processed only if isRelevant() returns true.
    //--------------------------------------------------------------------------

    /**
     * Listener of device events.
     */
    public class InternalDeviceListener implements DeviceListener {

        @Override
        public boolean isRelevant(DeviceEvent event) {
            switch (event.type()) {
                case DEVICE_ADDED:
                case DEVICE_AVAILABILITY_CHANGED:
                    break;
                default:
                    // Ignore other events.
                    return false;
            }
            // Process only if this controller instance is the master.
            final DeviceId deviceId = event.subject().id();
            return mastershipService.isLocalMaster(deviceId);
        }

        @Override
        public void event(DeviceEvent event) {
            final DeviceId deviceId = event.subject().id();
            if (deviceService.isAvailable(deviceId)) {
                // A P4Runtime device is considered available in ONOS when there
                // is a StreamChannel session open and the pipeline
                // configuration has been set.

                // Events are processed using a thread pool defined in the
                // MainComponent.
                mainComponent.getExecutorService().execute(() -> {
                    log.info("{} event! deviceId={}", event.type(), deviceId);
                    setUpDevice(deviceId);
                });
            }
        }
    }

    /**
     * Listener of host events.
     */
    public class InternalHostListener implements HostListener {

        @Override
        public boolean isRelevant(HostEvent event) {
            switch (event.type()) {
                case HOST_ADDED:
                    // Host added events will be generated by the
                    // HostLocationProvider by intercepting ARP/NDP packets.
                    break;
                case HOST_REMOVED:
                case HOST_UPDATED:
                case HOST_MOVED:
                default:
                    // Ignore other events.
                    // Food for thought: how to support host moved/removed?
                    return false;
            }
            // Process host event only if this controller instance is the master
            // for the device where this host is attached.
            final Host host = event.subject();
            final DeviceId deviceId = host.location().deviceId();
            return mastershipService.isLocalMaster(deviceId);
        }

        @Override
        public void event(HostEvent event) {
            final Host host = event.subject();
            // Device and port where the host is located.
            final DeviceId deviceId = host.location().deviceId();
            final PortNumber port = host.location().port();
            mainComponent.getExecutorService().execute(() -> {
                log.info("{} event! host={}, deviceId={}, port={}",
                         event.type(), host.id(), deviceId, port);
                learnHost(host, deviceId, port);
            });
        }
    }

    //--------------------------------------------------------------------------
    // UTILITY METHODS
    //--------------------------------------------------------------------------

    /**
     * Returns a set of ports for the given device that are used to connect
     * hosts to the fabric.
     *
     * @param deviceId device ID
     * @return set of host facing ports
     */
    private Set<PortNumber> getHostFacingPorts(DeviceId deviceId) {
        // Get all interfaces configured via netcfg for the given device ID and
        // return the corresponding device port number. Interface configuration
        // in the netcfg.json looks like this:
        // "device:leaf1/3": {
        //   "interfaces": [
        //     {
        //       "name": "leaf1-3",
        //       "ips": ["2001:1:1::ff/64"]
        //     }
        //   ]
        // }
        return interfaceService.getInterfaces().stream()
                .map(Interface::connectPoint)
                .filter(cp -> cp.deviceId().equals(deviceId))
                .map(ConnectPoint::port)
                .collect(Collectors.toSet());
    }

    /**
     * Returns true if the given device is defined as a spine in the
     * netcfg.json.
     *
     * @param deviceId device ID
     * @return true if spine, false otherwise
     */
    private boolean isSpine(DeviceId deviceId) {
        // Example netcfg defining a device as spine:
        // "devices": {
        //   "device:spine1": {
        //     ...
        //     "fabricDeviceConfig": {
        //       "myStationMac": "...",
        //       "mySid": "...",
        //       "isSpine": true
        //     }
        //   },
        //   ...
        final FabricDeviceConfig cfg = configService.getConfig(
                deviceId, FabricDeviceConfig.class);
        return cfg != null && cfg.isSpine();
    }

    /**
     * Sets up L2 bridging on all devices known by ONOS and for which this ONOS
     * node instance is currently master.
     *

* This method is called at component activation. */ private void setUpAllDevices() { deviceService.getAvailableDevices().forEach(device -> { if (mastershipService.isLocalMaster(device.id())) { log.info("*** L2 BRIDGING - Starting initial set up for {}...", device.id()); setUpDevice(device.id()); // For all hosts connected to this device... hostService.getConnectedHosts(device.id()).forEach( host -> learnHost(host, host.location().deviceId(), host.location().port())); } }); } } ================================================ FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/MainComponent.java ================================================ package org.onosproject.ngsdn.tutorial; import com.google.common.collect.Lists; import org.onlab.util.SharedScheduledExecutors; import org.onosproject.cfg.ComponentConfigService; import org.onosproject.core.ApplicationId; import org.onosproject.core.CoreService; import org.onosproject.net.Device; import org.onosproject.net.DeviceId; import org.onosproject.net.config.ConfigFactory; import org.onosproject.net.config.NetworkConfigRegistry; import org.onosproject.net.config.basics.SubjectFactories; import org.onosproject.net.device.DeviceService; import org.onosproject.net.flow.FlowRule; import org.onosproject.net.flow.FlowRuleService; import org.onosproject.net.group.Group; import org.onosproject.net.group.GroupService; import org.osgi.service.component.annotations.Activate; import org.osgi.service.component.annotations.Component; import org.osgi.service.component.annotations.Deactivate; import org.osgi.service.component.annotations.Reference; import org.osgi.service.component.annotations.ReferenceCardinality; import org.onosproject.ngsdn.tutorial.common.FabricDeviceConfig; import org.onosproject.ngsdn.tutorial.pipeconf.PipeconfLoader; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import java.util.Collection; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import 
java.util.concurrent.TimeUnit; import static org.onosproject.ngsdn.tutorial.AppConstants.APP_NAME; import static org.onosproject.ngsdn.tutorial.AppConstants.CLEAN_UP_DELAY; import static org.onosproject.ngsdn.tutorial.AppConstants.DEFAULT_CLEAN_UP_RETRY_TIMES; import static org.onosproject.ngsdn.tutorial.common.Utils.sleep; /** * A component which among other things registers the fabricDeviceConfig to the * netcfg subsystem. */ @Component(immediate = true, service = MainComponent.class) public class MainComponent { private static final Logger log = LoggerFactory.getLogger(MainComponent.class.getName()); @Reference(cardinality = ReferenceCardinality.MANDATORY) private CoreService coreService; @Reference(cardinality = ReferenceCardinality.MANDATORY) //Force activation of this component after the pipeconf has been registered. @SuppressWarnings("unused") protected PipeconfLoader pipeconfLoader; @Reference(cardinality = ReferenceCardinality.MANDATORY) protected NetworkConfigRegistry configRegistry; @Reference(cardinality = ReferenceCardinality.MANDATORY) private GroupService groupService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private DeviceService deviceService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private FlowRuleService flowRuleService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private ComponentConfigService compCfgService; private final ConfigFactory fabricConfigFactory = new ConfigFactory( SubjectFactories.DEVICE_SUBJECT_FACTORY, FabricDeviceConfig.class, FabricDeviceConfig.CONFIG_KEY) { @Override public FabricDeviceConfig createConfig() { return new FabricDeviceConfig(); } }; private ApplicationId appId; // For the sake of simplicity and to facilitate reading logs, use a // single-thread executor to serialize all configuration tasks. 
private final ExecutorService executorService = Executors.newSingleThreadExecutor(); @Activate protected void activate() { appId = coreService.registerApplication(APP_NAME); // Wait to remove flow and groups from previous executions. waitPreviousCleanup(); compCfgService.preSetProperty("org.onosproject.net.flow.impl.FlowRuleManager", "fallbackFlowPollFrequency", "4", false); compCfgService.preSetProperty("org.onosproject.net.group.impl.GroupManager", "fallbackGroupPollFrequency", "3", false); compCfgService.preSetProperty("org.onosproject.provider.host.impl.HostLocationProvider", "requestIpv6ND", "true", false); compCfgService.preSetProperty("org.onosproject.provider.lldp.impl.LldpLinkProvider", "useBddp", "false", false); configRegistry.registerConfigFactory(fabricConfigFactory); log.info("Started"); } @Deactivate protected void deactivate() { configRegistry.unregisterConfigFactory(fabricConfigFactory); cleanUp(); log.info("Stopped"); } /** * Returns the application ID. * * @return application ID */ ApplicationId getAppId() { return appId; } /** * Returns the executor service managed by this component. * * @return executor service */ public ExecutorService getExecutorService() { return executorService; } /** * Schedules a task for the future using the executor service managed by * this component. * * @param task task runnable * @param delaySeconds delay in seconds */ public void scheduleTask(Runnable task, int delaySeconds) { SharedScheduledExecutors.newTimeout( () -> executorService.execute(task), delaySeconds, TimeUnit.SECONDS); } /** * Triggers clean up of flows and groups from this app, returns false if no * flows or groups were found, true otherwise. 
     *
     * @return false if no flows or groups were found, true otherwise
     */
    private boolean cleanUp() {
        Collection<FlowRule> flows = Lists.newArrayList(
                flowRuleService.getFlowEntriesById(appId).iterator());

        Collection<Group> groups = Lists.newArrayList();
        for (Device device : deviceService.getAvailableDevices()) {
            groupService.getGroups(device.id(), appId).forEach(groups::add);
        }

        if (flows.isEmpty() && groups.isEmpty()) {
            return false;
        }

        flows.forEach(flowRuleService::removeFlowRules);
        if (!groups.isEmpty()) {
            // Wait for flows to be removed in case those depend on groups.
            sleep(1000);
            groups.forEach(g -> groupService.removeGroup(
                    g.deviceId(), g.appCookie(), g.appId()));
        }

        return true;
    }

    private void waitPreviousCleanup() {
        int retry = DEFAULT_CLEAN_UP_RETRY_TIMES;
        while (retry != 0) {
            if (!cleanUp()) {
                return;
            }
            log.info("Waiting to remove flows and groups from " +
                             "previous execution of {}...", appId.name());
            sleep(CLEAN_UP_DELAY);
            --retry;
        }
    }
}

================================================
FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/NdpReplyComponent.java
================================================
/*
 * Copyright 2019-present Open Networking Foundation
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
*/ package org.onosproject.ngsdn.tutorial; import org.onlab.packet.Ip6Address; import org.onlab.packet.IpAddress; import org.onlab.packet.MacAddress; import org.onlab.util.ItemNotFoundException; import org.onosproject.core.ApplicationId; import org.onosproject.mastership.MastershipService; import org.onosproject.net.DeviceId; import org.onosproject.net.config.NetworkConfigService; import org.onosproject.net.device.DeviceEvent; import org.onosproject.net.device.DeviceListener; import org.onosproject.net.device.DeviceService; import org.onosproject.net.flow.FlowRule; import org.onosproject.net.flow.FlowRuleOperations; import org.onosproject.net.flow.FlowRuleService; import org.onosproject.net.flow.criteria.PiCriterion; import org.onosproject.net.host.InterfaceIpAddress; import org.onosproject.net.intf.Interface; import org.onosproject.net.intf.InterfaceService; import org.onosproject.net.pi.model.PiActionId; import org.onosproject.net.pi.model.PiActionParamId; import org.onosproject.net.pi.model.PiMatchFieldId; import org.onosproject.net.pi.runtime.PiAction; import org.onosproject.net.pi.runtime.PiActionParam; import org.onosproject.ngsdn.tutorial.common.FabricDeviceConfig; import org.onosproject.ngsdn.tutorial.common.Utils; import org.osgi.service.component.annotations.Activate; import org.osgi.service.component.annotations.Component; import org.osgi.service.component.annotations.Deactivate; import org.osgi.service.component.annotations.Reference; import org.osgi.service.component.annotations.ReferenceCardinality; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import java.util.Collection; import java.util.stream.Collectors; import static org.onosproject.ngsdn.tutorial.AppConstants.INITIAL_SETUP_DELAY; /** * App component that configures devices to generate NDP Neighbor Advertisement * packets for all interface IPv6 addresses configured in the netcfg. 
*/ @Component( immediate = true, // *** TODO EXERCISE 5 // Enable component (enabled = true) enabled = false ) public class NdpReplyComponent { private static final Logger log = LoggerFactory.getLogger(NdpReplyComponent.class.getName()); //-------------------------------------------------------------------------- // ONOS CORE SERVICE BINDING // // These variables are set by the Karaf runtime environment before calling // the activate() method. //-------------------------------------------------------------------------- @Reference(cardinality = ReferenceCardinality.MANDATORY) protected NetworkConfigService configService; @Reference(cardinality = ReferenceCardinality.MANDATORY) protected FlowRuleService flowRuleService; @Reference(cardinality = ReferenceCardinality.MANDATORY) protected InterfaceService interfaceService; @Reference(cardinality = ReferenceCardinality.MANDATORY) protected MastershipService mastershipService; @Reference(cardinality = ReferenceCardinality.MANDATORY) protected DeviceService deviceService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private MainComponent mainComponent; private DeviceListener deviceListener = new InternalDeviceListener(); private ApplicationId appId; //-------------------------------------------------------------------------- // COMPONENT ACTIVATION. // // When loading/unloading the app the Karaf runtime environment will call // activate()/deactivate(). //-------------------------------------------------------------------------- @Activate public void activate() { appId = mainComponent.getAppId(); // Register listeners to be informed about device events. deviceService.addListener(deviceListener); // Schedule set up of existing devices. Needed when reloading the app. 
        mainComponent.scheduleTask(this::setUpAllDevices, INITIAL_SETUP_DELAY);

        log.info("Started");
    }

    @Deactivate
    public void deactivate() {
        deviceService.removeListener(deviceListener);
        log.info("Stopped");
    }

    //--------------------------------------------------------------------------
    // METHODS TO COMPLETE.
    //
    // Complete the implementation wherever you see TODO.
    //--------------------------------------------------------------------------

    /**
     * Set up all devices for which this ONOS instance is currently master.
     */
    private void setUpAllDevices() {
        deviceService.getAvailableDevices().forEach(device -> {
            if (mastershipService.isLocalMaster(device.id())) {
                log.info("*** NDP REPLY - Starting initial set up for {}...",
                         device.id());
                setUpDevice(device.id());
            }
        });
    }

    /**
     * Performs setup of the given device by creating a flow rule to generate
     * NDP NA packets for IPv6 addresses associated to the device interfaces.
     *
     * @param deviceId device ID
     */
    private void setUpDevice(DeviceId deviceId) {

        // Get this device config from netcfg.json.
        final FabricDeviceConfig config = configService.getConfig(
                deviceId, FabricDeviceConfig.class);
        if (config == null) {
            // Config not available yet
            throw new ItemNotFoundException("Missing fabricDeviceConfig for " + deviceId);
        }

        // Get this device myStation mac.
        final MacAddress deviceMac = config.myStationMac();

        // Get all interfaces currently configured for the device
        final Collection<Interface> interfaces = interfaceService.getInterfaces()
                .stream()
                .filter(iface -> iface.connectPoint().deviceId().equals(deviceId))
                .collect(Collectors.toSet());

        if (interfaces.isEmpty()) {
            log.info("{} does not have any IPv6 interface configured",
                     deviceId);
            return;
        }

        // Generate and install flow rules.
        log.info("Adding rules to {} to generate NDP NA for {} IPv6 interfaces...",
                 deviceId, interfaces.size());
        final Collection<FlowRule> flowRules = interfaces.stream()
                .map(this::getIp6Addresses)
                .flatMap(Collection::stream)
                .map(ipv6addr -> buildNdpReplyFlowRule(deviceId, ipv6addr, deviceMac))
                .collect(Collectors.toSet());
        installRules(flowRules);
    }

    /**
     * Build a flow rule for the NDP reply table on the given device, for the
     * given target IPv6 address and MAC address.
     *
     * @param deviceId device ID where to install the flow rules
     * @param targetIpv6Address target IPv6 address
     * @param targetMac target MAC address
     * @return flow rule object
     */
    private FlowRule buildNdpReplyFlowRule(DeviceId deviceId,
                                           Ip6Address targetIpv6Address,
                                           MacAddress targetMac) {
        // *** TODO EXERCISE 5
        // Modify P4Runtime entity names to match content of P4Info file (look
        // for the fully qualified name of tables, match fields, and actions).
        // ---- START SOLUTION ----
        // Build match.
        final PiCriterion match = PiCriterion.builder()
                .matchExact(PiMatchFieldId.of("hdr.ndp.target_ipv6_addr"),
                            targetIpv6Address.toOctets())
                .build();

        // Build action.
        final PiActionParam targetMacParam = new PiActionParam(
                PiActionParamId.of("target_mac"), targetMac.toBytes());
        final PiAction action = PiAction.builder()
                .withId(PiActionId.of("MODIFY ME"))
                .withParameter(targetMacParam)
                .build();

        // Table ID.
        final String tableId = "MODIFY ME";
        // ---- END SOLUTION ----

        // Build flow rule.
        final FlowRule rule = Utils.buildFlowRule(
                deviceId, appId, tableId, match, action);

        return rule;
    }

    //--------------------------------------------------------------------------
    // EVENT LISTENERS
    //
    // Events are processed only if isRelevant() returns true.
    //--------------------------------------------------------------------------

    /**
     * Listener of device events.
*/ public class InternalDeviceListener implements DeviceListener { @Override public boolean isRelevant(DeviceEvent event) { switch (event.type()) { case DEVICE_ADDED: case DEVICE_AVAILABILITY_CHANGED: break; default: // Ignore other events. return false; } // Process only if this controller instance is the master. final DeviceId deviceId = event.subject().id(); return mastershipService.isLocalMaster(deviceId); } @Override public void event(DeviceEvent event) { final DeviceId deviceId = event.subject().id(); if (deviceService.isAvailable(deviceId)) { // A P4Runtime device is considered available in ONOS when there // is a StreamChannel session open and the pipeline // configuration has been set. // Events are processed using a thread pool defined in the // MainComponent. mainComponent.getExecutorService().execute(() -> { log.info("{} event! deviceId={}", event.type(), deviceId); setUpDevice(deviceId); }); } } } //-------------------------------------------------------------------------- // UTILITY METHODS //-------------------------------------------------------------------------- /** * Returns all IPv6 addresses associated with the given interface. * * @param iface interface instance * @return collection of IPv6 addresses */ private Collection getIp6Addresses(Interface iface) { return iface.ipAddressesList() .stream() .map(InterfaceIpAddress::ipAddress) .filter(IpAddress::isIp6) .map(IpAddress::getIp6Address) .collect(Collectors.toSet()); } /** * Install the given flow rules in batch using the flow rule service. 
 *
 * @param flowRules flow rules to install
 */
private void installRules(Collection<FlowRule> flowRules) {
    FlowRuleOperations.Builder ops = FlowRuleOperations.builder();
    flowRules.forEach(ops::add);
    flowRuleService.apply(ops.build());
}
}

================================================ FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/Srv6Component.java ================================================

/*
 * Copyright 2019-present Open Networking Foundation
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.onosproject.ngsdn.tutorial;

import com.google.common.collect.Lists;
import org.onlab.packet.Ip6Address;
import org.onosproject.core.ApplicationId;
import org.onosproject.mastership.MastershipService;
import org.onosproject.net.Device;
import org.onosproject.net.DeviceId;
import org.onosproject.net.config.NetworkConfigService;
import org.onosproject.net.device.DeviceEvent;
import org.onosproject.net.device.DeviceListener;
import org.onosproject.net.device.DeviceService;
import org.onosproject.net.flow.FlowRule;
import org.onosproject.net.flow.FlowRuleOperations;
import org.onosproject.net.flow.FlowRuleService;
import org.onosproject.net.flow.criteria.PiCriterion;
import org.onosproject.net.pi.model.PiActionId;
import org.onosproject.net.pi.model.PiActionParamId;
import org.onosproject.net.pi.model.PiMatchFieldId;
import org.onosproject.net.pi.model.PiTableId;
import org.onosproject.net.pi.runtime.PiAction;
import org.onosproject.net.pi.runtime.PiActionParam;
import
org.onosproject.net.pi.runtime.PiTableAction; import org.osgi.service.component.annotations.Activate; import org.osgi.service.component.annotations.Component; import org.osgi.service.component.annotations.Deactivate; import org.osgi.service.component.annotations.Reference; import org.osgi.service.component.annotations.ReferenceCardinality; import org.onosproject.ngsdn.tutorial.common.FabricDeviceConfig; import org.onosproject.ngsdn.tutorial.common.Utils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import java.util.List; import java.util.Optional; import static com.google.common.collect.Streams.stream; import static org.onosproject.ngsdn.tutorial.AppConstants.INITIAL_SETUP_DELAY; /** * Application which handles SRv6 segment routing. */ @Component( immediate = true, // *** TODO EXERCISE 6 // set to true when ready enabled = false, service = Srv6Component.class ) public class Srv6Component { private static final Logger log = LoggerFactory.getLogger(Srv6Component.class); //-------------------------------------------------------------------------- // ONOS CORE SERVICE BINDING // // These variables are set by the Karaf runtime environment before calling // the activate() method. //-------------------------------------------------------------------------- @Reference(cardinality = ReferenceCardinality.MANDATORY) private FlowRuleService flowRuleService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private MastershipService mastershipService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private DeviceService deviceService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private NetworkConfigService networkConfigService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private MainComponent mainComponent; private final DeviceListener deviceListener = new Srv6Component.InternalDeviceListener(); private ApplicationId appId; //-------------------------------------------------------------------------- // COMPONENT ACTIVATION. 
// // When loading/unloading the app the Karaf runtime environment will call // activate()/deactivate(). //-------------------------------------------------------------------------- @Activate protected void activate() { appId = mainComponent.getAppId(); // Register listeners to be informed about device and host events. deviceService.addListener(deviceListener); // Schedule set up for all devices. mainComponent.scheduleTask(this::setUpAllDevices, INITIAL_SETUP_DELAY); log.info("Started"); } @Deactivate protected void deactivate() { deviceService.removeListener(deviceListener); log.info("Stopped"); } //-------------------------------------------------------------------------- // METHODS TO COMPLETE. // // Complete the implementation wherever you see TODO. //-------------------------------------------------------------------------- /** * Populate the My SID table from the network configuration for the * specified device. * * @param deviceId the device Id */ private void setUpMySidTable(DeviceId deviceId) { Ip6Address mySid = getMySid(deviceId); log.info("Adding mySid rule on {} (sid {})...", deviceId, mySid); // *** TODO EXERCISE 6 // Fill in the table ID for the SRv6 my segment identifier table // ---- START SOLUTION ---- String tableId = "MODIFY ME"; // ---- END SOLUTION ---- // *** TODO EXERCISE 6 // Modify the field and action id to match your P4Info // ---- START SOLUTION ---- PiCriterion match = PiCriterion.builder() .matchLpm( PiMatchFieldId.of("MODIFY ME"), mySid.toOctets(), 128) .build(); PiTableAction action = PiAction.builder() .withId(PiActionId.of("MODIFY ME")) .build(); // ---- END SOLUTION ---- FlowRule myStationRule = Utils.buildFlowRule( deviceId, appId, tableId, match, action); flowRuleService.applyFlowRules(myStationRule); } /** * Insert a SRv6 transit insert policy that will inject an SRv6 header for * packets destined to destIp. 
 *
 * @param deviceId     device ID
 * @param destIp       target IP address for the SRv6 policy
 * @param prefixLength prefix length for the target IP
 * @param segmentList  list of SRv6 SIDs that make up the path
 */
public void insertSrv6InsertRule(DeviceId deviceId, Ip6Address destIp,
                                 int prefixLength,
                                 List<Ip6Address> segmentList) {
    if (segmentList.size() < 2 || segmentList.size() > 3) {
        throw new RuntimeException("List of " + segmentList.size()
                + " segments is not supported");
    }

    // *** TODO EXERCISE 6
    // Fill in the table ID for the SRv6 transit table.
    // ---- START SOLUTION ----
    String tableId = "MODIFY ME";
    // ---- END SOLUTION ----

    // *** TODO EXERCISE 6
    // Modify match field, action id, and action parameters to match your P4Info.
    // ---- START SOLUTION ----
    PiCriterion match = PiCriterion.builder()
            .matchLpm(PiMatchFieldId.of("MODIFY ME"),
                    destIp.toOctets(), prefixLength)
            .build();

    List<PiActionParam> actionParams = Lists.newArrayList();
    for (int i = 0; i < segmentList.size(); i++) {
        PiActionParamId paramId = PiActionParamId.of("s" + (i + 1));
        PiActionParam param = new PiActionParam(
                paramId, segmentList.get(i).toOctets());
        actionParams.add(param);
    }

    PiAction action = PiAction.builder()
            .withId(PiActionId.of("IngressPipeImpl.srv6_t_insert_"
                    + segmentList.size()))
            .withParameters(actionParams)
            .build();
    // ---- END SOLUTION ----

    final FlowRule rule = Utils.buildFlowRule(
            deviceId, appId, tableId, match, action);
    flowRuleService.applyFlowRules(rule);
}

/**
 * Remove all SRv6 transit insert policies for the specified device.
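`insertSrv6InsertRule()` selects the P4 action by segment count: the pipeline offers `srv6_t_insert_2` and `srv6_t_insert_3`, whose parameters are named `s1`..`sN`. A standalone sketch of that naming scheme (plain Java, no ONOS types; the helper names are ours):

```java
import java.util.ArrayList;
import java.util.List;

public class Srv6ActionNaming {

    // One action per supported path length: srv6_t_insert_2 takes
    // (s1, s2), srv6_t_insert_3 takes (s1, s2, s3).
    static String actionName(int segmentCount) {
        if (segmentCount < 2 || segmentCount > 3) {
            throw new IllegalArgumentException(
                    "List of " + segmentCount + " segments is not supported");
        }
        return "IngressPipeImpl.srv6_t_insert_" + segmentCount;
    }

    // Parameter names follow the s1..sN convention used by the component.
    static List<String> paramNames(int segmentCount) {
        List<String> names = new ArrayList<>();
        for (int i = 0; i < segmentCount; i++) {
            names.add("s" + (i + 1));
        }
        return names;
    }

    public static void main(String[] args) {
        System.out.println(actionName(3));  // IngressPipeImpl.srv6_t_insert_3
        System.out.println(paramNames(3));  // [s1, s2, s3]
    }
}
```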
* * @param deviceId device ID */ public void clearSrv6InsertRules(DeviceId deviceId) { // *** TODO EXERCISE 6 // Fill in the table ID for the SRv6 transit table // ---- START SOLUTION ---- String tableId = "MODIFY ME"; // ---- END SOLUTION ---- FlowRuleOperations.Builder ops = FlowRuleOperations.builder(); stream(flowRuleService.getFlowEntries(deviceId)) .filter(fe -> fe.appId() == appId.id()) .filter(fe -> fe.table().equals(PiTableId.of(tableId))) .forEach(ops::remove); flowRuleService.apply(ops.build()); } // ---------- END METHODS TO COMPLETE ---------------- //-------------------------------------------------------------------------- // EVENT LISTENERS // // Events are processed only if isRelevant() returns true. //-------------------------------------------------------------------------- /** * Listener of device events. */ public class InternalDeviceListener implements DeviceListener { @Override public boolean isRelevant(DeviceEvent event) { switch (event.type()) { case DEVICE_ADDED: case DEVICE_AVAILABILITY_CHANGED: break; default: // Ignore other events. return false; } // Process only if this controller instance is the master. final DeviceId deviceId = event.subject().id(); return mastershipService.isLocalMaster(deviceId); } @Override public void event(DeviceEvent event) { final DeviceId deviceId = event.subject().id(); if (deviceService.isAvailable(deviceId)) { // A P4Runtime device is considered available in ONOS when there // is a StreamChannel session open and the pipeline // configuration has been set. mainComponent.getExecutorService().execute(() -> { log.info("{} event! deviceId={}", event.type(), deviceId); setUpMySidTable(event.subject().id()); }); } } } //-------------------------------------------------------------------------- // UTILITY METHODS //-------------------------------------------------------------------------- /** * Sets up SRv6 My SID table on all devices known by ONOS and for which this * ONOS node instance is currently master. 
 */
private synchronized void setUpAllDevices() {
    // Set up the My SID table on every available device for which this
    // ONOS instance is currently master.
    stream(deviceService.getAvailableDevices())
            .map(Device::id)
            .filter(mastershipService::isLocalMaster)
            .forEach(deviceId -> {
                log.info("*** SRV6 - Starting initial set up for {}...", deviceId);
                this.setUpMySidTable(deviceId);
            });
}

/**
 * Returns the SRv6 config for the given device.
 *
 * @param deviceId the device ID
 * @return SRv6 device config
 */
private Optional<FabricDeviceConfig> getDeviceConfig(DeviceId deviceId) {
    FabricDeviceConfig config = networkConfigService.getConfig(
            deviceId, FabricDeviceConfig.class);
    return Optional.ofNullable(config);
}

/**
 * Returns the SRv6 SID for the given device.
 *
 * @param deviceId the device ID
 * @return SID for the device
 */
private Ip6Address getMySid(DeviceId deviceId) {
    return getDeviceConfig(deviceId)
            .map(FabricDeviceConfig::mySid)
            .orElseThrow(() -> new RuntimeException(
                    "Missing mySid config for " + deviceId));
}
}

================================================ FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/cli/Srv6ClearCommand.java ================================================

/*
 * Copyright 2019-present Open Networking Foundation
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
*/ package org.onosproject.ngsdn.tutorial.cli; import org.apache.karaf.shell.api.action.Argument; import org.apache.karaf.shell.api.action.Command; import org.apache.karaf.shell.api.action.Completion; import org.apache.karaf.shell.api.action.lifecycle.Service; import org.onosproject.cli.AbstractShellCommand; import org.onosproject.cli.net.DeviceIdCompleter; import org.onosproject.net.Device; import org.onosproject.net.DeviceId; import org.onosproject.net.device.DeviceService; import org.onosproject.ngsdn.tutorial.Srv6Component; /** * SRv6 Transit Clear Command */ @Service @Command(scope = "onos", name = "srv6-clear", description = "Clears all t_insert rules from the SRv6 Transit table") public class Srv6ClearCommand extends AbstractShellCommand { @Argument(index = 0, name = "uri", description = "Device ID", required = true, multiValued = false) @Completion(DeviceIdCompleter.class) String uri = null; @Override protected void doExecute() { DeviceService deviceService = get(DeviceService.class); Srv6Component app = get(Srv6Component.class); Device device = deviceService.getDevice(DeviceId.deviceId(uri)); if (device == null) { print("Device \"%s\" is not found", uri); return; } app.clearSrv6InsertRules(device.id()); } } ================================================ FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/cli/Srv6InsertCommand.java ================================================ /* * Copyright 2019-present Open Networking Foundation * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
* See the License for the specific language governing permissions and * limitations under the License. */ package org.onosproject.ngsdn.tutorial.cli; import org.apache.karaf.shell.api.action.Argument; import org.apache.karaf.shell.api.action.Command; import org.apache.karaf.shell.api.action.Completion; import org.apache.karaf.shell.api.action.lifecycle.Service; import org.onlab.packet.Ip6Address; import org.onlab.packet.IpAddress; import org.onosproject.cli.AbstractShellCommand; import org.onosproject.cli.net.DeviceIdCompleter; import org.onosproject.net.Device; import org.onosproject.net.DeviceId; import org.onosproject.net.device.DeviceService; import org.onosproject.ngsdn.tutorial.Srv6Component; import java.util.List; import java.util.stream.Collectors; /** * SRv6 Transit Insert Command */ @Service @Command(scope = "onos", name = "srv6-insert", description = "Insert a t_insert rule into the SRv6 Transit table") public class Srv6InsertCommand extends AbstractShellCommand { @Argument(index = 0, name = "uri", description = "Device ID", required = true, multiValued = false) @Completion(DeviceIdCompleter.class) String uri = null; @Argument(index = 1, name = "segments", description = "SRv6 Segments (space separated list); last segment is target IP address", required = false, multiValued = true) @Completion(Srv6SidCompleter.class) List segments = null; @Override protected void doExecute() { DeviceService deviceService = get(DeviceService.class); Srv6Component app = get(Srv6Component.class); Device device = deviceService.getDevice(DeviceId.deviceId(uri)); if (device == null) { print("Device \"%s\" is not found", uri); return; } if (segments.size() == 0) { print("No segments listed"); return; } List sids = segments.stream() .map(Ip6Address::valueOf) .collect(Collectors.toList()); Ip6Address destIp = sids.get(sids.size() - 1); print("Installing path on device %s: %s", uri, sids.stream() .map(IpAddress::toString) .collect(Collectors.joining(", "))); 
app.insertSrv6InsertRule(device.id(), destIp, 128, sids); } } ================================================ FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/cli/Srv6SidCompleter.java ================================================ /* * Copyright 2019-present Open Networking Foundation * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.onosproject.ngsdn.tutorial.cli; import org.apache.karaf.shell.api.action.lifecycle.Service; import org.apache.karaf.shell.api.console.CommandLine; import org.apache.karaf.shell.api.console.Completer; import org.apache.karaf.shell.api.console.Session; import org.apache.karaf.shell.support.completers.StringsCompleter; import org.onosproject.cli.AbstractShellCommand; import org.onosproject.net.config.NetworkConfigService; import org.onosproject.net.device.DeviceService; import org.onosproject.ngsdn.tutorial.common.FabricDeviceConfig; import java.util.List; import java.util.Objects; import java.util.SortedSet; import static com.google.common.collect.Streams.stream; /** * Completer for SIDs based on device config. 
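The `srv6-insert` command above treats the last SID on the command line as the destination IP for the rule's /128 LPM match, while the full list (including that final element) becomes the inserted segment list. A standalone sketch of that split (plain strings instead of `Ip6Address`; the SID values are illustrative):

```java
import java.util.List;

public class SegmentSplit {

    // srv6-insert: the last SID given is the destination IP for the
    // LPM match; the whole list is passed as the segment list.
    static String destIp(List<String> sids) {
        if (sids.isEmpty()) {
            throw new IllegalArgumentException("No segments listed");
        }
        return sids.get(sids.size() - 1);
    }

    public static void main(String[] args) {
        List<String> sids = List.of("3:102:2::", "3:103:2::", "2001:1:4::1");
        System.out.println(destIp(sids)); // 2001:1:4::1
    }
}
```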
*/ @Service public class Srv6SidCompleter implements Completer { @Override public int complete(Session session, CommandLine commandLine, List candidates) { DeviceService deviceService = AbstractShellCommand.get(DeviceService.class); NetworkConfigService netCfgService = AbstractShellCommand.get(NetworkConfigService.class); // Delegate string completer StringsCompleter delegate = new StringsCompleter(); SortedSet strings = delegate.getStrings(); stream(deviceService.getDevices()) .map(d -> netCfgService.getConfig(d.id(), FabricDeviceConfig.class)) .filter(Objects::nonNull) .map(FabricDeviceConfig::mySid) .filter(Objects::nonNull) .forEach(sid -> strings.add(sid.toString())); // Now let the completer do the work for figuring out what to offer. return delegate.complete(session, commandLine, candidates); } } ================================================ FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/cli/package-info.java ================================================ package org.onosproject.ngsdn.tutorial.cli; ================================================ FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/common/FabricDeviceConfig.java ================================================ /* * Copyright 2019-present Open Networking Foundation * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.onosproject.ngsdn.tutorial.common; import org.onlab.packet.Ip6Address; import org.onlab.packet.MacAddress; import org.onosproject.net.DeviceId; import org.onosproject.net.config.Config; /** * Device configuration object for the IPv6 fabric tutorial application. */ public class FabricDeviceConfig extends Config { public static final String CONFIG_KEY = "fabricDeviceConfig"; private static final String MY_STATION_MAC = "myStationMac"; private static final String MY_SID = "mySid"; private static final String IS_SPINE = "isSpine"; @Override public boolean isValid() { return hasOnlyFields(MY_STATION_MAC, MY_SID, IS_SPINE) && myStationMac() != null && mySid() != null; } /** * Gets the MAC address of the switch. * * @return MAC address of the switch. Or null if not configured. */ public MacAddress myStationMac() { String mac = get(MY_STATION_MAC, null); return mac != null ? MacAddress.valueOf(mac) : null; } /** * Gets the SRv6 segment ID (SID) of the switch. * * @return IP address of the router. Or null if not configured. */ public Ip6Address mySid() { String ip = get(MY_SID, null); return ip != null ? Ip6Address.valueOf(ip) : null; } /** * Checks if the switch is a spine switch. * * @return true if the switch is a spine switch. false if the switch is not * a spine switch, or if the value is not configured. */ public boolean isSpine() { String isSpine = get(IS_SPINE, null); return isSpine != null && Boolean.valueOf(isSpine); } } ================================================ FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/common/Utils.java ================================================ /* * Copyright 2019-present Open Networking Foundation * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.onosproject.ngsdn.tutorial.common; import org.onosproject.core.ApplicationId; import org.onosproject.net.DeviceId; import org.onosproject.net.PortNumber; import org.onosproject.net.flow.DefaultFlowRule; import org.onosproject.net.flow.DefaultTrafficSelector; import org.onosproject.net.flow.DefaultTrafficTreatment; import org.onosproject.net.flow.FlowRule; import org.onosproject.net.flow.criteria.PiCriterion; import org.onosproject.net.group.DefaultGroupBucket; import org.onosproject.net.group.DefaultGroupDescription; import org.onosproject.net.group.DefaultGroupKey; import org.onosproject.net.group.GroupBucket; import org.onosproject.net.group.GroupBuckets; import org.onosproject.net.group.GroupDescription; import org.onosproject.net.group.GroupKey; import org.onosproject.net.pi.model.PiActionProfileId; import org.onosproject.net.pi.model.PiTableId; import org.onosproject.net.pi.runtime.PiAction; import org.onosproject.net.pi.runtime.PiGroupKey; import org.onosproject.net.pi.runtime.PiTableAction; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import java.nio.ByteBuffer; import java.util.Collection; import java.util.List; import java.util.stream.Collectors; import static com.google.common.base.Preconditions.checkArgument; import static com.google.common.base.Preconditions.checkNotNull; import static org.onosproject.net.group.DefaultGroupBucket.createAllGroupBucket; import static org.onosproject.net.group.DefaultGroupBucket.createCloneGroupBucket; import static org.onosproject.ngsdn.tutorial.AppConstants.DEFAULT_FLOW_RULE_PRIORITY; public 
final class Utils { private static final Logger log = LoggerFactory.getLogger(Utils.class); public static GroupDescription buildMulticastGroup( ApplicationId appId, DeviceId deviceId, int groupId, Collection ports) { return buildReplicationGroup(appId, deviceId, groupId, ports, false); } public static GroupDescription buildCloneGroup( ApplicationId appId, DeviceId deviceId, int groupId, Collection ports) { return buildReplicationGroup(appId, deviceId, groupId, ports, true); } private static GroupDescription buildReplicationGroup( ApplicationId appId, DeviceId deviceId, int groupId, Collection ports, boolean isClone) { checkNotNull(deviceId); checkNotNull(appId); checkArgument(!ports.isEmpty()); final GroupKey groupKey = new DefaultGroupKey( ByteBuffer.allocate(4).putInt(groupId).array()); final List bucketList = ports.stream() .map(p -> DefaultTrafficTreatment.builder() .setOutput(p).build()) .map(t -> isClone ? createCloneGroupBucket(t) : createAllGroupBucket(t)) .collect(Collectors.toList()); return new DefaultGroupDescription( deviceId, isClone ? 
GroupDescription.Type.CLONE : GroupDescription.Type.ALL, new GroupBuckets(bucketList), groupKey, groupId, appId); } public static FlowRule buildFlowRule(DeviceId switchId, ApplicationId appId, String tableId, PiCriterion piCriterion, PiTableAction piAction) { return DefaultFlowRule.builder() .forDevice(switchId) .forTable(PiTableId.of(tableId)) .fromApp(appId) .withPriority(DEFAULT_FLOW_RULE_PRIORITY) .makePermanent() .withSelector(DefaultTrafficSelector.builder() .matchPi(piCriterion).build()) .withTreatment(DefaultTrafficTreatment.builder() .piTableAction(piAction).build()) .build(); } public static GroupDescription buildSelectGroup(DeviceId deviceId, String tableId, String actionProfileId, int groupId, Collection actions, ApplicationId appId) { final GroupKey groupKey = new PiGroupKey( PiTableId.of(tableId), PiActionProfileId.of(actionProfileId), groupId); final List buckets = actions.stream() .map(action -> DefaultTrafficTreatment.builder() .piTableAction(action).build()) .map(DefaultGroupBucket::createSelectGroupBucket) .collect(Collectors.toList()); return new DefaultGroupDescription( deviceId, GroupDescription.Type.SELECT, new GroupBuckets(buckets), groupKey, groupId, appId); } public static void sleep(int millis) { try { Thread.sleep(millis); } catch (InterruptedException e) { log.error("Interrupted!", e); Thread.currentThread().interrupt(); } } } ================================================ FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/InterpreterImpl.java ================================================ /* * Copyright 2019-present Open Networking Foundation * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
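`buildReplicationGroup()` in `Utils` derives the group key by encoding the int group ID as four bytes via `ByteBuffer.allocate(4).putInt(groupId).array()`. A standalone sketch of that encoding (big-endian is `ByteBuffer`'s default byte order):

```java
import java.nio.ByteBuffer;

public class GroupKeyBytes {

    // Mirrors the group-key derivation in buildReplicationGroup():
    // the int group ID encoded as four big-endian bytes.
    static byte[] groupKey(int groupId) {
        return ByteBuffer.allocate(4).putInt(groupId).array();
    }

    public static void main(String[] args) {
        byte[] key = groupKey(255); // 0x000000FF
        System.out.println(key.length); // 4
    }
}
```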
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.onosproject.ngsdn.tutorial.pipeconf; import com.google.common.collect.ImmutableList; import com.google.common.collect.ImmutableMap; import org.onlab.packet.DeserializationException; import org.onlab.packet.Ethernet; import org.onlab.util.ImmutableByteSequence; import org.onosproject.net.ConnectPoint; import org.onosproject.net.DeviceId; import org.onosproject.net.Port; import org.onosproject.net.PortNumber; import org.onosproject.net.device.DeviceService; import org.onosproject.net.driver.AbstractHandlerBehaviour; import org.onosproject.net.flow.TrafficTreatment; import org.onosproject.net.flow.criteria.Criterion; import org.onosproject.net.packet.DefaultInboundPacket; import org.onosproject.net.packet.InboundPacket; import org.onosproject.net.packet.OutboundPacket; import org.onosproject.net.pi.model.PiMatchFieldId; import org.onosproject.net.pi.model.PiPacketMetadataId; import org.onosproject.net.pi.model.PiPipelineInterpreter; import org.onosproject.net.pi.model.PiTableId; import org.onosproject.net.pi.runtime.PiAction; import org.onosproject.net.pi.runtime.PiPacketMetadata; import org.onosproject.net.pi.runtime.PiPacketOperation; import java.nio.ByteBuffer; import java.util.Collection; import java.util.List; import java.util.Map; import java.util.Optional; import static java.lang.String.format; import static java.util.stream.Collectors.toList; import static org.onlab.util.ImmutableByteSequence.copyFrom; import static org.onosproject.net.PortNumber.CONTROLLER; import static org.onosproject.net.PortNumber.FLOOD; import static 
org.onosproject.net.flow.instructions.Instruction.Type.OUTPUT; import static org.onosproject.net.flow.instructions.Instructions.OutputInstruction; import static org.onosproject.net.pi.model.PiPacketOperationType.PACKET_OUT; import static org.onosproject.ngsdn.tutorial.AppConstants.CPU_PORT_ID; /** * Interpreter implementation. */ public class InterpreterImpl extends AbstractHandlerBehaviour implements PiPipelineInterpreter { // From v1model.p4 private static final int V1MODEL_PORT_BITWIDTH = 9; // From P4Info. private static final Map CRITERION_MAP = new ImmutableMap.Builder() .put(Criterion.Type.IN_PORT, "standard_metadata.ingress_port") .put(Criterion.Type.ETH_DST, "hdr.ethernet.dst_addr") .put(Criterion.Type.ETH_SRC, "hdr.ethernet.src_addr") .put(Criterion.Type.ETH_TYPE, "hdr.ethernet.ether_type") .put(Criterion.Type.IPV6_DST, "hdr.ipv6.dst_addr") .put(Criterion.Type.IP_PROTO, "local_metadata.ip_proto") .put(Criterion.Type.ICMPV4_TYPE, "local_metadata.icmp_type") .put(Criterion.Type.ICMPV6_TYPE, "local_metadata.icmp_type") .build(); /** * Returns a collection of PI packet operations populated with metadata * specific for this pipeconf and equivalent to the given ONOS * OutboundPacket instance. * * @param packet ONOS OutboundPacket * @return collection of PI packet operations * @throws PiInterpreterException if the packet treatments cannot be * executed by this pipeline */ @Override public Collection mapOutboundPacket(OutboundPacket packet) throws PiInterpreterException { TrafficTreatment treatment = packet.treatment(); // Packet-out in main.p4 supports only setting the output port, // i.e. we only understand OUTPUT instructions. List outInstructions = treatment .allInstructions() .stream() .filter(i -> i.type().equals(OUTPUT)) .map(i -> (OutputInstruction) i) .collect(toList()); if (treatment.allInstructions().size() != outInstructions.size()) { // There are other instructions that are not of type OUTPUT. 
throw new PiInterpreterException("Treatment not supported: " + treatment); } ImmutableList.Builder builder = ImmutableList.builder(); for (OutputInstruction outInst : outInstructions) { if (outInst.port().isLogical() && !outInst.port().equals(FLOOD)) { throw new PiInterpreterException(format( "Packet-out on logical port '%s' not supported", outInst.port())); } else if (outInst.port().equals(FLOOD)) { // To emulate flooding, we create a packet-out operation for // each switch port. final DeviceService deviceService = handler().get(DeviceService.class); for (Port port : deviceService.getPorts(packet.sendThrough())) { builder.add(buildPacketOut(packet.data(), port.number().toLong())); } } else { // Create only one packet-out for the given OUTPUT instruction. builder.add(buildPacketOut(packet.data(), outInst.port().toLong())); } } return builder.build(); } /** * Builds a pipeconf-specific packet-out instance with the given payload and * egress port. * * @param pktData packet payload * @param portNumber egress port * @return packet-out * @throws PiInterpreterException if packet-out cannot be built */ private PiPacketOperation buildPacketOut(ByteBuffer pktData, long portNumber) throws PiInterpreterException { // Make sure port number can fit in v1model port metadata bitwidth. final ImmutableByteSequence portBytes; try { portBytes = copyFrom(portNumber).fit(V1MODEL_PORT_BITWIDTH); } catch (ImmutableByteSequence.ByteSequenceTrimException e) { throw new PiInterpreterException(format( "Port number %d too big, %s", portNumber, e.getMessage())); } // Create metadata instance for egress port. // *** TODO EXERCISE 4: modify metadata names to match P4 program // ---- START SOLUTION ---- final String outPortMetadataName = "ADD HERE METADATA NAME FOR THE EGRESS PORT"; // ---- END SOLUTION ---- final PiPacketMetadata outPortMetadata = PiPacketMetadata.builder() .withId(PiPacketMetadataId.of(outPortMetadataName)) .withValue(portBytes) .build(); // Build packet out. 
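`buildPacketOut()` must fit the egress port into v1model's 9-bit port field; `ImmutableByteSequence.fit()` throws if the value needs more bits. A standalone sketch of the same range check (`fitPort` is a hypothetical helper, not an ONOS API):

```java
public class PortFit {

    // From v1model.p4: standard_metadata ports are 9 bits wide.
    static final int V1MODEL_PORT_BITWIDTH = 9;

    // Hypothetical helper mirroring the fit() check in buildPacketOut():
    // accept the port only if it fits in 9 bits, otherwise reject it.
    static long fitPort(long portNumber) {
        long max = (1L << V1MODEL_PORT_BITWIDTH) - 1; // 511
        if (portNumber < 0 || portNumber > max) {
            throw new IllegalArgumentException(
                    "Port number " + portNumber + " too big");
        }
        return portNumber;
    }

    public static void main(String[] args) {
        System.out.println(fitPort(255)); // the tutorial's CPU port fits
    }
}
```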
return PiPacketOperation.builder() .withType(PACKET_OUT) .withData(copyFrom(pktData)) .withMetadata(outPortMetadata) .build(); } /** * Returns an ONOS InboundPacket equivalent to the given pipeconf-specific * packet-in operation. * * @param packetIn packet operation * @param deviceId ID of the device that originated the packet-in * @return inbound packet * @throws PiInterpreterException if the packet operation cannot be mapped * to an inbound packet */ @Override public InboundPacket mapInboundPacket(PiPacketOperation packetIn, DeviceId deviceId) throws PiInterpreterException { // Find the ingress_port metadata. // *** TODO EXERCISE 4: modify metadata names to match P4Info // ---- START SOLUTION ---- final String inportMetadataName = "ADD HERE METADATA NAME FOR THE INGRESS PORT"; // ---- END SOLUTION ---- Optional<PiPacketMetadata> inportMetadata = packetIn.metadatas() .stream() .filter(meta -> meta.id().id().equals(inportMetadataName)) .findFirst(); if (!inportMetadata.isPresent()) { throw new PiInterpreterException(format( "Missing metadata '%s' in packet-in received from '%s': %s", inportMetadataName, deviceId, packetIn)); } // Build ONOS InboundPacket instance with the given ingress port. // 1. Parse packet-in object into Ethernet packet instance. final byte[] payloadBytes = packetIn.data().asArray(); final ByteBuffer rawData = ByteBuffer.wrap(payloadBytes); final Ethernet ethPkt; try { ethPkt = Ethernet.deserializer().deserialize( payloadBytes, 0, packetIn.data().size()); } catch (DeserializationException dex) { throw new PiInterpreterException(dex.getMessage()); } // 2.
Get ingress port final ImmutableByteSequence portBytes = inportMetadata.get().value(); final short portNum = portBytes.asReadOnlyBuffer().getShort(); final ConnectPoint receivedFrom = new ConnectPoint( deviceId, PortNumber.portNumber(portNum)); return new DefaultInboundPacket(receivedFrom, ethPkt, rawData); } @Override public Optional<Integer> mapLogicalPortNumber(PortNumber port) { if (CONTROLLER.equals(port)) { return Optional.of(CPU_PORT_ID); } else { return Optional.empty(); } } @Override public Optional<PiMatchFieldId> mapCriterionType(Criterion.Type type) { if (CRITERION_MAP.containsKey(type)) { return Optional.of(PiMatchFieldId.of(CRITERION_MAP.get(type))); } else { return Optional.empty(); } } @Override public PiAction mapTreatment(TrafficTreatment treatment, PiTableId piTableId) throws PiInterpreterException { throw new PiInterpreterException("Treatment mapping not supported"); } @Override public Optional<PiTableId> mapFlowRuleTableId(int flowRuleTableId) { return Optional.empty(); } } ================================================ FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/PipeconfLoader.java ================================================ /* * Copyright 2019-present Open Networking Foundation * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ package org.onosproject.ngsdn.tutorial.pipeconf; import org.onosproject.net.behaviour.Pipeliner; import org.onosproject.net.driver.DriverAdminService; import org.onosproject.net.driver.DriverProvider; import org.onosproject.net.pi.model.DefaultPiPipeconf; import org.onosproject.net.pi.model.PiPipeconf; import org.onosproject.net.pi.model.PiPipelineInterpreter; import org.onosproject.net.pi.model.PiPipelineModel; import org.onosproject.net.pi.service.PiPipeconfService; import org.onosproject.p4runtime.model.P4InfoParser; import org.onosproject.p4runtime.model.P4InfoParserException; import org.osgi.service.component.annotations.Activate; import org.osgi.service.component.annotations.Component; import org.osgi.service.component.annotations.Deactivate; import org.osgi.service.component.annotations.Reference; import org.osgi.service.component.annotations.ReferenceCardinality; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import java.net.URL; import java.util.List; import java.util.stream.Collectors; import static org.onosproject.net.pi.model.PiPipeconf.ExtensionType.BMV2_JSON; import static org.onosproject.net.pi.model.PiPipeconf.ExtensionType.P4_INFO_TEXT; import static org.onosproject.ngsdn.tutorial.AppConstants.PIPECONF_ID; /** * Component that builds and registers the pipeconf at app activation. */ @Component(immediate = true, service = PipeconfLoader.class) public final class PipeconfLoader { private final Logger log = LoggerFactory.getLogger(getClass()); private static final String P4INFO_PATH = "/p4info.txt"; private static final String BMV2_JSON_PATH = "/bmv2.json"; @Reference(cardinality = ReferenceCardinality.MANDATORY) private PiPipeconfService pipeconfService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private DriverAdminService driverAdminService; @Activate public void activate() { // Registers the pipeconf at component activation.
if (pipeconfService.getPipeconf(PIPECONF_ID).isPresent()) { // Remove first if already registered, to support reloading of the // pipeconf during the tutorial. pipeconfService.unregister(PIPECONF_ID); } removePipeconfDrivers(); try { pipeconfService.register(buildPipeconf()); } catch (P4InfoParserException e) { log.error("Unable to register " + PIPECONF_ID, e); } } @Deactivate public void deactivate() { // Do nothing. } private PiPipeconf buildPipeconf() throws P4InfoParserException { final URL p4InfoUrl = PipeconfLoader.class.getResource(P4INFO_PATH); final URL bmv2JsonUrl = PipeconfLoader.class.getResource(BMV2_JSON_PATH); final PiPipelineModel pipelineModel = P4InfoParser.parse(p4InfoUrl); return DefaultPiPipeconf.builder() .withId(PIPECONF_ID) .withPipelineModel(pipelineModel) .addBehaviour(PiPipelineInterpreter.class, InterpreterImpl.class) .addBehaviour(Pipeliner.class, PipelinerImpl.class) .addExtension(P4_INFO_TEXT, p4InfoUrl) .addExtension(BMV2_JSON, bmv2JsonUrl) .build(); } private void removePipeconfDrivers() { List<DriverProvider> driverProvidersToRemove = driverAdminService .getProviders().stream() .filter(p -> p.getDrivers().stream() .anyMatch(d -> d.name().endsWith(PIPECONF_ID.id()))) .collect(Collectors.toList()); if (driverProvidersToRemove.isEmpty()) { return; } log.info("Found {} outdated drivers for pipeconf '{}', removing...", driverProvidersToRemove.size(), PIPECONF_ID); driverProvidersToRemove.forEach(driverAdminService::unregisterProvider); } } ================================================ FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/PipelinerImpl.java ================================================ /* * Copyright 2019-present Open Networking Foundation * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License.
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.onosproject.ngsdn.tutorial.pipeconf; import org.onosproject.net.DeviceId; import org.onosproject.net.PortNumber; import org.onosproject.net.behaviour.NextGroup; import org.onosproject.net.behaviour.Pipeliner; import org.onosproject.net.behaviour.PipelinerContext; import org.onosproject.net.driver.AbstractHandlerBehaviour; import org.onosproject.net.flow.DefaultFlowRule; import org.onosproject.net.flow.DefaultTrafficTreatment; import org.onosproject.net.flow.FlowRule; import org.onosproject.net.flow.FlowRuleService; import org.onosproject.net.flow.instructions.Instructions; import org.onosproject.net.flowobjective.FilteringObjective; import org.onosproject.net.flowobjective.ForwardingObjective; import org.onosproject.net.flowobjective.NextObjective; import org.onosproject.net.flowobjective.ObjectiveError; import org.onosproject.net.group.GroupDescription; import org.onosproject.net.group.GroupService; import org.onosproject.net.pi.model.PiActionId; import org.onosproject.net.pi.model.PiTableId; import org.onosproject.net.pi.runtime.PiAction; import org.onosproject.ngsdn.tutorial.common.Utils; import org.slf4j.Logger; import java.util.Collections; import java.util.List; import static org.onosproject.net.flow.instructions.Instruction.Type.OUTPUT; import static org.onosproject.ngsdn.tutorial.AppConstants.CPU_CLONE_SESSION_ID; import static org.slf4j.LoggerFactory.getLogger; /** * Pipeliner implementation that maps all forwarding objectives to the ACL * table. All other types of objectives are not supported. 
*/ public class PipelinerImpl extends AbstractHandlerBehaviour implements Pipeliner { // From the P4Info file private static final String ACL_TABLE = "IngressPipeImpl.acl_table"; private static final String CLONE_TO_CPU = "IngressPipeImpl.clone_to_cpu"; private final Logger log = getLogger(getClass()); private FlowRuleService flowRuleService; private GroupService groupService; private DeviceId deviceId; @Override public void init(DeviceId deviceId, PipelinerContext context) { this.deviceId = deviceId; this.flowRuleService = context.directory().get(FlowRuleService.class); this.groupService = context.directory().get(GroupService.class); } @Override public void filter(FilteringObjective obj) { obj.context().ifPresent(c -> c.onError(obj, ObjectiveError.UNSUPPORTED)); } @Override public void forward(ForwardingObjective obj) { if (obj.treatment() == null) { obj.context().ifPresent(c -> c.onError(obj, ObjectiveError.UNSUPPORTED)); // Stop here, otherwise the treatment would be dereferenced below. return; } // Whether this objective specifies an OUTPUT:CONTROLLER instruction. final boolean hasCloneToCpuAction = obj.treatment() .allInstructions().stream() .filter(i -> i.type().equals(OUTPUT)) .map(i -> (Instructions.OutputInstruction) i) .anyMatch(i -> i.port().equals(PortNumber.CONTROLLER)); if (!hasCloneToCpuAction) { // We support only objectives for clone to CPU behaviours (e.g. for // host and link discovery) obj.context().ifPresent(c -> c.onError(obj, ObjectiveError.UNSUPPORTED)); return; } // Create an equivalent FlowRule with same selector and clone_to_cpu action.
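// Note added for clarity: the clone_to_cpu action (CLONE_TO_CPU above, from
// the P4Info) relies on a clone session existing on the device. The
// Utils.buildCloneGroup call below provisions that session
// (CPU_CLONE_SESSION_ID) with CONTROLLER as its only clone port, so cloned
// packets reach ONOS as packet-ins.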
final PiAction cloneToCpuAction = PiAction.builder() .withId(PiActionId.of(CLONE_TO_CPU)) .build(); final FlowRule.Builder ruleBuilder = DefaultFlowRule.builder() .forTable(PiTableId.of(ACL_TABLE)) .forDevice(deviceId) .withSelector(obj.selector()) .fromApp(obj.appId()) .withPriority(obj.priority()) .withTreatment(DefaultTrafficTreatment.builder() .piTableAction(cloneToCpuAction).build()); if (obj.permanent()) { ruleBuilder.makePermanent(); } else { ruleBuilder.makeTemporary(obj.timeout()); } final GroupDescription cloneGroup = Utils.buildCloneGroup( obj.appId(), deviceId, CPU_CLONE_SESSION_ID, // Ports where to clone the packet. // Just controller in this case. Collections.singleton(PortNumber.CONTROLLER)); switch (obj.op()) { case ADD: flowRuleService.applyFlowRules(ruleBuilder.build()); groupService.addGroup(cloneGroup); break; case REMOVE: flowRuleService.removeFlowRules(ruleBuilder.build()); // Do not remove the clone group as other flow rules might be // pointing to it. break; default: log.warn("Unknown operation {}", obj.op()); } obj.context().ifPresent(c -> c.onSuccess(obj)); } @Override public void next(NextObjective obj) { obj.context().ifPresent(c -> c.onError(obj, ObjectiveError.UNSUPPORTED)); } @Override public List<String> getNextMappings(NextGroup nextGroup) { // We do not use nextObjectives or groups. return Collections.emptyList(); } } ================================================ FILE: docker-compose.yml ================================================ version: "3" services: mininet: image: opennetworking/ngsdn-tutorial:stratum_bmv2 hostname: mininet container_name: mininet privileged: true tty: true stdin_open: true restart: always volumes: - ./tmp:/tmp - ./mininet:/mininet ports: - "50001:50001" - "50002:50002" - "50003:50003" - "50004:50004" # NGSDN_TOPO_PY is a Python-based Mininet script defining the topology. Its # value is passed to docker-compose as an environment variable, defined in # the Makefile.
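    # For example, `NGSDN_TOPO_PY=topo-v6.py docker-compose up` would resolve
    # the entrypoint below to /mininet/topo-v6.py (the variable name comes from
    # the comment above; which script the Makefile selects by default is not
    # shown here, so that value is only an illustration).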
entrypoint: "/mininet/${NGSDN_TOPO_PY}" onos: image: onosproject/onos:2.2.2 hostname: onos container_name: onos ports: - "8181:8181" # HTTP - "8101:8101" # SSH (CLI) volumes: - ./tmp/onos:/root/onos/apache-karaf-4.2.8/data/tmp environment: - ONOS_APPS=gui2,drivers.bmv2,lldpprovider,hostprovider links: - mininet ================================================ FILE: mininet/flowrule-gtp.json ================================================ { "flows": [ { "deviceId": "device:leaf1", "tableId": "FabricIngress.spgw_ingress.dl_sess_lookup", "priority": 10, "timeout": 0, "isPermanent": true, "selector": { "criteria": [ { "type": "IPV4_DST", "ip": "/32" } ] }, "treatment": { "instructions": [ { "type": "PROTOCOL_INDEPENDENT", "subtype": "ACTION", "actionId": "FabricIngress.spgw_ingress.set_dl_sess_info", "actionParams": { "teid": "BEEF", "s1u_enb_addr": "0a006401", "s1u_sgw_addr": "0a0064fe" } } ] } } ] } ================================================ FILE: mininet/host-cmd ================================================ #!/bin/bash # Attach to a Mininet host and run a command if [ -z $1 ]; then echo "usage: $0 host cmd [args...]" exit 1 else host=$1 fi pid=`ps ax | grep "mininet:$host$" | grep bash | grep -v mnexec | awk '{print $1};'` if echo $pid | grep -q ' '; then echo "Error: found multiple mininet:$host processes" exit 2 fi if [ "$pid" == "" ]; then echo "Could not find Mininet host $host" exit 3 fi if [ -z $2 ]; then cmd='bash' else shift cmd=$* fi cgroup=/sys/fs/cgroup/cpu/$host if [ -d "$cgroup" ]; then cg="-g $host" fi # Check whether host should be running in a chroot dir rootdir="/var/run/mn/$host/root" if [ -d $rootdir -a -x $rootdir/bin/bash ]; then cmd="'cd `pwd`; exec $cmd'" cmd="chroot $rootdir /bin/bash -c $cmd" fi mnexec $cg -a $pid hostname $host cmd="exec mnexec $cg -a $pid $cmd" eval $cmd ================================================ FILE: mininet/netcfg-gtp.json ================================================ { "devices": { "device:leaf1": { 
"basic": { "managementAddress": "grpc://mininet:50001?device_id=1", "driver": "stratum-bmv2", "pipeconf": "org.onosproject.pipelines.fabric", "locType": "grid", "gridX": 200, "gridY": 600 }, "segmentrouting": { "name": "leaf1", "ipv4NodeSid": 101, "ipv4Loopback": "192.168.1.1", "routerMac": "00:AA:00:00:00:01", "isEdgeRouter": true, "adjacencySids": [] } }, "device:leaf2": { "basic": { "managementAddress": "grpc://mininet:50002?device_id=1", "driver": "stratum-bmv2", "pipeconf": "org.onosproject.pipelines.fabric", "locType": "grid", "gridX": 800, "gridY": 600 }, "segmentrouting": { "name": "leaf2", "ipv4NodeSid": 102, "ipv4Loopback": "192.168.1.2", "routerMac": "00:AA:00:00:00:02", "isEdgeRouter": true, "adjacencySids": [] } }, "device:spine1": { "basic": { "managementAddress": "grpc://mininet:50003?device_id=1", "driver": "stratum-bmv2", "pipeconf": "org.onosproject.pipelines.fabric", "locType": "grid", "gridX": 400, "gridY": 400 }, "segmentrouting": { "name": "spine1", "ipv4NodeSid": 201, "ipv4Loopback": "192.168.2.1", "routerMac": "00:BB:00:00:00:01", "isEdgeRouter": false, "adjacencySids": [] } }, "device:spine2": { "basic": { "managementAddress": "grpc://mininet:50004?device_id=1", "driver": "stratum-bmv2", "pipeconf": "org.onosproject.pipelines.fabric", "locType": "grid", "gridX": 600, "gridY": 400 }, "segmentrouting": { "name": "spine2", "ipv4NodeSid": 202, "ipv4Loopback": "192.168.2.2", "routerMac": "00:BB:00:00:00:02", "isEdgeRouter": false, "adjacencySids": [] } } }, "ports": { "device:leaf1/3": { "interfaces": [ { "name": "leaf1-3", "ips": [ "10.0.100.254/24" ], "vlan-untagged": 100 } ] }, "device:leaf2/3": { "interfaces": [ { "name": "leaf2-3", "ips": [ "10.0.200.254/24" ], "vlan-untagged": 200 } ] } }, "hosts": { "00:00:00:00:00:10/None": { "basic": { "name": "enodeb", "gridX": 100, "gridY": 700, "locType": "grid", "ips": [ "10.0.100.1" ], "locations": [ "device:leaf1/3" ] } }, "00:00:00:00:00:20/None": { "basic": { "name": "pdn", "gridX": 850, 
"gridY": 700, "locType": "grid", "ips": [ "10.0.200.1" ], "locations": [ "device:leaf2/3" ] } } } } ================================================ FILE: mininet/netcfg-sr.json ================================================ { "devices": { "device:leaf1": { "basic": { "managementAddress": "grpc://mininet:50001?device_id=1", "driver": "stratum-bmv2", "pipeconf": "org.onosproject.pipelines.fabric", "locType": "grid", "gridX": 200, "gridY": 600 }, "segmentrouting": { "name": "leaf1", "ipv4NodeSid": 101, "ipv4Loopback": "192.168.1.1", "routerMac": "00:AA:00:00:00:01", "isEdgeRouter": true, "adjacencySids": [] } }, "device:leaf2": { "basic": { "managementAddress": "grpc://mininet:50002?device_id=1", "driver": "stratum-bmv2", "pipeconf": "org.onosproject.pipelines.fabric", "locType": "grid", "gridX": 800, "gridY": 600 }, "segmentrouting": { "name": "leaf2", "ipv4NodeSid": 102, "ipv4Loopback": "192.168.1.2", "routerMac": "00:AA:00:00:00:02", "isEdgeRouter": true, "adjacencySids": [] } }, "device:spine1": { "basic": { "managementAddress": "grpc://mininet:50003?device_id=1", "driver": "stratum-bmv2", "pipeconf": "org.onosproject.pipelines.fabric", "locType": "grid", "gridX": 400, "gridY": 400 }, "segmentrouting": { "name": "spine1", "ipv4NodeSid": 201, "ipv4Loopback": "192.168.2.1", "routerMac": "00:BB:00:00:00:01", "isEdgeRouter": false, "adjacencySids": [] } }, "device:spine2": { "basic": { "managementAddress": "grpc://mininet:50004?device_id=1", "driver": "stratum-bmv2", "pipeconf": "org.onosproject.pipelines.fabric", "locType": "grid", "gridX": 600, "gridY": 400 }, "segmentrouting": { "name": "spine2", "ipv4NodeSid": 202, "ipv4Loopback": "192.168.2.2", "routerMac": "00:BB:00:00:00:02", "isEdgeRouter": false, "adjacencySids": [] } } }, "ports": { "device:leaf1/3": { "interfaces": [ { "name": "leaf1-3", "ips": [ "172.16.1.254/24" ], "vlan-untagged": 100 } ] }, "device:leaf1/4": { "interfaces": [ { "name": "leaf1-4", "ips": [ "172.16.1.254/24" ], "vlan-untagged": 100 } ] 
}, "device:leaf1/5": { "interfaces": [ { "name": "leaf1-5", "ips": [ "172.16.1.254/24" ], "vlan-tagged": [ 100 ] } ] }, "device:leaf1/6": { "interfaces": [ { "name": "leaf1-6", "ips": [ "172.16.2.254/24" ], "vlan-tagged": [ 200 ] } ] } }, "hosts": { "00:00:00:00:00:1A/None": { "basic": { "name": "h1a", "locType": "grid", "gridX": 100, "gridY": 700 } }, "00:00:00:00:00:1B/None": { "basic": { "name": "h1b", "locType": "grid", "gridX": 100, "gridY": 800 } }, "00:00:00:00:00:1C/100": { "basic": { "name": "h1c", "locType": "grid", "gridX": 250, "gridY": 800 } }, "00:00:00:00:00:20/200": { "basic": { "name": "h2", "locType": "grid", "gridX": 400, "gridY": 700 } }, "00:00:00:00:00:30/300": { "basic": { "name": "h3", "locType": "grid", "gridX": 750, "gridY": 700 } }, "00:00:00:00:00:40/None": { "basic": { "name": "h4", "locType": "grid", "gridX": 850, "gridY": 700 } } } } ================================================ FILE: mininet/netcfg.json ================================================ { "devices": { "device:leaf1": { "basic": { "managementAddress": "grpc://mininet:50001?device_id=1", "driver": "stratum-bmv2", "pipeconf": "org.onosproject.ngsdn-tutorial", "locType": "grid", "gridX": 200, "gridY": 600 }, "fabricDeviceConfig": { "myStationMac": "00:aa:00:00:00:01", "mySid": "3:101:2::", "isSpine": false } }, "device:leaf2": { "basic": { "managementAddress": "grpc://mininet:50002?device_id=1", "driver": "stratum-bmv2", "pipeconf": "org.onosproject.ngsdn-tutorial", "locType": "grid", "gridX": 800, "gridY": 600 }, "fabricDeviceConfig": { "myStationMac": "00:aa:00:00:00:02", "mySid": "3:102:2::", "isSpine": false } }, "device:spine1": { "basic": { "managementAddress": "grpc://mininet:50003?device_id=1", "driver": "stratum-bmv2", "pipeconf": "org.onosproject.ngsdn-tutorial", "locType": "grid", "gridX": 400, "gridY": 400 }, "fabricDeviceConfig": { "myStationMac": "00:bb:00:00:00:01", "mySid": "3:201:2::", "isSpine": true } }, "device:spine2": { "basic": { 
"managementAddress": "grpc://mininet:50004?device_id=1", "driver": "stratum-bmv2", "pipeconf": "org.onosproject.ngsdn-tutorial", "locType": "grid", "gridX": 600, "gridY": 400 }, "fabricDeviceConfig": { "myStationMac": "00:bb:00:00:00:02", "mySid": "3:202:2::", "isSpine": true } } }, "ports": { "device:leaf1/3": { "interfaces": [ { "name": "leaf1-3", "ips": ["2001:1:1::ff/64"] } ] }, "device:leaf1/4": { "interfaces": [ { "name": "leaf1-4", "ips": ["2001:1:1::ff/64"] } ] }, "device:leaf1/5": { "interfaces": [ { "name": "leaf1-5", "ips": ["2001:1:1::ff/64"] } ] }, "device:leaf1/6": { "interfaces": [ { "name": "leaf1-6", "ips": ["2001:1:2::ff/64"] } ] }, "device:leaf2/3": { "interfaces": [ { "name": "leaf2-3", "ips": ["2001:2:3::ff/64"] } ] }, "device:leaf2/4": { "interfaces": [ { "name": "leaf2-4", "ips": ["2001:2:4::ff/64"] } ] } }, "hosts": { "00:00:00:00:00:1A/None": { "basic": { "name": "h1a", "locType": "grid", "gridX": 100, "gridY": 700 } }, "00:00:00:00:00:1B/None": { "basic": { "name": "h1b", "locType": "grid", "gridX": 100, "gridY": 800 } }, "00:00:00:00:00:1C/None": { "basic": { "name": "h1c", "locType": "grid", "gridX": 250, "gridY": 800 } }, "00:00:00:00:00:20/None": { "basic": { "name": "h2", "locType": "grid", "gridX": 400, "gridY": 700 } }, "00:00:00:00:00:30/None": { "basic": { "name": "h3", "locType": "grid", "gridX": 750, "gridY": 700 } }, "00:00:00:00:00:40/None": { "basic": { "name": "h4", "locType": "grid", "gridX": 850, "gridY": 700 } } } } ================================================ FILE: mininet/recv-gtp.py ================================================ #!/usr/bin/python # Script used in Exercise 8 that sniffs packets and prints on screen whether # they are GTP encapsulated or not. 
import signal import sys from ptf.packet import IP from scapy.contrib import gtp from scapy.sendrecv import sniff pkt_count = 0 def handle_pkt(pkt, ex): global pkt_count pkt_count = pkt_count + 1 if gtp.GTP_U_Header in pkt: is_gtp_encap = True else: is_gtp_encap = False print "[%d] %d bytes: %s -> %s, is_gtp_encap=%s\n\t%s" \ % (pkt_count, len(pkt), pkt[IP].src, pkt[IP].dst, is_gtp_encap, pkt.summary()) if is_gtp_encap and ex: exit() print "Will print a line for each UDP packet received..." def handle_timeout(signum, frame): print "Timeout! Did not receive any GTP packet" exit(1) exitOnSuccess = False if len(sys.argv) > 1 and sys.argv[1] == "-e": # wait max 10 seconds or exit signal.signal(signal.SIGALRM, handle_timeout) signal.alarm(10) exitOnSuccess = True sniff(count=0, store=False, filter="udp", prn=lambda x: handle_pkt(x, exitOnSuccess)) ================================================ FILE: mininet/send-udp.py ================================================ #!/usr/bin/python # Script used in Exercise 8. # Send downlink packets to UE address. from scapy.layers.inet import IP, UDP from scapy.sendrecv import send UE_ADDR = '17.0.0.1' RATE = 5 # packets per second PAYLOAD = ' '.join(['P4 is great!'] * 50) print "Sending %d UDP packets per second to %s..." % (RATE, UE_ADDR) pkt = IP(dst=UE_ADDR) / UDP(sport=80, dport=400) / PAYLOAD send(pkt, inter=1.0 / RATE, loop=True, verbose=True) ================================================ FILE: mininet/topo-gtp.py ================================================ #!/usr/bin/python # Copyright 2019-present Open Networking Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse from mininet.cli import CLI from mininet.log import setLogLevel from mininet.net import Mininet from mininet.node import Host from mininet.topo import Topo from stratum import StratumBmv2Switch CPU_PORT = 255 class IPv4Host(Host): """Host that can be configured with an IPv4 gateway (default route). """ def config(self, mac=None, ip=None, defaultRoute=None, lo='up', gw=None, **_params): super(IPv4Host, self).config(mac, ip, defaultRoute, lo, **_params) self.cmd('ip -4 addr flush dev %s' % self.defaultIntf()) self.cmd('ip -6 addr flush dev %s' % self.defaultIntf()) self.cmd('sysctl -w net.ipv4.ip_forward=0') self.cmd('ip -4 link set up %s' % self.defaultIntf()) self.cmd('ip -4 addr add %s dev %s' % (ip, self.defaultIntf())) if gw: self.cmd('ip -4 route add default via %s' % gw) # Disable offload for attr in ["rx", "tx", "sg"]: cmd = "/sbin/ethtool --offload %s %s off" % ( self.defaultIntf(), attr) self.cmd(cmd) def updateIP(): return ip.split('/')[0] self.defaultIntf().updateIP = updateIP class TutorialTopo(Topo): """2x2 fabric topology for GTP encap exercise with 2 IPv4 hosts emulating an enodeb (base station) and a gateway to a Packet Data Network (PDN) """ def __init__(self, *args, **kwargs): Topo.__init__(self, *args, **kwargs) # Leaves # gRPC port 50001 leaf1 = self.addSwitch('leaf1', cls=StratumBmv2Switch, cpuport=CPU_PORT) # gRPC port 50002 leaf2 = self.addSwitch('leaf2', cls=StratumBmv2Switch, cpuport=CPU_PORT) # Spines # gRPC port 50003 spine1 = self.addSwitch('spine1', cls=StratumBmv2Switch, cpuport=CPU_PORT) # gRPC port 50004 spine2 =
self.addSwitch('spine2', cls=StratumBmv2Switch, cpuport=CPU_PORT) # Switch Links self.addLink(spine1, leaf1) self.addLink(spine1, leaf2) self.addLink(spine2, leaf1) self.addLink(spine2, leaf2) # IPv4 hosts attached to leaf 1 enodeb = self.addHost('enodeb', cls=IPv4Host, mac='00:00:00:00:00:10', ip='10.0.100.1/24', gw='10.0.100.254') self.addLink(enodeb, leaf1) # port 3 # IPv4 hosts attached to leaf 2 pdn = self.addHost('pdn', cls=IPv4Host, mac='00:00:00:00:00:20', ip='10.0.200.1/24', gw='10.0.200.254') self.addLink(pdn, leaf2) # port 3 def main(): net = Mininet(topo=TutorialTopo(), controller=None) net.start() CLI(net) net.stop() print '#' * 80 print 'ATTENTION: Mininet was stopped! Perhaps accidentally?' print 'No worries, it will restart automatically in a few seconds...' print 'To access again the Mininet CLI, use `make mn-cli`' print 'To detach from the CLI (without stopping), press Ctrl-D' print 'To permanently quit Mininet, use `make stop`' print '#' * 80 if __name__ == "__main__": parser = argparse.ArgumentParser( description='Mininet topology script for 2x2 fabric with stratum_bmv2 and IPv4 hosts') args = parser.parse_args() setLogLevel('info') main() ================================================ FILE: mininet/topo-v4.py ================================================ #!/usr/bin/python # Copyright 2019-present Open Networking Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
import argparse from mininet.cli import CLI from mininet.log import setLogLevel from mininet.net import Mininet from mininet.node import Host from mininet.topo import Topo from stratum import StratumBmv2Switch CPU_PORT = 255 class IPv4Host(Host): """Host that can be configured with an IPv4 gateway (default route). """ def config(self, mac=None, ip=None, defaultRoute=None, lo='up', gw=None, **_params): super(IPv4Host, self).config(mac, ip, defaultRoute, lo, **_params) self.cmd('ip -4 addr flush dev %s' % self.defaultIntf()) self.cmd('ip -6 addr flush dev %s' % self.defaultIntf()) self.cmd('ip -4 link set up %s' % self.defaultIntf()) self.cmd('ip -4 addr add %s dev %s' % (ip, self.defaultIntf())) if gw: self.cmd('ip -4 route add default via %s' % gw) # Disable offload for attr in ["rx", "tx", "sg"]: cmd = "/sbin/ethtool --offload %s %s off" % ( self.defaultIntf(), attr) self.cmd(cmd) def updateIP(): return ip.split('/')[0] self.defaultIntf().updateIP = updateIP class TaggedIPv4Host(Host): """VLAN-tagged host that can be configured with an IPv4 gateway (default route). 
""" vlanIntf = None def config(self, mac=None, ip=None, defaultRoute=None, lo='up', gw=None, vlan=None, **_params): super(TaggedIPv4Host, self).config(mac, ip, defaultRoute, lo, **_params) self.vlanIntf = "%s.%s" % (self.defaultIntf(), vlan) # Replace default interface with a tagged one self.cmd('ip -4 addr flush dev %s' % self.defaultIntf()) self.cmd('ip -6 addr flush dev %s' % self.defaultIntf()) self.cmd('ip -4 link add link %s name %s type vlan id %s' % ( self.defaultIntf(), self.vlanIntf, vlan)) self.cmd('ip -4 link set up %s' % self.vlanIntf) self.cmd('ip -4 addr add %s dev %s' % (ip, self.vlanIntf)) if gw: self.cmd('ip -4 route add default via %s' % gw) self.defaultIntf().name = self.vlanIntf self.nameToIntf[self.vlanIntf] = self.defaultIntf() # Disable offload for attr in ["rx", "tx", "sg"]: cmd = "/sbin/ethtool --offload %s %s off" % ( self.defaultIntf(), attr) self.cmd(cmd) def updateIP(): return ip.split('/')[0] self.defaultIntf().updateIP = updateIP def terminate(self): self.cmd('ip -4 link remove link %s' % self.vlanIntf) super(TaggedIPv4Host, self).terminate() class TutorialTopo(Topo): """2x2 fabric topology with IPv4 hosts""" def __init__(self, *args, **kwargs): Topo.__init__(self, *args, **kwargs) # Leaves # gRPC port 50001 leaf1 = self.addSwitch('leaf1', cls=StratumBmv2Switch, cpuport=CPU_PORT) # gRPC port 50002 leaf2 = self.addSwitch('leaf2', cls=StratumBmv2Switch, cpuport=CPU_PORT) # Spines # gRPC port 50003 spine1 = self.addSwitch('spine1', cls=StratumBmv2Switch, cpuport=CPU_PORT) # gRPC port 50004 spine2 = self.addSwitch('spine2', cls=StratumBmv2Switch, cpuport=CPU_PORT) # Switch Links self.addLink(spine1, leaf1) self.addLink(spine1, leaf2) self.addLink(spine2, leaf1) self.addLink(spine2, leaf2) # IPv4 hosts attached to leaf 1 h1a = self.addHost('h1a', cls=IPv4Host, mac="00:00:00:00:00:1A", ip='172.16.1.1/24', gw='172.16.1.254') h1b = self.addHost('h1b', cls=IPv4Host, mac="00:00:00:00:00:1B", ip='172.16.1.2/24', gw='172.16.1.254') h1c = 
self.addHost('h1c', cls=TaggedIPv4Host, mac="00:00:00:00:00:1C", ip='172.16.1.3/24', gw='172.16.1.254', vlan=100) h2 = self.addHost('h2', cls=TaggedIPv4Host, mac="00:00:00:00:00:20", ip='172.16.2.1/24', gw='172.16.2.254', vlan=200) self.addLink(h1a, leaf1) # port 3 self.addLink(h1b, leaf1) # port 4 self.addLink(h1c, leaf1) # port 5 self.addLink(h2, leaf1) # port 6 # IPv4 hosts attached to leaf 2 h3 = self.addHost('h3', cls=TaggedIPv4Host, mac="00:00:00:00:00:30", ip='172.16.3.1/24', gw='172.16.3.254', vlan=300) h4 = self.addHost('h4', cls=IPv4Host, mac="00:00:00:00:00:40", ip='172.16.4.1/24', gw='172.16.4.254') self.addLink(h3, leaf2) # port 3 self.addLink(h4, leaf2) # port 4 def main(): net = Mininet(topo=TutorialTopo(), controller=None) net.start() CLI(net) net.stop() print '#' * 80 print 'ATTENTION: Mininet was stopped! Perhaps accidentally?' print 'No worries, it will restart automatically in a few seconds...' print 'To access again the Mininet CLI, use `make mn-cli`' print 'To detach from the CLI (without stopping), press Ctrl-D' print 'To permanently quit Mininet, use `make stop`' print '#' * 80 if __name__ == "__main__": parser = argparse.ArgumentParser( description='Mininet topology script for 2x2 fabric with stratum_bmv2 and IPv4 hosts') args = parser.parse_args() setLogLevel('info') main() ================================================ FILE: mininet/topo-v6.py ================================================ #!/usr/bin/python # Copyright 2019-present Open Networking Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. import argparse from mininet.cli import CLI from mininet.log import setLogLevel from mininet.net import Mininet from mininet.node import Host from mininet.topo import Topo from stratum import StratumBmv2Switch CPU_PORT = 255 class IPv6Host(Host): """Host that can be configured with an IPv6 gateway (default route). """ def config(self, ipv6, ipv6_gw=None, **params): super(IPv6Host, self).config(**params) self.cmd('ip -4 addr flush dev %s' % self.defaultIntf()) self.cmd('ip -6 addr flush dev %s' % self.defaultIntf()) self.cmd('ip -6 addr add %s dev %s' % (ipv6, self.defaultIntf())) if ipv6_gw: self.cmd('ip -6 route add default via %s' % ipv6_gw) # Disable offload for attr in ["rx", "tx", "sg"]: cmd = "/sbin/ethtool --offload %s %s off" % (self.defaultIntf(), attr) self.cmd(cmd) def updateIP(): return ipv6.split('/')[0] self.defaultIntf().updateIP = updateIP def terminate(self): super(IPv6Host, self).terminate() class TutorialTopo(Topo): """2x2 fabric topology with IPv6 hosts""" def __init__(self, *args, **kwargs): Topo.__init__(self, *args, **kwargs) # Leaves # gRPC port 50001 leaf1 = self.addSwitch('leaf1', cls=StratumBmv2Switch, cpuport=CPU_PORT) # gRPC port 50002 leaf2 = self.addSwitch('leaf2', cls=StratumBmv2Switch, cpuport=CPU_PORT) # Spines # gRPC port 50003 spine1 = self.addSwitch('spine1', cls=StratumBmv2Switch, cpuport=CPU_PORT) # gRPC port 50004 spine2 = self.addSwitch('spine2', cls=StratumBmv2Switch, cpuport=CPU_PORT) # Switch Links self.addLink(spine1, leaf1) self.addLink(spine1, leaf2) self.addLink(spine2, leaf1) self.addLink(spine2, leaf2) # IPv6 hosts attached to leaf 1 h1a = self.addHost('h1a', cls=IPv6Host, mac="00:00:00:00:00:1A", ipv6='2001:1:1::a/64', ipv6_gw='2001:1:1::ff') h1b = self.addHost('h1b', cls=IPv6Host, mac="00:00:00:00:00:1B", ipv6='2001:1:1::b/64', ipv6_gw='2001:1:1::ff') h1c = self.addHost('h1c', cls=IPv6Host, 
mac="00:00:00:00:00:1C", ipv6='2001:1:1::c/64', ipv6_gw='2001:1:1::ff') h2 = self.addHost('h2', cls=IPv6Host, mac="00:00:00:00:00:20", ipv6='2001:1:2::1/64', ipv6_gw='2001:1:2::ff') self.addLink(h1a, leaf1) # port 3 self.addLink(h1b, leaf1) # port 4 self.addLink(h1c, leaf1) # port 5 self.addLink(h2, leaf1) # port 6 # IPv6 hosts attached to leaf 2 h3 = self.addHost('h3', cls=IPv6Host, mac="00:00:00:00:00:30", ipv6='2001:2:3::1/64', ipv6_gw='2001:2:3::ff') h4 = self.addHost('h4', cls=IPv6Host, mac="00:00:00:00:00:40", ipv6='2001:2:4::1/64', ipv6_gw='2001:2:4::ff') self.addLink(h3, leaf2) # port 3 self.addLink(h4, leaf2) # port 4 def main(): net = Mininet(topo=TutorialTopo(), controller=None) net.start() CLI(net) net.stop() print '#' * 80 print 'ATTENTION: Mininet was stopped! Perhaps accidentally?' print 'No worries, it will restart automatically in a few seconds...' print 'To access again the Mininet CLI, use `make mn-cli`' print 'To detach from the CLI (without stopping), press Ctrl-D' print 'To permanently quit Mininet, use `make stop`' print '#' * 80 if __name__ == "__main__": parser = argparse.ArgumentParser( description='Mininet topology script for 2x2 fabric with stratum_bmv2 and IPv6 hosts') args = parser.parse_args() setLogLevel('info') main() ================================================ FILE: p4src/main.p4 ================================================ /* * Copyright 2019-present Open Networking Foundation * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
 */

#include <core.p4>
#include <v1model.p4>

// CPU_PORT specifies the P4 port number associated with controller packet-in
// and packet-out. All packets forwarded via this port will be delivered to the
// controller as P4Runtime PacketIn messages. Similarly, PacketOut messages from
// the controller will be seen by the P4 pipeline as coming from the CPU_PORT.
#define CPU_PORT 255

// CPU_CLONE_SESSION_ID specifies the mirroring session for packets to be cloned
// to the CPU port. Packets associated with this session ID will be cloned to
// the CPU_PORT as well as being transmitted via their egress port (set by the
// bridging/routing/acl table). For cloning to work, the P4Runtime controller
// needs first to insert a CloneSessionEntry that maps this session ID to the
// CPU_PORT.
#define CPU_CLONE_SESSION_ID 99

// Maximum number of hops supported when using SRv6.
// Required for Exercise 7.
#define SRV6_MAX_HOPS 4

typedef bit<9>   port_num_t;
typedef bit<48>  mac_addr_t;
typedef bit<16>  mcast_group_id_t;
typedef bit<32>  ipv4_addr_t;
typedef bit<128> ipv6_addr_t;
typedef bit<16>  l4_port_t;

const bit<16> ETHERTYPE_IPV4 = 0x0800;
const bit<16> ETHERTYPE_IPV6 = 0x86dd;

const bit<8> IP_PROTO_ICMP   = 1;
const bit<8> IP_PROTO_TCP    = 6;
const bit<8> IP_PROTO_UDP    = 17;
const bit<8> IP_PROTO_SRV6   = 43;
const bit<8> IP_PROTO_ICMPV6 = 58;

const mac_addr_t IPV6_MCAST_01 = 0x33_33_00_00_00_01;

const bit<8> ICMP6_TYPE_NS = 135;
const bit<8> ICMP6_TYPE_NA = 136;

const bit<8> NDP_OPT_TARGET_LL_ADDR = 2;

const bit<32> NDP_FLAG_ROUTER    = 0x80000000;
const bit<32> NDP_FLAG_SOLICITED = 0x40000000;
const bit<32> NDP_FLAG_OVERRIDE  = 0x20000000;

//------------------------------------------------------------------------------
// HEADER DEFINITIONS
//------------------------------------------------------------------------------

header ethernet_t {
    mac_addr_t  dst_addr;
    mac_addr_t  src_addr;
    bit<16>     ether_type;
}

header ipv4_t {
    bit<4>   version;
    bit<4>   ihl;
    bit<6>   dscp;
    bit<2>   ecn;
    bit<16>  total_len;
    bit<16>  identification;
bit<3> flags; bit<13> frag_offset; bit<8> ttl; bit<8> protocol; bit<16> hdr_checksum; bit<32> src_addr; bit<32> dst_addr; } header ipv6_t { bit<4> version; bit<8> traffic_class; bit<20> flow_label; bit<16> payload_len; bit<8> next_hdr; bit<8> hop_limit; bit<128> src_addr; bit<128> dst_addr; } header srv6h_t { bit<8> next_hdr; bit<8> hdr_ext_len; bit<8> routing_type; bit<8> segment_left; bit<8> last_entry; bit<8> flags; bit<16> tag; } header srv6_list_t { bit<128> segment_id; } header tcp_t { bit<16> src_port; bit<16> dst_port; bit<32> seq_no; bit<32> ack_no; bit<4> data_offset; bit<3> res; bit<3> ecn; bit<6> ctrl; bit<16> window; bit<16> checksum; bit<16> urgent_ptr; } header udp_t { bit<16> src_port; bit<16> dst_port; bit<16> len; bit<16> checksum; } header icmp_t { bit<8> type; bit<8> icmp_code; bit<16> checksum; bit<16> identifier; bit<16> sequence_number; bit<64> timestamp; } header icmpv6_t { bit<8> type; bit<8> code; bit<16> checksum; } header ndp_t { bit<32> flags; ipv6_addr_t target_ipv6_addr; // NDP option. bit<8> type; bit<8> length; bit<48> target_mac_addr; } // Packet-in header. Prepended to packets sent to the CPU_PORT and used by the // P4Runtime server (Stratum) to populate the PacketIn message metadata fields. // Here we use it to carry the original ingress port where the packet was // received. @controller_header("packet_in") header cpu_in_header_t { port_num_t ingress_port; bit<7> _pad; } // Packet-out header. Prepended to packets received from the CPU_PORT. Fields of // this header are populated by the P4Runtime server based on the P4Runtime // PacketOut metadata fields. Here we use it to inform the P4 pipeline on which // port this packet-out should be transmitted. 
@controller_header("packet_out") header cpu_out_header_t { port_num_t egress_port; bit<7> _pad; } struct parsed_headers_t { cpu_out_header_t cpu_out; cpu_in_header_t cpu_in; ethernet_t ethernet; ipv4_t ipv4; ipv6_t ipv6; srv6h_t srv6h; srv6_list_t[SRV6_MAX_HOPS] srv6_list; tcp_t tcp; udp_t udp; icmp_t icmp; icmpv6_t icmpv6; ndp_t ndp; } struct local_metadata_t { l4_port_t l4_src_port; l4_port_t l4_dst_port; bool is_multicast; ipv6_addr_t next_srv6_sid; bit<8> ip_proto; bit<8> icmp_type; } //------------------------------------------------------------------------------ // INGRESS PIPELINE //------------------------------------------------------------------------------ parser ParserImpl (packet_in packet, out parsed_headers_t hdr, inout local_metadata_t local_metadata, inout standard_metadata_t standard_metadata) { state start { transition select(standard_metadata.ingress_port) { CPU_PORT: parse_packet_out; default: parse_ethernet; } } state parse_packet_out { packet.extract(hdr.cpu_out); transition parse_ethernet; } state parse_ethernet { packet.extract(hdr.ethernet); transition select(hdr.ethernet.ether_type){ ETHERTYPE_IPV4: parse_ipv4; ETHERTYPE_IPV6: parse_ipv6; default: accept; } } state parse_ipv4 { packet.extract(hdr.ipv4); local_metadata.ip_proto = hdr.ipv4.protocol; transition select(hdr.ipv4.protocol) { IP_PROTO_TCP: parse_tcp; IP_PROTO_UDP: parse_udp; IP_PROTO_ICMP: parse_icmp; default: accept; } } state parse_ipv6 { packet.extract(hdr.ipv6); local_metadata.ip_proto = hdr.ipv6.next_hdr; transition select(hdr.ipv6.next_hdr) { IP_PROTO_TCP: parse_tcp; IP_PROTO_UDP: parse_udp; IP_PROTO_ICMPV6: parse_icmpv6; IP_PROTO_SRV6: parse_srv6; default: accept; } } state parse_tcp { packet.extract(hdr.tcp); local_metadata.l4_src_port = hdr.tcp.src_port; local_metadata.l4_dst_port = hdr.tcp.dst_port; transition accept; } state parse_udp { packet.extract(hdr.udp); local_metadata.l4_src_port = hdr.udp.src_port; local_metadata.l4_dst_port = hdr.udp.dst_port; transition 
        accept;
    }

    state parse_icmp {
        packet.extract(hdr.icmp);
        local_metadata.icmp_type = hdr.icmp.type;
        transition accept;
    }

    state parse_icmpv6 {
        packet.extract(hdr.icmpv6);
        local_metadata.icmp_type = hdr.icmpv6.type;
        transition select(hdr.icmpv6.type) {
            ICMP6_TYPE_NS: parse_ndp;
            ICMP6_TYPE_NA: parse_ndp;
            default: accept;
        }
    }

    state parse_ndp {
        packet.extract(hdr.ndp);
        transition accept;
    }

    state parse_srv6 {
        packet.extract(hdr.srv6h);
        transition parse_srv6_list;
    }

    state parse_srv6_list {
        packet.extract(hdr.srv6_list.next);
        bool next_segment = (bit<32>)hdr.srv6h.segment_left - 1 == (bit<32>)hdr.srv6_list.lastIndex;
        transition select(next_segment) {
            true: mark_current_srv6;
            default: check_last_srv6;
        }
    }

    state mark_current_srv6 {
        local_metadata.next_srv6_sid = hdr.srv6_list.last.segment_id;
        transition check_last_srv6;
    }

    state check_last_srv6 {
        // Working with bit<8> and int<32>, which cannot be cast directly;
        // using bit<32> as the common intermediate type for comparison.
        bool last_segment = (bit<32>)hdr.srv6h.last_entry == (bit<32>)hdr.srv6_list.lastIndex;
        transition select(last_segment) {
            true: parse_srv6_next_hdr;
            false: parse_srv6_list;
        }
    }

    state parse_srv6_next_hdr {
        transition select(hdr.srv6h.next_hdr) {
            IP_PROTO_TCP: parse_tcp;
            IP_PROTO_UDP: parse_udp;
            IP_PROTO_ICMPV6: parse_icmpv6;
            default: accept;
        }
    }
}

control VerifyChecksumImpl(inout parsed_headers_t hdr,
                           inout local_metadata_t meta) {
    // Not used here. We assume all packets have valid checksums; if not, we
    // let the end hosts detect the errors.
    apply { /* EMPTY */ }
}

control IngressPipeImpl (inout parsed_headers_t    hdr,
                         inout local_metadata_t    local_metadata,
                         inout standard_metadata_t standard_metadata) {

    // Drop action shared by many tables.
    action drop() {
        mark_to_drop(standard_metadata);
    }

    // *** L2 BRIDGING
    //
    // Here we define tables to forward packets based on their Ethernet
    // destination address. There are two types of L2 entries that we
    // need to support:
    //
    // 1.
    //    Unicast entries: which will be filled in by the control plane when
    //    the location (port) of new hosts is learned.
    // 2. Broadcast/multicast entries: used to replicate NDP Neighbor
    //    Solicitation (NS) messages to all host-facing ports.
    //
    // For (2), unlike ARP messages in IPv4, which are broadcast to the
    // Ethernet destination address FF:FF:FF:FF:FF:FF, NDP messages are sent
    // to special Ethernet addresses specified by RFC2464. These addresses are
    // prefixed with 33:33 and the last four octets are the last four octets
    // of the IPv6 destination multicast address. The most straightforward way
    // of matching on such IPv6 broadcast/multicast packets, without digging
    // into the details of RFC2464, is to use a ternary match on
    // 33:33:**:**:**:**, where * means "don't care".
    //
    // For this reason, our solution defines two tables: one that matches in
    // an exact fashion (easier to scale on switch ASIC memory) and one that
    // uses ternary matching (which requires more expensive TCAM memories,
    // usually much smaller).

    // --- l2_exact_table (for unicast entries) --------------------------------

    action set_egress_port(port_num_t port_num) {
        standard_metadata.egress_spec = port_num;
    }

    table l2_exact_table {
        key = {
            hdr.ethernet.dst_addr: exact;
        }
        actions = {
            set_egress_port;
            @defaultonly drop;
        }
        const default_action = drop;
        // The @name annotation is used here to provide a name to this table
        // counter, as it will be needed by the compiler to generate the
        // corresponding P4Info entity.
        @name("l2_exact_table_counter")
        counters = direct_counter(CounterType.packets_and_bytes);
    }

    // --- l2_ternary_table (for broadcast/multicast entries) ------------------

    action set_multicast_group(mcast_group_id_t gid) {
        // gid will be used by the Packet Replication Engine (PRE) in the
        // Traffic Manager (located right after the ingress pipeline) to
        // replicate a packet to multiple egress ports, specified by the
        // control plane by means of P4Runtime MulticastGroupEntry messages.
        standard_metadata.mcast_grp = gid;
        local_metadata.is_multicast = true;
    }

    table l2_ternary_table {
        key = {
            hdr.ethernet.dst_addr: ternary;
        }
        actions = {
            set_multicast_group;
            @defaultonly drop;
        }
        const default_action = drop;
        @name("l2_ternary_table_counter")
        counters = direct_counter(CounterType.packets_and_bytes);
    }

    // *** TODO EXERCISE 5 (IPV6 ROUTING)
    //
    // 1. Create a table to handle NDP messages to resolve the MAC address of
    //    the switch. This table should:
    //    - match on hdr.ndp.target_ipv6_addr (exact match)
    //    - provide action "ndp_ns_to_na" (look in snippets.p4)
    //    - default_action should be "NoAction"
    //
    // 2. Create a table to handle IPv6 routing: create an L2 my station table
    //    (hit when the Ethernet destination address is the switch address).
    //    This table should not do anything to the packet (i.e., NoAction),
    //    but the control block below should use the result (table.hit) to
    //    decide how to process the packet.
    //
    // 3. Create a table for IPv6 routing. An action selector should be used
    //    to pick a next hop MAC address according to a hash of packet header
    //    fields (IPv6 source/destination address and the flow label). Look in
    //    snippets.p4 for an example of an action selector and a table using
    //    it.
    //
    // You can name your tables whatever you like. You will need to fill
    // the names in elsewhere in this exercise.

    // *** TODO EXERCISE 6 (SRV6)
    //
    // Implement tables to provide the SRv6 logic.

    // *** ACL
    //
    // Provides ways to override a previous forwarding decision, for example
    // requiring that a packet is cloned/sent to the CPU, or dropped.
    //
    // We use this table to clone all NDP packets to the control plane, so as
    // to enable host discovery. When the location of a new host is
    // discovered, the controller is expected to update the L2 and L3 tables
    // with the corresponding bridging and routing entries.

    action send_to_cpu() {
        standard_metadata.egress_spec = CPU_PORT;
    }

    action clone_to_cpu() {
        // Cloning is achieved by using a v1model-specific primitive.
        // Here we set the type of clone operation (ingress-to-egress
        // pipeline), the clone session ID (the CPU one), and the metadata
        // fields we want to preserve for the cloned packet replica.
        clone3(CloneType.I2E, CPU_CLONE_SESSION_ID, { standard_metadata.ingress_port });
    }

    table acl_table {
        key = {
            standard_metadata.ingress_port: ternary;
            hdr.ethernet.dst_addr: ternary;
            hdr.ethernet.src_addr: ternary;
            hdr.ethernet.ether_type: ternary;
            local_metadata.ip_proto: ternary;
            local_metadata.icmp_type: ternary;
            local_metadata.l4_src_port: ternary;
            local_metadata.l4_dst_port: ternary;
        }
        actions = {
            send_to_cpu;
            clone_to_cpu;
            drop;
        }
        @name("acl_table_counter")
        counters = direct_counter(CounterType.packets_and_bytes);
    }

    apply {

        if (hdr.cpu_out.isValid()) {
            // *** TODO EXERCISE 4
            // Implement logic such that if this is a packet-out from the
            // controller:
            // 1. Set the packet egress port to that found in the cpu_out header
            // 2. Remove (set invalid) the cpu_out header
            // 3. Exit the pipeline here (no need to go through other tables)
        }

        bool do_l3_l2 = true;

        if (hdr.icmpv6.isValid() && hdr.icmpv6.type == ICMP6_TYPE_NS) {
            // *** TODO EXERCISE 5
            // Insert logic to handle NDP messages to resolve the MAC address
            // of the switch. You should apply the NDP reply table created
            // before. If this is an NDP NS packet, i.e., if a matching entry
            // is found, unset the "do_l3_l2" flag to skip the L3 and L2
            // tables, as the "ndp_ns_to_na" action already sets an egress
            // port.
        }

        if (do_l3_l2) {
            // *** TODO EXERCISE 5
            // Insert logic to match the My Station table and, upon hit, the
            // routing table. You should also add a conditional to drop the
            // packet if the hop_limit reaches 0.

            // *** TODO EXERCISE 6
            // Insert logic to match the SRv6 My SID and Transit tables as
            // well as logic to perform PSP behavior. HINT: This logic belongs
            // somewhere between checking the switch's my station table and
            // applying the routing table.

            // L2 bridging logic. Apply the exact table first...
if (!l2_exact_table.apply().hit) { // ...if an entry is NOT found, apply the ternary one in case // this is a multicast/broadcast NDP NS packet. l2_ternary_table.apply(); } } // Lastly, apply the ACL table. acl_table.apply(); } } control EgressPipeImpl (inout parsed_headers_t hdr, inout local_metadata_t local_metadata, inout standard_metadata_t standard_metadata) { apply { if (standard_metadata.egress_port == CPU_PORT) { // *** TODO EXERCISE 4 // Implement logic such that if the packet is to be forwarded to the // CPU port, e.g., if in ingress we matched on the ACL table with // action send/clone_to_cpu... // 1. Set cpu_in header as valid // 2. Set the cpu_in.ingress_port field to the original packet's // ingress port (standard_metadata.ingress_port). } // If this is a multicast packet (flag set by l2_ternary_table), make // sure we are not replicating the packet on the same port where it was // received. This is useful to avoid broadcasting NDP requests on the // ingress port. if (local_metadata.is_multicast == true && standard_metadata.ingress_port == standard_metadata.egress_port) { mark_to_drop(standard_metadata); } } } control ComputeChecksumImpl(inout parsed_headers_t hdr, inout local_metadata_t local_metadata) { apply { // The following is used to update the ICMPv6 checksum of NDP // NA packets generated by the ndp reply table in the ingress pipeline. // This function is executed only if the NDP header is present. 
update_checksum(hdr.ndp.isValid(), { hdr.ipv6.src_addr, hdr.ipv6.dst_addr, hdr.ipv6.payload_len, 8w0, hdr.ipv6.next_hdr, hdr.icmpv6.type, hdr.icmpv6.code, hdr.ndp.flags, hdr.ndp.target_ipv6_addr, hdr.ndp.type, hdr.ndp.length, hdr.ndp.target_mac_addr }, hdr.icmpv6.checksum, HashAlgorithm.csum16 ); } } control DeparserImpl(packet_out packet, in parsed_headers_t hdr) { apply { packet.emit(hdr.cpu_in); packet.emit(hdr.ethernet); packet.emit(hdr.ipv4); packet.emit(hdr.ipv6); packet.emit(hdr.srv6h); packet.emit(hdr.srv6_list); packet.emit(hdr.tcp); packet.emit(hdr.udp); packet.emit(hdr.icmp); packet.emit(hdr.icmpv6); packet.emit(hdr.ndp); } } V1Switch( ParserImpl(), VerifyChecksumImpl(), IngressPipeImpl(), EgressPipeImpl(), ComputeChecksumImpl(), DeparserImpl() ) main; ================================================ FILE: p4src/snippets.p4 ================================================ //------------------------------------------------------------------------------ // SNIPPETS FOR EXERCISE 5 (IPV6 ROUTING) //------------------------------------------------------------------------------ // Action that transforms an NDP NS packet into an NDP NA one for the given // target MAC address. The action also sets the egress port to the ingress // one where the NDP NS packet was received. action ndp_ns_to_na(mac_addr_t target_mac) { hdr.ethernet.src_addr = target_mac; hdr.ethernet.dst_addr = IPV6_MCAST_01; ipv6_addr_t host_ipv6_tmp = hdr.ipv6.src_addr; hdr.ipv6.src_addr = hdr.ndp.target_ipv6_addr; hdr.ipv6.dst_addr = host_ipv6_tmp; hdr.ipv6.next_hdr = IP_PROTO_ICMPV6; hdr.icmpv6.type = ICMP6_TYPE_NA; hdr.ndp.flags = NDP_FLAG_ROUTER | NDP_FLAG_OVERRIDE; hdr.ndp.type = NDP_OPT_TARGET_LL_ADDR; hdr.ndp.length = 1; hdr.ndp.target_mac_addr = target_mac; standard_metadata.egress_spec = standard_metadata.ingress_port; } // ECMP action selector definition: action_selector(HashAlgorithm.crc16, 32w1024, 32w16) ecmp_selector; // Example indirect table that uses the ecmp_selector. 
"Selector" match fields // are used as input to the action selector hash function. // table table_with_action_selector { // key = { // hdr_field_1: lpm / exact / ternary; // hdr_field_2: selector; // hdr_field_3: selector; // ... // } // actions = { ... } // implementation = ecmp_selector; // ... // } //------------------------------------------------------------------------------ // SNIPPETS FOR EXERCISE 6 (SRV6) //------------------------------------------------------------------------------ action insert_srv6h_header(bit<8> num_segments) { hdr.srv6h.setValid(); hdr.srv6h.next_hdr = hdr.ipv6.next_hdr; hdr.srv6h.hdr_ext_len = num_segments * 2; hdr.srv6h.routing_type = 4; hdr.srv6h.segment_left = num_segments - 1; hdr.srv6h.last_entry = num_segments - 1; hdr.srv6h.flags = 0; hdr.srv6h.tag = 0; hdr.ipv6.next_hdr = IP_PROTO_SRV6; } action srv6_t_insert_2(ipv6_addr_t s1, ipv6_addr_t s2) { hdr.ipv6.dst_addr = s1; hdr.ipv6.payload_len = hdr.ipv6.payload_len + 40; insert_srv6h_header(2); hdr.srv6_list[0].setValid(); hdr.srv6_list[0].segment_id = s2; hdr.srv6_list[1].setValid(); hdr.srv6_list[1].segment_id = s1; } action srv6_t_insert_3(ipv6_addr_t s1, ipv6_addr_t s2, ipv6_addr_t s3) { hdr.ipv6.dst_addr = s1; hdr.ipv6.payload_len = hdr.ipv6.payload_len + 56; insert_srv6h_header(3); hdr.srv6_list[0].setValid(); hdr.srv6_list[0].segment_id = s3; hdr.srv6_list[1].setValid(); hdr.srv6_list[1].segment_id = s2; hdr.srv6_list[2].setValid(); hdr.srv6_list[2].segment_id = s1; } table srv6_transit { key = { // TODO: Add match fields for SRv6 transit rules; we'll start with the // destination IP address. } actions = { // Note: Single segment header doesn't make sense given PSP // i.e. 
we will pop the SRv6 header when segments_left reaches 0 srv6_t_insert_2; srv6_t_insert_3; // Extra credit: set a metadata field, then push label stack in egress } @name("srv6_transit_table_counter") counters = direct_counter(CounterType.packets_and_bytes); } action srv6_pop() { hdr.ipv6.next_hdr = hdr.srv6h.next_hdr; // SRv6 header is 8 bytes // SRv6 list entry is 16 bytes each // (((bit<16>)hdr.srv6h.last_entry + 1) * 16) + 8; bit<16> srv6h_size = (((bit<16>)hdr.srv6h.last_entry + 1) << 4) + 8; hdr.ipv6.payload_len = hdr.ipv6.payload_len - srv6h_size; hdr.srv6h.setInvalid(); // Need to set MAX_HOPS headers invalid hdr.srv6_list[0].setInvalid(); hdr.srv6_list[1].setInvalid(); hdr.srv6_list[2].setInvalid(); } ================================================ FILE: ptf/lib/__init__.py ================================================ ================================================ FILE: ptf/lib/base_test.py ================================================ # Copyright 2013-present Barefoot Networks, Inc. # Copyright 2018-present Open Networking Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # # Antonin Bas (antonin@barefootnetworks.com) # Carmelo Cascone (carmelo@opennetworking.org) # import logging # https://stackoverflow.com/questions/24812604/hide-scapy-warning-message-ipv6 logging.getLogger("scapy.runtime").setLevel(logging.ERROR) import itertools import Queue import sys import threading import time from StringIO import StringIO from functools import wraps, partial from unittest import SkipTest import google.protobuf.text_format import grpc import ptf import scapy.packet import scapy.utils from google.protobuf import text_format from google.rpc import status_pb2, code_pb2 from ipaddress import ip_address from p4.config.v1 import p4info_pb2 from p4.v1 import p4runtime_pb2, p4runtime_pb2_grpc from ptf import config from ptf import testutils as testutils from ptf.base_tests import BaseTest from ptf.dataplane import match_exp_pkt from ptf.packet import IPv6 from scapy.layers.inet6 import * from scapy.layers.l2 import Ether from scapy.pton_ntop import inet_pton, inet_ntop from scapy.utils6 import in6_getnsma, in6_getnsmac from helper import P4InfoHelper DEFAULT_PRIORITY = 10 IPV6_MCAST_MAC_1 = "33:33:00:00:00:01" SWITCH1_MAC = "00:00:00:00:aa:01" SWITCH2_MAC = "00:00:00:00:aa:02" SWITCH3_MAC = "00:00:00:00:aa:03" HOST1_MAC = "00:00:00:00:00:01" HOST2_MAC = "00:00:00:00:00:02" MAC_BROADCAST = "FF:FF:FF:FF:FF:FF" MAC_FULL_MASK = "FF:FF:FF:FF:FF:FF" MAC_MULTICAST = "33:33:00:00:00:00" MAC_MULTICAST_MASK = "FF:FF:00:00:00:00" SWITCH1_IPV6 = "2001:0:1::1" SWITCH2_IPV6 = "2001:0:2::1" SWITCH3_IPV6 = "2001:0:3::1" SWITCH4_IPV6 = "2001:0:4::1" HOST1_IPV6 = "2001:0000:85a3::8a2e:370:1111" HOST2_IPV6 = "2001:0000:85a3::8a2e:370:2222" IPV6_MASK_ALL = "FFFF:FFFF:FFFF:FFFF:FFFF:FFFF:FFFF:FFFF" ARP_ETH_TYPE = 0x0806 IPV6_ETH_TYPE = 0x86DD ICMPV6_IP_PROTO = 58 NS_ICMPV6_TYPE = 135 NA_ICMPV6_TYPE = 136 # FIXME: this should be removed, use generic packet in test PACKET_IN_INGRESS_PORT_META_ID = 1 def print_inline(text): sys.stdout.write(text) sys.stdout.flush() # See 
https://gist.github.com/carymrobbins/8940382 # functools.partialmethod is introduced in Python 3.4 class partialmethod(partial): def __get__(self, instance, owner): if instance is None: return self return partial(self.func, instance, *(self.args or ()), **(self.keywords or {})) # Convert integer (with length) to binary byte string # Equivalent to Python 3.2 int.to_bytes # See # https://stackoverflow.com/questions/16022556/has-python-3-to-bytes-been-back-ported-to-python-2-7 def stringify(n, length): h = '%x' % n s = ('0' * (len(h) % 2) + h).zfill(length * 2).decode('hex') return s def ipv4_to_binary(addr): bytes_ = [int(b, 10) for b in addr.split('.')] return "".join(chr(b) for b in bytes_) def ipv6_to_binary(addr): ip = ip_address(addr.decode("utf-8")) return ip.packed def mac_to_binary(addr): bytes_ = [int(b, 16) for b in addr.split(':')] return "".join(chr(b) for b in bytes_) def format_pkt_match(received_pkt, expected_pkt): # Taken from PTF dataplane class stdout_save = sys.stdout try: # The scapy packet dissection methods print directly to stdout, # so we have to redirect stdout to a string. sys.stdout = StringIO() print "========== EXPECTED ==========" if isinstance(expected_pkt, scapy.packet.Packet): scapy.packet.ls(expected_pkt) print '--' scapy.utils.hexdump(expected_pkt) print "========== RECEIVED ==========" if isinstance(received_pkt, scapy.packet.Packet): scapy.packet.ls(received_pkt) print '--' scapy.utils.hexdump(received_pkt) print "==============================" return sys.stdout.getvalue() finally: sys.stdout.close() sys.stdout = stdout_save # Restore the original stdout. 
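# NOTE: the byte-conversion helpers above (stringify, ipv4_to_binary,
# ipv6_to_binary, mac_to_binary) are Python 2 code: they rely on
# str.decode('hex') and chr()-built byte strings. A minimal Python 3 sketch of
# the same conversions is shown below; the function names mirror the ones
# above, but this port is illustrative and not part of the tutorial code.

```python
from ipaddress import ip_address


def stringify(n, length):
    """Convert an integer to a big-endian byte string of the given length.

    Python 3 equivalent of the Python 2 helper above, using int.to_bytes.
    """
    return n.to_bytes(length, byteorder="big")


def ipv4_to_binary(addr):
    """'10.0.0.1' -> b'\\x0a\\x00\\x00\\x01' (4 raw bytes)."""
    return bytes(int(b) for b in addr.split("."))


def ipv6_to_binary(addr):
    """Packed 16-byte representation of an IPv6 address string."""
    return ip_address(addr).packed


def mac_to_binary(addr):
    """'00:00:00:00:00:1a' -> 6 raw bytes."""
    return bytes(int(b, 16) for b in addr.split(":"))
```

# P4Runtime match fields and action parameters carry values as raw byte
# strings, which is why the tests need helpers like these in the first place.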
def format_pb_msg_match(received_msg, expected_msg): result = StringIO() result.write("========== EXPECTED PROTO ==========\n") result.write(text_format.MessageToString(expected_msg)) result.write("========== RECEIVED PROTO ==========\n") result.write(text_format.MessageToString(received_msg)) result.write("==============================\n") val = result.getvalue() result.close() return val def pkt_mac_swap(pkt): orig_dst = pkt[Ether].dst pkt[Ether].dst = pkt[Ether].src pkt[Ether].src = orig_dst return pkt def pkt_route(pkt, mac_dst): pkt[Ether].src = pkt[Ether].dst pkt[Ether].dst = mac_dst return pkt def pkt_decrement_ttl(pkt): if IP in pkt: pkt[IP].ttl -= 1 elif IPv6 in pkt: pkt[IPv6].hlim -= 1 return pkt def genNdpNsPkt(target_ip, src_mac=HOST1_MAC, src_ip=HOST1_IPV6): nsma = in6_getnsma(inet_pton(socket.AF_INET6, target_ip)) d = inet_ntop(socket.AF_INET6, nsma) dm = in6_getnsmac(nsma) p = Ether(dst=dm) / IPv6(dst=d, src=src_ip, hlim=255) p /= ICMPv6ND_NS(tgt=target_ip) p /= ICMPv6NDOptSrcLLAddr(lladdr=src_mac) return p def genNdpNaPkt(target_ip, target_mac, src_mac=SWITCH1_MAC, dst_mac=IPV6_MCAST_MAC_1, src_ip=SWITCH1_IPV6, dst_ip=HOST1_IPV6): p = Ether(src=src_mac, dst=dst_mac) p /= IPv6(dst=dst_ip, src=src_ip, hlim=255) p /= ICMPv6ND_NA(tgt=target_ip) p /= ICMPv6NDOptDstLLAddr(lladdr=target_mac) return p class P4RuntimeErrorFormatException(Exception): """Used to indicate that the gRPC error Status object returned by the server has an incorrect format. """ def __init__(self, message): super(P4RuntimeErrorFormatException, self).__init__(message) # Used to iterate over the p4.Error messages in a gRPC error Status object class P4RuntimeErrorIterator: def __init__(self, grpc_error): assert (grpc_error.code() == grpc.StatusCode.UNKNOWN) self.grpc_error = grpc_error error = None # The gRPC Python package does not have a convenient way to access the # binary details for the error: they are treated as trailing metadata. 
        for meta in itertools.chain(self.grpc_error.initial_metadata(),
                                    self.grpc_error.trailing_metadata()):
            if meta[0] == "grpc-status-details-bin":
                error = status_pb2.Status()
                error.ParseFromString(meta[1])
                break
        if error is None:
            raise P4RuntimeErrorFormatException("No binary details field")
        if len(error.details) == 0:
            raise P4RuntimeErrorFormatException(
                "Binary details field has empty Any details repeated field")
        self.errors = error.details
        self.idx = 0

    def __iter__(self):
        return self

    def next(self):
        while self.idx < len(self.errors):
            p4_error = p4runtime_pb2.Error()
            one_error_any = self.errors[self.idx]
            if not one_error_any.Unpack(p4_error):
                raise P4RuntimeErrorFormatException(
                    "Cannot convert Any message to p4.Error")
            if p4_error.canonical_code == code_pb2.OK:
                # Skip OK entries; advance the index so we don't loop forever.
                self.idx += 1
                continue
            v = self.idx, p4_error
            self.idx += 1
            return v
        raise StopIteration

# P4Runtime uses a 3-level message in case of an error during the processing of
# a write batch. This means that if we do not wrap the grpc.RpcError inside a
# custom exception, we can end up with a non-helpful exception message in case
# of failure, as only the first level will be printed. In this custom exception
# class, we extract the nested error messages (one for each operation included
# in the batch) in order to print the error code + user-facing message. See the
# P4Runtime documentation for more details on error-reporting.
class P4RuntimeWriteException(Exception): def __init__(self, grpc_error): assert (grpc_error.code() == grpc.StatusCode.UNKNOWN) super(P4RuntimeWriteException, self).__init__() self.errors = [] try: error_iterator = P4RuntimeErrorIterator(grpc_error) for error_tuple in error_iterator: self.errors.append(error_tuple) except P4RuntimeErrorFormatException: raise # just propagate exception for now def __str__(self): message = "Error(s) during Write:\n" for idx, p4_error in self.errors: code_name = code_pb2._CODE.values_by_number[ p4_error.canonical_code].name message += "\t* At index {}: {}, '{}'\n".format( idx, code_name, p4_error.message) return message # This code is common to all tests. setUp() is invoked at the beginning of the # test and tearDown is called at the end, no matter whether the test passed / # failed / errored. # noinspection PyUnresolvedReferences class P4RuntimeTest(BaseTest): def setUp(self): BaseTest.setUp(self) # Setting up PTF dataplane self.dataplane = ptf.dataplane_instance self.dataplane.flush() self._swports = [] for device, port, ifname in config["interfaces"]: self._swports.append(port) self.port1 = self.swports(0) self.port2 = self.swports(1) self.port3 = self.swports(2) grpc_addr = testutils.test_param_get("grpcaddr") if grpc_addr is None: grpc_addr = 'localhost:50051' self.device_id = int(testutils.test_param_get("device_id")) if self.device_id is None: self.fail("Device ID is not set") self.cpu_port = int(testutils.test_param_get("cpu_port")) if self.cpu_port is None: self.fail("CPU port is not set") pltfm = testutils.test_param_get("pltfm") if pltfm is not None and pltfm == 'hw' and getattr(self, "_skip_on_hw", False): raise SkipTest("Skipping test in HW") self.channel = grpc.insecure_channel(grpc_addr) self.stub = p4runtime_pb2_grpc.P4RuntimeStub(self.channel) proto_txt_path = testutils.test_param_get("p4info") # print "Importing p4info proto from", proto_txt_path self.p4info = p4info_pb2.P4Info() with open(proto_txt_path, "rb") as 
                fin:
            google.protobuf.text_format.Merge(fin.read(), self.p4info)

        self.helper = P4InfoHelper(proto_txt_path)

        # Used to store write requests sent to the P4Runtime server, useful for
        # autocleanup of tests (see definition of autocleanup decorator below).
        self.reqs = []

        self.election_id = 1
        self.set_up_stream()

    def set_up_stream(self):
        self.stream_out_q = Queue.Queue()
        self.stream_in_q = Queue.Queue()

        def stream_req_iterator():
            while True:
                p = self.stream_out_q.get()
                if p is None:
                    break
                yield p

        def stream_recv(stream):
            for p in stream:
                self.stream_in_q.put(p)

        self.stream = self.stub.StreamChannel(stream_req_iterator())
        self.stream_recv_thread = threading.Thread(
            target=stream_recv, args=(self.stream,))
        self.stream_recv_thread.start()

        self.handshake()

    def handshake(self):
        req = p4runtime_pb2.StreamMessageRequest()
        arbitration = req.arbitration
        arbitration.device_id = self.device_id
        election_id = arbitration.election_id
        election_id.high = 0
        election_id.low = self.election_id
        self.stream_out_q.put(req)

        rep = self.get_stream_packet("arbitration", timeout=2)
        if rep is None:
            self.fail("Failed to establish handshake")

    def tearDown(self):
        self.tear_down_stream()
        BaseTest.tearDown(self)

    def tear_down_stream(self):
        self.stream_out_q.put(None)
        self.stream_recv_thread.join()

    def get_packet_in(self, timeout=2):
        msg = self.get_stream_packet("packet", timeout)
        if msg is None:
            self.fail("PacketIn message not received")
        else:
            return msg.packet

    def verify_packet_in(self, exp_packet_in_msg, timeout=2):
        rx_packet_in_msg = self.get_packet_in(timeout=timeout)
        # Check payload first, then metadata
        rx_pkt = Ether(rx_packet_in_msg.payload)
        exp_pkt = exp_packet_in_msg.payload
        if not match_exp_pkt(exp_pkt, rx_pkt):
            self.fail("Received PacketIn.payload is not the expected one\n" +
                      format_pkt_match(rx_pkt, exp_pkt))
        rx_meta_dict = {m.metadata_id: m.value
                        for m in rx_packet_in_msg.metadata}
        exp_meta_dict = {m.metadata_id: m.value
                         for m in exp_packet_in_msg.metadata}
        shared_meta = {mid:
                           rx_meta_dict[mid] for mid in rx_meta_dict
                       if mid in exp_meta_dict
                       and rx_meta_dict[mid] == exp_meta_dict[mid]}
        # Compare lengths with != (value comparison), not "is not" (identity).
        if len(rx_meta_dict) != len(exp_meta_dict) \
                or len(shared_meta) != len(exp_meta_dict):
            self.fail("Received PacketIn.metadata is not the expected one\n" +
                      format_pb_msg_match(rx_packet_in_msg, exp_packet_in_msg))

    def get_stream_packet(self, type_, timeout=1):
        start = time.time()
        try:
            while True:
                remaining = timeout - (time.time() - start)
                if remaining < 0:
                    break
                msg = self.stream_in_q.get(timeout=remaining)
                if not msg.HasField(type_):
                    continue
                return msg
        except:  # timeout expired
            pass
        return None

    def send_packet_out(self, packet):
        packet_out_req = p4runtime_pb2.StreamMessageRequest()
        packet_out_req.packet.CopyFrom(packet)
        self.stream_out_q.put(packet_out_req)

    def swports(self, idx):
        if idx >= len(self._swports):
            self.fail("Index {} is out-of-bound of port map".format(idx))
        return self._swports[idx]

    def _write(self, req):
        try:
            return self.stub.Write(req)
        except grpc.RpcError as e:
            if e.code() != grpc.StatusCode.UNKNOWN:
                raise e
            raise P4RuntimeWriteException(e)

    def write_request(self, req, store=True):
        rep = self._write(req)
        if store:
            self.reqs.append(req)
        return rep

    def insert(self, entity):
        if isinstance(entity, list) or isinstance(entity, tuple):
            for e in entity:
                self.insert(e)
            return
        req = self.get_new_write_request()
        update = req.updates.add()
        update.type = p4runtime_pb2.Update.INSERT
        if isinstance(entity, p4runtime_pb2.TableEntry):
            msg_entity = update.entity.table_entry
        elif isinstance(entity, p4runtime_pb2.ActionProfileGroup):
            msg_entity = update.entity.action_profile_group
        elif isinstance(entity, p4runtime_pb2.ActionProfileMember):
            msg_entity = update.entity.action_profile_member
        else:
            self.fail("Entity %s not supported" % entity.__name__)
        msg_entity.CopyFrom(entity)
        self.write_request(req)

    def get_new_write_request(self):
        req = p4runtime_pb2.WriteRequest()
        req.device_id = self.device_id
        election_id = req.election_id
        election_id.high = 0
        election_id.low = self.election_id
        return req

    def insert_pre_multicast_group(self, group_id, ports):
        req = self.get_new_write_request()
        update = req.updates.add()
        update.type = p4runtime_pb2.Update.INSERT
        pre_entry = update.entity.packet_replication_engine_entry
        mg_entry = pre_entry.multicast_group_entry
        mg_entry.multicast_group_id = group_id
        for port in ports:
            replica = mg_entry.replicas.add()
            replica.egress_port = port
            replica.instance = 0
        return req, self.write_request(req)

    def insert_pre_clone_session(self, session_id, ports, cos=0,
                                 packet_length_bytes=0):
        req = self.get_new_write_request()
        update = req.updates.add()
        update.type = p4runtime_pb2.Update.INSERT
        pre_entry = update.entity.packet_replication_engine_entry
        clone_entry = pre_entry.clone_session_entry
        clone_entry.session_id = session_id
        clone_entry.class_of_service = cos
        clone_entry.packet_length_bytes = packet_length_bytes
        for port in ports:
            replica = clone_entry.replicas.add()
            replica.egress_port = port
            replica.instance = 1
        return req, self.write_request(req)

    # Iterates over all requests in reverse order; if they are INSERT updates,
    # replay them as DELETE updates; this is a convenient way to clean-up a lot
    # of switch state.
    def undo_write_requests(self, reqs):
        updates = []
        for req in reversed(reqs):
            for update in reversed(req.updates):
                if update.type == p4runtime_pb2.Update.INSERT:
                    updates.append(update)
        new_req = self.get_new_write_request()
        for update in updates:
            update.type = p4runtime_pb2.Update.DELETE
            new_req.updates.add().CopyFrom(update)
        self._write(new_req)


# This decorator can be used on the runTest method of P4Runtime PTF tests.
# When it is used, undo_write_requests will be called at the end of the test
# (irrespective of whether the test was a failure, a success, or an exception
# was raised). When this is used, all write requests must be performed through
# one of the send_request_* convenience functions, or by calling write_request;
# do not use stub.Write directly!
# Most of the time, it is a great idea to use this decorator, as it makes the
# tests less verbose. In some circumstances it is difficult to use it, in
# particular when the test itself issues DELETE requests to remove some
# objects. In this case you will want to do the cleanup yourself (in the
# tearDown function, for example); you can still use undo_write_requests,
# which should make things easier.
# Because the PTF test writer needs to choose whether or not to use
# autocleanup, it seems more appropriate to define a decorator for this rather
# than do it unconditionally in the P4RuntimeTest tearDown method.
def autocleanup(f):
    @wraps(f)
    def handle(*args, **kwargs):
        test = args[0]
        assert (isinstance(test, P4RuntimeTest))
        try:
            return f(*args, **kwargs)
        finally:
            test.undo_write_requests(test.reqs)
    return handle


def skip_on_hw(cls):
    cls._skip_on_hw = True
    return cls

================================================
FILE: ptf/lib/chassis_config.pb.txt
================================================
description: "Config for PTF tests using virtual interfaces"
chassis { platform: PLT_P4_SOFT_SWITCH name: "bmv2-simple_switch" }
nodes { id: 1 slot: 1 index: 1 }
singleton_ports { id: 1 name: "veth0" slot: 1 port: 1 channel: 1 speed_bps: 100000000000 config_params { admin_state: ADMIN_STATE_ENABLED } node: 1 }
singleton_ports { id: 2 name: "veth2" slot: 1 port: 2 channel: 1 speed_bps: 100000000000 config_params { admin_state: ADMIN_STATE_ENABLED } node: 1 }
singleton_ports { id: 3 name: "veth4" slot: 1 port: 3 channel: 1 speed_bps: 100000000000 config_params { admin_state: ADMIN_STATE_ENABLED } node: 1 }
singleton_ports { id: 4 name: "veth6" slot: 1 port: 4 channel: 1 speed_bps: 100000000000 config_params { admin_state: ADMIN_STATE_ENABLED } node: 1 }
singleton_ports { id: 5 name: "veth8" slot: 1 port: 5 channel: 1 speed_bps: 100000000000 config_params { admin_state: ADMIN_STATE_ENABLED } node: 1 }
singleton_ports { id: 6 name: "veth10" slot: 1 port: 6 channel: 1 speed_bps:
100000000000 config_params { admin_state: ADMIN_STATE_ENABLED } node: 1 }
singleton_ports { id: 7 name: "veth12" slot: 1 port: 7 channel: 1 speed_bps: 100000000000 config_params { admin_state: ADMIN_STATE_ENABLED } node: 1 }
singleton_ports { id: 8 name: "veth14" slot: 1 port: 8 channel: 1 speed_bps: 100000000000 config_params { admin_state: ADMIN_STATE_ENABLED } node: 1 }

================================================
FILE: ptf/lib/convert.py
================================================
# Copyright 2017-present Open Networking Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import math
import re
import socket

import ipaddress

"""
This package contains several helper functions for encoding to and decoding
from byte strings:
- integers
- IPv4 address strings
- IPv6 address strings
- Ethernet address strings
"""

mac_pattern = re.compile(r'^([\da-fA-F]{2}:){5}([\da-fA-F]{2})$')


def matchesMac(mac_addr_string):
    return mac_pattern.match(mac_addr_string) is not None


def encodeMac(mac_addr_string):
    return mac_addr_string.replace(':', '').decode('hex')


def decodeMac(encoded_mac_addr):
    return ':'.join(s.encode('hex') for s in encoded_mac_addr)


ip_pattern = re.compile(r'^(\d{1,3}\.){3}(\d{1,3})$')


def matchesIPv4(ip_addr_string):
    return ip_pattern.match(ip_addr_string) is not None


def encodeIPv4(ip_addr_string):
    return socket.inet_aton(ip_addr_string)


def decodeIPv4(encoded_ip_addr):
    return socket.inet_ntoa(encoded_ip_addr)


def matchesIPv6(ip_addr_string):
    try:
        addr = ipaddress.ip_address(unicode(ip_addr_string, "utf-8"))
        return isinstance(addr, ipaddress.IPv6Address)
    except ValueError:
        return False


def encodeIPv6(ip_addr_string):
    return socket.inet_pton(socket.AF_INET6, ip_addr_string)


def bitwidthToBytes(bitwidth):
    return int(math.ceil(bitwidth / 8.0))


def encodeNum(number, bitwidth):
    byte_len = bitwidthToBytes(bitwidth)
    num_str = '%x' % number
    if number >= 2 ** bitwidth:
        raise Exception(
            "Number, %d, does not fit in %d bits" % (number, bitwidth))
    return ('0' * (byte_len * 2 - len(num_str)) + num_str).decode('hex')


def decodeNum(encoded_number):
    return int(encoded_number.encode('hex'), 16)


def encode(x, bitwidth):
    'Tries to infer the type of `x` and encode it'
    byte_len = bitwidthToBytes(bitwidth)
    if (type(x) == list or type(x) == tuple) and len(x) == 1:
        x = x[0]
    encoded_bytes = None
    if type(x) == str:
        if matchesMac(x):
            encoded_bytes = encodeMac(x)
        elif matchesIPv4(x):
            encoded_bytes = encodeIPv4(x)
        elif matchesIPv6(x):
            encoded_bytes = encodeIPv6(x)
        else:
            # Assume that the string is already encoded
            encoded_bytes = x
    elif type(x) == int:
        encoded_bytes = encodeNum(x, bitwidth)
    else:
        raise Exception("Encoding objects of %r is not supported" % type(x))
    assert (len(encoded_bytes) == byte_len)
    return encoded_bytes


def test():
    # TODO These tests should be moved out of main eventually
    mac = "aa:bb:cc:dd:ee:ff"
    enc_mac = encodeMac(mac)
    assert (enc_mac == '\xaa\xbb\xcc\xdd\xee\xff')
    dec_mac = decodeMac(enc_mac)
    assert (mac == dec_mac)

    ip = "10.0.0.1"
    enc_ip = encodeIPv4(ip)
    assert (enc_ip == '\x0a\x00\x00\x01')
    dec_ip = decodeIPv4(enc_ip)
    assert (ip == dec_ip)

    num = 1337
    byte_len = 5
    enc_num = encodeNum(num, byte_len * 8)
    assert (enc_num == '\x00\x00\x00\x05\x39')
    dec_num = decodeNum(enc_num)
    assert (num == dec_num)

    assert (matchesIPv4('10.0.0.1'))
    assert (not matchesIPv4('10.0.0.1.5'))
    assert (not matchesIPv4('1000.0.0.1'))
    assert (not matchesIPv4('10001'))

    assert (matchesIPv6('::1'))
    assert (encode('1:2:3:4:5:6:7:8', 128) ==
            '\x00\x01\x00\x02\x00\x03\x00\x04\x00\x05\x00\x06\x00\x07\x00\x08')
    assert (matchesIPv6('2001:0000:85a3::8a2e:370:1111'))
    assert (not matchesIPv6('10.0.0.1'))

    assert (encode(mac, 6 * 8) == enc_mac)
    assert (encode(ip, 4 * 8) == enc_ip)
    assert (encode(num, 5 * 8) == enc_num)
    assert (encode((num,), 5 * 8) == enc_num)
    assert (encode([num], 5 * 8) == enc_num)

    num = 256
    try:
        encodeNum(num, 8)
        raise Exception("expected exception")
    except Exception as e:
        print e


if __name__ == '__main__':
    test()

================================================
FILE: ptf/lib/helper.py
================================================
# Copyright 2017-present Open Networking Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import re

import google.protobuf.text_format
from p4.config.v1 import p4info_pb2
from p4.v1 import p4runtime_pb2

from convert import encode


def get_match_field_value(match_field):
    match_type = match_field.WhichOneof("field_match_type")
    if match_type == 'valid':
        return match_field.valid.value
    elif match_type == 'exact':
        return match_field.exact.value
    elif match_type == 'lpm':
        return match_field.lpm.value, match_field.lpm.prefix_len
    elif match_type == 'ternary':
        return match_field.ternary.value, match_field.ternary.mask
    elif match_type == 'range':
        return match_field.range.low, match_field.range.high
    else:
        raise Exception("Unsupported match type with type %r" % match_type)


class P4InfoHelper(object):
    def __init__(self, p4_info_filepath):
        p4info = p4info_pb2.P4Info()
        # Load the p4info file into a skeleton P4Info object
        with open(p4_info_filepath) as p4info_f:
            google.protobuf.text_format.Merge(p4info_f.read(), p4info)
        self.p4info = p4info
        self.next_mbr_id = 1
        self.next_grp_id = 1

    def get_next_mbr_id(self):
        mbr_id = self.next_mbr_id
        self.next_mbr_id = self.next_mbr_id + 1
        return mbr_id

    def get_next_grp_id(self):
        grp_id = self.next_grp_id
        self.next_grp_id = self.next_grp_id + 1
        return grp_id

    def get(self, entity_type, name=None, id=None):
        if name is not None and id is not None:
            raise AssertionError("name or id must be None")
        for o in getattr(self.p4info, entity_type):
            pre = o.preamble
            if name:
                if pre.name == name:
                    return o
            else:
                if pre.id == id:
                    return o
        if name:
            raise AttributeError("Could not find %r of type %s"
                                 % (name, entity_type))
        else:
            raise AttributeError("Could "
                                 "not find id %r of type %s"
                                 % (id, entity_type))

    def get_id(self, entity_type, name):
        return self.get(entity_type, name=name).preamble.id

    def get_name(self, entity_type, id):
        return self.get(entity_type, id=id).preamble.name

    def __getattr__(self, attr):
        # Synthesize convenience functions for name to id lookups for top-level
        # entities e.g. get_tables_id(name_string) or
        # get_actions_id(name_string)
        m = re.search(r"^get_(\w+)_id$", attr)
        if m:
            primitive = m.group(1)
            return lambda name: self.get_id(primitive, name)
        # Synthesize convenience functions for id to name lookups
        # e.g. get_tables_name(id) or get_actions_name(id)
        m = re.search(r"^get_(\w+)_name$", attr)
        if m:
            primitive = m.group(1)
            return lambda x: self.get_name(primitive, x)
        raise AttributeError(
            "%r object has no attribute %r (check your P4Info)"
            % (self.__class__, attr))

    def get_match_field(self, table_name, name=None, id=None):
        t = None
        for t in self.p4info.tables:
            if t.preamble.name == table_name:
                break
        if not t:
            raise AttributeError("No such table %r in P4Info" % table_name)
        for mf in t.match_fields:
            if name is not None:
                if mf.name == name:
                    return mf
            elif id is not None:
                if mf.id == id:
                    return mf
        raise AttributeError("%r has no match field %r (check your P4Info)"
                             % (table_name, name if name is not None else id))

    def get_packet_metadata(self, meta_type, name=None, id=None):
        for t in self.p4info.controller_packet_metadata:
            pre = t.preamble
            if pre.name == meta_type:
                for m in t.metadata:
                    if name is not None:
                        if m.name == name:
                            return m
                    elif id is not None:
                        if m.id == id:
                            return m
        raise AttributeError(
            "ControllerPacketMetadata %r has no metadata %r (check your "
            "P4Info)" % (meta_type, name if name is not None else id))

    def get_match_field_id(self, table_name, match_field_name):
        return self.get_match_field(table_name, name=match_field_name).id

    def get_match_field_name(self, table_name, match_field_id):
        return self.get_match_field(table_name, id=match_field_id).name

    def get_match_field_pb(self, table_name,
                           match_field_name, value):
        p4info_match = self.get_match_field(table_name, match_field_name)
        bitwidth = p4info_match.bitwidth
        p4runtime_match = p4runtime_pb2.FieldMatch()
        p4runtime_match.field_id = p4info_match.id
        match_type = p4info_match.match_type
        if match_type == p4info_pb2.MatchField.EXACT:
            exact = p4runtime_match.exact
            exact.value = encode(value, bitwidth)
        elif match_type == p4info_pb2.MatchField.LPM:
            lpm = p4runtime_match.lpm
            lpm.value = encode(value[0], bitwidth)
            lpm.prefix_len = value[1]
        elif match_type == p4info_pb2.MatchField.TERNARY:
            ternary = p4runtime_match.ternary
            ternary.value = encode(value[0], bitwidth)
            ternary.mask = encode(value[1], bitwidth)
        elif match_type == p4info_pb2.MatchField.RANGE:
            range_ = p4runtime_match.range
            range_.low = encode(value[0], bitwidth)
            range_.high = encode(value[1], bitwidth)
        else:
            raise Exception("Unsupported match type with type %r" % match_type)
        return p4runtime_match

    def get_action_param(self, action_name, name=None, id=None):
        for a in self.p4info.actions:
            pre = a.preamble
            if pre.name == action_name:
                for p in a.params:
                    if name is not None:
                        if p.name == name:
                            return p
                    elif id is not None:
                        if p.id == id:
                            return p
        raise AttributeError(
            "Action %r has no param %r (check your P4Info)"
            % (action_name, name if name is not None else id))

    def get_action_param_id(self, action_name, param_name):
        return self.get_action_param(action_name, name=param_name).id

    def get_action_param_name(self, action_name, param_id):
        return self.get_action_param(action_name, id=param_id).name

    def get_action_param_pb(self, action_name, param_name, value):
        p4info_param = self.get_action_param(action_name, param_name)
        p4runtime_param = p4runtime_pb2.Action.Param()
        p4runtime_param.param_id = p4info_param.id
        p4runtime_param.value = encode(value, p4info_param.bitwidth)
        return p4runtime_param

    def build_table_entry(self, table_name, match_fields=None,
                          default_action=False, action_name=None,
                          action_params=None, group_id=None, priority=None):
        table_entry = p4runtime_pb2.TableEntry()
        table_entry.table_id = self.get_tables_id(table_name)
        if priority is not None:
            table_entry.priority = priority
        if match_fields:
            table_entry.match.extend([
                self.get_match_field_pb(table_name, match_field_name, value)
                for match_field_name, value in match_fields.iteritems()
            ])
        if default_action:
            table_entry.is_default_action = True
        if action_name:
            action = table_entry.action.action
            action.CopyFrom(self.build_action(action_name, action_params))
        if group_id:
            table_entry.action.action_profile_group_id = group_id
        return table_entry

    def build_action(self, action_name, action_params=None):
        action = p4runtime_pb2.Action()
        action.action_id = self.get_actions_id(action_name)
        if action_params:
            action.params.extend([
                self.get_action_param_pb(action_name, field_name, value)
                for field_name, value in action_params.iteritems()
            ])
        return action

    def build_act_prof_member(self, act_prof_name, action_name,
                              action_params=None, member_id=None):
        member = p4runtime_pb2.ActionProfileMember()
        member.action_profile_id = self.get_action_profiles_id(act_prof_name)
        member.member_id = member_id if member_id else self.get_next_mbr_id()
        member.action.CopyFrom(self.build_action(action_name, action_params))
        return member

    def build_act_prof_group(self, act_prof_name, group_id, actions=()):
        messages = []
        group = p4runtime_pb2.ActionProfileGroup()
        group.action_profile_id = self.get_action_profiles_id(act_prof_name)
        group.group_id = group_id
        for action in actions:
            action_name = action[0]
            if len(action) > 1:
                action_params = action[1]
            else:
                action_params = None
            member = self.build_act_prof_member(
                act_prof_name, action_name, action_params)
            messages.extend([member])
            group_member = p4runtime_pb2.ActionProfileGroup.Member()
            group_member.member_id = member.member_id
            group_member.weight = 1
            group.members.extend([group_member])
        messages.append(group)
        return messages

    def build_packet_out(self, payload, metadata=None):
        packet_out = p4runtime_pb2.PacketOut()
        packet_out.payload = payload
        if not metadata:
            return \
                packet_out
        for name, value in metadata.items():
            p4info_meta = self.get_packet_metadata("packet_out", name)
            meta = packet_out.metadata.add()
            meta.metadata_id = p4info_meta.id
            meta.value = encode(value, p4info_meta.bitwidth)
        return packet_out

    def build_packet_in(self, payload, metadata=None):
        packet_in = p4runtime_pb2.PacketIn()
        packet_in.payload = payload
        if not metadata:
            return packet_in
        for name, value in metadata.items():
            p4info_meta = self.get_packet_metadata("packet_in", name)
            meta = packet_in.metadata.add()
            meta.metadata_id = p4info_meta.id
            meta.value = encode(value, p4info_meta.bitwidth)
        return packet_in

================================================
FILE: ptf/lib/port_map.json
================================================
[
  { "ptf_port": 0, "p4_port": 1, "iface_name": "veth1" },
  { "ptf_port": 1, "p4_port": 2, "iface_name": "veth3" },
  { "ptf_port": 2, "p4_port": 3, "iface_name": "veth5" },
  { "ptf_port": 3, "p4_port": 4, "iface_name": "veth7" },
  { "ptf_port": 4, "p4_port": 5, "iface_name": "veth9" },
  { "ptf_port": 5, "p4_port": 6, "iface_name": "veth11" },
  { "ptf_port": 6, "p4_port": 7, "iface_name": "veth13" },
  { "ptf_port": 7, "p4_port": 8, "iface_name": "veth15" }
]

================================================
FILE: ptf/lib/runner.py
================================================
#!/usr/bin/env python2
# Copyright 2013-present Barefoot Networks, Inc.
# Copyright 2018-present Open Networking Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import Queue
import argparse
import json
import logging
import os
import re
import subprocess
import sys
import threading
import time
from collections import OrderedDict

import google.protobuf.text_format
import grpc
from p4.v1 import p4runtime_pb2, p4runtime_pb2_grpc

PTF_ROOT = os.path.dirname(os.path.realpath(__file__))

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("PTF runner")


def error(msg, *args, **kwargs):
    logger.error(msg, *args, **kwargs)


def warn(msg, *args, **kwargs):
    logger.warn(msg, *args, **kwargs)


def info(msg, *args, **kwargs):
    logger.info(msg, *args, **kwargs)


def debug(msg, *args, **kwargs):
    logger.debug(msg, *args, **kwargs)


def check_ifaces(ifaces):
    """
    Checks that required interfaces exist.
    """
    ifconfig_out = subprocess.check_output(['ifconfig'])
    iface_list = re.findall(r'^([a-zA-Z0-9]+)', ifconfig_out, re.S | re.M)
    present_ifaces = set(iface_list)
    ifaces = set(ifaces)
    return ifaces <= present_ifaces


def build_bmv2_config(bmv2_json_path):
    """
    Builds the device config for BMv2.
    """
    with open(bmv2_json_path) as f:
        return f.read()


def update_config(p4info_path, bmv2_json_path, grpc_addr, device_id):
    """
    Performs a SetForwardingPipelineConfig on the device.
    """
    channel = grpc.insecure_channel(grpc_addr)
    stub = p4runtime_pb2_grpc.P4RuntimeStub(channel)

    debug("Sending P4 config")

    # Send master arbitration via stream channel.
    # This should go in a library, to be re-used also by base_test.py.
    stream_out_q = Queue.Queue()
    stream_in_q = Queue.Queue()

    def stream_req_iterator():
        while True:
            p = stream_out_q.get()
            if p is None:
                break
            yield p

    def stream_recv(stream):
        for p in stream:
            stream_in_q.put(p)

    def get_stream_packet(type_, timeout=1):
        start = time.time()
        try:
            while True:
                remaining = timeout - (time.time() - start)
                if remaining < 0:
                    break
                msg = stream_in_q.get(timeout=remaining)
                if not msg.HasField(type_):
                    continue
                return msg
        except:  # timeout expired
            pass
        return None

    stream = stub.StreamChannel(stream_req_iterator())
    stream_recv_thread = threading.Thread(target=stream_recv, args=(stream,))
    stream_recv_thread.start()

    req = p4runtime_pb2.StreamMessageRequest()
    arbitration = req.arbitration
    arbitration.device_id = device_id
    election_id = arbitration.election_id
    election_id.high = 0
    election_id.low = 1
    stream_out_q.put(req)

    rep = get_stream_packet("arbitration", timeout=5)
    if rep is None:
        error("Failed to establish handshake")
        return False

    try:
        # Set pipeline config.
        request = p4runtime_pb2.SetForwardingPipelineConfigRequest()
        request.device_id = device_id
        election_id = request.election_id
        election_id.high = 0
        election_id.low = 1
        config = request.config
        with open(p4info_path, 'r') as p4info_f:
            google.protobuf.text_format.Merge(p4info_f.read(), config.p4info)
        config.p4_device_config = build_bmv2_config(bmv2_json_path)
        request.action = \
            p4runtime_pb2.SetForwardingPipelineConfigRequest.VERIFY_AND_COMMIT
        try:
            stub.SetForwardingPipelineConfig(request)
        except Exception as e:
            error("Error during SetForwardingPipelineConfig")
            error(str(e))
            return False
        return True
    finally:
        stream_out_q.put(None)
        stream_recv_thread.join()


def run_test(p4info_path, grpc_addr, device_id, cpu_port, ptfdir,
             port_map_path, extra_args=()):
    """
    Runs PTF tests included in the provided directory.
    Device must be running and configured with the appropriate P4 program.
    """
    # TODO: check schema?
# "ptf_port" is ignored for now, we assume that ports are provided by # increasing values of ptf_port, in the range [0, NUM_IFACES[. port_map = OrderedDict() with open(port_map_path, 'r') as port_map_f: port_list = json.load(port_map_f) for entry in port_list: p4_port = entry["p4_port"] iface_name = entry["iface_name"] port_map[p4_port] = iface_name if not check_ifaces(port_map.values()): error("Some interfaces are missing") return False ifaces = [] # FIXME # find base_test.py pypath = os.path.dirname(os.path.abspath(__file__)) if 'PYTHONPATH' in os.environ: os.environ['PYTHONPATH'] += ":" + pypath else: os.environ['PYTHONPATH'] = pypath for iface_idx, iface_name in port_map.items(): ifaces.extend(['-i', '{}@{}'.format(iface_idx, iface_name)]) cmd = ['ptf'] cmd.extend(['--test-dir', ptfdir]) cmd.extend(ifaces) test_params = 'p4info=\'{}\''.format(p4info_path) test_params += ';grpcaddr=\'{}\''.format(grpc_addr) test_params += ';device_id=\'{}\''.format(device_id) test_params += ';cpu_port=\'{}\''.format(cpu_port) cmd.append('--test-params={}'.format(test_params)) cmd.extend(extra_args) debug("Executing PTF command: {}".format(' '.join(cmd))) try: # we want the ptf output to be sent to stdout p = subprocess.Popen(cmd) p.wait() except: error("Error when running PTF tests") return False return p.returncode == 0 def check_ptf(): try: with open(os.devnull, 'w') as devnull: subprocess.check_call(['ptf', '--version'], stdout=devnull, stderr=devnull) return True except subprocess.CalledProcessError: return True except OSError: # PTF not found return False # noinspection PyTypeChecker def main(): parser = argparse.ArgumentParser( description="Compile the provided P4 program and run PTF tests on it") parser.add_argument('--p4info', help='Location of p4info proto in text format', type=str, action="store", required=True) parser.add_argument('--bmv2-json', help='Location BMv2 JSON output from p4c (if target is bmv2)', type=str, action="store", required=False) 
    parser.add_argument('--grpc-addr',
                        help='Address to use to connect to P4 Runtime server',
                        type=str, default='localhost:50051')
    parser.add_argument('--device-id',
                        help='Device id for device under test',
                        type=int, default=1)
    parser.add_argument('--cpu-port',
                        help='CPU port ID of device under test',
                        type=int, required=True)
    parser.add_argument('--ptf-dir',
                        help='Directory containing PTF tests',
                        type=str, required=True)
    parser.add_argument('--port-map',
                        help='Path to JSON port mapping',
                        type=str, required=True)
    args, unknown_args = parser.parse_known_args()

    if not check_ptf():
        error("Cannot find PTF executable")
        sys.exit(1)

    if not os.path.exists(args.p4info):
        error("P4Info file {} not found".format(args.p4info))
        sys.exit(1)
    if not os.path.exists(args.bmv2_json):
        error("BMv2 json file {} not found".format(args.bmv2_json))
        sys.exit(1)
    if not os.path.exists(args.port_map):
        print "Port map path '{}' does not exist".format(args.port_map)
        sys.exit(1)

    try:
        success = update_config(p4info_path=args.p4info,
                                bmv2_json_path=args.bmv2_json,
                                grpc_addr=args.grpc_addr,
                                device_id=args.device_id)
        if not success:
            sys.exit(2)
        success = run_test(p4info_path=args.p4info,
                           device_id=args.device_id,
                           grpc_addr=args.grpc_addr,
                           cpu_port=args.cpu_port,
                           ptfdir=args.ptf_dir,
                           port_map_path=args.port_map,
                           extra_args=unknown_args)
        if not success:
            sys.exit(3)
    except Exception:
        raise


if __name__ == '__main__':
    main()

================================================
FILE: ptf/lib/start_bmv2.sh
================================================
#!/usr/bin/env bash
set -xe

DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"

CPU_PORT=255
GRPC_PORT=28000

# Create veths
for idx in 0 1 2 3 4 5 6 7; do
    intf0="veth$(($idx*2))"
    intf1="veth$(($idx*2+1))"
    if ! \
            ip link show $intf0 &> /dev/null; then
        ip link add name $intf0 type veth peer name $intf1
        ip link set dev $intf0 up
        ip link set dev $intf1 up

        # Set the MTU of these interfaces to be larger than default of
        # 1500 bytes, so that P4 behavioral-model testing can be done
        # on jumbo frames.
        ip link set $intf0 mtu 9500
        ip link set $intf1 mtu 9500

        # Disable IPv6 on the interfaces, so that the Linux kernel
        # will not automatically send IPv6 MDNS, Router Solicitation,
        # and Multicast Listener Report packets on the interface,
        # which can make P4 program debugging more confusing.
        #
        # Testing indicates that we can still send IPv6 packets across
        # such interfaces, both from scapy to simple_switch, and from
        # simple_switch out to scapy sniffing.
        #
        # https://superuser.com/questions/356286/how-can-i-switch-off-ipv6-nd-ra-transmissions-in-linux
        sysctl net.ipv6.conf.${intf0}.disable_ipv6=1
        sysctl net.ipv6.conf.${intf1}.disable_ipv6=1
    fi
done

# shellcheck disable=SC2086
stratum_bmv2 \
    --external_stratum_urls=0.0.0.0:${GRPC_PORT} \
    --persistent_config_dir=/tmp \
    --forwarding_pipeline_configs_file=/dev/null \
    --chassis_config_file="${DIR}"/chassis_config.pb.txt \
    --write_req_log_file=p4rt_write.log \
    --initial_pipeline=/root/dummy.json \
    --bmv2_log_level=trace \
    --cpu_port 255 \
    > stratum_bmv2.log 2>&1

================================================
FILE: ptf/run_tests
================================================
#!/usr/bin/env bash
set -e

PTF_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
P4SRC_DIR=${PTF_DIR}/../p4src
P4C_OUT=${P4SRC_DIR}/build
PTF_DOCKER_IMG=${PTF_DOCKER_IMG:-undefined}

runName=ptf-${RANDOM}

function stop() {
    echo "Stopping container ${runName}..."
    docker stop -t0 "${runName}" > /dev/null
}
trap stop INT

# Start container. Entrypoint starts stratum_bmv2. We put that in the background
# and execute the PTF scripts separately.
echo "*** Starting stratum_bmv2 in Docker (${runName})..."
docker run --name "${runName}" -d --privileged --rm \
    -v "${PTF_DIR}":/ptf -w /ptf \
    -v "${P4C_OUT}":/p4c-out \
    "${PTF_DOCKER_IMG}" \
    ./lib/start_bmv2.sh > /dev/null

sleep 2

set +e
printf "*** Starting tests...\n"
docker exec "${runName}" ./lib/runner.py \
    --bmv2-json /p4c-out/bmv2.json \
    --p4info /p4c-out/p4info.txt \
    --grpc-addr localhost:28000 \
    --device-id 1 \
    --ptf-dir ./tests \
    --cpu-port 255 \
    --port-map /ptf/lib/port_map.json "${@}"

stop

================================================
FILE: ptf/tests/bridging.py
================================================
# Copyright 2013-present Barefoot Networks, Inc.
# Copyright 2018-present Open Networking Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# ------------------------------------------------------------------------------
# BRIDGING TESTS
#
# To run all tests in this file:
#     make p4-test TEST=bridging
#
# To run a specific test case:
#     make p4-test TEST=bridging.<TEST CLASS NAME>
#
# For example:
#     make p4-test TEST=bridging.BridgingTest
# ------------------------------------------------------------------------------

# ------------------------------------------------------------------------------
# Modify everywhere you see TODO
#
# When providing your solution, make sure to use the same names for P4Runtime
# entities as specified in your P4Info file.
#
# Test cases are based on the P4 program design suggested in the exercises
# README.
Make sure to modify the test cases accordingly if you decide to # implement the pipeline differently. # ------------------------------------------------------------------------------ from ptf.testutils import group from base_test import * # From the P4 program. CPU_CLONE_SESSION_ID = 99 @group("bridging") class ArpNdpRequestWithCloneTest(P4RuntimeTest): """Tests ability to broadcast ARP requests and NDP Neighbor Solicitation (NS) messages as well as cloning to CPU (controller) for host discovery. """ def runTest(self): # Test with both ARP and NDP NS packets... print_inline("ARP request ... ") arp_pkt = testutils.simple_arp_packet() self.testPacket(arp_pkt) print_inline("NDP NS ... ") ndp_pkt = genNdpNsPkt(src_mac=HOST1_MAC, src_ip=HOST1_IPV6, target_ip=HOST2_IPV6) self.testPacket(ndp_pkt) @autocleanup def testPacket(self, pkt): mcast_group_id = 10 mcast_ports = [self.port1, self.port2, self.port3] # Add multicast group. self.insert_pre_multicast_group( group_id=mcast_group_id, ports=mcast_ports) # Match eth dst: FF:FF:FF:FF:FF:FF (MAC broadcast for ARP requests) # Modify names to match content of P4Info file (look for the fully # qualified name of tables, match fields, and actions). # ---- START SOLUTION ---- self.insert(self.helper.build_table_entry( table_name="IngressPipeImpl.l2_ternary_table", match_fields={ # Ternary match. "hdr.ethernet.dst_addr": ( "FF:FF:FF:FF:FF:FF", "FF:FF:FF:FF:FF:FF") }, action_name="IngressPipeImpl.set_multicast_group", action_params={ "gid": mcast_group_id }, priority=DEFAULT_PRIORITY )) # ---- END SOLUTION ---- # Match eth dst: 33:33:**:**:**:** (IPv6 multicast for NDP requests) # Modify names to match content of P4Info file (look for the fully # qualified name of tables, match fields, and actions).
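The `l2_ternary_table` entries here rely on (value, mask) ternary semantics: a MAC address matches an entry when `addr & mask == value & mask`. A dependency-free sketch of that check (helper names are illustrative, not part of the PTF library):

```python
def ternary_match(addr, value, mask):
    """True if addr matches the (value, mask) ternary entry."""
    return (addr & mask) == (value & mask)

def mac_to_int(mac):
    return int(mac.replace(":", ""), 16)

# The two entries used by the bridging tests:
BCAST = (mac_to_int("FF:FF:FF:FF:FF:FF"), mac_to_int("FF:FF:FF:FF:FF:FF"))
V6_MCAST = (mac_to_int("33:33:00:00:00:00"), mac_to_int("FF:FF:00:00:00:00"))

# ARP requests are broadcast...
assert ternary_match(mac_to_int("FF:FF:FF:FF:FF:FF"), *BCAST)
# ...while NDP NS goes to a 33:33:xx:xx:xx:xx IPv6 multicast MAC.
assert ternary_match(mac_to_int("33:33:FF:00:00:01"), *V6_MCAST)
# Regular unicast traffic matches neither entry.
assert not ternary_match(mac_to_int("00:AA:00:00:00:01"), *BCAST)
```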
# ---- START SOLUTION ---- self.insert(self.helper.build_table_entry( table_name="IngressPipeImpl.l2_ternary_table", match_fields={ # Ternary match (value, mask) "hdr.ethernet.dst_addr": ( "33:33:00:00:00:00", "FF:FF:00:00:00:00") }, action_name="IngressPipeImpl.set_multicast_group", action_params={ "gid": mcast_group_id }, priority=DEFAULT_PRIORITY )) # ---- END SOLUTION ---- # Insert CPU clone session. self.insert_pre_clone_session( session_id=CPU_CLONE_SESSION_ID, ports=[self.cpu_port]) # ACL entry to clone ARPs self.insert(self.helper.build_table_entry( table_name="IngressPipeImpl.acl_table", match_fields={ # Ternary match. "hdr.ethernet.ether_type": (ARP_ETH_TYPE, 0xffff) }, action_name="IngressPipeImpl.clone_to_cpu", priority=DEFAULT_PRIORITY )) # ACL entry to clone NDP Neighbor Solicitation self.insert(self.helper.build_table_entry( table_name="IngressPipeImpl.acl_table", match_fields={ # Ternary match. "hdr.ethernet.ether_type": (IPV6_ETH_TYPE, 0xffff), "local_metadata.ip_proto": (ICMPV6_IP_PROTO, 0xff), "local_metadata.icmp_type": (NS_ICMPV6_TYPE, 0xff) }, action_name="IngressPipeImpl.clone_to_cpu", priority=DEFAULT_PRIORITY )) for inport in mcast_ports: # Send packet... testutils.send_packet(self, inport, str(pkt)) # Pkt should be received on CPU via PacketIn... # Expected P4Runtime PacketIn message. exp_packet_in_msg = self.helper.build_packet_in( payload=str(pkt), metadata={ "ingress_port": inport, "_pad": 0 }) self.verify_packet_in(exp_packet_in_msg) # ...and on all ports except the ingress one. verify_ports = set(mcast_ports) verify_ports.discard(inport) for port in verify_ports: testutils.verify_packet(self, pkt, port) testutils.verify_no_other_packets(self) @group("bridging") class ArpNdpReplyWithCloneTest(P4RuntimeTest): """Tests ability to clone ARP replies and NDP Neighbor Advertisement (NA) messages as well as unicast forwarding to requesting host. """ def runTest(self): # Test With both ARP and NDP NS packets... print_inline("ARP reply ... 
") # op=1 request, op=2 relpy arp_pkt = testutils.simple_arp_packet( eth_src=HOST1_MAC, eth_dst=HOST2_MAC, arp_op=2) self.testPacket(arp_pkt) print_inline("NDP NA ... ") ndp_pkt = genNdpNaPkt(target_ip=HOST1_IPV6, target_mac=HOST1_MAC) self.testPacket(ndp_pkt) @autocleanup def testPacket(self, pkt): # L2 unicast entry, match on pkt's eth dst address. # Modify names to match content of P4Info file (look for the fully # qualified name of tables, match fields, and actions. # ---- START SOLUTION ---- self.insert(self.helper.build_table_entry( table_name="IngressPipeImpl.l2_exact_table", match_fields={ # Exact match. "hdr.ethernet.dst_addr": pkt[Ether].dst }, action_name="IngressPipeImpl.set_egress_port", action_params={ "port_num": self.port2 } )) # ---- END SOLUTION ---- # CPU clone session. self.insert_pre_clone_session( session_id=CPU_CLONE_SESSION_ID, ports=[self.cpu_port]) # ACL entry to clone ARPs self.insert(self.helper.build_table_entry( table_name="IngressPipeImpl.acl_table", match_fields={ # Ternary match. "hdr.ethernet.ether_type": (ARP_ETH_TYPE, 0xffff) }, action_name="IngressPipeImpl.clone_to_cpu", priority=DEFAULT_PRIORITY )) # ACL entry to clone NDP Neighbor Solicitation self.insert(self.helper.build_table_entry( table_name="IngressPipeImpl.acl_table", match_fields={ # Ternary match. "hdr.ethernet.ether_type": (IPV6_ETH_TYPE, 0xffff), "local_metadata.ip_proto": (ICMPV6_IP_PROTO, 0xff), "local_metadata.icmp_type": (NA_ICMPV6_TYPE, 0xff) }, action_name="IngressPipeImpl.clone_to_cpu", priority=DEFAULT_PRIORITY )) testutils.send_packet(self, self.port1, str(pkt)) # Pkt should be received on CPU via PacketIn... # Expected P4Runtime PacketIn message. exp_packet_in_msg = self.helper.build_packet_in( payload=str(pkt), metadata={ "ingress_port": self.port1, "_pad": 0 }) self.verify_packet_in(exp_packet_in_msg) # ..and on port2 as indicated by the L2 unicast rule. 
testutils.verify_packet(self, pkt, self.port2) @group("bridging") class BridgingTest(P4RuntimeTest): """Tests basic L2 unicast forwarding""" def runTest(self): # Test with different type of packets. for pkt_type in ["tcp", "udp", "icmp", "tcpv6", "udpv6", "icmpv6"]: print_inline("%s ... " % pkt_type) pkt = getattr(testutils, "simple_%s_packet" % pkt_type)(pktlen=120) self.testPacket(pkt) @autocleanup def testPacket(self, pkt): # Insert L2 unicast entry, match on pkt's eth dst address. # Modify names to match content of P4Info file (look for the fully # qualified name of tables, match fields, and actions. # ---- START SOLUTION ---- self.insert(self.helper.build_table_entry( table_name="IngressPipeImpl.l2_exact_table", match_fields={ # Exact match. "hdr.ethernet.dst_addr": pkt[Ether].dst }, action_name="IngressPipeImpl.set_egress_port", action_params={ "port_num": self.port2 } )) # ---- END SOLUTION ---- # Test bidirectional forwarding by swapping MAC addresses on the pkt pkt2 = pkt_mac_swap(pkt.copy()) # Insert L2 unicast entry for pkt2. # Modify names to match content of P4Info file (look for the fully # qualified name of tables, match fields, and actions. # ---- START SOLUTION ---- self.insert(self.helper.build_table_entry( table_name="IngressPipeImpl.l2_exact_table", match_fields={ # Exact match. "hdr.ethernet.dst_addr": pkt2[Ether].dst }, action_name="IngressPipeImpl.set_egress_port", action_params={ "port_num": self.port1 } )) # ---- END SOLUTION ---- # Send and verify. 
testutils.send_packet(self, self.port1, str(pkt)) testutils.send_packet(self, self.port2, str(pkt2)) testutils.verify_each_packet_on_each_port( self, [pkt, pkt2], [self.port2, self.port1]) ================================================ FILE: ptf/tests/packetio.py ================================================ # Copyright 2019-present Open Networking Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # ------------------------------------------------------------------------------ # CONTROLLER PACKET-IN/OUT TESTS # # To run all tests in this file: # make p4-test TEST=packetio # # To run a specific test case: # make p4-test TEST=packetio. # # For example: # make p4-test TEST=packetio.PacketOutTest # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ # Modify everywhere you see TODO # # When providing your solution, make sure to use the same names for P4Runtime # entities as specified in your P4Info file. # # Test cases are based on the P4 program design suggested in the exercises # README. Make sure to modify the test cases accordingly if you decide to # implement the pipeline differently. 
# ------------------------------------------------------------------------------ from ptf.testutils import group from base_test import * CPU_CLONE_SESSION_ID = 99 @group("packetio") class PacketOutTest(P4RuntimeTest): """Tests controller packet-out capability by sending PacketOut messages and expecting a corresponding packet on the output port set in the PacketOut metadata. """ def runTest(self): for pkt_type in ["tcp", "udp", "icmp", "arp", "tcpv6", "udpv6", "icmpv6"]: print_inline("%s ... " % pkt_type) pkt = getattr(testutils, "simple_%s_packet" % pkt_type)() self.testPacket(pkt) def testPacket(self, pkt): for outport in [self.port1, self.port2]: # Build PacketOut message. # TODO EXERCISE 4 # Modify metadata names to match the content of your P4Info file # ---- START SOLUTION ---- packet_out_msg = self.helper.build_packet_out( payload=str(pkt), metadata={ "MODIFY ME": outport, "_pad": 0 }) # ---- END SOLUTION ---- # Send message and expect packet on the given data plane port. self.send_packet_out(packet_out_msg) testutils.verify_packet(self, pkt, outport) # Make sure packet was forwarded only on the specified ports testutils.verify_no_other_packets(self) @group("packetio") class PacketInTest(P4RuntimeTest): """Tests controller packet-in capability by matching on the packet EtherType and cloning to the CPU port. """ def runTest(self): for pkt_type in ["tcp", "udp", "icmp", "arp", "tcpv6", "udpv6", "icmpv6"]: print_inline("%s ... " % pkt_type) pkt = getattr(testutils, "simple_%s_packet" % pkt_type)() self.testPacket(pkt) @autocleanup def testPacket(self, pkt): # Insert clone to CPU session self.insert_pre_clone_session( session_id=CPU_CLONE_SESSION_ID, ports=[self.cpu_port]) # Insert ACL entry to match on the given eth_type and clone to CPU. eth_type = pkt[Ether].type # TODO EXERCISE 4 # Modify names to match content of P4Info file (look for the fully # qualified name of the ACL table, EtherType match field, and # clone_to_cpu action).
# ---- START SOLUTION ---- self.insert(self.helper.build_table_entry( table_name="MODIFY ME", match_fields={ # Ternary match. "MODIFY ME": (eth_type, 0xffff) }, action_name="MODIFY ME", priority=DEFAULT_PRIORITY )) # ---- END SOLUTION ---- for inport in [self.port1, self.port2, self.port3]: # TODO EXERCISE 4 # Modify metadata names to match the content of your P4Info file # ---- START SOLUTION ---- # Expected P4Runtime PacketIn message. exp_packet_in_msg = self.helper.build_packet_in( payload=str(pkt), metadata={ "MODIFY ME": inport, "_pad": 0 }) # ---- END SOLUTION ---- # Send packet to given switch ingress port and expect P4Runtime # PacketIn message. testutils.send_packet(self, inport, str(pkt)) self.verify_packet_in(exp_packet_in_msg) ================================================ FILE: ptf/tests/routing.py ================================================ # Copyright 2019-present Open Networking Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # ------------------------------------------------------------------------------ # IPV6 ROUTING TESTS # # To run all tests: # make p4-test TEST=routing # # To run a specific test case: # make p4-test TEST=routing. 
# # For example: # make p4-test TEST=routing.IPv6RoutingTest # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ # Modify everywhere you see TODO # # When providing your solution, make sure to use the same names for P4Runtime # entities as specified in your P4Info file. # # Test cases are based on the P4 program design suggested in the exercises # README. Make sure to modify the test cases accordingly if you decide to # implement the pipeline differently. # ------------------------------------------------------------------------------ from ptf.testutils import group from base_test import * @group("routing") class IPv6RoutingTest(P4RuntimeTest): """Tests basic IPv6 routing""" def runTest(self): # Test with different type of packets. for pkt_type in ["tcpv6", "udpv6", "icmpv6"]: print_inline("%s ... " % pkt_type) pkt = getattr(testutils, "simple_%s_packet" % pkt_type)() self.testPacket(pkt) @autocleanup def testPacket(self, pkt): next_hop_mac = SWITCH2_MAC # Add entry to "My Station" table. Consider the given pkt's eth dst addr # as myStationMac address. # *** TODO EXERCISE 5 # Modify names to match content of P4Info file (look for the fully # qualified name of tables, match fields, and actions. # ---- START SOLUTION ---- self.insert(self.helper.build_table_entry( table_name="MODIFY ME", match_fields={ # Exact match. "MODIFY ME": pkt[Ether].dst }, action_name="NoAction" )) # ---- END SOLUTION ---- # Insert ECMP group with only one member (next_hop_mac) # *** TODO EXERCISE 5 # Modify names to match content of P4Info file (look for the fully # qualified name of tables, match fields, and actions. 
# ---- START SOLUTION ---- self.insert(self.helper.build_act_prof_group( act_prof_name="MODIFY ME", group_id=1, actions=[ # List of tuples (action name, action param dict) ("MODIFY ME", {"MODIFY ME": next_hop_mac}), ] )) # ---- END SOLUTION ---- # Insert L3 routing entry to map pkt's IPv6 dst addr to group # *** TODO EXERCISE 5 # Modify names to match content of P4Info file (look for the fully # qualified name of tables, match fields, and actions. # ---- START SOLUTION ---- self.insert(self.helper.build_table_entry( table_name="MODIFY ME", match_fields={ # LPM match (value, prefix) "MODIFY ME": (pkt[IPv6].dst, 128) }, group_id=1 )) # ---- END SOLUTION ---- # Insert L3 entry to map next_hop_mac to output port 2. # *** TODO EXERCISE 5 # Modify names to match content of P4Info file (look for the fully # qualified name of tables, match fields, and actions. # ---- START SOLUTION ---- self.insert(self.helper.build_table_entry( table_name="MODIFY ME", match_fields={ # Exact match "MODIFY ME": next_hop_mac }, action_name="MODIFY ME", action_params={ "MODIFY ME": self.port2 } )) # ---- END SOLUTION ---- # Expected pkt should have routed MAC addresses and decremented hop # limit (TTL). exp_pkt = pkt.copy() pkt_route(exp_pkt, next_hop_mac) pkt_decrement_ttl(exp_pkt) testutils.send_packet(self, self.port1, str(pkt)) testutils.verify_packet(self, exp_pkt, self.port2) @group("routing") class NdpReplyGenTest(P4RuntimeTest): """Tests automatic generation of NDP Neighbor Advertisement for IPV6 addresses associated to the switch interface. """ @autocleanup def runTest(self): switch_ip = SWITCH1_IPV6 target_mac = SWITCH1_MAC # Insert entry to transform NDP NA packets for the given target address # (match), to NDP NA packets with the given target MAC address (action # *** TODO EXERCISE 5 # Modify names to match content of P4Info file (look for the fully # qualified name of tables, match fields, and actions. 
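The routing test installs an ECMP group with a single member. In a real fabric the data plane picks a group member by hashing the flow's 5-tuple so that packets of a given flow always take the same path; a rough dependency-free sketch of that selection (the hash function and names here are illustrative assumptions, not the P4 program's actual hash):

```python
import zlib

def ecmp_select(members, flow_tuple):
    """Picks one group member by hashing the flow 5-tuple."""
    h = zlib.crc32(repr(flow_tuple).encode())
    return members[h % len(members)]

members = ["next_hop_A", "next_hop_B"]
flow = ("2001:1:1::a", "2001:1:2::b", 6, 1234, 80)  # illustrative 5-tuple
# Same flow always maps to the same member...
assert ecmp_select(members, flow) == ecmp_select(members, flow)
# ...and a single-member group (as in the test) trivially returns it.
assert ecmp_select(["only_member"], flow) == "only_member"
```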
# ---- START SOLUTION ---- self.insert(self.helper.build_table_entry( table_name="MODIFY ME", match_fields={ # Exact match. "MODIFY ME": switch_ip }, action_name="MODIFY ME", action_params={ "MODIFY ME": target_mac } )) # ---- END SOLUTION ---- # NDP Neighbor Solicitation packet pkt = genNdpNsPkt(target_ip=switch_ip) # NDP Neighbor Advertisement packet exp_pkt = genNdpNaPkt(target_ip=switch_ip, target_mac=target_mac, src_mac=target_mac, src_ip=switch_ip, dst_ip=pkt[IPv6].src) # Send NDP NS, expect NDP NA from the same port. testutils.send_packet(self, self.port1, str(pkt)) testutils.verify_packet(self, exp_pkt, self.port1) ================================================ FILE: ptf/tests/srv6.py ================================================ # Copyright 2019-present Open Networking Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # ------------------------------------------------------------------------------ # SRV6 TESTS # # To run all tests: # make p4-test TEST=srv6 # # To run a specific test case: # make p4-test TEST=srv6. # # For example: # make p4-test TEST=srv6.Srv6InsertTest # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ # Modify everywhere you see TODO # # When providing your solution, make sure to use the same names for P4Runtime # entities as specified in your P4Info file. 
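The SRv6 tests build on the SRH encoding convention used by the `insert_srv6_header` helper defined in this file: segments are stored in reverse order and `segleft` indexes the currently active one. A dependency-free sketch of just that bookkeeping (dict-based, names illustrative):

```python
def encode_srh(sid_list):
    """Models the SRH built by insert_srv6_header: the SID list is
    stored reversed, and segments_left starts at len(sids) - 1."""
    return {"addresses": sid_list[::-1], "segleft": len(sid_list) - 1}

def active_sid(srh):
    """The SID the packet is currently being routed toward."""
    return srh["addresses"][srh["segleft"]]

sids = ["2001:1:2::ff", "2001:1:3::ff", "2001:1:1::b"]  # illustrative SIDs
srh = encode_srh(sids)
assert active_sid(srh) == sids[0]  # first SID is active on insert
srh["segleft"] -= 1                # one SRv6 "End" step at a transit SID
assert active_sid(srh) == sids[1]  # next SID becomes active
```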
# # Test cases are based on the P4 program design suggested in the exercises # README. Make sure to modify the test cases accordingly if you decide to # implement the pipeline differently. # ------------------------------------------------------------------------------ from ptf.testutils import group from base_test import * def insert_srv6_header(pkt, sid_list): """Applies SRv6 insert transformation to the given packet. """ # Set IPv6 dst to first SID... pkt[IPv6].dst = sid_list[0] # Insert SRv6 header between IPv6 header and payload sid_len = len(sid_list) srv6_hdr = IPv6ExtHdrSegmentRouting( nh=pkt[IPv6].nh, addresses=sid_list[::-1], len=sid_len * 2, segleft=sid_len - 1, lastentry=sid_len - 1) pkt[IPv6].nh = 43 # next IPv6 header is SR header pkt[IPv6].payload = srv6_hdr / pkt[IPv6].payload return pkt def pop_srv6_header(pkt): """Removes SRv6 header from the given packet. """ pkt[IPv6].nh = pkt[IPv6ExtHdrSegmentRouting].nh pkt[IPv6].payload = pkt[IPv6ExtHdrSegmentRouting].payload def set_cksum(pkt, cksum): if TCP in pkt: pkt[TCP].chksum = cksum if UDP in pkt: pkt[UDP].chksum = cksum if ICMPv6Unknown in pkt: pkt[ICMPv6Unknown].cksum = cksum @group("srv6") class Srv6InsertTest(P4RuntimeTest): """Tests SRv6 insert behavior, where the switch receives an IPv6 packet and inserts the SRv6 header """ def runTest(self): sid_lists = ( [SWITCH2_IPV6, SWITCH3_IPV6, HOST2_IPV6], [SWITCH2_IPV6, HOST2_IPV6], ) next_hop_mac = SWITCH2_MAC for sid_list in sid_lists: for pkt_type in ["tcpv6", "udpv6", "icmpv6"]: print_inline("%s %d SIDs ... " % (pkt_type, len(sid_list))) pkt = getattr(testutils, "simple_%s_packet" % pkt_type)() self.testPacket(pkt, sid_list, next_hop_mac) @autocleanup def testPacket(self, pkt, sid_list, next_hop_mac): # *** TODO EXERCISE 6 # Modify names to match content of P4Info file (look for the fully # qualified name of tables, match fields, and actions. # ---- START SOLUTION ---- # Add entry to "My Station" table. 
Consider the given pkt's eth dst addr # as myStationMac address. self.insert(self.helper.build_table_entry( table_name="MODIFY ME", match_fields={ # Exact match. "MODIFY ME": pkt[Ether].dst }, action_name="NoAction" )) # Insert SRv6 header when matching the pkt's IPv6 dst addr. # Action name and params are generated based on the number of SIDs given. # For example, with 2 SIDs: # action_name = IngressPipeImpl.srv6_t_insert_2 # action_params = { # "s1": sid[0], # "s2": sid[1] # } sid_len = len(sid_list) action_name = "IngressPipeImpl.srv6_t_insert_%d" % sid_len actions_params = {"s%d" % (x + 1): sid_list[x] for x in range(sid_len)} self.insert(self.helper.build_table_entry( table_name="MODIFY ME", match_fields={ # LPM match (value, prefix) "MODIFY ME": (pkt[IPv6].dst, 128) }, action_name=action_name, action_params=actions_params )) # Insert ECMP group with only one member (next_hop_mac) self.insert(self.helper.build_act_prof_group( act_prof_name="MODIFY ME", group_id=1, actions=[ # List of tuples (action name, {action param: value}) ("MODIFY ME", {"MODIFY ME": next_hop_mac}), ] )) # Now that we inserted the SRv6 header, we expect the pkt's IPv6 dst # addr to be the first on the SID list. # Match on L3 routing table. first_sid = sid_list[0] self.insert(self.helper.build_table_entry( table_name="MODIFY ME", match_fields={ # LPM match (value, prefix) "hdr.ipv6.dst_addr": (first_sid, 128) }, group_id=1 )) # Map next_hop_mac to output port self.insert(self.helper.build_table_entry( table_name="MODIFY ME", match_fields={ # Exact match "MODIFY ME": next_hop_mac }, action_name="MODIFY ME", action_params={ "MODIFY ME": self.port2 } )) # ---- END SOLUTION ---- # Build expected packet from the given one...
exp_pkt = insert_srv6_header(pkt.copy(), sid_list) # Route and decrement TTL pkt_route(exp_pkt, next_hop_mac) pkt_decrement_ttl(exp_pkt) # Bonus: update P4 program to calculate correct checksum set_cksum(pkt, 1) set_cksum(exp_pkt, 1) testutils.send_packet(self, self.port1, str(pkt)) testutils.verify_packet(self, exp_pkt, self.port2) @group("srv6") class Srv6TransitTest(P4RuntimeTest): """Tests SRv6 transit behavior, where the switch ignores the SRv6 header and routes the packet normally, without applying any SRv6-related modifications. """ def runTest(self): my_sid = SWITCH1_IPV6 sid_lists = ( [SWITCH2_IPV6, SWITCH3_IPV6, HOST2_IPV6], [SWITCH2_IPV6, HOST2_IPV6], ) next_hop_mac = SWITCH2_MAC for sid_list in sid_lists: for pkt_type in ["tcpv6", "udpv6", "icmpv6"]: print_inline("%s %d SIDs ... " % (pkt_type, len(sid_list))) pkt = getattr(testutils, "simple_%s_packet" % pkt_type)() pkt = insert_srv6_header(pkt, sid_list) self.testPacket(pkt, next_hop_mac, my_sid) @autocleanup def testPacket(self, pkt, next_hop_mac, my_sid): # *** TODO EXERCISE 6 # Modify names to match content of P4Info file (look for the fully # qualified name of tables, match fields, and actions. # ---- START SOLUTION ---- # Add entry to "My Station" table. Consider the given pkt's eth dst addr # as myStationMac address. self.insert(self.helper.build_table_entry( table_name="MODIFY ME", match_fields={ # Exact match. "MODIFY ME": pkt[Ether].dst }, action_name="NoAction" )) # This should be missed, this is plain IPv6 routing. 
self.insert(self.helper.build_table_entry( table_name="MODIFY ME", match_fields={ # Longest prefix match (value, prefix length) "MODIFY ME": (my_sid, 128) }, action_name="MODIFY ME" )) # Insert ECMP group with only one member (next_hop_mac) self.insert(self.helper.build_act_prof_group( act_prof_name="MODIFY ME", group_id=1, actions=[ # List of tuples (action name, {action param: value}) ("MODIFY ME", {"MODIFY ME": next_hop_mac}), ] )) # Map pkt's IPv6 dst addr to group self.insert(self.helper.build_table_entry( table_name="MODIFY ME", match_fields={ # LPM match (value, prefix) "hdr.ipv6.dst_addr": (pkt[IPv6].dst, 128) }, group_id=1 )) # Map next_hop_mac to output port self.insert(self.helper.build_table_entry( table_name="MODIFY ME", match_fields={ # Exact match "MODIFY ME": next_hop_mac }, action_name="MODIFY ME", action_params={ "MODIFY ME": self.port2 } )) # ---- END SOLUTION ---- # Build expected packet from the given one... exp_pkt = pkt.copy() # Route and decrement TTL pkt_route(exp_pkt, next_hop_mac) pkt_decrement_ttl(exp_pkt) # Bonus: update P4 program to calculate correct checksum set_cksum(pkt, 1) set_cksum(exp_pkt, 1) testutils.send_packet(self, self.port1, str(pkt)) testutils.verify_packet(self, exp_pkt, self.port2) @group("srv6") class Srv6EndTest(P4RuntimeTest): """Tests SRv6 end behavior (without pop), where the switch forwards the packet to the next SID found in the SRv6 header. """ def runTest(self): my_sid = SWITCH2_IPV6 sid_lists = ( [SWITCH2_IPV6, SWITCH3_IPV6, HOST2_IPV6], [SWITCH2_IPV6, SWITCH3_IPV6, SWITCH4_IPV6, HOST2_IPV6], ) next_hop_mac = SWITCH3_MAC for sid_list in sid_lists: for pkt_type in ["tcpv6", "udpv6", "icmpv6"]: print_inline("%s %d SIDs ... 
" % (pkt_type, len(sid_list))) pkt = getattr(testutils, "simple_%s_packet" % pkt_type)() pkt = insert_srv6_header(pkt, sid_list) self.testPacket(pkt, sid_list, next_hop_mac, my_sid) @autocleanup def testPacket(self, pkt, sid_list, next_hop_mac, my_sid): # *** TODO EXERCISE 6 # Modify names to match content of P4Info file (look for the fully # qualified name of tables, match fields, and actions. # ---- START SOLUTION ---- # Add entry to "My Station" table. Consider the given pkt's eth dst addr # as myStationMac address. self.insert(self.helper.build_table_entry( table_name="MODIFY ME", match_fields={ # Exact match. "MODIFY ME": pkt[Ether].dst }, action_name="NoAction" )) # This should be matched, we want SRv6 end behavior to be applied. self.insert(self.helper.build_table_entry( table_name="MODIFY ME", match_fields={ # Longest prefix match (value, prefix length) "MODIFY ME": (my_sid, 128) }, action_name="MODIFY ME" )) # Insert ECMP group with only one member (next_hop_mac) self.insert(self.helper.build_act_prof_group( act_prof_name="MODIFY ME", group_id=1, actions=[ # List of tuples (action name, {action param: value}) ("MODIFY ME", {"MODIFY ME": next_hop_mac}), ] )) # After applying the srv6_end action, we expect to IPv6 dst to be the # next SID in the list, we should route based on that. next_sid = sid_list[1] self.insert(self.helper.build_table_entry( table_name="MODIFY ME", match_fields={ # LPM match (value, prefix) "hdr.ipv6.dst_addr": (next_sid, 128) }, group_id=1 )) # Map next_hop_mac to output port self.insert(self.helper.build_table_entry( table_name="MODIFY ME", match_fields={ # Exact match "MODIFY ME": next_hop_mac }, action_name="MODIFY ME", action_params={ "MODIFY ME": self.port2 } )) # ---- END SOLUTION ---- # Build expected packet from the given one... exp_pkt = pkt.copy() # Set IPv6 dst to next SID and decrement segleft... exp_pkt[IPv6].dst = next_sid exp_pkt[IPv6ExtHdrSegmentRouting].segleft -= 1 # Route and decrement TTL... 
pkt_route(exp_pkt, next_hop_mac) pkt_decrement_ttl(exp_pkt) # Bonus: update P4 program to calculate correct checksum set_cksum(pkt, 1) set_cksum(exp_pkt, 1) testutils.send_packet(self, self.port1, str(pkt)) testutils.verify_packet(self, exp_pkt, self.port2) @group("srv6") class Srv6EndPspTest(P4RuntimeTest): """Tests SRv6 End with Penultimate Segment Pop (PSP) behavior, where the switch SID is the penultimate in the SID list and the switch removes the SRv6 header before routing the packet to its final destination (last SID in the list). """ def runTest(self): my_sid = SWITCH3_IPV6 sid_lists = ( [SWITCH3_IPV6, HOST2_IPV6], ) next_hop_mac = HOST2_MAC for sid_list in sid_lists: for pkt_type in ["tcpv6", "udpv6", "icmpv6"]: print_inline("%s %d SIDs ... " % (pkt_type, len(sid_list))) pkt = getattr(testutils, "simple_%s_packet" % pkt_type)() pkt = insert_srv6_header(pkt, sid_list) self.testPacket(pkt, sid_list, next_hop_mac, my_sid) @autocleanup def testPacket(self, pkt, sid_list, next_hop_mac, my_sid): # *** TODO EXERCISE 6 # Modify names to match content of P4Info file (look for the fully # qualified name of tables, match fields, and actions). # ---- START SOLUTION ---- # Add entry to "My Station" table. Consider the given pkt's eth dst addr # as myStationMac address. self.insert(self.helper.build_table_entry( table_name="MODIFY ME", match_fields={ # Exact match. "MODIFY ME": pkt[Ether].dst }, action_name="NoAction" )) # This should be matched; we want SRv6 end behavior to be applied.
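The Penultimate Segment Pop behavior exercised by this test can be modeled in a few lines: an SRv6 End step decrements segleft, and once the active segment is the last one the SRH is removed so the packet continues as plain IPv6. A dict-based sketch (illustrative only, mirroring the `pop_srv6_header` helper defined earlier in this file):

```python
def srv6_end_psp(dst, srh):
    """One SRv6 End step, popping the SRH when the last segment
    becomes active (Penultimate Segment Pop)."""
    srh["segleft"] -= 1
    dst = srh["addresses"][srh["segleft"]]
    if srh["segleft"] == 0:
        return dst, None  # pop the SRH: plain IPv6 from here on
    return dst, srh

sids = ["2001:1:3::ff", "2001:1:1::b"]  # switch SID, then final host
srh = {"addresses": sids[::-1], "segleft": len(sids) - 1}
dst, srh = srv6_end_psp(sids[0], srh)
assert dst == sids[-1]  # routed to the final destination...
assert srh is None      # ...with the SRv6 header removed
```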
self.insert(self.helper.build_table_entry( table_name="MODIFY ME", match_fields={ # Longest prefix match (value, prefix length) "MODIFY ME": (my_sid, 128) }, action_name="MODIFY ME" )) # Insert ECMP group with only one member (next_hop_mac) self.insert(self.helper.build_act_prof_group( act_prof_name="MODIFY ME", group_id=1, actions=[ # List of tuples (action name, {action param: value}) ("MODIFY ME", {"MODIFY ME": next_hop_mac}), ] )) # Map pkt's IPv6 dst addr to group next_sid = sid_list[1] self.insert(self.helper.build_table_entry( table_name="MODIFY ME", match_fields={ # LPM match (value, prefix) "hdr.ipv6.dst_addr": (next_sid, 128) }, group_id=1 )) # Map next_hop_mac to output port self.insert(self.helper.build_table_entry( table_name="MODIFY ME", match_fields={ # Exact match "MODIFY ME": next_hop_mac }, action_name="MODIFY ME", action_params={ "MODIFY ME": self.port2 } )) # ---- END SOLUTION ---- # Build expected packet from the given one... exp_pkt = pkt.copy() # Expect IPv6 dst to be the next SID... exp_pkt[IPv6].dst = next_sid # Remove SRv6 header since we are performing PSP. pop_srv6_header(exp_pkt) # Route and decrement TTL pkt_route(exp_pkt, next_hop_mac) pkt_decrement_ttl(exp_pkt) # Bonus: update P4 program to calculate correct checksum set_cksum(pkt, 1) set_cksum(exp_pkt, 1) testutils.send_packet(self, self.port1, str(pkt)) testutils.verify_packet(self, exp_pkt, self.port2) ================================================ FILE: solution/app/src/main/java/org/onosproject/ngsdn/tutorial/Ipv6RoutingComponent.java ================================================ /* * Copyright 2019-present Open Networking Foundation * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.onosproject.ngsdn.tutorial; import com.google.common.collect.Lists; import org.onlab.packet.Ip6Address; import org.onlab.packet.Ip6Prefix; import org.onlab.packet.IpAddress; import org.onlab.packet.IpPrefix; import org.onlab.packet.MacAddress; import org.onlab.util.ItemNotFoundException; import org.onosproject.core.ApplicationId; import org.onosproject.mastership.MastershipService; import org.onosproject.net.Device; import org.onosproject.net.DeviceId; import org.onosproject.net.Host; import org.onosproject.net.Link; import org.onosproject.net.PortNumber; import org.onosproject.net.config.NetworkConfigService; import org.onosproject.net.device.DeviceEvent; import org.onosproject.net.device.DeviceListener; import org.onosproject.net.device.DeviceService; import org.onosproject.net.flow.FlowRule; import org.onosproject.net.flow.FlowRuleService; import org.onosproject.net.flow.criteria.PiCriterion; import org.onosproject.net.group.GroupDescription; import org.onosproject.net.group.GroupService; import org.onosproject.net.host.HostEvent; import org.onosproject.net.host.HostListener; import org.onosproject.net.host.HostService; import org.onosproject.net.host.InterfaceIpAddress; import org.onosproject.net.intf.Interface; import org.onosproject.net.intf.InterfaceService; import org.onosproject.net.link.LinkEvent; import org.onosproject.net.link.LinkListener; import org.onosproject.net.link.LinkService; import org.onosproject.net.pi.model.PiActionId; import org.onosproject.net.pi.model.PiActionParamId; import org.onosproject.net.pi.model.PiMatchFieldId; 
import org.onosproject.net.pi.runtime.PiAction; import org.onosproject.net.pi.runtime.PiActionParam; import org.onosproject.net.pi.runtime.PiActionProfileGroupId; import org.onosproject.net.pi.runtime.PiTableAction; import org.osgi.service.component.annotations.Activate; import org.osgi.service.component.annotations.Component; import org.osgi.service.component.annotations.Deactivate; import org.osgi.service.component.annotations.Reference; import org.osgi.service.component.annotations.ReferenceCardinality; import org.onosproject.ngsdn.tutorial.common.FabricDeviceConfig; import org.onosproject.ngsdn.tutorial.common.Utils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import java.util.Collection; import java.util.Collections; import java.util.List; import java.util.Optional; import java.util.Set; import java.util.stream.Collectors; import static com.google.common.collect.Streams.stream; import static org.onosproject.ngsdn.tutorial.AppConstants.INITIAL_SETUP_DELAY; /** * App component that configures devices to provide IPv6 routing capabilities * across the whole fabric. */ @Component( immediate = true, // *** TODO EXERCISE 5 // set to true when ready enabled = true ) public class Ipv6RoutingComponent { private static final Logger log = LoggerFactory.getLogger(Ipv6RoutingComponent.class); private static final int DEFAULT_ECMP_GROUP_ID = 0xec3b0000; private static final long GROUP_INSERT_DELAY_MILLIS = 200; private final HostListener hostListener = new InternalHostListener(); private final LinkListener linkListener = new InternalLinkListener(); private final DeviceListener deviceListener = new InternalDeviceListener(); private ApplicationId appId; //-------------------------------------------------------------------------- // ONOS CORE SERVICE BINDING // // These variables are set by the Karaf runtime environment before calling // the activate() method. 
//-------------------------------------------------------------------------- @Reference(cardinality = ReferenceCardinality.MANDATORY) private FlowRuleService flowRuleService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private HostService hostService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private MastershipService mastershipService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private GroupService groupService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private DeviceService deviceService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private NetworkConfigService networkConfigService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private InterfaceService interfaceService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private LinkService linkService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private MainComponent mainComponent; //-------------------------------------------------------------------------- // COMPONENT ACTIVATION. // // When loading/unloading the app the Karaf runtime environment will call // activate()/deactivate(). //-------------------------------------------------------------------------- @Activate protected void activate() { appId = mainComponent.getAppId(); hostService.addListener(hostListener); linkService.addListener(linkListener); deviceService.addListener(deviceListener); // Schedule set up for all devices. mainComponent.scheduleTask(this::setUpAllDevices, INITIAL_SETUP_DELAY); log.info("Started"); } @Deactivate protected void deactivate() { hostService.removeListener(hostListener); linkService.removeListener(linkListener); deviceService.removeListener(deviceListener); log.info("Stopped"); } //-------------------------------------------------------------------------- // METHODS TO COMPLETE. // // Complete the implementation wherever you see TODO. 
//-------------------------------------------------------------------------- /** * Sets up the "My Station" table for the given device using the * myStationMac address found in the config. *

This method will be called at component activation for each device
 * (switch) known by ONOS, and every time a new device-added event is
 * captured by the InternalDeviceListener defined below.
 *
 * @param deviceId the device ID
 */
private void setUpMyStationTable(DeviceId deviceId) {

    log.info("Adding My Station rules to {}...", deviceId);

    final MacAddress myStationMac = getMyStationMac(deviceId);

    // HINT: in our solution, the My Station table matches on the *ethernet
    // destination* and there is only one action called *NoAction*, which is
    // used as an indication of "table hit" in the control block.

    // *** TODO EXERCISE 5
    // Modify P4Runtime entity names to match content of P4Info file (look
    // for the fully qualified name of tables, match fields, and actions).
    // ---- START SOLUTION ----
    final String tableId = "IngressPipeImpl.my_station_table";

    final PiCriterion match = PiCriterion.builder()
            .matchExact(
                    PiMatchFieldId.of("hdr.ethernet.dst_addr"),
                    myStationMac.toBytes())
            .build();

    // Creates an action that does *NoAction* when hit.
    final PiTableAction action = PiAction.builder()
            .withId(PiActionId.of("NoAction"))
            .build();
    // ---- END SOLUTION ----

    final FlowRule myStationRule = Utils.buildFlowRule(
            deviceId, appId, tableId, match, action);

    flowRuleService.applyFlowRules(myStationRule);
}

/**
 * Creates an ONOS SELECT group for the routing table to provide ECMP
 * forwarding for the given collection of next hop MAC addresses. ONOS
 * SELECT groups are equivalent to P4Runtime action selector groups.
 *

This method will be called by the routing policy methods below to insert
 * groups in the L3 table.
 *
 * @param groupId     the group ID
 * @param nextHopMacs the collection of MAC addresses of next hops
 * @param deviceId    the device where the group will be installed
 * @return a SELECT group
 */
private GroupDescription createNextHopGroup(int groupId,
                                            Collection<MacAddress> nextHopMacs,
                                            DeviceId deviceId) {

    String actionProfileId = "IngressPipeImpl.ecmp_selector";

    final List<PiAction> actions = Lists.newArrayList();

    // Build one "set next hop" action for each next hop.
    // *** TODO EXERCISE 5
    // Modify P4Runtime entity names to match content of P4Info file (look
    // for the fully qualified name of tables, match fields, and actions).
    // ---- START SOLUTION ----
    final String tableId = "IngressPipeImpl.routing_v6_table";
    for (MacAddress nextHopMac : nextHopMacs) {
        final PiAction action = PiAction.builder()
                .withId(PiActionId.of("IngressPipeImpl.set_next_hop"))
                .withParameter(new PiActionParam(
                        // Action param name.
                        PiActionParamId.of("dmac"),
                        // Action param value.
                        nextHopMac.toBytes()))
                .build();
        actions.add(action);
    }
    // ---- END SOLUTION ----

    return Utils.buildSelectGroup(
            deviceId, tableId, actionProfileId, groupId, actions, appId);
}

/**
 * Creates a routing flow rule that matches on the given IPv6 prefix and
 * executes the given group ID (created before).
 *
 * @param deviceId  the device where the flow rule will be installed
 * @param ip6Prefix the IPv6 prefix
 * @param groupId   the group ID
 * @return a flow rule
 */
private FlowRule createRoutingRule(DeviceId deviceId, Ip6Prefix ip6Prefix,
                                   int groupId) {
    // *** TODO EXERCISE 5
    // Modify P4Runtime entity names to match content of P4Info file (look
    // for the fully qualified name of tables, match fields, and actions).
// ---- START SOLUTION ---- final String tableId = "IngressPipeImpl.routing_v6_table"; final PiCriterion match = PiCriterion.builder() .matchLpm( PiMatchFieldId.of("hdr.ipv6.dst_addr"), ip6Prefix.address().toOctets(), ip6Prefix.prefixLength()) .build(); final PiTableAction action = PiActionProfileGroupId.of(groupId); // ---- END SOLUTION ---- return Utils.buildFlowRule( deviceId, appId, tableId, match, action); } /** * Creates a flow rule for the L2 table mapping the given next hop MAC to * the given output port. *

This is called by the routing policy methods below to establish L2-based
 * forwarding inside the fabric, e.g., when deviceId is a leaf switch and
 * nextHopMac is that of a spine switch.
 *
 * @param deviceId   the device
 * @param nexthopMac the next hop (destination) MAC
 * @param outPort    the output port
 */
private FlowRule createL2NextHopRule(DeviceId deviceId, MacAddress nexthopMac,
                                     PortNumber outPort) {
    // *** TODO EXERCISE 5
    // Modify P4Runtime entity names to match content of P4Info file (look
    // for the fully qualified name of tables, match fields, and actions).
    // ---- START SOLUTION ----
    final String tableId = "IngressPipeImpl.l2_exact_table";
    final PiCriterion match = PiCriterion.builder()
            .matchExact(PiMatchFieldId.of("hdr.ethernet.dst_addr"),
                    nexthopMac.toBytes())
            .build();

    final PiAction action = PiAction.builder()
            .withId(PiActionId.of("IngressPipeImpl.set_egress_port"))
            .withParameter(new PiActionParam(
                    PiActionParamId.of("port_num"),
                    outPort.toLong()))
            .build();
    // ---- END SOLUTION ----

    return Utils.buildFlowRule(
            deviceId, appId, tableId, match, action);
}

//--------------------------------------------------------------------------
// EVENT LISTENERS
//
// Events are processed only if isRelevant() returns true.
//--------------------------------------------------------------------------

/**
 * Listener of host events which triggers configuration of routing rules on
 * the device where the host is attached.
 */
class InternalHostListener implements HostListener {

    @Override
    public boolean isRelevant(HostEvent event) {
        switch (event.type()) {
            case HOST_ADDED:
                break;
            case HOST_REMOVED:
            case HOST_UPDATED:
            case HOST_MOVED:
            default:
                // Ignore other events.
                // Food for thought:
                // how to support host moved/removed events?
                return false;
        }
        // Process host event only if this controller instance is the master
        // for the device where this host is attached.
final Host host = event.subject(); final DeviceId deviceId = host.location().deviceId(); return mastershipService.isLocalMaster(deviceId); } @Override public void event(HostEvent event) { Host host = event.subject(); DeviceId deviceId = host.location().deviceId(); mainComponent.getExecutorService().execute(() -> { log.info("{} event! host={}, deviceId={}, port={}", event.type(), host.id(), deviceId, host.location().port()); setUpHostRules(deviceId, host); }); } } /** * Listener of link events, which triggers configuration of routing rules to * forward packets across the fabric, i.e. from leaves to spines and vice * versa. *

Reacting to link events instead of device ones allows us to make sure
 * all devices are always configured with a topology view that includes all
 * links, e.g. modifying an ECMP group as soon as a new link is added. The
 * downside is that we might be configuring the same device twice for the
 * same set of links/paths. However, the ONOS core treats these cases as a
 * no-op when the device is already configured with the desired forwarding
 * state (i.e. flows and groups).
 */
class InternalLinkListener implements LinkListener {

    @Override
    public boolean isRelevant(LinkEvent event) {
        switch (event.type()) {
            case LINK_ADDED:
                break;
            case LINK_UPDATED:
            case LINK_REMOVED:
            default:
                return false;
        }
        DeviceId srcDev = event.subject().src().deviceId();
        DeviceId dstDev = event.subject().dst().deviceId();
        return mastershipService.isLocalMaster(srcDev) ||
                mastershipService.isLocalMaster(dstDev);
    }

    @Override
    public void event(LinkEvent event) {
        DeviceId srcDev = event.subject().src().deviceId();
        DeviceId dstDev = event.subject().dst().deviceId();
        if (mastershipService.isLocalMaster(srcDev)) {
            mainComponent.getExecutorService().execute(() -> {
                log.info("{} event! Configuring {}... linkSrc={}, linkDst={}",
                        event.type(), srcDev, srcDev, dstDev);
                setUpFabricRoutes(srcDev);
                setUpL2NextHopRules(srcDev);
            });
        }
        if (mastershipService.isLocalMaster(dstDev)) {
            mainComponent.getExecutorService().execute(() -> {
                log.info("{} event! Configuring {}... linkSrc={}, linkDst={}",
                        event.type(), dstDev, srcDev, dstDev);
                setUpFabricRoutes(dstDev);
                setUpL2NextHopRules(dstDev);
            });
        }
    }
}

/**
 * Listener of device events which triggers configuration of the My Station
 * table.
 */
class InternalDeviceListener implements DeviceListener {

    @Override
    public boolean isRelevant(DeviceEvent event) {
        switch (event.type()) {
            case DEVICE_AVAILABILITY_CHANGED:
            case DEVICE_ADDED:
                break;
            default:
                return false;
        }
        // Process device event if this controller instance is the master
        // for the device and the device is available.
        DeviceId deviceId = event.subject().id();
        return mastershipService.isLocalMaster(deviceId) &&
                deviceService.isAvailable(event.subject().id());
    }

    @Override
    public void event(DeviceEvent event) {
        mainComponent.getExecutorService().execute(() -> {
            DeviceId deviceId = event.subject().id();
            log.info("{} event! device id={}", event.type(), deviceId);
            setUpMyStationTable(deviceId);
        });
    }
}

//--------------------------------------------------------------------------
// ROUTING POLICY METHODS
//
// Called by event listeners, these methods implement the actual routing
// policy, responsible for computing paths and creating ECMP groups.
//--------------------------------------------------------------------------

/**
 * Sets up the L2 next hop rules of a device to provide forwarding inside
 * the fabric, i.e. between leaf and spine switches.
 *
 * @param deviceId the device ID
 */
private void setUpL2NextHopRules(DeviceId deviceId) {

    Set<Link> egressLinks = linkService.getDeviceEgressLinks(deviceId);

    for (Link link : egressLinks) {
        // For each other switch directly connected to this one.
        final DeviceId nextHopDevice = link.dst().deviceId();
        // Get port of this device connecting to next hop.
        final PortNumber outPort = link.src().port();
        // Get next hop MAC address.
        final MacAddress nextHopMac = getMyStationMac(nextHopDevice);

        final FlowRule nextHopRule = createL2NextHopRule(
                deviceId, nextHopMac, outPort);

        flowRuleService.applyFlowRules(nextHopRule);
    }
}

/**
 * Sets up the given device with the necessary rules to route packets to the
 * given host.
 *
 * @param deviceId the device ID
 * @param host     the host
 */
private void setUpHostRules(DeviceId deviceId, Host host) {

    // Get all IPv6 addresses associated with this host. In this tutorial we
    // use hosts with only 1 IPv6 address.
    final Collection<Ip6Address> hostIpv6Addrs = host.ipAddresses().stream()
            .filter(IpAddress::isIp6)
            .map(IpAddress::getIp6Address)
            .collect(Collectors.toSet());

    if (hostIpv6Addrs.isEmpty()) {
        // Ignore.
        log.debug("No IPv6 addresses for host {}, ignore", host.id());
        return;
    } else {
        log.info("Adding routes on {} for host {} [{}]",
                deviceId, host.id(), hostIpv6Addrs);
    }

    // Create an ECMP group with only one member, where the group ID is
    // derived from the host MAC.
    final MacAddress hostMac = host.mac();
    int groupId = macToGroupId(hostMac);
    final GroupDescription group = createNextHopGroup(
            groupId, Collections.singleton(hostMac), deviceId);

    // Map each host IPv6 address to the corresponding /128 prefix and obtain
    // a flow rule that points to the group ID. In this tutorial we expect
    // only one flow rule per host.
    final List<FlowRule> flowRules = hostIpv6Addrs.stream()
            .map(IpAddress::toIpPrefix)
            .filter(IpPrefix::isIp6)
            .map(IpPrefix::getIp6Prefix)
            .map(prefix -> createRoutingRule(deviceId, prefix, groupId))
            .collect(Collectors.toList());

    // Helper function to install flows after groups, since here flow rules
    // point to the group and P4Runtime enforces this dependency during
    // write operations.
    insertInOrder(group, flowRules);
}

/**
 * Sets up routes on a given device to forward packets across the fabric,
 * making a distinction between spines and leaves.
 *
 * @param deviceId the device ID.
 */
private void setUpFabricRoutes(DeviceId deviceId) {
    if (isSpine(deviceId)) {
        setUpSpineRoutes(deviceId);
    } else {
        setUpLeafRoutes(deviceId);
    }
}

/**
 * Inserts routing rules on the given spine switch, matching on leaf
 * interface subnets and forwarding packets to the corresponding leaf.
 *
 * @param spineId the spine device ID
 */
private void setUpSpineRoutes(DeviceId spineId) {

    log.info("Adding spine routes on {}...", spineId);

    for (Device device : deviceService.getDevices()) {

        if (isSpine(device.id())) {
            // We only need routes to leaf switches. Ignore spines.
            continue;
        }

        final DeviceId leafId = device.id();
        final MacAddress leafMac = getMyStationMac(leafId);
        final Set<Ip6Prefix> subnetsToRoute = getInterfaceIpv6Prefixes(leafId);

        // Since we're here, we also add a route for SRv6 (Exercise 7), to
        // forward packets with IPv6 dst the SID of a leaf switch.
        final Ip6Address leafSid = getDeviceSid(leafId);
        subnetsToRoute.add(Ip6Prefix.valueOf(leafSid, 128));

        // Create a group with only one member.
        int groupId = macToGroupId(leafMac);
        GroupDescription group = createNextHopGroup(
                groupId, Collections.singleton(leafMac), spineId);

        List<FlowRule> flowRules = subnetsToRoute.stream()
                .map(subnet -> createRoutingRule(spineId, subnet, groupId))
                .collect(Collectors.toList());

        insertInOrder(group, flowRules);
    }
}

/**
 * Inserts routing rules on the given leaf switch, matching on interface
 * subnets associated with other leaves and forwarding packets to the spines
 * using ECMP.
 *
 * @param leafId the leaf device ID
 */
private void setUpLeafRoutes(DeviceId leafId) {
    log.info("Setting up leaf routes: {}", leafId);

    // Get the set of subnets (interface IPv6 prefixes) associated with other
    // leaves but not this one.
    Set<Ip6Prefix> subnetsToRouteViaSpines = stream(deviceService.getDevices())
            .map(Device::id)
            .filter(this::isLeaf)
            .filter(deviceId -> !deviceId.equals(leafId))
            .map(this::getInterfaceIpv6Prefixes)
            .flatMap(Collection::stream)
            .collect(Collectors.toSet());

    // Get the myStationMac address of all spines.
    Set<MacAddress> spineMacs = stream(deviceService.getDevices())
            .map(Device::id)
            .filter(this::isSpine)
            .map(this::getMyStationMac)
            .collect(Collectors.toSet());

    // Create an ECMP group to distribute traffic across all spines.
    final int groupId = DEFAULT_ECMP_GROUP_ID;
    final GroupDescription ecmpGroup = createNextHopGroup(
            groupId, spineMacs, leafId);

    // Generate a flow rule for each subnet pointing to the ECMP group.
List flowRules = subnetsToRouteViaSpines.stream() .map(subnet -> createRoutingRule(leafId, subnet, groupId)) .collect(Collectors.toList()); insertInOrder(ecmpGroup, flowRules); // Since we're here, we also add a route for SRv6 (Exercise 7), to // forward packets with IPv6 dst the SID of a spine switch, in this case // using a single-member group. stream(deviceService.getDevices()) .map(Device::id) .filter(this::isSpine) .forEach(spineId -> { MacAddress spineMac = getMyStationMac(spineId); Ip6Address spineSid = getDeviceSid(spineId); int spineGroupId = macToGroupId(spineMac); GroupDescription group = createNextHopGroup( spineGroupId, Collections.singleton(spineMac), leafId); FlowRule routingRule = createRoutingRule( leafId, Ip6Prefix.valueOf(spineSid, 128), spineGroupId); insertInOrder(group, Collections.singleton(routingRule)); }); } //-------------------------------------------------------------------------- // UTILITY METHODS //-------------------------------------------------------------------------- /** * Returns true if the given device has isSpine flag set to true in the * config, false otherwise. * * @param deviceId the device ID * @return true if the device is a spine, false otherwise */ private boolean isSpine(DeviceId deviceId) { return getDeviceConfig(deviceId).map(FabricDeviceConfig::isSpine) .orElseThrow(() -> new ItemNotFoundException( "Missing isSpine config for " + deviceId)); } /** * Returns true if the given device is not configured as spine. * * @param deviceId the device ID * @return true if the device is a leaf, false otherwise */ private boolean isLeaf(DeviceId deviceId) { return !isSpine(deviceId); } /** * Returns the MAC address configured in the "myStationMac" property of the * given device config. 
*
 * @param deviceId the device ID
 * @return MyStation MAC address
 */
private MacAddress getMyStationMac(DeviceId deviceId) {
    return getDeviceConfig(deviceId)
            .map(FabricDeviceConfig::myStationMac)
            .orElseThrow(() -> new ItemNotFoundException(
                    "Missing myStationMac config for " + deviceId));
}

/**
 * Returns the FabricDeviceConfig config object for the given device.
 *
 * @param deviceId the device ID
 * @return FabricDeviceConfig device config
 */
private Optional<FabricDeviceConfig> getDeviceConfig(DeviceId deviceId) {
    FabricDeviceConfig config = networkConfigService.getConfig(
            deviceId, FabricDeviceConfig.class);
    return Optional.ofNullable(config);
}

/**
 * Returns the set of interface IPv6 subnets (prefixes) configured for the
 * given device.
 *
 * @param deviceId the device ID
 * @return set of IPv6 prefixes
 */
private Set<Ip6Prefix> getInterfaceIpv6Prefixes(DeviceId deviceId) {
    return interfaceService.getInterfaces().stream()
            .filter(iface -> iface.connectPoint().deviceId().equals(deviceId))
            .map(Interface::ipAddressesList)
            .flatMap(Collection::stream)
            .map(InterfaceIpAddress::subnetAddress)
            .filter(IpPrefix::isIp6)
            .map(IpPrefix::getIp6Prefix)
            .collect(Collectors.toSet());
}

/**
 * Returns a 32-bit group ID from the given MAC address.
 *
 * @param mac the MAC address
 * @return an integer
 */
private int macToGroupId(MacAddress mac) {
    return mac.hashCode() & 0x7fffffff;
}

/**
 * Inserts the given groups and flow rules in order, groups first, then flow
 * rules. In P4Runtime, when operating on an indirect table (i.e. with
 * action selectors), groups must be inserted before table entries.
 *
 * @param group     the group
 * @param flowRules the flow rules depending on the group
 */
private void insertInOrder(GroupDescription group,
                           Collection<FlowRule> flowRules) {
    try {
        groupService.addGroup(group);
        // Wait for groups to be inserted.
Thread.sleep(GROUP_INSERT_DELAY_MILLIS); flowRules.forEach(flowRuleService::applyFlowRules); } catch (InterruptedException e) { log.error("Interrupted!", e); Thread.currentThread().interrupt(); } } /** * Gets Srv6 SID for the given device. * * @param deviceId the device ID * @return SID for the device */ private Ip6Address getDeviceSid(DeviceId deviceId) { return getDeviceConfig(deviceId) .map(FabricDeviceConfig::mySid) .orElseThrow(() -> new ItemNotFoundException( "Missing mySid config for " + deviceId)); } /** * Sets up IPv6 routing on all devices known by ONOS and for which this ONOS * node instance is currently master. */ private synchronized void setUpAllDevices() { // Set up host routes stream(deviceService.getAvailableDevices()) .map(Device::id) .filter(mastershipService::isLocalMaster) .forEach(deviceId -> { log.info("*** IPV6 ROUTING - Starting initial set up for {}...", deviceId); setUpMyStationTable(deviceId); setUpFabricRoutes(deviceId); setUpL2NextHopRules(deviceId); hostService.getConnectedHosts(deviceId) .forEach(host -> setUpHostRules(deviceId, host)); }); } } ================================================ FILE: solution/app/src/main/java/org/onosproject/ngsdn/tutorial/L2BridgingComponent.java ================================================ /* * Copyright 2019-present Open Networking Foundation * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.onosproject.ngsdn.tutorial; import org.onlab.packet.MacAddress; import org.onosproject.core.ApplicationId; import org.onosproject.mastership.MastershipService; import org.onosproject.net.ConnectPoint; import org.onosproject.net.DeviceId; import org.onosproject.net.Host; import org.onosproject.net.PortNumber; import org.onosproject.net.config.NetworkConfigService; import org.onosproject.net.device.DeviceEvent; import org.onosproject.net.device.DeviceListener; import org.onosproject.net.device.DeviceService; import org.onosproject.net.flow.FlowRule; import org.onosproject.net.flow.FlowRuleService; import org.onosproject.net.flow.criteria.PiCriterion; import org.onosproject.net.group.GroupDescription; import org.onosproject.net.group.GroupService; import org.onosproject.net.host.HostEvent; import org.onosproject.net.host.HostListener; import org.onosproject.net.host.HostService; import org.onosproject.net.intf.Interface; import org.onosproject.net.intf.InterfaceService; import org.onosproject.net.pi.model.PiActionId; import org.onosproject.net.pi.model.PiActionParamId; import org.onosproject.net.pi.model.PiMatchFieldId; import org.onosproject.net.pi.runtime.PiAction; import org.onosproject.net.pi.runtime.PiActionParam; import org.osgi.service.component.annotations.Activate; import org.osgi.service.component.annotations.Component; import org.osgi.service.component.annotations.Deactivate; import org.osgi.service.component.annotations.Reference; import org.osgi.service.component.annotations.ReferenceCardinality; import org.onosproject.ngsdn.tutorial.common.FabricDeviceConfig; import org.onosproject.ngsdn.tutorial.common.Utils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import java.util.Set; import java.util.stream.Collectors; import static org.onosproject.ngsdn.tutorial.AppConstants.INITIAL_SETUP_DELAY; /** * App component that configures devices to provide L2 bridging capabilities. 
*/ @Component( immediate = true, // *** TODO EXERCISE 4 // Enable component (enabled = true) enabled = true ) public class L2BridgingComponent { private final Logger log = LoggerFactory.getLogger(getClass()); private static final int DEFAULT_BROADCAST_GROUP_ID = 255; private final DeviceListener deviceListener = new InternalDeviceListener(); private final HostListener hostListener = new InternalHostListener(); private ApplicationId appId; //-------------------------------------------------------------------------- // ONOS CORE SERVICE BINDING // // These variables are set by the Karaf runtime environment before calling // the activate() method. //-------------------------------------------------------------------------- @Reference(cardinality = ReferenceCardinality.MANDATORY) private HostService hostService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private DeviceService deviceService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private InterfaceService interfaceService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private NetworkConfigService configService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private FlowRuleService flowRuleService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private GroupService groupService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private MastershipService mastershipService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private MainComponent mainComponent; //-------------------------------------------------------------------------- // COMPONENT ACTIVATION. // // When loading/unloading the app the Karaf runtime environment will call // activate()/deactivate(). //-------------------------------------------------------------------------- @Activate protected void activate() { appId = mainComponent.getAppId(); // Register listeners to be informed about device and host events. 
deviceService.addListener(deviceListener); hostService.addListener(hostListener); // Schedule set up of existing devices. Needed when reloading the app. mainComponent.scheduleTask(this::setUpAllDevices, INITIAL_SETUP_DELAY); log.info("Started"); } @Deactivate protected void deactivate() { deviceService.removeListener(deviceListener); hostService.removeListener(hostListener); log.info("Stopped"); } //-------------------------------------------------------------------------- // METHODS TO COMPLETE. // // Complete the implementation wherever you see TODO. //-------------------------------------------------------------------------- /** * Sets up everything necessary to support L2 bridging on the given device. * * @param deviceId the device to set up */ private void setUpDevice(DeviceId deviceId) { if (isSpine(deviceId)) { // Stop here. We support bridging only on leaf/tor switches. return; } insertMulticastGroup(deviceId); insertMulticastFlowRules(deviceId); // Uncomment the following line after you have implemented the method: insertUnmatchedBridgingFlowRule(deviceId); } /** * Inserts an ALL group in the ONOS core to replicate packets on all host * facing ports. This group will be used to broadcast all ARP/NDP requests. *

* ALL groups in ONOS are equivalent to P4Runtime packet replication engine * (PRE) Multicast groups. * * @param deviceId the device where to install the group */ private void insertMulticastGroup(DeviceId deviceId) { // Replicate packets where we know hosts are attached. Set ports = getHostFacingPorts(deviceId); if (ports.isEmpty()) { // Stop here. log.warn("Device {} has 0 host facing ports", deviceId); return; } log.info("Adding L2 multicast group with {} ports on {}...", ports.size(), deviceId); // Forge group object. final GroupDescription multicastGroup = Utils.buildMulticastGroup( appId, deviceId, DEFAULT_BROADCAST_GROUP_ID, ports); // Insert. groupService.addGroup(multicastGroup); } /** * Insert flow rules matching ethernet destination * broadcast/multicast addresses (e.g. ARP requests, NDP Neighbor * Solicitation, etc.). Such packets should be processed by the multicast * group created before. *

This method will be called at component activation for each device
 * (switch) known by ONOS, and every time a new device-added event is
 * captured by the InternalDeviceListener defined below.
 *
 * @param deviceId device ID where to install the rules
 */
private void insertMulticastFlowRules(DeviceId deviceId) {
    log.info("Adding L2 multicast rules on {}...", deviceId);

    // Modify P4Runtime entity names to match content of P4Info file (look
    // for the fully qualified name of tables, match fields, and actions).
    // ---- START SOLUTION ----
    // Match ARP request - Match exactly FF:FF:FF:FF:FF:FF
    final PiCriterion macBroadcastCriterion = PiCriterion.builder()
            .matchTernary(
                    PiMatchFieldId.of("hdr.ethernet.dst_addr"),
                    MacAddress.valueOf("FF:FF:FF:FF:FF:FF").toBytes(),
                    MacAddress.valueOf("FF:FF:FF:FF:FF:FF").toBytes())
            .build();

    // Match NDP NS - Match ternary 33:33:**:**:**:**
    final PiCriterion ipv6MulticastCriterion = PiCriterion.builder()
            .matchTernary(
                    PiMatchFieldId.of("hdr.ethernet.dst_addr"),
                    MacAddress.valueOf("33:33:00:00:00:00").toBytes(),
                    MacAddress.valueOf("FF:FF:00:00:00:00").toBytes())
            .build();

    // Action: set multicast group id
    final PiAction setMcastGroupAction = PiAction.builder()
            .withId(PiActionId.of("IngressPipeImpl.set_multicast_group"))
            .withParameter(new PiActionParam(
                    PiActionParamId.of("gid"),
                    DEFAULT_BROADCAST_GROUP_ID))
            .build();

    // Build 2 flow rules.
    final String tableId = "IngressPipeImpl.l2_ternary_table";
    // ---- END SOLUTION ----

    final FlowRule rule1 = Utils.buildFlowRule(
            deviceId, appId, tableId,
            macBroadcastCriterion, setMcastGroupAction);

    final FlowRule rule2 = Utils.buildFlowRule(
            deviceId, appId, tableId,
            ipv6MulticastCriterion, setMcastGroupAction);

    // Insert rules.
    flowRuleService.applyFlowRules(rule1, rule2);
}

/**
 * Inserts a flow rule that matches all otherwise unmatched ethernet
 * traffic. This implements the traditional bridging behavior that floods
 * all unmatched traffic.
 *

This method will be called at component activation for each device
 * (switch) known by ONOS, and every time a new device-added event is
 * captured by the InternalDeviceListener defined below.
 *
 * @param deviceId device ID where to install the rules
 */
@SuppressWarnings("unused")
private void insertUnmatchedBridgingFlowRule(DeviceId deviceId) {
    log.info("Adding L2 flooding rule for unmatched traffic on {}...",
            deviceId);

    // Modify P4Runtime entity names to match content of P4Info file (look
    // for the fully qualified name of tables, match fields, and actions).
    // ---- START SOLUTION ----
    // Match unmatched traffic - Match ternary **:**:**:**:**:**
    final PiCriterion unmatchedTrafficCriterion = PiCriterion.builder()
            .matchTernary(
                    PiMatchFieldId.of("hdr.ethernet.dst_addr"),
                    MacAddress.valueOf("00:00:00:00:00:00").toBytes(),
                    MacAddress.valueOf("00:00:00:00:00:00").toBytes())
            .build();

    // Action: set multicast group id
    final PiAction setMcastGroupAction = PiAction.builder()
            .withId(PiActionId.of("IngressPipeImpl.set_multicast_group"))
            .withParameter(new PiActionParam(
                    PiActionParamId.of("gid"),
                    DEFAULT_BROADCAST_GROUP_ID))
            .build();

    // Build flow rule.
    final String tableId = "IngressPipeImpl.l2_ternary_table";
    // ---- END SOLUTION ----

    final FlowRule rule = Utils.buildFlowRule(
            deviceId, appId, tableId,
            unmatchedTrafficCriterion, setMcastGroupAction);

    // Insert rule.
    flowRuleService.applyFlowRules(rule);
}

/**
 * Inserts flow rules to forward packets to a given host located at the
 * given device and port.
 *

* This method will be called at component activation for each host known by * ONOS, and every time a new host-added event is captured by the * InternalHostListener defined below. * * @param host host instance * @param deviceId device where the host is located * @param port port where the host is attached to */ private void learnHost(Host host, DeviceId deviceId, PortNumber port) { log.info("Adding L2 unicast rule on {} for host {} (port {})...", deviceId, host.id(), port); // Modify P4Runtime entity names to match content of P4Info file (look // for the fully qualified name of tables, match fields, and actions. // ---- START SOLUTION ---- final String tableId = "IngressPipeImpl.l2_exact_table"; // Match exactly on the host MAC address. final MacAddress hostMac = host.mac(); final PiCriterion hostMacCriterion = PiCriterion.builder() .matchExact(PiMatchFieldId.of("hdr.ethernet.dst_addr"), hostMac.toBytes()) .build(); // Action: set output port final PiAction l2UnicastAction = PiAction.builder() .withId(PiActionId.of("IngressPipeImpl.set_egress_port")) .withParameter(new PiActionParam( PiActionParamId.of("port_num"), port.toLong())) .build(); // ---- END SOLUTION ---- // Forge flow rule. final FlowRule rule = Utils.buildFlowRule( deviceId, appId, tableId, hostMacCriterion, l2UnicastAction); // Insert. flowRuleService.applyFlowRules(rule); } //-------------------------------------------------------------------------- // EVENT LISTENERS // // Events are processed only if isRelevant() returns true. //-------------------------------------------------------------------------- /** * Listener of device events. */ public class InternalDeviceListener implements DeviceListener { @Override public boolean isRelevant(DeviceEvent event) { switch (event.type()) { case DEVICE_ADDED: case DEVICE_AVAILABILITY_CHANGED: break; default: // Ignore other events. return false; } // Process only if this controller instance is the master. 
final DeviceId deviceId = event.subject().id(); return mastershipService.isLocalMaster(deviceId); } @Override public void event(DeviceEvent event) { final DeviceId deviceId = event.subject().id(); if (deviceService.isAvailable(deviceId)) { // A P4Runtime device is considered available in ONOS when there // is a StreamChannel session open and the pipeline // configuration has been set. // Events are processed using a thread pool defined in the // MainComponent. mainComponent.getExecutorService().execute(() -> { log.info("{} event! deviceId={}", event.type(), deviceId); setUpDevice(deviceId); }); } } } /** * Listener of host events. */ public class InternalHostListener implements HostListener { @Override public boolean isRelevant(HostEvent event) { switch (event.type()) { case HOST_ADDED: // Host added events will be generated by the // HostLocationProvider by intercepting ARP/NDP packets. break; case HOST_REMOVED: case HOST_UPDATED: case HOST_MOVED: default: // Ignore other events. // Food for thought: how to support host moved/removed? return false; } // Process host event only if this controller instance is the master // for the device to which this host is attached. final Host host = event.subject(); final DeviceId deviceId = host.location().deviceId(); return mastershipService.isLocalMaster(deviceId); } @Override public void event(HostEvent event) { final Host host = event.subject(); // Device and port where the host is located. final DeviceId deviceId = host.location().deviceId(); final PortNumber port = host.location().port(); mainComponent.getExecutorService().execute(() -> { log.info("{} event!
host={}, deviceId={}, port={}", event.type(), host.id(), deviceId, port); learnHost(host, deviceId, port); }); } } //-------------------------------------------------------------------------- // UTILITY METHODS //-------------------------------------------------------------------------- /** * Returns a set of ports for the given device that are used to connect * hosts to the fabric. * * @param deviceId device ID * @return set of host facing ports */ private Set getHostFacingPorts(DeviceId deviceId) { // Get all interfaces configured via netcfg for the given device ID and // return the corresponding device port number. Interface configuration // in the netcfg.json looks like this: // "device:leaf1/3": { // "interfaces": [ // { // "name": "leaf1-3", // "ips": ["2001:1:1::ff/64"] // } // ] // } return interfaceService.getInterfaces().stream() .map(Interface::connectPoint) .filter(cp -> cp.deviceId().equals(deviceId)) .map(ConnectPoint::port) .collect(Collectors.toSet()); } /** * Returns true if the given device is defined as a spine in the * netcfg.json. * * @param deviceId device ID * @return true if spine, false otherwise */ private boolean isSpine(DeviceId deviceId) { // Example netcfg defining a device as spine: // "devices": { // "device:spine1": { // ... // "fabricDeviceConfig": { // "myStationMac": "...", // "mySid": "...", // "isSpine": true // } // }, // ... final FabricDeviceConfig cfg = configService.getConfig( deviceId, FabricDeviceConfig.class); return cfg != null && cfg.isSpine(); } /** * Sets up L2 bridging on all devices known by ONOS and for which this ONOS * node instance is currently master. *
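The stream pipeline in `getHostFacingPorts()` above can be tried in isolation. The sketch below uses hypothetical stand-in records for ONOS's `Interface` and `ConnectPoint`, keeping only the fields the pipeline actually touches:

```java
import java.util.Collection;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class HostFacingPorts {

    // Hypothetical stand-ins for ONOS's ConnectPoint and Interface,
    // reduced to the fields used by the stream pipeline.
    public record ConnectPoint(String deviceId, long port) {}
    public record Iface(String name, ConnectPoint connectPoint) {}

    // Same shape as getHostFacingPorts(): keep only the interfaces
    // configured on the given device and collect their port numbers.
    public static Set<Long> hostFacingPorts(Collection<Iface> ifaces, String deviceId) {
        return ifaces.stream()
                .map(Iface::connectPoint)
                .filter(cp -> cp.deviceId().equals(deviceId))
                .map(ConnectPoint::port)
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        List<Iface> ifaces = List.of(
                new Iface("leaf1-3", new ConnectPoint("device:leaf1", 3)),
                new Iface("leaf1-4", new ConnectPoint("device:leaf1", 4)),
                new Iface("leaf2-3", new ConnectPoint("device:leaf2", 3)));
        System.out.println(hostFacingPorts(ifaces, "device:leaf1"));
    }
}
```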

* This method is called at component activation. */ private void setUpAllDevices() { deviceService.getAvailableDevices().forEach(device -> { if (mastershipService.isLocalMaster(device.id())) { log.info("*** L2 BRIDGING - Starting initial set up for {}...", device.id()); setUpDevice(device.id()); // For all hosts connected to this device... hostService.getConnectedHosts(device.id()).forEach( host -> learnHost(host, host.location().deviceId(), host.location().port())); } }); } } ================================================ FILE: solution/app/src/main/java/org/onosproject/ngsdn/tutorial/NdpReplyComponent.java ================================================ /* * Copyright 2019-present Open Networking Foundation * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.onosproject.ngsdn.tutorial; import org.onlab.packet.Ip6Address; import org.onlab.packet.IpAddress; import org.onlab.packet.MacAddress; import org.onlab.util.ItemNotFoundException; import org.onosproject.core.ApplicationId; import org.onosproject.mastership.MastershipService; import org.onosproject.net.DeviceId; import org.onosproject.net.config.NetworkConfigService; import org.onosproject.net.device.DeviceEvent; import org.onosproject.net.device.DeviceListener; import org.onosproject.net.device.DeviceService; import org.onosproject.net.flow.FlowRule; import org.onosproject.net.flow.FlowRuleOperations; import org.onosproject.net.flow.FlowRuleService; import org.onosproject.net.flow.criteria.PiCriterion; import org.onosproject.net.host.InterfaceIpAddress; import org.onosproject.net.intf.Interface; import org.onosproject.net.intf.InterfaceService; import org.onosproject.net.pi.model.PiActionId; import org.onosproject.net.pi.model.PiActionParamId; import org.onosproject.net.pi.model.PiMatchFieldId; import org.onosproject.net.pi.runtime.PiAction; import org.onosproject.net.pi.runtime.PiActionParam; import org.onosproject.ngsdn.tutorial.common.FabricDeviceConfig; import org.onosproject.ngsdn.tutorial.common.Utils; import org.osgi.service.component.annotations.Activate; import org.osgi.service.component.annotations.Component; import org.osgi.service.component.annotations.Deactivate; import org.osgi.service.component.annotations.Reference; import org.osgi.service.component.annotations.ReferenceCardinality; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import java.util.Collection; import java.util.stream.Collectors; import static org.onosproject.ngsdn.tutorial.AppConstants.INITIAL_SETUP_DELAY; /** * App component that configures devices to generate NDP Neighbor Advertisement * packets for all interface IPv6 addresses configured in the netcfg. 
*/ @Component( immediate = true, // *** TODO EXERCISE 5 // Enable component (enabled = true) enabled = true ) public class NdpReplyComponent { private static final Logger log = LoggerFactory.getLogger(NdpReplyComponent.class.getName()); //-------------------------------------------------------------------------- // ONOS CORE SERVICE BINDING // // These variables are set by the Karaf runtime environment before calling // the activate() method. //-------------------------------------------------------------------------- @Reference(cardinality = ReferenceCardinality.MANDATORY) protected NetworkConfigService configService; @Reference(cardinality = ReferenceCardinality.MANDATORY) protected FlowRuleService flowRuleService; @Reference(cardinality = ReferenceCardinality.MANDATORY) protected InterfaceService interfaceService; @Reference(cardinality = ReferenceCardinality.MANDATORY) protected MastershipService mastershipService; @Reference(cardinality = ReferenceCardinality.MANDATORY) protected DeviceService deviceService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private MainComponent mainComponent; private DeviceListener deviceListener = new InternalDeviceListener(); private ApplicationId appId; //-------------------------------------------------------------------------- // COMPONENT ACTIVATION. // // When loading/unloading the app the Karaf runtime environment will call // activate()/deactivate(). //-------------------------------------------------------------------------- @Activate public void activate() { appId = mainComponent.getAppId(); // Register listeners to be informed about device events. deviceService.addListener(deviceListener); // Schedule set up of existing devices. Needed when reloading the app. 
mainComponent.scheduleTask(this::setUpAllDevices, INITIAL_SETUP_DELAY); log.info("Started"); } @Deactivate public void deactivate() { deviceService.removeListener(deviceListener); log.info("Stopped"); } //-------------------------------------------------------------------------- // METHODS TO COMPLETE. // // Complete the implementation wherever you see TODO. //-------------------------------------------------------------------------- /** * Set up all devices for which this ONOS instance is currently master. */ private void setUpAllDevices() { deviceService.getAvailableDevices().forEach(device -> { if (mastershipService.isLocalMaster(device.id())) { log.info("*** NDP REPLY - Starting Initial set up for {}...", device.id()); setUpDevice(device.id()); } }); } /** * Performs setup of the given device by creating a flow rule to generate * NDP NA packets for IPv6 addresses associated to the device interfaces. * * @param deviceId device ID */ private void setUpDevice(DeviceId deviceId) { // Get this device config from netcfg.json. final FabricDeviceConfig config = configService.getConfig( deviceId, FabricDeviceConfig.class); if (config == null) { // Config not available yet throw new ItemNotFoundException("Missing fabricDeviceConfig for " + deviceId); } // Get this device myStation mac. final MacAddress deviceMac = config.myStationMac(); // Get all interfaces currently configured for the device final Collection interfaces = interfaceService.getInterfaces() .stream() .filter(iface -> iface.connectPoint().deviceId().equals(deviceId)) .collect(Collectors.toSet()); if (interfaces.isEmpty()) { log.info("{} does not have any IPv6 interface configured", deviceId); return; } // Generate and install flow rules. 
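The generation step just below maps each interface to its IPv6 addresses, and each address to one NDP-reply flow rule answering with the device's myStation MAC. That flatMap shape can be sketched with plain stand-in types (`Iface` and `NdpRule` are hypothetical, not ONOS classes):

```java
import java.util.Collection;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class NdpRuleGen {

    // Hypothetical stand-ins: an interface with its IPv6 addresses, and
    // the (target address, device MAC) pair each NDP-reply rule encodes.
    public record Iface(String name, List<String> ipv6Addrs) {}
    public record NdpRule(String targetIpv6, String targetMac) {}

    // Same flatMap shape as setUpDevice(): one rule per configured
    // IPv6 address, all answering with the device's myStation MAC.
    public static Set<NdpRule> rulesFor(Collection<Iface> ifaces, String deviceMac) {
        return ifaces.stream()
                .flatMap(iface -> iface.ipv6Addrs().stream())
                .map(ip -> new NdpRule(ip, deviceMac))
                .collect(Collectors.toSet());
    }
}
```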
log.info("Adding rules to {} to generate NDP NA for {} IPv6 interfaces...", deviceId, interfaces.size()); final Collection flowRules = interfaces.stream() .map(this::getIp6Addresses) .flatMap(Collection::stream) .map(ipv6addr -> buildNdpReplyFlowRule(deviceId, ipv6addr, deviceMac)) .collect(Collectors.toSet()); installRules(flowRules); } /** * Build a flow rule for the NDP reply table on the given device, for the * given target IPv6 address and MAC address. * * @param deviceId device ID where to install the flow rules * @param targetIpv6Address target IPv6 address * @param targetMac target MAC address * @return flow rule object */ private FlowRule buildNdpReplyFlowRule(DeviceId deviceId, Ip6Address targetIpv6Address, MacAddress targetMac) { // *** TODO EXERCISE 5 // Modify P4Runtime entity names to match content of P4Info file (look // for the fully qualified name of tables, match fields, and actions. // ---- START SOLUTION ---- // Build match. final PiCriterion match = PiCriterion.builder() .matchExact(PiMatchFieldId.of("hdr.ndp.target_ipv6_addr"), targetIpv6Address.toOctets()) .build(); // Build action. final PiActionParam targetMacParam = new PiActionParam( PiActionParamId.of("target_mac"), targetMac.toBytes()); final PiAction action = PiAction.builder() .withId(PiActionId.of("IngressPipeImpl.ndp_ns_to_na")) .withParameter(targetMacParam) .build(); // Table ID. final String tableId = "IngressPipeImpl.ndp_reply_table"; // ---- END SOLUTION ---- // Build flow rule. final FlowRule rule = Utils.buildFlowRule( deviceId, appId, tableId, match, action); return rule; } //-------------------------------------------------------------------------- // EVENT LISTENERS // // Events are processed only if isRelevant() returns true. //-------------------------------------------------------------------------- /** * Listener of device events. 
*/ public class InternalDeviceListener implements DeviceListener { @Override public boolean isRelevant(DeviceEvent event) { switch (event.type()) { case DEVICE_ADDED: case DEVICE_AVAILABILITY_CHANGED: break; default: // Ignore other events. return false; } // Process only if this controller instance is the master. final DeviceId deviceId = event.subject().id(); return mastershipService.isLocalMaster(deviceId); } @Override public void event(DeviceEvent event) { final DeviceId deviceId = event.subject().id(); if (deviceService.isAvailable(deviceId)) { // A P4Runtime device is considered available in ONOS when there // is a StreamChannel session open and the pipeline // configuration has been set. // Events are processed using a thread pool defined in the // MainComponent. mainComponent.getExecutorService().execute(() -> { log.info("{} event! deviceId={}", event.type(), deviceId); setUpDevice(deviceId); }); } } } //-------------------------------------------------------------------------- // UTILITY METHODS //-------------------------------------------------------------------------- /** * Returns all IPv6 addresses associated with the given interface. * * @param iface interface instance * @return collection of IPv6 addresses */ private Collection getIp6Addresses(Interface iface) { return iface.ipAddressesList() .stream() .map(InterfaceIpAddress::ipAddress) .filter(IpAddress::isIp6) .map(IpAddress::getIp6Address) .collect(Collectors.toSet()); } /** * Install the given flow rules in batch using the flow rule service. 
* * @param flowRules flow rules to install */ private void installRules(Collection<FlowRule> flowRules) { FlowRuleOperations.Builder ops = FlowRuleOperations.builder(); flowRules.forEach(ops::add); flowRuleService.apply(ops.build()); } } ================================================ FILE: solution/app/src/main/java/org/onosproject/ngsdn/tutorial/Srv6Component.java ================================================ /* * Copyright 2019-present Open Networking Foundation * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ package org.onosproject.ngsdn.tutorial; import com.google.common.collect.Lists; import org.onlab.packet.Ip6Address; import org.onosproject.core.ApplicationId; import org.onosproject.mastership.MastershipService; import org.onosproject.net.Device; import org.onosproject.net.DeviceId; import org.onosproject.net.config.NetworkConfigService; import org.onosproject.net.device.DeviceEvent; import org.onosproject.net.device.DeviceListener; import org.onosproject.net.device.DeviceService; import org.onosproject.net.flow.FlowRule; import org.onosproject.net.flow.FlowRuleOperations; import org.onosproject.net.flow.FlowRuleService; import org.onosproject.net.flow.criteria.PiCriterion; import org.onosproject.net.pi.model.PiActionId; import org.onosproject.net.pi.model.PiActionParamId; import org.onosproject.net.pi.model.PiMatchFieldId; import org.onosproject.net.pi.model.PiTableId; import org.onosproject.net.pi.runtime.PiAction; import org.onosproject.net.pi.runtime.PiActionParam; import org.onosproject.net.pi.runtime.PiTableAction; import org.osgi.service.component.annotations.Activate; import org.osgi.service.component.annotations.Component; import org.osgi.service.component.annotations.Deactivate; import org.osgi.service.component.annotations.Reference; import org.osgi.service.component.annotations.ReferenceCardinality; import org.onosproject.ngsdn.tutorial.common.FabricDeviceConfig; import org.onosproject.ngsdn.tutorial.common.Utils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import java.util.List; import java.util.Optional; import static com.google.common.collect.Streams.stream; import static org.onosproject.ngsdn.tutorial.AppConstants.INITIAL_SETUP_DELAY; /** * Application which handles SRv6 segment routing. 
*/ @Component( immediate = true, // *** TODO EXERCISE 6 // set to true when ready enabled = true, service = Srv6Component.class ) public class Srv6Component { private static final Logger log = LoggerFactory.getLogger(Srv6Component.class); //-------------------------------------------------------------------------- // ONOS CORE SERVICE BINDING // // These variables are set by the Karaf runtime environment before calling // the activate() method. //-------------------------------------------------------------------------- @Reference(cardinality = ReferenceCardinality.MANDATORY) private FlowRuleService flowRuleService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private MastershipService mastershipService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private DeviceService deviceService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private NetworkConfigService networkConfigService; @Reference(cardinality = ReferenceCardinality.MANDATORY) private MainComponent mainComponent; private final DeviceListener deviceListener = new Srv6Component.InternalDeviceListener(); private ApplicationId appId; //-------------------------------------------------------------------------- // COMPONENT ACTIVATION. // // When loading/unloading the app the Karaf runtime environment will call // activate()/deactivate(). //-------------------------------------------------------------------------- @Activate protected void activate() { appId = mainComponent.getAppId(); // Register listeners to be informed about device and host events. deviceService.addListener(deviceListener); // Schedule set up for all devices. mainComponent.scheduleTask(this::setUpAllDevices, INITIAL_SETUP_DELAY); log.info("Started"); } @Deactivate protected void deactivate() { deviceService.removeListener(deviceListener); log.info("Stopped"); } //-------------------------------------------------------------------------- // METHODS TO COMPLETE. 
// // Complete the implementation wherever you see TODO. //-------------------------------------------------------------------------- /** * Populate the My SID table from the network configuration for the * specified device. * * @param deviceId the device Id */ private void setUpMySidTable(DeviceId deviceId) { Ip6Address mySid = getMySid(deviceId); log.info("Adding mySid rule on {} (sid {})...", deviceId, mySid); // *** TODO EXERCISE 6 // Fill in the table ID for the SRv6 my segment identifier table // ---- START SOLUTION ---- String tableId = "IngressPipeImpl.srv6_my_sid"; // ---- END SOLUTION ---- // *** TODO EXERCISE 6 // Modify the field and action id to match your P4Info // ---- START SOLUTION ---- PiCriterion match = PiCriterion.builder() .matchLpm( PiMatchFieldId.of("hdr.ipv6.dst_addr"), mySid.toOctets(), 128) .build(); PiTableAction action = PiAction.builder() .withId(PiActionId.of("IngressPipeImpl.srv6_end")) .build(); // ---- END SOLUTION ---- FlowRule myStationRule = Utils.buildFlowRule( deviceId, appId, tableId, match, action); flowRuleService.applyFlowRules(myStationRule); } /** * Insert a SRv6 transit insert policy that will inject an SRv6 header for * packets destined to destIp. * * @param deviceId device ID * @param destIp target IP address for the SRv6 policy * @param prefixLength prefix length for the target IP * @param segmentList list of SRv6 SIDs that make up the path */ public void insertSrv6InsertRule(DeviceId deviceId, Ip6Address destIp, int prefixLength, List segmentList) { if (segmentList.size() < 2 || segmentList.size() > 3) { throw new RuntimeException("List of " + segmentList.size() + " segments is not supported"); } // *** TODO EXERCISE 6 // Fill in the table ID for the SRv6 transit table. // ---- START SOLUTION ---- String tableId = "IngressPipeImpl.srv6_transit"; // ---- END SOLUTION ---- // *** TODO EXERCISE 6 // Modify match field, action id, and action parameters to match your P4Info. 
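`insertSrv6InsertRule()` above derives both the action name (`srv6_t_insert_2` or `srv6_t_insert_3`) and its parameter names (`s1`..`sN`) from the length of the segment list, rejecting anything outside 2-3 segments. That naming logic, extracted as a standalone sketch:

```java
import java.util.ArrayList;
import java.util.List;

public class Srv6ActionNaming {

    // Mirrors the selection done by insertSrv6InsertRule(): the action
    // name encodes the segment count, which must be 2 or 3.
    public static String actionId(int segments) {
        if (segments < 2 || segments > 3) {
            throw new IllegalArgumentException(
                    "List of " + segments + " segments is not supported");
        }
        return "IngressPipeImpl.srv6_t_insert_" + segments;
    }

    // Action parameters are named s1..sN, one per segment.
    public static List<String> paramIds(int segments) {
        List<String> ids = new ArrayList<>();
        for (int i = 0; i < segments; i++) {
            ids.add("s" + (i + 1));
        }
        return ids;
    }
}
```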
// ---- START SOLUTION ---- PiCriterion match = PiCriterion.builder() .matchLpm(PiMatchFieldId.of("hdr.ipv6.dst_addr"), destIp.toOctets(), prefixLength) .build(); List actionParams = Lists.newArrayList(); for (int i = 0; i < segmentList.size(); i++) { PiActionParamId paramId = PiActionParamId.of("s" + (i + 1)); PiActionParam param = new PiActionParam(paramId, segmentList.get(i).toOctets()); actionParams.add(param); } PiAction action = PiAction.builder() .withId(PiActionId.of("IngressPipeImpl.srv6_t_insert_" + segmentList.size())) .withParameters(actionParams) .build(); // ---- END SOLUTION ---- final FlowRule rule = Utils.buildFlowRule( deviceId, appId, tableId, match, action); flowRuleService.applyFlowRules(rule); } /** * Remove all SRv6 transit insert polices for the specified device. * * @param deviceId device ID */ public void clearSrv6InsertRules(DeviceId deviceId) { // *** TODO EXERCISE 6 // Fill in the table ID for the SRv6 transit table // ---- START SOLUTION ---- String tableId = "IngressPipeImpl.srv6_transit"; // ---- END SOLUTION ---- FlowRuleOperations.Builder ops = FlowRuleOperations.builder(); stream(flowRuleService.getFlowEntries(deviceId)) .filter(fe -> fe.appId() == appId.id()) .filter(fe -> fe.table().equals(PiTableId.of(tableId))) .forEach(ops::remove); flowRuleService.apply(ops.build()); } // ---------- END METHODS TO COMPLETE ---------------- //-------------------------------------------------------------------------- // EVENT LISTENERS // // Events are processed only if isRelevant() returns true. //-------------------------------------------------------------------------- /** * Listener of device events. */ public class InternalDeviceListener implements DeviceListener { @Override public boolean isRelevant(DeviceEvent event) { switch (event.type()) { case DEVICE_ADDED: case DEVICE_AVAILABILITY_CHANGED: break; default: // Ignore other events. return false; } // Process only if this controller instance is the master. 
final DeviceId deviceId = event.subject().id(); return mastershipService.isLocalMaster(deviceId); } @Override public void event(DeviceEvent event) { final DeviceId deviceId = event.subject().id(); if (deviceService.isAvailable(deviceId)) { // A P4Runtime device is considered available in ONOS when there // is a StreamChannel session open and the pipeline // configuration has been set. mainComponent.getExecutorService().execute(() -> { log.info("{} event! deviceId={}", event.type(), deviceId); setUpMySidTable(event.subject().id()); }); } } } //-------------------------------------------------------------------------- // UTILITY METHODS //-------------------------------------------------------------------------- /** * Sets up SRv6 My SID table on all devices known by ONOS and for which this * ONOS node instance is currently master. */ private synchronized void setUpAllDevices() { // Set up host routes stream(deviceService.getAvailableDevices()) .map(Device::id) .filter(mastershipService::isLocalMaster) .forEach(deviceId -> { log.info("*** SRV6 - Starting initial set up for {}...", deviceId); this.setUpMySidTable(deviceId); }); } /** * Returns the Srv6 config for the given device. * * @param deviceId the device ID * @return Srv6 device config */ private Optional getDeviceConfig(DeviceId deviceId) { FabricDeviceConfig config = networkConfigService.getConfig(deviceId, FabricDeviceConfig.class); return Optional.ofNullable(config); } /** * Returns Srv6 SID for the given device. 
* * @param deviceId the device ID * @return SID for the device */ private Ip6Address getMySid(DeviceId deviceId) { return getDeviceConfig(deviceId) .map(FabricDeviceConfig::mySid) .orElseThrow(() -> new RuntimeException( "Missing mySid config for " + deviceId)); } } ================================================ FILE: solution/app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/InterpreterImpl.java ================================================ /* * Copyright 2019-present Open Networking Foundation * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ package org.onosproject.ngsdn.tutorial.pipeconf; import com.google.common.collect.ImmutableList; import com.google.common.collect.ImmutableMap; import org.onlab.packet.DeserializationException; import org.onlab.packet.Ethernet; import org.onlab.util.ImmutableByteSequence; import org.onosproject.net.ConnectPoint; import org.onosproject.net.DeviceId; import org.onosproject.net.Port; import org.onosproject.net.PortNumber; import org.onosproject.net.device.DeviceService; import org.onosproject.net.driver.AbstractHandlerBehaviour; import org.onosproject.net.flow.TrafficTreatment; import org.onosproject.net.flow.criteria.Criterion; import org.onosproject.net.packet.DefaultInboundPacket; import org.onosproject.net.packet.InboundPacket; import org.onosproject.net.packet.OutboundPacket; import org.onosproject.net.pi.model.PiMatchFieldId; import org.onosproject.net.pi.model.PiPacketMetadataId; import org.onosproject.net.pi.model.PiPipelineInterpreter; import org.onosproject.net.pi.model.PiTableId; import org.onosproject.net.pi.runtime.PiAction; import org.onosproject.net.pi.runtime.PiPacketMetadata; import org.onosproject.net.pi.runtime.PiPacketOperation; import java.nio.ByteBuffer; import java.util.Collection; import java.util.List; import java.util.Map; import java.util.Optional; import static java.lang.String.format; import static java.util.stream.Collectors.toList; import static org.onlab.util.ImmutableByteSequence.copyFrom; import static org.onosproject.net.PortNumber.CONTROLLER; import static org.onosproject.net.PortNumber.FLOOD; import static org.onosproject.net.flow.instructions.Instruction.Type.OUTPUT; import static org.onosproject.net.flow.instructions.Instructions.OutputInstruction; import static org.onosproject.net.pi.model.PiPacketOperationType.PACKET_OUT; import static org.onosproject.ngsdn.tutorial.AppConstants.CPU_PORT_ID; /** * Interpreter implementation. 
*/ public class InterpreterImpl extends AbstractHandlerBehaviour implements PiPipelineInterpreter { // From v1model.p4 private static final int V1MODEL_PORT_BITWIDTH = 9; // From P4Info. private static final Map CRITERION_MAP = new ImmutableMap.Builder() .put(Criterion.Type.IN_PORT, "standard_metadata.ingress_port") .put(Criterion.Type.ETH_DST, "hdr.ethernet.dst_addr") .put(Criterion.Type.ETH_SRC, "hdr.ethernet.src_addr") .put(Criterion.Type.ETH_TYPE, "hdr.ethernet.ether_type") .put(Criterion.Type.IPV6_DST, "hdr.ipv6.dst_addr") .put(Criterion.Type.IP_PROTO, "local_metadata.ip_proto") .put(Criterion.Type.ICMPV4_TYPE, "local_metadata.icmp_type") .put(Criterion.Type.ICMPV6_TYPE, "local_metadata.icmp_type") .build(); /** * Returns a collection of PI packet operations populated with metadata * specific for this pipeconf and equivalent to the given ONOS * OutboundPacket instance. * * @param packet ONOS OutboundPacket * @return collection of PI packet operations * @throws PiInterpreterException if the packet treatments cannot be * executed by this pipeline */ @Override public Collection mapOutboundPacket(OutboundPacket packet) throws PiInterpreterException { TrafficTreatment treatment = packet.treatment(); // Packet-out in main.p4 supports only setting the output port, // i.e. we only understand OUTPUT instructions. List outInstructions = treatment .allInstructions() .stream() .filter(i -> i.type().equals(OUTPUT)) .map(i -> (OutputInstruction) i) .collect(toList()); if (treatment.allInstructions().size() != outInstructions.size()) { // There are other instructions that are not of type OUTPUT. 
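`CRITERION_MAP` above is what backs `mapCriterionType()`: a known ONOS criterion type maps to its fully qualified P4Info match-field name, and anything else yields an empty `Optional`, telling ONOS the pipeline cannot express that match field. A trimmed-down sketch with plain strings standing in for the `Criterion.Type` enum:

```java
import java.util.Map;
import java.util.Optional;

public class CriterionMapSketch {

    // A subset of the criterion-type -> P4Info-field mapping above,
    // with plain strings standing in for ONOS's Criterion.Type enum.
    private static final Map<String, String> CRITERION_MAP = Map.of(
            "IN_PORT", "standard_metadata.ingress_port",
            "ETH_DST", "hdr.ethernet.dst_addr",
            "IPV6_DST", "hdr.ipv6.dst_addr");

    // Known types map to a fully qualified match-field name; anything
    // else is empty, i.e. not expressible by this pipeline.
    public static Optional<String> mapCriterionType(String type) {
        return Optional.ofNullable(CRITERION_MAP.get(type));
    }
}
```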
throw new PiInterpreterException("Treatment not supported: " + treatment); } ImmutableList.Builder builder = ImmutableList.builder(); for (OutputInstruction outInst : outInstructions) { if (outInst.port().isLogical() && !outInst.port().equals(FLOOD)) { throw new PiInterpreterException(format( "Packet-out on logical port '%s' not supported", outInst.port())); } else if (outInst.port().equals(FLOOD)) { // To emulate flooding, we create a packet-out operation for // each switch port. final DeviceService deviceService = handler().get(DeviceService.class); for (Port port : deviceService.getPorts(packet.sendThrough())) { builder.add(buildPacketOut(packet.data(), port.number().toLong())); } } else { // Create only one packet-out for the given OUTPUT instruction. builder.add(buildPacketOut(packet.data(), outInst.port().toLong())); } } return builder.build(); } /** * Builds a pipeconf-specific packet-out instance with the given payload and * egress port. * * @param pktData packet payload * @param portNumber egress port * @return packet-out * @throws PiInterpreterException if packet-out cannot be built */ private PiPacketOperation buildPacketOut(ByteBuffer pktData, long portNumber) throws PiInterpreterException { // Make sure port number can fit in v1model port metadata bitwidth. final ImmutableByteSequence portBytes; try { portBytes = copyFrom(portNumber).fit(V1MODEL_PORT_BITWIDTH); } catch (ImmutableByteSequence.ByteSequenceTrimException e) { throw new PiInterpreterException(format( "Port number %d too big, %s", portNumber, e.getMessage())); } // Create metadata instance for egress port. // *** TODO EXERCISE 4: modify metadata names to match P4 program // ---- START SOLUTION ---- final String outPortMetadataName = "egress_port"; // ---- END SOLUTION ---- final PiPacketMetadata outPortMetadata = PiPacketMetadata.builder() .withId(PiPacketMetadataId.of(outPortMetadataName)) .withValue(portBytes) .build(); // Build packet out. 
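The `fit(V1MODEL_PORT_BITWIDTH)` call in `buildPacketOut()` above rejects port numbers that cannot be encoded in v1model's 9-bit port metadata. The underlying range check amounts to the following (a sketch of the condition, not the `ImmutableByteSequence` implementation):

```java
public class PortWidthCheck {

    // From v1model.p4: port metadata fields are 9 bits wide.
    static final int V1MODEL_PORT_BITWIDTH = 9;

    // An unsigned n-bit field can hold value iff 0 <= value < 2^n.
    public static boolean fitsUnsigned(long value, int bitwidth) {
        return value >= 0 && value < (1L << bitwidth);
    }
}
```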
return PiPacketOperation.builder() .withType(PACKET_OUT) .withData(copyFrom(pktData)) .withMetadata(outPortMetadata) .build(); } /** * Returns an ONOS InboundPacket equivalent to the given pipeconf-specific * packet-in operation. * * @param packetIn packet operation * @param deviceId ID of the device that originated the packet-in * @return inbound packet * @throws PiInterpreterException if the packet operation cannot be mapped * to an inbound packet */ @Override public InboundPacket mapInboundPacket(PiPacketOperation packetIn, DeviceId deviceId) throws PiInterpreterException { // Find the ingress_port metadata. // *** TODO EXERCISE 4: modify metadata names to match P4Info // ---- START SOLUTION ---- final String inportMetadataName = "ingress_port"; // ---- END SOLUTION ---- Optional<PiPacketMetadata> inportMetadata = packetIn.metadatas() .stream() .filter(meta -> meta.id().id().equals(inportMetadataName)) .findFirst(); if (!inportMetadata.isPresent()) { throw new PiInterpreterException(format( "Missing metadata '%s' in packet-in received from '%s': %s", inportMetadataName, deviceId, packetIn)); } // Build ONOS InboundPacket instance with the given ingress port. // 1. Parse packet-in object into Ethernet packet instance. final byte[] payloadBytes = packetIn.data().asArray(); final ByteBuffer rawData = ByteBuffer.wrap(payloadBytes); final Ethernet ethPkt; try { ethPkt = Ethernet.deserializer().deserialize( payloadBytes, 0, packetIn.data().size()); } catch (DeserializationException dex) { throw new PiInterpreterException(dex.getMessage()); } // 2.
Get ingress port final ImmutableByteSequence portBytes = inportMetadata.get().value(); final short portNum = portBytes.asReadOnlyBuffer().getShort(); final ConnectPoint receivedFrom = new ConnectPoint( deviceId, PortNumber.portNumber(portNum)); return new DefaultInboundPacket(receivedFrom, ethPkt, rawData); } @Override public Optional mapLogicalPortNumber(PortNumber port) { if (CONTROLLER.equals(port)) { return Optional.of(CPU_PORT_ID); } else { return Optional.empty(); } } @Override public Optional mapCriterionType(Criterion.Type type) { if (CRITERION_MAP.containsKey(type)) { return Optional.of(PiMatchFieldId.of(CRITERION_MAP.get(type))); } else { return Optional.empty(); } } @Override public PiAction mapTreatment(TrafficTreatment treatment, PiTableId piTableId) throws PiInterpreterException { throw new PiInterpreterException("Treatment mapping not supported"); } @Override public Optional mapFlowRuleTableId(int flowRuleTableId) { return Optional.empty(); } } ================================================ FILE: solution/mininet/flowrule-gtp.json ================================================ { "flows": [ { "deviceId": "device:leaf1", "tableId": "FabricIngress.spgw_ingress.dl_sess_lookup", "priority": 10, "timeout": 0, "isPermanent": true, "selector": { "criteria": [ { "type": "IPV4_DST", "ip": "17.0.0.1/32" } ] }, "treatment": { "instructions": [ { "type": "PROTOCOL_INDEPENDENT", "subtype": "ACTION", "actionId": "FabricIngress.spgw_ingress.set_dl_sess_info", "actionParams": { "teid": "BEEF", "s1u_enb_addr": "0a006401", "s1u_sgw_addr": "0a0064fe" } } ] } } ] } ================================================ FILE: solution/mininet/netcfg-gtp.json ================================================ { "devices": { "device:leaf1": { "basic": { "managementAddress": "grpc://mininet:50001?device_id=1", "driver": "stratum-bmv2", "pipeconf": "org.onosproject.pipelines.fabric-spgw", "locType": "grid", "gridX": 200, "gridY": 600 }, "segmentrouting": { "name": "leaf1", 
"ipv4NodeSid": 101, "ipv4Loopback": "192.168.1.1", "routerMac": "00:AA:00:00:00:01", "isEdgeRouter": true, "adjacencySids": [] } }, "device:leaf2": { "basic": { "managementAddress": "grpc://mininet:50002?device_id=1", "driver": "stratum-bmv2", "pipeconf": "org.onosproject.pipelines.fabric", "locType": "grid", "gridX": 800, "gridY": 600 }, "segmentrouting": { "name": "leaf2", "ipv4NodeSid": 102, "ipv4Loopback": "192.168.1.2", "routerMac": "00:AA:00:00:00:02", "isEdgeRouter": true, "adjacencySids": [] } }, "device:spine1": { "basic": { "managementAddress": "grpc://mininet:50003?device_id=1", "driver": "stratum-bmv2", "pipeconf": "org.onosproject.pipelines.fabric", "locType": "grid", "gridX": 400, "gridY": 400 }, "segmentrouting": { "name": "spine1", "ipv4NodeSid": 201, "ipv4Loopback": "192.168.2.1", "routerMac": "00:BB:00:00:00:01", "isEdgeRouter": false, "adjacencySids": [] } }, "device:spine2": { "basic": { "managementAddress": "grpc://mininet:50004?device_id=1", "driver": "stratum-bmv2", "pipeconf": "org.onosproject.pipelines.fabric", "locType": "grid", "gridX": 600, "gridY": 400 }, "segmentrouting": { "name": "spine2", "ipv4NodeSid": 202, "ipv4Loopback": "192.168.2.2", "routerMac": "00:BB:00:00:00:02", "isEdgeRouter": false, "adjacencySids": [] } } }, "ports": { "device:leaf1/3": { "interfaces": [ { "name": "leaf1-3", "ips": [ "10.0.100.254/24" ], "vlan-untagged": 100 } ] }, "device:leaf2/3": { "interfaces": [ { "name": "leaf2-3", "ips": [ "10.0.200.254/24" ], "vlan-untagged": 200 } ] } }, "hosts": { "00:00:00:00:00:10/None": { "basic": { "name": "enodeb", "gridX": 100, "gridY": 700, "locType": "grid", "ips": [ "10.0.100.1" ], "locations": [ "device:leaf1/3" ] } }, "00:00:00:00:00:20/None": { "basic": { "name": "pdn", "gridX": 850, "gridY": 700, "locType": "grid", "ips": [ "10.0.200.1" ], "locations": [ "device:leaf2/3" ] } } } } ================================================ FILE: solution/mininet/netcfg-sr.json ================================================ 
{ "devices": { "device:leaf1": { "basic": { "managementAddress": "grpc://mininet:50001?device_id=1", "driver": "stratum-bmv2", "pipeconf": "org.onosproject.pipelines.fabric", "locType": "grid", "gridX": 200, "gridY": 600 }, "segmentrouting": { "name": "leaf1", "ipv4NodeSid": 101, "ipv4Loopback": "192.168.1.1", "routerMac": "00:AA:00:00:00:01", "isEdgeRouter": true, "adjacencySids": [] } }, "device:leaf2": { "basic": { "managementAddress": "grpc://mininet:50002?device_id=1", "driver": "stratum-bmv2", "pipeconf": "org.onosproject.pipelines.fabric", "locType": "grid", "gridX": 800, "gridY": 600 }, "segmentrouting": { "name": "leaf2", "ipv4NodeSid": 102, "ipv4Loopback": "192.168.1.2", "routerMac": "00:AA:00:00:00:02", "isEdgeRouter": true, "adjacencySids": [] } }, "device:spine1": { "basic": { "managementAddress": "grpc://mininet:50003?device_id=1", "driver": "stratum-bmv2", "pipeconf": "org.onosproject.pipelines.fabric", "locType": "grid", "gridX": 400, "gridY": 400 }, "segmentrouting": { "name": "spine1", "ipv4NodeSid": 201, "ipv4Loopback": "192.168.2.1", "routerMac": "00:BB:00:00:00:01", "isEdgeRouter": false, "adjacencySids": [] } }, "device:spine2": { "basic": { "managementAddress": "grpc://mininet:50004?device_id=1", "driver": "stratum-bmv2", "pipeconf": "org.onosproject.pipelines.fabric", "locType": "grid", "gridX": 600, "gridY": 400 }, "segmentrouting": { "name": "spine2", "ipv4NodeSid": 202, "ipv4Loopback": "192.168.2.2", "routerMac": "00:BB:00:00:00:02", "isEdgeRouter": false, "adjacencySids": [] } } }, "ports": { "device:leaf1/3": { "interfaces": [ { "name": "leaf1-3", "ips": [ "172.16.1.254/24" ], "vlan-untagged": 100 } ] }, "device:leaf1/4": { "interfaces": [ { "name": "leaf1-4", "ips": [ "172.16.1.254/24" ], "vlan-untagged": 100 } ] }, "device:leaf1/5": { "interfaces": [ { "name": "leaf1-5", "ips": [ "172.16.1.254/24" ], "vlan-tagged": [ 100 ] } ] }, "device:leaf1/6": { "interfaces": [ { "name": "leaf1-6", "ips": [ "172.16.2.254/24" ], "vlan-tagged": [ 
200 ] } ] }, "device:leaf2/3": { "interfaces": [ { "name": "leaf2-3", "ips": [ "172.16.3.254/24" ], "vlan-tagged": [ 300 ] } ] }, "device:leaf2/4": { "interfaces": [ { "name": "leaf2-4", "ips": [ "172.16.4.254/24" ], "vlan-untagged": 400 } ] } }, "hosts": { "00:00:00:00:00:1A/None": { "basic": { "name": "h1a", "locType": "grid", "gridX": 100, "gridY": 700 } }, "00:00:00:00:00:1B/None": { "basic": { "name": "h1b", "locType": "grid", "gridX": 100, "gridY": 800 } }, "00:00:00:00:00:1C/100": { "basic": { "name": "h1c", "locType": "grid", "gridX": 250, "gridY": 800 } }, "00:00:00:00:00:20/200": { "basic": { "name": "h2", "locType": "grid", "gridX": 400, "gridY": 700 } }, "00:00:00:00:00:30/300": { "basic": { "name": "h3", "locType": "grid", "gridX": 750, "gridY": 700 } }, "00:00:00:00:00:40/None": { "basic": { "name": "h4", "locType": "grid", "gridX": 850, "gridY": 700 } } } } ================================================ FILE: solution/p4src/main.p4 ================================================ /* * Copyright 2019-present Open Networking Foundation * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #include #include // CPU_PORT specifies the P4 port number associated to controller packet-in and // packet-out. All packets forwarded via this port will be delivered to the // controller as P4Runtime PacketIn messages. Similarly, PacketOut messages from // the controller will be seen by the P4 pipeline as coming from the CPU_PORT. 
#define CPU_PORT 255 // CPU_CLONE_SESSION_ID specifies the mirroring session for packets to be cloned // to the CPU port. Packets associated with this session ID will be cloned to // the CPU_PORT as well as being transmitted via their egress port (set by the // bridging/routing/acl table). For cloning to work, the P4Runtime controller // needs first to insert a CloneSessionEntry that maps this session ID to the // CPU_PORT. #define CPU_CLONE_SESSION_ID 99 // Maximum number of hops supported when using SRv6. // Required for Exercise 7. #define SRV6_MAX_HOPS 4 typedef bit<9> port_num_t; typedef bit<48> mac_addr_t; typedef bit<16> mcast_group_id_t; typedef bit<32> ipv4_addr_t; typedef bit<128> ipv6_addr_t; typedef bit<16> l4_port_t; const bit<16> ETHERTYPE_IPV4 = 0x0800; const bit<16> ETHERTYPE_IPV6 = 0x86dd; const bit<8> IP_PROTO_ICMP = 1; const bit<8> IP_PROTO_TCP = 6; const bit<8> IP_PROTO_UDP = 17; const bit<8> IP_PROTO_SRV6 = 43; const bit<8> IP_PROTO_ICMPV6 = 58; const mac_addr_t IPV6_MCAST_01 = 0x33_33_00_00_00_01; const bit<8> ICMP6_TYPE_NS = 135; const bit<8> ICMP6_TYPE_NA = 136; const bit<8> NDP_OPT_TARGET_LL_ADDR = 2; const bit<32> NDP_FLAG_ROUTER = 0x80000000; const bit<32> NDP_FLAG_SOLICITED = 0x40000000; const bit<32> NDP_FLAG_OVERRIDE = 0x20000000; //------------------------------------------------------------------------------ // HEADER DEFINITIONS //------------------------------------------------------------------------------ header ethernet_t { mac_addr_t dst_addr; mac_addr_t src_addr; bit<16> ether_type; } header ipv4_t { bit<4> version; bit<4> ihl; bit<6> dscp; bit<2> ecn; bit<16> total_len; bit<16> identification; bit<3> flags; bit<13> frag_offset; bit<8> ttl; bit<8> protocol; bit<16> hdr_checksum; bit<32> src_addr; bit<32> dst_addr; } header ipv6_t { bit<4> version; bit<8> traffic_class; bit<20> flow_label; bit<16> payload_len; bit<8> next_hdr; bit<8> hop_limit; bit<128> src_addr; bit<128> dst_addr; } header srv6h_t { bit<8> next_hdr; bit<8> 
hdr_ext_len; bit<8> routing_type; bit<8> segment_left; bit<8> last_entry; bit<8> flags; bit<16> tag; } header srv6_list_t { bit<128> segment_id; } header tcp_t { bit<16> src_port; bit<16> dst_port; bit<32> seq_no; bit<32> ack_no; bit<4> data_offset; bit<3> res; bit<3> ecn; bit<6> ctrl; bit<16> window; bit<16> checksum; bit<16> urgent_ptr; } header udp_t { bit<16> src_port; bit<16> dst_port; bit<16> len; bit<16> checksum; } header icmp_t { bit<8> type; bit<8> icmp_code; bit<16> checksum; bit<16> identifier; bit<16> sequence_number; bit<64> timestamp; } header icmpv6_t { bit<8> type; bit<8> code; bit<16> checksum; } header ndp_t { bit<32> flags; ipv6_addr_t target_ipv6_addr; // NDP option. bit<8> type; bit<8> length; bit<48> target_mac_addr; } // Packet-in header. Prepended to packets sent to the CPU_PORT and used by the // P4Runtime server (Stratum) to populate the PacketIn message metadata fields. // Here we use it to carry the original ingress port where the packet was // received. @controller_header("packet_in") header cpu_in_header_t { port_num_t ingress_port; bit<7> _pad; } // Packet-out header. Prepended to packets received from the CPU_PORT. Fields of // this header are populated by the P4Runtime server based on the P4Runtime // PacketOut metadata fields. Here we use it to inform the P4 pipeline on which // port this packet-out should be transmitted. 
@controller_header("packet_out") header cpu_out_header_t { port_num_t egress_port; bit<7> _pad; } struct parsed_headers_t { cpu_out_header_t cpu_out; cpu_in_header_t cpu_in; ethernet_t ethernet; ipv4_t ipv4; ipv6_t ipv6; srv6h_t srv6h; srv6_list_t[SRV6_MAX_HOPS] srv6_list; tcp_t tcp; udp_t udp; icmp_t icmp; icmpv6_t icmpv6; ndp_t ndp; } struct local_metadata_t { l4_port_t l4_src_port; l4_port_t l4_dst_port; bool is_multicast; ipv6_addr_t next_srv6_sid; bit<8> ip_proto; bit<8> icmp_type; } //------------------------------------------------------------------------------ // INGRESS PIPELINE //------------------------------------------------------------------------------ parser ParserImpl (packet_in packet, out parsed_headers_t hdr, inout local_metadata_t local_metadata, inout standard_metadata_t standard_metadata) { state start { transition select(standard_metadata.ingress_port) { CPU_PORT: parse_packet_out; default: parse_ethernet; } } state parse_packet_out { packet.extract(hdr.cpu_out); transition parse_ethernet; } state parse_ethernet { packet.extract(hdr.ethernet); transition select(hdr.ethernet.ether_type){ ETHERTYPE_IPV4: parse_ipv4; ETHERTYPE_IPV6: parse_ipv6; default: accept; } } state parse_ipv4 { packet.extract(hdr.ipv4); local_metadata.ip_proto = hdr.ipv4.protocol; transition select(hdr.ipv4.protocol) { IP_PROTO_TCP: parse_tcp; IP_PROTO_UDP: parse_udp; IP_PROTO_ICMP: parse_icmp; default: accept; } } state parse_ipv6 { packet.extract(hdr.ipv6); local_metadata.ip_proto = hdr.ipv6.next_hdr; transition select(hdr.ipv6.next_hdr) { IP_PROTO_TCP: parse_tcp; IP_PROTO_UDP: parse_udp; IP_PROTO_ICMPV6: parse_icmpv6; IP_PROTO_SRV6: parse_srv6; default: accept; } } state parse_tcp { packet.extract(hdr.tcp); local_metadata.l4_src_port = hdr.tcp.src_port; local_metadata.l4_dst_port = hdr.tcp.dst_port; transition accept; } state parse_udp { packet.extract(hdr.udp); local_metadata.l4_src_port = hdr.udp.src_port; local_metadata.l4_dst_port = hdr.udp.dst_port; transition 
accept; } state parse_icmp { packet.extract(hdr.icmp); local_metadata.icmp_type = hdr.icmp.type; transition accept; } state parse_icmpv6 { packet.extract(hdr.icmpv6); local_metadata.icmp_type = hdr.icmpv6.type; transition select(hdr.icmpv6.type) { ICMP6_TYPE_NS: parse_ndp; ICMP6_TYPE_NA: parse_ndp; default: accept; } } state parse_ndp { packet.extract(hdr.ndp); transition accept; } state parse_srv6 { packet.extract(hdr.srv6h); transition parse_srv6_list; } state parse_srv6_list { packet.extract(hdr.srv6_list.next); bool next_segment = (bit<32>)hdr.srv6h.segment_left - 1 == (bit<32>)hdr.srv6_list.lastIndex; transition select(next_segment) { true: mark_current_srv6; default: check_last_srv6; } } state mark_current_srv6 { local_metadata.next_srv6_sid = hdr.srv6_list.last.segment_id; transition check_last_srv6; } state check_last_srv6 { // working with bit<8> and int<32> which cannot be cast directly; using // bit<32> as common intermediate type for comparison bool last_segment = (bit<32>)hdr.srv6h.last_entry == (bit<32>)hdr.srv6_list.lastIndex; transition select(last_segment) { true: parse_srv6_next_hdr; false: parse_srv6_list; } } state parse_srv6_next_hdr { transition select(hdr.srv6h.next_hdr) { IP_PROTO_TCP: parse_tcp; IP_PROTO_UDP: parse_udp; IP_PROTO_ICMPV6: parse_icmpv6; default: accept; } } } control VerifyChecksumImpl(inout parsed_headers_t hdr, inout local_metadata_t meta) { // Not used here. We assume all packets have a valid checksum; if not, we let // the end hosts detect errors. apply { /* EMPTY */ } } control IngressPipeImpl (inout parsed_headers_t hdr, inout local_metadata_t local_metadata, inout standard_metadata_t standard_metadata) { // Drop action shared by many tables. action drop() { mark_to_drop(standard_metadata); } // *** L2 BRIDGING // // Here we define tables to forward packets based on their Ethernet // destination address. There are two types of L2 entries that we // need to support: // // 1.
Unicast entries: which will be filled in by the control plane when the // location (port) of new hosts is learned. // 2. Broadcast/multicast entries: used to replicate NDP Neighbor Solicitation // (NS) messages to all host-facing ports; // // For (2), unlike ARP messages in IPv4 which are broadcast to the Ethernet // destination address FF:FF:FF:FF:FF:FF, NDP messages are sent to special // Ethernet addresses specified by RFC2464. These addresses are prefixed // with 33:33 and the last four octets are the last four octets of the IPv6 // destination multicast address. The most straightforward way of matching // on such IPv6 broadcast/multicast packets, without digging into the details // of RFC2464, is to use a ternary match on 33:33:**:**:**:**, where * means // "don't care". // // For this reason, our solution defines two tables. One that matches in an // exact fashion (easier to scale on switch ASIC memory) and one that uses // ternary matching (which requires more expensive TCAM memories, usually // much smaller). // --- l2_exact_table (for unicast entries) -------------------------------- action set_egress_port(port_num_t port_num) { standard_metadata.egress_spec = port_num; } table l2_exact_table { key = { hdr.ethernet.dst_addr: exact; } actions = { set_egress_port; @defaultonly drop; } const default_action = drop; // The @name annotation is used here to provide a name to this table // counter, as it will be needed by the compiler to generate the // corresponding P4Info entity. @name("l2_exact_table_counter") counters = direct_counter(CounterType.packets_and_bytes); } // --- l2_ternary_table (for broadcast/multicast entries) ------------------ action set_multicast_group(mcast_group_id_t gid) { // gid will be used by the Packet Replication Engine (PRE) in the // Traffic Manager--located right after the ingress pipeline, to // replicate a packet to multiple egress ports, specified by the control // plane by means of P4Runtime MulticastGroupEntry messages.
standard_metadata.mcast_grp = gid; local_metadata.is_multicast = true; } table l2_ternary_table { key = { hdr.ethernet.dst_addr: ternary; } actions = { set_multicast_group; @defaultonly drop; } const default_action = drop; @name("l2_ternary_table_counter") counters = direct_counter(CounterType.packets_and_bytes); } // *** TODO EXERCISE 5 (IPV6 ROUTING) // // 1. Create a table to handle NDP messages to resolve the MAC address of // the switch. This table should: // - match on hdr.ndp.target_ipv6_addr (exact match) // - provide action "ndp_ns_to_na" (look in snippets.p4) // - default_action should be "NoAction" // // 2. Create an L2 my station table (hit when the Ethernet destination // address is the switch address). This table should not do anything to // the packet (i.e., NoAction), but the control // block below should use the result (table.hit) to decide how to process // the packet. // // 3. Create a table for IPv6 routing. An action selector should be used to // pick a next hop MAC address according to a hash of packet header // fields (IPv6 source/destination address and the flow label). Look in // snippets.p4 for an example of an action selector and a table using it. // // You can name your tables whatever you like. You will need to fill // the name in elsewhere in this exercise.
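As an aside to the RFC2464 discussion above, the 33:33 Ethernet mapping for NDP multicast destinations is easy to check outside the pipeline. A minimal Python sketch (the helper name `ndp_multicast_mac` is hypothetical and not part of the tutorial code):

```python
import ipaddress

def ndp_multicast_mac(ipv6_dst: str) -> str:
    """Map an IPv6 multicast address to its Ethernet address (RFC 2464)."""
    # The Ethernet destination is 33:33 followed by the last four
    # octets of the IPv6 multicast destination address.
    last4 = ipaddress.IPv6Address(ipv6_dst).packed[-4:]
    return "33:33:" + ":".join("%02x" % b for b in last4)
```

For example, the all-nodes address ff02::1 maps to 33:33:00:00:00:01, which is exactly the `IPV6_MCAST_01` constant defined in this program; any such address falls under the ternary match 33:33:\*\*:\*\*:\*\*:\*\* used by `l2_ternary_table`.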
// --- ndp_reply_table ----------------------------------------------------- action ndp_ns_to_na(mac_addr_t target_mac) { hdr.ethernet.src_addr = target_mac; hdr.ethernet.dst_addr = IPV6_MCAST_01; ipv6_addr_t host_ipv6_tmp = hdr.ipv6.src_addr; hdr.ipv6.src_addr = hdr.ndp.target_ipv6_addr; hdr.ipv6.dst_addr = host_ipv6_tmp; hdr.ipv6.next_hdr = IP_PROTO_ICMPV6; hdr.icmpv6.type = ICMP6_TYPE_NA; hdr.ndp.flags = NDP_FLAG_ROUTER | NDP_FLAG_OVERRIDE; hdr.ndp.type = NDP_OPT_TARGET_LL_ADDR; hdr.ndp.length = 1; hdr.ndp.target_mac_addr = target_mac; standard_metadata.egress_spec = standard_metadata.ingress_port; } table ndp_reply_table { key = { hdr.ndp.target_ipv6_addr: exact; } actions = { ndp_ns_to_na; } @name("ndp_reply_table_counter") counters = direct_counter(CounterType.packets_and_bytes); } // --- my_station_table --------------------------------------------------- table my_station_table { key = { hdr.ethernet.dst_addr: exact; } actions = { NoAction; } @name("my_station_table_counter") counters = direct_counter(CounterType.packets_and_bytes); } // --- routing_v6_table ---------------------------------------------------- action_selector(HashAlgorithm.crc16, 32w1024, 32w16) ecmp_selector; action set_next_hop(mac_addr_t dmac) { hdr.ethernet.src_addr = hdr.ethernet.dst_addr; hdr.ethernet.dst_addr = dmac; // Decrement TTL hdr.ipv6.hop_limit = hdr.ipv6.hop_limit - 1; } table routing_v6_table { key = { hdr.ipv6.dst_addr: lpm; // The following fields are not used for matching, but as input to the // ecmp_selector hash function. 
hdr.ipv6.dst_addr: selector; hdr.ipv6.src_addr: selector; hdr.ipv6.flow_label: selector; // The rest of the 5-tuple is optional per RFC6438 hdr.ipv6.next_hdr: selector; local_metadata.l4_src_port: selector; local_metadata.l4_dst_port: selector; } actions = { set_next_hop; } implementation = ecmp_selector; @name("routing_v6_table_counter") counters = direct_counter(CounterType.packets_and_bytes); } // *** TODO EXERCISE 6 (SRV6) // // Implement tables to provide SRv6 logic. // --- srv6_my_sid ---------------------------------------------------------- // Process the packet if the destination IP is the segment ID (SID) of this // device. This table will decrement the "segment left" field from the SRv6 // header and set the destination IP address to the next segment. action srv6_end() { hdr.srv6h.segment_left = hdr.srv6h.segment_left - 1; hdr.ipv6.dst_addr = local_metadata.next_srv6_sid; } direct_counter(CounterType.packets_and_bytes) srv6_my_sid_table_counter; table srv6_my_sid { key = { hdr.ipv6.dst_addr: lpm; } actions = { srv6_end; } counters = srv6_my_sid_table_counter; } // --- srv6_transit -------------------------------------------------------- // Inserts the SRv6 header into the IPv6 packet based on the // destination IP address. action insert_srv6h_header(bit<8> num_segments) { hdr.srv6h.setValid(); hdr.srv6h.next_hdr = hdr.ipv6.next_hdr; hdr.srv6h.hdr_ext_len = num_segments * 2; hdr.srv6h.routing_type = 4; hdr.srv6h.segment_left = num_segments - 1; hdr.srv6h.last_entry = num_segments - 1; hdr.srv6h.flags = 0; hdr.srv6h.tag = 0; hdr.ipv6.next_hdr = IP_PROTO_SRV6; } /* A single-segment header doesn't make sense given PSP, i.e.
we will pop the SRv6 header when segments_left reaches 0 */ action srv6_t_insert_2(ipv6_addr_t s1, ipv6_addr_t s2) { hdr.ipv6.dst_addr = s1; hdr.ipv6.payload_len = hdr.ipv6.payload_len + 40; insert_srv6h_header(2); hdr.srv6_list[0].setValid(); hdr.srv6_list[0].segment_id = s2; hdr.srv6_list[1].setValid(); hdr.srv6_list[1].segment_id = s1; } action srv6_t_insert_3(ipv6_addr_t s1, ipv6_addr_t s2, ipv6_addr_t s3) { hdr.ipv6.dst_addr = s1; hdr.ipv6.payload_len = hdr.ipv6.payload_len + 56; insert_srv6h_header(3); hdr.srv6_list[0].setValid(); hdr.srv6_list[0].segment_id = s3; hdr.srv6_list[1].setValid(); hdr.srv6_list[1].segment_id = s2; hdr.srv6_list[2].setValid(); hdr.srv6_list[2].segment_id = s1; } direct_counter(CounterType.packets_and_bytes) srv6_transit_table_counter; table srv6_transit { key = { hdr.ipv6.dst_addr: lpm; // TODO: what other fields do we want to match? } actions = { srv6_t_insert_2; srv6_t_insert_3; // Extra credit: set a metadata field, then push label stack in egress } counters = srv6_transit_table_counter; } // Called directly in the apply block. action srv6_pop() { hdr.ipv6.next_hdr = hdr.srv6h.next_hdr; // SRv6 header is 8 bytes // SRv6 list entry is 16 bytes each // (((bit<16>)hdr.srv6h.last_entry + 1) * 16) + 8; bit<16> srv6h_size = (((bit<16>)hdr.srv6h.last_entry + 1) << 4) + 8; hdr.ipv6.payload_len = hdr.ipv6.payload_len - srv6h_size; hdr.srv6h.setInvalid(); // Need to set MAX_HOPS headers invalid hdr.srv6_list[0].setInvalid(); hdr.srv6_list[1].setInvalid(); hdr.srv6_list[2].setInvalid(); } // *** ACL // // Provides ways to override a previous forwarding decision, for example // requiring that a packet is cloned/sent to the CPU, or dropped. // // We use this table to clone all NDP packets to the control plane, so as to // enable host discovery. When the location of a new host is discovered, the // controller is expected to update the L2 and L3 tables with the // corresponding bridging and routing entries.
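As a sanity check on the size arithmetic in the srv6_pop action above, here is a standalone Python sketch (the function name is hypothetical, not part of the tutorial code). The left-shift by 4 is the multiplication by 16 shown in the comment, and the results agree with the payload_len adjustments used by srv6_t_insert_2 (+40) and srv6_t_insert_3 (+56):

```python
def srv6_header_size_bytes(last_entry: int) -> int:
    # Mirrors srv6_pop: the fixed SRv6 header is 8 bytes, plus one
    # 16-byte entry per segment in the list. last_entry is
    # zero-indexed, so the list holds last_entry + 1 entries.
    return ((last_entry + 1) << 4) + 8
```

With 2 segments, insert_srv6h_header sets last_entry = 1, giving 40 bytes; with 3 segments, last_entry = 2 gives 56 bytes.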
action send_to_cpu() { standard_metadata.egress_spec = CPU_PORT; } action clone_to_cpu() { // Cloning is achieved by using a v1model-specific primitive. Here we // set the type of clone operation (ingress-to-egress pipeline), the // clone session ID (the CPU one), and the metadata fields we want to // preserve for the cloned packet replica. clone3(CloneType.I2E, CPU_CLONE_SESSION_ID, { standard_metadata.ingress_port }); } table acl_table { key = { standard_metadata.ingress_port: ternary; hdr.ethernet.dst_addr: ternary; hdr.ethernet.src_addr: ternary; hdr.ethernet.ether_type: ternary; local_metadata.ip_proto: ternary; local_metadata.icmp_type: ternary; local_metadata.l4_src_port: ternary; local_metadata.l4_dst_port: ternary; } actions = { send_to_cpu; clone_to_cpu; drop; } @name("acl_table_counter") counters = direct_counter(CounterType.packets_and_bytes); } apply { if (hdr.cpu_out.isValid()) { // *** TODO EXERCISE 4 // Implement logic such that if this is a packet-out from the // controller: // 1. Set the packet egress port to that found in the cpu_out header // 2. Remove (set invalid) the cpu_out header // 3. Exit the pipeline here (no need to go through other tables) standard_metadata.egress_spec = hdr.cpu_out.egress_port; hdr.cpu_out.setInvalid(); exit; } bool do_l3_l2 = true; if (hdr.icmpv6.isValid() && hdr.icmpv6.type == ICMP6_TYPE_NS) { // *** TODO EXERCISE 5 // Insert logic to handle NDP messages to resolve the MAC address of the // switch. You should apply the NDP reply table created before. // If this is an NDP NS packet, i.e., if a matching entry is found, // unset the "do_l3_l2" flag to skip the L3 and L2 tables, as the // "ndp_ns_to_na" action already set an egress port. if (ndp_reply_table.apply().hit) { do_l3_l2 = false; } } if (do_l3_l2) { // *** TODO EXERCISE 5 // Insert logic to match the My Station table and upon hit, the // routing table. You should also add a conditional to drop the // packet if the hop_limit reaches 0.
// *** TODO EXERCISE 6 // Insert logic to match the SRv6 My SID and Transit tables as well // as logic to perform PSP behavior. HINT: This logic belongs // somewhere between checking the switch's my station table and // applying the routing table. if (hdr.ipv6.isValid() && my_station_table.apply().hit) { if (srv6_my_sid.apply().hit) { // PSP logic -- enabled for all packets if (hdr.srv6h.isValid() && hdr.srv6h.segment_left == 0) { srv6_pop(); } } else { srv6_transit.apply(); } routing_v6_table.apply(); // Check TTL, drop packet if necessary to avoid loops. if(hdr.ipv6.hop_limit == 0) { drop(); } } // L2 bridging logic. Apply the exact table first... if (!l2_exact_table.apply().hit) { // ...if an entry is NOT found, apply the ternary one in case // this is a multicast/broadcast NDP NS packet. l2_ternary_table.apply(); } } // Lastly, apply the ACL table. acl_table.apply(); } } control EgressPipeImpl (inout parsed_headers_t hdr, inout local_metadata_t local_metadata, inout standard_metadata_t standard_metadata) { apply { if (standard_metadata.egress_port == CPU_PORT) { // *** TODO EXERCISE 4 // Implement logic such that if the packet is to be forwarded to the // CPU port, e.g., if in ingress we matched on the ACL table with // action send/clone_to_cpu... // 1. Set cpu_in header as valid // 2. Set the cpu_in.ingress_port field to the original packet's // ingress port (standard_metadata.ingress_port). hdr.cpu_in.setValid(); hdr.cpu_in.ingress_port = standard_metadata.ingress_port; exit; } // If this is a multicast packet (flag set by l2_ternary_table), make // sure we are not replicating the packet on the same port where it was // received. This is useful to avoid broadcasting NDP requests on the // ingress port. 
if (local_metadata.is_multicast == true && standard_metadata.ingress_port == standard_metadata.egress_port) { mark_to_drop(standard_metadata); } } } control ComputeChecksumImpl(inout parsed_headers_t hdr, inout local_metadata_t local_metadata) { apply { // The following is used to update the ICMPv6 checksum of NDP // NA packets generated by the ndp reply table in the ingress pipeline. // This function is executed only if the NDP header is present. update_checksum(hdr.ndp.isValid(), { hdr.ipv6.src_addr, hdr.ipv6.dst_addr, hdr.ipv6.payload_len, 8w0, hdr.ipv6.next_hdr, hdr.icmpv6.type, hdr.icmpv6.code, hdr.ndp.flags, hdr.ndp.target_ipv6_addr, hdr.ndp.type, hdr.ndp.length, hdr.ndp.target_mac_addr }, hdr.icmpv6.checksum, HashAlgorithm.csum16 ); } } control DeparserImpl(packet_out packet, in parsed_headers_t hdr) { apply { packet.emit(hdr.cpu_in); packet.emit(hdr.ethernet); packet.emit(hdr.ipv4); packet.emit(hdr.ipv6); packet.emit(hdr.srv6h); packet.emit(hdr.srv6_list); packet.emit(hdr.tcp); packet.emit(hdr.udp); packet.emit(hdr.icmp); packet.emit(hdr.icmpv6); packet.emit(hdr.ndp); } } V1Switch( ParserImpl(), VerifyChecksumImpl(), IngressPipeImpl(), EgressPipeImpl(), ComputeChecksumImpl(), DeparserImpl() ) main; ================================================ FILE: solution/ptf/tests/bridging.py ================================================ # Copyright 2013-present Barefoot Networks, Inc. # Copyright 2018-present Open Networking Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and
# limitations under the License.
#

# ------------------------------------------------------------------------------
# BRIDGING TESTS
#
# To run all tests in this file:
#     make p4-test TEST=bridging
#
# To run a specific test case:
#     make p4-test TEST=bridging.<TEST CLASS NAME>
#
# For example:
#     make p4-test TEST=bridging.BridgingTest
# ------------------------------------------------------------------------------

# ------------------------------------------------------------------------------
# Modify everywhere you see TODO
#
# When providing your solution, make sure to use the same names for P4Runtime
# entities as specified in your P4Info file.
#
# Test cases are based on the P4 program design suggested in the exercises
# README. Make sure to modify the test cases accordingly if you decide to
# implement the pipeline differently.
# ------------------------------------------------------------------------------

from ptf.testutils import group

from base_test import *

# From the P4 program.
CPU_CLONE_SESSION_ID = 99


@group("bridging")
class ArpNdpRequestWithCloneTest(P4RuntimeTest):
    """Tests ability to broadcast ARP requests and NDP Neighbor Solicitation
    (NS) messages, as well as cloning to CPU (controller) for host discovery.
    """

    def runTest(self):
        # Test with both ARP and NDP NS packets...
        print_inline("ARP request ... ")
        arp_pkt = testutils.simple_arp_packet()
        self.testPacket(arp_pkt)

        print_inline("NDP NS ... ")
        ndp_pkt = genNdpNsPkt(src_mac=HOST1_MAC, src_ip=HOST1_IPV6,
                              target_ip=HOST2_IPV6)
        self.testPacket(ndp_pkt)

    @autocleanup
    def testPacket(self, pkt):
        mcast_group_id = 10
        mcast_ports = [self.port1, self.port2, self.port3]

        # Add multicast group.
        self.insert_pre_multicast_group(
            group_id=mcast_group_id,
            ports=mcast_ports)

        # Match eth dst: FF:FF:FF:FF:FF:FF (MAC broadcast for ARP requests).
        # Modify names to match the content of your P4Info file (look for the
        # fully qualified name of tables, match fields, and actions).
        # ---- START SOLUTION ----
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.l2_ternary_table",
            match_fields={
                # Ternary match.
                "hdr.ethernet.dst_addr": (
                    "FF:FF:FF:FF:FF:FF",
                    "FF:FF:FF:FF:FF:FF")
            },
            action_name="IngressPipeImpl.set_multicast_group",
            action_params={
                "gid": mcast_group_id
            },
            priority=DEFAULT_PRIORITY
        ))
        # ---- END SOLUTION ----

        # Match eth dst: 33:33:**:**:**:** (IPv6 multicast for NDP requests).
        # Modify names to match the content of your P4Info file (look for the
        # fully qualified name of tables, match fields, and actions).
        # ---- START SOLUTION ----
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.l2_ternary_table",
            match_fields={
                # Ternary match (value, mask)
                "hdr.ethernet.dst_addr": (
                    "33:33:00:00:00:00",
                    "FF:FF:00:00:00:00")
            },
            action_name="IngressPipeImpl.set_multicast_group",
            action_params={
                "gid": mcast_group_id
            },
            priority=DEFAULT_PRIORITY
        ))
        # ---- END SOLUTION ----

        # Insert CPU clone session.
        self.insert_pre_clone_session(
            session_id=CPU_CLONE_SESSION_ID,
            ports=[self.cpu_port])

        # ACL entry to clone ARPs.
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.acl_table",
            match_fields={
                # Ternary match.
                "hdr.ethernet.ether_type": (ARP_ETH_TYPE, 0xffff)
            },
            action_name="IngressPipeImpl.clone_to_cpu",
            priority=DEFAULT_PRIORITY
        ))

        # ACL entry to clone NDP Neighbor Solicitation messages.
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.acl_table",
            match_fields={
                # Ternary match.
                "hdr.ethernet.ether_type": (IPV6_ETH_TYPE, 0xffff),
                "local_metadata.ip_proto": (ICMPV6_IP_PROTO, 0xff),
                "local_metadata.icmp_type": (NS_ICMPV6_TYPE, 0xff)
            },
            action_name="IngressPipeImpl.clone_to_cpu",
            priority=DEFAULT_PRIORITY
        ))

        for inport in mcast_ports:
            # Send packet...
            testutils.send_packet(self, inport, str(pkt))

            # Pkt should be received on CPU via PacketIn...
            # Expected P4Runtime PacketIn message.
            exp_packet_in_msg = self.helper.build_packet_in(
                payload=str(pkt),
                metadata={
                    "ingress_port": inport,
                    "_pad": 0
                })
            self.verify_packet_in(exp_packet_in_msg)

            # ...and on all ports except the ingress one.
            verify_ports = set(mcast_ports)
            verify_ports.discard(inport)
            for port in verify_ports:
                testutils.verify_packet(self, pkt, port)

            testutils.verify_no_other_packets(self)


@group("bridging")
class ArpNdpReplyWithCloneTest(P4RuntimeTest):
    """Tests ability to clone ARP replies and NDP Neighbor Advertisement (NA)
    messages, as well as unicast forwarding to the requesting host.
    """

    def runTest(self):
        # Test with both ARP reply and NDP NA packets...
        print_inline("ARP reply ... ")
        # op=1 request, op=2 reply
        arp_pkt = testutils.simple_arp_packet(
            eth_src=HOST1_MAC, eth_dst=HOST2_MAC, arp_op=2)
        self.testPacket(arp_pkt)

        print_inline("NDP NA ... ")
        ndp_pkt = genNdpNaPkt(target_ip=HOST1_IPV6, target_mac=HOST1_MAC)
        self.testPacket(ndp_pkt)

    @autocleanup
    def testPacket(self, pkt):
        # L2 unicast entry, matching on the pkt's eth dst address.
        # Modify names to match the content of your P4Info file (look for the
        # fully qualified name of tables, match fields, and actions).
        # ---- START SOLUTION ----
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.l2_exact_table",
            match_fields={
                # Exact match.
                "hdr.ethernet.dst_addr": pkt[Ether].dst
            },
            action_name="IngressPipeImpl.set_egress_port",
            action_params={
                "port_num": self.port2
            }
        ))
        # ---- END SOLUTION ----

        # CPU clone session.
        self.insert_pre_clone_session(
            session_id=CPU_CLONE_SESSION_ID,
            ports=[self.cpu_port])

        # ACL entry to clone ARPs.
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.acl_table",
            match_fields={
                # Ternary match.
                "hdr.ethernet.ether_type": (ARP_ETH_TYPE, 0xffff)
            },
            action_name="IngressPipeImpl.clone_to_cpu",
            priority=DEFAULT_PRIORITY
        ))

        # ACL entry to clone NDP Neighbor Advertisement messages.
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.acl_table",
            match_fields={
                # Ternary match.
                "hdr.ethernet.ether_type": (IPV6_ETH_TYPE, 0xffff),
                "local_metadata.ip_proto": (ICMPV6_IP_PROTO, 0xff),
                "local_metadata.icmp_type": (NA_ICMPV6_TYPE, 0xff)
            },
            action_name="IngressPipeImpl.clone_to_cpu",
            priority=DEFAULT_PRIORITY
        ))

        testutils.send_packet(self, self.port1, str(pkt))

        # Pkt should be received on CPU via PacketIn...
        # Expected P4Runtime PacketIn message.
        exp_packet_in_msg = self.helper.build_packet_in(
            payload=str(pkt),
            metadata={
                "ingress_port": self.port1,
                "_pad": 0
            })
        self.verify_packet_in(exp_packet_in_msg)

        # ...and on port2 as indicated by the L2 unicast rule.
        testutils.verify_packet(self, pkt, self.port2)


@group("bridging")
class BridgingTest(P4RuntimeTest):
    """Tests basic L2 unicast forwarding"""

    def runTest(self):
        # Test with different types of packets.
        for pkt_type in ["tcp", "udp", "icmp", "tcpv6", "udpv6", "icmpv6"]:
            print_inline("%s ... " % pkt_type)
            pkt = getattr(testutils, "simple_%s_packet" % pkt_type)(pktlen=120)
            self.testPacket(pkt)

    @autocleanup
    def testPacket(self, pkt):
        # Insert L2 unicast entry, matching on the pkt's eth dst address.
        # Modify names to match the content of your P4Info file (look for the
        # fully qualified name of tables, match fields, and actions).
        # ---- START SOLUTION ----
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.l2_exact_table",
            match_fields={
                # Exact match.
                "hdr.ethernet.dst_addr": pkt[Ether].dst
            },
            action_name="IngressPipeImpl.set_egress_port",
            action_params={
                "port_num": self.port2
            }
        ))
        # ---- END SOLUTION ----

        # Test bidirectional forwarding by swapping MAC addresses on the pkt.
        pkt2 = pkt_mac_swap(pkt.copy())

        # Insert L2 unicast entry for pkt2.
        # Modify names to match the content of your P4Info file (look for the
        # fully qualified name of tables, match fields, and actions).
        # ---- START SOLUTION ----
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.l2_exact_table",
            match_fields={
                # Exact match.
                "hdr.ethernet.dst_addr": pkt2[Ether].dst
            },
            action_name="IngressPipeImpl.set_egress_port",
            action_params={
                "port_num": self.port1
            }
        ))
        # ---- END SOLUTION ----

        # Send and verify.
        testutils.send_packet(self, self.port1, str(pkt))
        testutils.send_packet(self, self.port2, str(pkt2))
        testutils.verify_each_packet_on_each_port(
            self, [pkt, pkt2], [self.port2, self.port1])

================================================
FILE: solution/ptf/tests/packetio.py
================================================
# Copyright 2019-present Open Networking Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# ------------------------------------------------------------------------------
# CONTROLLER PACKET-IN/OUT TESTS
#
# To run all tests in this file:
#     make p4-test TEST=packetio
#
# To run a specific test case:
#     make p4-test TEST=packetio.<TEST CLASS NAME>
#
# For example:
#     make p4-test TEST=packetio.PacketOutTest
# ------------------------------------------------------------------------------

# ------------------------------------------------------------------------------
# Modify everywhere you see TODO
#
# When providing your solution, make sure to use the same names for P4Runtime
# entities as specified in your P4Info file.
#
# Test cases are based on the P4 program design suggested in the exercises
# README. Make sure to modify the test cases accordingly if you decide to
# implement the pipeline differently.
# ------------------------------------------------------------------------------

from ptf.testutils import group

from base_test import *

CPU_CLONE_SESSION_ID = 99


@group("packetio")
class PacketOutTest(P4RuntimeTest):
    """Tests controller packet-out capability by sending PacketOut messages and
    expecting a corresponding packet on the output port set in the PacketOut
    metadata.
    """

    def runTest(self):
        for pkt_type in ["tcp", "udp", "icmp", "arp", "tcpv6", "udpv6",
                         "icmpv6"]:
            print_inline("%s ... " % pkt_type)
            pkt = getattr(testutils, "simple_%s_packet" % pkt_type)()
            self.testPacket(pkt)

    def testPacket(self, pkt):
        for outport in [self.port1, self.port2]:
            # Build PacketOut message.
            # TODO EXERCISE 4
            # Modify metadata names to match the content of your P4Info file.
            # ---- START SOLUTION ----
            packet_out_msg = self.helper.build_packet_out(
                payload=str(pkt),
                metadata={
                    "egress_port": outport,
                    "_pad": 0
                })
            # ---- END SOLUTION ----

            # Send message and expect packet on the given data plane port.
            self.send_packet_out(packet_out_msg)
            testutils.verify_packet(self, pkt, outport)

        # Make sure the packet was forwarded only on the specified ports.
        testutils.verify_no_other_packets(self)


@group("packetio")
class PacketInTest(P4RuntimeTest):
    """Tests controller packet-in capability by matching on the packet
    EtherType and cloning to the CPU port.
    """

    def runTest(self):
        for pkt_type in ["tcp", "udp", "icmp", "arp", "tcpv6", "udpv6",
                         "icmpv6"]:
            print_inline("%s ... " % pkt_type)
            pkt = getattr(testutils, "simple_%s_packet" % pkt_type)()
            self.testPacket(pkt)

    @autocleanup
    def testPacket(self, pkt):
        # Insert clone-to-CPU session.
        self.insert_pre_clone_session(
            session_id=CPU_CLONE_SESSION_ID,
            ports=[self.cpu_port])

        # Insert ACL entry to match on the given eth_type and clone to CPU.
        eth_type = pkt[Ether].type
        # TODO EXERCISE 4
        # Modify names to match the content of your P4Info file (look for the
        # fully qualified name of the ACL table, EtherType match field, and
        # clone_to_cpu action).
        # ---- START SOLUTION ----
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.acl_table",
            match_fields={
                # Ternary match.
                "hdr.ethernet.ether_type": (eth_type, 0xffff)
            },
            action_name="IngressPipeImpl.clone_to_cpu",
            priority=DEFAULT_PRIORITY
        ))
        # ---- END SOLUTION ----

        for inport in [self.port1, self.port2, self.port3]:
            # TODO EXERCISE 4
            # Modify metadata names to match the content of your P4Info file.
            # ---- START SOLUTION ----
            # Expected P4Runtime PacketIn message.
            exp_packet_in_msg = self.helper.build_packet_in(
                payload=str(pkt),
                metadata={
                    "ingress_port": inport,
                    "_pad": 0
                })
            # ---- END SOLUTION ----

            # Send packet to the given switch ingress port and expect a
            # P4Runtime PacketIn message.
            testutils.send_packet(self, inport, str(pkt))
            self.verify_packet_in(exp_packet_in_msg)

================================================
FILE: solution/ptf/tests/routing.py
================================================
# Copyright 2019-present Open Networking Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# ------------------------------------------------------------------------------
# IPV6 ROUTING TESTS
#
# To run all tests:
#     make p4-test TEST=routing
#
# To run a specific test case:
#     make p4-test TEST=routing.<TEST CLASS NAME>
#
# For example:
#     make p4-test TEST=routing.IPv6RoutingTest
# ------------------------------------------------------------------------------

# ------------------------------------------------------------------------------
# Modify everywhere you see TODO
#
# When providing your solution, make sure to use the same names for P4Runtime
# entities as specified in your P4Info file.
#
# Test cases are based on the P4 program design suggested in the exercises
# README. Make sure to modify the test cases accordingly if you decide to
# implement the pipeline differently.
# ------------------------------------------------------------------------------

from ptf.testutils import group

from base_test import *


@group("routing")
class IPv6RoutingTest(P4RuntimeTest):
    """Tests basic IPv6 routing"""

    def runTest(self):
        # Test with different types of packets.
        for pkt_type in ["tcpv6", "udpv6", "icmpv6"]:
            print_inline("%s ... " % pkt_type)
            pkt = getattr(testutils, "simple_%s_packet" % pkt_type)()
            self.testPacket(pkt)

    @autocleanup
    def testPacket(self, pkt):
        next_hop_mac = SWITCH2_MAC

        # Add entry to "My Station" table. Consider the given pkt's eth dst
        # addr as the myStationMac address.
        # *** TODO EXERCISE 5
        # Modify names to match the content of your P4Info file (look for the
        # fully qualified name of tables, match fields, and actions).
        # ---- START SOLUTION ----
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.my_station_table",
            match_fields={
                # Exact match.
                "hdr.ethernet.dst_addr": pkt[Ether].dst
            },
            action_name="NoAction"
        ))
        # ---- END SOLUTION ----

        # Insert ECMP group with only one member (next_hop_mac).
        # *** TODO EXERCISE 5
        # Modify names to match the content of your P4Info file (look for the
        # fully qualified name of tables, match fields, and actions).
        # ---- START SOLUTION ----
        self.insert(self.helper.build_act_prof_group(
            act_prof_name="IngressPipeImpl.ecmp_selector",
            group_id=1,
            actions=[
                # List of tuples (action name, action param dict)
                ("IngressPipeImpl.set_next_hop", {"dmac": next_hop_mac}),
            ]
        ))
        # ---- END SOLUTION ----

        # Insert L3 entry to map the pkt's IPv6 dst addr to the group.
        # *** TODO EXERCISE 5
        # Modify names to match the content of your P4Info file (look for the
        # fully qualified name of tables, match fields, and actions).
        # ---- START SOLUTION ----
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.routing_v6_table",
            match_fields={
                # LPM match (value, prefix)
                "hdr.ipv6.dst_addr": (pkt[IPv6].dst, 128)
            },
            group_id=1
        ))
        # ---- END SOLUTION ----

        # Insert L2 entry to map next_hop_mac to output port 2.
        # *** TODO EXERCISE 5
        # Modify names to match the content of your P4Info file (look for the
        # fully qualified name of tables, match fields, and actions).
        # ---- START SOLUTION ----
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.l2_exact_table",
            match_fields={
                # Exact match
                "hdr.ethernet.dst_addr": next_hop_mac
            },
            action_name="IngressPipeImpl.set_egress_port",
            action_params={
                "port_num": self.port2
            }
        ))
        # ---- END SOLUTION ----

        # Expected pkt should have routed MAC addresses and decremented hop
        # limit (TTL).
        exp_pkt = pkt.copy()
        pkt_route(exp_pkt, next_hop_mac)
        pkt_decrement_ttl(exp_pkt)

        testutils.send_packet(self, self.port1, str(pkt))
        testutils.verify_packet(self, exp_pkt, self.port2)


@group("routing")
class NdpReplyGenTest(P4RuntimeTest):
    """Tests automatic generation of NDP Neighbor Advertisement messages for
    IPv6 addresses associated with the switch interface.
    """

    @autocleanup
    def runTest(self):
        switch_ip = SWITCH1_IPV6
        target_mac = SWITCH1_MAC

        # Insert entry to transform NDP NS packets for the given target address
        # (match) into NDP NA packets with the given target MAC address
        # (action).
        # *** TODO EXERCISE 5
        # Modify names to match the content of your P4Info file (look for the
        # fully qualified name of tables, match fields, and actions).
        # ---- START SOLUTION ----
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.ndp_reply_table",
            match_fields={
                # Exact match.
                "hdr.ndp.target_ipv6_addr": switch_ip
            },
            action_name="IngressPipeImpl.ndp_ns_to_na",
            action_params={
                "target_mac": target_mac
            }
        ))
        # ---- END SOLUTION ----

        # NDP Neighbor Solicitation packet
        pkt = genNdpNsPkt(target_ip=switch_ip)

        # NDP Neighbor Advertisement packet
        exp_pkt = genNdpNaPkt(target_ip=switch_ip,
                              target_mac=target_mac,
                              src_mac=target_mac,
                              src_ip=switch_ip,
                              dst_ip=pkt[IPv6].src)

        # Send NDP NS, expect NDP NA from the same port.
        testutils.send_packet(self, self.port1, str(pkt))
        testutils.verify_packet(self, exp_pkt, self.port1)

================================================
FILE: solution/ptf/tests/srv6.py
================================================
# Copyright 2019-present Open Networking Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# ------------------------------------------------------------------------------
# SRV6 TESTS
#
# To run all tests:
#     make p4-test TEST=srv6
#
# To run a specific test case:
#     make p4-test TEST=srv6.<TEST CLASS NAME>
#
# For example:
#     make p4-test TEST=srv6.Srv6InsertTest
# ------------------------------------------------------------------------------

# ------------------------------------------------------------------------------
# Modify everywhere you see TODO
#
# When providing your solution, make sure to use the same names for P4Runtime
# entities as specified in your P4Info file.
#
# Test cases are based on the P4 program design suggested in the exercises
# README. Make sure to modify the test cases accordingly if you decide to
# implement the pipeline differently.
# ------------------------------------------------------------------------------

from ptf.testutils import group

from base_test import *


def insert_srv6_header(pkt, sid_list):
    """Applies the SRv6 insert transformation to the given packet.
    """
    # Set IPv6 dst to the first SID...
    pkt[IPv6].dst = sid_list[0]
    # Insert SRv6 header between the IPv6 header and the payload.
    sid_len = len(sid_list)
    srv6_hdr = IPv6ExtHdrSegmentRouting(
        nh=pkt[IPv6].nh,
        addresses=sid_list[::-1],
        len=sid_len * 2,
        segleft=sid_len - 1,
        lastentry=sid_len - 1)
    pkt[IPv6].nh = 43  # next IPv6 header is the SR header
    pkt[IPv6].payload = srv6_hdr / pkt[IPv6].payload
    return pkt


def pop_srv6_header(pkt):
    """Removes the SRv6 header from the given packet.
    """
    pkt[IPv6].nh = pkt[IPv6ExtHdrSegmentRouting].nh
    pkt[IPv6].payload = pkt[IPv6ExtHdrSegmentRouting].payload


def set_cksum(pkt, cksum):
    if TCP in pkt:
        pkt[TCP].chksum = cksum
    if UDP in pkt:
        pkt[UDP].chksum = cksum
    if ICMPv6Unknown in pkt:
        pkt[ICMPv6Unknown].cksum = cksum


@group("srv6")
class Srv6InsertTest(P4RuntimeTest):
    """Tests SRv6 insert behavior, where the switch receives an IPv6 packet
    and inserts the SRv6 header.
    """

    def runTest(self):
        sid_lists = (
            [SWITCH2_IPV6, SWITCH3_IPV6, HOST2_IPV6],
            [SWITCH2_IPV6, HOST2_IPV6],
        )
        next_hop_mac = SWITCH2_MAC
        for sid_list in sid_lists:
            for pkt_type in ["tcpv6", "udpv6", "icmpv6"]:
                print_inline("%s %d SIDs ... " % (pkt_type, len(sid_list)))
                pkt = getattr(testutils, "simple_%s_packet" % pkt_type)()
                self.testPacket(pkt, sid_list, next_hop_mac)

    @autocleanup
    def testPacket(self, pkt, sid_list, next_hop_mac):
        # *** TODO EXERCISE 6
        # Modify names to match the content of your P4Info file (look for the
        # fully qualified name of tables, match fields, and actions).
        # ---- START SOLUTION ----
        # Add entry to "My Station" table. Consider the given pkt's eth dst
        # addr as the myStationMac address.
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.my_station_table",
            match_fields={
                # Exact match.
                "hdr.ethernet.dst_addr": pkt[Ether].dst
            },
            action_name="NoAction"
        ))

        # Insert SRv6 header when matching the pkt's IPv6 dst addr.
        # Action name and params are generated based on the number of SIDs
        # given. For example, with 2 SIDs:
        #     action_name = IngressPipeImpl.srv6_t_insert_2
        #     action_params = {
        #         "s1": sid[0],
        #         "s2": sid[1]
        #     }
        sid_len = len(sid_list)
        action_name = "IngressPipeImpl.srv6_t_insert_%d" % sid_len
        actions_params = {"s%d" % (x + 1): sid_list[x] for x in range(sid_len)}
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.srv6_transit",
            match_fields={
                # LPM match (value, prefix)
                "hdr.ipv6.dst_addr": (pkt[IPv6].dst, 128)
            },
            action_name=action_name,
            action_params=actions_params
        ))

        # Insert ECMP group with only one member (next_hop_mac).
        self.insert(self.helper.build_act_prof_group(
            act_prof_name="IngressPipeImpl.ecmp_selector",
            group_id=1,
            actions=[
                # List of tuples (action name, {action param: value})
                ("IngressPipeImpl.set_next_hop", {"dmac": next_hop_mac}),
            ]
        ))

        # Now that we inserted the SRv6 header, we expect the pkt's IPv6 dst
        # addr to be the first on the SID list.
        # Match on L3 routing table.
        first_sid = sid_list[0]
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.routing_v6_table",
            match_fields={
                # LPM match (value, prefix)
                "hdr.ipv6.dst_addr": (first_sid, 128)
            },
            group_id=1
        ))

        # Map next_hop_mac to output port.
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.l2_exact_table",
            match_fields={
                # Exact match.
                "hdr.ethernet.dst_addr": next_hop_mac
            },
            action_name="IngressPipeImpl.set_egress_port",
            action_params={
                "port_num": self.port2
            }
        ))
        # ---- END SOLUTION ----

        # Build expected packet from the given one...
        exp_pkt = insert_srv6_header(pkt.copy(), sid_list)
        # Route and decrement TTL
        pkt_route(exp_pkt, next_hop_mac)
        pkt_decrement_ttl(exp_pkt)

        # Bonus: update P4 program to calculate correct checksum
        set_cksum(pkt, 1)
        set_cksum(exp_pkt, 1)

        testutils.send_packet(self, self.port1, str(pkt))
        testutils.verify_packet(self, exp_pkt, self.port2)


@group("srv6")
class Srv6TransitTest(P4RuntimeTest):
    """Tests SRv6 transit behavior, where the switch ignores the SRv6 header
    and routes the packet normally, without applying any SRv6-related
    modifications.
    """

    def runTest(self):
        my_sid = SWITCH1_IPV6
        sid_lists = (
            [SWITCH2_IPV6, SWITCH3_IPV6, HOST2_IPV6],
            [SWITCH2_IPV6, HOST2_IPV6],
        )
        next_hop_mac = SWITCH2_MAC
        for sid_list in sid_lists:
            for pkt_type in ["tcpv6", "udpv6", "icmpv6"]:
                print_inline("%s %d SIDs ... " % (pkt_type, len(sid_list)))
                pkt = getattr(testutils, "simple_%s_packet" % pkt_type)()
                pkt = insert_srv6_header(pkt, sid_list)
                self.testPacket(pkt, next_hop_mac, my_sid)

    @autocleanup
    def testPacket(self, pkt, next_hop_mac, my_sid):
        # *** TODO EXERCISE 6
        # Modify names to match the content of your P4Info file (look for the
        # fully qualified name of tables, match fields, and actions).
        # ---- START SOLUTION ----
        # Add entry to "My Station" table. Consider the given pkt's eth dst
        # addr as the myStationMac address.
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.my_station_table",
            match_fields={
                # Exact match.
                "hdr.ethernet.dst_addr": pkt[Ether].dst
            },
            action_name="NoAction"
        ))

        # This entry should not be matched: this is plain IPv6 routing.
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.srv6_my_sid",
            match_fields={
                # Longest prefix match (value, prefix length)
                "hdr.ipv6.dst_addr": (my_sid, 128)
            },
            action_name="IngressPipeImpl.srv6_end"
        ))

        # Insert ECMP group with only one member (next_hop_mac).
        self.insert(self.helper.build_act_prof_group(
            act_prof_name="IngressPipeImpl.ecmp_selector",
            group_id=1,
            actions=[
                # List of tuples (action name, {action param: value})
                ("IngressPipeImpl.set_next_hop", {"dmac": next_hop_mac}),
            ]
        ))

        # Map pkt's IPv6 dst addr to group.
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.routing_v6_table",
            match_fields={
                # LPM match (value, prefix)
                "hdr.ipv6.dst_addr": (pkt[IPv6].dst, 128)
            },
            group_id=1
        ))

        # Map next_hop_mac to output port.
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.l2_exact_table",
            match_fields={
                # Exact match.
                "hdr.ethernet.dst_addr": next_hop_mac
            },
            action_name="IngressPipeImpl.set_egress_port",
            action_params={
                "port_num": self.port2
            }
        ))
        # ---- END SOLUTION ----

        # Build expected packet from the given one...
        exp_pkt = pkt.copy()
        # Route and decrement TTL
        pkt_route(exp_pkt, next_hop_mac)
        pkt_decrement_ttl(exp_pkt)

        # Bonus: update P4 program to calculate correct checksum
        set_cksum(pkt, 1)
        set_cksum(exp_pkt, 1)

        testutils.send_packet(self, self.port1, str(pkt))
        testutils.verify_packet(self, exp_pkt, self.port2)


@group("srv6")
class Srv6EndTest(P4RuntimeTest):
    """Tests SRv6 end behavior (without pop), where the switch forwards the
    packet to the next SID found in the SRv6 header.
    """

    def runTest(self):
        my_sid = SWITCH2_IPV6
        sid_lists = (
            [SWITCH2_IPV6, SWITCH3_IPV6, HOST2_IPV6],
            [SWITCH2_IPV6, SWITCH3_IPV6, SWITCH4_IPV6, HOST2_IPV6],
        )
        next_hop_mac = SWITCH3_MAC
        for sid_list in sid_lists:
            for pkt_type in ["tcpv6", "udpv6", "icmpv6"]:
                print_inline("%s %d SIDs ... " % (pkt_type, len(sid_list)))
                pkt = getattr(testutils, "simple_%s_packet" % pkt_type)()
                pkt = insert_srv6_header(pkt, sid_list)
                self.testPacket(pkt, sid_list, next_hop_mac, my_sid)

    @autocleanup
    def testPacket(self, pkt, sid_list, next_hop_mac, my_sid):
        # *** TODO EXERCISE 6
        # Modify names to match the content of your P4Info file (look for the
        # fully qualified name of tables, match fields, and actions).
        # ---- START SOLUTION ----
        # Add entry to "My Station" table. Consider the given pkt's eth dst
        # addr as the myStationMac address.
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.my_station_table",
            match_fields={
                # Exact match.
                "hdr.ethernet.dst_addr": pkt[Ether].dst
            },
            action_name="NoAction"
        ))

        # This entry should be matched: we want the SRv6 end behavior to be
        # applied.
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.srv6_my_sid",
            match_fields={
                # Longest prefix match (value, prefix length)
                "hdr.ipv6.dst_addr": (my_sid, 128)
            },
            action_name="IngressPipeImpl.srv6_end"
        ))

        # Insert ECMP group with only one member (next_hop_mac).
        self.insert(self.helper.build_act_prof_group(
            act_prof_name="IngressPipeImpl.ecmp_selector",
            group_id=1,
            actions=[
                # List of tuples (action name, {action param: value})
                ("IngressPipeImpl.set_next_hop", {"dmac": next_hop_mac}),
            ]
        ))

        # After applying the srv6_end action, we expect the IPv6 dst to be the
        # next SID in the list, and we should route based on that.
        next_sid = sid_list[1]
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.routing_v6_table",
            match_fields={
                # LPM match (value, prefix)
                "hdr.ipv6.dst_addr": (next_sid, 128)
            },
            group_id=1
        ))

        # Map next_hop_mac to output port.
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.l2_exact_table",
            match_fields={
                # Exact match.
                "hdr.ethernet.dst_addr": next_hop_mac
            },
            action_name="IngressPipeImpl.set_egress_port",
            action_params={
                "port_num": self.port2
            }
        ))
        # ---- END SOLUTION ----

        # Build expected packet from the given one...
        exp_pkt = pkt.copy()
        # Set IPv6 dst to next SID and decrement segleft...
        exp_pkt[IPv6].dst = next_sid
        exp_pkt[IPv6ExtHdrSegmentRouting].segleft -= 1
        # Route and decrement TTL...
        pkt_route(exp_pkt, next_hop_mac)
        pkt_decrement_ttl(exp_pkt)

        # Bonus: update P4 program to calculate correct checksum
        set_cksum(pkt, 1)
        set_cksum(exp_pkt, 1)

        testutils.send_packet(self, self.port1, str(pkt))
        testutils.verify_packet(self, exp_pkt, self.port2)


@group("srv6")
class Srv6EndPspTest(P4RuntimeTest):
    """Tests SRv6 End with Penultimate Segment Pop (PSP) behavior, where the
    switch SID is the penultimate in the SID list and the switch removes the
    SRv6 header before routing the packet to its final destination (last SID
    in the list).
    """

    def runTest(self):
        my_sid = SWITCH3_IPV6
        sid_lists = (
            [SWITCH3_IPV6, HOST2_IPV6],
        )
        next_hop_mac = HOST2_MAC
        for sid_list in sid_lists:
            for pkt_type in ["tcpv6", "udpv6", "icmpv6"]:
                print_inline("%s %d SIDs ... " % (pkt_type, len(sid_list)))
                pkt = getattr(testutils, "simple_%s_packet" % pkt_type)()
                pkt = insert_srv6_header(pkt, sid_list)
                self.testPacket(pkt, sid_list, next_hop_mac, my_sid)

    @autocleanup
    def testPacket(self, pkt, sid_list, next_hop_mac, my_sid):
        # *** TODO EXERCISE 6
        # Modify names to match the content of your P4Info file (look for the
        # fully qualified name of tables, match fields, and actions).
        # ---- START SOLUTION ----
        # Add entry to "My Station" table. Consider the given pkt's eth dst
        # addr as the myStationMac address.
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.my_station_table",
            match_fields={
                # Exact match.
                "hdr.ethernet.dst_addr": pkt[Ether].dst
            },
            action_name="NoAction"
        ))

        # This entry should be matched: we want the SRv6 end behavior to be
        # applied.
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.srv6_my_sid",
            match_fields={
                # Longest prefix match (value, prefix length)
                "hdr.ipv6.dst_addr": (my_sid, 128)
            },
            action_name="IngressPipeImpl.srv6_end"
        ))

        # Insert ECMP group with only one member (next_hop_mac).
        self.insert(self.helper.build_act_prof_group(
            act_prof_name="IngressPipeImpl.ecmp_selector",
            group_id=1,
            actions=[
                # List of tuples (action name, {action param: value})
                ("IngressPipeImpl.set_next_hop", {"dmac": next_hop_mac}),
            ]
        ))

        # Map pkt's IPv6 dst addr to group.
        next_sid = sid_list[1]
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.routing_v6_table",
            match_fields={
                # LPM match (value, prefix)
                "hdr.ipv6.dst_addr": (next_sid, 128)
            },
            group_id=1
        ))

        # Map next_hop_mac to output port.
        self.insert(self.helper.build_table_entry(
            table_name="IngressPipeImpl.l2_exact_table",
            match_fields={
                # Exact match.
                "hdr.ethernet.dst_addr": next_hop_mac
            },
            action_name="IngressPipeImpl.set_egress_port",
            action_params={
                "port_num": self.port2
            }
        ))
        # ---- END SOLUTION ----

        # Build expected packet from the given one...
        exp_pkt = pkt.copy()
        # Expect IPv6 dst to be the next SID...
        exp_pkt[IPv6].dst = next_sid
        # Remove SRv6 header since we are performing PSP.
        pop_srv6_header(exp_pkt)
        # Route and decrement TTL
        pkt_route(exp_pkt, next_hop_mac)
        pkt_decrement_ttl(exp_pkt)

        # Bonus: update P4 program to calculate correct checksum
        set_cksum(pkt, 1)
        set_cksum(exp_pkt, 1)

        testutils.send_packet(self, self.port1, str(pkt))
        testutils.verify_packet(self, exp_pkt, self.port2)

================================================
FILE: util/docker/Makefile
================================================
include Makefile.vars

build: build-stratum_bmv2 build-mvn

push: push-stratum_bmv2 push-mvn

build-stratum_bmv2:
	cd stratum_bmv2 && docker build -t ${STRATUM_BMV2_IMG} .

build-mvn:
	cd ../../app && docker build --squash -f ../util/docker/mvn/Dockerfile \
		-t ${MVN_IMG} .
push-stratum_bmv2:
	# Remember to update Makefile.vars with the new image sha
	docker push ${STRATUM_BMV2_IMG}

push-mvn:
	# Remember to update Makefile.vars with the new image sha
	docker push ${MVN_IMG}

================================================
FILE: util/docker/Makefile.vars
================================================
ONOS_IMG := onosproject/onos:2.2.2
P4RT_SH_IMG := p4lang/p4runtime-sh:latest
P4C_IMG := opennetworking/p4c:stable
STRATUM_BMV2_IMG := opennetworking/ngsdn-tutorial:stratum_bmv2
MVN_IMG := opennetworking/ngsdn-tutorial:mvn
GNMI_CLI_IMG := bocon/gnmi-cli:latest
YANG_IMG := bocon/yang-tools:latest
SSHPASS_IMG := ictu/sshpass

ONOS_SHA := sha256:438815ab20300cd7a31702b7dea635152c4c4b5b2fed9b14970bd2939a139d2a
P4RT_SH_SHA := sha256:6ae50afb5bde620acb9473ce6cd7b990ff6cc63fe4113cf5584c8e38fe42176c
P4C_SHA := sha256:8f9d27a6edf446c3801db621359fec5de993ebdebc6844d8b1292e369be5dfea
STRATUM_BMV2_SHA := sha256:f31faa5e83abbb2d9cf39d28b3578f6e113225641337ec7d16d867b0667524ef
MVN_SHA := sha256:d85eb93ac909a90f49b16b33cb872620f9b4f640e7a6451859aec704b21f9243
GNMI_CLI_SHA := sha256:6f1590c35e71c07406539d0e1e288e87e1e520ef58de25293441c3b9c81dffc0
YANG_SHA := sha256:feb2dc322af113fc52f17b5735454abfbe017972c867e522ba53ea44e8386fd2
SSHPASS_SHA := sha256:6e3d0d7564b259ef9612843d220cc390e52aab28b0ff9adaec800c72a051f41c

================================================
FILE: util/docker/mvn/Dockerfile
================================================
# Copyright 2019-present Open Networking Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Docker image to build the ONOS app.
# Provides a pre-populated maven repo cache to allow offline builds.

FROM maven:3.6.1-jdk-11-slim

COPY . /mvn-src
WORKDIR /mvn-src
RUN mvn clean package && rm -rf ./*

================================================
FILE: util/docker/stratum_bmv2/Dockerfile
================================================
# Copyright 2019-present Open Networking Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Docker image that extends opennetworking/mn-stratum with other dependencies
# required by this tutorial. opennetworking/mn-stratum is the official image
# from the Stratum project which contains stratum_bmv2 and the Mininet
# libraries. We extend that with PTF, scapy, etc.
ARG MN_STRATUM_SHA="sha256:1bba2e2c06460c73b0133ae22829937786217e5f20f8f80fcc3063dcf6707ebe"

FROM bitnami/minideb:stretch as builder

ENV BUILD_DEPS \
    python-pip \
    python-setuptools \
    git
RUN install_packages $BUILD_DEPS

RUN mkdir -p /output

ENV PIP_DEPS \
    scapy==2.4.3 \
    git+https://github.com/p4lang/ptf.git \
    googleapis-common-protos==1.6.0 \
    ipaddress
RUN pip install --no-cache-dir --root /output $PIP_DEPS

FROM opennetworking/mn-stratum:latest@$MN_STRATUM_SHA as runtime

ENV RUNTIME_DEPS \
    make
RUN install_packages $RUNTIME_DEPS

COPY --from=builder /output /

ENV DOCKER_RUN true

ENTRYPOINT []

================================================
FILE: util/gnmi-cli
================================================
#!/bin/bash
docker run --rm -it --network host bocon/gnmi-cli:latest $@

================================================
FILE: util/mn-cmd
================================================
#!/bin/bash
if [ -z $1 ]; then
    echo "usage: $0 host cmd [args...]"
    exit 1
fi
docker exec -it mininet /mininet/host-cmd $@

================================================
FILE: util/mn-pcap
================================================
#!/bin/bash
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"

if [ -z $1 ]; then
    echo "usage: $0 host"
    exit 1
fi

iface=$1-eth0
file=${iface}.pcap

set -e

echo "*** Starting tcpdump on ${iface}... Ctrl-c to stop capture"
echo "*** Pcap file will be written in ngsdn-tutorial/tmp/${file}"
docker exec -it mininet /mininet/host-cmd $1 tcpdump -i $1-eth0 -w /tmp/"${file}"

if [ -x "$(command -v wireshark)" ]; then
    echo "*** Opening wireshark... Ctrl-c to quit"
    wireshark "${DIR}/../tmp/${file}"
fi

================================================
FILE: util/oc-pb-decoder
================================================
#!/bin/bash
docker run --rm -i bocon/yang-tools:latest oc-pb-decoder

================================================
FILE: util/onos-cmd
================================================
#!/bin/bash
if [ -z $1 ]; then
    echo "usage: $0 cmd [args...]"
    exit 1
fi
# Use sshpass to skip the password prompt
docker run -it --rm --network host ictu/sshpass \
    -procks ssh -o "UserKnownHostsFile=/dev/null" \
    -o "StrictHostKeyChecking=no" -o LogLevel=ERROR -p 8101 onos@localhost "$@"

================================================
FILE: util/p4rt-sh
================================================
#!/usr/bin/env python3
# Copyright 2019 Barefoot Networks, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# """ P4Runtime shell docker wrapper From: https://github.com/p4lang/p4runtime-shell/blob/master/p4runtime-sh-docker """ import argparse from collections import namedtuple import logging import os.path import sys import tempfile import shutil import subprocess DOCKER_IMAGE = 'p4lang/p4runtime-sh' TMP_DIR = os.path.dirname(os.path.abspath(__file__)) + '/.pipe_cfg' def main(): FwdPipeConfig = namedtuple('FwdPipeConfig', ['p4info', 'bin']) def pipe_config(arg): try: paths = FwdPipeConfig(*[x for x in arg.split(',')]) if len(paths) != 2: raise argparse.ArgumentError return paths except Exception: raise argparse.ArgumentError( "Invalid pipeline config, expected ,") parser = argparse.ArgumentParser(description='P4Runtime shell docker wrapper', add_help=False) parser.add_argument('--grpc-addr', help='P4Runtime gRPC server address', metavar=':', type=str, action='store', default="localhost:50001") parser.add_argument('--config', help='If you want the shell to push a pipeline config to the server first', metavar=',', type=pipe_config, action='store', default=None) parser.add_argument('-v', '--verbose', help='Increase output verbosity', action='store_true') args, unknown_args = parser.parse_known_args() docker_args = [] new_args = [] if args.verbose: logging.basicConfig(level=logging.DEBUG) new_args.append('--verbose') if args.grpc_addr is not None: print("*** Connecting to P4Runtime server at {} ...".format(args.grpc_addr)) new_args.extend(["--grpc-addr", args.grpc_addr]) if args.config is not None: if not os.path.isfile(args.config.p4info): logging.critical("'{}' is not a valid file".format(args.config.p4info)) sys.exit(1) if not os.path.isfile(args.config.bin): logging.critical("'{}' is not a valid file".format(args.config.bin)) sys.exit(1) mount_path = "/fwd_pipe_config" fname_p4info = "p4info.pb.txt" fname_bin = "config.bin" os.mkdir(TMP_DIR) logging.debug( "Created temporary directory '{}', it will be mounted in the docker as '{}'".format( TMP_DIR, mount_path)) 
        shutil.copy(args.config.p4info, os.path.join(TMP_DIR, fname_p4info))
        shutil.copy(args.config.bin, os.path.join(TMP_DIR, fname_bin))
        docker_args.extend(["-v", "{}:{}".format(TMP_DIR, mount_path)])
        new_args.extend(["--config", "{},{}".format(
            os.path.join(mount_path, fname_p4info),
            os.path.join(mount_path, fname_bin))])

    cmd = ["docker", "run", "-ti", "--network", "host"]
    cmd.extend(docker_args)
    cmd.append(DOCKER_IMAGE)
    cmd.extend(new_args)
    cmd.extend(unknown_args)
    logging.debug("Running cmd: {}".format(" ".join(cmd)))
    subprocess.run(cmd)

    if args.config is not None:
        logging.debug("Cleaning up...")
        try:
            shutil.rmtree(TMP_DIR)
        except Exception:
            logging.error("Error when removing temporary directory '{}'".format(TMP_DIR))


if __name__ == '__main__':
    main()

================================================
FILE: util/vm/.gitignore
================================================
*.log
.vagrant
*.ova

================================================
FILE: util/vm/README.md
================================================
# Scripts to build the tutorial VM

## Requirements

- [Vagrant](https://www.vagrantup.com/) (tested with v2.2.5)
- [VirtualBox](https://www.virtualbox.org/wiki/Downloads) (tested with v5.2.32)

## Steps to build

If you want to provision and use the VM locally on your machine:

    cd util/vm
    vagrant up

Otherwise, if you want to export the VM in `.ova` format for distribution to
tutorial attendees:

    cd util/vm
    ./build-vm.sh

This script will:

1. provision the VM using Vagrant;
2. reduce the VM disk size;
3. generate a file named `ngsdn-tutorial.ova`.

Use credentials `sdn`/`rocks` to log in to the Ubuntu system.

**Note on IntelliJ IDEA plugins:** plugins need to be installed manually.
We recommend installing the following ones:

* https://plugins.jetbrains.com/plugin/10620-p4-plugin
* https://plugins.jetbrains.com/plugin/7322-python-community-edition

================================================
FILE: util/vm/Vagrantfile
================================================
REQUIRED_PLUGINS = %w(
  vagrant-vbguest
  vagrant-reload
  vagrant-disksize
)

Vagrant.configure(2) do |config|

  # Install plugins if missing...
  _retry = false
  REQUIRED_PLUGINS.each do |plugin|
    unless Vagrant.has_plugin? plugin
      system "vagrant plugin install #{plugin}"
      _retry = true
    end
  end
  if (_retry)
    exec "vagrant " + ARGV.join(' ')
  end

  # Common config.
  config.vm.box = "lasp/ubuntu16.04-desktop"
  config.vbguest.auto_update = true
  config.disksize.size = '50GB'
  config.vm.synced_folder ".", "/vagrant", disabled: false, type: "virtualbox"
  config.vm.network "private_network", :type => 'dhcp', :adapter => 2

  config.vm.define "default" do |d|
    d.vm.hostname = "tutorial-vm"
    d.vm.provider "virtualbox" do |vb|
      vb.name = "ONF NG-SDN Tutorial " + Time.now.strftime("(%Y-%m-%d)")
      vb.gui = true
      vb.cpus = 8
      vb.memory = 8192
      vb.customize ['modifyvm', :id, '--clipboard', 'bidirectional']
      vb.customize ["modifyvm", :id, "--accelerate3d", "on"]
      vb.customize ["modifyvm", :id, "--graphicscontroller", "vboxvga"]
      vb.customize ["modifyvm", :id, "--vram", "128"]
    end
    d.vm.provision "shell", path: "root-bootstrap.sh"
    d.vm.provision "shell", inline: "su sdn '/vagrant/user-bootstrap.sh'"
  end
end

================================================
FILE: util/vm/build-vm.sh
================================================
#!/usr/bin/env bash
set -xe

function wait_vm_shutdown {
    set +x
    while vboxmanage showvminfo $1 | grep -c "running (since"; do
        echo "Waiting for VM to shutdown..."
        sleep 1
    done
    sleep 2
    set -x
}

# Provision
vagrant up

# Cleanup
VB_UUID=$(cat .vagrant/machines/default/virtualbox/id)
vagrant ssh -c 'bash /vagrant/cleanup.sh'
sleep 5
vboxmanage controlvm "${VB_UUID}" acpipowerbutton
wait_vm_shutdown "${VB_UUID}"

# Remove vagrant shared folder
vboxmanage sharedfolder remove "${VB_UUID}" -name "vagrant"

# Export
rm -f ngsdn-tutorial.ova
vboxmanage export "${VB_UUID}" -o ngsdn-tutorial.ova

================================================
FILE: util/vm/cleanup.sh
================================================
#!/bin/bash
set -ex

sudo apt-get clean
sudo apt-get -y autoremove
sudo rm -rf /tmp/*
history -c
rm -f ~/.bash_history

# Zero-fill the virtual HD to save space when exporting
time sudo dd if=/dev/zero of=/tmp/zero bs=1M || true
sync ; sleep 1 ; sync ; sudo rm -f /tmp/zero

# Delete vagrant user
sudo userdel -r -f vagrant

================================================
FILE: util/vm/root-bootstrap.sh
================================================
#!/usr/bin/env bash
set -xe

# Create user sdn
useradd -m -d /home/sdn -s /bin/bash sdn
usermod -aG sudo sdn
usermod -aG vboxsf sdn
echo "sdn:rocks" | chpasswd
echo "sdn ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/99_sdn
chmod 440 /etc/sudoers.d/99_sdn
update-locale LC_ALL="en_US.UTF-8"

apt-get update
apt-get install -y --no-install-recommends apt-transport-https ca-certificates
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update

# Required packages
DEBIAN_FRONTEND=noninteractive apt-get -y --no-install-recommends install \
    avahi-daemon \
    git \
    bash-completion \
    htop \
    python \
    zip unzip \
    make \
    wget \
    curl \
    vim nano emacs \
    docker-ce

# Enable Docker at startup
systemctl start docker
systemctl enable docker

# Add sdn user to docker group
usermod -a -G docker sdn

# Install pip
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python get-pip.py --force-reinstall
rm -f get-pip.py

# Bash autocompletion
echo "source /etc/profile.d/bash_completion.sh" >> ~/.bashrc

# Fix SSH server config
tee -a /etc/ssh/sshd_config <