Repository: opennetworkinglab/ngsdn-tutorial
Branch: advanced
Commit: 62705c3cb64f
Files: 86
Total size: 603.7 KB

Directory structure:
ngsdn-tutorial/

├── .gitignore
├── .travis.yml
├── EXERCISE-1.md
├── EXERCISE-2.md
├── EXERCISE-3.md
├── EXERCISE-4.md
├── EXERCISE-5.md
├── EXERCISE-6.md
├── EXERCISE-7.md
├── EXERCISE-8.md
├── LICENSE
├── Makefile
├── README.md
├── app/
│   ├── pom.xml
│   └── src/
│       └── main/
│           └── java/
│               └── org/
│                   └── onosproject/
│                       └── ngsdn/
│                           └── tutorial/
│                               ├── AppConstants.java
│                               ├── Ipv6RoutingComponent.java
│                               ├── L2BridgingComponent.java
│                               ├── MainComponent.java
│                               ├── NdpReplyComponent.java
│                               ├── Srv6Component.java
│                               ├── cli/
│                               │   ├── Srv6ClearCommand.java
│                               │   ├── Srv6InsertCommand.java
│                               │   ├── Srv6SidCompleter.java
│                               │   └── package-info.java
│                               ├── common/
│                               │   ├── FabricDeviceConfig.java
│                               │   └── Utils.java
│                               └── pipeconf/
│                                   ├── InterpreterImpl.java
│                                   ├── PipeconfLoader.java
│                                   └── PipelinerImpl.java
├── docker-compose.yml
├── mininet/
│   ├── flowrule-gtp.json
│   ├── host-cmd
│   ├── netcfg-gtp.json
│   ├── netcfg-sr.json
│   ├── netcfg.json
│   ├── recv-gtp.py
│   ├── send-udp.py
│   ├── topo-gtp.py
│   ├── topo-v4.py
│   └── topo-v6.py
├── p4src/
│   ├── main.p4
│   └── snippets.p4
├── ptf/
│   ├── lib/
│   │   ├── __init__.py
│   │   ├── base_test.py
│   │   ├── chassis_config.pb.txt
│   │   ├── convert.py
│   │   ├── helper.py
│   │   ├── port_map.json
│   │   ├── runner.py
│   │   └── start_bmv2.sh
│   ├── run_tests
│   └── tests/
│       ├── bridging.py
│       ├── packetio.py
│       ├── routing.py
│       └── srv6.py
├── solution/
│   ├── app/
│   │   └── src/
│   │       └── main/
│   │           └── java/
│   │               └── org/
│   │                   └── onosproject/
│   │                       └── ngsdn/
│   │                           └── tutorial/
│   │                               ├── Ipv6RoutingComponent.java
│   │                               ├── L2BridgingComponent.java
│   │                               ├── NdpReplyComponent.java
│   │                               ├── Srv6Component.java
│   │                               └── pipeconf/
│   │                                   └── InterpreterImpl.java
│   ├── mininet/
│   │   ├── flowrule-gtp.json
│   │   ├── netcfg-gtp.json
│   │   └── netcfg-sr.json
│   ├── p4src/
│   │   └── main.p4
│   └── ptf/
│       └── tests/
│           ├── bridging.py
│           ├── packetio.py
│           ├── routing.py
│           └── srv6.py
├── util/
│   ├── docker/
│   │   ├── Makefile
│   │   ├── Makefile.vars
│   │   ├── mvn/
│   │   │   └── Dockerfile
│   │   └── stratum_bmv2/
│   │       └── Dockerfile
│   ├── gnmi-cli
│   ├── mn-cmd
│   ├── mn-pcap
│   ├── oc-pb-decoder
│   ├── onos-cmd
│   ├── p4rt-sh
│   └── vm/
│       ├── .gitignore
│       ├── README.md
│       ├── Vagrantfile
│       ├── build-vm.sh
│       ├── cleanup.sh
│       ├── root-bootstrap.sh
│       └── user-bootstrap.sh
└── yang/
    └── demo-port.yang

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
.idea/
tmp/
p4src/build
app/target
app/src/main/resources/p4info.txt
app/src/main/resources/bmv2.json
ptf/stratum_bmv2.log
ptf/p4rt_write.log
ptf/ptf.log
ptf/ptf.pcap
**/*.iml
**/*.pyc
**/*.bak
**/.classpath
**/.project
**/.settings
**/.factorypath
util/.pipe_cfg


================================================
FILE: .travis.yml
================================================
dist: xenial

language: python

services:
  - docker

python:
  - "3.5"

install:
  - make deps

script:
  - make check check-sr check-gtp NGSDN_TUTORIAL_SUDO=sudo


================================================
FILE: EXERCISE-1.md
================================================
# Exercise 1: P4Runtime Basics

This exercise provides a hands-on introduction to the P4Runtime API. You will be
asked to:

1. Look at the P4 starter code
2. Compile it for the BMv2 software switch and understand the output (P4Info
   and BMv2 JSON files)
3. Start Mininet with a 2x2 topology of `stratum_bmv2` switches
4. Use the P4Runtime Shell to manually insert table entries in one of the
   switches to provide connectivity between hosts

## 1. Look at the P4 program

To get started, let's have a look at the P4 program:
[p4src/main.p4](p4src/main.p4)

In the rest of the exercises, you will be asked to build a leaf-spine data
center fabric based on IPv6. To make things easier, we provide a starter P4
program which contains:

* Header definitions
* Parser implementation
* Ingress and egress pipeline implementation (incomplete)
* Checksum verification/update

The implementation already provides logic for L2 bridging and ACL behaviors. We
suggest you start by taking a **quick look** at the whole program to understand
its structure. When you're done, try answering the following questions, while
referring to the P4 program to understand the different parts in more details.

**Parser**

* List all the protocol headers that can be extracted from a packet.
* Which header is expected to be the first one when parsing a new packet

**Ingress pipeline**

* For the L2 bridging case, which table is used to replicate NDP requests to
  all host-facing ports? What type of match is used in that table?
* In the ACL table, what's the difference between `send_to_cpu` and
  `clone_to_cpu` actions?
* In the apply block, what is the first table applied to a packet? Are P4Runtime
  packet-outs treated differently?

**Egress pipeline**

* For multicast packets, can they be replicated to the ingress port?

**Deparser**

* What is the first header to be serialized on the wire and in which case?

## 2. Compile P4 program

The next step is to compile the P4 program for the BMv2 `simple_switch` target.
For this, we will use the open source P4_16 compiler ([p4c][p4c]) which includes
a backend for this specific target, named `p4c-bm2-ss`.

To compile the program, open a terminal window in the exercise VM and type the
following command:

```
make p4-build
```

You should see the following output:

```
*** Building P4 program...
docker run --rm -v /home/sdn/ngsdn-tutorial:/workdir -w /workdir
 opennetworking/p4c:stable \
                p4c-bm2-ss --arch v1model -o p4src/build/bmv2.json \
                --p4runtime-files p4src/build/p4info.txt --Wdisable=unsupported \
                p4src/main.p4
*** P4 program compiled successfully! Output files are in p4src/build
```

We have instrumented the Makefile to use a containerized version of the
`p4c-bm2-ss` compiler. If you look at the arguments when calling `p4c-bm2-ss`,
you will notice that we are asking the compiler to:

* Compile for the v1model architecture (`--arch` argument);
* Put the main output in `p4src/build/bmv2.json` (`-o`);
* Generate a P4Info file in `p4src/build/p4info.txt` (`--p4runtime-files`);
* Ignore some warnings about unsupported features (`--Wdisable=unsupported`).
  It's ok to ignore such warnings here, as they are generated because of a bug
  in p4c.

### Compiler output

#### bmv2.json

This file defines a configuration for the BMv2 `simple_switch` target in JSON
format. When `simple_switch` receives a new packet, it uses this configuration
to process the packet in a way that is consistent with the P4 program.

This is quite a big file, but don't worry, there's no need to understand its
content for the sake of this exercise. If you want to learn more, a
specification of the BMv2 JSON format is provided here:
<https://github.com/p4lang/behavioral-model/blob/master/docs/JSON_format.md>

#### p4info.txt

This file contains an instance of a P4Info schema for our P4 program, expressed
using the Protobuf Text format.

Take a look at this file and try to answer the following questions:

1. What is the fully qualified name of the `l2_exact_table`? What is its numeric
   ID?
2. To which P4 entity does the ID `16812802` belong? A table, an action, or
   something else? What is the corresponding fully qualified name?
3. For the `IngressPipeImpl.set_egress_port` action, how many parameters are
   defined for this action? What is the bitwidth of the parameter named
   `port_num`?
4. At the end of the file, look for the definition of the
   `controller_packet_metadata` message with name `packet_out`. Now look at the
   definition of `header cpu_out_header_t` in the P4 program. Do you see any
   relationship between the two?

## 3. Start Mininet topology

It's now time to start an emulated network of `stratum_bmv2` switches. We will
program one of the switches using the compiler output obtained in the previous
step.

To start the topology, use the following command:

```
make start
```

This command will start two Docker containers, one for Mininet and one for ONOS.
You can ignore the ONOS one for now; we will use it in exercises 3 and 4.

To make sure the container is started without errors, you can use the `make
mn-log` command to show the Mininet log. Verify that you see the following
output (press Ctrl-C to exit):

```
$ make mn-log
docker-compose logs -f mininet
Attaching to mininet
mininet    | *** Error setting resource limits. Mininet's performance may be affected.
mininet    | *** Creating network
mininet    | *** Adding hosts:
mininet    | h1a h1b h1c h2 h3 h4
mininet    | *** Adding switches:
mininet    | leaf1 leaf2 spine1 spine2
mininet    | *** Adding links:
mininet    | (h1a, leaf1) (h1b, leaf1) (h1c, leaf1) (h2, leaf1) (h3, leaf2) (h4, leaf2) (spine1, leaf1) (spine1, leaf2) (spine2, leaf1) (spine2, leaf2)
mininet    | *** Configuring hosts
mininet    | h1a h1b h1c h2 h3 h4
mininet    | *** Starting controller
mininet    |
mininet    | *** Starting 4 switches
mininet    | leaf1 stratum_bmv2 @ 50001
mininet    | leaf2 stratum_bmv2 @ 50002
mininet    | spine1 stratum_bmv2 @ 50003
mininet    | spine2 stratum_bmv2 @ 50004
mininet    |
mininet    | *** Starting CLI:
```

You can ignore the "*** Error setting resource limits...".

The parameters to start the mininet container are specified in
[docker-compose.yml](docker-compose.yml). The container is configured to execute
the topology script defined in [mininet/topo-v6.py](mininet/topo-v6.py).

The topology includes 4 switches arranged in a 2x2 leaf-spine fabric, as well as
6 hosts attached to the leaf switches. Three hosts, `h1a`, `h1b`, and `h1c`, are
configured to be part of the same IPv6 subnet. In the next step you will be
asked to use P4Runtime to insert table entries that enable ping between two
hosts of this subnet.

![topo-v6](img/topo-v6.png)

### stratum_bmv2 temporary files

When starting the Mininet container, a set of files related to the execution of
each `stratum_bmv2` instance is generated in the `tmp` directory. Examples
include:

* `tmp/leaf1/stratum_bmv2.log`: contains the stratum_bmv2 log for switch
  `leaf1`;
* `tmp/leaf1/chassis-config.txt`: the Stratum "chassis config" file used to
   specify the initial port configuration to use at switch startup. This file is
   automatically generated by the `StratumBmv2Switch` class invoked by
   [mininet/topo-v6.py](mininet/topo-v6.py).
* `tmp/leaf1/write-reqs.txt`: a log of all P4Runtime write requests processed by
  the switch (the file might not exist if the switch has not received any write
  request).

## 4. Program leaf1 using P4Runtime

For this part we will use the [P4Runtime Shell][p4runtime-sh], an interactive
Python CLI that can be used to connect to a P4Runtime server and run P4Runtime
commands. For example, it can be used to create, read, update, and delete flow
table entries.

The shell can be started in two modes, with or without a P4 pipeline config. In
the first case, the shell will take care of pushing the given pipeline config to
the switch using the P4Runtime `SetForwardingPipelineConfig` RPC. In the second
case, the shell will try to retrieve the P4Info that is currently configured on
the switch.

In both cases, the shell makes use of the P4Info file to:
* allow specifying runtime entities such as table entries using P4Info names
  rather than numeric IDs (much easier to remember and read);
* provide autocompletion;
* validate the CLI commands.

Finally, when connecting to a P4Runtime server, the specification mandates that
we provide a mastership election ID to be able to write state, such as the
pipeline config and table entries.
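
The election ID is a 128-bit value carried in P4Runtime messages as two 64-bit halves, which is why the shell accepts it as a pair (`--election-id 0,1` means high=0, low=1); the client presenting the highest ID becomes the primary and gains write access. A minimal sketch of that comparison, for illustration only:

```python
def election_id_to_int(high: int, low: int) -> int:
    """Combine the (high, low) 64-bit halves of a P4Runtime election ID
    into the single 128-bit value used for mastership comparison."""
    return (high << 64) | low

# --election-id 0,1 corresponds to the 128-bit value 1.
# The client with the highest election ID becomes the primary (master).
```

So any client whose high half is larger outranks every client with a smaller high half, regardless of the low half.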

To connect the P4Runtime Shell to `leaf1` and push the pipeline configuration
obtained before, use the following command:

```
util/p4rt-sh --grpc-addr localhost:50001 --config p4src/build/p4info.txt,p4src/build/bmv2.json --election-id 0,1
```

`util/p4rt-sh` is a simple Python script that invokes the P4Runtime Shell
Docker container with the given arguments. For a list of arguments you can type
`util/p4rt-sh --help`.

**Note:** we use `--grpc-addr localhost:50001` as the Mininet container is
executed locally, and `50001` is the TCP port associated to the gRPC server
exposed by `leaf1`.

If the shell started successfully, you should see the following output:

```
*** Connecting to P4Runtime server at host.docker.internal:50001 ...
*** Welcome to the IPython shell for P4Runtime ***
P4Runtime sh >>>
```

#### Available commands

Use commands like `tables`, `actions`, `action_profiles`, `counters`,
`direct_counters`, and others named after the P4Info message fields, to query
information about P4Info objects.

Commands such as `table_entry`, `action_profile_member`, `action_profile_group`,
`counter_entry`, `direct_counter_entry`, `meter_entry`, `direct_meter_entry`,
`multicast_group_entry`, and `clone_session_entry`, can be used to read/write
the corresponding P4Runtime entities.

Type the command name followed by `?` for information on each command,
e.g. `table_entry?`.

For more information on P4Runtime Shell, check the official documentation at:
<https://github.com/p4lang/p4runtime-shell>

The shell supports autocompletion when pressing `tab`. For example:

```
tables["IngressPipeImpl.<tab>
```

will show all tables defined inside the `IngressPipeImpl` block.

### Bridging connectivity test

Use the following steps to verify connectivity on leaf1 after inserting the
required P4Runtime table entries. For this part, you will need to use the
Mininet CLI.

On a new terminal window, attach to the Mininet CLI using `make mn-cli`.

You should see the following output:

```
*** Attaching to Mininet CLI...
*** To detach press Ctrl-D (Mininet will keep running)
mininet>
```

### Insert static NDP entries

To be able to ping two IPv6 hosts in the same subnet, the hosts first need to
resolve their respective MAC addresses using the Neighbor Discovery Protocol
(NDP), the IPv6 equivalent of ARP in IPv4 networks. For example, when trying to
ping `h1b` from `h1a`, `h1a` will first generate an NDP Neighbor Solicitation
(NS) message to resolve the MAC address of `h1b`. Once `h1b` receives the NDP NS
message, it should reply with an NDP Neighbor Advertisement (NA) carrying its
own MAC address. Now both hosts are aware of each other's MAC address, and the
ping packets can be exchanged.
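
As background, NDP Neighbor Solicitations are not broadcast the way ARP requests are: they are sent to a solicited-node multicast address derived from the target's IPv6 address (the `ff02::1:ff00:0/104` prefix plus the last 24 bits of the target). A quick sketch of that derivation:

```python
import ipaddress

def solicited_node_multicast(target: str) -> str:
    # ff02::1:ff00:0/104 with the low 24 bits of the target address appended.
    low24 = int(ipaddress.IPv6Address(target)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return str(ipaddress.IPv6Address(base | low24))
```

For example, resolving `h1b` (`2001:1:1::b`) sends the NS to `ff02::1:ff00:b`. This is why the P4 program can match NDP traffic with a ternary match on the multicast prefix.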

As you might have noted by looking at the P4 program before, the switch should
be able to handle NDP packets if correctly programmed using P4Runtime (see
`l2_ternary_table`). However, **to keep things simple for now, let's insert two
static NDP entries in our hosts.**

Add an NDP entry to `h1a`, mapping `h1b`'s IPv6 address (`2001:1:1::b`) to its
MAC address (`00:00:00:00:00:1B`):

```
mininet> h1a ip -6 neigh replace 2001:1:1::B lladdr 00:00:00:00:00:1B dev h1a-eth0
```

And vice versa, add an NDP entry to `h1b` to resolve `h1a`'s address:

```
mininet> h1b ip -6 neigh replace 2001:1:1::A lladdr 00:00:00:00:00:1A dev h1b-eth0
```

### Start ping

Start a ping between `h1a` and `h1b`. It should not work, as we have not
inserted any P4Runtime table entry to forward these packets.

```
mininet> h1a ping h1b
```

You should see no output from the ping command. You can leave that command
running for now.

### Insert P4Runtime table entries

To be able to forward ping packets, we need to add two table entries on
`l2_exact_table` in `leaf1` -- one that matches on destination MAC address
of `h1b` and forwards traffic to port 4 (where `h1b` is attached), and
vice versa (`h1a` is attached to port 3).

Let's use the P4Runtime shell to create and insert such entries. Looking at the
P4Info file, use the commands below to insert the following two entries in the
`l2_exact_table`:

| Match (Ethernet dest) | Egress port number  |
|-----------------------|-------------------- |
| `00:00:00:00:00:1B`   | 4                   |
| `00:00:00:00:00:1A`   | 3                   |

To create a table entry object:

```
P4Runtime sh >>> te = table_entry["P4INFO-TABLE-NAME"](action = "<P4INFO-ACTION-NAME>")
```

Make sure to use the fully qualified name for each entity, e.g.
`IngressPipeImpl.l2_exact_table`, `IngressPipeImpl.set_egress_port`, etc.

To specify a match field:

```
P4Runtime sh >>> te.match["P4INFO-MATCH-FIELD-NAME"] = ("VALUE")
```

`VALUE` can be a MAC address expressed in colon-hexadecimal notation
(e.g., `00:11:22:AA:BB:CC`), an IP address in dot notation, or an arbitrary
string. Based on the information contained in the P4Info, P4Runtime Shell will
internally convert that value to a Protobuf byte string.
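
As an illustration of that conversion (this sketch is not the shell's actual implementation), a colon-hexadecimal MAC address maps to a 6-byte Protobuf byte string as follows:

```python
def mac_to_bytes(mac: str) -> bytes:
    # "00:00:00:00:00:1B" -> b"\x00\x00\x00\x00\x00\x1b"
    return bytes(int(octet, 16) for octet in mac.split(":"))
```

The P4Info bitwidth of the match field (48 bits for `dst_addr`) tells the shell how many bytes the resulting string must contain.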

To specify the values for the table entry action parameters:

```
P4Runtime sh >>> te.action["P4INFO-ACTION-PARAM-NAME"] = ("VALUE")
```

You can show the table entry object in Protobuf Text format, using the `print`
command:

```
P4Runtime sh >>> print(te)
```

The shell internally takes care of populating the fields of the corresponding
Protobuf message by using the content of the P4Info file.

To insert the entry (this will issue a P4Runtime Write RPC to the switch):

```
P4Runtime sh >>> te.insert()
```

To read table entries from the switch (this will issue a P4Runtime Read RPC):

```
P4Runtime sh >>> for te in table_entry["P4INFO-TABLE-NAME"].read():
            ...:     print(te)
            ...:
```

After inserting the two entries, ping should work. Go back to the Mininet CLI
terminal with the ping command running and verify that you see an output similar
to this:

```
mininet> h1a ping h1b
PING 2001:1:1::b(2001:1:1::b) 56 data bytes
64 bytes from 2001:1:1::b: icmp_seq=956 ttl=64 time=1.65 ms
64 bytes from 2001:1:1::b: icmp_seq=957 ttl=64 time=1.28 ms
64 bytes from 2001:1:1::b: icmp_seq=958 ttl=64 time=1.69 ms
...
```

## Congratulations!

You have completed the first exercise! Leave Mininet running, as you will need it
for the following exercises.

[p4c]: https://github.com/p4lang/p4c
[p4runtime-sh]: https://github.com/p4lang/p4runtime-shell


================================================
FILE: EXERCISE-2.md
================================================
# Exercise 2: YANG, OpenConfig, and gNMI Basics

This exercise is designed to give you more exposure to YANG, OpenConfig,
and gNMI. It includes:

1. Understanding the YANG language
2. Understanding YANG encoding
3. Understanding YANG-enabled transport protocols (using gNMI)

## 1. Understanding the YANG language

We start with a simple YANG module called `demo-port` in
[`yang/demo-port.yang`](./yang/demo-port.yang)

Take a look at the model and try to derive the structure. What are valid values
for each of the leaf nodes?

This model is self-contained, so it isn't too difficult to work it out. However,
most YANG models are defined across many files, which makes it much harder to
work out the overall structure.

To make this easier, we can use a tool called `pyang` to try to visualize the
structure of the model.

Start by entering the yang-tools Docker container:

```
$ make yang-tools
bash-4.4#
```

Next, run `pyang` on the `demo-port.yang` model:

```
bash-4.4# pyang -f tree demo-port.yang
```

You should see a tree representation of the `demo-port` module. Does this match
your expectations?

------

*Extra Credit:* Try to add a new leaf node
to the `port-config` or `port-state` grouping, then rerun `pyang` and see where
your new leaf was added.

------

We can also use `pyang` to visualize a more complicated set of models, like the
set of OpenConfig models that Stratum uses.

These models have already been loaded into the `yang-tools` container in the
`/models` directory.

```
bash-4.4# pyang -f tree \
    -p ietf \
    -p openconfig \
    -p hercules \
    openconfig/interfaces/openconfig-interfaces.yang \
    openconfig/interfaces/openconfig-if-ethernet.yang  \
    openconfig/platform/* \
    openconfig/qos/* \
    openconfig/system/openconfig-system.yang \
    hercules/openconfig-hercules-*.yang  | less
```

You should see a tree structure of the models displayed in `less`. You can use
the Arrow keys or `j`/`k` to scroll up and down. Type `q` to quit.

In the interface model, we can see the path to enable or disable an interface:
`interfaces/interface[name]/config/enabled`

What is the path to read the number of incoming packets (`in-pkts`) on an interface?

------

*Extra Credit:* Take a look at the models in the
`/models` directory or browse them on Github:
<https://github.com/openconfig/public/tree/master/release/models>

Try to find the description of the `enabled` or `in-pkts` leaf nodes.

*Hint:* Take a look at the `openconfig-interfaces.yang` file.

------

## 2. Understanding YANG encoding

YANG itself does not prescribe a data encoding: data adhering to YANG models can
be encoded into XML, JSON, or Protobuf (among other formats), and each of these
formats has its own schema format.

First, we can look at YANG's first and canonical representation format, XML. To
see an empty skeleton of data encoded in XML, run:

```
bash-4.4# pyang -f sample-xml-skeleton demo-port.yang
```

This skeleton should match the tree representation we saw in part 1.

We can also use `pyang` to generate a DSDL schema based on the YANG model:

```
bash-4.4# pyang -f dsdl demo-port.yang | xmllint --format -
```

The first part of the schema describes the tree structure, and the second part
describes the value constraints for the leaf nodes.

*Extra credit:* Try adding a new speed identity (e.g. `SPEED_100G`) or changing
the range of `port-number` values in `demo-port.yang`, then rerun `pyang -f
dsdl`. Do you see your changes reflected in the DSDL schema?

------

Next, we will look at encoding data using Protocol Buffers (protobuf). The
protobuf encoding is a more compact binary encoding than XML, and libraries can
be automatically generated for dozens of languages. We can use
[`ygot`](https://github.com/openconfig/ygot)'s `proto_generator` to generate
protobuf messages from our YANG model.

```
bash-4.4# proto_generator -output_dir=/proto -package_name=tutorial demo-port.yang
```

`proto_generator` will generate two files:
* `/proto/tutorial/demo_port/demo_port.proto`
* `/proto/tutorial/enums/enums.proto`

Open `demo_port.proto` using `less`:

```
bash-4.4# less /proto/tutorial/demo_port/demo_port.proto
```

This file contains a top-level Ports message that matches the structure defined
in the YANG model. You can see that `proto_generator` also adds a
`yext.schemapath` custom option to each protobuf message field that explicitly
maps to the YANG leaf path. Enums (like `tutorial.enums.DemoPortSPEED`) aren't
included in this file, but `proto_generator` puts them in a separate file:
`enums.proto`

Open `enums.proto` using `less`:

```
bash-4.4# less /proto/tutorial/enums/enums.proto
```

You should see an enum for the 10GB speed, along with any other speeds that you
added if you completed the extra credit above.


We can also use `proto_generator` to build the protobuf messages for the
OpenConfig models that Stratum uses:

```
bash-4.4# proto_generator \
    -generate_fakeroot \
    -output_dir=/proto \
    -package_name=openconfig \
    -exclude_modules=ietf-interfaces \
    -compress_paths \
    -base_import_path= \
    -path=ietf,openconfig,hercules \
    openconfig/interfaces/openconfig-interfaces.yang \
    openconfig/interfaces/openconfig-if-ip.yang \
    openconfig/lacp/openconfig-lacp.yang \
    openconfig/platform/openconfig-platform-linecard.yang \
    openconfig/platform/openconfig-platform-port.yang \
    openconfig/platform/openconfig-platform-transceiver.yang \
    openconfig/platform/openconfig-platform.yang \
    openconfig/system/openconfig-system.yang \
    openconfig/vlan/openconfig-vlan.yang \
    hercules/openconfig-hercules-interfaces.yang \
    hercules/openconfig-hercules-platform-chassis.yang \
    hercules/openconfig-hercules-platform-linecard.yang \
    hercules/openconfig-hercules-platform-node.yang \
    hercules/openconfig-hercules-platform-port.yang \
    hercules/openconfig-hercules-platform.yang \
    hercules/openconfig-hercules-qos.yang \
    hercules/openconfig-hercules.yang
```

You will find `openconfig.proto` and `enums.proto` in the `/proto/openconfig` directory.

------

*Extra Credit:* Try to find the Protobuf message fields used to enable a port or
get the ingress packets counter in the protobuf messages.

*Hint:* Searching by schemapath might help.

------

`ygot` can also be used to generate Go structs that adhere to the YANG model
and that are capable of validating the structure, type, and values of data.

------

*Extra Credit:* If you have extra time or are interested in using YANG and Go
together, try generating Go code for the `demo-port` module.

```
bash-4.4# mkdir -p /goSrc
bash-4.4# generator -output_dir=/goSrc -package_name=tutorial demo-port.yang
```

Take a look at the Go files in `/goSrc`.

------

You can now quit out of the container (using `Ctrl-D` or `exit`).

## 3. Understanding YANG-enabled transport protocols

There are several YANG-model agnostic protocols that can be used to get or set
data that adheres to a model, like NETCONF, RESTCONF, and gNMI.

This part focuses on using the protobuf encoding over gNMI.

First, make sure your Mininet container is still running.

```
$ make start
docker-compose up -d
mininet is up-to-date
onos is up-to-date
```

If you see the following output, then Mininet was not running:

```
Starting mininet ... done
Starting onos    ... done
```

You will need to go back to Exercise 1 and install forwarding rules to
re-establish pings between `h1a` and `h1b` for later parts of this exercise.

If you could not complete Exercise 1, you can use the following P4Runtime-sh
commands to enable connectivity:

```python
te = table_entry['IngressPipeImpl.l2_exact_table'](action='IngressPipeImpl.set_egress_port')
te.match['hdr.ethernet.dst_addr'] = '00:00:00:00:00:1A'
te.action['port_num'] = '3'
te.insert()

te = table_entry['IngressPipeImpl.l2_exact_table'](action='IngressPipeImpl.set_egress_port')
te.match['hdr.ethernet.dst_addr'] = '00:00:00:00:00:1B'
te.action['port_num'] = '4'
te.insert()
```

Next, we will use a [gNMI client CLI](https://github.com/Yi-Tseng/Yi-s-gNMI-tool)
to read all of the configuration from the Stratum switch `leaf1` in our
Mininet network:

```
$ util/gnmi-cli --grpc-addr localhost:50001 get /
```

The first part of the output shows the request that was made by the CLI:
```
REQUEST
path {
}
type: CONFIG
encoding: PROTO
```

The path being requested is the empty path (which means the root of the config
tree), the requested data type is `CONFIG` (configuration data only), and the
requested encoding for the response is protobuf.

The second part of the output shows the response from Stratum:
```
RESPONSE
notification {
  update {
    path {
    }
    val {
      any_val {
        type_url: "type.googleapis.com/openconfig.Device"
        value: \252\221\231\304\001\... TRUNCATED
      }
    }
  }
}
```

You can see that Stratum provides a response of type `openconfig.Device`, which
is the top-level message defined in `openconfig.proto`. The response is the
binary encoding of the data based on the protobuf message.
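
As a side note on why the value is compact but unreadable: protobuf packs integers (such as counter values) using base-128 varints, seven payload bits per byte with the high bit marking continuation. A minimal sketch of that encoding:

```python
def encode_varint(n: int) -> bytes:
    # Protobuf varints: 7 bits of payload per byte,
    # most-significant bit set on every byte except the last.
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)
```

Small values take a single byte, which is a large part of why the binary form is so much smaller than XML or JSON.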

The value is not human readable, but we can translate the reply using a
utility that converts between the binary and textual representations of the
protobuf message.

We can rerun the command, but this time pipe the output through the converter
utility (then pipe that output to `less` to make scrolling easier):

```
$ util/gnmi-cli --grpc-addr localhost:50001 get / | util/oc-pb-decoder | less
```

The contents of the response should now be easier to read. Scroll down to the first
`interface`. Is the interface enabled? What is the speed of the port?

------

*Extra credit:* Can you find `in-pkts`? If not, why do you think they are
missing?

-------

One of the benefits of gNMI is its "schema-less" encoding, which allows clients
or devices to update only the paths that need to be updated. This is
particularly useful for subscriptions.

First, let's try out the schema-less representation by requesting the
configuration of the port between `leaf1` and `h1a`:

```
$ util/gnmi-cli --grpc-addr localhost:50001 get \
    /interfaces/interface[name=leaf1-eth3]/config
```

You should see a response containing two leaves under `config`: **enabled** and
**health-indicator**:

```
RESPONSE
notification {
  update {
    path {
      elem {
        name: "interfaces"
      }
      elem {
        name: "interface"
        key {
          key: "name"
          value: "leaf1-eth3"
        }
      }
      elem {
        name: "config"
      }
      elem {
        name: "enabled"
      }
    }
    val {
      bool_val: true
    }
  }
}
notification {
  update {
    path {
      elem {
        name: "interfaces"
      }
      elem {
        name: "interface"
        key {
          key: "name"
          value: "leaf1-eth3"
        }
      }
      elem {
        name: "config"
      }
      elem {
        name: "health-indicator"
      }
    }
    val {
      string_val: "GOOD"
    }
  }
}
```

The schema-less representation provides an `update` for each leaf, containing
both the path and the value of the leaf. You can confirm that the interface
is enabled (set to `true`).
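
The path strings accepted by the CLI map directly onto the `elem` messages shown in the responses above. A toy parser (not the CLI's actual implementation) illustrates the mapping from string form to (name, keys) elements:

```python
import re

def parse_gnmi_path(path: str):
    # "/interfaces/interface[name=leaf1-eth3]/config/enabled" ->
    # [("interfaces", {}), ("interface", {"name": "leaf1-eth3"}),
    #  ("config", {}), ("enabled", {})]
    elems = []
    for part in path.strip("/").split("/"):
        m = re.match(r"([^\[]+)(?:\[([^=]+)=([^\]]+)\])?$", part)
        name, key, value = m.groups()
        elems.append((name, {key: value} if key else {}))
    return elems
```

Each tuple corresponds to one `elem` message, with the bracketed `[name=...]` part becoming the `key` map.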

Next, we will subscribe to the ingress unicast packet counters for the interface
on `leaf1` attached to `h1a` (port 3):

```
$ util/gnmi-cli --grpc-addr localhost:50001 \
    --interval 1000 sub-sample \
    /interfaces/interface[name=leaf1-eth3]/state/counters/in-unicast-pkts
```

The first part of the output shows the request being made by the CLI:
```
REQUEST
subscribe {
  subscription {
    path {
      elem {
        name: "interfaces"
      }
      elem {
        name: "interface"
        key {
          key: "name"
          value: "leaf1-eth3"
        }
      }
      elem {
        name: "state"
      }
      elem {
        name: "counters"
      }
      elem {
        name: "in-unicast-pkts"
      }
    }
    mode: SAMPLE
    sample_interval: 1000
  }
  updates_only: true
}
```

We have the subscription path, the type of subscription (sampling) and the
sampling rate (every 1000ms, or 1s).

The second part of the output is a stream of responses:
```
RESPONSE
update {
  timestamp: 1567895852136043891
  update {
    path {
      elem {
        name: "interfaces"
      }
      elem {
        name: "interface"
        key {
          key: "name"
          value: "leaf1-eth3"
        }
      }
      elem {
        name: "state"
      }
      elem {
        name: "counters"
      }
      elem {
        name: "in-unicast-pkts"
      }
    }
    val {
      uint_val: 1592
    }
  }
}
```

Each response has a timestamp, path, and new value. Because we are sampling, you
should see a new update printed every second. Leave this running while we
generate some traffic.
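Because each update carries a nanosecond timestamp and a cumulative counter value, consecutive samples can be turned into a rate. A small illustrative Python sketch (the sample values below are hypothetical):

```python
def rates_per_second(samples):
    """Convert (timestamp_ns, cumulative_count) samples from a SAMPLE
    subscription into packets/second between consecutive updates."""
    return [(v1 - v0) / ((t1 - t0) / 1e9)
            for (t0, v0), (t1, v1) in zip(samples, samples[1:])]

# Two samples taken 1 s apart while one ping per second is running:
rates = rates_per_second([(1567895852136043891, 1592),
                          (1567895853136043891, 1593)])
```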

In another window, open the Mininet CLI and start a ping:

```
$ make mn-cli
*** Attaching to Mininet CLI...
*** To detach press Ctrl-D (Mininet will keep running)
mininet> h1a ping h1b
```

In the first window, you should see the `uint_val` increase by 1 every second
while your ping is still running. (If it's not exactly 1, then there could be
other traffic like NDP messages contributing to the increase.)

You can stop the gNMI subscription using `Ctrl-C`.

Finally, we will monitor link events using gNMI's on-change subscriptions.

Start a subscription for the operational status of the `leaf1` port attached to `h1a` (port 3):

```
$ util/gnmi-cli --grpc-addr localhost:50001 sub-onchange \
    /interfaces/interface[name=leaf1-eth3]/state/oper-status
```

You should immediately see a response indicating that the port is `UP`:

```
RESPONSE
update {
  timestamp: 1567896668419430407
  update {
    path {
      elem {
        name: "interfaces"
      }
      elem {
        name: "interface"
        key {
          key: "name"
          value: "leaf1-eth3"
        }
      }
      elem {
        name: "state"
      }
      elem {
        name: "oper-status"
      }
    }
    val {
      string_val: "UP"
    }
  }
}
```

In the shell running the Mininet CLI, let's take down the interface on `leaf1`
connected to `h1a`:

```
mininet> sh ifconfig leaf1-eth3 down
```

You should see a response in your gNMI CLI window showing that the interface on
`leaf1` connected to `h1a` is `DOWN`:

```
RESPONSE
update {
  timestamp: 1567896891549363399
  update {
    path {
      elem {
        name: "interfaces"
      }
      elem {
        name: "interface"
        key {
          key: "name"
          value: "leaf1-eth3"
        }
      }
      elem {
        name: "state"
      }
      elem {
        name: "oper-status"
      }
    }
    val {
      string_val: "DOWN"
    }
  }
}
```

We can bring back the interface using the following Mininet command:

```
mininet> sh ifconfig leaf1-eth3 up
```

You should see another response in your gNMI CLI window that indicates the
interface is `UP`.

------

*Extra credit:* We can also use gNMI to disable or enable an interface.

Leave your gNMI subscription for operational status changes running.

In the Mininet CLI, start a ping between two hosts.

```
mininet> h1a ping h1b
```

You should see replies shown in the Mininet CLI.

In a third window, we will use the gNMI CLI to change the configuration value of
the `enabled` leaf from `true` to `false`.

```
$ util/gnmi-cli --grpc-addr localhost:50001 set \
    /interfaces/interface[name=leaf1-eth3]/config/enabled \
    --bool-val false
```

In the gNMI set window, you should see a request indicating the new value for the
`enabled` leaf:

```
REQUEST
update {
  path {
    elem {
      name: "interfaces"
    }
    elem {
      name: "interface"
      key {
        key: "name"
        value: "leaf1-eth3"
      }
    }
    elem {
      name: "config"
    }
    elem {
      name: "enabled"
    }
  }
  val {
    bool_val: false
  }
}
```

In the gNMI subscription window, you should see a new response indicating that
the operational status of `leaf1-eth3` is `DOWN`:

```
RESPONSE
update {
  timestamp: 1567896891549363399
  update {
    path {
      elem {
        name: "interfaces"
      }
      elem {
        name: "interface"
        key {
          key: "name"
          value: "leaf1-eth3"
        }
      }
      elem {
        name: "state"
      }
      elem {
        name: "oper-status"
      }
    }
    val {
      string_val: "DOWN"
    }
  }
}
```

And in the Mininet CLI window, you should observe that the ping has stopped
working.

Next, we can re-enable the port:
```
$ util/gnmi-cli --grpc-addr localhost:50001 set \
    /interfaces/interface[name=leaf1-eth3]/config/enabled \
    --bool-val true
```

You should see another update in the gNMI subscription window indicating the
interface is `UP`, and the ping should resume in the Mininet CLI window.

## Congratulations!

You have completed the second exercise!


================================================
FILE: EXERCISE-3.md
================================================
# Exercise 3: Using ONOS as the Control Plane

This exercise provides a hands-on introduction to ONOS, where you will learn how
to:

1. Start ONOS along with a set of built-in apps for basic services such as
   topology discovery
2. Load a custom ONOS app and pipeconf
3. Push a configuration file to ONOS to discover and control the
   `stratum_bmv2` switches using P4Runtime and gNMI
4. Access the ONOS CLI and UI to verify that all `stratum_bmv2` switches have
   been discovered and configured correctly.

## 1. Start ONOS

In a terminal window, type:

```
$ make restart
```

This command will restart the ONOS and Mininet containers, in case those were
running from previous exercises, clearing any previous state.

The parameters to start the ONOS container are specified in
[docker-compose.yml](docker-compose.yml). The container is configured to pass the
environment variable `ONOS_APPS`, used to define the built-in apps to load
during startup.

In our case, this variable has value:

```
ONOS_APPS=gui2,drivers.bmv2,lldpprovider,hostprovider
```

requesting ONOS to pre-load the following built-in apps:

* `gui2`: ONOS web user interface (available at <http://localhost:8181/onos/ui>)
* `drivers.bmv2`: BMv2/Stratum drivers based on P4Runtime, gNMI, and gNOI
* `lldpprovider`: LLDP-based link discovery application (used in Exercise 4)
* `hostprovider`: Host discovery application (used in Exercise 4)


Once ONOS has started, you can check its log using the `make onos-log` command.

To **verify that all required apps have been activated**, run the following
command in a new terminal window to access the ONOS CLI. Use password `rocks`
when prompted:

```
$ make onos-cli
```

If you see the following error, then ONOS is still starting; wait a minute and try again.
```
ssh_exchange_identification: Connection closed by remote host
make: *** [onos-cli] Error 255
```

When you see the Password prompt, type the default password: `rocks`. 
Then type the following command in the ONOS CLI to show the list of running apps:

```
onos> apps -a -s
```

Make sure you see the following list of apps displayed:

```
*   5 org.onosproject.protocols.grpc        2.2.2    gRPC Protocol Subsystem
*   6 org.onosproject.protocols.gnmi        2.2.2    gNMI Protocol Subsystem
*  29 org.onosproject.drivers               2.2.2    Default Drivers
*  34 org.onosproject.generaldeviceprovider 2.2.2    General Device Provider
*  35 org.onosproject.protocols.p4runtime   2.2.2    P4Runtime Protocol Subsystem
*  36 org.onosproject.p4runtime             2.2.2    P4Runtime Provider
*  37 org.onosproject.drivers.p4runtime     2.2.2    P4Runtime Drivers
*  42 org.onosproject.protocols.gnoi        2.2.2    gNOI Protocol Subsystem
*  52 org.onosproject.hostprovider          2.2.2    Host Location Provider
*  53 org.onosproject.lldpprovider          2.2.2    LLDP Link Provider
*  66 org.onosproject.drivers.gnoi          2.2.2    gNOI Drivers
*  70 org.onosproject.drivers.gnmi          2.2.2    gNMI Drivers
*  71 org.onosproject.pipelines.basic       2.2.2    Basic Pipelines
*  72 org.onosproject.drivers.stratum       2.2.2    Stratum Drivers
* 161 org.onosproject.gui2                  2.2.2    ONOS GUI2
* 181 org.onosproject.drivers.bmv2          2.2.2    BMv2 Drivers
```

There are more apps here than those defined in `$ONOS_APPS`. That's
because each ONOS app can declare other apps as dependencies. When loading an
app, ONOS automatically resolves dependencies and loads all other required apps.
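This dependency resolution amounts to a transitive closure over the app dependency graph. A toy Python sketch of the idea (the graph fragment below is illustrative, not ONOS's real dependency metadata):

```python
def resolve(apps, deps):
    """Starting from the apps listed in ONOS_APPS, repeatedly pull in
    each app's declared dependencies until nothing new is added."""
    to_load, loaded = list(apps), set()
    while to_load:
        app = to_load.pop()
        if app not in loaded:
            loaded.add(app)
            to_load.extend(deps.get(app, []))
    return loaded

deps = {  # hypothetical fragment of the dependency graph
    "drivers.bmv2": ["drivers.p4runtime", "drivers.gnmi", "drivers.stratum"],
    "drivers.p4runtime": ["p4runtime"],
    "p4runtime": ["protocols.p4runtime"],
}
loaded = resolve(["gui2", "drivers.bmv2"], deps)
# loaded now also contains the transitive deps, e.g. protocols.p4runtime
```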

#### Disable link discovery service

Link discovery will be the focus of the next exercise. For now, this service
lacks support in the P4 program. We suggest you deactivate it for the rest of
this exercise, to avoid running into issues. Use the following ONOS
CLI command to deactivate the link discovery service.

```
onos> app deactivate lldpprovider
```

To exit the ONOS CLI, use `Ctrl-D`. This will stop the CLI process
but will not affect ONOS itself.

#### Restart ONOS in case of errors

If anything goes wrong and you need to kill ONOS, you can use command `make
restart` to restart both Mininet and ONOS.

## 2. Build app and register pipeconf

Inside the [app/](./app) directory you will find a starter implementation of an
ONOS app that includes a pipeconf. The pipeconf-related files are the following:

* [PipeconfLoader.java][PipeconfLoader.java]: A component that registers the
  pipeconf at app activation;
* [InterpreterImpl.java][InterpreterImpl.java]: An implementation of the
  `PipelineInterpreter` driver behavior;
* [PipelinerImpl.java][PipelinerImpl.java]: An implementation of the `Pipeliner`
  driver behavior;

To build the ONOS app (including the pipeconf), run the following
command in the second terminal window:

```
$ make app-build
```

This will produce a binary file `app/target/ngsdn-tutorial-1.0-SNAPSHOT.oar`
that we will use to install the application in the running ONOS instance.

Use the following command to load the app into ONOS and activate it:

```
$ make app-reload
```

After the app has been activated, you should see the following messages in the
ONOS log (`make onos-log`) signaling that the pipeconf has been registered and
the different app components have been started:

```
INFO  [PiPipeconfManager] New pipeconf registered: org.onosproject.ngsdn-tutorial (fingerprint=...)
INFO  [MainComponent] Started
```

Alternatively, you can show the list of registered pipeconfs using the ONOS CLI
(`make onos-cli`) command:

```
onos> pipeconfs
```

## 3. Push netcfg to ONOS

Now that ONOS and Mininet are running, it's time to let ONOS know how to reach
the four switches and control them. We do this by using a configuration file
located at [mininet/netcfg.json](mininet/netcfg.json), which contains
information such as:

* The gRPC address and port associated with each Stratum device;
* The ONOS driver to use for each device, `stratum-bmv2` in this case;
* The pipeconf to use for each device, `org.onosproject.ngsdn-tutorial` in this
  case, as defined in [PipeconfLoader.java][PipeconfLoader.java];
* Configuration specific to our custom app (`fabricDeviceConfig`)

This file also contains information related to the IPv6 configuration associated
with each switch interface. We will discuss this information in more detail in
the next exercises.
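As a rough sketch, a single device entry in `netcfg.json` looks something like the following (values abbreviated; the `fabricDeviceConfig` keys shown are illustrative, so check the actual file for the exact content):

```json
{
  "devices": {
    "device:leaf1": {
      "basic": {
        "managementAddress": "grpc://mininet:50001?device_id=1",
        "driver": "stratum-bmv2",
        "pipeconf": "org.onosproject.ngsdn-tutorial"
      },
      "fabricDeviceConfig": {
        "myStationMac": "...",
        "isSpine": false
      }
    }
  }
}
```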

On a terminal window, type:

```
$ make netcfg
```

This command will push the `netcfg.json` to ONOS, triggering discovery and
configuration of the 4 switches.

Check the ONOS log (`make onos-log`), you should see messages like:

```
INFO  [GrpcChannelControllerImpl] Creating new gRPC channel grpc:///mininet:50001?device_id=1...
...
INFO  [StreamClientImpl] Setting mastership on device:leaf1...
...
INFO  [PipelineConfigClientImpl] Setting pipeline config for device:leaf1 to org.onosproject.ngsdn-tutorial...
...
INFO  [GnmiDeviceStateSubscriber] Started gNMI subscription for 6 ports on device:leaf1
...
INFO  [DeviceManager] Device device:leaf1 port [leaf1-eth1](1) status changed (enabled=true)
INFO  [DeviceManager] Device device:leaf1 port [leaf1-eth2](2) status changed (enabled=true)
INFO  [DeviceManager] Device device:leaf1 port [leaf1-eth3](3) status changed (enabled=true)
INFO  [DeviceManager] Device device:leaf1 port [leaf1-eth4](4) status changed (enabled=true)
INFO  [DeviceManager] Device device:leaf1 port [leaf1-eth5](5) status changed (enabled=true)
INFO  [DeviceManager] Device device:leaf1 port [leaf1-eth6](6) status changed (enabled=true)
```

## 4. Use the ONOS CLI to verify the network configuration

Access the ONOS CLI using `make onos-cli`. Enter the following command to
verify the network config pushed before:

```
onos> netcfg
```

#### Devices

Verify that all 4 devices have been discovered and are connected:

```
onos> devices -s
id=device:leaf1, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.ngsdn-tutorial
id=device:leaf2, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.ngsdn-tutorial
id=device:spine1, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.ngsdn-tutorial
id=device:spine2, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.ngsdn-tutorial
```

Make sure you see `available=true` for all devices. That means ONOS has a gRPC
channel open to the device and the pipeline configuration has been pushed.


#### Ports

Check port information, obtained by ONOS by performing a gNMI Get RPC for the
OpenConfig Interfaces model:

```
onos> ports -s device:spine1
id=device:spine1, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.ngsdn-tutorial
  port=[spine1-eth1](1), state=enabled, type=copper, speed=10000 , ...
  port=[spine1-eth2](2), state=enabled, type=copper, speed=10000 , ...
```

Check port statistics, also obtained by querying the OpenConfig Interfaces model
via gNMI:

```
onos> portstats device:spine1
deviceId=device:spine1
   port=[spine1-eth1](1), pktRx=114, pktTx=114, bytesRx=14022, bytesTx=14136, pktRxDrp=0, pktTxDrp=0, Dur=173
   port=[spine1-eth2](2), pktRx=114, pktTx=114, bytesRx=14022, bytesTx=14136, pktRxDrp=0, pktTxDrp=0, Dur=173

```

#### Flow rules and groups

Check the ONOS flow rules. You should see three flow rules for each device. For
example, to show all flow rules installed so far on device `leaf1`:

```
onos> flows -s any device:leaf1
deviceId=device:leaf1, flowRuleCount=3
    ADDED, bytes=0, packets=0, table=IngressPipeImpl.acl_table, priority=40000, selector=[ETH_TYPE:arp], treatment=[immediate=[IngressPipeImpl.clone_to_cpu()]]
    ADDED, bytes=0, packets=0, table=IngressPipeImpl.acl_table, priority=40000, selector=[ETH_TYPE:ipv6, IP_PROTO:58, ICMPV6_TYPE:136], treatment=[immediate=[IngressPipeImpl.clone_to_cpu()]]
    ADDED, bytes=0, packets=0, table=IngressPipeImpl.acl_table, priority=40000, selector=[ETH_TYPE:ipv6, IP_PROTO:58, ICMPV6_TYPE:135], treatment=[immediate=[IngressPipeImpl.clone_to_cpu()]]

```

This list includes flow rules installed by ONOS built-in services such as
`hostprovider`. We'll talk more about these services in the next exercise.

To show all groups installed so far, you can use the `groups` command. For
example to show groups on `leaf1`:
```
onos> groups any device:leaf1
deviceId=device:leaf1, groupCount=1
   id=0x63, state=ADDED, type=CLONE, bytes=0, packets=0, appId=org.onosproject.core, referenceCount=0
       id=0x63, bucket=1, bytes=0, packets=0, weight=-1, actions=[OUTPUT:CONTROLLER]
```

"Group" is an ONOS northbound abstraction that is mapped internally to different
types of P4Runtime entities. In this case, you should see 1 group of type
`CLONE`, internally mapped to a P4Runtime `CloneSessionEntry`, here used to
clone packets to the controller via packet-in. We'll talk more about controller
packet-in/out in the next exercise.

## 5. Visualize the topology on the ONOS web UI

Using the ONF Cloud Tutorial Portal, access the ONOS UI.
If you are running the VM on your laptop, open up a browser (e.g. Firefox) to
<http://127.0.0.1:8181/onos/ui>.

When asked, use the username `onos` and password `rocks`.

You should see 4 devices in the topology view, corresponding to the 4 switches
of our 2x2 fabric. Press `L` to show device labels. Because link discovery is
not enabled, the ONOS UI will not show any links between the devices.

While here, feel free to interact with and discover the ONOS UI. For more
information on how to use the ONOS web UI please refer to this guide:

<https://wiki.onosproject.org/x/OYMg>

There is a way to show the pipeconf details for a given device; can you find it?

#### Pipeconf UI

In the ONOS topology view, click on one of the switches (e.g. `device:leaf1`)
to bring up the Device Details panel. In that panel, click the Pipeconf icon
(the last one) to open the Pipeconf view for that device.

![device-leaf1-details-panel](img/device-leaf1-details-panel.png)

Here you will find info on the pipeconf currently used by the specific device,
including details of the P4 tables.

![onos-gui-pipeconf-leaf1](img/onos-gui-pipeconf-leaf1.png)

Clicking the table row brings up the details panel, showing details of the match
fields, actions, action parameter bit widths, etc.


## Congratulations!

You have completed the third exercise! If you're feeling ambitious,
you can do the extra credit steps below.

### Extra Credit: Inspect stratum_bmv2 internal state

You can use the P4Runtime shell to dump all table entries currently
installed on the switch by ONOS. In a separate terminal window, start a
P4Runtime shell for leaf1:

```
$ util/p4rt-sh --grpc-addr localhost:50001 --election-id 0,1
```

On the shell prompt, type the following command to dump all entries from the ACL
table:

```
P4Runtime sh >>> for te in table_entry["IngressPipeImpl.acl_table"].read():
            ...:     print(te)
            ...:
```

You should see exactly three entries, each one corresponding to a flow rule
in ONOS. For example, the flow rule matching on NDP NS packets should look
like this in the P4Runtime shell:

```
table_id: 33557865 ("IngressPipeImpl.acl_table")
match {
  field_id: 4 ("hdr.ethernet.ether_type")
  ternary {
    value: "\\x86\\xdd"
    mask: "\\xff\\xff"
  }
}
match {
  field_id: 5 ("hdr.ipv6.next_hdr")
  ternary {
    value: "\\x3a"
    mask: "\\xff"
  }
}
match {
  field_id: 6 ("hdr.icmpv6.type")
  ternary {
    value: "\\x87"
    mask: "\\xff"
  }
}
action {
  action {
    action_id: 16782152 ("IngressPipeImpl.clone_to_cpu")
  }
}
priority: 40001
```
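The `value`/`mask` pairs above implement ternary matching: a field matches an entry when it equals the value on exactly the bits set in the mask. A quick illustration in Python:

```python
def ternary_match(field, value, mask):
    """A ternary entry matches when (field & mask) == (value & mask)."""
    return (field & mask) == (value & mask)

# The entry above matches ether_type 0x86dd (IPv6) with a full 16-bit mask:
ipv6 = ternary_match(0x86dd, value=0x86dd, mask=0xffff)   # matches
ipv4 = ternary_match(0x0800, value=0x86dd, mask=0xffff)   # does not match
```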

### Extra Credit: Show ONOS gRPC log

ONOS provides a debugging feature that dumps all gRPC messages
exchanged with a device to a file. To enable this feature, type the
following command in the ONOS CLI (`make onos-cli`):

```
onos> cfg set org.onosproject.grpc.ctl.GrpcChannelControllerImpl enableMessageLog true
```

Check the content of directory `tmp/onos` in the `ngsdn-tutorial` root. You
should see many files, some of which start with the name `grpc___mininet_`. You
should see four such files, one per device, each named after the gRPC
port used to establish the channel.

Check the content of one of these files; you should see a dump of the gRPC
messages in Protobuf text format, including messages like:

* P4Runtime `PacketIn` and `PacketOut`;
* P4Runtime Read RPCs used to periodically dump table entries and read counters;
* gNMI Get RPCs to read port counters.

Remember to disable the gRPC message logging in ONOS when you're done, to avoid
affecting performance:

```
onos> cfg set org.onosproject.grpc.ctl.GrpcChannelControllerImpl enableMessageLog false
```

[PipeconfLoader.java]: app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/PipeconfLoader.java
[InterpreterImpl.java]: app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/InterpreterImpl.java
[PipelinerImpl.java]: app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/PipelinerImpl.java


================================================
FILE: EXERCISE-4.md
================================================
# Exercise 4: Enabling ONOS Built-in Services

In this exercise, you will integrate ONOS built-in services for link and
host discovery with your P4 program. Such built-in services are based on the
ability of switches to send data plane packets to the controller (packet-in) and
vice versa (packet-out).

To make this work with your P4 program, you will need to apply simple changes to
the starter P4 code, validate the P4 changes using PTF-based data plane unit
tests, and finally, apply changes to the pipeconf Java implementation to enable
ONOS's built-in apps to use packet-in/out via P4Runtime.

The exercise has two parts:

1. Enable packet I/O and verify link discovery
2. Host discovery & L2 bridging


## Part 1: Enable packet I/O and verify link discovery

We start by reviewing how controller packet I/O works with P4Runtime.

### Background: Controller packet I/O with P4Runtime

The P4 program under [p4src/main.p4](p4src/main.p4) provides support for
carrying arbitrary metadata in P4Runtime `PacketIn` and `PacketOut` messages.
Two special headers are defined and annotated with the standard P4 annotation
`@controller_header`:

```p4
@controller_header("packet_in")
header cpu_in_header_t {
    port_num_t ingress_port;
    bit<7> _pad;
}

@controller_header("packet_out")
header cpu_out_header_t {
    port_num_t egress_port;
    bit<7> _pad;
}
```

These headers are used to carry the original switch ingress port of a packet-in,
and to specify the intended output port for a packet-out.

When the P4Runtime agent in Stratum receives a packet from the switch CPU port,
it expects to find the `cpu_in_header_t` header as the first one in the frame.
Indeed, it looks at the `controller_packet_metadata` part of the P4Info file to
determine the number of bits to strip at the beginning of the frame and to
populate the corresponding metadata field of the `PacketIn` message, including
the ingress port as in this case.

Similarly, when Stratum receives a P4Runtime `PacketOut` message, it uses the
values found in the `PacketOut`'s metadata fields to serialize and prepend a
`cpu_out_header_t` to the frame before feeding it to the pipeline parser.
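The byte-level effect of these headers can be sketched in Python. Assuming `port_num_t` is `bit<9>` (as with BMv2's 9-bit port numbers, an assumption here), the CPU header is exactly 2 bytes:

```python
import struct

PAD_BITS = 7  # assumes port_num_t is bit<9>: 9 + 7 = 16 bits, i.e. 2 bytes

def strip_cpu_in(frame):
    """What the agent effectively does with a frame received from the
    CPU port: pop the 2-byte cpu_in header and recover the ingress
    port from its top 9 bits (the bottom 7 bits are padding)."""
    (hdr,) = struct.unpack("!H", frame[:2])
    return hdr >> PAD_BITS, frame[2:]

def prepend_cpu_out(egress_port, payload):
    """The reverse direction: serialize a cpu_out header from a
    PacketOut's metadata and prepend it to the frame."""
    return struct.pack("!H", egress_port << PAD_BITS) + payload

# Round-trip: a packet-out to port 3 carries its port in the header
port, payload = strip_cpu_in(prepend_cpu_out(3, b"\xde\xad"))
```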


### 1. Modify P4 program

The P4 starter code already provides support for the following capabilities:

* Parse the `cpu_out` header (if the ingress port is the CPU one)
* Emit the `cpu_in` header as the first one in the deparser
* Provide an ACL table with ternary match fields and an action to send or clone
  packets to the CPU port (used to generate packet-ins)

Something is missing to provide complete packet-in/out support, and you have to
modify the P4 program to implement it:

1. Open `p4src/main.p4`;
2. Modify the code where requested (look for `TODO EXERCISE 4`);
3. Compile the modified P4 program using the `make p4-build` command. Make sure
   to address any compiler errors before continuing.

At this point, our P4 pipeline should be ready for testing.

### 2. Run PTF tests

Before starting ONOS, let's make sure the P4 changes work as expected by
running some PTF tests. But first, you need to apply a few simple changes to the
test case implementation.

Open file `ptf/tests/packetio.py` and modify it wherever requested (look for `TODO
EXERCISE 4`). This test file provides two test cases: one for packet-in and
one for packet-out. In both test cases, you will have to modify the implementation
to use the same names for the P4Runtime entities as specified in the P4Info file
obtained by compiling the P4 program (`p4src/build/p4info.txt`).

To run all the tests for this exercise:

    make p4-test TEST=packetio

This command will run all tests in the `packetio` group (i.e. the content of
`ptf/tests/packetio.py`). To run a specific test case you can use:

    make p4-test TEST=<PYTHON MODULE>.<TEST CLASS NAME>

For example:

    make p4-test TEST=packetio.PacketOutTest

#### Check for regressions

To make sure the new changes are not breaking other features, run the
tests for L2 bridging support.

    make p4-test TEST=bridging

If all tests succeed, congratulations! You can move to the next step.

#### How to debug failing tests?

When running PTF tests, multiple files are produced that you can use to spot bugs:

* `ptf/stratum_bmv2.log`: BMv2 log with trace level (showing tables matched and
  other info for each packet)
* `ptf/p4rt_write.log`: Log of all P4Runtime Write requests
* `ptf/ptf.pcap`: PCAP file with all packets sent and received during tests
  (the tutorial VM comes with Wireshark for easier visualization)
* `ptf/ptf.log`: PTF log of all packet operations (sent and received)

### 3. Modify ONOS pipeline interpreter

The `PipelineInterpreter` is the ONOS driver behavior used to map the ONOS
representation of packet-in/out to one that is consistent with your P4
pipeline (along with other similar mappings).

Specifically, to use services like link and host discovery, ONOS built-in apps
need to be able to set the output port of a packet-out and access the original
ingress port of a packet-in.

In the following, you will be asked to apply a few simple changes to the
`PipelineInterpreter` implementation:

1. Open file:
   `app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/InterpreterImpl.java`

2. Modify wherever requested (look for `TODO EXERCISE 4`), specifically:

    * Look for a method named `buildPacketOut`, modify the implementation to use the
      same name of the **egress port** metadata field for the `packet_out`
      header as specified in the P4Info file.

    * Look for method `mapInboundPacket`, modify the implementation to use the
      same name of the **ingress port** metadata field for the `packet_in`
      header as specified in the P4Info file.

3. Build ONOS app (including the pipeconf) with the command `make app-build`.

The P4 compiler outputs (`bmv2.json` and `p4info.txt`) are copied into the app
resource folder (`app/src/main/resources`) and will be included in the ONOS app
binary. The copy included in the app binary is the one that ONOS deploys to the
device after the connection is initiated.

### 4. Restart ONOS

**Note:** ONOS should be already running, and in theory, there should be no need
to restart it. However, while ONOS supports reloading the pipeconf with a
modified one (e.g., with updated `bmv2.json` and `p4info.txt`), the version of
ONOS used in this tutorial (2.2.0, the most recent at the time of writing) does
not support reloading the pipeconf behavior classes, in which case the old
classes will still be used. For this reason, to reload a modified version of
`InterpreterImpl.java`, you need to kill ONOS first.

In a terminal window, type:

```
$ make restart
```

This command will restart all containers, removing any state from previous
executions, including ONOS.

Wait approximately 20 seconds for ONOS to complete booting, or check
the ONOS log (`make onos-log`) until no more messages are shown.

### 5. Load updated app and register pipeconf

On a terminal window, type:

```
$ make app-reload
```

This command will upload the previously built app binary (located at
`app/target/ngsdn-tutorial-1.0-SNAPSHOT.oar`) to ONOS and activate it.

### 6. Push netcfg to ONOS to trigger device and link discovery

On a terminal window, type:

```
$ make netcfg
```

Use the ONOS CLI to verify that all devices have been discovered:

```
onos> devices -s
id=device:leaf1, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.ngsdn-tutorial
id=device:leaf2, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.ngsdn-tutorial
id=device:spine1, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.ngsdn-tutorial
id=device:spine2, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.ngsdn-tutorial
```

Verify that all links have been discovered. You should see 8 links in total,
each one representing a direction of the 4 bidirectional links of our Mininet
topology:

```
onos> links
src=device:leaf1/1, dst=device:spine1/1, type=DIRECT, state=ACTIVE, expected=false
src=device:leaf1/2, dst=device:spine2/1, type=DIRECT, state=ACTIVE, expected=false
src=device:leaf2/1, dst=device:spine1/2, type=DIRECT, state=ACTIVE, expected=false
src=device:leaf2/2, dst=device:spine2/2, type=DIRECT, state=ACTIVE, expected=false
src=device:spine1/1, dst=device:leaf1/1, type=DIRECT, state=ACTIVE, expected=false
src=device:spine1/2, dst=device:leaf2/1, type=DIRECT, state=ACTIVE, expected=false
src=device:spine2/1, dst=device:leaf1/2, type=DIRECT, state=ACTIVE, expected=false
src=device:spine2/2, dst=device:leaf2/2, type=DIRECT, state=ACTIVE, expected=false
```

**If you don't see a link**, check the ONOS log (`make onos-log`) for any
errors with packet-in/out handling. In case of errors, it's possible that you
have not modified `InterpreterImpl.java` correctly. In this case, go back to
step 3.

You should see 5 flow rules for each device. For example,
to show all flow rules installed so far on device `leaf1`:

```
onos> flows -s any device:leaf1
deviceId=device:leaf1, flowRuleCount=5
    ADDED, ..., table=IngressPipeImpl.acl_table, priority=40000, selector=[ETH_TYPE:lldp], treatment=[immediate=[IngressPipeImpl.clone_to_cpu()]]
    ADDED, ..., table=IngressPipeImpl.acl_table, priority=40000, selector=[ETH_TYPE:bddp], treatment=[immediate=[IngressPipeImpl.clone_to_cpu()]]
    ADDED, ..., table=IngressPipeImpl.acl_table, priority=40000, selector=[ETH_TYPE:arp], treatment=[immediate=[IngressPipeImpl.clone_to_cpu()]]
    ADDED, ..., table=IngressPipeImpl.acl_table, priority=40000, selector=[ETH_TYPE:ipv6, IP_PROTO:58, ICMPV6_TYPE:136], treatment=[immediate=[IngressPipeImpl.clone_to_cpu()]]
    ADDED, ..., table=IngressPipeImpl.acl_table, priority=40000, selector=[ETH_TYPE:ipv6, IP_PROTO:58, ICMPV6_TYPE:135], treatment=[immediate=[IngressPipeImpl.clone_to_cpu()]]
    ...
```

These flow rules are the result of the translation of flow objectives generated
by the `hostprovider` and `lldpprovider` built-in apps.

Flow objectives are translated by the pipeconf, which provides a `Pipeliner`
behavior implementation ([PipelinerImpl.java][PipelinerImpl.java]). These flow
rules specify a match key by using ONOS standard/known header fields, such as
`ETH_TYPE`, `ICMPV6_TYPE`, etc. These types are mapped to P4Info-specific match
fields by the pipeline interpreter
([InterpreterImpl.java][InterpreterImpl.java]; look for the method
`mapCriterionType`).

The `hostprovider` app provides host discovery capabilities by intercepting ARP
(`selector=[ETH_TYPE:arp]`) and NDP packets (`selector=[ETH_TYPE:ipv6,
IP_PROTO:58, ICMPV6_TYPE:...]`), which are cloned to the controller
(`treatment=[immediate=[IngressPipeImpl.clone_to_cpu()]]`). Similarly,
`lldpprovider` generates flow objectives to intercept LLDP and BDDP packets
(`selector=[ETH_TYPE:lldp]` and `selector=[ETH_TYPE:bddp]` ) periodically
emitted on all devices' ports as P4Runtime packet-outs, allowing automatic link
discovery.

All flow rules refer to P4 action `clone_to_cpu()`, which invokes a
v1model-specific primitive to set the clone session ID:

```p4
action clone_to_cpu() {
    clone3(CloneType.I2E, CPU_CLONE_SESSION_ID, ...);
}
```

To actually generate P4Runtime packet-in messages for matched packets, the
pipeconf's pipeliner generates a `CLONE` *group*, internally translated into a
P4Runtime `CloneSessionEntry`, that maps `CPU_CLONE_SESSION_ID` to a set of
ports, just the CPU one in this case.

To show all groups installed in ONOS, you can use the `groups` command. For
example, to show groups on `leaf1`:
```
onos> groups any device:leaf1
deviceId=device:leaf1, groupCount=1
   id=0x63, state=ADDED, type=CLONE, ..., appId=org.onosproject.core, referenceCount=0
       id=0x63, bucket=1, ..., weight=-1, actions=[OUTPUT:CONTROLLER]
```

### 7. Visualize links on the ONOS UI

Using the ONF Cloud Tutorial Portal, access the ONOS UI.
If you are running the VM on your laptop, open up a browser
(e.g. Firefox) to <http://127.0.0.1:8181/onos/ui>.

On the same page where the ONOS topology view is shown:
* Press `L` to show device labels;
* Press `A` multiple times until you see link stats, in either
  packets/second (pps) or bits/second (bps).

Link stats are derived by ONOS by periodically obtaining the port counters for
each device. ONOS internally uses gNMI to read port information, including
counters.

In this case, you should see approximately 1 packet/s, as that's the rate of
packet-outs generated by the `lldpprovider` app.

## Part 2: Host discovery & L2 bridging

Fixing packet I/O support in the pipeline interpreter not only gave us link
discovery, but also enabled the built-in `hostprovider` app to perform
*host* discovery. This service is required by our tutorial app to populate
the bridging tables of our P4 pipeline, which forward packets based on the
Ethernet destination address.

Indeed, the `hostprovider` app works by snooping ARP/NDP packets received by
the switch, deducing a host's location from the packet-in message metadata.
Other apps in ONOS, like our tutorial app, can then listen for host-related
events and access information about each host's addresses (IP, MAC) and
location.

In the following, you will be asked to enable the app's `L2BridgingComponent`
and to verify that host discovery works by pinging hosts in Mininet. But before
that, it's useful to review how the starter code implements L2 bridging.

### Background: Our implementation of L2 bridging

To make things easier, the starter code assumes that hosts of a given subnet are
all connected to the same leaf, and two interfaces of two different leaves
cannot be configured with the same IPv6 subnet. In other words, L2 bridging is
allowed only for hosts connected to the same leaf.

The Mininet script [topo-v6.py](mininet/topo-v6.py) used in this tutorial
defines 4 subnets:

* `2001:1:1::/64` with 3 hosts connected to `leaf1` (`h1a`, `h1b`, and `h1c`)
* `2001:1:2::/64` with 1 host connected to `leaf1` (`h2`)
* `2001:2:3::/64` with 1 host connected to `leaf2` (`h3`)
* `2001:2:4::/64` with 1 host connected to `leaf2` (`h4`)

The same IPv6 prefixes are defined in the [netcfg.json](mininet/netcfg.json)
file and are used to provide interface configuration to ONOS.

#### Data plane

The P4 code defines tables to forward packets based on the Ethernet address;
more precisely, two distinct tables, handling two different types of L2
entries:

1. Unicast entries, which will be filled in by the control plane when the
   location (port) of new hosts is learned.
2. Broadcast/multicast entries, used to replicate NDP Neighbor Solicitation
   (NS) messages to all host-facing ports.

For (2), unlike ARP messages in IPv4, which are broadcast to the Ethernet
destination address FF:FF:FF:FF:FF:FF, NDP messages are sent to special Ethernet
addresses specified by RFC 2464. These addresses are prefixed with 33:33, and
the last four octets are the last four octets of the IPv6 destination multicast
address. The most straightforward way of matching such IPv6
broadcast/multicast packets, without digging into the details of RFC 2464, is to
use a ternary match on `33:33:**:**:**:**`, where `*` means "don't care".
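For reference, the RFC 2464 mapping can be sketched in a few lines of Python
(an illustration only, not part of the exercise code); note that any address
produced this way matches the ternary pattern `33:33:**:**:**:**`:

```python
# Sketch of the RFC 2464 mapping from an IPv6 multicast address (e.g. the
# solicited-node address targeted by NDP NS messages) to its Ethernet
# destination address: 33:33 followed by the last four octets of the address.
import ipaddress

def ipv6_multicast_mac(ipv6_addr: str) -> str:
    octets = ipaddress.IPv6Address(ipv6_addr).packed[-4:]
    return "33:33:" + ":".join(f"{b:02x}" for b in octets)

# Solicited-node multicast address for host h1b (2001:1:1::b):
print(ipv6_multicast_mac("ff02::1:ff00:b"))  # 33:33:ff:00:00:0b
```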

For this reason, our solution defines two tables: one that uses exact matching,
`l2_exact_table` (easier to scale in switch ASIC memory), and one that uses
ternary matching, `l2_ternary_table` (which requires more expensive TCAM
memory, usually much smaller).

These tables are applied to packets in an order defined in the `apply` block
of the ingress pipeline (`IngressPipeImpl`):

```p4
if (!l2_exact_table.apply().hit) {
    l2_ternary_table.apply();
}
```

The ternary table has lower priority, and it's applied only if a matching entry
is not found in the exact one.

**Note**: we won't be using VLANs to segment our L2 domains. As such, packets
matching the `l2_ternary_table` will be broadcast to ALL host-facing ports.

#### Control plane (L2BridgingComponent)

We already provide an ONOS app component controlling the L2 bridging tables of
the P4 program: [L2BridgingComponent.java][L2BridgingComponent.java]

This app component defines two event listeners located at the bottom of the
`L2BridgingComponent` class, `InternalDeviceListener` for device events (e.g.
connection of a new switch) and `InternalHostListener` for host events (e.g. new
host discovered). These listeners in turn call methods like:

* `setUpDevice()`: responsible for creating multicast groups for all
  host-facing ports and inserting flow rules for the `l2_ternary_table` pointing
  to such groups.

* `learnHost()`: responsible for inserting unicast L2 entries based on the
  discovered host location.

To support reloading the app implementation, these methods are also called at
component activation for all devices and hosts known by ONOS at the time of
activation (look for methods `activate()` and `setUpAllDevices()`).

To keep things simple, our broadcast domain will be restricted to a single
device, i.e. we allow packet replication only for ports of the same leaf switch.
As such, we can exclude ports going to the spines from the multicast group. To
determine whether a port is expected to face hosts, we look at the interface
configuration in the [netcfg.json](mininet/netcfg.json) file (see the
`ports` section of the JSON file).
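The selection logic can be sketched in Python as follows. This is an
illustration only: the JSON below is a simplified, assumed stand-in for the
real netcfg.json structure, where only host-facing ports carry an interface
configuration.

```python
# Illustrative sketch: a port is considered host-facing if its netcfg entry
# has an interface configuration; leaf-spine ports (no interfaces) are
# excluded from the multicast group.
import json

netcfg = json.loads("""
{
  "ports": {
    "device:leaf1/3": {"interfaces": [{"name": "leaf1-3"}]},
    "device:leaf1/4": {"interfaces": [{"name": "leaf1-4"}]},
    "device:leaf1/1": {}
  }
}
""")

def host_facing_ports(device_id: str) -> list:
    ports = []
    for key, cfg in netcfg["ports"].items():
        dev, _, port = key.partition("/")
        if dev == device_id and cfg.get("interfaces"):
            ports.append(int(port))
    return sorted(ports)

print(host_facing_ports("device:leaf1"))  # [3, 4]
```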

### 1. Enable L2BridgingComponent and reload the app

Before starting, you need to enable the app's L2BridgingComponent, which is
currently disabled.

1. Open file:
   `app/src/main/java/org/onosproject/ngsdn/tutorial/L2BridgingComponent.java`

2. Look for the class definition at the top and enable the component by setting
   the `enabled` flag to `true`

   ```java
   @Component(
           immediate = true,
           enabled = true
   )
   public class L2BridgingComponent {
   ```

3. Build the ONOS app with `make app-build`

4. Re-load the app to apply the changes with `make app-reload`

After reloading the app, you should see the following messages in the ONOS log
(`make onos-log`):

```
INFO  [L2BridgingComponent] Started
...
INFO  [L2BridgingComponent] *** L2 BRIDGING - Starting initial set up for device:leaf1...
INFO  [L2BridgingComponent] Adding L2 multicast group with 4 ports on device:leaf1...
INFO  [L2BridgingComponent] Adding L2 multicast rules on device:leaf1...
...
INFO  [L2BridgingComponent] *** L2 BRIDGING - Starting initial set up for device:leaf2...
INFO  [L2BridgingComponent] Adding L2 multicast group with 2 ports on device:leaf2...
INFO  [L2BridgingComponent] Adding L2 multicast rules on device:leaf2...
...
```

### 2. Examine flow rules and groups

Check the ONOS flow rules; you should see 2 new flow rules for the
`l2_ternary_table` installed by `L2BridgingComponent`. For example, to show
all flow rules installed so far on device `leaf1`:

```
onos> flows -s any device:leaf1
deviceId=device:leaf1, flowRuleCount=...
    ...
    ADDED, ..., table=IngressPipeImpl.l2_ternary_table, priority=10, selector=[hdr.ethernet.dst_addr=0x333300000000&&&0xffff00000000], treatment=[immediate=[IngressPipeImpl.set_multicast_group(gid=0xff)]]
    ADDED, ..., table=IngressPipeImpl.l2_ternary_table, priority=10, selector=[hdr.ethernet.dst_addr=0xffffffffffff&&&0xffffffffffff], treatment=[immediate=[IngressPipeImpl.set_multicast_group(gid=0xff)]]
    ...
```

To also show the multicast groups, you can use the `groups` command. For
example, to show groups on `leaf1`:
```
onos> groups any device:leaf1
deviceId=device:leaf1, groupCount=2
   id=0x63, state=ADDED, type=CLONE, ..., appId=org.onosproject.core, referenceCount=0
       id=0x63, bucket=1, ..., weight=-1, actions=[OUTPUT:CONTROLLER]
   id=0xff, state=ADDED, type=ALL, ..., appId=org.onosproject.ngsdn-tutorial, referenceCount=0
       id=0xff, bucket=1, ..., weight=-1, actions=[OUTPUT:3]
       id=0xff, bucket=2, ..., weight=-1, actions=[OUTPUT:4]
       id=0xff, bucket=3, ..., weight=-1, actions=[OUTPUT:5]
       id=0xff, bucket=4, ..., weight=-1, actions=[OUTPUT:6]
```

The `ALL` group is a new one, created by our app (`appId=org.onosproject.ngsdn-tutorial`).
Groups of type `ALL` in ONOS map to P4Runtime `MulticastGroupEntry`, in this
case used to broadcast NDP NS packets to all host-facing ports.

### 3. Test L2 bridging on Mininet

To verify that L2 bridging works as intended, send a ping between hosts in the
same subnet:

```
mininet> h1a ping h1b
PING 2001:1:1::b(2001:1:1::b) 56 data bytes
64 bytes from 2001:1:1::b: icmp_seq=2 ttl=64 time=0.580 ms
64 bytes from 2001:1:1::b: icmp_seq=3 ttl=64 time=0.483 ms
64 bytes from 2001:1:1::b: icmp_seq=4 ttl=64 time=0.484 ms
...
```

In contrast to Exercise 1, here we have NOT set any NDP static entries.
Instead, NDP NS and NA packets are handled by the data plane thanks to the `ALL`
group and the `l2_ternary_table` flow rules described above. Moreover, given the
ACL flow rules that clone NDP packets to the controller, hosts can be discovered
by ONOS. Host discovery events are used by `L2BridgingComponent.java` to insert
entries in the P4 `l2_exact_table`. Check the ONOS log; you should see messages
related to the discovery of hosts `h1a` and `h1b`:

```
INFO  [L2BridgingComponent] HOST_ADDED event! host=00:00:00:00:00:1A/None, deviceId=device:leaf1, port=3
INFO  [L2BridgingComponent] Adding L2 unicast rule on device:leaf1 for host 00:00:00:00:00:1A/None (port 3)...
INFO  [L2BridgingComponent] HOST_ADDED event! host=00:00:00:00:00:1B/None, deviceId=device:leaf1, port=4
INFO  [L2BridgingComponent] Adding L2 unicast rule on device:leaf1 for host 00:00:00:00:00:1B/None (port 4).
```

### 4. Visualize hosts on the ONOS CLI and web UI

You should see exactly two hosts in the ONOS CLI (`make onos-cli`):

```
onos> hosts -s
id=00:00:00:00:00:1A/None, mac=00:00:00:00:00:1A, locations=[device:leaf1/3], vlan=None, ip(s)=[2001:1:1::a]
id=00:00:00:00:00:1B/None, mac=00:00:00:00:00:1B, locations=[device:leaf1/4], vlan=None, ip(s)=[2001:1:1::b]
```

Using the ONF Cloud Tutorial Portal, access the ONOS UI.
If you are running the VM on your laptop, open up a browser (e.g. Firefox) to
<http://127.0.0.1:8181/onos/ui>.

To toggle showing hosts on the topology view, press `H` on your keyboard.

## Congratulations!

You have completed the fourth exercise!

[PipeconfLoader.java]: app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/PipeconfLoader.java
[InterpreterImpl.java]: app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/InterpreterImpl.java
[PipelinerImpl.java]: app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/PipelinerImpl.java
[L2BridgingComponent.java]: app/src/main/java/org/onosproject/ngsdn/tutorial/L2BridgingComponent.java


================================================
FILE: EXERCISE-5.md
================================================
# Exercise 5: IPv6 Routing

In this exercise, you will be modifying the P4 program and ONOS app to add
support for IPv6-based (L3) routing between all hosts connected to the fabric,
with support for ECMP to balance traffic flows across multiple spines.

## Background

### Requirements

At this stage, we want our fabric to behave like a standard IP fabric, with
switches functioning as routers. As such, the following requirements should be
satisfied:

* Leaf interfaces should be assigned an IPv6 address (the gateway address)
  and a MAC address that we will call `myStationMac`;
* Leaf switches should be able to handle NDP Neighbor Solicitation (NS)
  messages -- sent by hosts to resolve the MAC address associated with the
  switch interface/gateway IPv6 addresses, by replying with NDP Neighbor
  Advertisements (NA) notifying their `myStationMac` address;
* Packets received with Ethernet destination `myStationMac` should be processed
  through the routing tables (traffic that is not dropped can then be
  processed through the bridging tables);
* When routing, the P4 program should look at the IPv6 destination address. If a
  matching entry is found, the packet should be forwarded to a given next hop
  and the packet's Ethernet addresses should be modified accordingly (source set
  to `myStationMac` and destination to the next hop one);
* When routing packets to a different leaf across the spines, leaf switches
  should be able to use ECMP to distribute traffic.

### Configuration

The [netcfg.json](mininet/netcfg.json) file includes a special configuration
block for each device named `fabricDeviceConfig`; this block defines 3 values:

 * `myStationMac`: the MAC address associated with each device, i.e., the
   router MAC address;
 * `mySid`: the SRv6 segment ID of the device, used in the next exercise;
 * `isSpine`: a boolean flag indicating whether the device should be considered
   a spine switch.

Moreover, the [netcfg.json](mininet/netcfg.json) file also includes a list of
interfaces with an IPv6 prefix assigned to them (look under the `ports` section
of the file). The same IPv6 addresses are used in the Mininet topology script
[topo-v6.py](mininet/topo-v6.py).

### Try pinging hosts in different subnets

Similarly to the previous exercise, let's start by using Mininet to verify that
pinging between hosts on different subnets does NOT work. It will be your task
to make it work.

On the Mininet CLI:

```
mininet> h2 ping h3
PING 2001:2:3::1(2001:2:3::1) 56 data bytes
From 2001:1:2::1 icmp_seq=1 Destination unreachable: Address unreachable
From 2001:1:2::1 icmp_seq=2 Destination unreachable: Address unreachable
From 2001:1:2::1 icmp_seq=3 Destination unreachable: Address unreachable
...
```

If you check the ONOS log, you will notice that `h2` has been discovered:

```
INFO  [L2BridgingComponent] HOST_ADDED event! host=00:00:00:00:00:20/None, deviceId=device:leaf1, port=6
INFO  [L2BridgingComponent] Adding L2 unicast rule on device:leaf1 for host 00:00:00:00:00:20/None (port 6)...
```

That's because `h2` sends NDP NS messages to resolve the MAC address of its
gateway (`2001:1:2::ff` as configured in [topo-v6.py](mininet/topo-v6.py)).

We can check the IPv6 neighbor table for `h2` to see that the resolution
has failed:

```
mininet> h2 ip -6 n
2001:1:2::ff dev h2-eth0  FAILED
```

## 1. Modify P4 program

The first step will be to add new tables to `main.p4`.

#### P4-based generation of NDP messages

We already provide ways to handle NDP NS and NA messages exchanged by hosts
connected to the same subnet (see `l2_ternary_table`). For hosts, the Linux
networking stack takes care of generating NDP NA replies; the switches in our
fabric, however, have no traditional networking stack to do the same.

There are multiple solutions to this problem:

* we can configure hosts with static NDP entries, removing the need for the
  switch to reply to NDP NS packets;
* we can intercept NDP NS via packet-in, generate a corresponding NDP NA
  reply in ONOS, and send it back via packet-out; or
* we can instruct the switch to generate NDP NA replies using P4
  (i.e., we write P4 code that takes care of replying to NDP requests without any
  intervention from the control plane).

**Note:** The rest of the exercise assumes you will decide to implement the last
option. You can decide to go with a different one, but you should keep in mind
that there will be less starter code for you to re-use.

The idea is simple: NDP NA packets have the same header structure as NDP NS
ones. They are both ICMPv6 packets with different header field values, such as
the ICMPv6 type and the Ethernet addresses. A switch that knows the MAC address
of the IPv6 target address found in an NDP NS request can transform that same
packet into an NDP NA reply by modifying some of its fields.
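The transformation can be sketched in Python, with headers modeled as a plain
dict. This illustrates the idea only; the exact set of fields rewritten by the
provided `ndp_ns_to_na` P4 action may differ.

```python
# Conceptual NS-to-NA transformation: reuse the request packet, rewrite the
# addresses and the ICMPv6 type, and answer with the switch's myStationMac.
def ndp_ns_to_na(pkt: dict, my_station_mac: str) -> dict:
    target_ip = pkt["ndp.target_addr"]
    return {
        "eth.src": my_station_mac,         # reply comes from the switch
        "eth.dst": pkt["eth.src"],         # back to the requester
        "ipv6.src": target_ip,             # the address being resolved
        "ipv6.dst": pkt["ipv6.src"],
        "icmpv6.type": 136,                # 135 = NS, 136 = NA
        "ndp.target_addr": target_ip,
        "ndp.target_mac": my_station_mac,  # the answer: myStationMac
    }

ns = {"eth.src": "00:00:00:00:00:20", "ipv6.src": "2001:1:2::1",
      "ipv6.dst": "ff02::1:ff00:ff", "icmpv6.type": 135,
      "ndp.target_addr": "2001:1:2::ff"}
na = ndp_ns_to_na(ns, "00:aa:00:00:00:01")
print(na["icmpv6.type"], na["ndp.target_mac"])  # 136 00:aa:00:00:00:01
```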

To implement P4-based generation of NDP NA messages, look in
[p4src/snippets.p4](p4src/snippets.p4), we already provide an action named
`ndp_ns_to_na` to transform an NDP NS packet into an NDP NA one. Your task is to
implement a table that uses such action.

This table should define a mapping between the interface IPv6 addresses provided
in [netcfg.json](mininet/netcfg.json) and the `myStationMac` associated to each
switch (also defined in netcfg.json). When an NDP NS packet is received, asking
to resolve one of such IPv6 addresses, the `ndp_ns_to_na` action should be
invoked with the given `myStationMac` as a parameter. The ONOS app will be
responsible for inserting entries in this table according to the content of
netcfg.json.

The ONOS app already provides a component
[NdpReplyComponent.java](app/src/main/java/org/onosproject/ngsdn/tutorial/NdpReplyComponent.java)
responsible for inserting entries in this table.

The component is currently disabled. You will need to enable and modify it in
the next steps, but for now, let's focus on the P4 program.

#### LPM IPv6 routing table

The main table for this exercise will be an L3 table that matches on the
destination IPv6 address. You should create a table that performs longest
prefix match (LPM) on the destination address and applies the required packet
transformations:

1. Replace the source Ethernet address with the destination one, expected to be
   `myStationMac` (see next section on "My Station" table).
2. Set the destination Ethernet to the next hop's address (passed as an action
   argument).
3. Decrement the IPv6 `hop_limit`.

This L3 table and action should provide a mapping between a given IPv6 prefix
and a next hop MAC address. In our solution (and hence in the PTF starter code
and ONOS app), we re-use the L2 table defined in Exercise 2 to provide a mapping
between the next hop MAC address and an output port. If you want to apply the
same solution, make sure to call the L3 table before the L2 one in the `apply`
block.

Moreover, we will want to drop the packet when the IPv6 hop limit reaches 0.
This can be accomplished by inserting logic in the `apply` block that inspects
the field after applying your L3 table.

At this point, your pipeline should properly match, transform, and forward IPv6
packets.
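The intended data-plane behavior can be modeled in a few lines of Python. This
is an illustration only: the prefixes and next hop MACs below are made-up
examples, and the real lookup is performed by the P4 LPM table you are
implementing.

```python
# Illustrative model of the L3 behavior: longest prefix match on the
# destination address, rewrite of the Ethernet source (to myStationMac) and
# destination (to the next hop), and decrement of the hop limit, dropping
# the packet when the hop limit expires.
import ipaddress

ROUTING_TABLE = {  # prefix -> next hop MAC (made-up example entries)
    "2001:2::/32":   "00:aa:00:00:00:02",
    "2001:2:3::/64": "00:bb:00:00:00:01",
}

def route(dst_ip: str, my_station_mac: str, hop_limit: int):
    nets = sorted((ipaddress.IPv6Network(p) for p in ROUTING_TABLE),
                  key=lambda n: n.prefixlen, reverse=True)
    addr = ipaddress.IPv6Address(dst_ip)
    for net in nets:
        if addr in net:  # first hit is the longest prefix
            if hop_limit - 1 == 0:
                return None  # drop when the hop limit expires
            return {"eth.src": my_station_mac,
                    "eth.dst": ROUTING_TABLE[str(net)],
                    "hop_limit": hop_limit - 1}
    return None  # table miss

# /64 wins over /32 for 2001:2:3::1; 2001:2:4::1 falls back to the /32.
print(route("2001:2:3::1", "00:aa:00:00:00:01", 64))
```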

**Note:** For simplicity, we are using a global routing table. If you would like
to segment your routing table in virtual ones (i.e. using a VRF ID), you can
tackle this as extra credit.

#### "My Station" table

You may realize that at this point the switch will perform IPv6 routing
indiscriminately, which is technically incorrect. The switch should only route
Ethernet frames that are destined for the router's Ethernet address
(`myStationMac`).

To address this issue, you will need to create a table that matches the
destination Ethernet address and marks the packet for routing if there is a
match. We call this the "My Station" table.

You are free to use a specific action or metadata to carry this information, or
for simplicity, you can use `NoAction` and check for a hit in this table in your
`apply` block. Remember to update your `apply` block after creating this table.

#### Adding support for ECMP with action selectors

The last modification you will make to the pipeline is to add an
`action_selector` that hashes traffic across the different possible next
hops. In our leaf-spine topology, there is one equal-cost path through each
spine for every leaf pair, and we want to be able to take advantage of that.

We have already defined the P4 `ecmp_selector` in
[p4src/snippets.p4](p4src/snippets.p4), but you will need to add the selector to
your L3 table. You will also need to add the selector fields as match keys.

For IPv6 traffic, you will need to include the source and destination IPv6
addresses as well as the IPv6 flow label as part of the ECMP hash, but you are
free to include other parts of the packet header if you would like. For example,
you could include the rest of the 5-tuple (i.e. L4 proto and ports); the L4
ports are parsed into `local_metadata` if you would like to use them. For more
details on the required fields for hashing IPv6 traffic, see RFC6438.
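Conceptually, the selector computes a hash over those fields modulo the group
size to pick one member. The sketch below uses `zlib.crc32` as a stand-in for
the v1model hash (an illustration only, not the actual selector
implementation):

```python
# ECMP next-hop selection sketch: hash the RFC 6438 fields (source address,
# destination address, flow label) and pick a member of the ECMP group.
import ipaddress
import zlib

def ecmp_select(src_ip: str, dst_ip: str, flow_label: int, num_hops: int) -> int:
    key = (ipaddress.IPv6Address(src_ip).packed +
           ipaddress.IPv6Address(dst_ip).packed +
           flow_label.to_bytes(3, "big"))  # flow label is a 20-bit field
    return zlib.crc32(key) % num_hops

# Packets of the same flow always hash to the same spine, preserving
# packet ordering within the flow.
sel = ecmp_select("2001:1:2::1", "2001:2:3::1", flow_label=5, num_hops=2)
print("selected spine index:", sel)
```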

You can compile the program using `make p4-build`.

Make sure to address any compiler errors before continuing.

At this point, our P4 pipeline should be ready for testing.

## 2. Run PTF tests

Tests for the IPv6 routing behavior are located in `ptf/tests/routing.py`. Open
that file up and modify wherever requested (look for `TODO EXERCISE 5`).

To run all the tests for this exercise:

    make p4-test TEST=routing

This command will run all tests in the `routing` group (i.e. the content of
`ptf/tests/routing.py`). To run a specific test case you can use:

    make p4-test TEST=<PYTHON MODULE>.<TEST CLASS NAME>

For example:

    make p4-test TEST=routing.NdpReplyGenTest


#### Check for regressions

To make sure the new changes are not breaking other features, run the
tests of the previous exercises as well.

    make p4-test TEST=packetio
    make p4-test TEST=bridging
    make p4-test TEST=routing

If all tests succeed, congratulations! You can move to the next step.

## 3. Modify ONOS app

The last part of the exercise is to update the starter code for the routing
components of our ONOS app, located here:

* `app/src/main/java/org/onosproject/ngsdn/tutorial/Ipv6RoutingComponent.java`
* `app/src/main/java/org/onosproject/ngsdn/tutorial/NdpReplyComponent.java`

Open those files and modify wherever requested (look for `TODO EXERCISE 5`).

#### Ipv6RoutingComponent.java

The starter code already provides an implementation of the event listeners and
of the routing policy (i.e., methods triggered as a consequence of topology
events), for example to compute ECMP groups based on the available
links between leaves and spines.

You are asked to modify the implementation of four methods.

* `setUpMyStationTable()`: to insert flow rules for the "My Station" table;

* `createNextHopGroup()`: responsible for creating the ONOS equivalent of a
  P4Runtime action profile group for the ECMP selector of the routing table;

* `createRoutingRule()`: to create a flow rule for the IPv6 routing table;

* `createL2NextHopRule()`: to create flow rules mapping next hop MAC addresses
  (used in the ECMP groups) to output ports. You can find a similar method in
  the `L2BridgingComponent` (see `learnHost()` method). This one is called to
  create L2 rules between switches, e.g. to forward packets between leaves and
  spines. There's no need to handle L2 rules for hosts since those are inserted
  by the `L2BridgingComponent`.

#### NdpReplyComponent.java

This component listens to device events. Each time a new device is added in
ONOS, it uses the content of `netcfg.json` to populate the NDP reply table.

You are asked to modify the implementation of method `buildNdpReplyFlowRule()`
to insert the names of the table and action used to generate NDP replies.

#### Enable the routing components

Once you are confident your solution to the previous step works, before
building and reloading the app, remember to enable the routing-related
components by setting the `enabled` flag to `true` at the top of the class
definition.

For IPv6 routing to work, you should enable the following components:

* `Ipv6RoutingComponent.java`
* `NdpReplyComponent.java`

#### Build and reload the app

Use the following command to build and reload your app while ONOS is running:

```
$ make app-build app-reload
```

When building the app, the modified P4 compiler outputs (`bmv2.json` and
`p4info.txt`) will be packaged together along with the Java classes. After
reloading the app, you should see messages in the ONOS log signaling that a new
pipeline configuration has been set and the `Ipv6RoutingComponent` and
`NdpReplyComponent` have been activated. Also check the log for potentially
harmful messages (`make onos-log`). If needed, take a look at section **Appendix
A: Understanding ONOS error logs** at the end of this exercise.

## 4. Test IPv6 routing on Mininet

#### Verify ping

Type the following commands in the Mininet CLI, in order:

```
mininet> h2 ping h3
mininet> h3 ping h2
PING 2001:1:2::1(2001:1:2::1) 56 data bytes
64 bytes from 2001:1:2::1: icmp_seq=2 ttl=61 time=2.39 ms
64 bytes from 2001:1:2::1: icmp_seq=3 ttl=61 time=2.29 ms
64 bytes from 2001:1:2::1: icmp_seq=4 ttl=61 time=2.71 ms
...
```

Pinging between `h3` and `h2` should work now. If ping does NOT work,
check section **Appendix B: Troubleshooting** at the end of this exercise.

The ONOS log should show messages such as:

```
INFO  [Ipv6RoutingComponent] HOST_ADDED event! host=00:00:00:00:00:20/None, deviceId=device:leaf1, port=6
INFO  [Ipv6RoutingComponent] Adding routes on device:leaf1 for host 00:00:00:00:00:20/None [[2001:1:2::1]]
...
INFO  [Ipv6RoutingComponent] HOST_ADDED event! host=00:00:00:00:00:30/None, deviceId=device:leaf2, port=3
INFO  [Ipv6RoutingComponent] Adding routes on device:leaf2 for host 00:00:00:00:00:30/None [[2001:2:3::1]]
...
```

If you don't see messages about the discovery of `h2` (`00:00:00:00:00:20`),
it's because ONOS already discovered that host when you tried to ping at the
beginning of the exercise.

**Note:** we need to start the ping first from `h2` and then from `h3` to let
ONOS discover the location of both hosts before ping packets can be forwarded.
That's because the current implementation requires hosts to generate NDP NS
packets to be discovered by ONOS. To avoid having to manually generate NDP NS
messages, a possible solution could be:

* Configure IPv6 hosts in Mininet to periodically and automatically generate a
  different type of NDP messages, named Router Solicitation (RS).

* Insert a flow rule in the ACL table to clone NDP RS packets to the CPU. This
  would require matching on a different ICMPv6 type than those used for NDP NA
  and NS.

* Modify the `hostprovider` built-in app implementation to learn host location
  from NDP RS messages (it currently uses only NDP NA and NS).

#### Verify P4-based NDP NA generation

To verify that the P4-based generation of NDP NA replies by the switch is
working, you can check the neighbor table of `h2` or `h3`. It should show
something similar to this:

```
mininet> h3 ip -6 n
2001:2:3::ff dev h3-eth0 lladdr 00:aa:00:00:00:02 router REACHABLE
```

where `2001:2:3::ff` is the IPv6 gateway address defined in `netcfg.json` and
`topo-v6.py`, and `00:aa:00:00:00:02` is the `myStationMac` defined for `leaf2`
in `netcfg.json`.

#### Visualize ECMP using the ONOS web UI

To verify that ECMP is working, let's start multiple parallel traffic flows from
`h2` to `h3` using iperf. In the Mininet command prompt, type:

```
mininet> h2 iperf -c h3 -u -V -P5 -b1M -t600 -i1
```

This command starts an iperf client on `h2`, sending UDP packets (`-u`)
over IPv6 (`-V`) to `h3` (`-c`). In doing so, we generate 5 distinct flows
(`-P5`), each one capped at 1 Mbit/s (`-b1M`), running for 10 minutes (`-t600`)
and reporting stats every second (`-i1`).

Since we are generating UDP traffic, there's no need to start an iperf server
on `h3`.

Using the ONF Cloud Tutorial Portal, access the ONOS UI.
If you are using the tutorial VM, open up a browser (e.g. Firefox) to
<http://127.0.0.1:8181/onos/ui>. When asked, use the username `onos` and
password `rocks`.

On the same page showing the ONOS topology:

* Press `H` on your keyboard to show hosts;
* Press `L` to show device labels;
* Press `A` multiple times until you see port/link stats, in either
  packets/second (pps) or bits/second.

If you completed the P4 and app implementation correctly, and ECMP is working,
you should see traffic being forwarded to both spines as in the screenshot
below:

<img src="img/routing-ecmp.png" alt="ECMP Test" width="344"/>

## Congratulations!

You have completed the fifth exercise! Now your fabric is capable of forwarding
IPv6 traffic between any host.

## Appendix A: Understanding ONOS error logs

There are two main types of errors that you might see when
reloading the app:

1. Write errors, such as removing a nonexistent entity or inserting one that
   already exists:

    ```
    WARN  [WriteResponseImpl] Unable to DELETE PRE entry on device...: NOT_FOUND Multicast group does not exist ...
    WARN  [WriteResponseImpl] Unable to INSERT table entry on device...: ALREADY_EXIST Match entry exists, use MODIFY if you wish to change action ...
    ```

    These are usually transient errors and **you should not worry about them**.
    They describe a temporary inconsistency of the ONOS-internal device state,
    which should be soon recovered by a periodic reconciliation mechanism.
    The ONOS core periodically polls the device state to make sure its
    internal representation is accurate, while writing any pending modifications
    to the device, solving these errors.

    Otherwise, if you see them appearing periodically (every 3-4 seconds), it
    means the reconciliation process is not working and something else is wrong.
    Try re-loading the app (`make app-reload`); if that doesn't resolve the
    warnings, check with the instructors.

2. Translation errors, signifying that ONOS is not able to translate the flow
   rules (or groups) generated by apps, to a representation that is compatible
   with your P4Info. For example:

    ```
    WARN  [P4RuntimeFlowRuleProgrammable] Unable to translate flow rule for pipeconf 'org.onosproject.ngsdn-tutorial':...
    ```

    **Carefully read the error message and make changes to the app as needed.**
    Chances are that you are using a table, match field, or action name that
    does not exist in your P4Info. Check your P4Info file, modify, and reload the
    app (`make app-build app-reload`).

## Appendix B: Troubleshooting

If ping is not working, here are few steps you can take to troubleshoot your
network:

1. **Check that all flow rules and groups have been written successfully to the
   device.** Using ONOS CLI commands such as `flows -s any device:leaf1` and
   `groups any device:leaf1`, verify that all flows and groups are in state
   `ADDED`. If you see other states such as `PENDING_ADD`, check the ONOS log
   for possible errors with writing those entries to the device. You can also
   use the ONOS web UI to check flow and group states.

2. **Check `netcfg` in the ONOS CLI.** If the network config is missing, run
   `make netcfg` again to configure devices and hosts.

3. **Use table counters to verify that tables are being hit as expected.**
   If you don't already have direct counters defined for your table(s), modify
   the P4 program to add some, then build and reload the app (`make app-build
   app-reload`). ONOS should automatically detect them and poll counters every
   3-4 seconds (the same period as the reconciliation process). To check their
   values, you can either use the ONOS CLI (`flows -s any device:leaf1`) or the
   web UI.

4. **Double check the PTF tests** and make sure you are creating similar flow
   rules in `Ipv6RoutingComponent.java` and `NdpReplyComponent.java`. Do
   you notice any difference?

5. **Look at the BMv2 logs for possible errors.** Check file
   `/tmp/leaf1/stratum_bmv2.log`.

6. If you got here and things are still not working, **reach out to one of the
   instructors for assistance.**


================================================
FILE: EXERCISE-6.md
================================================
# Exercise 6: Segment Routing v6 (SRv6)

In this exercise, you will be implementing a simplified version of segment
routing, a source routing method that steers traffic through a specified set of
nodes.

## Background

This exercise is based on an IETF draft specification called SRv6, which uses
IPv6 packets to frame traffic that follows an SRv6 policy. SRv6 packets use the
IPv6 routing header, and they can either encapsulate IPv6 (or IPv4) packets
entirely or they can just inject an IPv6 routing header into an existing IPv6
packet.

The IPv6 routing header looks as follows:
```
     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    | Next Header   |  Hdr Ext Len  | Routing Type  | Segments Left |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |  Last Entry   |     Flags     |              Tag              |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                                                               |
    |            Segment List[0] (128 bits IPv6 address)            |
    |                                                               |
    |                                                               |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                                                               |
    |                                                               |
                                  ...
    |                                                               |
    |                                                               |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                                                               |
    |            Segment List[n] (128 bits IPv6 address)            |
    |                                                               |
    |                                                               |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
```

The **Next Header** field is the type of either the next IPv6 header or the
payload.

For SRv6, the **Routing Type** is 4.

**Segments Left** is the index of the current segment in the segment list. In
properly formed SRv6 packets, the IPv6 destination address equals
`Segment List[Segments Left]`. In our exercise, the original IPv6 destination
address should be `Segment List[0]`, so that traffic is eventually routed to
the correct destination.

**Last Entry** is the index of the last entry in the segment list.

Note: This means it should be one less than the length of the list. (In the
example above, the list has `n+1` entries, so the last entry should be `n`.)

Finally, the **Segment List** is a list of the IPv6 addresses to be traversed
by a specific SRv6 policy, stored in reverse order: the last entry in the list
is the first segment of the SRv6 policy. The list is not typically mutated; the
entire header is inserted or removed as a whole.
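To make the field relationships concrete, here is a stdlib-only Python sketch (illustrative only, not part of the exercise code) that serializes such a header for a 3-segment policy; `58` is the IPv6 Next Header value for ICMPv6:

```python
import ipaddress
import struct

def build_srh(next_header, policy):
    """Serialize an SRv6 routing header (Routing Type 4) for a policy.

    `policy` lists the SIDs in traversal order; on the wire they are
    stored in reverse, so Segment List[0] is the final destination.
    """
    wire_list = list(reversed(policy))
    last_entry = len(wire_list) - 1       # one less than the list length
    segments_left = last_entry            # points at the first segment
    # Hdr Ext Len is in 8-octet units, excluding the first 8 octets:
    # each 128-bit SID contributes 2 units.
    hdr_ext_len = 2 * len(wire_list)
    srh = struct.pack("!BBBBBBH",
                      next_header, hdr_ext_len,
                      4,                  # Routing Type = 4 (SRv6)
                      segments_left, last_entry,
                      0, 0)               # Flags, Tag
    for sid in wire_list:
        srh += ipaddress.IPv6Address(sid).packed
    return srh

srh = build_srh(58, ["3:201:2::", "3:102:2::", "2001:2:4::1"])
print(len(srh))  # 8 + 3 * 16 = 56 bytes
```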

To keep things simple and because we are already using IPv6, your solution will
just be adding the routing header to the existing IPv6 packet. (We won't be
embedding entire packets inside of new IPv6 packets with an SRv6 policy,
although the spec allows it and there are valid use cases for doing so.)

As you may have already noticed, SRv6 uses IPv6 addresses to identify segments
in a policy. While the format of the addresses is the same as IPv6, the address
space is typically different from the one used for the switches' internal IPv6
addresses. The structure of the address also differs. A typical IPv6 unicast
address is broken into a network prefix and a host identifier, with a subnet
mask delineating the boundary between the two. A typical SRv6 segment
identifier (SID) is broken into a locator, a function identifier, and,
optionally, function arguments. The locator must be routable, which enables
both SRv6-enabled and SRv6-unaware nodes to participate in forwarding.

HINT: Due to optional arguments, longest prefix match on the 128-bit SID is
preferred to exact match.
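To illustrate the hint (a stdlib-only sketch with a made-up SID layout, locator and function in the top 96 bits), an LPM entry on the locator+function prefix matches SIDs regardless of their argument bits, while an exact match would not:

```python
import ipaddress

# Hypothetical "My SID" LPM entry: locator + function in the top 96 bits.
my_sid_lpm = ipaddress.ip_network("3:201:2::/96")

# Both SIDs carry the same locator+function but different arguments.
print(ipaddress.ip_address("3:201:2::") in my_sid_lpm)      # True
print(ipaddress.ip_address("3:201:2::abcd") in my_sid_lpm)  # True
```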

There are three types of nodes of interest in a segment routed network:

1. Source Node - the node (either host or switch) that injects the SRv6 policy.
2. Transit Node - a node that forwards an SRv6 packet, but is not the
   destination for the traffic
3. Endpoint Node - a participating waypoint in an SRv6 policy that will modify
   the SRv6 header and perform a specified function

In our implementation, we simplify these types into two roles:

* Endpoint Node - for traffic to the switch's SID, update the SRv6 header
  (decrement segments left), set the IPv6 destination address to the next
  segment, and forward the packets ("End" behavior). For simplicity, we will
  always remove the SRv6 header on the penultimate segment in the policy (called
  Penultimate Segment Pop or PSP in the spec).

* Transit Node - by default, forward traffic normally if it is not destined for
  the switch's IP address or its SID ("T" behavior). Allow the control plane to
  add rules to inject SRv6 policy for traffic destined to specific IPv6
  addresses ("T.Insert" behavior).
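The endpoint-node role can be summarized with a small stdlib-only Python sketch (illustrative only, with hypothetical names; the real logic lives in your P4 tables):

```python
def srv6_end(segment_list, segments_left):
    """'End' behavior with PSP: return (new_dst, new_segments_left, pop).

    `segment_list` is in on-the-wire order, i.e. Segment List[0] is the
    final destination.
    """
    assert segments_left > 0, "already at the last segment"
    segments_left -= 1                     # decrement Segments Left
    new_dst = segment_list[segments_left]  # next segment becomes the IPv6 dst
    # Penultimate Segment Pop: once Segments Left reaches 0, the SRv6
    # header is no longer needed and is removed before forwarding.
    pop = segments_left == 0
    return new_dst, segments_left, pop

# Policy through spine1 (3:201:2::) and leaf2 (3:102:2::) to h4.
sl = ["2001:2:4::1", "3:102:2::", "3:201:2::"]
print(srv6_end(sl, 2))  # ('3:102:2::', 1, False)   at spine1
print(srv6_end(sl, 1))  # ('2001:2:4::1', 0, True)  at leaf2 (PSP)
```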

For more details, you can read the draft specification here:
https://tools.ietf.org/id/draft-filsfils-spring-srv6-network-programming-06.html


## 1. Adding tables for SRv6

We have already defined the SRv6 header as well as included the logic for
parsing the header in `main.p4`.

The next step is to add two tables, one for each of the two roles specified
above. In addition to the tables, you will also need to write the action for the
endpoint node table (otherwise called the "My SID" table); in `snippets.p4`, we
have provided the `t_insert` actions for policies of length 2 and 3, which
should be sufficient to get you started.

Once you've finished that, you will need to apply the tables in the `apply`
block at the bottom of your `IngressPipeImpl` section. You will want to apply
the tables after checking that the L2 destination address matches the switch's,
and before the L3 table is applied (because you'll want to use the same routing
entries to forward traffic after the SRv6 policy is applied). You can also apply
the PSP behavior as part of your `apply` logic, because we will always apply it
when we are the penultimate SID.

## 2. Testing the pipeline with Packet Test Framework (PTF)

In this exercise, you will be modifying tests in [srv6.py](ptf/tests/srv6.py) to
verify the SRv6 behavior of the pipeline.

There are four tests in `srv6.py`:

* Srv6InsertTest: Tests SRv6 insert behavior, where the switch receives an IPv6
  packet and inserts the SRv6 header.

* Srv6TransitTest: Tests SRv6 transit behavior, where the switch ignores the
  SRv6 header and routes the packet normally, without applying any SRv6-related
  modifications.

* Srv6EndTest: Tests SRv6 end behavior (without pop), where the switch forwards
  the packet to the next SID found in the SRv6 header.

* Srv6EndPspTest: Tests SRv6 End with Penultimate Segment Pop (PSP) behavior,
  where the switch SID is the penultimate in the SID list and the switch removes
  the SRv6 header before routing the packet to its final destination (last SID
  in the list).

You should be able to find `TODO EXERCISE 6` in [srv6.py](ptf/tests/srv6.py)
with some hints.

To run all the tests for this exercise:

    make p4-test TEST=srv6

This command will run all tests in the `srv6` group (i.e. the content of
`ptf/tests/srv6.py`). To run a specific test case you can use:

    make p4-test TEST=<PYTHON MODULE>.<TEST CLASS NAME>

For example:

    make p4-test TEST=srv6.Srv6InsertTest

**Check for regressions**

At this point, our P4 program should be complete. We can check to make sure that
we haven't broken anything from the previous exercises by running all tests from
the `ptf/tests` directory:

```
$ make p4-test
```

Now we have shown that we can install basic rules and pass SRv6 traffic using BMv2.

## 3. Building the ONOS App

For the ONOS application, you will need to update `Srv6Component.java` in the
following ways:

* Complete the `setUpMySidTable` method, which inserts an entry into the My
  SID table that matches the specified device's SID and performs the `end`
  action. This method is called whenever a new device is connected.

* Complete the `insertSrv6InsertRule` method, which creates a `t_insert` rule
  for the provided SRv6 policy. This method is called by the `srv6-insert` CLI
  command.

* Complete the `clearSrv6InsertRules` method, which is called by the
  `srv6-clear` CLI command.

Once you are finished, you should rebuild and reload your app. This will also
rebuild and republish any changes to your P4 code and the ONOS pipeconf. Don't
forget to enable your Srv6Component at the top of the file.

As with previous exercises, you can use the following command to build and
reload the app:

```
$ make app-build app-reload
```

## 4. Inserting SRv6 policies

The next step is to show that traffic can be steered using an SRv6 policy.

You should start a ping between `h2` and `h4`:
```
mininet> h2 ping h4
```

Using the ONOS UI, you can observe which paths are being used for the ping
packets.

- Press `a` until you see "Port stats (packets/second)"
- Press `l` to show device labels

<img src="img/srv6-ping-1.png" alt="Ping Test" width="344"/>

Once you determine which of the spines your packets are being hashed to (and it
could be both, with requests and replies taking different paths), you should
insert a set of SRv6 policies that sends the ping packets via the other spine
(or the spine of your choice).

To add new SRv6 policies, you should use the `srv6-insert` command.

```
onos> srv6-insert <device ID> <segment list>
```

Note: In our topology, the SID for spine1 is `3:201:2::` and the SID for spine2
is `3:202:2::`.

For example, to add a policy that forwards traffic between h2 and h4 through
spine1 and leaf2, you can use the following commands:

* Insert the SRv6 policy from h2 to h4 on leaf1 (through spine1 and leaf2)
```
onos> srv6-insert device:leaf1 3:201:2:: 3:102:2:: 2001:2:4::1
Installing path on device device:leaf1: 3:201:2::, 3:102:2::, 2001:2:4::1
```
* Insert the SRv6 policy from h4 to h2 on leaf2 (through spine1 and leaf1)
```
onos> srv6-insert device:leaf2 3:201:2:: 3:101:2:: 2001:1:2::1
Installing path on device device:leaf2: 3:201:2::, 3:101:2::, 2001:1:2::1
```

These commands will match on traffic to the last segment on the specified device
(e.g. match `2001:2:4::1` on `leaf1`). As extra credit, you can update the
command to allow for more specific match criteria.

You can confirm that your rule has been added using a variant of the following:

(HINT: Make sure to update the tableId to match the one in your P4 program.)

```
onos> flows any device:leaf1 | grep tableId=IngressPipeImpl.srv6_transit
    id=c000006d73f05e, state=ADDED, bytes=0, packets=0, duration=871, liveType=UNKNOWN, priority=10,
    tableId=IngressPipeImpl.srv6_transit,
    appId=org.p4.srv6-tutorial,
    selector=[hdr.ipv6.dst_addr=0x20010001000400000000000000000001/128],
    treatment=DefaultTrafficTreatment{immediate=[
        IngressPipeImpl.srv6_t_insert_3(
            s3=0x20010001000400000000000000000001,
            s1=0x30201000200000000000000000000,
            s2=0x30102000200000000000000000000)],
    deferred=[], transition=None, meter=[], cleared=false, StatTrigger=null, metadata=null}
```
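The selector and action parameters in the flow dump are hex-encoded 128-bit values; Python's standard `ipaddress` module can decode them back into IPv6 notation, e.g. for the two SID parameters shown above:

```python
import ipaddress

# s1 and s2 from the srv6_t_insert_3 treatment above
print(ipaddress.IPv6Address(0x00030201000200000000000000000000))  # 3:201:2::
print(ipaddress.IPv6Address(0x00030102000200000000000000000000))  # 3:102:2::
```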

You should now return to the ONOS UI to confirm that traffic is flowing through
the specified spine.

<img src="img/srv6-ping-2.png" alt="SRv6 Ping Test" width="335"/>

## 5. Debugging and Clean Up

If you need to remove your SRv6 policies, you can use the `srv6-clear` command
to clear all SRv6 policies from a specific device. For example, to remove flows
from `leaf1`, use this command:

```
onos> srv6-clear device:leaf1
```

To verify that the device inserts the correct SRv6 header, you can use
**Wireshark** to capture packets from each device port.

For example, to capture packets from port 1 of spine1, capture on interface
`spine1-eth1`.

NOTE: `spine1-eth1` is connected to leaf1, and `spine1-eth2` is connected to
leaf2; `spine2` follows the same pattern.

## Congratulations!

You have completed the sixth exercise! Now your fabric is capable of steering
traffic using SRv6.


================================================
FILE: EXERCISE-7.md
================================================
# Exercise 7: Trellis Basics

The goal of this exercise is to learn how to set up and configure an emulated
Trellis environment with a simple 2x2 topology.

## Background

Trellis is a set of built-in ONOS applications that provide the control plane
for an IP fabric based on MPLS segment routing. It is similar in purpose to the
app we have been developing in the previous exercises, but instead of using
IPv6-based routing or SRv6, Trellis uses MPLS labels to forward packets between
leaf switches and across the spines.

Trellis apps are deployed in Tier-1 carrier networks, and for this reason they
are deemed production-grade. These apps provide an extensive feature set such
as:

* Carrier-oriented networking capabilities: from basic L2 and L3 forwarding, to
  multicast, QinQ, pseudo-wires, integration with external control planes such
  as BGP, OSPF, DHCP relay, etc.
* Fault-tolerance and high-availability: as Trellis is designed to take full
  advantage of the ONOS distributed core, e.g., to withstand controller
  failures. It also provides dataplane-level resiliency against link failures
  and switch failures (with paired leaves and dual-homed hosts). See figure
  below.
* Single-pane-of-glass monitoring and troubleshooting, with dedicated tools such
  as T3.


![trellis-features](img/trellis-features.png)

Trellis is made of several apps running on top of ONOS; the main one is
`segmentrouting`, and its implementation can be found in the ONOS source tree:
[onos/apps/segmentrouting] (open on GitHub)

`segmentrouting` abstracts the leaf and spine switches to make the fabric appear
as "one big IP router", such that operators can program it using APIs similar
to those of a traditional router (e.g., to configure VLANs, subnets, routes,
etc.).
The app listens to operator-provided configuration, as well as topology events,
to program the switches with the necessary forwarding rules. Because of this
"one big IP router" abstraction, operators can independently scale the topology
to add more capacity or ports by adding more leaves and spines.

`segmentrouting` and other Trellis apps use the ONOS FlowObjective API, which
allows them to be pipeline-agnostic. As a matter of fact, Trellis was initially
designed to work with fixed-function switches exposing an OpenFlow agent (such
as Broadcom Tomahawk, Trident2, and Qumran via the OF-DPA pipeline). However, in
recent years, support for P4-programmable switches was enabled without changing
the Trellis apps, by instead providing a special ONOS pipeconf that brings in a
P4 program, complemented by a set of drivers that, among other things, are
responsible for translating flow objectives to the P4 program-specific tables.

This P4 program is named `fabric.p4`. Its implementation, along with the
corresponding pipeconf drivers, can be found in the ONOS source tree:
[onos/pipelines/fabric] (open on GitHub)

This pipeconf currently works on the `stratum_bmv2` software switch as well as
on Intel Barefoot Tofino-based switches (the [fabric-tofino] project provides
instructions and scripts to create a Tofino-enabled pipeconf).

We will come back to the details of `fabric.p4` in the next lab, for now, let's
keep in mind that instead of building our own custom pipeconf, we will use one
provided with ONOS.

The goal of the exercise is to learn the Trellis basics by writing a
configuration in the form of a netcfg JSON file to set up bridging and
IPv4 routing of traffic between hosts.

For a gentle overview of Trellis, please check the online book
"Software-Defined Networks: A Systems Approach":
<https://sdn.systemsapproach.org/trellis.html>

Finally, the official Trellis documentation is also available online:
<https://docs.trellisfabric.org/>

### Topology

We will use a topology similar to previous exercises, however, instead of IPv6
we will use IPv4 hosts. The topology file is located under
[mininet/topo-v4.py][topo-v4.py]. While the Trellis apps support IPv6, the P4
program does not, yet. Development of IPv6 support in `fabric.p4` is work in
progress.

![topo-v4](img/topo-v4.png)

Exactly like in previous exercises, the Mininet script [topo-v4.py] used here
defines 4 IPv4 subnets:

* `172.16.1.0/24` with 3 hosts connected to `leaf1` (`h1a`, `h1b`, and `h1c`)
* `172.16.2.0/24` with 1 host connected to `leaf1` (`h2`)
* `172.16.3.0/24` with 1 host connected to `leaf2` (`h3`)
* `172.16.4.0/24` with 1 host connected to `leaf2` (`h4`)

### VLAN tagged vs. untagged ports

As is usually done in a traditional router, different subnets are associated
with different VLANs. For this reason, Trellis allows configuring ports with
different VLANs, either untagged or tagged.

An **untagged** port expects packets to be received and sent **without** a VLAN
tag. Internally, the switch processes all packets received on such a port as
belonging to a given pre-configured VLAN ID; when transmitting packets, the
VLAN tag is removed.

For **tagged** ports, packets are expected to be received **with** a VLAN tag
whose ID belongs to a pre-configured set of known ones. Packets received
untagged or with an unknown VLAN ID are dropped.
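The two port types can be summarized with a minimal Python sketch (a hypothetical helper for illustration only; the real behavior is implemented by the fabric pipeline):

```python
def ingress_vlan(pkt_vlan, untagged_vlan=None, tagged_vlans=()):
    """Return the internal VLAN ID assigned to the packet, or None to drop.

    `pkt_vlan` is the VLAN ID carried by the packet, or None if untagged.
    """
    if untagged_vlan is not None:
        # Untagged port: accept only untagged packets; internally the
        # switch treats them as belonging to the pre-configured VLAN.
        return untagged_vlan if pkt_vlan is None else None
    # Tagged port: the packet's VLAN ID must be one of the known ones.
    return pkt_vlan if pkt_vlan in tagged_vlans else None

print(ingress_vlan(None, untagged_vlan=100))    # 100  (leaf1 port 3, h1a)
print(ingress_vlan(200, tagged_vlans=(200,)))   # 200  (leaf1 port 6, h2)
print(ingress_vlan(None, tagged_vlans=(200,)))  # None (untagged on tagged port: drop)
```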

In our topology, we want the following VLAN configuration:

* `leaf1` port `3` and `4`: VLAN `100` untagged (host `h1a` and `h1b`)
* `leaf1` port `5`: VLAN `100` tagged (`h1c`)
* `leaf1` port `6`: VLAN `200` tagged (`h2`)
* `leaf2` port `3`: VLAN `300` tagged (`h3`)
* `leaf2` port `4`: VLAN `400` untagged (`h4`)

In the Mininet script [topo-v4.py], we use different host Python classes to
create untagged and tagged hosts.

For example, for `h1a` attached to untagged port `leaf1-3`, we use the
`IPv4Host` class:
```
# Excerpt from mininet/topo-v4.py
h1a = self.addHost('h1a', cls=IPv4Host, mac="00:00:00:00:00:1A",
                   ip='172.16.1.1/24', gw='172.16.1.254')
```

For `h2`, which instead is attached to tagged port `leaf1-6`, we use the
`TaggedIPv4Host` class:

```
h2 = self.addHost('h2', cls=TaggedIPv4Host, mac="00:00:00:00:00:20",
                  ip='172.16.2.1/24', gw='172.16.2.254', vlan=200)
```

In the same Python file, you can find the implementation for both classes. For
`TaggedIPv4Host` we use standard Linux commands to create a VLAN tagged
interface.
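For reference, creating a VLAN-tagged interface with standard Linux commands looks roughly like this (a sketch; the names `h2-eth0`/VLAN `200` follow the `h2` example above, and the exact commands used are in [topo-v4.py]):

```shell
# Create sub-interface h2-eth0.200 carrying VLAN ID 200 on top of h2-eth0
ip link add link h2-eth0 name h2-eth0.200 type vlan id 200
ip link set h2-eth0.200 up
ip addr add 172.16.2.1/24 dev h2-eth0.200
```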

### Configuration via netcfg

The JSON file in [mininet/netcfg-sr.json][netcfg-sr.json] includes the necessary
configuration for ONOS and the Trellis apps to program switches to forward
traffic between hosts of the topology described above.

**NOTE**: this is a file similar to, but different from, the one used in
previous exercises. Notice the `-sr` suffix, where `sr` stands for
`segmentrouting`, as this file contains the necessary configuration for that
app to work.

Take a look at both the old file ([netcfg.json]) and the new one
([netcfg-sr.json]): can you spot the differences? To help, try answering the
following questions:

* What is the pipeconf ID used for all 4 switches? Which
  pipeconf ID did we use before? Why is it different?
* In the new file, each device has a `"segmentrouting"` config block (JSON
  subtree). Do you see any similarities with the previous file and the
  `"fabricDeviceConfig"` block?
* How come all `"fabricDeviceConfig"` blocks are gone in the new file?
* Look at the `"interfaces"` config blocks, what has changed w.r.t. the old
  file?
* In the new file, why do the untagged interfaces have only one VLAN ID value,
  while the tagged ones can take many (JSON array)?
* Is the `interfaces` block provided for all host-facing ports? Which ports are
  missing and which hosts are attached to those ports?


## 1. Restart ONOS and Mininet with the IPv4 topology

Since we want to use a new topology with IPv4 hosts, we need to reset the
current environment:

    $ make reset

This command will stop ONOS and Mininet and remove any state associated with
them.

Re-start ONOS and Mininet, this time with the new IPv4 topology:

**IMPORTANT:** please notice the `-v4` suffix!

    $ make start-v4

Wait about 1 minute before proceeding with the next steps. This will
give ONOS time to start all of its subsystems.

## 2. Load fabric pipeconf and segmentrouting

Unlike in previous exercises, instead of building and installing our own
pipeconf and app, here we use built-in ones.

Open up the ONOS CLI (`make onos-cli`) and activate the following apps:

    onos> app activate fabric 
    onos> app activate segmentrouting

**NOTE:** The full ID for both apps is `org.onosproject.pipelines.fabric` and
`org.onosproject.segmentrouting`, respectively. For convenience, when activating
built-in apps using the ONOS CLI, you can specify just the last piece of the
full ID (after the last dot.)

**NOTE 2:** The `fabric` app has the minimal purpose of registering
pipeconfs in the system. Unlike `segmentrouting`, even though we
call them both apps, `fabric` does not interact with the network in
any way.

#### Verify apps

Verify that all apps have been activated successfully:

    onos> apps -s -a
    *  18 org.onosproject.drivers               2.2.2    Default Drivers
    *  37 org.onosproject.protocols.grpc        2.2.2    gRPC Protocol Subsystem
    *  38 org.onosproject.protocols.gnmi        2.2.2    gNMI Protocol Subsystem
    *  39 org.onosproject.generaldeviceprovider 2.2.2    General Device Provider
    *  40 org.onosproject.protocols.gnoi        2.2.2    gNOI Protocol Subsystem
    *  41 org.onosproject.drivers.gnoi          2.2.2    gNOI Drivers
    *  42 org.onosproject.route-service         2.2.2    Route Service Server
    *  43 org.onosproject.mcast                 2.2.2    Multicast traffic control
    *  44 org.onosproject.portloadbalancer      2.2.2    Port Load Balance Service
    *  45 org.onosproject.segmentrouting        2.2.2    Segment Routing
    *  53 org.onosproject.hostprovider          2.2.2    Host Location Provider
    *  54 org.onosproject.lldpprovider          2.2.2    LLDP Link Provider
    *  64 org.onosproject.protocols.p4runtime   2.2.2    P4Runtime Protocol Subsystem
    *  65 org.onosproject.p4runtime             2.2.2    P4Runtime Provider
    *  99 org.onosproject.drivers.gnmi          2.2.2    gNMI Drivers
    * 100 org.onosproject.drivers.p4runtime     2.2.2    P4Runtime Drivers
    * 101 org.onosproject.pipelines.basic       2.2.2    Basic Pipelines
    * 102 org.onosproject.drivers.stratum       2.2.2    Stratum Drivers
    * 103 org.onosproject.drivers.bmv2          2.2.2    BMv2 Drivers
    * 111 org.onosproject.pipelines.fabric      2.2.2    Fabric Pipeline
    * 164 org.onosproject.gui2                  2.2.2    ONOS GUI2

Verify that you have the above 20 apps active in your ONOS instance. If you are
wondering why so many apps, remember from EXERCISE 3 that the ONOS container in
[docker-compose.yml] is configured to pass the environment variable `ONOS_APPS`
which defines built-in apps to load during startup.

In our case this variable has value:

    ONOS_APPS=gui2,drivers.bmv2,lldpprovider,hostprovider

Moreover, `segmentrouting` requires other apps as dependencies, such as
`route-service`, `mcast`, and `portloadbalancer`. The combination of all these
apps (and others that we do not need in this exercise) is what makes Trellis.

#### Verify pipeconfs

Verify that the `fabric` pipeconfs have been registered successfully:

    onos> pipeconfs
    id=org.onosproject.pipelines.fabric-full, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner, IntProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]
    id=org.onosproject.pipelines.int, behaviors=[PiPipelineInterpreter, Pipeliner, PortStatisticsDiscovery, IntProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON]
    id=org.onosproject.pipelines.fabric-spgw-int, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner, IntProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]
    id=org.onosproject.pipelines.fabric, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]
    id=org.onosproject.pipelines.fabric-bng, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner, BngProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]
    id=org.onosproject.pipelines.fabric-spgw, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]
    id=org.onosproject.pipelines.fabric-int, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner, IntProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]
    id=org.onosproject.pipelines.basic, behaviors=[PiPipelineInterpreter, Pipeliner, PortStatisticsDiscovery], extensions=[P4_INFO_TEXT, BMV2_JSON]

Wondering why so many pipeconfs? `fabric.p4` comes in different "profiles", used
to enable different dataplane features in the pipeline. We'll come back to the
differences between the profiles in the next exercise; for now, let's make sure
the basic one, `org.onosproject.pipelines.fabric`, is loaded. This is the one we
need to program all four switches, as specified in [netcfg-sr.json].

#### Increase reconciliation frequency (optional, but recommended)

Run the following commands in the ONOS CLI:

    onos> cfg set org.onosproject.net.flow.impl.FlowRuleManager fallbackFlowPollFrequency 4
    onos> cfg set org.onosproject.net.group.impl.GroupManager fallbackGroupPollFrequency 3

These commands tell the ONOS core to modify the period (in seconds) between
reconciliation checks. Reconciliation is used to verify that switches have the
expected forwarding state and to correct any inconsistencies, i.e., writing any
pending flow rules and groups. When running ONOS and the emulated switches on
the same machine (especially one with low CPU/memory), it might happen that
P4Runtime write requests time out because the system is overloaded.

The default reconciliation period is 30 seconds; the above commands set it to 4
seconds for flow rules and 3 seconds for groups.

## 3. Push netcfg-sr.json to ONOS

On a terminal window, type:

**IMPORTANT**: please notice the `-sr` suffix!

    $ make netcfg-sr

As we learned in EXERCISE 3, this command will push [netcfg-sr.json] to ONOS,
triggering discovery and configuration of the 4 switches. Moreover, since the
file specifies a `segmentrouting` config block for each switch, this will
instruct the `segmentrouting` app in ONOS to take control of all of them, i.e.,
the app will start generating flow objectives that will be translated into flow
rules for the `fabric.p4` pipeline.

Check the ONOS log (`make onos-log`). You should see numerous messages from
components such as `TopologyHandler`, `LinkHandler`, `SegmentRoutingManager`,
etc., signaling that switches have been discovered and programmed.

You should also see warning messages such as:

```
[ForwardingObjectiveTranslator] Cannot translate DefaultForwardingObjective: unsupported forwarding function type 'PSEUDO_WIRE'...
```

This is normal, as not all Trellis features are supported in `fabric.p4`. One
such feature is [pseudo-wire] (L2 tunneling across the L3 fabric). You can
ignore that warning.

This error is generated by the Pipeliner driver behavior of the `fabric`
pipeconf, which recognizes that the given flow objective cannot be translated.

#### Check configuration in ONOS

Verify that all interfaces have been configured successfully:

    onos> interfaces
    leaf1-3: port=device:leaf1/3 ips=[172.16.1.254/24] mac=00:AA:00:00:00:01 vlanUntagged=100
    leaf1-4: port=device:leaf1/4 ips=[172.16.1.254/24] mac=00:AA:00:00:00:01 vlanUntagged=100
    leaf1-5: port=device:leaf1/5 ips=[172.16.1.254/24] mac=00:AA:00:00:00:01 vlanTagged=[100]
    leaf1-6: port=device:leaf1/6 ips=[172.16.2.254/24] mac=00:AA:00:00:00:01 vlanTagged=[200]

You should see four interfaces in total (for all host-facing ports of `leaf1`),
configured as in the [netcfg-sr.json] file. You will have to add the
configuration for `leaf2`'s ports later in this exercise.

A similar output can be obtained by using a `segmentrouting`-specific command:

    onos> sr-device-subnets
    device:leaf1
        172.16.1.0/24
        172.16.2.0/24
    device:spine1
    device:spine2
    device:leaf2

This command lists all device-subnet mappings known to `segmentrouting`. For a
list of other available sr-specific commands, type `sr-` and press
<kbd>tab</kbd> (for command auto-completion).

Another interesting command is `sr-ecmp-spg`, which lists all computed ECMP
shortest-path graphs:

    onos> sr-ecmp-spg 
    Root Device: device:leaf1 ECMP Paths: 
      Paths from device:leaf1 to device:spine1
           ==  : device:leaf1/1 -> device:spine1/1
      Paths from device:leaf1 to device:spine2
           ==  : device:leaf1/2 -> device:spine2/1
      Paths from device:leaf1 to device:leaf2
           ==  : device:leaf1/2 -> device:spine2/1 : device:spine2/2 -> device:leaf2/2
           ==  : device:leaf1/1 -> device:spine1/1 : device:spine1/2 -> device:leaf2/1

    Root Device: device:spine1 ECMP Paths: 
      Paths from device:spine1 to device:leaf1
           ==  : device:spine1/1 -> device:leaf1/1
      Paths from device:spine1 to device:spine2
           ==  : device:spine1/2 -> device:leaf2/1 : device:leaf2/2 -> device:spine2/2
           ==  : device:spine1/1 -> device:leaf1/1 : device:leaf1/2 -> device:spine2/1
      Paths from device:spine1 to device:leaf2
           ==  : device:spine1/2 -> device:leaf2/1

    Root Device: device:spine2 ECMP Paths: 
      Paths from device:spine2 to device:leaf1
           ==  : device:spine2/1 -> device:leaf1/2
      Paths from device:spine2 to device:spine1
           ==  : device:spine2/1 -> device:leaf1/2 : device:leaf1/1 -> device:spine1/1
           ==  : device:spine2/2 -> device:leaf2/2 : device:leaf2/1 -> device:spine1/2
      Paths from device:spine2 to device:leaf2
           ==  : device:spine2/2 -> device:leaf2/2

    Root Device: device:leaf2 ECMP Paths: 
      Paths from device:leaf2 to device:leaf1
           ==  : device:leaf2/1 -> device:spine1/2 : device:spine1/1 -> device:leaf1/1
           ==  : device:leaf2/2 -> device:spine2/2 : device:spine2/1 -> device:leaf1/2
      Paths from device:leaf2 to device:spine1
           ==  : device:leaf2/1 -> device:spine1/2
      Paths from device:leaf2 to device:spine2
           ==  : device:leaf2/2 -> device:spine2/2

These graphs are used by `segmentrouting` to program flow rules and groups
(action selectors) in `fabric.p4`, needed to load balance traffic across
multiple spines/paths.
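Conceptually, each ECMP group picks one of the equal-cost paths with a deterministic hash over flow fields, so all packets of a flow follow the same path (a stdlib-only sketch; the actual hash used in `fabric.p4` differs):

```python
import zlib

def select_path(paths, flow_tuple):
    """Deterministically pick one ECMP member from the flow's 5-tuple."""
    key = ",".join(map(str, flow_tuple)).encode()
    return paths[zlib.crc32(key) % len(paths)]

paths = ["leaf1 -> spine1 -> leaf2", "leaf1 -> spine2 -> leaf2"]
flow = ("172.16.1.1", "172.16.4.1", 6, 12345, 80)
# The same flow always hashes to the same path:
print(select_path(paths, flow) == select_path(paths, flow))  # True
```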

Verify that no hosts have been discovered so far:

    onos> hosts

You should get an empty output.

Verify that all initial flows and groups have been programmed successfully:

    onos> flows -c added
    deviceId=device:leaf1, flowRuleCount=52
    deviceId=device:spine1, flowRuleCount=28
    deviceId=device:spine2, flowRuleCount=28
    deviceId=device:leaf2, flowRuleCount=36
    onos> groups -c added
    deviceId=device:leaf1, groupCount=5
    deviceId=device:leaf2, groupCount=3
    deviceId=device:spine1, groupCount=5
    deviceId=device:spine2, groupCount=5

You should see the same `flowRuleCount` and `groupCount` in your
output. To dump the whole set of flow rules and groups, remove the
`-c` argument from the command. `added` is used to filter only
entities that are known to have been written to the switch (i.e., the
P4Runtime Write RPC was successful.)

## 4. Connectivity test

#### Same-subnet hosts (bridging)

Open up the Mininet CLI (`make mn-cli`). Start by pinging `h1c` from `h1a`; the
two hosts are on the same subnet (VLAN `100`, `172.16.1.0/24`):

    mininet> h1a ping h1c
    PING 172.16.1.3 (172.16.1.3) 56(84) bytes of data.
    64 bytes from 172.16.1.3: icmp_seq=1 ttl=63 time=13.7 ms
    64 bytes from 172.16.1.3: icmp_seq=2 ttl=63 time=3.63 ms
    64 bytes from 172.16.1.3: icmp_seq=3 ttl=63 time=3.52 ms
    ...

Ping should work. Check the ONOS log; you should see output similar to that of
exercises 4-5.

    [HostHandler] Host 00:00:00:00:00:1A/None is added at [device:leaf1/3]
    [HostHandler] Populating bridging entry for host 00:00:00:00:00:1A/None at device:leaf1:3
    [HostHandler] Populating routing rule for 172.16.1.1 at device:leaf1/3
    [HostHandler] Host 00:00:00:00:00:1C/100 is added at [device:leaf1/5]
    [HostHandler] Populating bridging entry for host 00:00:00:00:00:1C/100 at device:leaf1:5
    [HostHandler] Populating routing rule for 172.16.1.3 at device:leaf1/5

That's because `segmentrouting` operates in a way similar to the custom app of
the previous exercises. Hosts are discovered by the built-in `hostprovider`
service, which intercepts packets such as ARP or NDP. For hosts in the same
subnet, ARP resolution is supported by multicast (ALL) groups that replicate
ARP requests to all ports belonging to the same VLAN. `segmentrouting` listens
for host events; when a new host is discovered, it installs the necessary
bridging and routing rules.
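To make the discovery mechanism concrete, here is roughly the kind of broadcast
ARP request that `hostprovider` intercepts, built by hand with the Python
standard library (an illustrative sketch, not code from ONOS or this repo; the
MAC and IP values mimic `h1a` resolving its gateway):

```python
import struct
from socket import inet_aton

def arp_request(src_mac: bytes, src_ip: str, target_ip: str) -> bytes:
    """Build a broadcast ARP request frame (Ethernet + ARP, 42 bytes)."""
    eth = struct.pack("!6s6sH", b"\xff" * 6, src_mac, 0x0806)  # dst, src, EtherType=ARP
    arp = struct.pack(
        "!HHBBH6s4s6s4s",
        1,              # htype: Ethernet
        0x0800,         # ptype: IPv4
        6, 4,           # hlen, plen
        1,              # oper: request
        src_mac, inet_aton(src_ip),
        b"\x00" * 6,    # target MAC unknown
        inet_aton(target_ip),
    )
    return eth + arp

frame = arp_request(bytes.fromhex("00000000001a"), "172.16.1.1", "172.16.1.254")
assert len(frame) == 42  # 14-byte Ethernet header + 28-byte ARP payload
```

A packet-in carrying a frame like this is enough for ONOS to learn the host's
MAC, IP, and attachment point.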

#### Hosts on different subnets (routing)

On the Mininet prompt, start a ping to `h2` from any host in the subnet with
VLAN `100`, for example, from `h1a`:

    mininet> h1a ping h2

The **ping should NOT work**, because the location of `h2` is not yet known to
ONOS. Usually, Trellis is used in networks where hosts use DHCP for addressing.
In such a setup, we could use the DHCP relay app in ONOS to learn host
locations and addresses when hosts request an IP address via DHCP. However, in
this simpler topology, we need to manually trigger `h2` to generate some
packets so it can be discovered by ONOS.

When using `segmentrouting`, the easiest way to have ONOS discover a host is
to ping the gateway address that we configured in [netcfg-sr.json], or that you
can derive from the ONOS CLI (`onos> interfaces`):

    mininet> h2 ping 172.16.2.254
    PING 172.16.2.254 (172.16.2.254) 56(84) bytes of data.
    64 bytes from 172.16.2.254: icmp_seq=1 ttl=64 time=28.9 ms
    64 bytes from 172.16.2.254: icmp_seq=2 ttl=64 time=12.6 ms
    64 bytes from 172.16.2.254: icmp_seq=3 ttl=64 time=15.2 ms
    ...

Ping is working, and ONOS should have discovered `h2` by now. But who is
replying to our pings?

If you check the ARP table for `h2`:

    mininet> h2 arp
    Address                  HWtype  HWaddress           Flags Mask            Iface
    172.16.2.254             ether   00:aa:00:00:00:01   C                     h2-eth0.200

You should recognize MAC address `00:aa:00:00:00:01` as the one associated with
`leaf1` in [netcfg-sr.json]. That's right: the `segmentrouting` app in ONOS is
replying to our ICMP echo request (ping) packets! Ping requests are intercepted
by means of P4Runtime packet-in, while replies are generated and injected via
P4Runtime packet-out. This is equivalent to pinging the interface of a
traditional router.
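The reply-generation step can be illustrated in a few lines of Python: given an
echo request, the responder flips the ICMP type from 8 to 0 and recomputes the
checksum before injecting the packet back out (a sketch of the mechanism only,
not the actual ONOS code):

```python
import struct

def inet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum over a byte string."""
    if len(data) % 2:
        data += b"\x00"
    s = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while s >> 16:                       # fold carries back into 16 bits
        s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF

def echo_reply(icmp_request: bytes) -> bytes:
    """Turn an ICMP echo request into the matching echo reply."""
    # type=0 (echo reply), code=0, checksum zeroed for computation,
    # id/seq/payload copied unchanged from the request.
    body = b"\x00\x00\x00\x00" + icmp_request[4:]
    return struct.pack("!BBH", 0, 0, inet_checksum(body)) + icmp_request[4:]

# Minimal echo request: type=8, code=0, checksum, id=1, seq=1
req_body = struct.pack("!BBHHH", 8, 0, 0, 1, 1)
req = struct.pack("!BBHHH", 8, 0, inet_checksum(req_body), 1, 1)
rep = echo_reply(req)
assert inet_checksum(rep) == 0  # a packet with a valid checksum sums to zero
```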

At this point, ping from `h1a` to `h2` should work:

    mininet> h1a ping h2
    PING 172.16.2.1 (172.16.2.1) 56(84) bytes of data.
    64 bytes from 172.16.2.1: icmp_seq=1 ttl=63 time=6.23 ms
    64 bytes from 172.16.2.1: icmp_seq=2 ttl=63 time=3.81 ms
    64 bytes from 172.16.2.1: icmp_seq=3 ttl=63 time=3.84 ms
    ...

Moreover, you can check that all hosts pinged so far have been discovered by
ONOS:

    onos> hosts -s
    id=00:00:00:00:00:1A/None, mac=00:00:00:00:00:1A, locations=[device:leaf1/3], vlan=None, ip(s)=[172.16.1.1]
    id=00:00:00:00:00:1C/100, mac=00:00:00:00:00:1C, locations=[device:leaf1/5], vlan=100, ip(s)=[172.16.1.3]
    id=00:00:00:00:00:20/200, mac=00:00:00:00:00:20, locations=[device:leaf1/6], vlan=200, ip(s)=[172.16.2.1]

## 5. Dump packets to see VLAN tags (optional)

TODO: detailed instructions for this step are still a work-in-progress.

If you feel adventurous, start a ping between any two hosts and use the tool
[util/mn-pcap](util/mn-pcap) to dump packets to a PCAP file. After dumping
packets, the tool tries to open the PCAP file in Wireshark (if installed).

For example, to dump packets out of the `h2` main interface:

    $ util/mn-pcap h2

## 6. Add missing interface config

Start a ping from `h3` to any other host, for example `h2`:

    mininet> h3 ping h2
    ...

It should NOT work. Can you explain why?

Let's check the ONOS log (`make onos-log`). You should see the following
messages:

    ...
    INFO  [HostHandler] Host 00:00:00:00:00:30/None is added at [device:leaf2/3]
    INFO  [HostHandler] Populating bridging entry for host 00:00:00:00:00:30/None at device:leaf2:3
    WARN  [RoutingRulePopulator] Untagged host 00:00:00:00:00:30/None is not allowed on device:leaf2/3 without untagged or nativevlan config
    WARN  [RoutingRulePopulator] Fail to build fwd obj for host 00:00:00:00:00:30/None. Abort.
    INFO  [HostHandler] 172.16.3.1 is not included in the subnet config of device:leaf2/3. Ignored.


`h3` is discovered because ONOS intercepted the ARP request to resolve `h3`'s
gateway IP address (`172.16.3.254`), but the rest of the programming fails
because we have not provided a valid Trellis configuration for the switch port
facing `h3` (`leaf2/3`). Indeed, if you look at [netcfg-sr.json] you will notice
that the `"ports"` section includes a config block for all `leaf1` host-facing
ports, but it does NOT provide any for `leaf2`.

As a matter of fact, if you try to start a ping from `h4` (attached to `leaf2`),
that should NOT work either.

It is your task to modify the [netcfg-sr.json] to add the necessary config
blocks to enable connectivity for `h3` and `h4`:

1. Open up [netcfg-sr.json].
2. Look for the `"ports"` section.
3. Provide a config for ports `device:leaf2/3` and `device:leaf2/4`. When doing
   so, look at the config for other ports as a reference, but make sure to
   provide the right IPv4 gateway address, subnet, and VLAN configuration
   described at the beginning of this document.
4. When done, push the updated file to ONOS using `make netcfg-sr`.
5. Verify that the two new interface configs show up when using the ONOS
   CLI (`onos> interfaces`).
6. If you don't see the new interfaces in the ONOS CLI, check the ONOS log
   (`make onos-log`) for possible errors and, if necessary, go back to step 3.
7. If you struggle to make it work, a solution is available in the
   `solution/mininet` directory.
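To get the JSON shape right before pushing, a host-facing port config block
generally looks like the following sketch (composed in Python only so it can be
validated for well-formedness; the field names follow the ONOS interface config
convention, and the address/VLAN values are placeholders, not the exercise
solution):

```python
import json

# Sketch of one Trellis host-facing port config block. Compare with the
# existing leaf1 entries in netcfg-sr.json for the exact field names.
# The gateway/subnet and VLAN below are PLACEHOLDERS: use the values
# described at the beginning of this document.
ports_block = {
    "device:leaf2/3": {
        "interfaces": [
            {
                "name": "leaf2-3",
                "ips": ["172.16.3.254/24"],  # placeholder gateway/subnet
                "vlan-untagged": 300         # placeholder VLAN
            }
        ]
    }
}
print(json.dumps(ports_block, indent=2))
```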

Let's try to ping the corresponding gateway address from `h3` and `h4`:

    mininet> h3 ping 172.16.3.254
    PING 172.16.3.254 (172.16.3.254) 56(84) bytes of data.
    64 bytes from 172.16.3.254: icmp_seq=1 ttl=64 time=66.5 ms
    64 bytes from 172.16.3.254: icmp_seq=2 ttl=64 time=19.1 ms
    64 bytes from 172.16.3.254: icmp_seq=3 ttl=64 time=27.5 ms
    ...

    mininet> h4 ping 172.16.4.254
    PING 172.16.4.254 (172.16.4.254) 56(84) bytes of data.
    64 bytes from 172.16.4.254: icmp_seq=1 ttl=64 time=45.2 ms
    64 bytes from 172.16.4.254: icmp_seq=2 ttl=64 time=12.7 ms
    64 bytes from 172.16.4.254: icmp_seq=3 ttl=64 time=22.0 ms
    ...

At this point, ping between all hosts should work. You can try that using the
special `pingall` command in the Mininet CLI:

    mininet> pingall
    *** Ping: testing ping reachability
    h1a -> h1b h1c h2 h3 h4 
    h1b -> h1a h1c h2 h3 h4 
    h1c -> h1a h1b h2 h3 h4 
    h2 -> h1a h1b h1c h3 h4 
    h3 -> h1a h1b h1c h2 h4 
    h4 -> h1a h1b h1c h2 h3 
    *** Results: 0% dropped (30/30 received)

## Congratulations!

You have completed the seventh exercise! You were able to use built-in ONOS
Trellis apps such as `segmentrouting` and the `fabric` pipeconf to configure
forwarding in a 2x2 leaf-spine fabric with IPv4 hosts.

[topo-v4.py]: mininet/topo-v4.py
[netcfg-sr.json]: mininet/netcfg-sr.json
[netcfg.json]: mininet/netcfg.json
[docker-compose.yml]: docker-compose.yml
[pseudo-wire]: https://en.wikipedia.org/wiki/Pseudo-wire
[onos/apps/segmentrouting]: https://github.com/opennetworkinglab/onos/tree/2.2.2/apps/segmentrouting
[onos/pipelines/fabric]: https://github.com/opennetworkinglab/onos/tree/2.2.2/pipelines/fabric
[fabric-tofino]: https://github.com/opencord/fabric-tofino


================================================
FILE: EXERCISE-8.md
================================================
# Exercise 8: GTP termination with fabric.p4

The goal of this exercise is to learn how to use Trellis and fabric.p4 to
encapsulate and route packets using the GPRS Tunnelling Protocol (GTP) header as
in 4G mobile core networks.

## Background

![Topology GTP](img/topo-gtp.png)

The topology we will use in this exercise ([topo-gtp.py]) is a very simple one,
with the usual 2x2 fabric but only two hosts. We assume our fabric is used as
part of a 4G (i.e., LTE) network, connecting base stations to a Packet Data
Network (PDN), such as the Internet. The two hosts in our topology represent:

* An eNodeB, i.e., a base station providing radio connectivity to User
  Equipments (UEs) such as smartphones;
* A host on the Packet Data Network (PDN), i.e., any host on the Internet.

To provide connectivity between the UEs and the Internet, we need to program our
fabric to act as a Serving and Packet Gateway (SPGW). The SPGW is a complex and
feature-rich component of the 4G mobile core architecture that is used by the
base stations as a gateway to the PDN. Base stations aggregate UE traffic in GTP
tunnels (one or more per UE). The SPGW has many functions, among which that of
terminating such tunnels. In other words, it encapsulates downlink traffic
(Internet→UE) in an additional IPv4+UDP+GTP-U header, and it removes it for the
uplink direction (UE→Internet).
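To make the downlink encapsulation concrete, the mandatory 8-byte GTP-U header
that gets prepended can be sketched with the standard library (an illustration
of the header format, not code from fabric.p4; the real SPGW also adds the
outer IPv4 and UDP headers, with UDP destination port 2152):

```python
import struct

def gtpu_encap(teid: int, inner: bytes) -> bytes:
    """Prepend a minimal 8-byte GTP-U header (version 1, G-PDU) to a packet."""
    flags = 0x30      # version=1, protocol type=GTP, no optional fields
    msg_type = 0xFF   # G-PDU: the payload is a user packet (T-PDU)
    # Length field counts the payload, not the mandatory 8-byte header.
    return struct.pack("!BBHI", flags, msg_type, len(inner), teid) + inner

pkt = gtpu_encap(0xBEEF, b"inner-ip-packet")
assert pkt[:2] == b"\x30\xff" and len(pkt) == 8 + 15
```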

In this exercise you will learn how to:

* Program a switch with the `fabric-spgw` profile;
* Use Trellis to route traffic from the PDN to the eNodeB;
* Use the ONOS REST APIs to enable GTP encapsulation of downlink traffic on
  `leaf1`.


## 1. Start ONOS and Mininet with GTP topology

Since we want to use a different topology, we need to reset the current
environment (if currently active):

    $ make reset

This command will stop ONOS and Mininet and remove any state associated with
them.

Re-start ONOS and Mininet, this time with the new topology:

**IMPORTANT:** please notice the `-gtp` suffix!

    $ make start-gtp

Wait about 1 minute before proceeding with the next steps; this will give ONOS
time to start all of its subsystems.

## 2. Load additional apps

As in the previous exercises, let's activate the `segmentrouting` app and the
`fabric` pipeconf using the ONOS CLI (`make onos-cli`):

    onos> app activate fabric
    onos> app activate segmentrouting

Let's also activate a third app named `netcfghostprovider`:

    onos> app activate netcfghostprovider

The `netcfghostprovider` (Network Config Host Provider) is a built-in service
similar to the `hostprovider` (Host Location Provider) seen in the previous
exercises. It is responsible for registering hosts in the system; however,
unlike `hostprovider`, it does not listen for ARP or DHCP packet-ins
to automatically discover hosts. Instead, it uses information in the netcfg JSON
file, allowing operators to pre-populate the ONOS host store.

This is useful for static topologies and to avoid relying on ARP, DHCP, and
other host-generated traffic. In this exercise, we use the netcfg JSON to
configure the location of the `enodeb` and `pdn` hosts.
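For illustration, a netcfg `hosts` block of the kind `netcfghostprovider`
consumes can be composed like this (a sketch mirroring the `enodeb` entry of
this exercise; compare with the actual [netcfg-gtp.json] for the exact fields):

```python
import json

# Sketch of a netcfg "hosts" block: the key is the host ID (MAC/VLAN),
# and the "basic" config pre-populates IPs and attachment location.
hosts_block = {
    "hosts": {
        "00:00:00:00:00:10/None": {
            "basic": {
                "name": "enodeb",
                "ips": ["10.0.100.1"],
                "locations": ["device:leaf1/3"]
            }
        }
    }
}
print(json.dumps(hosts_block, indent=2))
```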

#### Verify apps

The complete list of apps should look like the following (22 in total):

    onos> apps -s -a
    *  18 org.onosproject.drivers              2.2.2    Default Drivers
    *  37 org.onosproject.protocols.grpc       2.2.2    gRPC Protocol Subsystem
    *  38 org.onosproject.protocols.gnmi       2.2.2    gNMI Protocol Subsystem
    *  39 org.onosproject.generaldeviceprovider 2.2.2    General Device Provider
    *  40 org.onosproject.protocols.gnoi       2.2.2    gNOI Protocol Subsystem
    *  41 org.onosproject.drivers.gnoi         2.2.2    gNOI Drivers
    *  42 org.onosproject.route-service        2.2.2    Route Service Server
    *  43 org.onosproject.mcast                2.2.2    Multicast traffic control
    *  44 org.onosproject.portloadbalancer     2.2.2    Port Load Balance Service
    *  45 org.onosproject.segmentrouting       2.2.2    Segment Routing
    *  53 org.onosproject.hostprovider         2.2.2    Host Location Provider
    *  54 org.onosproject.lldpprovider         2.2.2    LLDP Link Provider
    *  64 org.onosproject.protocols.p4runtime  2.2.2    P4Runtime Protocol Subsystem
    *  65 org.onosproject.p4runtime            2.2.2    P4Runtime Provider
    *  96 org.onosproject.netcfghostprovider   2.2.2    Network Config Host Provider
    *  99 org.onosproject.drivers.gnmi         2.2.2    gNMI Drivers
    * 100 org.onosproject.drivers.p4runtime    2.2.2    P4Runtime Drivers
    * 101 org.onosproject.pipelines.basic      2.2.2    Basic Pipelines
    * 102 org.onosproject.drivers.stratum      2.2.2    Stratum Drivers
    * 103 org.onosproject.drivers.bmv2         2.2.2    BMv2 Drivers
    * 111 org.onosproject.pipelines.fabric     2.2.2    Fabric Pipeline
    * 164 org.onosproject.gui2                 2.2.2    ONOS GUI2

#### Verify pipeconfs

All `fabric` pipeconf profiles should have been registered by now. Take note of
the full ID of the one with SPGW capabilities (`fabric-spgw`); you will need
this ID in the next step.

    onos> pipeconfs
    id=org.onosproject.pipelines.fabric-full, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner, IntProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]
    id=org.onosproject.pipelines.int, behaviors=[PiPipelineInterpreter, Pipeliner, PortStatisticsDiscovery, IntProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON]
    id=org.onosproject.pipelines.fabric-spgw-int, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner, IntProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]
    id=org.onosproject.pipelines.fabric, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]
    id=org.onosproject.pipelines.fabric-bng, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner, BngProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]
    id=org.onosproject.pipelines.fabric-spgw, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]
    id=org.onosproject.pipelines.fabric-int, behaviors=[PortStatisticsDiscovery, PiPipelineInterpreter, Pipeliner, IntProgrammable], extensions=[P4_INFO_TEXT, BMV2_JSON, CPU_PORT_TXT]
    id=org.onosproject.pipelines.basic, behaviors=[PiPipelineInterpreter, Pipeliner, PortStatisticsDiscovery], extensions=[P4_INFO_TEXT, BMV2_JSON]

## 3. Modify and push netcfg to use fabric-spgw profile

Up until now, we have used topologies where all switches were configured with
the same pipeconf, and so the same P4 program.

In this exercise, we want all switches to run the basic `fabric` profile,
except `leaf1`, which should act as the SPGW and so must be programmed with the
`fabric-spgw` profile.

#### Modify netcfg JSON

Let's modify the netcfg JSON to use the `fabric-spgw` profile on switch
`leaf1`.

1. Open up file [netcfg-gtp.json] and look for the configuration of `leaf1`
   in the `"devices"` block. It should look like this:

    ```
      "devices": {
        "device:leaf1": {
          "basic": {
            "managementAddress": "grpc://mininet:50001?device_id=1",
            "driver": "stratum-bmv2",
            "pipeconf": "org.onosproject.pipelines.fabric",
    ...
    ```

2. Modify the `pipeconf` property to use the full ID of the `fabric-spgw`
   profile obtained in the previous step.

3. Save the file.

#### Push netcfg to ONOS

On a terminal window, type:

**IMPORTANT**: please notice the `-gtp` suffix!

    $ make netcfg-gtp

Use the ONOS CLI (`make onos-cli`) to verify that all 4 switches are connected
to ONOS and provisioned with the right pipeconf/profile.

    onos> devices -s
    id=device:leaf1, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.pipelines.fabric-spgw
    id=device:leaf2, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.pipelines.fabric
    id=device:spine1, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.pipelines.fabric
    id=device:spine2, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.pipelines.fabric

Make sure `leaf1` has a driver with the `fabric-spgw` pipeconf, while all other
switches have the basic `fabric` pipeconf.

##### Troubleshooting

If `leaf1` does NOT have `available=true`, it probably means that you have
inserted the wrong pipeconf ID in [netcfg-gtp.json] and ONOS is not able to
perform the initial provisioning.

Check the ONOS log (`make onos-log`) for possible errors. Remember from the
previous exercise that some errors are expected (e.g., for unsupported
`PSEUDO_WIRE` flow objectives). If you see an error like this:

    ERROR [DeviceTaskExecutor] Unable to complete task CONNECTION_SETUP for device:leaf1: pipeconf ... not registered

It means you have to go back to the previous step and correct your pipeconf ID.
Modify the [netcfg-gtp.json] file and push it again using `make netcfg-gtp`.
Use the ONOS CLI and log to make sure the issue is fixed before proceeding.

#### Check configuration in ONOS

Check the interface configuration. In this topology we want `segmentrouting` to
forward traffic based on two IP subnets:

    onos> interfaces
    leaf1-3: port=device:leaf1/3 ips=[10.0.100.254/24] mac=00:AA:00:00:00:01 vlanUntagged=100
    leaf2-3: port=device:leaf2/3 ips=[10.0.200.254/24] mac=00:AA:00:00:00:02 vlanUntagged=200

Check that the `enodeb` and `pdn` hosts have been discovered:

    onos> hosts
    id=00:00:00:00:00:10/None, mac=00:00:00:00:00:10, locations=[device:leaf1/3], auxLocations=null, vlan=None, ip(s)=[10.0.100.1], ..., name=enodeb, ..., provider=host:org.onosproject.netcfghost, configured=true
    id=00:00:00:00:00:20/None, mac=00:00:00:00:00:20, locations=[device:leaf2/3], auxLocations=null, vlan=None, ip(s)=[10.0.200.1], ..., name=pdn, ..., provider=host:org.onosproject.netcfghost, configured=true

`provider=host:org.onosproject.netcfghost` and `configured=true` are indications
that the host entry was created by `netcfghostprovider`.

## 4. Verify IP connectivity between PDN and eNodeB

Since the two hosts have already been discovered, they should be pingable.

Using the Mininet CLI (`make mn-cli`) start a ping between `enodeb` and `pdn`:

    mininet> enodeb ping pdn
    PING 10.0.200.1 (10.0.200.1) 56(84) bytes of data.
    64 bytes from 10.0.200.1: icmp_seq=1 ttl=62 time=1053 ms
    64 bytes from 10.0.200.1: icmp_seq=2 ttl=62 time=49.0 ms
    64 bytes from 10.0.200.1: icmp_seq=3 ttl=62 time=9.63 ms
    ...

## 5. Start PDN and eNodeB processes

We have created two Python scripts: one emulates the PDN sending downlink
traffic to the UEs, while the other emulates the eNodeB, which expects to
receive the same traffic GTP-encapsulated.

In a new terminal window, start the [send-udp.py] script on the `pdn` host:

    $ util/mn-cmd pdn /mininet/send-udp.py
    Sending 5 UDP packets per second to 17.0.0.1...

[util/mn-cmd] is a convenience script to run commands on Mininet hosts when
using multiple terminal windows.

[mininet/send-udp.py][send-udp.py] generates packets with destination
IPv4 address `17.0.0.1` (UE address). In the rest of the exercise we
will configure Trellis to route these packets through switch `leaf1`, and
we will insert a flow rule in this switch to perform the GTP
encapsulation. For now, this traffic will be dropped at `leaf2`.

On a second terminal window, start the [recv-gtp.py] script on the `enodeb`
host:

```
$ util/mn-cmd enodeb /mininet/recv-gtp.py   
Will print a line for each UDP packet received...
```

[mininet/recv-gtp.py][recv-gtp.py] simply sniffs received packets and prints
them on screen, indicating whether each packet is GTP-encapsulated. You should
see no packets being printed for the moment.

#### Use ONOS UI to visualize traffic

Using the ONF Cloud Tutorial Portal, access the ONOS UI.
If you are running a VM on your laptop, open up a browser
(e.g. Firefox) to <http://127.0.0.1:8181/onos/ui>.

When asked, use the username `onos` and password `rocks`.

To show hosts, press <kbd>H</kbd>. To show real-time link utilization, press
<kbd>A</kbd> multiple times until showing "Port stats (packets / second)".

You should see traffic (5 pps) on the link between the `pdn` host and `leaf2`,
but not on other links. **Packets are dropped at switch `leaf2` as this switch
does not know how to route packets with IPv4 destination `17.0.0.1`.**

## 6. Install route for UE subnet and debug table entries

Using the ONOS CLI (`make onos-cli`), type the following command to add a route
for the UE subnet (`17.0.0.0/24`) with next hop the `enodeb` (`10.0.100.1`):

    onos> route-add 17.0.0.0/24 10.0.100.1

Check that the new route has been successfully added:

```
onos> routes
B: Best route, R: Resolved route

Table: ipv4
B R  Network            Next Hop        Source (Node)
> *  17.0.0.0/24        10.0.100.1      STATIC (-)
   Total: 1
...
```

Since `10.0.100.1` is a host known to ONOS, i.e., we know its location in the
topology (see the `*` under the `R` column, which stands for "Resolved route"),
`segmentrouting` uses this information to compute paths and install the
necessary table entries to forward packets with IPv4 destination address
matching `17.0.0.0/24`.
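The resolution step amounts to a longest-prefix-match lookup over the routes
known so far: the static UE route plus the two interface subnets. A toy model
with the standard library (an illustration only, not ONOS's actual code):

```python
import ipaddress

# Routes known to the fabric at this point in the exercise.
routes = {
    "17.0.0.0/24": "10.0.100.1",   # static route, next hop = enodeb
    "10.0.100.0/24": "leaf1-3",    # connected subnet (leaf1 interface)
    "10.0.200.0/24": "leaf2-3",    # connected subnet (leaf2 interface)
}

def lookup(dst):
    """Return the next hop for dst, picking the longest matching prefix."""
    dst_ip = ipaddress.ip_address(dst)
    matches = [n for n in routes if dst_ip in ipaddress.ip_network(n)]
    if not matches:
        return None  # no route: the packet is dropped, as seen earlier
    best = max(matches, key=lambda n: ipaddress.ip_network(n).prefixlen)
    return routes[best]

assert lookup("17.0.0.1") == "10.0.100.1"
```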

Open up the terminal window with the [recv-gtp.py] script on the `enodeb` host;
you should see the following output:

    [...] 691 bytes: 10.0.200.1 -> 17.0.0.1, is_gtp_encap=False
        Ether / IP / UDP 10.0.200.1:80 > 17.0.0.1:400 / Raw
    ....

These lines indicate that a packet has been received at the eNodeB. The static
route is working! However, there's no trace of GTP headers, yet. We'll get back
to this soon, but for now, let's take a look at table entries in `fabric.p4`.

Feel free to also check on the ONOS UI to see packets forwarded across the
spines, and delivered to the eNodeB (the next hop of our static route).

#### Debug fabric.p4 table entries

You can verify that the table entries for the static route have been added to
the switches by "grepping" the output of the ONOS CLI `flows` command, for
example for `leaf2`:

    onos> flows -s any device:leaf2 | grep "17.0.0.0"
        ADDED, bytes=0, packets=0, table=FabricIngress.forwarding.routing_v4, priority=48010, selector=[IPV4_DST:17.0.0.0/24], treatment=[immediate=[FabricIngress.forwarding.set_next_id_routing_v4(next_id=0xd)]]


One entry has been `ADDED` to table `FabricIngress.forwarding.routing_v4` with
`next_id=0xd`.

Let's grep flow rules for `next_id=0xd`:

    onos> flows -s any device:leaf2 | grep "next_id=0xd"
        ADDED, bytes=0, packets=0, table=FabricIngress.forwarding.routing_v4, priority=48010, selector=[IPV4_DST:17.0.0.0/24], treatment=[immediate=[FabricIngress.forwarding.set_next_id_routing_v4(next_id=0xd)]]
        ADDED, bytes=0, packets=0, table=FabricIngress.forwarding.routing_v4, priority=48010, selector=[IPV4_DST:10.0.100.0/24], treatment=[immediate=[FabricIngress.forwarding.set_next_id_routing_v4(next_id=0xd)]]
        ADDED, bytes=1674881, packets=2429, table=FabricIngress.next.hashed, priority=0, selector=[next_id=0xd], treatment=[immediate=[GROUP:0xd]]
        ADDED, bytes=1674881, packets=2429, table=FabricIngress.next.next_vlan, priority=0, selector=[next_id=0xd], treatment=[immediate=[FabricIngress.next.set_vlan(vlan_id=0xffe)]]

Notice that another route shares the same next ID (`10.0.100.0/24`, derived
from the interface config for `leaf1`), and that the next ID points to a group
with the same value (`GROUP:0xd`).

Let's take a look at this specific group:

    onos> groups any device:leaf2 | grep "0xd"
       id=0xd, state=ADDED, type=SELECT, bytes=0, packets=0, appId=org.onosproject.segmentrouting, referenceCount=0
           id=0xd, bucket=1, bytes=0, packets=0, weight=1, actions=[FabricIngress.next.mpls_routing_hashed(dmac=0xbb00000001, port_num=0x1, smac=0xaa00000002, label=0x65)]
           id=0xd, bucket=2, bytes=0, packets=0, weight=1, actions=[FabricIngress.next.mpls_routing_hashed(dmac=0xbb00000002, port_num=0x2, smac=0xaa00000002, label=0x65)]

This `SELECT` group is used to hash traffic to the spines (i.e., ECMP) and to
push an MPLS label with hex value `0x65` (101 in decimal).
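Conceptually, a `SELECT` group spreads flows over its buckets by hashing header
fields and picking a bucket modulo the bucket count, so packets of the same
flow always take the same spine. A sketch of that idea (the CRC32 hash and the
5-tuple key here are illustrative choices, not necessarily what BMv2 uses):

```python
import zlib

# Two buckets, matching the two mpls_routing_hashed actions seen above.
buckets = ["spine1 (port 1)", "spine2 (port 2)"]

def select_bucket(src_ip, dst_ip, proto, sport, dport):
    """Pick an ECMP bucket deterministically from the flow's 5-tuple."""
    key = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}".encode()
    return buckets[zlib.crc32(key) % len(buckets)]

# Same flow, same spine, every time:
first = select_bucket("10.0.200.1", "17.0.0.1", 17, 80, 400)
assert first == select_bucket("10.0.200.1", "17.0.0.1", 17, 80, 400)
```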

Spine switches will use this label to forward packets. Can you tell what 101
identifies here? Hint: take a look at [netcfg-gtp.json].

## 7. Use ONOS REST APIs to create GTP flow rule

Finally, it is time to instruct `leaf1` to encapsulate traffic with a GTP tunnel
header. To do this, we will insert a special table entry in the "SPGW portion"
of the [fabric.p4] pipeline, implemented in file [spgw.p4]. Specifically, we
will insert one entry in the [dl_sess_lookup] table, responsible for handling
downlink traffic (i.e., with match on the UE IPv4 address) by setting the GTP
tunnel info which will be used to perform the encapsulation (action
`set_dl_sess_info`).

**NOTE:** this version of spgw.p4 is from ONOS v2.2.2 (the same used in this
exercise). The P4 code might have changed recently, and you might see different
tables if you open up the same file in a different branch.

To insert the flow rule, we will not use an app (which we would have to
implement from scratch!), but instead, we will use the ONOS REST APIs. To learn
more about the available APIs, use the following URL to open up the
automatically generated API docs from your running ONOS instance:
<http://127.0.0.1:8181/onos/v1/docs/>

The specific API we will use to create new flow rules is `POST /flows`,
described here:
<http://127.0.0.1:8181/onos/v1/docs/#!/flows/post_flows>

This API takes a JSON request. The file
[mininet/flowrule-gtp.json][flowrule-gtp.json] specifies the flow rule we
want to create. This file is incomplete, and you need to modify it before we can
send it via the REST APIs.
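For reference, the same POST that the `make flowrule-gtp` command issues with
cURL can be sketched with Python's standard library; the snippet below only
constructs the request without sending it (the empty `"flows"` body is a
stand-in for the contents of [flowrule-gtp.json]):

```python
import base64
import json
import urllib.request

# Endpoint and credentials as used elsewhere in this tutorial.
url = "http://localhost:8181/onos/v1/flows?appId=rest-api"
auth = base64.b64encode(b"onos:rocks").decode()
body = json.dumps({"flows": []}).encode()  # stand-in for flowrule-gtp.json

req = urllib.request.Request(
    url, data=body, method="POST",
    headers={"Content-Type": "application/json",
             "Authorization": "Basic " + auth},
)
# urllib.request.urlopen(req) would push the rule to a running ONOS instance.
assert req.get_method() == "POST"
```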

1. Open up file [mininet/flowrule-gtp.json][flowrule-gtp.json].

   Look for the `"selector"` section that specifies the match fields:
   ```
   "selector": {
     "criteria": [
       {
         "type": "IPV4_DST",
         "ip": "<INSERT HERE UE IP ADDR>/32"
       }
     ]
   },
   ...
   ```
    
2. Modify the `ip` field to match on the IP address of the UE (17.0.0.1).
    
    Since the `dl_sess_lookup` table performs exact match on the IPv4
    address, make sure to specify the match field with `/32` prefix length.
    
    Also, note that the `set_dl_sess_info` action is specified as
    `PROTOCOL_INDEPENDENT`. This is the ONOS terminology to describe custom
    flow rule actions. For this reason, the action parameters are specified
    as byte strings in hexadecimal format:
    
    * `"teid": "BEEF"`: GTP tunnel identifier (48879 in decimal)
    * `"s1u_enb_addr": "0a006401"`: destination IPv4 address of the
      GTP tunnel, i.e., outer IPv4 header (10.0.100.1). This is the address of
      the eNodeB.
    * `"s1u_sgw_addr": "0a0064fe"`: source address of the GTP outer IPv4
      header (10.0.100.254). This is the address of the switch interface
      configured in Trellis.
    
3. Save the [flowrule-gtp.json] file.

4. Push the flow rule to ONOS using the REST APIs.

   On a terminal window, type the following commands:
   
   ```
   $ make flowrule-gtp
   ```

   This command uses `cURL` to push the flow rule JSON file to the ONOS REST API
   endpoint. If the flow rule has been created correctly, you should see an
   output similar to the following one:

   ```
   *** Pushing flowrule-gtp.json to ONOS...
   curl --fail -sSL --user onos:rocks --noproxy localhost -X POST -H 'Content-Type:application/json' \
                   http://localhost:8181/onos/v1/flows?appId=rest-api -d@./mininet/flowrule-gtp.json
   {"flows":[{"deviceId":"device:leaf1","flowId":"54606147878186474"}]}

   ```

5. Check the eNodeB process. You should see that the received packets
   are now GTP encapsulated!

    ```
    [...] 727 bytes: 10.0.100.254 -> 10.0.100.1, is_gtp_encap=True
        Ether / IP / UDP / GTP_U_Header / IP / UDP 10.0.200.1:80 > 17.0.0.1:400 / Raw
    ```
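The hexadecimal byte-string action parameters used in the flow rule can be
derived (and double-checked) with a few lines of Python:

```python
import socket

def ipv4_hex(addr):
    """Encode a dotted-quad IPv4 address as the hex byte string ONOS expects."""
    return socket.inet_aton(addr).hex()

assert int("BEEF", 16) == 48879                # teid
assert ipv4_hex("10.0.100.1") == "0a006401"    # s1u_enb_addr (eNodeB)
assert ipv4_hex("10.0.100.254") == "0a0064fe"  # s1u_sgw_addr (switch interface)
```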

## Congratulations!

You have completed the eighth exercise! You were able to use fabric.p4 and
Trellis to encapsulate traffic in GTP tunnels and to route it across the fabric.

[topo-gtp.py]: mininet/topo-gtp.py
[netcfg-gtp.json]: mininet/netcfg-gtp.json
[send-udp.py]: mininet/send-udp.py
[recv-gtp.py]: mininet/recv-gtp.py
[util/mn-cmd]: util/mn-cmd
[fabric.p4]: https://github.com/opennetworkinglab/onos/blob/2.2.2/pipelines/fabric/impl/src/main/resources/fabric.p4
[spgw.p4]: https://github.com/opennetworkinglab/onos/blob/2.2.2/pipelines/fabric/impl/src/main/resources/include/spgw.p4
[dl_sess_lookup]: https://github.com/opennetworkinglab/onos/blob/2.2.2/pipelines/fabric/impl/src/main/resources/include/spgw.p4#L70
[flowrule-gtp.json]: mininet/flowrule-gtp.json


================================================
FILE: LICENSE
================================================
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.


================================================
FILE: Makefile
================================================
mkfile_path := $(abspath $(lastword $(MAKEFILE_LIST)))
curr_dir := $(patsubst %/,%,$(dir $(mkfile_path)))

include util/docker/Makefile.vars

onos_url := http://localhost:8181/onos
onos_curl := curl --fail -sSL --user onos:rocks --noproxy localhost
app_name := org.onosproject.ngsdn-tutorial

NGSDN_TUTORIAL_SUDO ?=

default:
	$(error Please specify a make target (see README.md))

_docker_pull_all:
	docker pull ${ONOS_IMG}@${ONOS_SHA}
	docker tag ${ONOS_IMG}@${ONOS_SHA} ${ONOS_IMG}
	docker pull ${P4RT_SH_IMG}@${P4RT_SH_SHA}
	docker tag ${P4RT_SH_IMG}@${P4RT_SH_SHA} ${P4RT_SH_IMG}
	docker pull ${P4C_IMG}@${P4C_SHA}
	docker tag ${P4C_IMG}@${P4C_SHA} ${P4C_IMG}
	docker pull ${STRATUM_BMV2_IMG}@${STRATUM_BMV2_SHA}
	docker tag ${STRATUM_BMV2_IMG}@${STRATUM_BMV2_SHA} ${STRATUM_BMV2_IMG}
	docker pull ${MVN_IMG}@${MVN_SHA}
	docker tag ${MVN_IMG}@${MVN_SHA} ${MVN_IMG}
	docker pull ${GNMI_CLI_IMG}@${GNMI_CLI_SHA}
	docker tag ${GNMI_CLI_IMG}@${GNMI_CLI_SHA} ${GNMI_CLI_IMG}
	docker pull ${YANG_IMG}@${YANG_SHA}
	docker tag ${YANG_IMG}@${YANG_SHA} ${YANG_IMG}
	docker pull ${SSHPASS_IMG}@${SSHPASS_SHA}
	docker tag ${SSHPASS_IMG}@${SSHPASS_SHA} ${SSHPASS_IMG}

deps: _docker_pull_all

_start:
	$(info *** Starting ONOS and Mininet (${NGSDN_TOPO_PY})... )
	@mkdir -p tmp/onos
	@NGSDN_TOPO_PY=${NGSDN_TOPO_PY} docker-compose up -d

start: NGSDN_TOPO_PY := topo-v6.py
start: _start

start-v4: NGSDN_TOPO_PY := topo-v4.py
start-v4: _start

start-gtp: NGSDN_TOPO_PY := topo-gtp.py
start-gtp: _start

stop:
	$(info *** Stopping ONOS and Mininet...)
	@NGSDN_TOPO_PY=foo docker-compose down -t0

restart: reset start

onos-cli:
	$(info *** Connecting to the ONOS CLI... password: rocks)
	$(info *** To exit press Ctrl-D)
	@ssh -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" -o LogLevel=ERROR -p 8101 onos@localhost

onos-log:
	docker-compose logs -f onos

onos-ui:
	open ${onos_url}/ui

mn-cli:
	$(info *** Attaching to Mininet CLI...)
	$(info *** To detach press Ctrl-D (Mininet will keep running))
	-@docker attach --detach-keys "ctrl-d" $(shell docker-compose ps -q mininet) || echo "*** Detached from Mininet CLI"

mn-log:
	docker logs -f mininet

_netcfg:
	$(info *** Pushing ${NGSDN_NETCFG_JSON} to ONOS...)
	${onos_curl} -X POST -H 'Content-Type:application/json' \
		${onos_url}/v1/network/configuration -d@./mininet/${NGSDN_NETCFG_JSON}
	@echo

netcfg: NGSDN_NETCFG_JSON := netcfg.json
netcfg: _netcfg

netcfg-sr: NGSDN_NETCFG_JSON := netcfg-sr.json
netcfg-sr: _netcfg

netcfg-gtp: NGSDN_NETCFG_JSON := netcfg-gtp.json
netcfg-gtp: _netcfg

flowrule-gtp:
	$(info *** Pushing flowrule-gtp.json to ONOS...)
	${onos_curl} -X POST -H 'Content-Type:application/json' \
		${onos_url}/v1/flows?appId=rest-api -d@./mininet/flowrule-gtp.json
	@echo

flowrule-clean:
	$(info *** Removing all flows installed via REST APIs...)
	${onos_curl} -X DELETE -H 'Content-Type:application/json' \
		${onos_url}/v1/flows/application/rest-api
	@echo

reset: stop
	-$(NGSDN_TUTORIAL_SUDO) rm -rf ./tmp

clean:
	-$(NGSDN_TUTORIAL_SUDO) rm -rf p4src/build
	-$(NGSDN_TUTORIAL_SUDO) rm -rf app/target
	-$(NGSDN_TUTORIAL_SUDO) rm -rf app/src/main/resources/bmv2.json
	-$(NGSDN_TUTORIAL_SUDO) rm -rf app/src/main/resources/p4info.txt

p4-build: p4src/main.p4
	$(info *** Building P4 program...)
	@mkdir -p p4src/build
	docker run --rm -v ${curr_dir}:/workdir -w /workdir ${P4C_IMG} \
		p4c-bm2-ss --arch v1model -o p4src/build/bmv2.json \
		--p4runtime-files p4src/build/p4info.txt --Wdisable=unsupported \
		p4src/main.p4
	@echo "*** P4 program compiled successfully! Output files are in p4src/build"

p4-test:
	@cd ptf && PTF_DOCKER_IMG=$(STRATUM_BMV2_IMG) ./run_tests $(TEST)

_copy_p4c_out:
	$(info *** Copying p4c outputs to app resources...)
	@mkdir -p app/src/main/resources
	cp -f p4src/build/p4info.txt app/src/main/resources/
	cp -f p4src/build/bmv2.json app/src/main/resources/

_mvn_package:
	$(info *** Building ONOS app...)
	@mkdir -p app/target
	@docker run --rm -v ${curr_dir}/app:/mvn-src -w /mvn-src ${MVN_IMG} mvn -o clean package

app-build: p4-build _copy_p4c_out _mvn_package
	$(info *** ONOS app .oar package created successfully)
	@ls -1 app/target/*.oar

app-install:
	$(info *** Installing and activating app in ONOS...)
	${onos_curl} -X POST -HContent-Type:application/octet-stream \
		'${onos_url}/v1/applications?activate=true' \
		--data-binary @app/target/ngsdn-tutorial-1.0-SNAPSHOT.oar
	@echo

app-uninstall:
	$(info *** Uninstalling app from ONOS (if present)...)
	-${onos_curl} -X DELETE ${onos_url}/v1/applications/${app_name}
	@echo

app-reload: app-uninstall app-install

yang-tools:
	docker run --rm -it -v ${curr_dir}/yang/demo-port.yang:/models/demo-port.yang ${YANG_IMG}

solution-apply:
	mkdir working_copy
	cp -r app working_copy/app
	cp -r p4src working_copy/p4src
	cp -r ptf working_copy/ptf
	cp -r mininet working_copy/mininet
	rsync -r solution/ ./

solution-revert:
	test -d working_copy
	$(NGSDN_TUTORIAL_SUDO) rm -rf ./app/*
	$(NGSDN_TUTORIAL_SUDO) rm -rf ./p4src/*
	$(NGSDN_TUTORIAL_SUDO) rm -rf ./ptf/*
	$(NGSDN_TUTORIAL_SUDO) rm -rf ./mininet/*
	cp -r working_copy/* ./
	$(NGSDN_TUTORIAL_SUDO) rm -rf working_copy/

check:
	make reset
	# P4 starter code and app should compile
	make p4-build
	make app-build
	# Check solution
	make solution-apply
	make start
	make p4-build
	make p4-test
	make app-build
	sleep 30
	make app-reload
	sleep 10
	make netcfg
	sleep 10
	# The first ping(s) might fail because of a known race condition in the
	# L2BridgingComponent. Ping all hosts.
	-util/mn-cmd h1a ping -c 1 2001:1:1::b
	util/mn-cmd h1a ping -c 1 2001:1:1::b
	-util/mn-cmd h1b ping -c 1 2001:1:1::c
	util/mn-cmd h1b ping -c 1 2001:1:1::c
	-util/mn-cmd h2 ping -c 1 2001:1:1::b
	util/mn-cmd h2 ping -c 1 2001:1:1::b
	util/mn-cmd h2 ping -c 1 2001:1:1::a
	util/mn-cmd h2 ping -c 1 2001:1:1::c
	-util/mn-cmd h3 ping -c 1 2001:1:2::1
	util/mn-cmd h3 ping -c 1 2001:1:2::1
	util/mn-cmd h3 ping -c 1 2001:1:1::a
	util/mn-cmd h3 ping -c 1 2001:1:1::b
	util/mn-cmd h3 ping -c 1 2001:1:1::c
	-util/mn-cmd h4 ping -c 1 2001:1:2::1
	util/mn-cmd h4 ping -c 1 2001:1:2::1
	util/mn-cmd h4 ping -c 1 2001:1:1::a
	util/mn-cmd h4 ping -c 1 2001:1:1::b
	util/mn-cmd h4 ping -c 1 2001:1:1::c
	make stop
	make solution-revert

check-sr:
	make reset
	make start-v4
	sleep 45
	util/onos-cmd app activate segmentrouting
	util/onos-cmd app activate pipelines.fabric
	sleep 15
	make netcfg-sr
	sleep 20
	util/mn-cmd h1a ping -c 1 172.16.1.3
	util/mn-cmd h1b ping -c 1 172.16.1.3
	util/mn-cmd h2 ping -c 1 172.16.2.254
	sleep 5
	util/mn-cmd h2 ping -c 1 172.16.1.1
	util/mn-cmd h2 ping -c 1 172.16.1.2
	util/mn-cmd h2 ping -c 1 172.16.1.3
	# ping from h3 and h4 should not work without the solution
	! util/mn-cmd h3 ping -c 1 172.16.3.254
	! util/mn-cmd h4 ping -c 1 172.16.4.254
	make solution-apply
	make netcfg-sr
	sleep 20
	util/mn-cmd h3 ping -c 1 172.16.3.254
	util/mn-cmd h4 ping -c 1 172.16.4.254
	sleep 5
	util/mn-cmd h3 ping -c 1 172.16.1.1
	util/mn-cmd h3 ping -c 1 172.16.1.2
	util/mn-cmd h3 ping -c 1 172.16.1.3
	util/mn-cmd h3 ping -c 1 172.16.2.1
	util/mn-cmd h3 ping -c 1 172.16.4.1
	util/mn-cmd h4 ping -c 1 172.16.1.1
	util/mn-cmd h4 ping -c 1 172.16.1.2
	util/mn-cmd h4 ping -c 1 172.16.1.3
	util/mn-cmd h4 ping -c 1 172.16.2.1
	make stop
	make solution-revert

check-gtp:
	make reset
	make start-gtp
	sleep 45
	util/onos-cmd app activate segmentrouting
	util/onos-cmd app activate pipelines.fabric
	util/onos-cmd app activate netcfghostprovider
	sleep 15
	make solution-apply
	make netcfg-gtp
	sleep 20
	util/mn-cmd enodeb ping -c 1 10.0.100.254
	util/mn-cmd pdn ping -c 1 10.0.200.254
	util/onos-cmd route-add 17.0.0.0/24 10.0.100.1
	make flowrule-gtp
	# util/mn-cmd requires a TTY because it uses docker -it option
	# hence we use screen for putting it in the background
	screen -d -m util/mn-cmd pdn /mininet/send-udp.py
	util/mn-cmd enodeb /mininet/recv-gtp.py -e
	make stop
	make solution-revert


================================================
FILE: README.md
================================================
# Next-Gen SDN Tutorial (Advanced)

Welcome to the Next-Gen SDN tutorial!

This tutorial is targeted at students and practitioners who want to learn about
the building blocks of the next-generation SDN (NG-SDN) architecture, such as:

* Data plane programming and control via P4 and P4Runtime
* Configuration via YANG, OpenConfig, and gNMI
* Stratum switch OS
* ONOS SDN controller

Tutorial sessions are organized around a sequence of hands-on exercises that
show how to build a leaf-spine data center fabric based on IPv6, using P4,
Stratum, and ONOS. Exercises assume an intermediate knowledge of the P4
language, and a basic knowledge of Java and Python. Participants will be
provided with a starter P4 program and ONOS app implementation. Exercises will
focus on concepts such as:

* Using Stratum APIs (P4Runtime, gNMI, OpenConfig, gNOI)
* Using ONOS with devices programmed with arbitrary P4 programs
* Writing ONOS applications to provide the control plane logic
  (bridging, routing, ECMP, etc.)
* Testing using bmv2 in Mininet
* PTF-based P4 unit tests

## Basic vs. advanced version

This tutorial comes in two versions: basic (`master` branch), and advanced
(this branch).

The basic version contains fewer exercises, and it does not assume prior
knowledge of the P4 language. Instead, it provides a gentle introduction to it.
Check the `master` branch of this repo if you're interested in the basic
version.

If you're interested in the advanced version, keep reading.

## Slides

Tutorial slides are available online:
<http://bit.ly/adv-ngsdn-tutorial-slides>

These slides provide an introduction to the topics covered in the tutorial. We
suggest you look at them before starting to work on the exercises.

## System requirements

If you are taking this tutorial at an event organized by ONF, you should have
received credentials to access the **ONF Cloud Tutorial Platform**, in which
case you can skip this section. Keep reading if you are interested in working on
the exercises on your laptop.

To facilitate access to the tools required to complete this tutorial, we provide
two options for you to choose from:

1. Download a pre-packaged VM with everything included; **OR**
2. Manually install Docker and other dependencies.

### Option 1 - Download tutorial VM

Use the following link to download the VM (4 GB):
* <http://bit.ly/ngsdn-tutorial-ova>

The VM is in .ova format and has been created using VirtualBox v5.2.32. To run
the VM you can use any modern virtualization system, although we recommend using
VirtualBox. For instructions on how to get VirtualBox and import the VM, use the
following links:

* <https://www.virtualbox.org/wiki/Downloads>
* <https://docs.oracle.com/cd/E26217_01/E26796/html/qs-import-vm.html>

Alternatively, you can use the scripts in [util/vm](util/vm) to build a VM on
your machine using Vagrant.

**Recommended VM configuration:**
The VM is configured with 4 GB of RAM and 4 CPU cores. These are the
recommended minimum system requirements to complete the exercises. When
imported, the VM takes approx. 8 GB of HDD space. For a smooth experience, we
recommend running the VM on a host system that has at least double these
resources.

**VM user credentials:**
Use credentials `sdn`/`rocks` to log in to the Ubuntu system.

### Option 2 - Manually install Docker and other dependencies

All exercises can be executed by installing the following dependencies:

* Docker v1.13.0+ (with docker-compose)
* make
* Python 3
* Bash-like Unix shell
* Wireshark (optional)

**Note for Windows users**: all scripts have been tested on macOS and Ubuntu.
Although we think they should work on Windows, we have not tested them there.
For this reason, we advise Windows users to prefer Option 1.

## Get this repo or pull latest changes

To work on the exercises you will need to clone this repo:

    cd ~
    git clone -b advanced https://github.com/opennetworkinglab/ngsdn-tutorial

If the `ngsdn-tutorial` directory is already present, make sure to update its
content:

    cd ~/ngsdn-tutorial
    git pull origin advanced

## Download / upgrade dependencies

The VM may have shipped with an older version of the dependencies than we would
like to use for the exercises. You can upgrade to the latest version using the
following command:

    cd ~/ngsdn-tutorial
    make deps

This command will download all necessary Docker images (~1.5 GB) allowing you to
work off-line. For this reason, we recommend running this step ahead of the
tutorial, with a reliable Internet connection.

## Using an IDE to work on the exercises

During the exercises you will need to write code in multiple languages such as
P4, Java, and Python. While the exercises do not prescribe the use of any
specific IDE or code editor, the **ONF Cloud Tutorial Platform** provides access
to a web-based version of Visual Studio Code (VS Code).

If you are using the tutorial VM, you will find the Java IDE [IntelliJ IDEA
Community Edition](https://www.jetbrains.com/idea/), already pre-loaded with
plugins for P4 syntax highlighting and Python development. We suggest using
IntelliJ IDEA especially when working on the ONOS app, as it provides code
completion for all ONOS APIs.

## Repo structure

This repo is structured as follows:

 * `p4src/` P4 implementation
 * `yang/` YANG model used in exercise 2
 * `app/` custom ONOS app Java implementation
 * `mininet/` Mininet script to emulate a 2x2 leaf-spine fabric topology of
   `stratum_bmv2` devices
 * `util/` Utility scripts
 * `ptf/` P4 data plane unit tests based on Packet Test Framework (PTF)

## Tutorial commands

To facilitate working on the exercises, we provide a set of make-based commands
to control the different aspects of the tutorial. Commands will be introduced
throughout the exercises; here's a quick reference:

| Make command        | Description                                            |
|---------------------|--------------------------------------------------------|
| `make deps`         | Pull and build all required dependencies               |
| `make p4-build`     | Build P4 program                                       |
| `make p4-test`      | Run PTF tests                                          |
| `make start`        | Start Mininet and ONOS containers                      |
| `make stop`         | Stop all containers                                    |
| `make restart`      | Restart containers clearing any previous state         |
| `make onos-cli`     | Access the ONOS CLI (password: `rocks`, Ctrl-D to exit)|
| `make onos-log`     | Show the ONOS log                                      |
| `make mn-cli`       | Access the Mininet CLI (Ctrl-D to exit)                |
| `make mn-log`       | Show the Mininet log (i.e., the CLI output)            |
| `make app-build`    | Build custom ONOS app                                  |
| `make app-reload`   | Install and activate the ONOS app                      |
| `make netcfg`       | Push netcfg.json file (network config) to ONOS         |
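A typical first session strings several of these targets together. The sketch below only prints the usual bring-up sequence rather than invoking `make` itself; the target names are taken from the Makefile in this repo, while the `run_all` helper is a hypothetical wrapper of ours:

```shell
#!/bin/sh
# Print the usual bring-up sequence for the tutorial environment.
# Target names come from the repo Makefile; run_all is a hypothetical
# helper that only echoes the commands, it does not execute them.
run_all() {
    for target in deps start app-build app-reload netcfg; do
        echo "make ${target}"
    done
}

run_all
```

Running each printed `make` command manually, in this order, pulls the dependencies, brings up Mininet and ONOS, builds and installs the ONOS app, and finally pushes the network configuration.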

## Exercises

Click on the exercise name to see the instructions:

 1. [P4Runtime basics](./EXERCISE-1.md)
 2. [Yang, OpenConfig, and gNMI basics](./EXERCISE-2.md)
 3. [Using ONOS as the control plane](./EXERCISE-3.md)
 4. [Enabling ONOS built-in services](./EXERCISE-4.md)
 5. [Implementing IPv6 routing with ECMP](./EXERCISE-5.md)
 6. [Implementing SRv6](./EXERCISE-6.md)
 7. [Trellis Basics](./EXERCISE-7.md)
 8. [GTP termination with fabric.p4](./EXERCISE-8.md)

## Solutions

You can find solutions for each exercise in the [solution](solution) directory.
Feel free to compare your solution to the reference one whenever you feel stuck.

[![Build Status](https://travis-ci.org/opennetworkinglab/ngsdn-tutorial.svg?branch=advanced)](https://travis-ci.org/opennetworkinglab/ngsdn-tutorial)


================================================
FILE: app/pom.xml
================================================
<?xml version="1.0" encoding="UTF-8"?>
<!--
  ~ Copyright 2019 Open Networking Foundation
  ~
  ~ Licensed under the Apache License, Version 2.0 (the "License");
  ~ you may not use this file except in compliance with the License.
  ~ You may obtain a copy of the License at
  ~
  ~     http://www.apache.org/licenses/LICENSE-2.0
  ~
  ~ Unless required by applicable law or agreed to in writing, software
  ~ distributed under the License is distributed on an "AS IS" BASIS,
  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  ~ See the License for the specific language governing permissions and
  ~ limitations under the License.
  -->
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>org.onosproject</groupId>
        <artifactId>onos-dependencies</artifactId>
        <version>2.2.2</version>
    </parent>

    <groupId>org.onosproject</groupId>
    <artifactId>ngsdn-tutorial</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>bundle</packaging>

    <description>NG-SDN tutorial app</description>
    <url>http://www.onosproject.org</url>

    <properties>
        <onos.app.name>org.onosproject.ngsdn-tutorial</onos.app.name>
        <onos.app.title>NG-SDN Tutorial App</onos.app.title>
        <onos.app.origin>https://www.onosproject.org</onos.app.origin>
        <onos.app.category>Traffic Steering</onos.app.category>
        <onos.app.url>https://www.onosproject.org</onos.app.url>
        <onos.app.readme>
            Provides IPv6 routing capabilities to a leaf-spine network of
            Stratum switches
        </onos.app.readme>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.onosproject</groupId>
            <artifactId>onos-api</artifactId>
            <version>${onos.version}</version>
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>org.onosproject</groupId>
            <artifactId>onos-protocols-p4runtime-model</artifactId>
            <version>${onos.version}</version>
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>org.onosproject</groupId>
            <artifactId>onos-protocols-p4runtime-api</artifactId>
            <version>${onos.version}</version>
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>org.onosproject</groupId>
            <artifactId>onos-protocols-grpc-api</artifactId>
            <version>${onos.version}</version>
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>org.onosproject</groupId>
            <artifactId>onlab-osgi</artifactId>
            <version>${onos.version}</version>
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>org.onosproject</groupId>
            <artifactId>onlab-misc</artifactId>
            <version>${onos.version}</version>
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>org.onosproject</groupId>
            <artifactId>onos-cli</artifactId>
            <version>${onos.version}</version>
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>com.google.guava</groupId>
            <artifactId>guava</artifactId>
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-databind</artifactId>
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>org.onosproject</groupId>
            <artifactId>onos-api</artifactId>
            <version>${onos.version}</version>
            <scope>test</scope>
            <classifier>tests</classifier>
        </dependency>

        <dependency>
            <groupId>org.osgi</groupId>
            <artifactId>org.osgi.service.component.annotations</artifactId>
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>org.osgi</groupId>
            <artifactId>org.osgi.core</artifactId>
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>org.apache.karaf.shell</groupId>
            <artifactId>org.apache.karaf.shell.console</artifactId>
            <scope>provided</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.felix</groupId>
                <artifactId>maven-bundle-plugin</artifactId>
                <configuration>
                    <instructions>
                        <Karaf-Commands>
                            org.onosproject.ngsdn.tutorial.cli
                        </Karaf-Commands>
                    </instructions>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.onosproject</groupId>
                <artifactId>onos-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

</project>


================================================
FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/AppConstants.java
================================================
/*
 * Copyright 2019-present Open Networking Foundation
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.onosproject.ngsdn.tutorial;

import org.onosproject.net.pi.model.PiPipeconfId;

public class AppConstants {

    public static final String APP_NAME = "org.onosproject.ngsdn-tutorial";
    public static final PiPipeconfId PIPECONF_ID = new PiPipeconfId("org.onosproject.ngsdn-tutorial");

    public static final int DEFAULT_FLOW_RULE_PRIORITY = 10;
    public static final int INITIAL_SETUP_DELAY = 2; // Seconds.
    public static final int CLEAN_UP_DELAY = 2000; // milliseconds
    public static final int DEFAULT_CLEAN_UP_RETRY_TIMES = 10;

    public static final int CPU_PORT_ID = 255;
    public static final int CPU_CLONE_SESSION_ID = 99;
}


================================================
FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/Ipv6RoutingComponent.java
================================================
/*
 * Copyright 2019-present Open Networking Foundation
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.onosproject.ngsdn.tutorial;

import com.google.common.collect.Lists;
import org.onlab.packet.Ip6Address;
import org.onlab.packet.Ip6Prefix;
import org.onlab.packet.IpAddress;
import org.onlab.packet.IpPrefix;
import org.onlab.packet.MacAddress;
import org.onlab.util.ItemNotFoundException;
import org.onosproject.core.ApplicationId;
import org.onosproject.mastership.MastershipService;
import org.onosproject.net.Device;
import org.onosproject.net.DeviceId;
import org.onosproject.net.Host;
import org.onosproject.net.Link;
import org.onosproject.net.PortNumber;
import org.onosproject.net.config.NetworkConfigService;
import org.onosproject.net.device.DeviceEvent;
import org.onosproject.net.device.DeviceListener;
import org.onosproject.net.device.DeviceService;
import org.onosproject.net.flow.FlowRule;
import org.onosproject.net.flow.FlowRuleService;
import org.onosproject.net.flow.criteria.PiCriterion;
import org.onosproject.net.group.GroupDescription;
import org.onosproject.net.group.GroupService;
import org.onosproject.net.host.HostEvent;
import org.onosproject.net.host.HostListener;
import org.onosproject.net.host.HostService;
import org.onosproject.net.host.InterfaceIpAddress;
import org.onosproject.net.intf.Interface;
import org.onosproject.net.intf.InterfaceService;
import org.onosproject.net.link.LinkEvent;
import org.onosproject.net.link.LinkListener;
import org.onosproject.net.link.LinkService;
import org.onosproject.net.pi.model.PiActionId;
import org.onosproject.net.pi.model.PiActionParamId;
import org.onosproject.net.pi.model.PiMatchFieldId;
import org.onosproject.net.pi.runtime.PiAction;
import org.onosproject.net.pi.runtime.PiActionParam;
import org.onosproject.net.pi.runtime.PiActionProfileGroupId;
import org.onosproject.net.pi.runtime.PiTableAction;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Deactivate;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ReferenceCardinality;
import org.onosproject.ngsdn.tutorial.common.FabricDeviceConfig;
import org.onosproject.ngsdn.tutorial.common.Utils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.Optional;
import java.util.Set;
import java.util.stream.Collectors;

import static com.google.common.collect.Streams.stream;
import static org.onosproject.ngsdn.tutorial.AppConstants.INITIAL_SETUP_DELAY;

/**
 * App component that configures devices to provide IPv6 routing capabilities
 * across the whole fabric.
 */
@Component(
        immediate = true,
        // *** TODO EXERCISE 5
        // set to true when ready
        enabled = false
)
public class Ipv6RoutingComponent {

    private static final Logger log = LoggerFactory.getLogger(Ipv6RoutingComponent.class);

    private static final int DEFAULT_ECMP_GROUP_ID = 0xec3b0000;
    private static final long GROUP_INSERT_DELAY_MILLIS = 200;

    private final HostListener hostListener = new InternalHostListener();
    private final LinkListener linkListener = new InternalLinkListener();
    private final DeviceListener deviceListener = new InternalDeviceListener();

    private ApplicationId appId;

    //--------------------------------------------------------------------------
    // ONOS CORE SERVICE BINDING
    //
    // These variables are set by the Karaf runtime environment before calling
    // the activate() method.
    //--------------------------------------------------------------------------

    @Reference(cardinality = ReferenceCardinality.MANDATORY)
    private FlowRuleService flowRuleService;

    @Reference(cardinality = ReferenceCardinality.MANDATORY)
    private HostService hostService;

    @Reference(cardinality = ReferenceCardinality.MANDATORY)
    private MastershipService mastershipService;

    @Reference(cardinality = ReferenceCardinality.MANDATORY)
    private GroupService groupService;

    @Reference(cardinality = ReferenceCardinality.MANDATORY)
    private DeviceService deviceService;

    @Reference(cardinality = ReferenceCardinality.MANDATORY)
    private NetworkConfigService networkConfigService;

    @Reference(cardinality = ReferenceCardinality.MANDATORY)
    private InterfaceService interfaceService;

    @Reference(cardinality = ReferenceCardinality.MANDATORY)
    private LinkService linkService;

    @Reference(cardinality = ReferenceCardinality.MANDATORY)
    private MainComponent mainComponent;

    //--------------------------------------------------------------------------
    // COMPONENT ACTIVATION.
    //
    // When loading/unloading the app the Karaf runtime environment will call
    // activate()/deactivate().
    //--------------------------------------------------------------------------

    @Activate
    protected void activate() {
        appId = mainComponent.getAppId();

        hostService.addListener(hostListener);
        linkService.addListener(linkListener);
        deviceService.addListener(deviceListener);

        // Schedule set up for all devices.
        mainComponent.scheduleTask(this::setUpAllDevices, INITIAL_SETUP_DELAY);

        log.info("Started");
    }

    @Deactivate
    protected void deactivate() {
        hostService.removeListener(hostListener);
        linkService.removeListener(linkListener);
        deviceService.removeListener(deviceListener);

        log.info("Stopped");
    }

    //--------------------------------------------------------------------------
    // METHODS TO COMPLETE.
    //
    // Complete the implementation wherever you see TODO.
    //--------------------------------------------------------------------------

    /**
     * Sets up the "My Station" table for the given device using the
     * myStationMac address found in the config.
     * <p>
     * This method will be called at component activation for each device
     * (switch) known by ONOS, and every time a new device-added event is
     * captured by the InternalDeviceListener defined below.
     *
     * @param deviceId the device ID
     */
    private void setUpMyStationTable(DeviceId deviceId) {

        log.info("Adding My Station rules to {}...", deviceId);

        final MacAddress myStationMac = getMyStationMac(deviceId);

        // HINT: in our solution, the My Station table matches on the *ethernet
        // destination* and there is only one action called *NoAction*, which is
        // used as an indication of "table hit" in the control block.

        // *** TODO EXERCISE 5
        // Modify P4Runtime entity names to match the content of the P4Info file
        // (look for the fully qualified names of tables, match fields, and actions).
        // ---- START SOLUTION ----
        final String tableId = "MODIFY ME";

        final PiCriterion match = PiCriterion.builder()
                .matchExact(
                        PiMatchFieldId.of("MODIFY ME"),
                        myStationMac.toBytes())
                .build();

        // Creates an action that does *NoAction* when hit.
        final PiTableAction action = PiAction.builder()
                .withId(PiActionId.of("MODIFY ME"))
                .build();
        // ---- END SOLUTION ----

        final FlowRule myStationRule = Utils.buildFlowRule(
                deviceId, appId, tableId, match, action);

        flowRuleService.applyFlowRules(myStationRule);
    }
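The matchExact() criterion above consumes the raw 6-byte form of the MAC address, which ONOS's MacAddress.toBytes() produces. As a standalone illustration (this helper class is hypothetical, not ONOS code), a minimal equivalent conversion is:

```java
// Hypothetical sketch: the 6-byte MAC representation that an exact-match
// criterion consumes. Illustrative only; ONOS's MacAddress does this internally.
class MacBytesSketch {

    /** Parses a colon-separated MAC string into its 6-byte representation. */
    static byte[] macToBytes(String mac) {
        String[] parts = mac.split(":");
        byte[] out = new byte[6];
        for (int i = 0; i < 6; i++) {
            out[i] = (byte) Integer.parseInt(parts[i], 16);
        }
        return out;
    }
}
```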

    /**
     * Creates an ONOS SELECT group for the routing table to provide ECMP
     * forwarding for the given collection of next hop MAC addresses. ONOS
     * SELECT groups are equivalent to P4Runtime action selector groups.
     * <p>
     * This method will be called by the routing policy methods below to insert
     * groups in the L3 table.
     *
     * @param groupId     the group ID
     * @param nextHopMacs the collection of MAC addresses of next hops
     * @param deviceId    the device where the group will be installed
     * @return a SELECT group
     */
    private GroupDescription createNextHopGroup(int groupId,
                                                Collection<MacAddress> nextHopMacs,
                                                DeviceId deviceId) {

        String actionProfileId = "IngressPipeImpl.ecmp_selector";

        final List<PiAction> actions = Lists.newArrayList();

        // Build one "set next hop" action for each next hop
        // *** TODO EXERCISE 5
        // Modify P4Runtime entity names to match the content of the P4Info file
        // (look for the fully qualified names of tables, match fields, and actions).
        // ---- START SOLUTION ----
        final String tableId = "MODIFY ME";
        for (MacAddress nextHopMac : nextHopMacs) {
            final PiAction action = PiAction.builder()
                    .withId(PiActionId.of("MODIFY ME"))
                    .withParameter(new PiActionParam(
                            // Action param name.
                            PiActionParamId.of("MODIFY ME"),
                            // Action param value.
                            nextHopMac.toBytes()))
                    .build();

            actions.add(action);
        }
        // ---- END SOLUTION ----

        return Utils.buildSelectGroup(
                deviceId, tableId, actionProfileId, groupId, actions, appId);
    }

    /**
     * Creates a routing flow rule that matches on the given IPv6 prefix and
     * executes the given group ID (created before).
     *
     * @param deviceId  the device where flow rule will be installed
     * @param ip6Prefix the IPv6 prefix
     * @param groupId   the group ID
     * @return a flow rule
     */
    private FlowRule createRoutingRule(DeviceId deviceId, Ip6Prefix ip6Prefix,
                                       int groupId) {

        // *** TODO EXERCISE 5
        // Modify P4Runtime entity names to match the content of the P4Info file
        // (look for the fully qualified names of tables, match fields, and actions).
        // ---- START SOLUTION ----
        final String tableId = "MODIFY ME";
        final PiCriterion match = PiCriterion.builder()
                .matchLpm(
                        PiMatchFieldId.of("MODIFY ME"),
                        ip6Prefix.address().toOctets(),
                        ip6Prefix.prefixLength())
                .build();

        final PiTableAction action = PiActionProfileGroupId.of(groupId);
        // ---- END SOLUTION ----

        return Utils.buildFlowRule(
                deviceId, appId, tableId, match, action);
    }

    /**
     * Creates a flow rule for the L2 table mapping the given next hop MAC to
     * the given output port.
     * <p>
     * This is called by the routing policy methods below to establish L2-based
     * forwarding inside the fabric, e.g., when deviceId is a leaf switch and
     * nextHopMac is that of a spine switch.
     *
     * @param deviceId   the device
     * @param nexthopMac the next hop (destination) MAC
     * @param outPort    the output port
     * @return a flow rule
     */
    private FlowRule createL2NextHopRule(DeviceId deviceId, MacAddress nexthopMac,
                                         PortNumber outPort) {

        // *** TODO EXERCISE 5
        // Modify P4Runtime entity names to match the content of the P4Info file
        // (look for the fully qualified names of tables, match fields, and actions).
        // ---- START SOLUTION ----
        final String tableId = "MODIFY ME";
        final PiCriterion match = PiCriterion.builder()
                .matchExact(PiMatchFieldId.of("MODIFY ME"),
                        nexthopMac.toBytes())
                .build();


        final PiAction action = PiAction.builder()
                .withId(PiActionId.of("MODIFY ME"))
                .withParameter(new PiActionParam(
                        PiActionParamId.of("MODIFY ME"),
                        outPort.toLong()))
                .build();
        // ---- END SOLUTION ----

        return Utils.buildFlowRule(
                deviceId, appId, tableId, match, action);
    }

    //--------------------------------------------------------------------------
    // EVENT LISTENERS
    //
    // Events are processed only if isRelevant() returns true.
SYMBOL INDEX (376 symbols across 35 files)

FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/AppConstants.java
  class AppConstants (line 21) | public class AppConstants {

FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/Ipv6RoutingComponent.java
  class Ipv6RoutingComponent (line 82) | @Component(
    method activate (line 142) | @Activate
    method deactivate (line 156) | @Deactivate
    method setUpMyStationTable (line 181) | private void setUpMyStationTable(DeviceId deviceId) {
    method createNextHopGroup (line 227) | private GroupDescription createNextHopGroup(int groupId,
    method createRoutingRule (line 268) | private FlowRule createRoutingRule(DeviceId deviceId, Ip6Prefix ip6Pre...
    method createL2NextHopRule (line 302) | private FlowRule createL2NextHopRule(DeviceId deviceId, MacAddress nex...
    class InternalHostListener (line 338) | class InternalHostListener implements HostListener {
      method isRelevant (line 340) | @Override
      method event (line 361) | @Override
    class InternalLinkListener (line 386) | class InternalLinkListener implements LinkListener {
      method isRelevant (line 388) | @Override
      method event (line 404) | @Override
    class InternalDeviceListener (line 432) | class InternalDeviceListener implements DeviceListener {
      method isRelevant (line 434) | @Override
      method event (line 450) | @Override
    method setUpL2NextHopRules (line 473) | private void setUpL2NextHopRules(DeviceId deviceId) {
    method setUpHostRules (line 499) | private void setUpHostRules(DeviceId deviceId, Host host) {
    method setUpFabricRoutes (line 547) | private void setUpFabricRoutes(DeviceId deviceId) {
    method setUpSpineRoutes (line 561) | private void setUpSpineRoutes(DeviceId spineId) {
    method setUpLeafRoutes (line 602) | private void setUpLeafRoutes(DeviceId leafId) {
    method isSpine (line 664) | private boolean isSpine(DeviceId deviceId) {
    method isLeaf (line 676) | private boolean isLeaf(DeviceId deviceId) {
    method getMyStationMac (line 687) | private MacAddress getMyStationMac(DeviceId deviceId) {
    method getDeviceConfig (line 700) | private Optional<FabricDeviceConfig> getDeviceConfig(DeviceId deviceId) {
    method getInterfaceIpv6Prefixes (line 713) | private Set<Ip6Prefix> getInterfaceIpv6Prefixes(DeviceId deviceId) {
    method macToGroupId (line 730) | private int macToGroupId(MacAddress mac) {
    method insertInOrder (line 742) | private void insertInOrder(GroupDescription group, Collection<FlowRule...
    method getDeviceSid (line 760) | private Ip6Address getDeviceSid(DeviceId deviceId) {
    method setUpAllDevices (line 771) | private synchronized void setUpAllDevices() {

FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/L2BridgingComponent.java
  class L2BridgingComponent (line 63) | @Component(
    method activate (line 118) | @Activate
    method deactivate (line 131) | @Deactivate
    method setUpDevice (line 150) | private void setUpDevice(DeviceId deviceId) {
    method insertMulticastGroup (line 170) | private void insertMulticastGroup(DeviceId deviceId) {
    method insertMulticastFlowRules (line 204) | private void insertMulticastFlowRules(DeviceId deviceId) {
    method insertUnmatchedBridgingFlowRule (line 262) | @SuppressWarnings("unused")
    method learnHost (line 311) | private void learnHost(Host host, DeviceId deviceId, PortNumber port) {
    class InternalDeviceListener (line 353) | public class InternalDeviceListener implements DeviceListener {
      method isRelevant (line 355) | @Override
      method event (line 370) | @Override
    class InternalHostListener (line 392) | public class InternalHostListener implements HostListener {
      method isRelevant (line 394) | @Override
      method event (line 416) | @Override
    method getHostFacingPorts (line 443) | private Set<PortNumber> getHostFacingPorts(DeviceId deviceId) {
    method isSpine (line 469) | private boolean isSpine(DeviceId deviceId) {
    method setUpAllDevices (line 492) | private void setUpAllDevices() {

FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/MainComponent.java
  class MainComponent (line 42) | @Component(immediate = true, service = MainComponent.class)
    method createConfig (line 74) | @Override
    method activate (line 86) | @Activate
    method deactivate (line 106) | @Deactivate
    method getAppId (line 120) | ApplicationId getAppId() {
    method getExecutorService (line 129) | public ExecutorService getExecutorService() {
    method scheduleTask (line 140) | public void scheduleTask(Runnable task, int delaySeconds) {
    method cleanUp (line 152) | private boolean cleanUp() {
    method waitPreviousCleanup (line 176) | private void waitPreviousCleanup() {

FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/NdpReplyComponent.java
  class NdpReplyComponent (line 61) | @Component(
    method activate (line 107) | @Activate
    method deactivate (line 117) | @Deactivate
    method setUpAllDevices (line 132) | private void setUpAllDevices() {
    method setUpDevice (line 147) | private void setUpDevice(DeviceId deviceId) {
    method buildNdpReplyFlowRule (line 193) | private FlowRule buildNdpReplyFlowRule(DeviceId deviceId,
    class InternalDeviceListener (line 232) | public class InternalDeviceListener implements DeviceListener {
      method isRelevant (line 234) | @Override
      method event (line 249) | @Override
    method getIp6Addresses (line 277) | private Collection<Ip6Address> getIp6Addresses(Interface iface) {
    method installRules (line 291) | private void installRules(Collection<FlowRule> flowRules) {

FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/Srv6Component.java
  class Srv6Component (line 58) | @Component(
    method activate (line 102) | @Activate
    method deactivate (line 115) | @Deactivate
    method setUpMySidTable (line 134) | private void setUpMySidTable(DeviceId deviceId) {
    method insertSrv6InsertRule (line 175) | public void insertSrv6InsertRule(DeviceId deviceId, Ip6Address destIp,...
    method clearSrv6InsertRules (line 219) | public void clearSrv6InsertRules(DeviceId deviceId) {
    class InternalDeviceListener (line 245) | public class InternalDeviceListener implements DeviceListener {
      method isRelevant (line 247) | @Override
      method event (line 262) | @Override
    method setUpAllDevices (line 287) | private synchronized void setUpAllDevices() {
    method getDeviceConfig (line 304) | private Optional<FabricDeviceConfig> getDeviceConfig(DeviceId deviceId) {
    method getMySid (line 315) | private Ip6Address getMySid(DeviceId deviceId) {

FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/cli/Srv6ClearCommand.java
  class Srv6ClearCommand (line 32) | @Service
    method doExecute (line 42) | @Override

FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/cli/Srv6InsertCommand.java
  class Srv6InsertCommand (line 37) | @Service
    method doExecute (line 53) | @Override

FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/cli/Srv6SidCompleter.java
  class Srv6SidCompleter (line 37) | @Service
    method complete (line 40) | @Override

FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/common/FabricDeviceConfig.java
  class FabricDeviceConfig (line 27) | public class FabricDeviceConfig extends Config<DeviceId> {
    method isValid (line 34) | @Override
    method myStationMac (line 46) | public MacAddress myStationMac() {
    method mySid (line 56) | public Ip6Address mySid() {
    method isSpine (line 67) | public boolean isSpine() {

FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/common/Utils.java
  class Utils (line 53) | public final class Utils {
    method buildMulticastGroup (line 57) | public static GroupDescription buildMulticastGroup(
    method buildCloneGroup (line 65) | public static GroupDescription buildCloneGroup(
    method buildReplicationGroup (line 73) | private static GroupDescription buildReplicationGroup(
    method buildFlowRule (line 101) | public static FlowRule buildFlowRule(DeviceId switchId, ApplicationId ...
    method buildSelectGroup (line 117) | public static GroupDescription buildSelectGroup(DeviceId deviceId,
    method sleep (line 140) | public static void sleep(int millis) {

FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/InterpreterImpl.java
  class InterpreterImpl (line 63) | public class InterpreterImpl extends AbstractHandlerBehaviour
    method mapOutboundPacket (line 93) | @Override
    method buildPacketOut (line 142) | private PiPacketOperation buildPacketOut(ByteBuffer pktData, long port...
    method mapInboundPacket (line 182) | @Override
    method mapLogicalPortNumber (line 224) | @Override
    method mapCriterionType (line 233) | @Override
    method mapTreatment (line 242) | @Override
    method mapFlowRuleTableId (line 248) | @Override

FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/PipeconfLoader.java
  class PipeconfLoader (line 48) | @Component(immediate = true, service = PipeconfLoader.class)
    method activate (line 63) | @Activate
    method deactivate (line 79) | @Deactivate
    method buildPipeconf (line 84) | private PiPipeconf buildPipeconf() throws P4InfoParserException {
    method removePipeconfDrivers (line 100) | private void removePipeconfDrivers() {

FILE: app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/PipelinerImpl.java
  class PipelinerImpl (line 53) | public class PipelinerImpl extends AbstractHandlerBehaviour implements P...
    method init (line 66) | @Override
    method filter (line 73) | @Override
    method forward (line 78) | @Override
    method next (line 142) | @Override
    method getNextMappings (line 147) | @Override

FILE: mininet/recv-gtp.py
  function handle_pkt (line 16) | def handle_pkt(pkt, ex):
  function handle_timeout (line 35) | def handle_timeout(signum, frame):

FILE: mininet/topo-gtp.py
  class IPv4Host (line 29) | class IPv4Host(Host):
    method config (line 33) | def config(self, mac=None, ip=None, defaultRoute=None, lo='up', gw=None,
  class TutorialTopo (line 55) | class TutorialTopo(Topo):
    method __init__ (line 60) | def __init__(self, *args, **kwargs):
  function main (line 92) | def main():

FILE: mininet/topo-v4.py
  class IPv4Host (line 29) | class IPv4Host(Host):
    method config (line 33) | def config(self, mac=None, ip=None, defaultRoute=None, lo='up', gw=None,
  class TaggedIPv4Host (line 54) | class TaggedIPv4Host(Host):
    method config (line 60) | def config(self, mac=None, ip=None, defaultRoute=None, lo='up', gw=None,
    method terminate (line 88) | def terminate(self):
  class TutorialTopo (line 93) | class TutorialTopo(Topo):
    method __init__ (line 96) | def __init__(self, *args, **kwargs):
  function main (line 140) | def main():

FILE: mininet/topo-v6.py
  class IPv6Host (line 29) | class IPv6Host(Host):
    method config (line 33) | def config(self, ipv6, ipv6_gw=None, **params):
    method terminate (line 50) | def terminate(self):
  class TutorialTopo (line 54) | class TutorialTopo(Topo):
    method __init__ (line 57) | def __init__(self, *args, **kwargs):
  function main (line 101) | def main():

FILE: ptf/lib/base_test.py
  function print_inline (line 91) | def print_inline(text):
  class partialmethod (line 98) | class partialmethod(partial):
    method __get__ (line 99) | def __get__(self, instance, owner):
  function stringify (line 110) | def stringify(n, length):
  function ipv4_to_binary (line 116) | def ipv4_to_binary(addr):
  function ipv6_to_binary (line 121) | def ipv6_to_binary(addr):
  function mac_to_binary (line 126) | def mac_to_binary(addr):
  function format_pkt_match (line 131) | def format_pkt_match(received_pkt, expected_pkt):
  function format_pb_msg_match (line 157) | def format_pb_msg_match(received_msg, expected_msg):
  function pkt_mac_swap (line 169) | def pkt_mac_swap(pkt):
  function pkt_route (line 176) | def pkt_route(pkt, mac_dst):
  function pkt_decrement_ttl (line 182) | def pkt_decrement_ttl(pkt):
  function genNdpNsPkt (line 190) | def genNdpNsPkt(target_ip, src_mac=HOST1_MAC, src_ip=HOST1_IPV6):
  function genNdpNaPkt (line 200) | def genNdpNaPkt(target_ip, target_mac,
  class P4RuntimeErrorFormatException (line 210) | class P4RuntimeErrorFormatException(Exception):
    method __init__ (line 215) | def __init__(self, message):
  class P4RuntimeErrorIterator (line 220) | class P4RuntimeErrorIterator:
    method __init__ (line 221) | def __init__(self, grpc_error):
    method __iter__ (line 243) | def __iter__(self):
    method next (line 246) | def next(self):
  class P4RuntimeWriteException (line 268) | class P4RuntimeWriteException(Exception):
    method __init__ (line 269) | def __init__(self, grpc_error):
    method __str__ (line 280) | def __str__(self):
  class P4RuntimeTest (line 294) | class P4RuntimeTest(BaseTest):
    method setUp (line 295) | def setUp(self):
    method set_up_stream (line 345) | def set_up_stream(self):
    method handshake (line 367) | def handshake(self):
    method tearDown (line 380) | def tearDown(self):
    method tear_down_stream (line 384) | def tear_down_stream(self):
    method get_packet_in (line 388) | def get_packet_in(self, timeout=2):
    method verify_packet_in (line 395) | def verify_packet_in(self, exp_packet_in_msg, timeout=2):
    method get_stream_packet (line 419) | def get_stream_packet(self, type_, timeout=1):
    method send_packet_out (line 434) | def send_packet_out(self, packet):
    method swports (line 439) | def swports(self, idx):
    method _write (line 444) | def _write(self, req):
    method write_request (line 452) | def write_request(self, req, store=True):
    method insert (line 458) | def insert(self, entity):
    method get_new_write_request (line 477) | def get_new_write_request(self):
    method insert_pre_multicast_group (line 485) | def insert_pre_multicast_group(self, group_id, ports):
    method insert_pre_clone_session (line 498) | def insert_pre_clone_session(self, session_id, ports, cos=0,
    method undo_write_requests (line 517) | def undo_write_requests(self, reqs):
  function autocleanup (line 545) | def autocleanup(f):
  function skip_on_hw (line 558) | def skip_on_hw(cls):

FILE: ptf/lib/convert.py
  function matchesMac (line 33) | def matchesMac(mac_addr_string):
  function encodeMac (line 37) | def encodeMac(mac_addr_string):
  function decodeMac (line 41) | def decodeMac(encoded_mac_addr):
  function matchesIPv4 (line 48) | def matchesIPv4(ip_addr_string):
  function encodeIPv4 (line 52) | def encodeIPv4(ip_addr_string):
  function decodeIPv4 (line 56) | def decodeIPv4(encoded_ip_addr):
  function matchesIPv6 (line 60) | def matchesIPv6(ip_addr_string):
  function encodeIPv6 (line 68) | def encodeIPv6(ip_addr_string):
  function bitwidthToBytes (line 72) | def bitwidthToBytes(bitwidth):
  function encodeNum (line 76) | def encodeNum(number, bitwidth):
  function decodeNum (line 85) | def decodeNum(encoded_number):
  function encode (line 89) | def encode(x, bitwidth):
  function test (line 113) | def test():

FILE: ptf/lib/helper.py
  function get_match_field_value (line 24) | def get_match_field_value(match_field):
  class P4InfoHelper (line 40) | class P4InfoHelper(object):
    method __init__ (line 41) | def __init__(self, p4_info_filepath):
    method get_next_mbr_id (line 51) | def get_next_mbr_id(self):
    method get_next_grp_id (line 56) | def get_next_grp_id(self):
    method get (line 61) | def get(self, entity_type, name=None, id=None):
    method get_id (line 81) | def get_id(self, entity_type, name):
    method get_name (line 84) | def get_name(self, entity_type, id):
    method __getattr__ (line 87) | def __getattr__(self, attr):
    method get_match_field (line 107) | def get_match_field(self, table_name, name=None, id=None):
    method get_packet_metadata (line 124) | def get_packet_metadata(self, meta_type, name=None, id=None):
    method get_match_field_id (line 139) | def get_match_field_id(self, table_name, match_field_name):
    method get_match_field_name (line 142) | def get_match_field_name(self, table_name, match_field_id):
    method get_match_field_pb (line 145) | def get_match_field_pb(self, table_name, match_field_name, value):
    method get_action_param (line 170) | def get_action_param(self, action_name, name=None, id=None):
    method get_action_param_id (line 185) | def get_action_param_id(self, action_name, param_name):
    method get_action_param_name (line 188) | def get_action_param_name(self, action_name, param_id):
    method get_action_param_pb (line 191) | def get_action_param_pb(self, action_name, param_name, value):
    method build_table_entry (line 198) | def build_table_entry(self,
    method build_action (line 230) | def build_action(self, action_name, action_params=None):
    method build_act_prof_member (line 240) | def build_act_prof_member(self, act_prof_name,
    method build_act_prof_group (line 249) | def build_act_prof_group(self, act_prof_name, group_id, actions=()):
    method build_packet_out (line 270) | def build_packet_out(self, payload, metadata=None):
    method build_packet_in (line 282) | def build_packet_in(self, payload, metadata=None):
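
`P4InfoHelper` wraps a parsed p4info.txt file so tests can refer to tables, actions, and match fields by name instead of numeric ID, and build P4Runtime messages (`build_table_entry`, `build_packet_out`, etc.) from those names. The bidirectional name/ID lookup at its core (`get_id` / `get_name`) can be illustrated with a plain-dict stand-in; the real class walks `p4.config.P4Info` protobuf messages, and the table name and ID below are hypothetical values for illustration only:

```python
class MiniP4InfoHelper:
    """Illustrative stand-in for P4InfoHelper's name<->ID lookups."""

    def __init__(self, entities):
        # entities: {entity_type: {name: id}}, mimicking what the real
        # helper extracts from a parsed p4info.txt file.
        self.entities = entities

    def get_id(self, entity_type, name):
        return self.entities[entity_type][name]

    def get_name(self, entity_type, id):
        for name, eid in self.entities[entity_type].items():
            if eid == id:
                return name
        raise KeyError("No %s with id %d" % (entity_type, id))

# Hypothetical table name/ID pair, not taken from the repo's p4info.
helper = MiniP4InfoHelper(
    {"tables": {"IngressPipeImpl.l2_exact_table": 33573501}})
```

The real helper layers convenience accessors (`get_tables_id`, `get_actions_name`, …) on top of this pattern via `__getattr__`.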

FILE: ptf/lib/runner.py
  function error (line 40) | def error(msg, *args, **kwargs):
  function warn (line 44) | def warn(msg, *args, **kwargs):
  function info (line 48) | def info(msg, *args, **kwargs):
  function debug (line 52) | def debug(msg, *args, **kwargs):
  function check_ifaces (line 56) | def check_ifaces(ifaces):
  function build_bmv2_config (line 67) | def build_bmv2_config(bmv2_json_path):
  function update_config (line 75) | def update_config(p4info_path, bmv2_json_path, grpc_addr, device_id):
  function run_test (line 156) | def run_test(p4info_path, grpc_addr, device_id, cpu_port, ptfdir, port_m...
  function check_ptf (line 208) | def check_ptf():
  function main (line 221) | def main():

FILE: ptf/tests/bridging.py
  class ArpNdpRequestWithCloneTest (line 50) | class ArpNdpRequestWithCloneTest(P4RuntimeTest):
    method runTest (line 56) | def runTest(self):
    method testPacket (line 68) | def testPacket(self, pkt):
  class ArpNdpReplyWithCloneTest (line 171) | class ArpNdpReplyWithCloneTest(P4RuntimeTest):
    method runTest (line 176) | def runTest(self):
    method testPacket (line 189) | def testPacket(self, pkt):
  class BridgingTest (line 254) | class BridgingTest(P4RuntimeTest):
    method runTest (line 257) | def runTest(self):
    method testPacket (line 265) | def testPacket(self, pkt):

FILE: ptf/tests/packetio.py
  class PacketOutTest (line 48) | class PacketOutTest(P4RuntimeTest):
    method runTest (line 54) | def runTest(self):
    method testPacket (line 61) | def testPacket(self, pkt):
  class PacketInTest (line 85) | class PacketInTest(P4RuntimeTest):
    method runTest (line 90) | def runTest(self):
    method testPacket (line 98) | def testPacket(self, pkt):

FILE: ptf/tests/routing.py
  class IPv6RoutingTest (line 46) | class IPv6RoutingTest(P4RuntimeTest):
    method runTest (line 49) | def runTest(self):
    method testPacket (line 57) | def testPacket(self, pkt):
  class NdpReplyGenTest (line 135) | class NdpReplyGenTest(P4RuntimeTest):
    method runTest (line 141) | def runTest(self):

FILE: ptf/tests/srv6.py
  function insert_srv6_header (line 45) | def insert_srv6_header(pkt, sid_list):
  function pop_srv6_header (line 63) | def pop_srv6_header(pkt):
  function set_cksum (line 70) | def set_cksum(pkt, cksum):
  class Srv6InsertTest (line 80) | class Srv6InsertTest(P4RuntimeTest):
    method runTest (line 85) | def runTest(self):
    method testPacket (line 101) | def testPacket(self, pkt, sid_list, next_hop_mac):
  class Srv6TransitTest (line 196) | class Srv6TransitTest(P4RuntimeTest):
    method runTest (line 202) | def runTest(self):
    method testPacket (line 220) | def testPacket(self, pkt, next_hop_mac, my_sid):
  class Srv6EndTest (line 299) | class Srv6EndTest(P4RuntimeTest):
    method runTest (line 304) | def runTest(self):
    method testPacket (line 321) | def testPacket(self, pkt, sid_list, next_hop_mac, my_sid):
  class Srv6EndPspTest (line 406) | class Srv6EndPspTest(P4RuntimeTest):
    method runTest (line 413) | def runTest(self):
    method testPacket (line 428) | def testPacket(self, pkt, sid_list, next_hop_mac, my_sid):
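
`insert_srv6_header` builds the IPv6 Segment Routing Header (SRH, routing type 4) that these tests use to craft expected packets from a SID list. As a rough guide to the wire format the tests rely on (per RFC 8754; the repo itself uses Scapy, which this struct-level sketch avoids), assuming no TLVs and `segments_left` initialized to point at the last entry:

```python
import struct
from ipaddress import IPv6Address

def build_srh(next_header, sid_list):
    """Pack a minimal SRH: 8-byte fixed part + one 16-byte entry per SID.

    Illustrative sketch of the header layout, not the repo's Scapy-based
    insert_srv6_header implementation.
    """
    n = len(sid_list)
    hdr_ext_len = 2 * n          # in 8-octet units, excluding the first 8 bytes
    segments_left = n - 1        # SIDs still to be visited
    last_entry = n - 1           # index of the last element in the SID list
    fixed = struct.pack("!BBBBBBH",
                        next_header, hdr_ext_len,
                        4,                 # routing type 4 = segment routing
                        segments_left, last_entry,
                        0, 0)              # flags, tag
    segs = b"".join(IPv6Address(s).packed for s in sid_list)
    return fixed + segs
```

With two SIDs the header is 40 bytes: the 8-byte fixed part plus two 128-bit segment entries.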

FILE: solution/app/src/main/java/org/onosproject/ngsdn/tutorial/Ipv6RoutingComponent.java
  class Ipv6RoutingComponent (line 82) | @Component(
    method activate (line 142) | @Activate
    method deactivate (line 156) | @Deactivate
    method setUpMyStationTable (line 181) | private void setUpMyStationTable(DeviceId deviceId) {
    method createNextHopGroup (line 227) | private GroupDescription createNextHopGroup(int groupId,
    method createRoutingRule (line 268) | private FlowRule createRoutingRule(DeviceId deviceId, Ip6Prefix ip6Pre...
    method createL2NextHopRule (line 302) | private FlowRule createL2NextHopRule(DeviceId deviceId, MacAddress nex...
    class InternalHostListener (line 338) | class InternalHostListener implements HostListener {
      method isRelevant (line 340) | @Override
      method event (line 361) | @Override
    class InternalLinkListener (line 386) | class InternalLinkListener implements LinkListener {
      method isRelevant (line 388) | @Override
      method event (line 404) | @Override
    class InternalDeviceListener (line 432) | class InternalDeviceListener implements DeviceListener {
      method isRelevant (line 434) | @Override
      method event (line 450) | @Override
    method setUpL2NextHopRules (line 473) | private void setUpL2NextHopRules(DeviceId deviceId) {
    method setUpHostRules (line 499) | private void setUpHostRules(DeviceId deviceId, Host host) {
    method setUpFabricRoutes (line 547) | private void setUpFabricRoutes(DeviceId deviceId) {
    method setUpSpineRoutes (line 561) | private void setUpSpineRoutes(DeviceId spineId) {
    method setUpLeafRoutes (line 602) | private void setUpLeafRoutes(DeviceId leafId) {
    method isSpine (line 664) | private boolean isSpine(DeviceId deviceId) {
    method isLeaf (line 676) | private boolean isLeaf(DeviceId deviceId) {
    method getMyStationMac (line 687) | private MacAddress getMyStationMac(DeviceId deviceId) {
    method getDeviceConfig (line 700) | private Optional<FabricDeviceConfig> getDeviceConfig(DeviceId deviceId) {
    method getInterfaceIpv6Prefixes (line 713) | private Set<Ip6Prefix> getInterfaceIpv6Prefixes(DeviceId deviceId) {
    method macToGroupId (line 730) | private int macToGroupId(MacAddress mac) {
    method insertInOrder (line 742) | private void insertInOrder(GroupDescription group, Collection<FlowRule...
    method getDeviceSid (line 760) | private Ip6Address getDeviceSid(DeviceId deviceId) {
    method setUpAllDevices (line 771) | private synchronized void setUpAllDevices() {

FILE: solution/app/src/main/java/org/onosproject/ngsdn/tutorial/L2BridgingComponent.java
  class L2BridgingComponent (line 63) | @Component(
    method activate (line 118) | @Activate
    method deactivate (line 131) | @Deactivate
    method setUpDevice (line 150) | private void setUpDevice(DeviceId deviceId) {
    method insertMulticastGroup (line 170) | private void insertMulticastGroup(DeviceId deviceId) {
    method insertMulticastFlowRules (line 204) | private void insertMulticastFlowRules(DeviceId deviceId) {
    method insertUnmatchedBridgingFlowRule (line 262) | @SuppressWarnings("unused")
    method learnHost (line 311) | private void learnHost(Host host, DeviceId deviceId, PortNumber port) {
    class InternalDeviceListener (line 353) | public class InternalDeviceListener implements DeviceListener {
      method isRelevant (line 355) | @Override
      method event (line 370) | @Override
    class InternalHostListener (line 392) | public class InternalHostListener implements HostListener {
      method isRelevant (line 394) | @Override
      method event (line 416) | @Override
    method getHostFacingPorts (line 443) | private Set<PortNumber> getHostFacingPorts(DeviceId deviceId) {
    method isSpine (line 469) | private boolean isSpine(DeviceId deviceId) {
    method setUpAllDevices (line 492) | private void setUpAllDevices() {

FILE: solution/app/src/main/java/org/onosproject/ngsdn/tutorial/NdpReplyComponent.java
  class NdpReplyComponent (line 61) | @Component(
    method activate (line 107) | @Activate
    method deactivate (line 117) | @Deactivate
    method setUpAllDevices (line 132) | private void setUpAllDevices() {
    method setUpDevice (line 147) | private void setUpDevice(DeviceId deviceId) {
    method buildNdpReplyFlowRule (line 193) | private FlowRule buildNdpReplyFlowRule(DeviceId deviceId,
    class InternalDeviceListener (line 232) | public class InternalDeviceListener implements DeviceListener {
      method isRelevant (line 234) | @Override
      method event (line 249) | @Override
    method getIp6Addresses (line 277) | private Collection<Ip6Address> getIp6Addresses(Interface iface) {
    method installRules (line 291) | private void installRules(Collection<FlowRule> flowRules) {

FILE: solution/app/src/main/java/org/onosproject/ngsdn/tutorial/Srv6Component.java
  class Srv6Component (line 58) | @Component(
    method activate (line 102) | @Activate
    method deactivate (line 115) | @Deactivate
    method setUpMySidTable (line 134) | private void setUpMySidTable(DeviceId deviceId) {
    method insertSrv6InsertRule (line 175) | public void insertSrv6InsertRule(DeviceId deviceId, Ip6Address destIp,...
    method clearSrv6InsertRules (line 219) | public void clearSrv6InsertRules(DeviceId deviceId) {
    class InternalDeviceListener (line 245) | public class InternalDeviceListener implements DeviceListener {
      method isRelevant (line 247) | @Override
      method event (line 262) | @Override
    method setUpAllDevices (line 287) | private synchronized void setUpAllDevices() {
    method getDeviceConfig (line 304) | private Optional<FabricDeviceConfig> getDeviceConfig(DeviceId deviceId) {
    method getMySid (line 315) | private Ip6Address getMySid(DeviceId deviceId) {

FILE: solution/app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/InterpreterImpl.java
  class InterpreterImpl (line 63) | public class InterpreterImpl extends AbstractHandlerBehaviour
    method mapOutboundPacket (line 93) | @Override
    method buildPacketOut (line 142) | private PiPacketOperation buildPacketOut(ByteBuffer pktData, long port...
    method mapInboundPacket (line 182) | @Override
    method mapLogicalPortNumber (line 224) | @Override
    method mapCriterionType (line 233) | @Override
    method mapTreatment (line 242) | @Override
    method mapFlowRuleTableId (line 248) | @Override

FILE: solution/ptf/tests/bridging.py
  class ArpNdpRequestWithCloneTest (line 50) | class ArpNdpRequestWithCloneTest(P4RuntimeTest):
    method runTest (line 56) | def runTest(self):
    method testPacket (line 68) | def testPacket(self, pkt):
  class ArpNdpReplyWithCloneTest (line 171) | class ArpNdpReplyWithCloneTest(P4RuntimeTest):
    method runTest (line 176) | def runTest(self):
    method testPacket (line 189) | def testPacket(self, pkt):
  class BridgingTest (line 254) | class BridgingTest(P4RuntimeTest):
    method runTest (line 257) | def runTest(self):
    method testPacket (line 265) | def testPacket(self, pkt):

FILE: solution/ptf/tests/packetio.py
  class PacketOutTest (line 48) | class PacketOutTest(P4RuntimeTest):
    method runTest (line 54) | def runTest(self):
    method testPacket (line 61) | def testPacket(self, pkt):
  class PacketInTest (line 85) | class PacketInTest(P4RuntimeTest):
    method runTest (line 90) | def runTest(self):
    method testPacket (line 98) | def testPacket(self, pkt):

FILE: solution/ptf/tests/routing.py
  class IPv6RoutingTest (line 46) | class IPv6RoutingTest(P4RuntimeTest):
    method runTest (line 49) | def runTest(self):
    method testPacket (line 57) | def testPacket(self, pkt):
  class NdpReplyGenTest (line 135) | class NdpReplyGenTest(P4RuntimeTest):
    method runTest (line 141) | def runTest(self):

FILE: solution/ptf/tests/srv6.py
  function insert_srv6_header (line 45) | def insert_srv6_header(pkt, sid_list):
  function pop_srv6_header (line 63) | def pop_srv6_header(pkt):
  function set_cksum (line 70) | def set_cksum(pkt, cksum):
  class Srv6InsertTest (line 80) | class Srv6InsertTest(P4RuntimeTest):
    method runTest (line 85) | def runTest(self):
    method testPacket (line 101) | def testPacket(self, pkt, sid_list, next_hop_mac):
  class Srv6TransitTest (line 196) | class Srv6TransitTest(P4RuntimeTest):
    method runTest (line 202) | def runTest(self):
    method testPacket (line 220) | def testPacket(self, pkt, next_hop_mac, my_sid):
  class Srv6EndTest (line 299) | class Srv6EndTest(P4RuntimeTest):
    method runTest (line 304) | def runTest(self):
    method testPacket (line 321) | def testPacket(self, pkt, sid_list, next_hop_mac, my_sid):
  class Srv6EndPspTest (line 406) | class Srv6EndPspTest(P4RuntimeTest):
    method runTest (line 413) | def runTest(self):
    method testPacket (line 428) | def testPacket(self, pkt, sid_list, next_hop_mac, my_sid):
Condensed preview — 86 files, each showing path, character count, and a content snippet.
[
  {
    "path": ".gitignore",
    "chars": 264,
    "preview": ".idea/\ntmp/\np4src/build\napp/target\napp/src/main/resources/p4info.txt\napp/src/main/resources/bmv2.json\nptf/stratum_bmv2.l"
  },
  {
    "path": ".travis.yml",
    "chars": 164,
    "preview": "dist: xenial\n\nlanguage: python\n\nservices:\n  - docker\n\npython:\n  - \"3.5\"\n\ninstall:\n  - make deps\n\nscript:\n  - make check "
  },
  {
    "path": "EXERCISE-1.md",
    "chars": 14891,
    "preview": "# Exercise 1: P4Runtime Basics\n\nThis exercise provides a hands-on introduction to the P4Runtime API. You will be\nasked t"
  },
  {
    "path": "EXERCISE-2.md",
    "chars": 16760,
    "preview": "# Exercise 2: Yang, OpenConfig, and gNMI Basics\n\nThis exercise is designed to give you more exposure to YANG, OpenConfig"
  },
  {
    "path": "EXERCISE-3.md",
    "chars": 14905,
    "preview": "# Exercise 3: Using ONOS as the Control Plane\n\nThis exercise provides a hands-on introduction to ONOS, where you will le"
  },
  {
    "path": "EXERCISE-4.md",
    "chars": 22708,
    "preview": "# Exercise 4: Enabling ONOS Built-in Services\n\nIn this exercise, you will integrate ONOS built-in services for link and\n"
  },
  {
    "path": "EXERCISE-5.md",
    "chars": 20154,
    "preview": "# Exercise 5: IPv6 Routing\n\nIn this exercise, you will be modifying the P4 program and ONOS app to add\nsupport for IPv6-"
  },
  {
    "path": "EXERCISE-6.md",
    "chars": 12419,
    "preview": "# Exercise 6: Segment Routing v6 (SRv6)\n\nIn this exercise, you will be implementing a simplified version of segment\nrout"
  },
  {
    "path": "EXERCISE-7.md",
    "chars": 27959,
    "preview": "# Exercise 7: Trellis Basics\n\nThe goal of this exercise is to learn how to set up and configure an emulated\nTrellis envi"
  },
  {
    "path": "EXERCISE-8.md",
    "chars": 20833,
    "preview": "# Exercise 8: GTP termination with fabric.p4\n\nThe goal of this exercise is to learn how to use Trellis and fabric.p4 to\n"
  },
  {
    "path": "LICENSE",
    "chars": 11357,
    "preview": "                                 Apache License\n                           Version 2.0, January 2004\n                   "
  },
  {
    "path": "Makefile",
    "chars": 7936,
    "preview": "mkfile_path := $(abspath $(lastword $(MAKEFILE_LIST)))\ncurr_dir := $(patsubst %/,%,$(dir $(mkfile_path)))\n\ninclude util/"
  },
  {
    "path": "README.md",
    "chars": 7800,
    "preview": "# Next-Gen SDN Tutorial (Advanced)\n\nWelcome to the Next-Gen SDN tutorial!\n\nThis tutorial is targeted at students and pra"
  },
  {
    "path": "app/pom.xml",
    "chars": 5675,
    "preview": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!--\n  ~ Copyright 2019 Open Networking Foundation\n  ~\n  ~ Licensed under the Apa"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/AppConstants.java",
    "chars": 1281,
    "preview": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"Licen"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/Ipv6RoutingComponent.java",
    "chars": 30625,
    "preview": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"Licen"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/L2BridgingComponent.java",
    "chars": 19396,
    "preview": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"Licen"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/MainComponent.java",
    "chars": 6905,
    "preview": "package org.onosproject.ngsdn.tutorial;\n\nimport com.google.common.collect.Lists;\nimport org.onlab.util.SharedScheduledEx"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/NdpReplyComponent.java",
    "chars": 11391,
    "preview": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"Licen"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/Srv6Component.java",
    "chars": 11810,
    "preview": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"Licen"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/cli/Srv6ClearCommand.java",
    "chars": 1978,
    "preview": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"Licen"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/cli/Srv6InsertCommand.java",
    "chars": 2899,
    "preview": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"Licen"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/cli/Srv6SidCompleter.java",
    "chars": 2346,
    "preview": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"Licen"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/cli/package-info.java",
    "chars": 43,
    "preview": "package org.onosproject.ngsdn.tutorial.cli;"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/common/FabricDeviceConfig.java",
    "chars": 2313,
    "preview": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"Licen"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/common/Utils.java",
    "chars": 6008,
    "preview": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"Licen"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/InterpreterImpl.java",
    "chars": 10532,
    "preview": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"Licen"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/PipeconfLoader.java",
    "chars": 4511,
    "preview": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"Licen"
  },
  {
    "path": "app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/PipelinerImpl.java",
    "chars": 5901,
    "preview": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"Licen"
  },
  {
    "path": "docker-compose.yml",
    "chars": 929,
    "preview": "version: \"3\"\n\nservices:\n  mininet:\n    image: opennetworking/ngsdn-tutorial:stratum_bmv2\n    hostname: mininet\n    conta"
  },
  {
    "path": "mininet/flowrule-gtp.json",
    "chars": 760,
    "preview": "{\n  \"flows\": [\n    {\n      \"deviceId\": \"device:leaf1\",\n      \"tableId\": \"FabricIngress.spgw_ingress.dl_sess_lookup\",\n   "
  },
  {
    "path": "mininet/host-cmd",
    "chars": 844,
    "preview": "#!/bin/bash\n\n# Attach to a Mininet host and run a command\n\nif [ -z $1 ]; then\n  echo \"usage: $0 host cmd [args...]\"\n  ex"
  },
  {
    "path": "mininet/netcfg-gtp.json",
    "chars": 3033,
    "preview": "{\n  \"devices\": {\n    \"device:leaf1\": {\n      \"basic\": {\n        \"managementAddress\": \"grpc://mininet:50001?device_id=1\","
  },
  {
    "path": "mininet/netcfg-sr.json",
    "chars": 3882,
    "preview": "{\n  \"devices\": {\n    \"device:leaf1\": {\n      \"basic\": {\n        \"managementAddress\": \"grpc://mininet:50001?device_id=1\","
  },
  {
    "path": "mininet/netcfg.json",
    "chars": 3541,
    "preview": "{\n  \"devices\": {\n    \"device:leaf1\": {\n      \"basic\": {\n        \"managementAddress\": \"grpc://mininet:50001?device_id=1\","
  },
  {
    "path": "mininet/recv-gtp.py",
    "chars": 1102,
    "preview": "#!/usr/bin/python\n\n# Script used in Exercise 8 that sniffs packets and prints on screen whether\n# they are GTP encapsula"
  },
  {
    "path": "mininet/send-udp.py",
    "chars": 436,
    "preview": "#!/usr/bin/python\n\n# Script used in Exercise 8.\n# Send downlink packets to UE address.\n\nfrom scapy.layers.inet import IP"
  },
  {
    "path": "mininet/topo-gtp.py",
    "chars": 3913,
    "preview": "#!/usr/bin/python\n\n#  Copyright 2019-present Open Networking Foundation\n#\n#  Licensed under the Apache License, Version "
  },
  {
    "path": "mininet/topo-v4.py",
    "chars": 5955,
    "preview": "#!/usr/bin/python\n\n#  Copyright 2019-present Open Networking Foundation\n#\n#  Licensed under the Apache License, Version "
  },
  {
    "path": "mininet/topo-v6.py",
    "chars": 4419,
    "preview": "#!/usr/bin/python\n\n#  Copyright 2019-present Open Networking Foundation\n#\n#  Licensed under the Apache License, Version "
  },
  {
    "path": "p4src/main.p4",
    "chars": 19968,
    "preview": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"Licen"
  },
  {
    "path": "p4src/snippets.p4",
    "chars": 3778,
    "preview": "//------------------------------------------------------------------------------\n// SNIPPETS FOR EXERCISE 5 (IPV6 ROUTIN"
  },
  {
    "path": "ptf/lib/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "ptf/lib/base_test.py",
    "chars": 19626,
    "preview": "# Copyright 2013-present Barefoot Networks, Inc.\n# Copyright 2018-present Open Networking Foundation\n#\n# Licensed under "
  },
  {
    "path": "ptf/lib/chassis_config.pb.txt",
    "chars": 1550,
    "preview": "description: \"Config for PTF tests using virtual interfaces\"\nchassis {\n  platform: PLT_P4_SOFT_SWITCH\n  name: \"bmv2-simp"
  },
  {
    "path": "ptf/lib/convert.py",
    "chars": 4436,
    "preview": "# Copyright 2017-present Open Networking Foundation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n"
  },
  {
    "path": "ptf/lib/helper.py",
    "chars": 11435,
    "preview": "# Copyright 2017-present Open Networking Foundation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n"
  },
  {
    "path": "ptf/lib/port_map.json",
    "chars": 710,
    "preview": "[\n    {\n        \"ptf_port\": 0,\n        \"p4_port\": 1,\n        \"iface_name\": \"veth1\"\n    },\n    {\n        \"ptf_port\": 1,\n "
  },
  {
    "path": "ptf/lib/runner.py",
    "chars": 9216,
    "preview": "#!/usr/bin/env python2\n\n# Copyright 2013-present Barefoot Networks, Inc.\n# Copyright 2018-present Open Networking Founda"
  },
  {
    "path": "ptf/lib/start_bmv2.sh",
    "chars": 1751,
    "preview": "#!/usr/bin/env bash\n\nset -xe\n\nDIR=\"$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )\" >/dev/null 2>&1 && pwd )\"\n\nCPU_PORT=255\nGRPC"
  },
  {
    "path": "ptf/run_tests",
    "chars": 1042,
    "preview": "#!/usr/bin/env bash\n\nset -e\n\nPTF_DIR=\"$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )\" >/dev/null 2>&1 && pwd )\"\nP4SRC_DIR=${PTF"
  },
  {
    "path": "ptf/tests/bridging.py",
    "chars": 11107,
    "preview": "# Copyright 2013-present Barefoot Networks, Inc.\n# Copyright 2018-present Open Networking Foundation\n#\n# Licensed under "
  },
  {
    "path": "ptf/tests/packetio.py",
    "chars": 5009,
    "preview": "# Copyright 2019-present Open Networking Foundation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n"
  },
  {
    "path": "ptf/tests/routing.py",
    "chars": 6302,
    "preview": "# Copyright 2019-present Open Networking Foundation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n"
  },
  {
    "path": "ptf/tests/srv6.py",
    "chars": 16796,
    "preview": "# Copyright 2019-present Open Networking Foundation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n"
  },
  {
    "path": "solution/app/src/main/java/org/onosproject/ngsdn/tutorial/Ipv6RoutingComponent.java",
    "chars": 30780,
    "preview": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"Licen"
  },
  {
    "path": "solution/app/src/main/java/org/onosproject/ngsdn/tutorial/L2BridgingComponent.java",
    "chars": 19392,
    "preview": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"Licen"
  },
  {
    "path": "solution/app/src/main/java/org/onosproject/ngsdn/tutorial/NdpReplyComponent.java",
    "chars": 11431,
    "preview": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"Licen"
  },
  {
    "path": "solution/app/src/main/java/org/onosproject/ngsdn/tutorial/Srv6Component.java",
    "chars": 11896,
    "preview": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"Licen"
  },
  {
    "path": "solution/app/src/main/java/org/onosproject/ngsdn/tutorial/pipeconf/InterpreterImpl.java",
    "chars": 10470,
    "preview": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"Licen"
  },
  {
    "path": "solution/mininet/flowrule-gtp.json",
    "chars": 741,
    "preview": "{\n  \"flows\": [\n    {\n      \"deviceId\": \"device:leaf1\",\n      \"tableId\": \"FabricIngress.spgw_ingress.dl_sess_lookup\",\n   "
  },
  {
    "path": "solution/mininet/netcfg-gtp.json",
    "chars": 3038,
    "preview": "{\n  \"devices\": {\n    \"device:leaf1\": {\n      \"basic\": {\n        \"managementAddress\": \"grpc://mininet:50001?device_id=1\","
  },
  {
    "path": "solution/mininet/netcfg-sr.json",
    "chars": 4312,
    "preview": "{\n  \"devices\": {\n    \"device:leaf1\": {\n      \"basic\": {\n        \"managementAddress\": \"grpc://mininet:50001?device_id=1\","
  },
  {
    "path": "solution/p4src/main.p4",
    "chars": 26619,
    "preview": "/*\n * Copyright 2019-present Open Networking Foundation\n *\n * Licensed under the Apache License, Version 2.0 (the \"Licen"
  },
  {
    "path": "solution/ptf/tests/bridging.py",
    "chars": 11107,
    "preview": "# Copyright 2013-present Barefoot Networks, Inc.\n# Copyright 2018-present Open Networking Foundation\n#\n# Licensed under "
  },
  {
    "path": "solution/ptf/tests/packetio.py",
    "chars": 5063,
    "preview": "# Copyright 2019-present Open Networking Foundation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n"
  },
  {
    "path": "solution/ptf/tests/routing.py",
    "chars": 6505,
    "preview": "# Copyright 2019-present Open Networking Foundation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n"
  },
  {
    "path": "solution/ptf/tests/srv6.py",
    "chars": 17534,
    "preview": "# Copyright 2019-present Open Networking Foundation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n"
  },
  {
    "path": "util/docker/Makefile",
    "chars": 481,
    "preview": "include Makefile.vars\n\nbuild: build-stratum_bmv2 build-mvn\npush: push-stratum_bmv2 push-mvn\n\nbuild-stratum_bmv2:\n\tcd str"
  },
  {
    "path": "util/docker/Makefile.vars",
    "chars": 1013,
    "preview": "ONOS_IMG := onosproject/onos:2.2.2\nP4RT_SH_IMG := p4lang/p4runtime-sh:latest\nP4C_IMG := opennetworking/p4c:stable\nSTRATU"
  },
  {
    "path": "util/docker/mvn/Dockerfile",
    "chars": 802,
    "preview": "# Copyright 2019-present Open Networking Foundation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n"
  },
  {
    "path": "util/docker/stratum_bmv2/Dockerfile",
    "chars": 1513,
    "preview": "# Copyright 2019-present Open Networking Foundation\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n"
  },
  {
    "path": "util/gnmi-cli",
    "chars": 72,
    "preview": "#!/bin/bash\ndocker run --rm -it --network host bocon/gnmi-cli:latest $@\n"
  },
  {
    "path": "util/mn-cmd",
    "chars": 128,
    "preview": "#!/bin/bash\n\nif [ -z $1 ]; then\n  echo \"usage: $0 host cmd [args...]\"\n  exit 1\nfi\n\ndocker exec -it mininet /mininet/host"
  },
  {
    "path": "util/mn-pcap",
    "chars": 530,
    "preview": "#!/bin/bash\n\nDIR=\"$( cd \"$( dirname \"${BASH_SOURCE[0]}\" )\" >/dev/null 2>&1 && pwd )\"\n\nif [ -z $1 ]; then\n  echo \"usage: "
  },
  {
    "path": "util/oc-pb-decoder",
    "chars": 69,
    "preview": "#!/bin/bash\ndocker run --rm -i bocon/yang-tools:latest oc-pb-decoder\n"
  },
  {
    "path": "util/onos-cmd",
    "chars": 298,
    "preview": "#!/bin/bash\n\nif [ -z $1 ]; then\n  echo \"usage: $0 cmd [args...]\"\n  exit 1\nfi\n\n# Use sshpass to skip the password prompt\n"
  },
  {
    "path": "util/p4rt-sh",
    "chars": 4069,
    "preview": "#!/usr/bin/env python3\n\n# Copyright 2019 Barefoot Networks, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the"
  },
  {
    "path": "util/vm/.gitignore",
    "chars": 21,
    "preview": "*.log\n.vagrant\n*.ova\n"
  },
  {
    "path": "util/vm/README.md",
    "chars": 866,
    "preview": "# Scripts to build the tutorial VM\n\n## Requirements\n\n- [Vagrant](https://www.vagrantup.com/) (tested v2.2.5)\n- [VirtualB"
  },
  {
    "path": "util/vm/Vagrantfile",
    "chars": 1301,
    "preview": "REQUIRED_PLUGINS = %w( vagrant-vbguest vagrant-reload vagrant-disksize )\n\nVagrant.configure(2) do |config|\n\n  # Install "
  },
  {
    "path": "util/vm/build-vm.sh",
    "chars": 625,
    "preview": "#!/usr/bin/env bash\n\nset -xe\n\nfunction wait_vm_shutdown {\n    set +x\n    while vboxmanage showvminfo $1 | grep -c \"runni"
  },
  {
    "path": "util/vm/cleanup.sh",
    "chars": 321,
    "preview": "#!/bin/bash\nset -ex\n\nsudo apt-get clean\nsudo apt-get -y autoremove\n\nsudo rm -rf /tmp/*\n\nhistory -c\nrm -f ~/.bash_history"
  },
  {
    "path": "util/vm/root-bootstrap.sh",
    "chars": 1389,
    "preview": "#!/usr/bin/env bash\n\nset -xe\n\n# Create user sdn\nuseradd -m -d /home/sdn -s /bin/bash sdn\nusermod -aG sudo sdn\nusermod -a"
  },
  {
    "path": "util/vm/user-bootstrap.sh",
    "chars": 317,
    "preview": "#!/bin/bash\nset -xe\n\ncd /home/sdn\n\ncp /etc/skel/.bashrc ~/\ncp /etc/skel/.profile ~/\ncp /etc/skel/.bash_logout ~/\n\n#  Wit"
  },
  {
    "path": "yang/demo-port.yang",
    "chars": 2141,
    "preview": "// A module is a self-contained tree of nodes\nmodule demo-port {\n\n    // YANG Boilerplate \n    yang-version \"1\";\n    nam"
  }
]

About this extraction

This page contains the full source code of the opennetworkinglab/ngsdn-tutorial GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 86 files (603.7 KB), approximately 152.6k tokens, and a symbol index with 376 extracted functions, classes, methods, constants, and types.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
