Commit a5d03da0 authored by cedgar

Exercise week 5
# Load Balancing
## Task 1: Equal-Cost Multi-Path Routing
In this exercise we will implement a layer-3 forwarding switch that load balances traffic
towards a destination across equal-cost paths. To load balance traffic across multiple ports we will implement ECMP (Equal-Cost
Multi-Path) routing. When a packet with multiple candidate paths arrives, our switch should pick the next hop by hashing some fields from the
header and computing this hash value modulo the number of equal-cost paths. For example, in the topology below, when `s1` has to send
a packet to `s6`, the switch should determine the output port by computing `hash(some-header-fields) mod 4`. To prevent out-of-order packets, ECMP hashing is done on a per-flow basis,
which means that all packets with the same source and destination IP addresses and the same source and destination
ports always hash to the same next hop.
<p align="center">
<img src="images/multi_hop_topo.png" title="Multi Hop Topology"/>
</p>
For more information about ECMP, see this [page](https://docs.cumulusnetworks.com/display/DOCS/Equal+Cost+Multipath+Load+Sharing+-+Hardware+ECMP).
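To make the per-flow hashing concrete, here is a small Python illustration (not part of the exercise code; the CRC choice and port count are only for illustration) of how hashing the 5-tuple pins a flow to one of 4 equal-cost paths:
```python
import zlib

def ecmp_port(src_ip, dst_ip, src_port, dst_port, proto, num_paths=4):
    # Hash the 5-tuple; the modulo picks one of the equal-cost paths.
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}-{proto}".encode()
    return zlib.crc32(key) % num_paths

# Packets of the same flow always map to the same next hop...
assert ecmp_port("10.0.1.1", "10.0.6.2", 4242, 80, 6) == \
       ecmp_port("10.0.1.1", "10.0.6.2", 4242, 80, 6)
# ...while a different source port may select a different path.
print(ecmp_port("10.0.1.1", "10.0.6.2", 4242, 80, 6),
      ecmp_port("10.0.1.1", "10.0.6.2", 4243, 80, 6))
```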
## Before Starting
As usual, we provide you with the following files:
* `p4app.json`: describes the topology we want to create with the help
of Mininet and the p4-utils package. Note that we disabled `pcap` logging to reduce disk usage. In case you want to use the pcaps, you will have to set the option to `true`.
* `p4src/ecmp.p4`: p4 program skeleton to use as a starting point.
* `p4src/includes`: in today's exercise we split our p4 code into multiple files for the first time. In the includes
directory you will find `headers.p4` and `parsers.p4` (which also have to be completed).
* `send.py`: a small python script to generate multiple packets with different TCP ports.
* `sX-commands`: directory with the static CLI commands for the example topology. These commands will allow you to test your P4 code. However,
once you have working P4 code, your task will be to populate the tables using a controller, for any topology.
### Notes about p4app.json
For this exercise, we will use a new IP assignment strategy. If you have a look at `p4app.json`, you will see that
the option is set to `mixed`. With this strategy, only hosts connected to the same switch are assigned to the same subnet; hosts connected
to a different switch belong to a different `/24` subnet. If you use the namings `hX` and `sX` (e.g., h1, h2, s1...), the IP assignment
goes as follows: `10.0.x.y`, where `x` is the switch id and `y` is the host id. For example, in the topology above,
`h1` gets `10.0.1.1` and `h2` gets `10.0.6.2`.
 
You can find all the documentation about `p4app.json` in the `p4-utils` [documentation](https://github.com/nsg-ethz/p4-utils#topology-description).
## Implementing the L3 forwarding switch + ECMP
To solve this exercise we have to program our switch such that it forwards L3 packets when there are one
or more possible next hops. For that we will use two tables: in the first table we match on the destination IP and,
depending on whether ECMP has to be applied for that destination, either set the output port directly or assign an ECMP group. For the latter case, we
apply a second table that maps `(ecmp_group, hash_output)` to egress ports.
This time you will have to fill the gaps in several files: `p4src/ecmp.p4`, `p4src/include/headers.p4`
and `p4src/include/parsers.p4`. Headers and parser are imported from within `ecmp.p4`. Since the end goal of this
exercise is to write a controller that populates the tables automatically, we provide
the `sX-commands.txt` files for the default topology so that you can test your P4 code.
To complete the exercise you have to do the following subtasks (a sketch putting steps 4-8 together is shown after this list):
1. Use the header definitions that are already provided. Have a look at `p4src/include/headers.p4`.
2. Define a parser that parses packets up to `tcp`. Note that, for simplicity, we won't use `udp` packets
in this exercise. This time you have to define the parser in `p4src/include/parsers.p4`.
3. Define the deparser. Just emit all the headers in the right order. Also in `p4src/include/parsers.p4`.
4. Define a match-action table (`ipv4_lpm`) that matches on the IP destination address of every packet and has three actions: `set_nhop`, `ecmp_group`, and `drop`.
Set `drop` as the default action.
5. Define the action `set_nhop`. This action takes 2 parameters: the destination MAC and the egress port. Use them to set the packet's destination MAC and
`egress_spec`. Set the source MAC to the previous destination MAC (this is not what a real L3 switch would do; a more realistic implementation would use a table
that maps egress ports to the MAC address of each switch interface, but since the source MAC address is not very important for this exercise, just do the swap). When sending packets from one switch to another, the destination
MAC address is not very important either, so you can use a random one. However, keep in mind that when the packet is sent to a host it needs to have the right destination MAC address.
Finally, decrease the packet's TTL by 1. **Note:** since we are in an L3 network, when you send packets from `s1` to `s2` you have to use the destination MAC of the switch interface, not the MAC address of the receiving host; that rewrite is done at the very last hop.
6. Define the action `ecmp_group`. This action takes two parameters: the ECMP group id (14 bits) and the number of next hops (16 bits). This
action is one of the key parts of the ECMP algorithm. You have to do several things:
    1. Compute a hash. To store its output you need a metadata field: define `ecmp_hash` (14 bits) inside
    the metadata struct in `headers.p4`. Use the `hash` extern function to hash the packet's 5-tuple (src ip, dst ip, src port, dst port, protocol). The signature of a hash function is:
    `hash(output_field, (crc16 or crc32), (bit<1>)0, {fields to hash}, (bit<16>)modulo)`.
    2. Define another metadata field and call it, for example, `ecmp_group_id` (14 bits).
    3. Finally, copy the value of the action's ECMP group parameter into the metadata field you just defined (`ecmp_group_id`); it will be used
    as a match key in the second table.

    **Note**: Why is the `ecmp_group_id` needed? In a few words, it allows you to map one IP address to a set of ports, which does not have to be
    the 4 ports we use in this exercise. For example, for `IP1` you could use only the upper 2 ports and for `IP2` load balance over the two lower ports. Thus, by
    creating two ECMP groups you can easily map any destination address to any set of ports.
7. Define the second match-action table (`ecmp_group_to_nhop`) used to map ECMP groups to real next hops. The table should have `exact` matches on the metadata fields
you defined in the previous step: match on `meta.ecmp_group_id` and on the output of the hash function, `meta.ecmp_hash` (which will be
a value ranging from 0 to `NUM_NEXT_HOPS-1`). A match in this table should call the `set_nhop` action you already defined above; a miss should mark the packet
to be dropped (set `drop` as the default action). This design lets us use any subset of interfaces. For example, imagine that
in the topology above we had `h2` and `h3` (`h3` does not exist, but just for the sake of the example): we could define two different ECMP groups in the previous table, one mapping to ports 2 and 4 and one mapping to ports 3 and 5. Then, in this table, we could add two rules per group so that hash outputs `[0,1]` map to ports `[2,4]` and `[3,5]`
respectively.
8. Define the ingress control logic:
    1. Check if the ipv4 header was parsed (use `isValid`).
    2. Apply the first table (`ipv4_lpm`).
    3. If the action `ecmp_group` was run when applying the first table, apply the second table.

    Note: to know which action was run during an apply you can use a switch statement together with `action_run`; for more information about how to check which action was used, check out
    the [P4_16 specification](https://p4.org/p4-spec/docs/P4-16-v1.0.0-spec.html#sec-invoke-mau).
9. Since `ipv4.ttl` is modified, the `ipv4` checksum field needs to be updated; otherwise other network devices (or the receiving hosts) might drop the packet. For that, the `v1model` provides an `extern` function that can be called inside the `MyComputeChecksum` control to update checksum fields. In this exercise you do not have to do anything here; just go to the `ecmp.p4` file and check how `update_checksum` is used.
10. This time we provide you with the six `sX-commands.txt` files, one per switch. Note that only `s1` and `s6` need `ecmp` groups installed; for all
the other switches, rules in the first table (using the `set_nhop` action) will suffice. Knowing how the tables are populated can help you solve the previous subtasks.
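Putting steps 4 to 8 together, here is a minimal sketch of what the ingress pipeline could look like. It assumes the metadata fields `ecmp_hash` and `ecmp_group_id` described above and the types from the provided `headers.p4` (`macAddr_t`, `egressSpec_t`); details such as the table sizes are illustrative:
```p4
action set_nhop(macAddr_t dstAddr, egressSpec_t port) {
    // Swap MACs for simplicity (step 5) and set the egress port.
    hdr.ethernet.srcAddr = hdr.ethernet.dstAddr;
    hdr.ethernet.dstAddr = dstAddr;
    standard_metadata.egress_spec = port;
    // Decrease the TTL by one.
    hdr.ipv4.ttl = hdr.ipv4.ttl - 1;
}

action ecmp_group(bit<14> ecmp_group_id, bit<16> num_nhops) {
    // Hash the 5-tuple modulo the number of next hops (step 6).
    hash(meta.ecmp_hash,
         HashAlgorithm.crc16,
         (bit<1>)0,
         { hdr.ipv4.srcAddr,
           hdr.ipv4.dstAddr,
           hdr.tcp.srcPort,
           hdr.tcp.dstPort,
           hdr.ipv4.protocol },
         num_nhops);
    meta.ecmp_group_id = ecmp_group_id;
}

table ecmp_group_to_nhop {
    key = {
        meta.ecmp_group_id: exact;
        meta.ecmp_hash: exact;
    }
    actions = {
        drop;
        set_nhop;
    }
    size = 1024;
    default_action = drop();
}

table ipv4_lpm {
    key = {
        hdr.ipv4.dstAddr: lpm;
    }
    actions = {
        set_nhop;
        ecmp_group;
        drop;
    }
    size = 1024;
    default_action = drop();
}

apply {
    if (hdr.ipv4.isValid()) {
        // Apply the second table only when ecmp_group was run (step 8).
        switch (ipv4_lpm.apply().action_run) {
            ecmp_group: {
                ecmp_group_to_nhop.apply();
            }
        }
    }
}
```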
## Testing your solution
Once you have the `ecmp.p4` program finished you can test its behavior:
1. Start the topology (this will also compile and load the program).
```bash
sudo p4run
```
2. Check that you can ping:
```bash
mininet> pingall
```
3. Monitor the 4 links from `s1` that will be used during `ecmp` (from `s1-eth2` to `s1-eth5`). This lets you check which path
each flow takes.
```bash
sudo tcpdump -enn -i s1-ethX
```
4. Ping between two hosts:
You should see traffic on only 1 or 2 interfaces (one extra due to the return path),
since all the ping packets have the same 5-tuple.
5. Do iperf between two hosts:
You should also see traffic on only 1 or 2 interfaces (one extra due to the return path),
since all the packets belonging to the same flow have the same 5-tuple, and thus the hash always returns the same index.
6. Get a terminal in `h1`. Use the `send.py` script.
```bash
python send.py 10.0.6.2 1000
```
This will send `tcp syn` packets with random ports. Now you should see packets going through all the interfaces, since each packet will have a different hash. For each packet sent you should see two packets on the wire: (1) the outgoing SYN, and (2) the RST/ACK reply.
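For reference, a script with this behavior can be very small. The following scapy-based sketch is a hypothetical stand-in, not the provided `send.py`; the destination port and source-port range are assumptions:
```python
#!/usr/bin/env python
# Hypothetical sketch, not the provided send.py: sends <count> TCP SYN
# packets with random source ports so each packet hashes differently.
import random
import sys

from scapy.all import IP, TCP, send

dst_ip = sys.argv[1]      # e.g. 10.0.6.2
count = int(sys.argv[2])  # e.g. 1000

for _ in range(count):
    pkt = IP(dst=dst_ip) / TCP(sport=random.randint(1024, 65535),
                               dport=80, flags="S")
    send(pkt, verbose=False)
```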
## Task 2: Routing control plane
The goal of this task is to implement and provide a centralized control plane to the ECMP switch.
Unlike in the previous task, where you specified the entries for the forwarding tables manually,
we will now implement a controller that generates and installs forwarding rules automatically, based on the network topology.
In traditional networks, the control plane is the brain of any networking device. The control plane is in charge of deciding
where packets have to be sent. Distributed control planes exchange topology information with other devices and compute
the best way of sending traffic based on some routing protocol (RIP, OSPF, BGP).
To simplify things, in this exercise we will not implement a distributed and dynamic control plane like the ones mentioned above, but something simple,
centralized, and static. However, your controller should be able to automatically populate the tables of the ECMP exercise for any topology.
### What is already provided
For this task we provide you with the following files:
* `routing-controller.py`: routing controller skeleton. The controller uses global topology
information and the simple switch `runtime_API` to populate the routing tables.
* `topology_generator.py`: python script that automatically generates `p4app` configuration files.
It allows you to generate 3 types of topologies: linear, circular, and random (with a given number of nodes and degree). Run it with the `-h` option to see the
command-line parameters.
## Implementing the router's control plane program
The main task of the controller (we provide a skeleton in `routing-controller.py`) is to translate the network topology
(stored in `topology.db`) into match-action table entries. For example, for the topology we used in the ECMP task,
it should run the following commands to fill the `ipv4_lpm` and `ecmp_group_to_nhop` tables in switch `s1`:
```
table_set_default ipv4_lpm drop
table_set_default ecmp_group_to_nhop drop
table_add ipv4_lpm set_nhop 10.0.1.1/32 => 00:00:0a:00:01:01 1
table_add ipv4_lpm ecmp_group 10.0.6.2/32 => 1 4
table_add ecmp_group_to_nhop set_nhop 1 0 => 00:00:00:02:01:00 2
table_add ecmp_group_to_nhop set_nhop 1 1 => 00:00:00:03:01:00 3
table_add ecmp_group_to_nhop set_nhop 1 2 => 00:00:00:04:01:00 4
table_add ecmp_group_to_nhop set_nhop 1 3 => 00:00:00:05:01:00 5
```
You have to write your controller application in the `routing-controller.py` file that we already provided. You will see that we have already implemented some
small functions that use the `Topology` and `SimpleSwitchAPI` objects from p4utils. Among others, the provided functions are:
1. `connect_to_switches()`: establishes a connection with each simple switch's `thrift` server using the `SimpleSwitchAPI` object and saves those
objects in the `self.controllers` dictionary. This dictionary has the form `{'sw_name' : SimpleSwitchAPI()}`.
2. `reset_states()`: iterates over `self.controllers` and runs the `reset_state` function, which empties the state (registers, tables, etc.) of every switch.
3. `set_table_defaults()`: for each p4 switch, sets the default action for the `ipv4_lpm` and `ecmp_group_to_nhop` tables.
Your task is to implement the `route` function, which is in charge of
populating the table entries such that traffic is routed along the shortest paths in the network.
Furthermore, if multiple equal-cost paths are found, you have to assign them to an ECMP group.
At a high level, the `route` function should do the following (a sketch of one possible structure is shown after the method list below):
1. Iterate over all pairs of switches in the topology.
2. Compute all the shortest paths between each of these pairs of switches.
3. Install the table entries needed, depending on the following 3 scenarios:
    1. The source switch and the destination switch are the same: install an entry for each directly connected host. You need the host IP (use `/32`), its MAC address, and the port index at which it is connected to the switch.
    2. There is a single path between the source and destination switch, and the destination switch has hosts directly connected: this time use the next hop to get the output port and the destination MAC address.
    3. There are multiple paths between the source and destination switch, and the destination switch has hosts directly connected: create an ECMP group (as in the example above) for the set of next hops needed to reach
    the destination switch. If, for the same source switch, the same set of next hops has to be used for another destination, reuse the already defined ECMP group.
To get information about shortest paths, IP addresses, MAC addresses, port indexes, and how nodes are connected to each other, you will have to rely heavily on the topology object from `p4-utils`.
Useful methods provided by the `Topology` object that will help you in this
task:
- `self.topo.get_shortest_paths_between_nodes(src, dst)` returns shortest paths between nodes. It can return multiple paths.
- `self.topo.get_p4switches()` returns a dictionary with all p4 switches. Use keys to get p4 switches list.
- `self.topo.get_hosts_connected_to(node)` returns all the hosts connected to
`node`.
- `self.topo.get_switches_connected_to(node)` returns all the switches
connected to `node`.
- `self.topo.node_to_node_port_num(node1, node2)` returns the port at which
`node1` is connected to `node2`.
- `self.topo.node_to_node_mac(node1, node2)` returns the MAC address of the
interface on `node1` which is connected to `node2`.
- `self.topo.get_host_ip(host)` returns the IP address of `host`.
- `self.topo.get_host_mac(host)` returns the MAC address of `host`.
- `controller.table_add(table_name, action, [match1, match2], [action_parameter1, action_parameter2])`
inserts a table entry. Note that `table_add` expects all parameters in the match and action lists to be strings,
so make sure you cast them first.
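As a rough guide, here is a sketch of one possible structure for the `route` function, using only the methods listed above. The ECMP-group bookkeeping (reusing a group when the same set of next hops serves several destinations) is one possible approach, and the group-id numbering is an assumption:
```python
def route(self):
    # One ECMP-group map per switch: tuple of next hops -> group id.
    ecmp_groups = {sw: {} for sw in self.topo.get_p4switches().keys()}

    for sw_src in self.topo.get_p4switches().keys():
        for sw_dst in self.topo.get_p4switches().keys():

            # Scenario 1: same switch, install /32 entries for local hosts.
            if sw_src == sw_dst:
                for host in self.topo.get_hosts_connected_to(sw_src):
                    port = self.topo.node_to_node_port_num(sw_src, host)
                    self.controllers[sw_src].table_add(
                        "ipv4_lpm", "set_nhop",
                        [str(self.topo.get_host_ip(host)) + "/32"],
                        [str(self.topo.get_host_mac(host)), str(port)])
                continue

            # Only destinations with directly connected hosts matter.
            if not self.topo.get_hosts_connected_to(sw_dst):
                continue

            paths = self.topo.get_shortest_paths_between_nodes(sw_src, sw_dst)
            for host in self.topo.get_hosts_connected_to(sw_dst):
                ip = str(self.topo.get_host_ip(host)) + "/32"

                if len(paths) == 1:
                    # Scenario 2: single path, forward to the next hop.
                    nhop = paths[0][1]
                    port = self.topo.node_to_node_port_num(sw_src, nhop)
                    mac = self.topo.node_to_node_mac(nhop, sw_src)
                    self.controllers[sw_src].table_add(
                        "ipv4_lpm", "set_nhop", [ip], [str(mac), str(port)])
                else:
                    # Scenario 3: multiple paths, reuse or create a group.
                    nhops = tuple(sorted({path[1] for path in paths}))
                    if nhops not in ecmp_groups[sw_src]:
                        group_id = len(ecmp_groups[sw_src]) + 1
                        ecmp_groups[sw_src][nhops] = group_id
                        for i, nhop in enumerate(nhops):
                            port = self.topo.node_to_node_port_num(sw_src, nhop)
                            mac = self.topo.node_to_node_mac(nhop, sw_src)
                            self.controllers[sw_src].table_add(
                                "ecmp_group_to_nhop", "set_nhop",
                                [str(group_id), str(i)],
                                [str(mac), str(port)])
                    group_id = ecmp_groups[sw_src][nhops]
                    self.controllers[sw_src].table_add(
                        "ipv4_lpm", "ecmp_group",
                        [ip], [str(group_id), str(len(nhops))])
```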
You can find documentation about all the functions you need to solve this exercise on the
p4-utils [documentation](https://github.com/nsg-ethz/p4-utils#topology-object) page (the functions documented there should be enough to solve the exercise). However, if you want, you can also find the topology object
source code [here](https://github.com/nsg-ethz/p4-utils/blob/master/p4utils/utils/topology.py) and use other functions.
## Testing your solution
Once you have completed your implementation of the controller's `route` function, you can test the program the same way as before; however,
this time you will have to start the controller to populate the table entries:
1. Start the topology (this will also compile and load the program).
```bash
sudo p4run --config p4app-no-sx.json
```
2. Run the controller.
```bash
python routing-controller.py
```
3. Check that you can ping:
```bash
mininet> pingall
```
4. Check that ECMP works: monitor the 4 links from `s1` that will be used during `ecmp` (from `s1-eth2` to `s1-eth5`). This lets you check which path
each flow takes.
```bash
sudo tcpdump -enn -i s1-ethX
```
5. Ping between two hosts:
You should see traffic on only 1 or 2 interfaces (one extra due to the return path),
since all the ping packets have the same 5-tuple.
6. Do iperf between two hosts:
You should also see traffic on only 1 or 2 interfaces (one extra due to the return path),
since all the packets belonging to the same flow have the same 5-tuple, and thus the hash always returns the same index.
7. Get a terminal in `h1`. Use the `send.py` script.
```bash
python send.py 10.0.6.2 1000
```
This will send `tcp syn` packets with random ports. Now you should see packets going to all the interfaces, since each packet will have a different hash.
### Testing with another topology
Now you have a controller that should be able to automatically populate the routing tables for any topology. To test that your solution works with other topologies, you can use the
`topology_generator.py` script we provided and generate random topologies:
```bash
python topology_generator.py --output_name <name.json> --topo random -n <number of switches to use> -d <average switch degree>
```
This will create a random topology with `n` switches that have on average `d` interfaces (depending on `n`, `d` might not be achievable). Each switch will have one host directly connected to it (so `n` hosts in total).
For example, you can create a random topology with 16 switches and an average degree of 4:
```bash
python topology_generator.py --output_name 16-switches.json --topo random -n 16 -d 4
```
Run the random topology:
```bash
sudo p4run --config 16-switches.json
```
Now run the controller and check that you can send traffic to all the nodes (`pingall`). Furthermore, check that ECMP works.
```
mininet> pingall
*** Ping: testing ping reachability
h1 -> h2 h3 h4 h5 h6 h7 h8 h9 h10 h11 h12 h13 h14 h15 h16
h2 -> h1 h3 h4 h5 h6 h7 h8 h9 h10 h11 h12 h13 h14 h15 h16
h3 -> h1 h2 h4 h5 h6 h7 h8 h9 h10 h11 h12 h13 h14 h15 h16
h4 -> h1 h2 h3 h5 h6 h7 h8 h9 h10 h11 h12 h13 h14 h15 h16
h5 -> h1 h2 h3 h4 h6 h7 h8 h9 h10 h11 h12 h13 h14 h15 h16
h6 -> h1 h2 h3 h4 h5 h7 h8 h9 h10 h11 h12 h13 h14 h15 h16
h7 -> h1 h2 h3 h4 h5 h6 h8 h9 h10 h11 h12 h13 h14 h15 h16
h8 -> h1 h2 h3 h4 h5 h6 h7 h9 h10 h11 h12 h13 h14 h15 h16
h9 -> h1 h2 h3 h4 h5 h6 h7 h8 h10 h11 h12 h13 h14 h15 h16
h10 -> h1 h2 h3 h4 h5 h6 h7 h8 h9 h11 h12 h13 h14 h15 h16
h11 -> h1 h2 h3 h4 h5 h6 h7 h8 h9 h10 h12 h13 h14 h15 h16
h12 -> h1 h2 h3 h4 h5 h6 h7 h8 h9 h10 h11 h13 h14 h15 h16
h13 -> h1 h2 h3 h4 h5 h6 h7 h8 h9 h10 h11 h12 h14 h15 h16
h14 -> h1 h2 h3 h4 h5 h6 h7 h8 h9 h10 h11 h12 h13 h15 h16
h15 -> h1 h2 h3 h4 h5 h6 h7 h8 h9 h10 h11 h12 h13 h14 h16
h16 -> h1 h2 h3 h4 h5 h6 h7 h8 h9 h10 h11 h12 h13 h14 h15
*** Results: 0% dropped (240/240 received)
```
## Task 3: Flowlet Switching
In the previous tasks we used ECMP as a load balancer. ECMP is a very basic (but widely used) technique to load balance traffic across
multiple equal-cost paths. ECMP works very well when it has to load balance many small flows of similar sizes, since it
randomly maps them to one of the possible paths. However, real traffic does not look like that: it is composed of many
small flows, but also a few flows that are much bigger. This makes ECMP suffer from a well-known performance problem, elephant-flow collisions,
in which a few big flows end up colliding on the same path. In this task, we will use state and the time information provided by the software switch's
`standard_metadata` to fix ECMP's collision problem. Instead of load balancing flows, we will load balance `flowlets`.
Flowlet switching leverages the burstiness of TCP flows to achieve better and more dynamic load balancing. TCP flows tend to come in bursts (for instance, because
a flow needs to wait for window space). Every time there is a sufficiently large gap (e.g., 50ms) between packets of the same flow, a new flowlet starts. In flowlet switching,
every flowlet is hashed to a (possibly) new path by hashing the 5-tuple together with a random flowlet ID. Note that packets belonging to the same flowlet are still
hashed to the same path, which avoids packet reordering.
For more information about flowlet switching, check out this [paper](https://www.usenix.org/system/files/conference/nsdi17/nsdi17-vanini.pdf).
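To make the flowlet idea concrete, here is a small Python illustration (not part of the exercise code) of how a gap-based timeout splits a flow's packet arrivals into flowlets, assuming the 50ms gap mentioned above:
```python
GAP = 0.05  # flowlet timeout: 50 ms

def flowlet_ids(arrival_times):
    # Assign a new flowlet id whenever the inter-packet gap exceeds GAP.
    ids, current, last = [], 0, None
    for t in arrival_times:
        if last is not None and t - last > GAP:
            current += 1
        ids.append(current)
        last = t
    return ids

# A burst, a pause, then another burst -> two flowlets.
print(flowlet_ids([0.000, 0.001, 0.002, 0.100, 0.101]))  # [0, 0, 0, 1, 1]
```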
## Before Starting
As usual, we provide you with the following files (plus files provided in the previous task):
* `p4src/flowlet_switching.p4`: p4 program skeleton to use as a starting point.
* `p4src/includes`: In the includes directory you will find `headers.p4` and `parsers.p4` (which also have to be completed).
* `send.py`: a small python script to send bursts of packets that belong to the same flow.
## Implementing flowlet switching
This exercise is an enhancement of ECMP, and thus you can start by copying all the code from the previous task.
You will use exactly the same headers, parser, tables, and CLI commands or controller (so you do not need to write those parts either).
We provide the `flowlet_switching.p4` skeleton; use it as a reference to know where to place the code, but don't forget to
copy the ECMP code over.
To solve this exercise you will have to use two registers: one for flowlet ids (a hash seed or random ID) and one to keep the last timestamp for
every flow. You will have to slightly change the ingress logic, define a new action that reads/writes the flowlet registers, and modify
the hash function used in ECMP by adding a new field (the `flowlet_id`) that makes different flowlets hash differently.
You will have to fill the gaps in several files: `p4src/flowlet_switching.p4`, `p4src/include/headers.p4`
and `p4src/include/parsers.p4`. Remember that you can copy ECMP's headers and parsers since they are the same.
To successfully complete the exercise you have to do the following (a sketch of the register logic from steps 5-7 and 9 is shown after this list):
1. Like in the previous task, header definitions are already provided.
2. Define the parser that parses packets up to `tcp`. Note that, for simplicity, we do not consider `udp` packets
in this exercise. You can simply copy the parser from the previous task.
3. Define the deparser. Just emit all the headers. You can simply copy it from the previous task.
4. Copy the tables and actions from the previous exercise. You will have to slightly modify them.
5. Define two registers (`register<bit<W>>(N)`): `flowlet_to_id` and `flowlet_time_stamp`. For register sizing, use the constants defined at the
beginning of the `flowlet_switching.p4` file: `REGISTER_SIZE`, `TIMESTAMP_WIDTH`, `ID_WIDTH`. We will use these two registers to keep two things:
    1. In the `flowlet_to_id` register we keep the id (a randomly generated number) of each flowlet. This `id` is added to the
    hash function that decides the output port; as long as it does not change, packets of that flow will stay on the same path.
    2. In the `flowlet_time_stamp` register we keep the timestamp of the last observed packet belonging to a flow.
6. Define an action to read the flowlet registers (`read_flowlet_registers`). In this action, hash the 5-tuple
of every packet to obtain the index you will use to read the flowlet registers (to save the index you will need to define a new metadata field with a
width of 14 bits). Using the index you got from the hash function, read the flowlet id and the last timestamp and save them in metadata fields (which you
also have to define). Finally, update the timestamp register using `standard_metadata.ingress_global_timestamp`.
7. Define another action to update the flowlet id (`update_flowlet_id`). We will use this action to update flowlet ids when needed.
In this action you just have to generate a random number and save it in the flowlet-to-id register (using the
register index you already computed). To generate random numbers, you can use the `random` extern. You can find the
extern's signature in the [v1model architecture definition](https://github.com/p4lang/p4c/blob/master/p4include/v1model.p4#L376).
8. Modify the `hash` call you defined in the ECMP exercise (in `ecmp_group`): instead of hashing just the 5-tuple, also
include the metadata field where you store the `flowlet_id` you read from the register (or just updated).
9. Define the ingress control logic (keep the logic from the ECMP exercise and add the following steps before applying the `ipv4_lpm` table):
    1. Read the flowlet registers (by calling the action).
    2. Compute the time difference between now and the last packet observed for the current flow.
    3. Check if the time difference is bigger than `FLOWLET_TIMEOUT` (defined at the beginning of the file with a default
    value of 200ms).
    4. If it is, update the flowlet id. Updating the flowlet id makes the hash function output a new value, and thus packets should be sent to a new port.
    5. Apply `ipv4_lpm` and `ecmp_group_to_nhop` in the same way you did in `ecmp`.
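As a reference, here is a minimal sketch of the register handling from steps 5-7 and the extended ingress logic from step 9. It assumes the skeleton's constants (`REGISTER_SIZE`, `TIMESTAMP_WIDTH`, `ID_WIDTH`, `FLOWLET_TIMEOUT`), that `ID_WIDTH` is 16 and `TIMESTAMP_WIDTH` is 48, and metadata fields (`flowlet_register_index`, `flowlet_id`, `flowlet_last_stamp`, `flowlet_time_diff`) that you define yourself; the random-id range is arbitrary:
```p4
register<bit<ID_WIDTH>>(REGISTER_SIZE) flowlet_to_id;
register<bit<TIMESTAMP_WIDTH>>(REGISTER_SIZE) flowlet_time_stamp;

action read_flowlet_registers() {
    // Hash the 5-tuple to get the register index for this flow (step 6).
    hash(meta.flowlet_register_index,
         HashAlgorithm.crc16,
         (bit<1>)0,
         { hdr.ipv4.srcAddr,
           hdr.ipv4.dstAddr,
           hdr.tcp.srcPort,
           hdr.tcp.dstPort,
           hdr.ipv4.protocol },
         (bit<16>)REGISTER_SIZE);
    flowlet_to_id.read(meta.flowlet_id,
                       (bit<32>)meta.flowlet_register_index);
    flowlet_time_stamp.read(meta.flowlet_last_stamp,
                            (bit<32>)meta.flowlet_register_index);
    // Refresh the timestamp for this flow.
    flowlet_time_stamp.write((bit<32>)meta.flowlet_register_index,
                             standard_metadata.ingress_global_timestamp);
}

action update_flowlet_id() {
    // A fresh random id makes the ECMP hash pick a (possibly) new path (step 7).
    random(meta.flowlet_id, (bit<16>)0, (bit<16>)65000);
    flowlet_to_id.write((bit<32>)meta.flowlet_register_index,
                        meta.flowlet_id);
}

apply {
    if (hdr.ipv4.isValid()) {
        read_flowlet_registers();
        // Start a new flowlet if the inter-packet gap exceeds the timeout.
        meta.flowlet_time_diff = standard_metadata.ingress_global_timestamp
                                 - meta.flowlet_last_stamp;
        if (meta.flowlet_time_diff > FLOWLET_TIMEOUT) {
            update_flowlet_id();
        }
        // Same two-table logic as in ECMP; the hash call in ecmp_group now
        // also includes meta.flowlet_id in its field list (step 8).
        switch (ipv4_lpm.apply().action_run) {
            ecmp_group: {
                ecmp_group_to_nhop.apply();
            }
        }
    }
}
```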
## Testing your solution
Once you have the `flowlet_switching.p4` program finished you can test its behavior:
1. Start the topology (this will also compile and load the program).
```bash
sudo p4run --config p4app-no-sx.json
```
2. Run the routing controller if you finished it. Otherwise use `p4app.json` and the CLI files.
```bash
python routing-controller.py
```
3. Check that you can ping:
```bash
mininet> pingall
```
4. Monitor the 4 links from `s1` that will be used for load balancing (from `s1-eth2` to `s1-eth5`). This lets you check which path
each flow takes.
```bash
sudo tcpdump -enn -i s1-ethX
```
5. Ping between two hosts:
If you run a normal ping from the Mininet CLI, or from a terminal, by default it sends one ping packet every second. In this
case every ping should belong to a different flowlet, and thus the pings should keep crossing different paths.
6. Do iperf between two hosts:
If you run iperf between `h1` and `h2`, you should see all the packets cross the same interfaces almost
all the time (unless you set the gap interval very small).
7. Get a terminal in `h1`. Use the `send.py` script.
```bash
python send.py 10.0.6.2 1000 <sleep_time_between_packets>
```
This will send `tcp syn` packets with the same 5-tuple. You can play with the sleep time (the third parameter): if you set it bigger than your flowlet gap, packets should change
paths; if you set it smaller (quite a bit smaller, since the software model is not very precise), you will see all the packets cross the same interfaces.
{
"program": "p4src/ecmp.p4",
"switch": "simple_switch",
"compiler": "p4c",
"options": "--target bmv2 --arch v1model --std p4-16",
"switch_cli": "simple_switch_CLI",
"cli": true,
"pcap_dump": false,
"enable_log": true,
"topo_module": {
"file_path": "",
"module_name": "p4utils.mininetlib.apptopo",
"object_name": "AppTopoStrategies"
},
"controller_module": null,
"topodb_module": {
"file_path": "",
"module_name": "p4utils.utils.topology",
"object_name": "Topology"
},
"mininet_module": {
"file_path": "",
"module_name": "p4utils.mininetlib.p4net",
"object_name": "P4Mininet"
},
"topology": {
"assignment_strategy": "mixed",
"links": [["h1", "s1"], ["h2", "s6"], ["s1", "s2"], ["s1", "s3"], ["s1", "s4"], ["s1", "s5"], ["s2", "s6"], ["s3", "s6"], ["s4", "s6"], ["s5", "s6"]],
"hosts": {
"h1": {
},
"h2": {
}
},
"switches": {
"s1": {
},
"s2": {
},
"s3": {
},
"s4": {
},
"s5": {
},
"s6": {
}
}
}
}
{
"program": "p4src/ecmp.p4",
"switch": "simple_switch",
"compiler": "p4c",
"options": "--target bmv2 --arch v1model --std p4-16",
"switch_cli": "simple_switch_CLI",
"cli": true,
"pcap_dump": false,
"enable_log": true,
"topo_module": {
"file_path": "",
"module_name": "p4utils.mininetlib.apptopo",
"object_name": "AppTopoStrategies"
},
"controller_module": null,
"topodb_module": {
"file_path": "",
"module_name": "p4utils.utils.topology",
"object_name": "Topology"
},
"mininet_module": {
"file_path": "",
"module_name": "p4utils.mininetlib.p4net",
"object_name": "P4Mininet"
},
"topology": {
"assignment_strategy": "mixed",
"links": [["h1", "s1"], ["h2", "s6"], ["s1", "s2"], ["s1", "s3"], ["s1", "s4"], ["s1", "s5"], ["s2", "s6"], ["s3", "s6"], ["s4", "s6"], ["s5", "s6"]],
"hosts": {
"h1": {
},
"h2": {
}
},
"switches": {
"s1": {
"cli_input": "sX-commands/s1-commands.txt"
},
"s2": {
"cli_input": "sX-commands/s2-commands.txt"
},
"s3": {
"cli_input": "sX-commands/s3-commands.txt"
},
"s4": {
"cli_input": "sX-commands/s4-commands.txt"
},
"s5": {
"cli_input": "sX-commands/s5-commands.txt"
},
"s6": {
"cli_input": "sX-commands/s6-commands.txt"
}
}
}
}
/* -*- P4_16 -*- */
#include <core.p4>
#include <v1model.p4>
//My includes
#include "include/headers.p4"
#include "include/parsers.p4"
/*************************************************************************
************ C H E C K S U M V E R I F I C A T I O N *************
*************************************************************************/
control MyVerifyChecksum(inout headers hdr, inout metadata meta) {
apply { }
}
/*************************************************************************
************** I N G R E S S P R O C E S S I N G *******************
*************************************************************************/
control MyIngress(inout headers hdr,
inout metadata meta,
inout standard_metadata_t standard_metadata) {
action drop() {
mark_to_drop(standard_metadata);
}
action ecmp_group(bit<14> ecmp_group_id, bit<16> num_nhops){
//Task 1, TODO 6: define the ecmp_group action; here you need to hash the 5-tuple mod num_nhops and save the result in metadata
}
action set_nhop(macAddr_t dstAddr, egressSpec_t port) {
//Task 1, TODO 5: define the set_nhop action. You can copy it from the previous exercise; they are the same.
}
table ecmp_group_to_nhop {
//Task 1, TODO 7: define the ecmp table; it is only applied when multiple next hops are available
}
table ipv4_lpm {
//Task 1, TODO 4: define the ip forwarding table
}
apply {
//Task 1, TODO 8: implement the ingress logic: check validities, apply first table, and if needed the second table.
}
}
/*************************************************************************
**************** E G R E S S P R O C E S S I N G *******************
*************************************************************************/
control MyEgress(inout headers hdr,
inout metadata meta,
inout standard_metadata_t standard_metadata) {
apply {