first commit
13 .gitignore (vendored, new file)
@@ -0,0 +1,13 @@
certs/
logs/
logs_*/
*.json
!implementations.json
!testbed/*.json
web/latest

*.egg-info/
__pycache__
build/
dist/
out/
13 LICENSE.md (new file)
@@ -0,0 +1,13 @@
Copyright 2019 Jana Iyengar, Marten Seemann

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
141 README.md (new file)
@@ -0,0 +1,141 @@
# Interop Test Runner - ACN

The Interop Test Runner aims to automatically generate an interop matrix by running multiple **test cases** using different QUIC implementations.
Note that this runner is an adaptation of the original implementation.
Instead of relying on docker-compose and a simulated network, it executes the server and client from a testbed management host on testbed machines or locally.

The testbed mode is not important for you; we use it to test your implementations on real hardware in later phases of the project.

## Requirements

The Interop Runner is written in Python 3. You'll need to install a few Python modules to run it. A virtual environment is recommended, especially when installing on a testbed management host.

```bash
pip3 install -r requirements.txt
```

* The client is given URLs that include a hostname. To resolve this hostname, the /etc/hosts file has to be updated. The hostname is "server". In local mode, the IP address has to be set to 127.0.0.1.

* Optional: Several test cases inspect packet traces and require a recent version of Wireshark (4.0.6 or newer). The version installed on your ACN VM already supports all tests.
## Building a QUIC endpoint

To include your QUIC implementations in the Interop Runner, three scripts are required:

* setup-env.sh
* run-client.sh
* run-server.sh

Typically, a test case requires a server to serve files from a directory and a client to download files. Different test cases specify the behavior to be tested. For example, the Retry test case expects the server to use a Retry before accepting the connection from the client.
All configuration information from the test framework is fed to your implementation through the scripts run-client.sh and run-server.sh.
You can consume it in your implementations as environment variables or use the scripts to transform it into command-line parameters.

The test case is passed to your implementation using the `TESTCASE` environment variable. If your implementation doesn't support a test case, it MUST exit with status code 127. This allows us to add new test cases in the future and to correctly report test failures and successes, even if some implementations have not yet implemented support for a new test case.

After the transfer is completed, the client is expected to exit with status code 0. If an error occurred during the transfer, the client is expected to exit with status code 1.
After completion of the test case, the Interop Runner verifies that the client downloaded the files it was expected to transfer and that the file contents match. Additionally, for certain test cases, the Interop Runner uses the pcap of the transfer to verify that the implementations fulfilled the requirements of the test (for example, for the Retry test case, the pcap should show that a Retry packet was sent and that the client used the token provided in that packet).
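The exit-127 contract can be implemented as a small guard at the top of run-client.sh. The sketch below is illustrative only: the list of supported test cases and the client binary name are placeholders for whatever your implementation provides.

```shell
#!/bin/bash
# Hypothetical skeleton for run-client.sh. SUPPORTED and the commented-out
# client invocation are placeholders, not part of the interop runner itself.
TESTCASE="${TESTCASE:-handshake}"
SUPPORTED="handshake transfer multihandshake versionnegotiation transportparameter follow chacha20 retry resumption zerortt multiplexing"

supported() {
    for t in $SUPPORTED; do
        if [ "$t" = "$1" ]; then
            return 0
        fi
    done
    return 1
}

if ! supported "$TESTCASE"; then
    echo "unsupported test case: $TESTCASE"
    # 127 tells the runner this test case is not implemented (not a failure)
    exit 127
fi

echo "running test case $TESTCASE"
# ./my-quic-client ...   # start your actual client here
```

With this guard in place, adding a new test case to the runner simply causes unsupporting implementations to report "unsupported" instead of "failed".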
### Server Variables

The following variables are given to the server and should be supported by your implementation:

| Var | Description |
| -------- | -------- |
| SSLKEYLOGFILE | Path and name of the keylog file. The output is required to decrypt traces and verify tests. The file has to be in the [NSS Key Log format](https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Key_Log_Format). |
| QLOGDIR | qlog results are not required but might help to debug your output. However, they have a negative impact on performance, so you might want to deactivate them for some tests. |
| LOGS | Path to a directory the server can use for its general logs. These will be uploaded as part of the results artifact. |
| TESTCASE | The name of the test case. Make sure your implementation can handle an arbitrary string. |
| WWW | Directory containing one or more randomly generated files. Your server implementation is expected to run on the given port and serve files from this directory. |
| CERTS | The runner creates an X.509 certificate and chain to be used by the server during the handshake. The variable contains the path to a directory that contains a priv.key and cert.pem file. |
| IP | The IP the server has to listen on. |
| PORT | The port the server has to listen on. |
| SERVERNAME | The server name a client might send using SNI. The name relates to the provided certificate and might be necessary for some QUIC implementations. |
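A minimal run-server.sh might simply translate these variables into command-line flags. The binary name and flag names below (`./my-quic-server`, `--listen`, `--root`, etc.) are hypothetical; substitute whatever your implementation expects.

```shell
#!/bin/bash
# Hypothetical run-server.sh sketch: map the runner's environment variables
# onto command-line flags. Binary and flag names are placeholders.
IP="${IP:-127.0.0.1}"
PORT="${PORT:-443}"
CERTS="${CERTS:-./certs}"
WWW="${WWW:-./www}"
LOGS="${LOGS:-./logs}"

CMD="./my-quic-server --listen $IP:$PORT --cert $CERTS/cert.pem --key $CERTS/priv.key --root $WWW --keylog ${SSLKEYLOGFILE:-$LOGS/keys.log}"

echo "$CMD"
# exec $CMD   # uncomment to actually start the server
```

Keeping the mapping in the script rather than in the server binary makes it easy to adapt to renamed flags without rebuilding.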
### Client Variables

The following variables are given to the client and should be supported by your implementation:

| Var | Description |
| -------- | -------- |
| SSLKEYLOGFILE | Path and name of the keylog file. The output is required to decrypt traces and verify tests. The file has to be in the [NSS Key Log format](https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Key_Log_Format). |
| QLOGDIR | qlog results are not required but might help to debug your output. However, they have a negative impact on performance, so you might want to deactivate them for some tests. |
| LOGS | Path to a directory the client can use for its general logs. These will be uploaded as part of the results artifact. |
| TESTCASE | The name of the test case. Make sure your implementation can handle an arbitrary string. |
| DOWNLOADS | The directory is initially empty; your client implementation is expected to store downloaded files in it. Served and downloaded files are compared to check the test. |
| REQUESTS | A space-separated list of requests the client should execute one by one (e.g., https://server:4433/xyz). |
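Since REQUESTS is space-separated, a run-client.sh can rely on shell word splitting to handle the requests one by one. A sketch, with `./my-quic-client` as a hypothetical binary and made-up default URLs:

```shell
#!/bin/bash
# Hypothetical run-client.sh fragment: issue each request in REQUESTS in
# order, saving results into DOWNLOADS. The client binary is a placeholder.
REQUESTS="${REQUESTS:-https://server:4433/abc https://server:4433/def}"
DOWNLOADS="${DOWNLOADS:-./downloads}"
mkdir -p "$DOWNLOADS"

status=0
for url in $REQUESTS; do                  # word splitting on spaces is intended
    file="$DOWNLOADS/$(basename "$url")"
    echo "would fetch $url -> $file"
    # ./my-quic-client --output "$file" "$url" || status=1
done
# finally: exit "$status"  # report 0 on success, 1 on any failed transfer
```

Storing each download under the last path segment of its URL matches how the runner later compares served and downloaded files by name.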
### implementations.json

The implementations.json file contains a simple JSON object with an entry for each implementation.
Implementations are simply represented as named objects with a path variable.
The path should point to the folder containing the three required scripts.
The scripts themselves should be able to execute from any location. Paths inside the scripts (e.g., to your binaries) should be relative to the script.
### ACN Example

We offer an example implementation on the ACN material [website](https://acn.net.in.tum.de/).
It supports all required tests and can be used to test your implementations.

### Logs

To facilitate debugging, the Interop Runner saves the log files to the logs directory.

Implementations that implement [qlog](https://github.com/quiclog/internet-drafts) should export the log files to the directory specified by the `QLOGDIR` environment variable.
## Test cases

The Interop Runner implements the following test cases. Unless noted otherwise, test cases use HTTP/3 for file transfers. More test cases will be added in the future to test more protocol features. The name in parentheses is the value of the `TESTCASE` environment variable passed to your implementation.

* **Handshake** (`handshake`):
  The client requests a single file and the server should serve the file. The test
  is successful if there is exactly one QUIC handshake and no retries within the
  packet trace. Additionally, the downloaded file must be equal to the file served
  by the server.

* **Transfer** (`transfer`):
  The client needs to send multiple requests and download all files using a single
  connection. All files have to match and only a single handshake should be visible
  to pass the test.

* **Multi Handshake** (`multihandshake`):
  The client needs to send multiple requests and download all files using a new
  connection for each request. All files have to match and, for each file, a
  handshake needs to be visible to pass the test.

* **Version Negotiation** (`versionnegotiation`):
  Tests whether a server sends a valid Version Negotiation packet in response to
  an unknown QUIC version number. The client should start a connection using an
  unsupported version number (it can use a reserved version number to do so) and
  has to abort the connection attempt when receiving the Version Negotiation packet.

* **Transport Parameter** (`transportparameter`):
  Tests whether the server is able to set an `initial_max_streams_bidi` value of < 11
  during the handshake. The client has to download all files with a single connection.

* **Follow** (`follow`):
  The client requests a single file from the server, which serves two files. The
  first file contains the path of the second file; the second file contains random data.
  The client is given only one request but has to download both files by parsing the
  content of the first file and constructing a second request with the retrieved path.

* **ChaCha20** (`chacha20`):
  In this test, client and server are expected to offer only
  `TLS_CHACHA20_POLY1305_SHA256` as a cipher suite. The client then downloads the files.

* **Retry** (`retry`):
  Tests that the server can generate a Retry and that the client can act upon it
  (i.e., use the token provided in the Retry packet in the Initial packet). Only a
  single handshake should be visible.

* **Resumption** (`resumption`):
  Tests QUIC session resumption (**without** 0-RTT). The client is expected to establish
  a connection and download the first file (the first value in the REQUESTS variable).
  The server is expected to provide the client with a session ticket that allows it
  to resume the connection. After downloading the first file, the client has to close
  the first connection, establish a resumed connection using the session ticket, and
  use this connection to download the remaining file(s).

* **0-RTT** (`zerortt`):
  Tests QUIC 0-RTT. The client is expected to establish a connection and download the
  first file. The server is expected to provide the client with a session ticket that
  allows the client to establish a 0-RTT connection on the next connection attempt.
  After downloading the first file, the client has to close the first connection, then
  establish a new connection and request the remaining file(s) in 0-RTT.

* **Multiplexing** (`multiplexing`):
  Tests whether the server is able to set an `initial_max_streams_bidi` value of < 11
  during the handshake. The client has to download all files with a single connection.
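The client-side logic of the follow test can be sketched in a few lines of shell; the URL and the file content below are made up for illustration:

```shell
#!/bin/bash
# Sketch of the "follow" logic: the first download contains the path of the
# second file, so build the second URL by swapping out the path component.
first_url="https://server:443/abcd"   # hypothetical single entry of REQUESTS
first_file="first_download.tmp"
printf 'efgh' > "$first_file"         # pretend this is the downloaded content

second_path=$(cat "$first_file")
second_url="${first_url%/*}/$second_path"   # replace the last path segment
echo "second request: $second_url"
rm -f "$first_file"
```

The `${first_url%/*}` expansion strips the last path segment, which works here because the requests the runner generates have a single-segment path.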
82 aggregate.py (new file)
@@ -0,0 +1,82 @@
import argparse
import json
import sys
import glob
import os

from implementations import IMPLEMENTATIONS


def get_args():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "-s", "--server", help="server implementations (comma-separated)", default=','.join(IMPLEMENTATIONS.keys())
    )
    parser.add_argument(
        "-c", "--client", help="client implementations (comma-separated)", default=','.join(IMPLEMENTATIONS.keys())
    )
    parser.add_argument("-l", "--log-dir", help="results directory; searched recursively for log files", default='.')
    parser.add_argument("-o", "--output", help="output file (stdout if not set)")
    return parser.parse_args()


START_TIME = None

servers = get_args().server.split(",")
clients = get_args().client.split(",")
result = {
    "servers": servers,
    "clients": clients,
    "log_dir": get_args().log_dir,
    "results": [],
    "measurements": [],
    "tests": {},
    "urls": {},
}


def parse(server: str, client: str, cat: str):
    filename = server + "_" + client + "_" + cat + ".json"

    files = glob.glob(os.path.join(get_args().log_dir, "**", filename), recursive=True)
    if len(files) > 0:
        with open(files[0]) as f:
            data = json.load(f)
    else:
        print("Warning: Couldn't open file " + filename)
        result[cat].append([])
        return
    parse_data(server, client, cat, data)


def parse_data(server: str, client: str, cat: str, data: object):
    if len(data["servers"]) != 1:
        sys.exit("expected exactly one server")
    if data["servers"][0] != server:
        sys.exit("inconsistent server")
    if len(data["clients"]) != 1:
        sys.exit("expected exactly one client")
    if data["clients"][0] != client:
        sys.exit("inconsistent client")
    if "end_time" not in result or data["end_time"] > result["end_time"]:
        result["end_time"] = data["end_time"]
    if "start_time" not in result or data["start_time"] < result["start_time"]:
        result["start_time"] = data["start_time"]
    result[cat].append(data[cat][0])
    result["quic_draft"] = data["quic_draft"]
    result["quic_version"] = data["quic_version"]
    #result["urls"].update(data["urls"])
    result["tests"].update(data["tests"])


for client in clients:
    for server in servers:
        parse(server, client, "results")
        parse(server, client, "measurements")

if get_args().output:
    with open(get_args().output, "w") as f:
        json.dump(result, f)
else:
    print(json.dumps(result))
10 cert_config.txt (new file)
@@ -0,0 +1,10 @@
[ req ]
distinguished_name = req_distinguished_name
x509_extensions = v3_ca
dirstring_type = nobmp
[ req_distinguished_name ]
[ v3_ca ]
keyUsage=critical, keyCertSign
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid:always,issuer:always
basicConstraints=critical,CA:TRUE,pathlen:100
59 certs.sh (new executable file)
@@ -0,0 +1,59 @@
#!/bin/bash

set -e

if [ -z "$1" ] || [ -z "$2" ] ; then
    echo "$0 <cert dir> <chain length>"
    exit 1
fi

CERTDIR=$1
CHAINLEN=$2

mkdir -p "$CERTDIR"

# Generate root CA key and certificate
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
    -keyout "$CERTDIR"/ca_0.key -out "$CERTDIR"/cert_0.pem \
    -subj "/O=interop runner Root Certificate Authority/" \
    -config cert_config.txt \
    -extensions v3_ca \
    2> /dev/null

for i in $(seq 1 "$CHAINLEN"); do
    # Generate a CSR
    SUBJ="interop runner intermediate $i"
    if [[ $i -eq $CHAINLEN ]]; then
        SUBJ="interop runner leaf"
    fi
    openssl req -out "$CERTDIR"/cert.csr -new -newkey rsa:2048 -nodes -keyout "$CERTDIR"/ca_$i.key \
        -subj "/O=$SUBJ/" \
        2> /dev/null

    # Sign the certificate
    j=$((i-1))
    if [[ $i -lt $CHAINLEN ]]; then  # -lt: numeric comparison; '<' would compare lexicographically
        openssl x509 -req -sha256 -days 365 -in "$CERTDIR"/cert.csr -out "$CERTDIR"/cert_$i.pem \
            -CA "$CERTDIR"/cert_$j.pem -CAkey "$CERTDIR"/ca_$j.key -CAcreateserial \
            -extfile cert_config.txt \
            -extensions v3_ca \
            2> /dev/null
    else
        openssl x509 -req -sha256 -days 365 -in "$CERTDIR"/cert.csr -out "$CERTDIR"/cert_$i.pem \
            -CA "$CERTDIR"/cert_$j.pem -CAkey "$CERTDIR"/ca_$j.key -CAcreateserial \
            -extfile <(printf "subjectAltName=DNS:server,DNS:server4,DNS:server6,DNS:server46") \
            2> /dev/null
    fi
done

mv "$CERTDIR"/cert_0.pem "$CERTDIR"/ca.pem
cp "$CERTDIR"/ca_$CHAINLEN.key "$CERTDIR"/priv.key

# Combine certificates into a single chain file (leaf first)
for i in $(seq "$CHAINLEN" -1 1); do
    cat "$CERTDIR"/cert_$i.pem >> "$CERTDIR"/cert.pem
    rm "$CERTDIR"/cert_$i.pem "$CERTDIR"/ca_$i.key
done
rm "$CERTDIR"/*.srl "$CERTDIR"/ca_0.key "$CERTDIR"/cert.csr
103 fetch_artifacts.py (new file)
@@ -0,0 +1,103 @@
import argparse
import os
import sys
import gitlab
import zipfile
import io
from termcolor import colored
import logging

from implementations import IMPLEMENTATIONS

logging.basicConfig(
    format='%(asctime)s %(levelname)s %(message)s',
    datefmt='%m-%d %H:%M:%S'
)

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)


class ZipFileWithPermissions(zipfile.ZipFile):
    """Custom ZipFile class handling file permissions.
    From https://stackoverflow.com/a/54748564"""
    def _extract_member(self, member, targetpath, pwd):
        if not isinstance(member, zipfile.ZipInfo):
            member = self.getinfo(member)

        targetpath = super()._extract_member(member, targetpath, pwd)

        attr = member.external_attr >> 16
        if attr != 0:
            os.chmod(targetpath, attr)
        return targetpath


def main(args):
    GITLAB_TOKEN = os.getenv('GITLAB_TOKEN')
    CI_JOB_TOKEN = os.getenv('CI_JOB_TOKEN')
    gitlab_url = 'https://gitlab.lrz.de'

    if GITLAB_TOKEN:
        logger.info('Using GITLAB_TOKEN')
        gl = gitlab.Gitlab(gitlab_url, private_token=GITLAB_TOKEN)
    elif CI_JOB_TOKEN:
        logger.info('Using CI_JOB_TOKEN')
        gl = gitlab.Gitlab(gitlab_url, job_token=CI_JOB_TOKEN)
    else:
        logger.error('Set GITLAB_TOKEN or CI_JOB_TOKEN')
        sys.exit(1)

    implementations = {}
    if args.implementations:
        for s in args.implementations:
            if s not in IMPLEMENTATIONS:
                sys.exit("implementation " + s + " not found.")
            implementations[s] = IMPLEMENTATIONS[s]
    else:
        implementations = IMPLEMENTATIONS

    successful = 0
    errors = 0

    for name, value in implementations.items():
        project_id = value.get("project_id")

        if not project_id:
            logger.info(colored(f'{name}: no Gitlab project id specified, skipping.', 'yellow'))
            continue

        outpath = os.path.join(args.output_directory, name)
        os.makedirs(outpath, exist_ok=True)

        # Get project
        project = gl.projects.get(project_id, lazy=True)

        # Get branch, use main if not set
        ref = value.get('branch', 'main')

        # Get latest build artifact and extract it
        try:
            for job in project.jobs.list(all=True):
                if job.ref == ref and job.name == 'build' and job.status == 'success':
                    artifacts = job.artifact(path='/artifact.zip')
                    ZipFileWithPermissions(io.BytesIO(artifacts)).extractall(path=outpath)
                    logger.info(colored(f'{name}: artifacts pulled successfully for {ref}.', 'green'))
                    successful += 1
                    break
        except gitlab.exceptions.GitlabGetError:
            logger.info(colored(f'{name}: failed to pull artifacts.', 'red'))
            errors += 1
        except zipfile.BadZipFile:
            logger.info(colored(f'{name}: failed to pull artifacts.', 'red'))
            errors += 1
    logger.info(f'{successful}/{successful + errors} artifacts downloaded.')


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("-i", "--implementations", help="implementations to pull", nargs='*')
    parser.add_argument("-o", "--output_directory", help="write output to this directory", default='out')
    args = parser.parse_args()
    main(args)
5 implementations.json (new file)
@@ -0,0 +1,5 @@
{
    "name": {
        "path": "/dev/null"
    }
}
30 implementations.py (new file)
@@ -0,0 +1,30 @@
import json
import re
from enum import Enum

IMPLEMENTATIONS = {}


class Role(Enum):
    BOTH = "both"
    SERVER = "server"
    CLIENT = "client"


def parse_filesize(input: str, default_unit="B"):
    units = {"B": 1, "KB": 10 ** 3, "MB": 10 ** 6, "GB": 10 ** 9, "TB": 10 ** 12,
             "KiB": 2 ** 10, "MiB": 2 ** 20, "GiB": 2 ** 30, "TiB": 2 ** 40}
    m = re.match(fr'^(\d+(?:\.\d+)?)\s*({"|".join(units.keys())})?$', input)
    units[None] = units[default_unit]
    if m:
        number, unit = m.groups()
        return int(float(number) * units[unit])
    raise ValueError("Invalid file size")


with open("implementations.json", "r") as f:
    data = json.load(f)
    for name, val in data.items():
        if 'max_filesize' in val.keys():
            val['max_filesize'] = parse_filesize(val['max_filesize'])
        IMPLEMENTATIONS[name] = val
1679 interop.py (new file)
(file diff suppressed because it is too large)
6 requirements.txt (new file)
@@ -0,0 +1,6 @@
psutil
termcolor
prettytable
pyshark
python-gitlab
pyyaml
7 result.py (new file)
@@ -0,0 +1,7 @@
from enum import Enum


class TestResult(Enum):
    SUCCEEDED = "succeeded"
    FAILED = "failed"
    UNSUPPORTED = "unsupported"
394
run.py
Executable file
394
run.py
Executable file
@ -0,0 +1,394 @@
|
||||
#!/usr/bin/env python3
|
||||
|
||||
import argparse
|
||||
import ast
|
||||
import sys
|
||||
import yaml
|
||||
|
||||
from typing import List, Tuple
|
||||
from yaml.scanner import ScannerError
|
||||
|
||||
import testcases
|
||||
|
||||
from implementations import IMPLEMENTATIONS
|
||||
from implementations import parse_filesize
|
||||
from interop import InteropRunner
|
||||
from testcases import MEASUREMENTS, TESTCASES
|
||||
|
||||
|
||||
def main():
|
||||
def get_args():
|
||||
parser = argparse.ArgumentParser()
|
||||
parser.add_argument(
|
||||
"-d",
|
||||
"--debug",
|
||||
action="store_const",
|
||||
const=True,
|
||||
default=False,
|
||||
help="turn on debug logs",
|
||||
)
|
||||
parser.add_argument(
|
||||
"-m",
|
||||
"--manual-mode",
|
||||
action="store_true",
|
||||
help="only prepare the tests and print out the server and client run commands (to be executed manually)"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--config",
|
||||
metavar="config.yml",
|
||||
help="File containing argument values"
|
||||
)
|
||||
parser.add_argument(
|
||||
"-6", "--enable-ipv6", action="store_true", default=False, dest="v6", # dest allows accessing it
|
||||
help="Enables IPv6 execution in the interop runner"
|
||||
)
|
||||
parser.add_argument(
|
||||
"-s", "--server", help="server implementations (comma-separated)"
|
||||
)
|
||||
parser.add_argument(
|
||||
"-c", "--client", help="client implementations (comma-separated)"
|
||||
)
|
||||
parser.add_argument(
|
||||
"-t",
|
||||
"--test",
|
||||
help="test cases (comma-separatated). Valid test cases are: "
|
||||
+ ", ".join([x.name() for x in TESTCASES + MEASUREMENTS]),
|
||||
)
|
||||
parser.add_argument(
|
||||
"-l",
|
||||
"--log-dir",
|
||||
help="log directory",
|
||||
default="",
|
||||
)
|
||||
parser.add_argument(
|
||||
"-f", "--save-files", help="save downloaded files if a test fails"
|
||||
)
|
||||
parser.add_argument(
|
||||
"-i", "--implementation-directory",
|
||||
help="Directory containing the implementations."
|
||||
"This is prepended to the 'path' in the implementations.json file."
|
||||
"Default: .",
|
||||
default='.'
|
||||
)
|
||||
parser.add_argument(
|
||||
"-j", "--json", help="output the matrix to file in json format"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--venv-dir",
|
||||
help="dir to store venvs",
|
||||
default="",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--testbed",
|
||||
help="Runs the measurement in testbed mode. Requires a json file with client/server information"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--bandwidth",
|
||||
help="Set a link bandwidth value which will be enforced using tc. Is only set in testbed mode on the remote hosts. Set values in tc syntax, e.g. 100mbit, 1gbit"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--delay",
|
||||
help="Add the chosen delay to packets sent to the client interface using tc. Set values in milliseconds, "
|
||||
"e.g. --delay 10ms"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--reorder",
|
||||
nargs=2,
|
||||
help="Add random reordering to packets sent to the client interface using tc. It is required to set a "
|
||||
"delay value for using this option. Two percentage values are required for this option. The first is "
|
||||
"the percentage of packets immediately sent and the second is the correlation ,"
|
||||
"e.g. --reorder-packets 25%% 50%%"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--corruption",
|
||||
help="Add random noise corruption using tc. This option introduces a single bit error at a random offset "
|
||||
"in the packet. Set value in percentage, e.g. --corruption 0.1%%"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--loss",
|
||||
help="Add random packet loss specified in the 'tc' command in percentage, e.g. --loss 0.1%%"
|
||||
)
|
||||
# TODO: Maybe add option for packet duplication too
|
||||
|
||||
# Handle Pre-/Postscripts for server and client
|
||||
script_variable_help_msg = ("Available pos variables to use: "
|
||||
"interface (the interface we send/receive on, e.g enp123test), "
|
||||
"hostname, log_dir (the directory where all logs are saved to)")
|
||||
script_server_vars = ", www_dir (server root when serving files), certs_dir (folder of the certificates)"
|
||||
script_client_vars = ", sim_log_dir, download_dir"
|
||||
parser.add_argument(
|
||||
"-spre",
|
||||
"--server-prerunscript",
|
||||
default=[],
|
||||
nargs="*",
|
||||
metavar="SCRIPT",
|
||||
help="Add a bash script which should be executed before a test run on the server using pos. " + script_variable_help_msg + script_server_vars,
|
||||
)
|
||||
parser.add_argument(
|
||||
"-sprehot",
|
||||
"--server-prerunscript-hot",
|
||||
default=[],
|
||||
nargs="*",
|
||||
metavar="SCRIPT",
|
||||
help="Add a bash script which should be executed at the first hold stage (client/server called begin()) " + script_variable_help_msg + script_server_vars,
|
||||
)
|
||||
parser.add_argument(
|
||||
"-sposthot",
|
||||
"--server-postrunscript-hot",
|
||||
default=[],
|
||||
nargs="*",
|
||||
metavar="SCRIPT",
|
||||
help="Add a bash script which should be executed at the second hold stage (client/server called end()) " + script_variable_help_msg + script_server_vars,
|
||||
)
|
||||
parser.add_argument(
|
||||
"-spost",
|
||||
"--server-postrunscript",
|
||||
default=[],
|
||||
nargs="*",
|
||||
metavar="SCRIPT",
|
||||
help="Add a bash script which should be executed after a test run on the server using pos. " + script_variable_help_msg + script_server_vars,
|
||||
)
|
||||
parser.add_argument(
|
||||
"-cpre",
|
||||
"--client-prerunscript",
|
||||
default=[],
|
||||
nargs="*",
|
||||
metavar="SCRIPT",
|
||||
help="Add a bash script which should be executed before a test run on the client using pos. " + script_variable_help_msg + script_client_vars,
|
||||
)
|
||||
parser.add_argument(
|
||||
"-cprehot",
|
||||
"--client-prerunscript-hot",
|
||||
default=[],
|
||||
nargs="*",
|
||||
metavar="SCRIPT",
|
||||
help="Add a bash script which should be executed at the first hold stage (client/server called begin()) " + script_variable_help_msg + script_client_vars,
|
||||
)
|
||||
parser.add_argument(
|
||||
"-cposthot",
|
||||
"--client-postrunscript-hot",
|
||||
default=[],
|
||||
nargs="*",
|
||||
metavar="SCRIPT",
|
||||
help="Add a bash script which should be executed at the second hold stage (client/server called end()) " + script_variable_help_msg + script_client_vars,
|
||||
)
|
||||
parser.add_argument(
|
||||
"-cpost",
|
||||
"--client-postrunscript",
|
||||
default=[],
|
||||
nargs="*",
|
||||
metavar="SCRIPT",
|
||||
help="Add a bash script which should be executed after a test run on the client using pos. " + script_variable_help_msg + script_client_vars,
|
||||
)
|
||||
parser.add_argument(
|
||||
"--client-implementation-params",
|
||||
nargs="*",
|
||||
metavar="KEY=VALUE",
|
||||
help="",
|
||||
default=[]
|
||||
)
|
||||
parser.add_argument(
|
||||
"--server-implementation-params",
|
||||
nargs="*",
|
||||
metavar="KEY=VALUE",
|
||||
help="",
|
||||
default=[]
|
||||
)
|
||||
parser.add_argument(
|
||||
"--disable-server-aes-offload",
|
||||
action="store_const",
|
||||
const=True,
|
||||
default=False,
|
||||
help="turn server aes offload off",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--disable-client-aes-offload",
|
||||
action="store_const",
|
||||
const=True,
|
||||
default=False,
|
||||
help="turn client aes offload off",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--filesize",
|
||||
help="Set the filesize of the transmitted file for all measurements. If no unit is specified MiB is assumed."
|
||||
)
|
||||
parser.add_argument(
|
||||
"--repetitions",
|
||||
metavar="N",
|
||||
type=int,
|
||||
help="Set the number of repetitions for all measurements."
|
||||
)
|
||||
parser.add_argument(
|
||||
"--continue-on-error",
|
||||
action="store_true",
|
||||
help="Continue measurement even if a measurement fails."
|
||||
)
|
||||
parser.add_argument(
|
||||
"--use-client-timestamps",
|
||||
action="store_true",
|
||||
help="Try to parse timestamps written by the client for computing goodput."
|
||||
)
|
||||
parser.add_argument(
|
||||
"--only-same-implementation",
|
||||
action="store_true",
|
||||
help="Test implementations only against their counterpart."
|
||||
)
|
||||
|
||||
args = parser.parse_args()
|
||||
if args.config:
|
||||
config_args = parse_config(args.config)
|
||||
parser.set_defaults(**config_args)
|
||||
# Ensure that every config file argument is defined in the parser
|
||||
for k, _ in config_args.items():
|
||||
if k not in args:
|
||||
sys.exit(f"Argument '{k}' from config file was not recognized by the parser.")
|
||||
args = parser.parse_args()
|
||||
|
||||
return args
|
||||
|
||||
def parse_config(config_file):
    try:
        with open(config_file, "r") as f:
            return yaml.safe_load(f)
    except ScannerError:
        sys.exit(f"Syntax error in config file '{config_file}'!")
    except FileNotFoundError:
        sys.exit(f"Config file '{config_file}' not found!")


def get_dict_arg(arg):
    """Convert a list containing one KEY=VALUE pair per entry into a dict
    containing all pairs."""
    output = {}
    if not arg:
        return output
    for item in arg:
        if isinstance(item, dict):
            output = {**output, **item}
        else:
            try:
                k, v = item.split("=", 1)
            except ValueError:
                # Treat entries without an equals symbol as booleans set to True
                output[item] = True
                continue
            try:
                output[k] = ast.literal_eval(v)
            except (ValueError, SyntaxError):
                output[k] = v
    return output


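The value-interpretation step in `get_dict_arg` can be illustrated on its own; `parse_value` is a hypothetical helper written for this sketch, not part of the runner:

```python
import ast

def parse_value(v: str):
    # Mirror get_dict_arg's fallback: interpret Python literals
    # (ints, bools, lists, ...) and keep anything else as a string.
    try:
        return ast.literal_eval(v)
    except (ValueError, SyntaxError):
        return v

print(parse_value("1500"))    # -> 1500 (int)
print(parse_value("True"))    # -> True (bool)
print(parse_value("lsquic"))  # -> 'lsquic' (left as a string)
```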
def get_impls(arg, availableImpls, role) -> List[str]:
    if not arg:
        return availableImpls
    impls = []
    for s in arg.replace(" ", "").replace("\n", "").split(","):
        if s not in availableImpls:
            sys.exit(f"{role} implementation {s} not found.")
        impls.append(s)
    return impls


def get_tests_and_measurements(
    arg,
    filesize,
    repetitions,
) -> Tuple[List[testcases.TestCase], List[testcases.Measurement]]:
    if arg is None:
        return TESTCASES, MEASUREMENTS
    elif arg == "onlyTests":
        return TESTCASES, []
    elif arg == "onlyMeasurements":
        return [], MEASUREMENTS
    elif not arg:
        return [], []
    tests = []
    measurements = []
    for t in arg.split(","):
        if t in [tc.name() for tc in TESTCASES]:
            tests += [tc for tc in TESTCASES if tc.name() == t]
        elif t in [tc.name() for tc in MEASUREMENTS]:
            measurement = [tc for tc in MEASUREMENTS if tc.name() == t]
            if filesize:
                measurement[0].FILESIZE = parse_filesize(str(filesize), default_unit="MiB")
            if repetitions:
                measurement[0].REPETITIONS = int(repetitions)
            measurements += measurement
        else:
            print(
                (
                    "Test case {} not found.\n"
                    "Available testcases: {}\n"
                    "Available measurements: {}"
                ).format(
                    t,
                    ", ".join([t.name() for t in TESTCASES]),
                    ", ".join([t.name() for t in MEASUREMENTS]),
                )
            )
            sys.exit()
    return tests, measurements


    args = get_args()
    args.server_implementation_params = get_dict_arg(args.server_implementation_params)
    args.client_implementation_params = get_dict_arg(args.client_implementation_params)

    tests, measurements = get_tests_and_measurements(
        args.test,
        args.filesize,
        args.repetitions,
    )

    # Check if the packet reordering option is set without a delay
    if args.reorder and args.delay is None:
        print("--reorder requires --delay")
        return 1

    if args.manual_mode and not args.testbed:
        print("Manual mode is currently only supported in testbed mode!")
        return 1

    return InteropRunner(
        implementations=IMPLEMENTATIONS,
        implementations_directory=args.implementation_directory,
        servers=get_impls(args.server, IMPLEMENTATIONS, "Server"),
        clients=get_impls(args.client, IMPLEMENTATIONS, "Client"),
        tests=tests,
        measurements=measurements,
        output=args.json,
        debug=args.debug,
        manual_mode=args.manual_mode,
        log_dir=args.log_dir,
        save_files=args.save_files,
        venv_dir=args.venv_dir,
        testbed=args.testbed,
        bandwidth=args.bandwidth,
        server_pre_scripts=args.server_prerunscript,
        server_pre_hot_scripts=args.server_prerunscript_hot,
        server_post_hot_scripts=args.server_postrunscript_hot,
        server_post_scripts=args.server_postrunscript,
        client_pre_scripts=args.client_prerunscript,
        client_pre_hot_scripts=args.client_prerunscript_hot,
        client_post_hot_scripts=args.client_postrunscript_hot,
        client_post_scripts=args.client_postrunscript,
        reorder_packets=args.reorder,
        delay=args.delay,
        corruption=args.corruption,
        loss=args.loss,
        client_implementation_params=args.client_implementation_params,
        server_implementation_params=args.server_implementation_params,
        disable_server_aes_offload=args.disable_server_aes_offload,
        disable_client_aes_offload=args.disable_client_aes_offload,
        continue_on_error=args.continue_on_error,
        use_client_timestamps=args.use_client_timestamps,
        only_same_implementation=args.only_same_implementation,
        use_v6=args.v6,
        args=vars(args),
    ).run()


if __name__ == "__main__":
    sys.exit(main())
1072
testcases.py
Normal file

File diff suppressed because it is too large

197
trace.py
Normal file

@@ -0,0 +1,197 @@
import datetime
import logging
from enum import Enum
from typing import List, Optional, Tuple

import pyshark

IP4_SERVER = "127.0.0.1"
IP6_SERVER = "fd00:cafe:cafe:100::100"
PORT_SERVER = 4433
QUIC_V2 = hex(0x6B3343CF)


class Direction(Enum):
    ALL = 0
    FROM_CLIENT = 1
    FROM_SERVER = 2
    INVALID = 3


class PacketType(Enum):
    INITIAL = 1
    HANDSHAKE = 2
    ZERORTT = 3
    RETRY = 4
    ONERTT = 5
    VERSIONNEGOTIATION = 6
    INVALID = 7


WIRESHARK_PACKET_TYPES = {
    PacketType.INITIAL: "0",
    PacketType.ZERORTT: "1",
    PacketType.HANDSHAKE: "2",
    PacketType.RETRY: "3",
}

WIRESHARK_PACKET_TYPES_V2 = {
    PacketType.INITIAL: "1",
    PacketType.ZERORTT: "2",
    PacketType.HANDSHAKE: "3",
    PacketType.RETRY: "0",
}


def get_packet_type(p) -> PacketType:
    if p.quic.header_form == "0":
        return PacketType.ONERTT
    if p.quic.version == "0x00000000":
        return PacketType.VERSIONNEGOTIATION
    if p.quic.version == QUIC_V2:
        for t, num in WIRESHARK_PACKET_TYPES_V2.items():
            if p.quic.long_packet_type_v2 == num:
                return t
        return PacketType.INVALID
    for t, num in WIRESHARK_PACKET_TYPES.items():
        if p.quic.long_packet_type == num:
            return t
    return PacketType.INVALID


class TraceAnalyzer:
    _filename = ""

    def __init__(
        self,
        filename: str,
        keylog_file: Optional[str] = None,
        ip4_server: Optional[str] = None,
        ip6_server: Optional[str] = None,
        port_server: Optional[int] = None,
    ):
        self._filename = filename
        self._keylog_file = keylog_file
        self._ip4_server = ip4_server or IP4_SERVER
        self._ip6_server = ip6_server or IP6_SERVER
        self._port_server = port_server or PORT_SERVER

    def _get_direction_filter(self, d: Direction) -> str:
        f = "(quic && !icmp) && "
        if d == Direction.FROM_CLIENT:
            return (
                f + "((ip.dst==" + self._ip4_server
                + " || ipv6.dst==" + self._ip6_server
                + ") && udp.dstport==" + str(self._port_server) + ") && "
            )
        elif d == Direction.FROM_SERVER:
            return (
                f + "((ip.src==" + self._ip4_server
                + " || ipv6.src==" + self._ip6_server
                + ") && udp.srcport==" + str(self._port_server) + ") && "
            )
        else:
            return f

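For reference, the Wireshark display filter assembled for the client-to-server direction looks like this; the sketch below rebuilds it standalone from the module's default server address and port:

```python
ip4 = "127.0.0.1"
ip6 = "fd00:cafe:cafe:100::100"
port = 4433

# Same shape as _get_direction_filter(Direction.FROM_CLIENT); a concrete
# packet-type condition (e.g. "quic.header_form==0") is appended by callers,
# which is why the filter ends in a dangling "&& ".
f = (
    "(quic && !icmp) && "
    f"((ip.dst=={ip4} || ipv6.dst=={ip6}) && udp.dstport=={port}) && "
)
print(f + "quic.header_form==0")
```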
    def _get_packets(self, f: str) -> List:
        override_prefs = {}
        if self._keylog_file is not None:
            override_prefs["tls.keylog_file"] = self._keylog_file
        cap = pyshark.FileCapture(
            self._filename,
            display_filter=f,
            override_prefs=override_prefs,
            disable_protocol="http3",  # see https://github.com/marten-seemann/quic-interop-runner/pull/179/
            decode_as={"udp.port==443": "quic"},
        )
        packets = []
        # If the pcap has been cut short in the middle of a packet, pyshark will crash.
        # See https://github.com/KimiNewt/pyshark/issues/390.
        try:
            for p in cap:
                packets.append(p)
            cap.close()
        except Exception as e:
            logging.debug(e)

        if self._keylog_file is not None:
            for p in packets:
                if hasattr(p["quic"], "decryption_failed"):
                    logging.info("At least one QUIC packet could not be decrypted")
                    logging.debug(p)
                    break
        return packets

    def get_raw_packets(self, direction: Direction = Direction.ALL) -> List:
        return self._get_packets(self._get_direction_filter(direction) + "quic")

    def get_1rtt(self, direction: Direction = Direction.ALL) -> List:
        """Get all 1-RTT packets, one or both directions."""
        packets, _, _ = self.get_1rtt_sniff_times(direction)
        return packets

    def get_1rtt_sniff_times(
        self, direction: Direction = Direction.ALL
    ) -> Tuple[List, datetime.datetime, datetime.datetime]:
        """Get all 1-RTT packets, one or both directions, and the first and last sniff times."""
        packets = []
        first, last = 0, 0
        for packet in self._get_packets(
            self._get_direction_filter(direction) + "quic.header_form==0"
        ):
            for layer in packet.layers:
                if (
                    layer.layer_name == "quic"
                    and not hasattr(layer, "long_packet_type")
                    and not hasattr(layer, "long_packet_type_v2")
                ):
                    if first == 0:
                        first = packet.sniff_time
                    last = packet.sniff_time
                    packets.append(layer)
        return packets, first, last

    def get_vnp(self, direction: Direction = Direction.ALL) -> List:
        """Get all Version Negotiation packets."""
        return self._get_packets(
            self._get_direction_filter(direction) + "quic.version==0"
        )

    def _get_long_header_packets(
        self, packet_type: PacketType, direction: Direction
    ) -> List:
        packets = []
        for packet in self._get_packets(
            self._get_direction_filter(direction)
            + "(quic.long.packet_type || quic.long.packet_type_v2)"
        ):
            for layer in packet.layers:
                if layer.layer_name == "quic" and (
                    (
                        hasattr(layer, "long_packet_type")
                        and layer.long_packet_type
                        == WIRESHARK_PACKET_TYPES[packet_type]
                    )
                    or (
                        hasattr(layer, "long_packet_type_v2")
                        and layer.long_packet_type_v2
                        == WIRESHARK_PACKET_TYPES_V2[packet_type]
                    )
                ):
                    packets.append(layer)
        return packets

    def get_initial(self, direction: Direction = Direction.ALL) -> List:
        """Get all Initial packets."""
        return self._get_long_header_packets(PacketType.INITIAL, direction)

    def get_retry(self, direction: Direction = Direction.ALL) -> List:
        """Get all Retry packets."""
        return self._get_long_header_packets(PacketType.RETRY, direction)

    def get_handshake(self, direction: Direction = Direction.ALL) -> List:
        """Get all Handshake packets."""
        return self._get_long_header_packets(PacketType.HANDSHAKE, direction)

    def get_0rtt(self) -> List:
        """Get all 0-RTT packets."""
        return self._get_long_header_packets(PacketType.ZERORTT, Direction.FROM_CLIENT)