algo/docs/cloud-linode.md
## API Token

Sign in to the Linode Manager and go to the [tokens management page](https://cloud.linode.com/profile/tokens).

Click `Add a Personal Access Token`. Label your new token and select *at least* the `Linodes` read/write permission and the `StackScripts` read/write permission. Press `Submit` and make sure to copy the displayed token, as it won't be shown again.
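If you'd rather not paste the token at the interactive prompt, it can be supplied up front. A minimal sketch, using the `linode_token` variable and `LINODE_TOKEN` environment variable described in [Deployment from Ansible](deploy-from-ansible.md) (the token and region values below are placeholders):

```shell
# Placeholder token; Algo can also read it from the LINODE_TOKEN environment variable.
export LINODE_TOKEN=0123abcd...
ansible-playbook main.yml -e "provider=linode server_name=algo region=us-east"
```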
algo/docs/cloud-scaleway.md
### Configuration file

Algo requires an API key from your Scaleway account to create a server.

The API key is generated by going to your Scaleway credentials at [https://console.scaleway.com/project/credentials](https://console.scaleway.com/project/credentials), and then selecting "Generate new API key" on the right side of the box labeled "API Keys". You'll be asked to specify a purpose for your API key before it is created.

You will then be presented with an "Access key" and a "Secret key". Enter the "Secret key" when Algo prompts you for the `auth token`. You won't need the "Access key". This value is passed as the `algo_scaleway_token` variable when asked for in the Algo prompt.

Your organization ID is also on this page: https://console.scaleway.com/account/credentials
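For a non-interactive run, the token can instead be passed on the command line. A sketch using the `scaleway_token` and `region` variables listed in [Deployment from Ansible](deploy-from-ansible.md) (placeholder values; the prompt above refers to the same secret):

```shell
# "SCW..." is a placeholder for the Secret key from the Scaleway console.
ansible-playbook main.yml -e "provider=scaleway scaleway_token=SCW... region=par1"
```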
algo/docs/cloud-vultr.md
### Configuration file

Algo requires an API key from your Vultr account in order to create a server. The API key is generated by going to your Vultr settings at https://my.vultr.com/settings/#settingsapi, and then selecting "generate new API key" on the right side of the box labeled "API Key".

Algo can read the API key in several different ways. Algo will first look for the file containing the API key in the environment variable $VULTR_API_CONFIG if present. You can set this with the command: `export VULTR_API_CONFIG=/path/to/vultr.ini`.

Probably the simplest way to give Algo the API key is to create a file titled `.vultr.ini` in your home directory by typing `nano ~/.vultr.ini`, then entering the following text:

```
[default]
key = <your api key>
```

where you've cut-and-pasted the API key from above into the `<your api key>` field (no brackets).

When Algo asks `Enter the local path to your configuration INI file (https://trailofbits.github.io/algo/cloud-vultr.html):`, if you hit enter without typing anything, Algo will look for the file in `~/.vultr.ini` by default.
algo/docs/deploy-from-ansible.md
# Deployment from Ansible

Before you begin, make sure you have installed all the dependencies necessary for your operating system as described in the [README](../README.md).

You can deploy Algo non-interactively by running the Ansible playbooks directly with `ansible-playbook`.

`ansible-playbook` accepts variables via the `-e` or `--extra-vars` option. You can pass variables as space-separated key=value pairs. Algo requires certain variables that are listed below.

You can also use the `--skip-tags` option to skip certain parts of the install, such as `iptables` (overwrite iptables rules), `ipsec` (install strongSwan), or `wireguard` (install WireGuard); an example appears after the variable list below. We don't recommend using the `-t` option, as it will only include the tagged portions of the deployment and skip certain necessary roles (such as `common`).

Here is a full example for DigitalOcean:

```shell
ansible-playbook main.yml -e "provider=digitalocean server_name=algo ondemand_cellular=false ondemand_wifi=false dns_adblocking=true ssh_tunneling=true store_pki=true region=ams3 do_token=token"
```

See below for more information about variables and roles.

### Variables

- `provider` - (Required) The provider to use. See possible values below
- `server_name` - (Required) Server name. Default: algo
- `ondemand_cellular` - (Optional) Enables VPN On Demand when connected to cellular networks for iOS/macOS clients using IPsec. Default: false
- `ondemand_wifi` - (Optional. See `ondemand_wifi_exclude`) Enables VPN On Demand when connected to WiFi networks for iOS/macOS clients using IPsec. Default: false
- `ondemand_wifi_exclude` - (Required if `ondemand_wifi` is set) WiFi networks to exclude from using the VPN. Comma-separated values
- `dns_adblocking` - (Optional) Enables dnscrypt-proxy adblocking. Default: false
- `ssh_tunneling` - (Optional) Enable SSH tunneling for each user. Default: false
- `store_pki` - (Optional) Whether or not to keep the CA key (required to add users in the future, but less secure). Default: false

If any of the above variables are unspecified, Ansible will ask the user to input them.
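For example, to deploy to an existing Ubuntu server while leaving its iptables rules untouched, the `local` provider can be combined with `--skip-tags` (a sketch with placeholder values; the required variables are listed in the Local Installation section below):

```shell
ansible-playbook main.yml --skip-tags iptables \
  -e "provider=local server=localhost endpoint=203.0.113.1 ca_password=changeme"
```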
### Ansible roles

Cloud roles can be activated by specifying an extra variable `provider`.

Cloud roles:

- role: cloud-digitalocean, [provider: digitalocean](#digital-ocean)
- role: cloud-ec2, [provider: ec2](#amazon-ec2)
- role: cloud-gce, [provider: gce](#google-compute-engine)
- role: cloud-vultr, [provider: vultr](#vultr)
- role: cloud-azure, [provider: azure](#azure)
- role: cloud-lightsail, [provider: lightsail](#lightsail)
- role: cloud-scaleway, [provider: scaleway](#scaleway)
- role: cloud-openstack, [provider: openstack](#openstack)
- role: cloud-cloudstack, [provider: cloudstack](#cloudstack)
- role: cloud-hetzner, [provider: hetzner](#hetzner)
- role: cloud-linode, [provider: linode](#linode)

Server roles:

- role: strongswan
  - Installs [strongSwan](https://www.strongswan.org/)
  - Enables AppArmor, limits CPU and memory access, and drops user privileges
  - Builds a Certificate Authority (CA) with [easy-rsa-ipsec](https://github.com/ValdikSS/easy-rsa-ipsec) and creates one client certificate per user
  - Bundles the appropriate certificates into Apple mobileconfig profiles for each user
- role: dns_adblocking
  - Installs DNS encryption through [dnscrypt-proxy](https://github.com/jedisct1/dnscrypt-proxy) with blacklists to be updated daily from `adblock_lists` in `config.cfg` - note this will occur even if `dns_encryption` in `config.cfg` is set to `false`
  - Constrains dnscrypt-proxy with AppArmor and cgroups CPU and memory limitations
- role: ssh_tunneling
  - Adds a restricted `algo` group with no shell access and limited SSH forwarding options
  - Creates one limited, local account and an SSH public key for each user
- role: wireguard
  - Installs a [WireGuard](https://www.wireguard.com/) server, with a startup script and automatic checks for upgrades
  - Creates wireguard.conf files for Linux clients as well as QR codes for Apple/Android clients

Note: The `strongswan` role generates Apple profiles with On-Demand Wifi and Cellular if you pass the following variables:

- ondemand_wifi: true
- ondemand_wifi_exclude: HomeNet,OfficeWifi
- ondemand_cellular: true

### Local Installation

- role: local, provider: local

This role is intended to be run for a local install onto an Ubuntu server, or onto an unsupported cloud provider's Ubuntu instance.

Required variables:

- server - IP address of your server (or "localhost" if deploying to the local machine)
- endpoint - public IP address of the server you're installing on
- ssh_user - name of the SSH user you will use to install on the machine (passwordless login required). If `server=localhost`, this isn't required.
- ca_password - Password for the private CA key

Note that by default, the iptables rules on your existing server will be overwritten. If you don't want to overwrite the iptables rules, you can use the `--skip-tags iptables` flag.

### Digital Ocean

Required variables:

- do_token
- region

Possible options can be gathered by calling <https://api.digitalocean.com/v2/regions>

### Amazon EC2

Required variables:

- aws_access_key: `AKIA...`
- aws_secret_key
- region: e.g. `us-east-1`

Possible options can be gathered via the CLI: `aws ec2 describe-regions`

Additional variables:

- [encrypted](https://aws.amazon.com/blogs/aws/new-encrypted-ebs-boot-volumes/) - Encrypted EBS boot volume. Boolean (Default: true)
- [size](https://aws.amazon.com/ec2/instance-types/) - EC2 instance type. String (Default: t2.micro)
- [image](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/describe-images.html) - AMI `describe-images` search parameters to find the OS for the hosted image. Each OS and architecture has a unique AMI-ID.
  The OS owner, for example [Ubuntu](https://cloud-images.ubuntu.com/locator/ec2/), updates these images often. If the parameters below yield multiple results, the most recent AMI-ID is chosen.

  ```
  # Example of the equivalent CLI command
  aws ec2 describe-images --owners "099720109477" --filters "Name=architecture,Values=arm64" "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-jammy-22.04*"
  ```

- [owners] - The operating system owner id. Default is [Canonical](https://help.ubuntu.com/community/EC2StartersGuide#Official_Ubuntu_Cloud_Guest_Amazon_Machine_Images_.28AMIs.29) (Default: 099720109477)
- [arch] - The architecture (Default: x86_64, Optional: arm64)
- [name] - The wildcard string to filter available ami names. Algo appends this name with the string "-\*64-server-\*", and prepends it with "ubuntu/images/hvm-ssd/" (Default: Ubuntu latest LTS)
- [instance_market_type](https://aws.amazon.com/ec2/pricing/) - Two pricing models are supported: on-demand and spot. String (Default: on-demand)
  - If using spot instance types, one additional IAM permission along with the minimum below is required for deployment:

    ```
    "ec2:CreateLaunchTemplate"
    ```

#### Minimum required IAM permissions for deployment

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PreDeployment",
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeImages",
        "ec2:DescribeKeyPairs",
        "ec2:DescribeRegions",
        "ec2:ImportKeyPair",
        "ec2:CopyImage"
      ],
      "Resource": [ "*" ]
    },
    {
      "Sid": "DeployCloudFormationStack",
      "Effect": "Allow",
      "Action": [
        "cloudformation:CreateStack",
        "cloudformation:UpdateStack",
        "cloudformation:DescribeStacks",
        "cloudformation:DescribeStackEvents",
        "cloudformation:ListStackResources"
      ],
      "Resource": [ "*" ]
    },
    {
      "Sid": "CloudFormationEC2Access",
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeRegions",
        "ec2:CreateInternetGateway",
        "ec2:DescribeVpcs",
        "ec2:CreateVpc",
        "ec2:DescribeInternetGateways",
        "ec2:ModifyVpcAttribute",
        "ec2:CreateTags",
        "ec2:CreateSubnet",
        "ec2:AssociateVpcCidrBlock",
        "ec2:AssociateSubnetCidrBlock",
        "ec2:AssociateRouteTable",
        "ec2:AssociateAddress",
        "ec2:CreateRouteTable",
        "ec2:AttachInternetGateway",
        "ec2:DescribeRouteTables",
        "ec2:DescribeSubnets",
        "ec2:ModifySubnetAttribute",
        "ec2:CreateRoute",
        "ec2:CreateSecurityGroup",
        "ec2:DescribeSecurityGroups",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:RunInstances",
        "ec2:DescribeInstances",
        "ec2:AllocateAddress",
        "ec2:DescribeAddresses"
      ],
      "Resource": [ "*" ]
    }
  ]
}
```

### Google Compute Engine

Required variables:

- gce_credentials_file: e.g. /configs/gce.json if you use the [GCE docs](https://trailofbits.github.io/algo/cloud-gce.html) - can also be defined in the environment as GCE_CREDENTIALS_FILE_PATH
- [region](https://cloud.google.com/compute/docs/regions-zones/): e.g. `us-east1`

### Vultr

Required variables:

- [vultr_config](https://trailofbits.github.io/algo/cloud-vultr.html): /path/to/.vultr.ini
- [region](https://api.vultr.com/v1/regions/list): e.g. `Chicago`, `'New Jersey'`

### Azure

Required variables:

- azure_secret
- azure_tenant
- azure_client_id
- azure_subscription_id
- [region](https://azure.microsoft.com/en-us/global-infrastructure/regions/)

### Lightsail

Required variables:

- aws_access_key: `AKIA...`
- aws_secret_key
- region: e.g. `us-east-1`
Possible options can be gathered via the CLI: `aws lightsail get-regions`

#### Minimum required IAM permissions for deployment

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "LightsailDeployment",
      "Effect": "Allow",
      "Action": [
        "lightsail:GetRegions",
        "lightsail:GetInstance",
        "lightsail:CreateInstances",
        "lightsail:DisableAddOn",
        "lightsail:PutInstancePublicPorts",
        "lightsail:StartInstance",
        "lightsail:TagResource",
        "lightsail:GetStaticIp",
        "lightsail:AllocateStaticIp",
        "lightsail:AttachStaticIp"
      ],
      "Resource": [ "*" ]
    },
    {
      "Sid": "DeployCloudFormationStack",
      "Effect": "Allow",
      "Action": [
        "cloudformation:CreateStack",
        "cloudformation:UpdateStack",
        "cloudformation:DescribeStacks",
        "cloudformation:DescribeStackEvents",
        "cloudformation:ListStackResources"
      ],
      "Resource": [ "*" ]
    }
  ]
}
```

### Scaleway

Required variables:

- [scaleway_token](https://www.scaleway.com/docs/generate-an-api-token/)
- region: e.g. `ams1`, `par1`

### OpenStack

You need to source the rc file prior to running Algo. Download it from the OpenStack dashboard under Compute -> API Access and source it in the shell (e.g. `source /tmp/dhc-openrc.sh`).

### CloudStack

Required variables:

- [cs_config](https://trailofbits.github.io/algo/cloud-cloudstack.html): /path/to/.cloudstack.ini
- cs_region: e.g. `exoscale`
- cs_zones: e.g. `ch-gva2`

The first two can also be defined in your environment, using the variables `CLOUDSTACK_CONFIG` and `CLOUDSTACK_REGION`.

### Hetzner

Required variables:

- hcloud_token: Your [API token](https://trailofbits.github.io/algo/cloud-hetzner.html#api-token) - can also be defined in the environment as HCLOUD_TOKEN
- region: e.g. `nbg1`

### Linode

Required variables:

- linode_token: Your [API token](https://trailofbits.github.io/algo/cloud-linode.html#api-token) - can also be defined in the environment as LINODE_TOKEN
- region: e.g. `us-east`

### Update users

Playbook:

```
users.yml
```

Required variables:

- server - IP or hostname to access the server via SSH
- ca_password - Password to access the CA key

Tags required:

- update-users
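Putting the above together, a non-interactive user update might look like this (a sketch; the server address and password are placeholders):

```shell
ansible-playbook users.yml -t update-users \
  -e "server=203.0.113.1 ca_password=changeme"
```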
algo/docs/deploy-from-cloudshell.md
# Deploy from Google Cloud Shell

**IMPORTANT NOTE: As of 2021-12-14 Algo requires Python 3.8, but Google Cloud Shell only provides Python 3.7.3. The instructions below will not work until Google updates Cloud Shell to have at least Python 3.8.**

If you want to try Algo but don't wish to install the software on your own system, you can use the **free** [Google Cloud Shell](https://cloud.google.com/shell/) to deploy a VPN to any supported cloud provider. Note that you cannot choose `Install to existing Ubuntu server` to turn Google Cloud Shell into your VPN server.

1. See the [Cloud Shell documentation](https://cloud.google.com/shell/docs/) to start an instance of Cloud Shell in your browser.

2. Follow the [Algo installation instructions](https://github.com/trailofbits/algo#deploy-the-algo-server) as shown, but skip step **3. Install Algo's core dependencies** as they are already installed. Run Algo to deploy to a supported cloud provider.

3. Once Algo has completed, retrieve the configuration files it created and download them to your local system. While still in the Algo directory, run:

    ```
    zip -r configs configs
    dl configs.zip
    ```

4. Unzip `configs.zip` on your local system and use the files to configure your VPN clients.
algo/docs/deploy-from-docker.md
# Docker Support

While it is not possible to run your Algo server from within a Docker container, it is possible to use Docker to provision your Algo server.

## Limitations

1. This has not yet been tested with user namespacing enabled.
2. If you're running this on Windows, take care when editing files under `configs/` to ensure that line endings are set appropriately for Unix systems.

## Deploying an Algo Server with Docker

1. Install [Docker](https://www.docker.com/community-edition#/download) -- setup and configuration is not covered here
2. Create a local directory to hold your VPN configs (e.g. `C:\Users\trailofbits\Documents\VPNs\`)
3. Create a local copy of [config.cfg](https://github.com/trailofbits/algo/blob/master/config.cfg), with required modifications (e.g. `C:\Users\trailofbits\Documents\VPNs\config.cfg`)
4. Run the Docker container, mounting your configurations appropriately (assuming the container is named `trailofbits/algo` with a tag `latest`):
    - From Windows:
      ```powershell
      C:\Users\trailofbits> docker run --cap-drop=all -it \
          -v C:\Users\trailofbits\Documents\VPNs:/data \
          ghcr.io/trailofbits/algo:latest
      ```
    - From Linux:
      ```bash
      $ docker run --cap-drop=all -it \
          -v /home/trailofbits/Documents/VPNs:/data \
          ghcr.io/trailofbits/algo:latest
      ```
5. When it exits, you'll be left with a fully populated `configs` directory, containing all appropriate configuration data for your clients and for future server management

### Providing Additional Files

If you need to provide additional files -- like authorization files for a Google Cloud project -- you can simply specify an additional `-v` parameter and provide the appropriate path when prompted by `algo`.

For example, you can specify `-v C:\Users\trailofbits\Documents\VPNs\gce_auth.json:/algo/gce_auth.json`, making the local path to your credentials JSON file `/algo/gce_auth.json`.

### Scripted deployment

Ansible variables (see [Deployment from Ansible](deploy-from-ansible.md)) can be passed via the `ALGO_ARGS` environment variable. _The leading `-e` (or `--extra-vars`) is required_, e.g.:

```bash
$ ALGO_ARGS="-e provider=digitalocean server_name=algo ondemand_cellular=false ondemand_wifi=false dns_adblocking=true ssh_tunneling=true store_pki=true region=ams3 do_token=token"
$ docker run --cap-drop=all -it \
    -e "ALGO_ARGS=$ALGO_ARGS" \
    -v /home/trailofbits/Documents/VPNs:/data \
    ghcr.io/trailofbits/algo:latest
```

## Managing an Algo Server with Docker

Even though the container itself is transient, because you've persisted the configuration data, you can use the same Docker image to manage your Algo server. This is done by setting the environment variable `ALGO_ARGS`.

If you want to use Algo to update the users on an existing server, specify `-e "ALGO_ARGS=update-users"` in your `docker run` command:

```powershell
$ docker run --cap-drop=all -it \
    -e "ALGO_ARGS=update-users" \
    -v C:\Users\trailofbits\Documents\VPNs:/data \
    ghcr.io/trailofbits/algo:latest
```

## GNU Makefile for Docker

You can also build and deploy with a Makefile. This simplifies some of the command strings and opens the door for further user configuration. The `Makefile` consists of three targets: `docker-build`, `docker-deploy`, and `docker-prune`. `docker-all` will run through all of them.

## Building Your Own Docker Image

You can use the Dockerfile provided in this repository as-is, or modify it to suit your needs. Further instructions on building an image can be found in the [Docker engine](https://docs.docker.com/engine/) documentation.
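For instance, building a local image from the repository root might look like this (the image name `local/algo` is arbitrary):

```bash
# Build an Algo image from the repository's Dockerfile.
docker build -t local/algo .
```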
## Security Considerations

Using Docker is largely no different from running Algo yourself, with a couple of notable exceptions: we run as root within the container, and you're retrieving your content from Docker Hub.

To work around the limitations of bind mounts in Docker, we have to run as root within the container. To mitigate concerns around doing this, we pass the `--cap-drop=all` parameter to `docker run`, which effectively removes all privileges from the root account, reducing it to a generic user account that happens to have a userid of 0. Further steps can be taken by applying `seccomp` profiles to the container; this is being considered as a future improvement.

Docker provides a concept of [Content Trust](https://docs.docker.com/engine/security/trust/content_trust/) for image management, which helps to ensure that the image you download is, in fact, the image that was uploaded. Content trust is still under development, and while we may be using it, its implementation, limitations, and constraints are documented with Docker.

## Future Improvements

1. Even though we're taking care to drop all capabilities to minimize the impact of running as root, we can probably include not only a `seccomp` profile, but also AppArmor and/or SELinux profiles as well.
2. The Docker image doesn't natively support [advanced](deploy-from-ansible.md) Algo deployments, which is useful for scripting. This can be done by launching an interactive shell and running the commands yourself.
3. The way configuration is passed into and out of the container is a bit kludgy. Hopefully future improvements in Docker volumes will make this a bit easier to handle.

## Advanced Usage

If you want to poke around the Docker container yourself, you can do so by changing your `entrypoint`. Pass `--entrypoint=/bin/ash` as a parameter to `docker run`, and you'll be dropped into a full Linux shell in the container.
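For example, adapting the Linux deployment command from above:

```bash
# Open an interactive shell in the container instead of running the deployment.
docker run --cap-drop=all -it \
    --entrypoint=/bin/ash \
    -v /home/trailofbits/Documents/VPNs:/data \
    ghcr.io/trailofbits/algo:latest
```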
algo/docs/deploy-from-macos.md
# Deploy from macOS

While you can't turn a macOS system into an AlgoVPN, you can install the Algo scripts on a macOS system and use them to deploy your AlgoVPN to a cloud provider.

Algo uses [Ansible](https://www.ansible.com), which requires Python 3. macOS includes an obsolete version of Python 2 installed as `/usr/bin/python`, which you should ignore.

## macOS 10.15 Catalina

Catalina comes with Python 3 installed as `/usr/bin/python3`. This file, and certain others like `/usr/bin/git`, start out as stub files that prompt you to install the Command Line Developer Tools package the first time you run them. This is the easiest way to install Python 3 on Catalina.

Note that Python 3 from Command Line Developer Tools prior to the release for Xcode 11.5 on 2020-05-20 might not work with Algo. If Software Update does not offer to update an older version of the tools, you can download a newer version from [here](https://developer.apple.com/download/more/) (Apple ID login required).

## macOS prior to 10.15 Catalina

You'll need to install Python 3 before you can run Algo. Python 3 is available from different packagers, two of which are listed below.

### Ansible and SSL Validation

Ansible validates SSL network connections using OpenSSL, but macOS includes LibreSSL, which behaves differently. Therefore each version of Python below includes or depends on its own copy of OpenSSL.

OpenSSL needs access to a list of trusted CA certificates in order to validate SSL connections. Each packager handles initializing this certificate store differently. If you see the error `CERTIFICATE_VERIFY_FAILED` when running Algo, make sure you've followed the packager-specific instructions correctly.

### Choose a packager and install Python 3

Choose one of the packagers below as your source for Python 3. Avoid installing versions from multiple packagers on the same Mac, as you may encounter conflicts. In particular, they might fight over creating symbolic links in `/usr/local/bin`.

#### Option 1: Install using the Homebrew package manager

If you're comfortable using the command line in Terminal, the [Homebrew](https://brew.sh) project is a great source of software for macOS.

First install Homebrew using the instructions on the [Homebrew](https://brew.sh) page. The install command below takes care of initializing the CA certificate store.

##### Installation

```
brew install python3
```

After installation, open a new tab or window in Terminal and verify that the command `which python3` returns `/usr/local/bin/python3`.

##### Removal

```
brew uninstall python3
```

#### Option 2: Install the package from Python.org

If you don't want to install a package manager, you can download the Python package for macOS from [python.org](https://www.python.org/downloads/mac-osx/).

##### Installation

Download the most recent version of Python and install it like any other macOS package. Then initialize the CA certificate store from Finder by double-clicking on the file `Install Certificates.command` found in the `/Applications/Python 3.8` folder.

When you double-click on `Install Certificates.command`, a new Terminal window will open. If the window remains blank, then the command has not run correctly. This can happen if you've changed the default shell in Terminal Preferences. Try changing it back to the default and run `Install Certificates.command` again.
After installation, open a new tab or window in Terminal and verify that the command `which python3` returns either `/usr/local/bin/python3` or `/Library/Frameworks/Python.framework/Versions/3.8/bin/python3`.

##### Removal

Unfortunately the python.org package does not include an uninstaller, and removing it requires several steps:

1. In Finder, delete the package folder found in `/Applications`.
2. In Finder, delete the *rest* of the package found under `/Library/Frameworks/Python.framework/Versions`.
3. In Terminal, undo the changes to your `PATH` by running:
    ```
    mv ~/.bash_profile.pysave ~/.bash_profile
    ```
4. In Terminal, remove the dozen or so symbolic links the package created in `/usr/local/bin`. Or just leave them, because installing another version of Python will overwrite most of them.
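Whichever packager you choose, it's worth confirming that the interpreter on your `PATH` is the one you expect and is recent enough for Algo (recent Algo releases require at least Python 3.8):

```
which python3
python3 --version
```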
algo/docs/deploy-from-script-or-cloud-init-to-localhost.md
# Deploy from script or cloud-init

You can use `install.sh` to prepare the environment and deploy AlgoVPN on the local Ubuntu server in one shot using cloud-init, or run the script directly on the server after it's been created.

The script doesn't configure any parameters in your cloud, so you're on your own to configure related [firewall rules](/docs/firewalls.md), a floating IP address, and other resources you may need.

The output of the install script (including the p12 and CA passwords) can be found at `/var/log/algo.log`, and user config files will be installed into the `/opt/algo/configs/localhost` directory. If you need to update users later, `cd /opt/algo`, change the user list in `config.cfg`, install additional dependencies as in step 4 of the [main README](https://github.com/trailofbits/algo/blob/master/README.md), and run `./algo update-users` from that directory.

## Cloud init deployment

You can copy-paste the snippet below into the user data (cloud-init or startup script) field when creating a new server. For now this has only been successfully tested on [DigitalOcean](https://www.digitalocean.com/docs/droplets/resources/metadata/), Amazon [EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html) and [Lightsail](https://lightsail.aws.amazon.com/ls/docs/en/articles/lightsail-how-to-configure-server-additional-data-shell-script), [Google Cloud](https://cloud.google.com/compute/docs/startupscript), [Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/using-cloud-init) and [Vultr](https://my.vultr.com/startup/), although Vultr doesn't [officially support cloud-init](https://www.vultr.com/docs/getting-started-with-cloud-init).

```
#!/bin/bash
curl -s https://raw.githubusercontent.com/trailofbits/algo/master/install.sh | sudo -E bash -x
```

The command will prepare the environment and install AlgoVPN with the default parameters below. If you want to modify the behavior, you may define additional variables.

## Variables

- `METHOD`: which deployment method to use. Possible values are local and cloud. Default: cloud. The cloud method is intended for use in cloud-init deployments only. If you are not using cloud-init to deploy the server, you have to use the local method.
- `ONDEMAND_CELLULAR`: "Connect On Demand" when connected to cellular networks. Boolean. Default: false.
- `ONDEMAND_WIFI`: "Connect On Demand" when connected to Wi-Fi. Default: false.
- `ONDEMAND_WIFI_EXCLUDE`: List the names of any trusted Wi-Fi networks where macOS/iOS IPsec clients should not use "Connect On Demand". Comma-separated list.
- `STORE_PKI`: Whether to retain the PKI (required to add users in the future, but less secure). Default: false.
- `DNS_ADBLOCKING`: Whether to install an ad blocking DNS resolver. Default: false.
- `SSH_TUNNELING`: Enable SSH tunneling for each user. Default: false.
- `ENDPOINT`: The public IP address or domain name of your server (IMPORTANT! This is used to verify the certificate). It will be gathered automatically for DigitalOcean, AWS, GCE, Azure, or Vultr if the `METHOD` is cloud. Otherwise you need to define this variable according to your public IP address.
- `USERS`: list of VPN users. Comma-separated list. Default: user1.
- `REPO_SLUG`: Owner and repository from which to get the installation scripts. Default: trailofbits/algo.
- `REPO_BRANCH`: Branch for `REPO_SLUG`. Default: master.
- `EXTRA_VARS`: Additional extra variables.
- `ANSIBLE_EXTRA_ARGS`: Any available ansible parameters, e.g. `--skip-tags apparmor`.
## Examples

##### How to customize a cloud-init deployment with variables

```
#!/bin/bash
export ONDEMAND_CELLULAR=true
export SSH_TUNNELING=true
curl -s https://raw.githubusercontent.com/trailofbits/algo/master/install.sh | sudo -E bash -x
```

##### How to deploy locally without using cloud-init

```
export METHOD=local
export ONDEMAND_CELLULAR=true
export ENDPOINT=[your server's IP here]
curl -s https://raw.githubusercontent.com/trailofbits/algo/master/install.sh | sudo -E bash -x
```

##### How to deploy a server using arguments

The arguments order is as per the [variables](#variables) above:

```
curl -s https://raw.githubusercontent.com/trailofbits/algo/master/install.sh | sudo -E bash -x -s local true false _null true true true true myvpnserver.com phone,laptop,desktop
```
algo/docs/deploy-from-windows.md
# Deploy from Windows

The Algo scripts can't be run directly on Windows, but you can use the Windows Subsystem for Linux (WSL) to run a copy of Ubuntu Linux right on your Windows system. You can then run Algo to deploy a VPN server to a supported cloud provider, though you can't turn the instance of Ubuntu running under WSL into a VPN server.

To run WSL you will need:

* A 64-bit system
* 64-bit Windows 10 (Anniversary update or later version)

## Install WSL

Enable the 'Windows Subsystem for Linux':

1. Open 'Settings'
2. Click 'Update & Security', then click the 'For developers' option on the left.
3. Toggle the 'Developer mode' option, and accept any warnings Windows pops up.

Wait a minute for Windows to install a few things in the background (it will eventually let you know a restart may be required for changes to take effect—ignore that for now). Next, to install the actual Linux Subsystem, you have to jump over to 'Control Panel' and do the following:

1. Click on 'Programs'
2. Click on 'Turn Windows features on or off'
3. Scroll down and check 'Windows Subsystem for Linux', and then click OK.
4. The subsystem will be installed, then Windows will require a restart.
5. Restart Windows and then install [Ubuntu 20.04 LTS from the Windows Store](https://www.microsoft.com/p/ubuntu-2004-lts/9n6svws3rx71).
6. Run Ubuntu from the Start menu. It will take a few minutes to install. It will have you create a separate user account for the Linux subsystem. Once that's done, you will finally have Ubuntu running somewhat integrated with Windows.

## Install Algo

Run these commands in the Ubuntu Terminal to install a prerequisite package and download the Algo scripts to your home directory. Note that when using WSL you should **not** install Algo in the `/mnt/c` directory due to problems with file permissions.

You may need to follow [these directions](https://devblogs.microsoft.com/commandline/copy-and-paste-arrives-for-linuxwsl-consoles/) in order to paste commands into the Ubuntu Terminal.

```shell
cd
umask 0002
sudo apt update
sudo apt install -y python3-virtualenv
git clone https://github.com/trailofbits/algo
cd algo
```

## Post installation steps

These steps are only needed if you cloned the Algo repository to a host machine disk (C:, D:, etc.). WSL mounts host system disks under the `/mnt` directory.

### Allow git to change file metadata

By default git cannot change file metadata (using chmod, for example) for files stored on host machine disks (https://docs.microsoft.com/en-us/windows/wsl/wsl-config#set-wsl-launch-settings). Allow it:

1. Start the Ubuntu Terminal.
2. Edit /etc/wsl.conf (create it if it doesn't exist). Add the following:
    ```
    [automount]
    options = "metadata"
    ```
3. Close all Ubuntu Terminals.
4. Run PowerShell.
5. Run `wsl --shutdown` in PowerShell.

### Allow Ansible to run in a world-writable directory

Ansible treats host machine directories as world-writable and does not load configuration files from them by default (https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir). To fix this, run inside the `algo` directory:

```shell
chmod 744 .
```

Now you can continue by following the [README](https://github.com/trailofbits/algo#deploy-the-algo-server) from the 4th step to deploy your Algo server!

You'll be instructed to edit the file `config.cfg` in order to specify the Algo user accounts to be created. If you're new to Linux, the simplest editor to use is `nano`.
To edit the file while in the `algo` directory, run:

```shell
nano config.cfg
```

Once `./algo` has finished, you can use the `cp` command to copy the configuration files from the `configs` directory into your Windows directory under `/mnt/c/Users` for easier access.
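For example (the username is a placeholder for your own):

```shell
cp -r configs /mnt/c/Users/<your-username>/
```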
algo/docs/deploy-to-freebsd.md
# FreeBSD / HardenedBSD server setup

FreeBSD server support is a work in progress. For now, it is only possible to install Algo on existing FreeBSD 11 systems.

## System preparation

Ensure that the following kernel options are enabled:

```
# sysctl kern.conftxt | grep -iE "IPSEC|crypto"
options IPSEC
options IPSEC_NAT_T
device crypto
```

## Available roles

* vpn
* ssh_tunneling
* dns_adblocking

## Additional variables

* rebuild_kernel - set to `true` if you want to let Algo rebuild your kernel if needed (takes a lot of time)

## Installation

```shell
ansible-playbook main.yml -e "provider=local"
```

and follow the instructions.
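If a kernel rebuild is acceptable, the variable can be passed on the same command line (a sketch):

```shell
ansible-playbook main.yml -e "provider=local rebuild_kernel=true"
```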
algo/docs/deploy-to-ubuntu.md
# Local Installation

**PLEASE NOTE**: Algo is intended to be used to create a _dedicated_ VPN server. No uninstallation option is provided. If you install Algo on an existing server, any existing services might break. In particular, the firewall rules will be overwritten. See [AlgoVPN and Firewalls](/docs/firewalls.md) for more information.

------

## Outbound VPN Server

You can use Algo to configure a pre-existing server as an AlgoVPN rather than using it to create and configure a new server on a supported cloud provider. This is referred to as a **local** installation rather than a **cloud** deployment. If you're new to Algo or unfamiliar with Linux, you'll find a cloud deployment to be easier.

To perform a local installation, install the Algo scripts following the normal installation instructions, then choose:

```
Install to existing Ubuntu latest LTS server (for more advanced users)
```

Make sure your target server is running an unmodified copy of the operating system version specified. The target can be the same system where you've installed the Algo scripts, or a remote system that you are able to access as root via SSH without needing to enter the SSH key passphrase (such as when using `ssh-agent`).

## Inbound VPN Server (also called "Road Warrior" setup)

Some may find it useful to set up an Algo server on an Ubuntu box on your home LAN, with the intention of being able to securely access your LAN and any resources on it when you're traveling elsewhere (the ["road warrior" setup](https://en.wikipedia.org/wiki/Road_warrior_(computing))). A few tips if you're doing so (a sketch of the config changes follows the list):

- Make sure you forward any [relevant incoming ports](/docs/firewalls.md#external-firewall) to the Algo server from your router;
- Change `BetweenClients_DROP` in `config.cfg` to `false`, and also consider changing `block_smb` and `block_netbios` to `false`;
- If you want to use a DNS server on your LAN to resolve local domain names properly (e.g. a Pi-hole), set the `dns_encryption` flag in `config.cfg` to `false`, and change `dns_servers` to the local DNS server IP (e.g. `192.168.1.2`).
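A sketch of the corresponding `config.cfg` changes for this setup (the exact keys and surrounding structure may differ between Algo releases, and `192.168.1.2` is a placeholder for your local DNS server):

```
BetweenClients_DROP: false
block_smb: false
block_netbios: false
dns_encryption: false
dns_servers:
  - 192.168.1.2
```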
algo/docs/deploy-to-unsupported-cloud.md
# Unsupported Cloud Providers

Algo officially supports the [cloud providers listed here](https://github.com/trailofbits/algo/blob/master/README.md#deploy-the-algo-server). If you want to deploy Algo on another virtual hosting provider, that provider must support:

1. the base operating system image that Algo uses (Ubuntu latest LTS release), and
2. a minimum set of kernel modules required for the strongSwan IPsec server.

Please see the [Required Kernel Modules](https://wiki.strongswan.org/projects/strongswan/wiki/KernelModules) documentation from strongSwan for a list of the specific required modules and a script to check for them. As a first step, we recommend running their shell script to determine initial compatibility with your new hosting provider.

If you want Algo to officially support your new cloud provider, then it must have an Ansible [cloud module](https://docs.ansible.com/ansible/list_of_cloud_modules.html) available. If no module is available for your provider, search Ansible's [open issues](https://github.com/ansible/ansible/issues) and [pull requests](https://github.com/ansible/ansible/pulls) for existing efforts to add it. If none are available, then you may want to develop the module yourself. Reference the [Ansible module developer documentation](https://docs.ansible.com/ansible/dev_guide/developing_modules.html) and the API documentation for your hosting provider.

## IPsec in userland

Hosting providers that rely on OpenVZ or Docker cannot be used by Algo since they cannot load the required kernel modules or access the required network interfaces. For more information, see the strongSwan documentation on [Cloud Platforms](https://wiki.strongswan.org/projects/strongswan/wiki/Cloudplatforms).

In order to address this issue, strongSwan has developed the [kernel-libipsec](https://wiki.strongswan.org/projects/strongswan/wiki/Kernel-libipsec) plugin, which provides an IPsec backend that works entirely in userland. `libipsec` bundles its own IPsec implementation and uses TUN devices to route packets. For example, `libipsec` is used by the Android strongSwan app to address Android's lack of a functional IPsec stack.

Use of `libipsec` is not supported by Algo. It has known performance issues since it buffers each packet in memory. On certain systems with insufficient processor power, such as many cloud hosting providers, using `libipsec` can lead to an out-of-memory condition, crash the charon daemon, or lock up the entire host.

Further, `libipsec` introduces unknown security risks. The code in `libipsec` has not been scrutinized to the same level as the code in the Linux or FreeBSD kernel that it replaces. This additional code introduces new complexity to the Algo server that we want to avoid at this time. We recommend moving to a hosting provider that does not require libipsec and can load the required kernel modules.
algo/docs/faq.md
# FAQ

* [Has Algo been audited?](#has-algo-been-audited)
* [What's the current status of WireGuard?](#whats-the-current-status-of-wireguard)
* [Why aren't you using Tor?](#why-arent-you-using-tor)
* [Why aren't you using Racoon, LibreSwan, or OpenSwan?](#why-arent-you-using-racoon-libreswan-or-openswan)
* [Why aren't you using a memory-safe or verified IKE daemon?](#why-arent-you-using-a-memory-safe-or-verified-ike-daemon)
* [Why aren't you using OpenVPN?](#why-arent-you-using-openvpn)
* [Why aren't you using Alpine Linux, OpenBSD, or HardenedBSD?](#why-arent-you-using-alpine-linux-openbsd-or-hardenedbsd)
* [I deployed an Algo server. Can you update it with new features?](#i-deployed-an-algo-server-can-you-update-it-with-new-features)
* [Where did the name "Algo" come from?](#where-did-the-name-algo-come-from)
* [Can DNS filtering be disabled?](#can-dns-filtering-be-disabled)
* [Wasn't IPSEC backdoored by the US government?](#wasnt-ipsec-backdoored-by-the-us-government)
* [What inbound ports are used?](#what-inbound-ports-are-used)
* [How do I monitor user activity?](#how-do-i-monitor-user-activity)
* [How do I reach another connected client?](#how-do-i-reach-another-connected-client)

## Has Algo been audited?

No. This project is under active development. We're happy to [accept and fix issues](https://github.com/trailofbits/algo/issues) as they are identified. Use Algo at your own risk. If you find a security issue of any severity, please [contact us on Slack](https://slack.empirehacking.nyc).

## What's the current status of WireGuard?

[WireGuard reached "stable" 1.0.0 release](https://lists.zx2c4.com/pipermail/wireguard/2020-March/005206.html) in Spring 2020. It has undergone [substantial](https://www.wireguard.com/formal-verification/) security review.

## Why aren't you using Tor?

The goal of this project is not to provide anonymity, but to ensure confidentiality of network traffic. Tor introduces new risks that are unsuitable for Algo's intended users. Namely, with Algo, users are in control over the gateway routing their traffic. With Tor, users are at the mercy of [actively](https://www.securityweek2016.tu-darmstadt.de/fileadmin/user_upload/Group_securityweek2016/pets2016/10_honions-sanatinia.pdf) [malicious](https://web.archive.org/web/20150705184539/https://chloe.re/2015/06/20/a-month-with-badonions/) [exit](https://community.fireeye.com/people/archit.mehta/blog/2014/11/18/onionduke-apt-malware-distributed-via-malicious-tor-exit-node) [nodes](https://www.wired.com/2010/06/wikileaks-documents/).

## Why aren't you using Racoon, LibreSwan, or OpenSwan?

Racoon does not support IKEv2. Racoon2 supports IKEv2 but is not actively maintained. When we looked, the documentation for strongSwan was better than the corresponding documentation for LibreSwan or OpenSwan. strongSwan also has the benefit of a from-scratch rewrite to support IKEv2. I consider such rewrites a positive step when supporting a major new protocol version.

## Why aren't you using a memory-safe or verified IKE daemon?

I would, but I don't know of any [suitable ones](https://github.com/trailofbits/algo/issues/68). If you're in the position to fund the development of such a project, [contact us](mailto:info@trailofbits.com). We would be interested in leading such an effort. At the very least, I plan to make modifications to strongSwan and the environment it's deployed in that prevent or significantly complicate exploitation of any latent issues.

## Why aren't you using OpenVPN?
OpenVPN does not have out-of-the-box client support on any major desktop or mobile operating system. This introduces user experience issues and requires the user to [update](https://www.exploit-db.com/exploits/34037/) and [maintain](https://www.exploit-db.com/exploits/20485/) the software themselves. OpenVPN depends on the security of [TLS](https://tools.ietf.org/html/rfc7457), both the [protocol](https://arstechnica.com/security/2016/08/new-attack-can-pluck-secrets-from-1-of-https-traffic-affects-top-sites/) and its [implementations](https://arstechnica.com/security/2014/04/confirmed-nasty-heartbleed-bug-exposes-openvpn-private-keys-too/), and we simply trust the server less due to [past](https://sweet32.info/) [security](https://github.com/ValdikSS/openvpn-fix-dns-leak-plugin/blob/master/README.md) [incidents](https://www.exploit-db.com/exploits/34879/).

## Why aren't you using Alpine Linux, OpenBSD, or HardenedBSD?

Alpine Linux is not supported out-of-the-box by any major cloud provider. We are interested in supporting Free-, Open-, and HardenedBSD. Follow along or contribute to our BSD support in [this issue](https://github.com/trailofbits/algo/issues/35).

## I deployed an Algo server. Can you update it with new features?

No. By design, the Algo development team has no access to any Algo server that our users have deployed. We cannot modify the configuration, update the software, or sniff the traffic that goes through your personal Algo VPN server. This prevents scenarios where we are legally compelled or hacked to push down backdoored updates that surveil our users.

As a result, once your Algo server has been deployed, it is yours to maintain. It will use unattended-upgrades by default to apply security and feature updates to Ubuntu, as well as to the core VPN software of strongSwan, dnscrypt-proxy and WireGuard. However, if you want to take advantage of new features available in the current release of Algo, then you have two options. You can use the [SSH administrative interface](/README.md#ssh-into-algo-server) to make the changes you want on your own, or you can shut down the server and deploy a new one (recommended).

As an extension of this rationale, most configuration options (other than users) available in `config.cfg` can only be set at the time of initial deployment.

## Where did the name "Algo" come from?

Algo is short for "Al Gore", the **V**ice **P**resident of **N**etworks everywhere for [inventing the Internet](https://www.youtube.com/watch?v=BnFJ8cHAlco).

## Can DNS filtering be disabled?

You can temporarily disable DNS filtering for all IPsec clients at once with the following workaround: SSH to your Algo server (using the 'shell access' command printed upon a successful deployment), edit `/etc/ipsec.conf`, and change `rightdns=<random_ip>` to `rightdns=8.8.8.8`. Then run `sudo systemctl restart strongswan`. DNS filtering for WireGuard clients has to be disabled on each client device separately by modifying the settings in the app, or by directly modifying the `DNS` setting in the `clientname.conf` file. If all else fails, we recommend deploying a new Algo server without the adblocking feature enabled.

## Wasn't IPSEC backdoored by the US government?

No.

[Per security researcher Thomas Ptacek](https://news.ycombinator.com/item?id=2014197):

> In 2001, Angelos Keromytis --- then a grad student at Penn, now a Columbia professor --- added support for hardware-accelerated IPSEC NICs.
> When you have an IPSEC NIC, the channel between the NIC and the IPSEC stack keeps state to tell the stack not to bother doing the things the NIC already did, among them validating the IPSEC ESP authenticator. Angelos' code had a bug; it appears to have done the software check only when the hardware had already done it, and skipped it otherwise.
>
> The bug happened during a change that simultaneously refactored and added a feature to OpenBSD's ESP code; a comparison that should have been == was instead !=; the "if" statement with the bug was originally and correctly !=, but should have been flipped based on how the code was refactored.
>
> HD Moore may as we speak be going through the pain of reconstituting a nearly decade-old version of OpenBSD to verify the bug, but stipulate that it was there, and here's what you get: IPSEC ESP packet authentication was disabled if you didn't have hardware IPSEC. There is probably an elaborate man-in-the-middle scenario in which this could get you traffic inspection, but it's nowhere nearly as straightforward as leaking key bits.
>
> To entertain the conspiracy theory, you're still suggesting that the FBI not only introduced this bug, but also developed the technology required to MITM ESP sessions, bouncing them through some secret FBI-developed middlebox.
>
> One year later, Jason Wright from NETSEC (the company at the heart of the [I think silly] allegations about OpenBSD IPSEC backdoors) fixed the bug.
>
> It's interesting that the bug was fixed without an advisory (oh to be a fly on the wall on ICB that day; Theo had a, um, a, "way" with his dev team). On the other hand, we don't know what releases of OpenBSD actually had the bug right now.
>
> It seems vanishingly unlikely that there could have been anything deliberate about this series of changes. You are unlikely to find anyone who will impugn Angelos. Meanwhile, the diffs tell exactly the opposite of the story that Greg Perry told.

## What inbound ports are used?

You should only need 4160/TCP, 500/UDP, 4500/UDP, and 51820/UDP opened on any firewall that sits between your clients and your Algo server. See [AlgoVPN and Firewalls](/docs/firewalls.md) for more information.

## How do I monitor user activity?

Your Algo server will track IPsec client logins by default in `/var/log/syslog`. This will give you client names, date/time of connection and reconnection, and what IP addresses they're connecting from. This can be disabled entirely by setting `strongswan_log_level` to `-1` in `config.cfg`. WireGuard doesn't save any logs, but entering `sudo wg` on the server will give you the last endpoint and contact time of each client. Disabling this is [paradoxically difficult](https://git.zx2c4.com/blind-operator-mode/about/). There isn't any out-of-the-box way to monitor actual user _activity_ (e.g. websites browsed, etc.)

## How do I reach another connected client?

By default, your Algo server doesn't allow connections between connected clients. This can be changed at the time of deployment by enabling the `BetweenClients_DROP` flag in `config.cfg`. See the ["Road Warrior" instructions](/docs/deploy-to-ubuntu.md#road-warrior-setup) for more details.
algo/docs/firewalls.md
# AlgoVPN and Firewalls

Your AlgoVPN requires properly configured firewalls. The key points to know are:

* If you deploy to a **cloud** provider, all firewall configuration will be done automatically.
* If you perform a **local** installation on an existing server, you are responsible for configuring any external firewalls. You must also take care not to interfere with the server firewall configuration of the AlgoVPN.

## The Two Types of Firewall

![Firewall Illustration](/docs/images/firewalls.png)

### Server Firewall

During installation Algo configures the Linux [Netfilter](https://en.wikipedia.org/wiki/Netfilter) firewall on the server. The rules added are required for AlgoVPN to work properly. The package `netfilter-persistent` is used to load the IPv4 and IPv6 rules files that Algo generates and stores in `/etc/iptables`. The rules for IPv6 are only generated if the server appears to be properly configured for IPv6. The use of conflicting firewall packages on the server such as `ufw` will likely break AlgoVPN.

### External Firewall

Most cloud service providers offer a firewall that sits between the Internet and your AlgoVPN. With some providers (such as EC2, Lightsail, and GCE) this firewall is required and is configured by Algo during a **cloud** deployment. If the firewall is not required by the provider, then Algo does not configure it.

External firewalls are not configured when performing a **local** installation, even when using a server from a cloud service provider.

Any external firewall must be configured to pass the following incoming ports over IPv4:

Port  | Protocol | Description        | Related variables in `config.cfg`
----- | -------- | ------------------ | ---------------------------------
4160  | TCP      | Secure Shell (SSH) | `ssh_port` (**cloud** only; for **local** the port remains 22)
500   | UDP      | IPsec IKEv2        | `ipsec_enabled`
4500  | UDP      | IPsec NAT-T        | `ipsec_enabled`
51820 | UDP      | WireGuard          | `wireguard_enabled`, `wireguard_port`

If you have chosen to disable either IPsec or WireGuard in `config.cfg` before running `./algo`, then the corresponding ports don't need to pass through the firewall. SSH is used when performing a **cloud** deployment and when subsequently modifying the list of VPN users by running `./algo update-users`.

Even when not required by the cloud service provider, you still might wish to use an external firewall to limit SSH access to your AlgoVPN to connections from certain IP addresses, or perhaps to block SSH access altogether if you don't need it. Every service provider firewall is different, so refer to the provider's documentation for more information.
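To review the rules Algo installed on the server itself, you can list the live ruleset and the persisted rules files (standard iptables commands; the paths are those described under Server Firewall above):

```
sudo iptables -S
sudo ip6tables -S
ls /etc/iptables
```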
algo/docs/index.md
# Algo VPN documentation

* Deployment instructions
  - Deploy from [RedHat/CentOS 6.x](deploy-from-redhat-centos6.md)
  - Deploy from [Windows](deploy-from-windows.md)
  - Deploy from a [Docker container](deploy-from-docker.md)
  - Deploy from [Ansible](deploy-from-ansible.md) non-interactively
  - Deploy onto a [cloud server at time of creation with shell script or cloud-init](deploy-from-script-or-cloud-init-to-localhost.md)
  - Deploy from [macOS](deploy-from-macos.md)
  - Deploy from [Google Cloud Shell](deploy-from-cloudshell.md)
* Client setup
  - Setup [Android](client-android.md) clients
  - Setup [Generic/Linux](client-linux.md) clients with Ansible
  - Setup Ubuntu clients to use [WireGuard](client-linux-wireguard.md)
  - Setup Linux clients to use [IPsec](client-linux-ipsec.md)
  - Setup Apple devices to use [IPsec](client-apple-ipsec.md)
  - Setup Macs running macOS 10.13 or older to use [WireGuard](client-macos-wireguard.md)
* Cloud provider setup
  - Configure [Amazon EC2](cloud-amazon-ec2.md)
  - Configure [Azure](cloud-azure.md)
  - Configure [DigitalOcean](cloud-do.md)
  - Configure [Google Cloud Platform](cloud-gce.md)
  - Configure [Vultr](cloud-vultr.md)
  - Configure [CloudStack](cloud-cloudstack.md)
  - Configure [Hetzner Cloud](cloud-hetzner.md)
* Advanced Deployment
  - Deploy to your own [FreeBSD](deploy-to-freebsd.md) server
  - Deploy to your own [Ubuntu](deploy-to-ubuntu.md) server, and road warrior setup
  - Deploy to an [unsupported cloud provider](deploy-to-unsupported-cloud.md)
* [FAQ](faq.md)
* [Firewalls](firewalls.md)
* [Troubleshooting](troubleshooting.md)
algo/docs/troubleshooting.md
# Troubleshooting

First of all, check [this](https://github.com/trailofbits/algo#features) and ensure that you are deploying to a supported Ubuntu version.

* [Installation Problems](#installation-problems)
  * [Error: "You have not agreed to the Xcode license agreements"](#error-you-have-not-agreed-to-the-xcode-license-agreements)
  * [Error: checking whether the C compiler works... no](#error-checking-whether-the-c-compiler-works-no)
  * [Error: "fatal error: 'openssl/opensslv.h' file not found"](#error-fatal-error-opensslopensslvh-file-not-found)
  * [Error: "TypeError: must be str, not bytes"](#error-typeerror-must-be-str-not-bytes)
  * [Error: "ansible-playbook: command not found"](#error-ansible-playbook-command-not-found)
  * [Error: "Could not fetch URL ... TLSV1_ALERT_PROTOCOL_VERSION"](#could-not-fetch-url--tlsv1_alert_protocol_version)
  * [Fatal: "Failed to validate the SSL certificate for ..."](#fatal-failed-to-validate-the-SSL-certificate)
  * [Bad owner or permissions on .ssh](#bad-owner-or-permissions-on-ssh)
  * [The region you want is not available](#the-region-you-want-is-not-available)
  * [AWS: SSH permission denied with an ECDSA key](#aws-ssh-permission-denied-with-an-ecdsa-key)
  * [AWS: "Deploy the template" fails with CREATE_FAILED](#aws-deploy-the-template-fails-with-create_failed)
  * [AWS: not authorized to perform: cloudformation:UpdateStack](#aws-not-authorized-to-perform-cloudformationupdatestack)
  * [DigitalOcean: error tagging resource 'xxxxxxxx': param is missing or the value is empty: resources](#digitalocean-error-tagging-resource)
  * [Azure: The client xxx with object id xxx does not have authorization to perform action Microsoft.Resources/subscriptions/resourcegroups/write' over scope](#azure-deployment-permissions-error)
  * [Windows: The value of parameter linuxConfiguration.ssh.publicKeys.keyData is invalid](#windows-the-value-of-parameter-linuxconfigurationsshpublickeyskeydata-is-invalid)
  * [Docker: Failed to connect to the host via ssh](#docker-failed-to-connect-to-the-host-via-ssh)
  * [Error: Failed to create symlinks for deploying to localhost](#error-failed-to-create-symlinks-for-deploying-to-localhost)
  * [Wireguard: Unable to find 'configs/...' in expected paths](#wireguard-unable-to-find-configs-in-expected-paths)
  * [Ubuntu Error: "unable to write 'random state'" when generating CA password](#ubuntu-error-unable-to-write-random-state-when-generating-ca-password)
  * [Timeout when waiting for search string OpenSSH in xxx.xxx.xxx.xxx:4160](#old-networking-firewall-in-place)
* [Connection Problems](#connection-problems)
  * [I'm blocked or get CAPTCHAs when I access certain websites](#im-blocked-or-get-captchas-when-i-access-certain-websites)
  * [I want to change the list of trusted Wifi networks on my Apple device](#i-want-to-change-the-list-of-trusted-wifi-networks-on-my-apple-device)
  * [Error: "The VPN Service payload could not be installed."](#error-the-vpn-service-payload-could-not-be-installed)
  * [Little Snitch is broken when connected to the VPN](#little-snitch-is-broken-when-connected-to-the-vpn)
  * [I can't get my router to connect to the Algo server](#i-cant-get-my-router-to-connect-to-the-algo-server)
  * [I can't get Network Manager to connect to the Algo server](#i-cant-get-network-manager-to-connect-to-the-algo-server)
  * [Various websites appear to be offline through the VPN](#various-websites-appear-to-be-offline-through-the-vpn)
  * [Clients appear stuck in a reconnection loop](#clients-appear-stuck-in-a-reconnection-loop)
  * [Wireguard: clients can connect on Wifi but not LTE](#wireguard-clients-can-connect-on-wifi-but-not-lte)
  * [IPsec: Difficulty connecting through router](#ipsec-difficulty-connecting-through-router)
* [I have a problem not covered here](#i-have-a-problem-not-covered-here)

## Installation Problems

Look here if you have a problem running the installer to set up a new Algo server.

### Python version is not supported

The minimum Python version required to run Algo is 3.8. Most modern operating systems should have it by default, but if the OS you are using doesn't meet the requirements, you have to upgrade. See the official documentation for your OS, or download it manually from https://www.python.org/downloads/. Otherwise, you may [deploy from Docker](deploy-from-docker.md).

### Error: "You have not agreed to the Xcode license agreements"

On macOS, you tried to install the dependencies with pip and encountered the following error:

```
Downloading cffi-1.9.1.tar.gz (407kB): 407kB downloaded
Running setup.py (path:/private/tmp/pip_build_root/cffi/setup.py) egg_info for package cffi
You have not agreed to the Xcode license agreements, please run 'xcodebuild -license' (for user-level acceptance) or 'sudo xcodebuild -license' (for system-wide acceptance) from within a Terminal window to review and agree to the Xcode license agreements.
No working compiler found, or bogus compiler options passed to the compiler from Python's distutils module.
See the error messages above.
----------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /private/tmp/pip_build_root/cffi
Storing debug log for failure in /Users/algore/Library/Logs/pip.log
```

The Xcode compiler is installed but requires you to accept its license agreement prior to using it. Run `xcodebuild -license` to agree and then retry installing the dependencies.

### Error: checking whether the C compiler works... no

On macOS, you tried to install the dependencies with pip and encountered the following error:

```
Failed building wheel for pycrypto
Running setup.py clean for pycrypto
Failed to build pycrypto
...
copying lib/Crypto/Signature/PKCS1_v1_5.py -> build/lib.macosx-10.6-intel-2.7/Crypto/Signature
running build_ext
running build_configure
checking for gcc... gcc
checking whether the C compiler works... no
configure: error: in '/private/var/folders/3f/q33hl6_x6_nfyjg29fcl9qdr0000gp/T/pip-build-DB5VZp/pycrypto':
configure: error: C compiler cannot create executables
See config.log for more details
Traceback (most recent call last):
  File "<string>", line 1, in <module>
...
    cmd_obj.run()
  File "/private/var/folders/3f/q33hl6_x6_nfyjg29fcl9qdr0000gp/T/pip-build-DB5VZp/pycrypto/setup.py", line 278, in run
    raise RuntimeError("autoconf error")
RuntimeError: autoconf error
```

You don't have a working compiler installed. Install the Xcode command line tools by opening your terminal and running `xcode-select --install`.

### Error: "fatal error: 'openssl/opensslv.h' file not found"

On macOS, you tried to install `cryptography` and encountered the following error:

```
build/temp.macosx-10.12-intel-2.7/_openssl.c:434:10: fatal error: 'openssl/opensslv.h' file not found
#include <openssl/opensslv.h>
         ^
1 error generated.
error: command 'cc' failed with exit status 1
----------------------------------------
Cleaning up...
Command /usr/bin/python -c "import setuptools, tokenize;__file__='/private/tmp/pip_build_root/cryptography/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-sREEE5-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /private/tmp/pip_build_root/cryptography
Storing debug log for failure in /Users/algore/Library/Logs/pip.log
```

You are running an old version of `pip` that cannot download the binary `cryptography` dependency. Upgrade to a new version of `pip` by running `sudo python3 -m pip install -U pip`.

### Error: "ansible-playbook: command not found"

You tried to install Algo and you see an error that reads "ansible-playbook: command not found."

You did not finish step 4 in the installation instructions, "[Install Algo's remaining dependencies](https://github.com/trailofbits/algo#deploy-the-algo-server)." Algo depends on [Ansible](https://github.com/ansible/ansible), an automation framework, and this error indicates that you do not have Ansible installed. Ansible is installed by `pip` when you run `python3 -m pip install -r requirements.txt`. You must complete the installation instructions to run the Algo server deployment process.

### Fatal: "Failed to validate the SSL certificate"

You received a message like this:

```
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to validate the SSL certificate for api.digitalocean.com:443. Make sure your managed systems have a valid CA certificate installed. You can use validate_certs=False if you do not need to confirm the servers identity but this is unsafe and not recommended. Paths checked for this platform: /etc/ssl/certs, /etc/ansible, /usr/local/etc/openssl. The exception msg was: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1076).", "status": -1, "url": "https://api.digitalocean.com/v2/regions"}
```

Your local system does not have a CA certificate that can validate the cloud provider's API. Are you using MacPorts instead of Homebrew?
The MacPorts openssl installation does not include a CA certificate, but you can fix this by installing the [curl-ca-bundle](https://andatche.com/articles/2012/02/fixing-ssl-ca-certificates-with-openssl-from-macports/) port with `port install curl-ca-bundle`. That should do the trick.

### Could not fetch URL ... TLSV1_ALERT_PROTOCOL_VERSION

You tried to install Algo and you received an error like this one:

```
Could not fetch URL https://pypi.python.org/simple/secretstorage/: There was a problem confirming the ssl certificate: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:590) - skipping
Could not find a version that satisfies the requirement SecretStorage<3 (from -r requirements.txt (line 2)) (from versions: )
No matching distribution found for SecretStorage<3 (from -r requirements.txt (line 2))
```

Your version of Python is too old to negotiate the newer TLS versions that PyPI requires. Upgrade your Python 3, for example with `brew upgrade python3`, or download a current release from https://www.python.org/downloads/.

### Bad owner or permissions on .ssh

You tried to run Algo and it quickly exits with an error about a bad owner or permissions:

```
fatal: [104.236.2.94]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Bad owner or permissions on /home/user/.ssh/config\r\n", "unreachable": true}
```

You need to reset the permissions on your `.ssh` directory. Run `chmod 700 /home/user/.ssh` and then `chmod 600 /home/user/.ssh/config`. You may need to repeat this for other files mentioned in the error message.

### The region you want is not available

Algo downloads the regions from the supported cloud providers (other than Microsoft Azure) listed in the first menu using APIs. If the region you want isn't available, the cloud provider has probably taken it offline for some reason. You should investigate further with your cloud provider.

If there's a specific region you want to install to in Microsoft Azure that isn't available, you should [file an issue](https://github.com/trailofbits/algo/issues/new), give us information about what region is missing, and we'll add it.

### AWS: SSH permission denied with an ECDSA key

You tried to deploy Algo to AWS and you received an error like this one:

```
TASK [Copy the algo ssh key to the local ssh directory] ************************
ok: [localhost -> localhost]

PLAY [Configure the server and install required software] **********************

TASK [Check the system] ********************************************************
fatal: [X.X.X.X]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added 'X.X.X.X' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey).\r\n", "unreachable": true}
```

You previously deployed Algo to a hosting provider other than AWS, and Algo created an ECDSA keypair at that time. You are now deploying to AWS, which [does not support ECDSA keys](https://aws.amazon.com/certificate-manager/faqs/) via their API. As a result, the deploy has failed.

In order to fix this issue, delete the `algo.pem` and `algo.pem.pub` keys from your `configs` directory and run the deploy again. If AWS is selected, Algo will now generate new RSA ssh keys which are compatible with the AWS API.

### AWS: "Deploy the template" fails with CREATE_FAILED

You tried to deploy Algo to AWS and you received an error like this one:

```
TASK [cloud-ec2 : Make a cloudformation template] ******************************
changed: [localhost]

TASK [cloud-ec2 : Deploy the template] *****************************************
fatal: [localhost]: FAILED!
=> {"changed": true, "events": ["StackEvent AWS::CloudFormation::Stack algopvpn1 ROLLBACK_COMPLETE", "StackEvent AWS::EC2::VPC VPC DELETE_COMPLETE", "StackEvent AWS::EC2::InternetGateway InternetGateway DELETE_COMPLETE", "StackEvent AWS::CloudFormation::Stack algopvpn1 ROLLBACK_IN_PROGRESS", "StackEvent AWS::EC2::VPC VPC CREATE_FAILED", "StackEvent AWS::EC2::VPC VPC CREATE_IN_PROGRESS", "StackEvent AWS::EC2::InternetGateway InternetGateway CREATE_FAILED", "StackEvent AWS::EC2::InternetGateway InternetGateway CREATE_IN_PROGRESS", "StackEvent AWS::CloudFormation::Stack algopvpn1 CREATE_IN_PROGRESS"], "failed": true, "output": "Problem with CREATE. Rollback complete", "stack_outputs": {}, "stack_resources": [{"last_updated_time": null, "logical_resource_id": "InternetGateway", "physical_resource_id": null, "resource_type": "AWS::EC2::InternetGateway", "status": "DELETE_COMPLETE", "status_reason": null}, {"last_updated_time": null, "logical_resource_id": "VPC", "physical_resource_id": null, "resource_type": "AWS::EC2::VPC", "status": "DELETE_COMPLETE", "status_reason": null}]} ``` Algo builds a [Cloudformation](https://aws.amazon.com/cloudformation/) template to deploy to AWS. You can find the entire contents of the Cloudformation template in `configs/algo.yml`. In order to troubleshoot this issue, login to the AWS console, go to the Cloudformation service, find the failed deployment, click the events tab, and find the corresponding "CREATE_FAILED" events. Note that all AWS resources created by Algo are tagged with `Environment => Algo` for easy identification. In many cases, failed deployments are the result of [service limits](http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) being reached, such as "CREATE_FAILED AWS::EC2::VPC VPC The maximum number of VPCs has been reached." In these cases, you must either [delete the VPCs from previous deployments](https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/working-with-vpcs.html#VPC_Deleting), or [contact AWS support](https://console.aws.amazon.com/support/home?region=us-east-1#/case/create?issueType=service-limit-increase&limitType=service-code-direct-connect) to increase the limits on your account. ### AWS: not authorized to perform: cloudformation:UpdateStack You tried to deploy Algo to AWS and you received an error like this one: ``` TASK [cloud-ec2 : Deploy the template] ***************************************** fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "User: arn:aws:iam::082851645362:user/algo is not authorized to perform: cloudformation:UpdateStack on resource: arn:aws:cloudformation:us-east-1:082851645362:stack/algo/*"} ``` This error indicates you already have Algo deployed to Cloudformation. Need to [delete it](cloud-amazon-ec2.md#cleanup) first, then re-deploy. ### DigitalOcean: error tagging resource You tried to deploy Algo to DigitalOcean and you received an error like this one: ``` TASK [cloud-digitalocean : Tag the droplet] ************************************ failed: [localhost] (item=staging) => {"failed": true, "item": "staging", "msg": "error tagging resource '73204383': param is missing or the value is empty: resources"} failed: [localhost] (item=dbserver) => {"failed": true, "item": "dbserver", "msg": "error tagging resource '73204383': param is missing or the value is empty: resources"} ``` The error is caused because Digital Ocean changed its API to treat the tag argument as a string instead of a number. 1. Download [doctl](https://github.com/digitalocean/doctl) 2. 
3. Once you are authorized on DO, you can run `doctl compute tag list` to see the list of tags
4. Run `doctl compute tag delete environment:algo --force` to delete the `environment:algo` tag
5. Finally, run `doctl compute tag list` to make sure that the tag has been deleted
6. Run Algo again as directed

### Azure: No such file or directory: '/home/username/.azure/azureProfile.json'

```
TASK [cloud-azure : Create AlgoVPN Server] *****************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: FileNotFoundError: [Errno 2] No such file or directory: '/home/ubuntu/.azure/azureProfile.json'
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):
  File \"/usr/local/lib/python3.6/dist-packages/azure/cli/core/_session.py\", line 39, in load
    with codecs_open(self.filename, 'r', encoding=self._encoding) as f:
  File \"/usr/lib/python3.6/codecs.py\", line 897, in open
    file = builtins.open(filename, mode, buffering)
FileNotFoundError: [Errno 2] No such file or directory: '/home/ubuntu/.azure/azureProfile.json'
", "module_stdout": "", "msg": "MODULE FAILURE
See stdout/stderr for the exact error", "rc": 1}
```

This happens when your machine is not authenticated with Azure. Follow this [guide](https://trailofbits.github.io/algo/cloud-azure.html) to configure your environment.

### Azure: Deployment Permissions Error

The AAD Application Registration (aka, the 'Service Principal', where you got the ClientId) needs permission to create the resources for the subscription. Otherwise, you will get the following error when you run the Ansible deploy script:

```
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Resource group create_or_update failed with status code: 403 and message: The client 'xxxxx' with object id 'THE_OBJECT_ID' does not have authorization to perform action 'Microsoft.Resources/subscriptions/resourcegroups/write' over scope '/subscriptions/THE_SUBSCRIPTION_ID/resourcegroups/algo' or the scope is invalid. If access was recently granted, please refresh your credentials."}
```

The solution for this is to open the Azure CLI and run the following command to grant the contributor role to the Service Principal:

```
az role assignment create --assignee-object-id THE_OBJECT_ID --scope subscriptions/THE_SUBSCRIPTION_ID --role contributor
```

After this is applied, the Service Principal has permission to create the resources and you can re-run `ansible-playbook main.yml` to complete the deployment.

### Windows: The value of parameter linuxConfiguration.ssh.publicKeys.keyData is invalid

You tried to deploy Algo from Windows and you received an error like this one:

```
TASK [cloud-azure : Create an instance]
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Error creating or updating virtual machine AlgoVPN - Azure Error: InvalidParameter\n Message: The value of parameter linuxConfiguration.ssh.publicKeys.keyData is invalid.\n Target: linuxConfiguration.ssh.publicKeys.keyData"}
```

This is related to [the chmod issue](https://github.com/Microsoft/WSL/issues/81) inside the /mnt directory, which is NTFS. The fix is to place Algo outside of the /mnt directory.
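For example, on WSL you might copy the repository off the NTFS-mounted tree and into your Linux home directory, then deploy from there. A minimal sketch; the paths below are illustrative:

```
# Copy Algo off the NTFS-mounted /mnt tree, where POSIX file
# permissions don't work, then deploy from the Linux filesystem.
cp -r /mnt/c/Users/yourname/algo ~/algo
cd ~/algo
./algo
```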
### Docker: Failed to connect to the host via ssh

You tried to deploy Algo from Docker and you received an error like this one:

```
Failed to connect to the host via ssh: Warning: Permanently added 'xxx.xxx.xxx.xxx' (ECDSA) to the list of known hosts.\r\n
Control socket connect(/root/.ansible/cp/6d9d22e981): Connection refused\r\n
Failed to connect to new control master\r\n
```

You need to add the following to the `ansible.cfg` in the repo root:

```
[ssh_connection]
control_path_dir=/dev/shm/ansible_control_path
```

### Error: Failed to create symlinks for deploying to localhost

You tried to run Algo and you received an error like this one:

```
TASK [Create a symlink if deploying to localhost] ********************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "gid": 1000, "group": "ubuntu", "mode": "0775", "msg": "the directory configs/localhost is not empty, refusing to convert it", "owner": "ubuntu", "path": "configs/localhost", "size": 4096, "state": "directory", "uid": 1000}
included: /home/ubuntu/algo-master/playbooks/rescue.yml for localhost

TASK [debug] *********************************************************************************************************
ok: [localhost] => {
    "fail_hint": [
        "Sorry, but something went wrong!",
        "Please check the troubleshooting guide.",
        "https://trailofbits.github.io/algo/troubleshooting.html"
    ]
}

TASK [Fail the installation] *****************************************************************************************
```

This error is usually encountered when using the local install option and `localhost` is provided in answer to this question, which is expecting an IP address or domain name of your server:

```
Enter the public IP address or domain name of your server: (IMPORTANT! This is used to verify the certificate)
[localhost]
:
```

You should remove the files in /etc/wireguard/ and configs/ as follows:

```shell
sudo rm -rf /etc/wireguard/*
rm -rf configs/*
```

And then immediately re-run `./algo` and provide a domain name or IP address in response to the question referenced above.

### Wireguard: Unable to find 'configs/...' in expected paths

You tried to run Algo and you received an error like this one:

```
TASK [wireguard : Generate public keys] ********************************************************************************
[WARNING]: Unable to find 'configs/xxx.xxx.xxx.xxx/wireguard//private/dan' in expected paths.
fatal: [localhost]: FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'file'. Error was a <class 'ansible.errors.AnsibleError'>, original message: could not locate file in lookup: configs/xxx.xxx.xxx.xxx/wireguard//private/dan"}
```

This error is usually hit when using the local install option on a server that isn't Ubuntu 18.04 or later. You should upgrade your server to Ubuntu 18.04 or later. If this doesn't work, try removing the files in /etc/wireguard/ and the configs directories as follows:

```shell
sudo rm -rf /etc/wireguard/*
rm -rf configs/*
```

Then immediately re-run `./algo`.

### Ubuntu Error: "unable to write 'random state'" when generating CA password

When running Algo, you received an error like this:

```
TASK [common : Generate password for the CA key] ***********************************************************************************************************
fatal: [xxx.xxx.xxx.xxx -> localhost]: FAILED!
=> {"changed": true, "cmd": "openssl rand -hex 16", "delta": "0:00:00.024776", "end": "2018-11-26 13:13:55.879921", "msg": "non-zero return code", "rc": 1, "start": "2018-11-26 13:13:55.855145", "stderr": "unable to write 'random state'", "stderr_lines": ["unable to write 'random state'"], "stdout": "xxxxxxxxxxxxxxxxxxx", "stdout_lines": ["xxxxxxxxxxxxxxxxxxx"]} ``` This happens when your user does not have ownership of the `$HOME/.rnd` file, which is a seed for randomization. To fix this issue, give your user ownership of the file with this command: ``` sudo chown $USER:$USER $HOME/.rnd ``` Now, run Algo again. ### Old Networking Firewall In Place You may see the following output when attemptint to run ./algo from your localhost: ``` TASK [Wait until SSH becomes ready...] ********************************************************************************************************************** fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 321, "msg": "Timeout when waiting for search string OpenSSH in xxx.xxx.xxx.xxx:4160"} included: /home/<username>/algo/algo/playbooks/rescue.yml for localhost TASK [debug] ************************************************************************************************************************************************ ok: [localhost] => { "fail_hint": [ "Sorry, but something went wrong!", "Please check the troubleshooting guide.", "https://trailofbits.github.io/algo/troubleshooting.html" ] } ``` If you see this error then one possible explanation is that you have a previous firewall configured in your cloud hosting provider which needs to be either updated or ideally removed. Removing this can often fix this issue. ## Connection Problems Look here if you deployed an Algo server but now have a problem connecting to it with a client. ### I'm blocked or get CAPTCHAs when I access certain websites This is normal. When you deploy a Algo to a new cloud server, the address you are given may have been used before. In some cases, a malicious individual may have attacked others with that address and had it added to "IP reputation" feeds or simply a blacklist. In order to regain the trust for that address, you may be asked to enter CAPTCHAs to prove that you are a human, and not a Denial of Service (DoS) bot trying to attack others. This happens most frequently with Google. You can try entering the CAPTCHAs or you can try redeploying your Algo server to a new IP to resolve this issue. In some cases, a website will block any visitors accessing their site through a cloud hosting provider due to previous, frequent DoS attacks originating from them. In these cases, there is not much you can do except deploy Algo to your own server or another IP that the website has not outright blocked. ### I want to change the list of trusted Wifi networks on my Apple device This setting is enforced on your client device via the Apple profile you put on it. You can edit the profile with new settings, then load it on your device to change the settings. You can use the [Apple Configurator](https://itunes.apple.com/us/app/apple-configurator-2/id1037126344?mt=12) to edit and resave the profile. Advanced users can edit the file directly in a text editor. Use the [Configuration Profile Reference](https://developer.apple.com/library/content/featuredarticles/iPhoneConfigurationProfileRef/Introduction/Introduction.html) for information about the file format and other available options. 
If you're not comfortable editing the profile, you can also simply redeploy a new Algo server with different settings to receive a new auto-generated profile.

### Error: "The VPN Service payload could not be installed."

You tried to install the Apple profile on one of your devices and you received an error stating `The "VPN Service" payload could not be installed. The VPN service could not be created.` Client support for Algo VPN is limited to modern operating systems, e.g. macOS 10.11+, iOS 9+. Please upgrade your operating system and try again.

### Little Snitch is broken when connected to the VPN

Little Snitch is not compatible with IPsec VPNs due to a known bug in macOS, and there is no workaround. The Little Snitch "filter" does not receive incoming packets from IPsec VPNs and, therefore, cannot evaluate any rules over them. The Little Snitch developers have filed a bug report with Apple but there has been no response. There is nothing they or Algo can do to resolve this problem on their own. You can read more about this problem in [issue #134](https://github.com/trailofbits/algo/issues/134).

### I can't get my router to connect to the Algo server

In order to connect to the Algo VPN server, your router must support IKEv2, ECC certificate-based authentication, and the cipher suite we use. See the ipsec.conf files we generate in the `configs` folder for more information. Note that we do not officially support routers as clients for Algo VPN at this time, though patches and documentation for them are welcome (for example, see open issues for [Ubiquiti](https://github.com/trailofbits/algo/issues/307) and [pfSense](https://github.com/trailofbits/algo/issues/292)).

### I can't get Network Manager to connect to the Algo server

You're trying to connect Ubuntu or Debian to the Algo server through the Network Manager GUI but it's not working. Many versions of Ubuntu and some older versions of Debian bundle a [broken version of Network Manager](https://github.com/trailofbits/algo/issues/263) without support for modern standards or the strongSwan server. You must upgrade to Ubuntu 17.04 or Debian 9 Stretch, each of which contains the required minimum version of Network Manager.

### Various websites appear to be offline through the VPN

This issue appears occasionally due to issues with [MTU](https://en.wikipedia.org/wiki/Maximum_transmission_unit) size. Different networks may require the MTU to be within a specific range to correctly pass traffic. We made an effort to set the MTU to the most conservative, most compatible size by default, but problems may still occur.

If either your Internet service provider or your chosen cloud service provider uses an MTU smaller than the normal value of 1500, you can use the `reduce_mtu` option in the file `config.cfg` to correspondingly reduce the size of the VPN tunnels created by Algo. Algo will attempt to automatically set `reduce_mtu` based on the MTU found on the server at the time of deployment, but it cannot detect if the MTU is smaller on the client side of the connection.

If you change `reduce_mtu` you'll need to deploy a new Algo VPN.

To determine the value for `reduce_mtu` you should examine the MTU on your Algo VPN server's primary network interface (see below). You might also want to run tests using `ping`, both on a local client *when not connected to the VPN* and also on your Algo VPN server (see below). Then take the smallest MTU you find (local or server side), subtract it from 1500, and use that for `reduce_mtu`.
An exception to this is if you find the smallest MTU is your local MTU at 1492, typical for PPPoE connections; in that case no MTU reduction should be necessary.

#### Check the MTU on the Algo VPN server

To check the MTU on your server, SSH into it, run the command `ifconfig`, and look for the MTU of the main network interface. For example:

```
ens4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1460
```

The MTU shown here is 1460 instead of 1500. Therefore set `reduce_mtu: 40` in `config.cfg`. Algo should do this automatically.

#### Determine the MTU using `ping`

When using `ping` you increase the payload size with the "Don't Fragment" option set until it fails. The largest payload size that works, plus the `ping` overhead of 28 bytes, is the MTU of the connection.

##### Example: Test on your Algo VPN server (Ubuntu)

```
$ ping -4 -s 1432 -c 1 -M do github.com
PING github.com (192.30.253.112) 1432(1460) bytes of data.
1440 bytes from lb-192-30-253-112-iad.github.com (192.30.253.112): icmp_seq=1 ttl=53 time=13.1 ms

--- github.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 13.135/13.135/13.135/0.000 ms

$ ping -4 -s 1433 -c 1 -M do github.com
PING github.com (192.30.253.113) 1433(1461) bytes of data.
ping: local error: Message too long, mtu=1460

--- github.com ping statistics ---
1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
```

In this example the largest payload size that works is 1432. The `ping` overhead is 28, so the MTU is 1432 + 28 = 1460, which is 40 lower than the normal MTU of 1500. Therefore set `reduce_mtu: 40` in `config.cfg`.

##### Example: Test on a macOS client *not connected to your Algo VPN*

```
$ ping -c 1 -D -s 1464 github.com
PING github.com (192.30.253.113): 1464 data bytes
1472 bytes from 192.30.253.113: icmp_seq=0 ttl=50 time=169.606 ms

--- github.com ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 169.606/169.606/169.606/0.000 ms

$ ping -c 1 -D -s 1465 github.com
PING github.com (192.30.253.113): 1465 data bytes

--- github.com ping statistics ---
1 packets transmitted, 0 packets received, 100.0% packet loss
```

In this example the largest payload size that works is 1464. The `ping` overhead is 28, so the MTU is 1464 + 28 = 1492, which is typical for a PPPoE Internet connection and does not require an MTU adjustment. Therefore use the default of `reduce_mtu: 0` in `config.cfg`.

#### Change the client MTU without redeploying the Algo VPN

If you don't wish to deploy a new Algo VPN (which is required to incorporate a change to `reduce_mtu`) you can change the client-side MTU of WireGuard clients and Linux IPsec clients without needing to make changes to your Algo VPN.

For WireGuard on Linux, or macOS (when installed with `brew`), you can specify the MTU yourself in the client configuration file (typically `wg0.conf`). Refer to the documentation (see `man wg-quick`). For WireGuard on iOS and Android you can change the MTU in the app.

For IPsec on Linux you can change the MTU of your network interface to match the required MTU. For example:

```
sudo ifconfig eth0 mtu 1440
```

To make the change take effect after a reboot, on Ubuntu 18.04 and later edit the relevant file in the `/etc/netplan` directory (see `man netplan`).
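For example, a minimal netplan sketch that pins the MTU on boot might look like the following; the file name, interface name, and MTU value are illustrative and should match what you find on your system:

```
# /etc/netplan/99-mtu.yaml (illustrative)
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      mtu: 1440
```

Run `sudo netplan apply` afterwards to activate the change.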
#### Note for WireGuard iOS users

As of WireGuard for iOS 0.0.20190107 the default MTU is 1280, a conservative value intended to allow mobile devices to continue to work as they switch between different networks which might have smaller than normal MTUs. In order to use this default MTU, review the configuration in the WireGuard app and remove any value for MTU that might have been added automatically by Algo.

### Clients appear stuck in a reconnection loop

If you're using 'Connect on Demand' on iOS and your client device appears stuck in a reconnection loop after switching from WiFi to LTE or vice versa, you may want to try disabling DoS protection in strongSwan.

The configuration value can be found in `/etc/strongswan.d/charon.conf`. After making the change you must reload or restart ipsec.

Example command:

```
sed -i -e 's/#*.dos_protection = yes/dos_protection = no/' /etc/strongswan.d/charon.conf && ipsec restart
```

### WireGuard: Clients can connect on Wifi but not LTE

Certain cloud providers (like AWS Lightsail) don't assign an IPv6 address to your server, but certain cellular carriers (e.g. T-Mobile in the United States, [EE](https://community.ee.co.uk/t5/4G-and-mobile-data/IPv4-VPN-Connectivity/td-p/757881) in the United Kingdom) operate an IPv6-only network. This leads to the WireGuard app not being able to make a connection when transitioning to cell service.

Go to the WireGuard app on the device when you're having problems with cell connectivity and select "Export log file" or a similar option. If you see a long string of error messages like `Failed to send data packet write udp6 [::]:49727->[2607:7700:0:2a:0:1:354:40ae]:51820: sendto: no route to host` then you might be having this problem.

Manually disconnecting and then reconnecting should restore your connection. To solve this, you need to either "force IPv4 connection" if available on your phone, or install an IPv4 APN, which might be available from your carrier tech support. T-Mobile's is available [for iOS here under "iOS IPv4/IPv6 fix"](https://www.reddit.com/r/tmobile/wiki/index), and [here is a walkthrough for Android phones](https://www.myopenrouter.com/article/vpn-connections-not-working-t-mobile-heres-how-fix).

### IPsec: Difficulty connecting through router

Some routers treat IPsec connections specially because older versions of IPsec did not work properly through [NAT](https://en.wikipedia.org/wiki/Network_address_translation). If you're having problems connecting to your AlgoVPN through a specific router using IPsec you might need to change some settings on the router.

#### Change the "VPN Passthrough" settings

If your router has a setting called something like "VPN Passthrough" or "IPsec Passthrough" try changing the setting to a different value.

#### Change the default pfSense NAT rules

If your router runs [pfSense](https://www.pfsense.org) and a single IPsec client can connect but you have issues when using multiple clients, you'll need to change the **Outbound NAT** mode to **Manual Outbound NAT** and disable the rule that specifies **Static Port** for IKE (UDP port 500). See [Outbound NAT](https://docs.netgate.com/pfsense/en/latest/book/nat/outbound-nat.html#outbound-nat) in the [pfSense Book](https://docs.netgate.com/pfsense/en/latest/book).

## I have a problem not covered here

If you have an issue that you cannot solve with the guidance here, [create a new discussion](https://github.com/trailofbits/algo/discussions) and ask for help.
If you think you found a new issue in Algo, [file an issue](https://github.com/trailofbits/algo/issues/new).
Shell Script
algo/files/cloud-init/base.sh
#!/bin/sh set -eux # shellcheck disable=SC2230 which sudo || until \ apt-get update -y && \ apt-get install sudo -yf --install-suggests; do sleep 3 done getent passwd algo || useradd -m -d /home/algo -s /bin/bash -G adm -p '!' algo (umask 337 && echo "algo ALL=(ALL) NOPASSWD:ALL" >/etc/sudoers.d/10-algo-user) cat <<EOF >/etc/ssh/sshd_config {{ lookup('template', 'files/cloud-init/sshd_config') }} EOF test -d /home/algo/.ssh || sudo -u algo mkdir -m 0700 /home/algo/.ssh echo "{{ lookup('file', '{{ SSH_keys.public }}') }}" | (sudo -u algo tee /home/algo/.ssh/authorized_keys && chmod 0600 /home/algo/.ssh/authorized_keys) ufw --force reset # shellcheck disable=SC2015 dpkg -l sshguard && until apt-get remove -y --purge sshguard; do sleep 3 done || true systemctl restart sshd.service
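Note that the `{{ lookup(...) }}` expressions in this script are not shell syntax: they are Jinja2 lookups that Ansible resolves when it renders the file. A minimal sketch of how a provisioning task might render it into user data; the task and variable names here are illustrative, not Algo's actual task:

```
- name: Render the cloud-init user data from the template
  set_fact:
    cloud_init_user_data: "{{ lookup('template', 'files/cloud-init/base.sh') }}"
```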
YAML
algo/files/cloud-init/base.yml
#cloud-config output: {all: '| tee -a /var/log/cloud-init-output.log'} package_update: true package_upgrade: true packages: - sudo users: - default - name: algo homedir: /home/algo sudo: ALL=(ALL) NOPASSWD:ALL groups: adm,netdev shell: /bin/bash lock_passwd: true ssh_authorized_keys: - "{{ lookup('file', '{{ SSH_keys.public }}') }}" write_files: - path: /etc/ssh/sshd_config content: | {{ lookup('template', 'files/cloud-init/sshd_config') | indent(width=6) }} runcmd: - set -x - ufw --force reset - sudo apt-get remove -y --purge sshguard || true - systemctl restart sshd.service
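If you modify this file, it can help to validate a rendered copy (with the Jinja2 lookups resolved) using cloud-init's built-in schema checker before deploying. A sketch, assuming a rendered copy has been saved locally; older cloud-init releases expose this as `cloud-init devel schema` instead:

```
cloud-init schema --config-file /tmp/rendered-base.yml
```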
algo/files/cloud-init/sshd_config
Port {{ ssh_port }} AllowGroups algo PermitRootLogin no PasswordAuthentication no ChallengeResponseAuthentication no UsePAM yes X11Forwarding yes PrintMotd no AcceptEnv LANG LC_* Subsystem sftp /usr/lib/openssh/sftp-server
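Because a broken sshd_config can lock you out of a server, it is worth checking a rendered copy of this template in sshd's test mode before restarting the daemon. A sketch; the path is illustrative:

```
# -t validates the configuration; -f points at a file other than /etc/ssh/sshd_config
sudo sshd -t -f /tmp/rendered-sshd_config
```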
Python
algo/library/digital_ocean_floating_ip.py
#!/usr/bin/python # -*- coding: utf-8 -*- # # (c) 2015, Patrick F. Marques <patrickfmarques@gmail.com> # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} DOCUMENTATION = ''' --- module: digital_ocean_floating_ip short_description: Manage DigitalOcean Floating IPs description: - Create/delete/assign a floating IP. version_added: "2.4" author: "Patrick Marques (@pmarques)" options: state: description: - Indicate desired state of the target. default: present choices: ['present', 'absent'] ip: description: - Public IP address of the Floating IP. Used to remove an IP region: description: - The region that the Floating IP is reserved to. droplet_id: description: - The Droplet that the Floating IP has been assigned to. oauth_token: description: - DigitalOcean OAuth token. required: true notes: - Version 2 of DigitalOcean API is used. requirements: - "python >= 2.6" ''' EXAMPLES = ''' - name: "Create a Floating IP in region lon1" digital_ocean_floating_ip: state: present region: lon1 - name: "Create a Floating IP assigned to Droplet ID 123456" digital_ocean_floating_ip: state: present droplet_id: 123456 - name: "Delete a Floating IP with ip 1.2.3.4" digital_ocean_floating_ip: state: absent ip: "1.2.3.4" ''' RETURN = ''' # Digital Ocean API info https://developers.digitalocean.com/documentation/v2/#floating-ips data: description: a DigitalOcean Floating IP resource returned: success and no resource constraint type: dict sample: { "action": { "id": 68212728, "status": "in-progress", "type": "assign_ip", "started_at": "2015-10-15T17:45:44Z", "completed_at": null, "resource_id": 758603823, "resource_type": "floating_ip", "region": { "name": "New York 3", "slug": "nyc3", "sizes": [ "512mb", "1gb", "2gb", "4gb", "8gb", "16gb", "32gb", "48gb", "64gb" ], "features": [ "private_networking", "backups", "ipv6", "metadata" ], "available": true }, "region_slug": "nyc3" } } ''' import json import time from ansible.module_utils.basic import AnsibleModule from ansible.module_utils.basic import env_fallback from ansible.module_utils.urls import fetch_url from ansible.module_utils.digital_ocean import DigitalOceanHelper class Response(object): def __init__(self, resp, info): self.body = None if resp: self.body = resp.read() self.info = info @property def json(self): if not self.body: if "body" in self.info: return json.loads(self.info["body"]) return None try: return json.loads(self.body) except ValueError: return None @property def status_code(self): return self.info["status"] def wait_action(module, rest, ip, action_id, timeout=10): end_time = time.time() + 10 while time.time() < end_time: response = rest.get('floating_ips/{0}/actions/{1}'.format(ip, action_id)) status_code = response.status_code status = response.json['action']['status'] # TODO: check status_code == 200? 
        if status == 'completed':
            return True
        elif status == 'errored':
            module.fail_json(msg='Floating ip action error [ip: {0}: action: {1}]'.format(
                ip, action_id), data=response.json)

    module.fail_json(msg='Floating ip action timeout [ip: {0}: action: {1}]'.format(
        ip, action_id), data=response.json)


def core(module):
    api_token = module.params['oauth_token']
    state = module.params['state']
    ip = module.params['ip']
    droplet_id = module.params['droplet_id']

    rest = DigitalOceanHelper(module)

    if state == 'present':
        if droplet_id is not None and module.params['ip'] is not None:
            # Let's try to associate the ip to the specified droplet
            associate_floating_ips(module, rest)
        else:
            create_floating_ips(module, rest)

    elif state == 'absent':
        response = rest.delete("floating_ips/{0}".format(ip))
        status_code = response.status_code
        json_data = response.json
        if status_code == 204:
            module.exit_json(changed=True)
        elif status_code == 404:
            module.exit_json(changed=False)
        else:
            module.exit_json(changed=False, data=json_data)


def get_floating_ip_details(module, rest):
    ip = module.params['ip']

    response = rest.get("floating_ips/{0}".format(ip))
    status_code = response.status_code
    json_data = response.json
    if status_code == 200:
        return json_data['floating_ip']
    else:
        module.fail_json(msg="Error fetching floating ip [{0}: {1}]".format(
            status_code, json_data["message"]), region=module.params['region'])


def assign_floating_id_to_droplet(module, rest):
    ip = module.params['ip']

    payload = {
        "type": "assign",
        "droplet_id": module.params['droplet_id'],
    }

    response = rest.post("floating_ips/{0}/actions".format(ip), data=payload)
    status_code = response.status_code
    json_data = response.json
    if status_code == 201:
        wait_action(module, rest, ip, json_data['action']['id'])
        module.exit_json(changed=True, data=json_data)
    else:
        module.fail_json(msg="Error assigning floating ip [{0}: {1}]".format(
            status_code, json_data["message"]), region=module.params['region'])


def associate_floating_ips(module, rest):
    floating_ip = get_floating_ip_details(module, rest)
    droplet = floating_ip['droplet']

    # TODO: If already assigned to a droplet verify if it is one of the specified as valid
    if droplet is not None and droplet['id'] == module.params['droplet_id']:
        module.exit_json(changed=False)
    else:
        assign_floating_id_to_droplet(module, rest)


def create_floating_ips(module, rest):
    payload = {}
    floating_ip_data = None

    if module.params['region'] is not None:
        payload["region"] = module.params['region']

    if module.params['droplet_id'] is not None:
        payload["droplet_id"] = module.params['droplet_id']

    floating_ips = rest.get_paginated_data(base_url='floating_ips?', data_key_name='floating_ips')

    for floating_ip in floating_ips:
        if floating_ip['droplet'] and floating_ip['droplet']['id'] == module.params['droplet_id']:
            floating_ip_data = {'floating_ip': floating_ip}

    if floating_ip_data:
        module.exit_json(changed=False, data=floating_ip_data)
    else:
        response = rest.post("floating_ips", data=payload)
        status_code = response.status_code
        json_data = response.json
        if status_code == 202:
            module.exit_json(changed=True, data=json_data)
        else:
            module.fail_json(msg="Error creating floating ip [{0}: {1}]".format(
                status_code, json_data["message"]), region=module.params['region'])


def main():
    module = AnsibleModule(
        argument_spec=dict(
            state=dict(choices=['present', 'absent'], default='present'),
            ip=dict(aliases=['id'], required=False),
            region=dict(required=False),
            droplet_id=dict(required=False, type='int'),
            oauth_token=dict(
                no_log=True,
                # Support environment variable for DigitalOcean OAuth Token
                fallback=(env_fallback, ['DO_API_TOKEN', 'DO_API_KEY', 'DO_OAUTH_TOKEN']),
                required=True,
            ),
            validate_certs=dict(type='bool', default=True),
            timeout=dict(type='int', default=30),
        ),
        required_if=[
            ('state', 'absent', ['ip'])
        ],
        mutually_exclusive=[
            ['region', 'droplet_id']
        ],
    )

    core(module)


if __name__ == '__main__':
    main()
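Putting the EXAMPLES above together, a task that assigns an existing Floating IP to a droplet might look like the following sketch. The `do_token` variable is illustrative; the module also reads `DO_API_TOKEN` and friends from the environment via the `env_fallback` above:

```
- name: Assign a Floating IP to an existing droplet
  digital_ocean_floating_ip:
    state: present
    ip: "1.2.3.4"
    droplet_id: 123456
    oauth_token: "{{ do_token }}"
  register: floating_ip
```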
Python
algo/library/gcp_compute_location_info.py
#!/usr/bin/python # -*- coding: utf-8 -*- from __future__ import absolute_import, division, print_function __metaclass__ = type ################################################################################ # Documentation ################################################################################ ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ["preview"], 'supported_by': 'community'} ################################################################################ # Imports ################################################################################ from ansible.module_utils.gcp_utils import navigate_hash, GcpSession, GcpModule, GcpRequest import json ################################################################################ # Main ################################################################################ def main(): module = GcpModule(argument_spec=dict(filters=dict(type='list', elements='str'), scope=dict(required=True, type='str'))) if module._name == 'gcp_compute_image_facts': module.deprecate("The 'gcp_compute_image_facts' module has been renamed to 'gcp_compute_regions_info'", version='2.13') if not module.params['scopes']: module.params['scopes'] = ['https://www.googleapis.com/auth/compute'] items = fetch_list(module, collection(module), query_options(module.params['filters'])) if items.get('items'): items = items.get('items') else: items = [] return_value = {'resources': items} module.exit_json(**return_value) def collection(module): return "https://www.googleapis.com/compute/v1/projects/{project}/{scope}".format(**module.params) def fetch_list(module, link, query): auth = GcpSession(module, 'compute') response = auth.get(link, params={'filter': query}) return return_if_object(module, response) def query_options(filters): if not filters: return '' if len(filters) == 1: return filters[0] else: queries = [] for f in filters: # For multiple queries, all queries should have () if f[0] != '(' and f[-1] != ')': queries.append("(%s)" % ''.join(f)) else: queries.append(f) return ' '.join(queries) def return_if_object(module, response): # If not found, return nothing. if response.status_code == 404: return None # If no content, return nothing. if response.status_code == 204: return None try: module.raise_for_status(response) result = response.json() except getattr(json.decoder, 'JSONDecodeError', ValueError) as inst: module.fail_json(msg="Invalid JSON response with error: %s" % inst) if navigate_hash(result, ['error', 'errors']): module.fail_json(msg=navigate_hash(result, ['error', 'errors'])) return result if __name__ == "__main__": main()
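A sketch of invoking this module to list the available regions. The project and credential values are illustrative; the authentication parameters come from the shared `GcpModule` argument spec rather than this file:

```
- name: List the available GCE regions
  gcp_compute_location_info:
    scope: regions
    project: my-gcp-project
    auth_kind: serviceaccount
    service_account_file: "{{ credentials_file_path }}"
  register: gcp_regions
```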
Python
algo/library/lightsail_region_facts.py
#!/usr/bin/python # -*- coding: utf-8 -*- # Copyright: Ansible Project # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type ANSIBLE_METADATA = {'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community'} DOCUMENTATION = ''' --- module: lightsail_region_facts short_description: Gather facts about AWS Lightsail regions. description: - Gather facts about AWS Lightsail regions. version_added: "2.5.3" author: "Jack Ivanov (@jackivanov)" options: requirements: - "python >= 2.6" - boto3 extends_documentation_fragment: - aws - ec2 ''' EXAMPLES = ''' # Gather facts about all regions - lightsail_region_facts: ''' RETURN = ''' regions: returned: on success description: > Each element consists of a dict with all the information related to that region. type: list sample: "[{ "availabilityZones": [], "continentCode": "NA", "description": "This region is recommended to serve users in the eastern United States", "displayName": "Virginia", "name": "us-east-1" }]" ''' import time import traceback try: import botocore HAS_BOTOCORE = True except ImportError: HAS_BOTOCORE = False try: import boto3 except ImportError: # will be caught by imported HAS_BOTO3 pass from ansible.module_utils.basic import AnsibleModule from ansible.module_utils.ec2 import (ec2_argument_spec, get_aws_connection_info, boto3_conn, HAS_BOTO3, camel_dict_to_snake_dict) def main(): argument_spec = ec2_argument_spec() module = AnsibleModule(argument_spec=argument_spec) if not HAS_BOTO3: module.fail_json(msg='Python module "boto3" is missing, please install it') if not HAS_BOTOCORE: module.fail_json(msg='Python module "botocore" is missing, please install it') try: region, ec2_url, aws_connect_kwargs = get_aws_connection_info(module, boto3=True) client = None try: client = boto3_conn(module, conn_type='client', resource='lightsail', region=region, endpoint=ec2_url, **aws_connect_kwargs) except (botocore.exceptions.ClientError, botocore.exceptions.ValidationError) as e: module.fail_json(msg='Failed while connecting to the lightsail service: %s' % e, exception=traceback.format_exc()) response = client.get_regions( includeAvailabilityZones=False ) module.exit_json(changed=False, data=response) except (botocore.exceptions.ClientError, Exception) as e: module.fail_json(msg=str(e), exception=traceback.format_exc()) if __name__ == '__main__': main()
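Building on the EXAMPLES block above, a sketch that registers the response and prints the region list; no parameters are required here because boto3 also honors the standard AWS credential environment variables:

```
- name: Gather facts about AWS Lightsail regions
  lightsail_region_facts:
  register: lightsail_regions

- name: Show the available regions
  debug:
    var: lightsail_regions.data.regions
```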
Python
algo/library/linode_stackscript_v4.py
#!/usr/bin/python # -*- coding: utf-8 -*- from __future__ import absolute_import, division, print_function __metaclass__ = type import traceback from ansible.module_utils.basic import AnsibleModule, env_fallback, missing_required_lib from ansible.module_utils.linode import get_user_agent LINODE_IMP_ERR = None try: from linode_api4 import StackScript, LinodeClient HAS_LINODE_DEPENDENCY = True except ImportError: LINODE_IMP_ERR = traceback.format_exc() HAS_LINODE_DEPENDENCY = False def create_stackscript(module, client, **kwargs): """Creates a stackscript and handles return format.""" try: response = client.linode.stackscript_create(**kwargs) return response._raw_json except Exception as exception: module.fail_json(msg='Unable to query the Linode API. Saw: %s' % exception) def stackscript_available(module, client): """Try to retrieve a stackscript.""" try: label = module.params['label'] desc = module.params['description'] result = client.linode.stackscripts(StackScript.label == label, StackScript.description == desc, mine_only=True ) return result[0] except IndexError: return None except Exception as exception: module.fail_json(msg='Unable to query the Linode API. Saw: %s' % exception) def initialise_module(): """Initialise the module parameter specification.""" return AnsibleModule( argument_spec=dict( label=dict(type='str', required=True), state=dict( type='str', required=True, choices=['present', 'absent'] ), access_token=dict( type='str', required=True, no_log=True, fallback=(env_fallback, ['LINODE_ACCESS_TOKEN']), ), script=dict(type='str', required=True), images=dict(type='list', required=True), description=dict(type='str', required=False), public=dict(type='bool', required=False, default=False), ), supports_check_mode=False ) def build_client(module): """Build a LinodeClient.""" return LinodeClient( module.params['access_token'], user_agent=get_user_agent('linode_v4_module') ) def main(): """Module entrypoint.""" module = initialise_module() if not HAS_LINODE_DEPENDENCY: module.fail_json(msg=missing_required_lib('linode-api4'), exception=LINODE_IMP_ERR) client = build_client(module) stackscript = stackscript_available(module, client) if module.params['state'] == 'present' and stackscript is not None: module.exit_json(changed=False, stackscript=stackscript._raw_json) elif module.params['state'] == 'present' and stackscript is None: stackscript_json = create_stackscript( module, client, label=module.params['label'], script=module.params['script'], images=module.params['images'], desc=module.params['description'], public=module.params['public'], ) module.exit_json(changed=True, stackscript=stackscript_json) elif module.params['state'] == 'absent' and stackscript is not None: stackscript.delete() module.exit_json(changed=True, stackscript=stackscript._raw_json) elif module.params['state'] == 'absent' and stackscript is None: module.exit_json(changed=False, stackscript={}) if __name__ == "__main__": main()
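A sketch of a task that ensures a StackScript exists before instances are created from it; the label, image, script body, and token variable are illustrative:

```
- name: Ensure the deployment StackScript exists
  linode_stackscript_v4:
    access_token: "{{ linode_token }}"
    label: deployment-script
    state: present
    description: Base server configuration
    images:
      - linode/ubuntu22.04
    script: |
      #!/bin/bash
      echo "configured" > /root/.provisioned
  register: stackscript
```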
Python
algo/library/linode_v4.py
#!/usr/bin/python # -*- coding: utf-8 -*- # Copyright (c) 2017 Ansible Project # GNU General Public License v3.0+ # (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type import traceback from ansible.module_utils.basic import AnsibleModule, env_fallback, missing_required_lib from ansible.module_utils.linode import get_user_agent LINODE_IMP_ERR = None try: from linode_api4 import Instance, LinodeClient HAS_LINODE_DEPENDENCY = True except ImportError: LINODE_IMP_ERR = traceback.format_exc() HAS_LINODE_DEPENDENCY = False def create_linode(module, client, **kwargs): """Creates a Linode instance and handles return format.""" if kwargs['root_pass'] is None: kwargs.pop('root_pass') try: response = client.linode.instance_create(**kwargs) except Exception as exception: module.fail_json(msg='Unable to query the Linode API. Saw: %s' % exception) try: if isinstance(response, tuple): instance, root_pass = response instance_json = instance._raw_json instance_json.update({'root_pass': root_pass}) return instance_json else: return response._raw_json except TypeError: module.fail_json(msg='Unable to parse Linode instance creation' ' response. Please raise a bug against this' ' module on https://github.com/ansible/ansible/issues' ) def maybe_instance_from_label(module, client): """Try to retrieve an instance based on a label.""" try: label = module.params['label'] result = client.linode.instances(Instance.label == label) return result[0] except IndexError: return None except Exception as exception: module.fail_json(msg='Unable to query the Linode API. Saw: %s' % exception) def initialise_module(): """Initialise the module parameter specification.""" return AnsibleModule( argument_spec=dict( label=dict(type='str', required=True), state=dict( type='str', required=True, choices=['present', 'absent'] ), access_token=dict( type='str', required=True, no_log=True, fallback=(env_fallback, ['LINODE_ACCESS_TOKEN']), ), authorized_keys=dict(type='list', required=False), group=dict(type='str', required=False), image=dict(type='str', required=False), region=dict(type='str', required=False), root_pass=dict(type='str', required=False, no_log=True), tags=dict(type='list', required=False), type=dict(type='str', required=False), stackscript_id=dict(type='int', required=False), ), supports_check_mode=False, required_one_of=( ['state', 'label'], ), required_together=( ['region', 'image', 'type'], ) ) def build_client(module): """Build a LinodeClient.""" return LinodeClient( module.params['access_token'], user_agent=get_user_agent('linode_v4_module') ) def main(): """Module entrypoint.""" module = initialise_module() if not HAS_LINODE_DEPENDENCY: module.fail_json(msg=missing_required_lib('linode-api4'), exception=LINODE_IMP_ERR) client = build_client(module) instance = maybe_instance_from_label(module, client) if module.params['state'] == 'present' and instance is not None: module.exit_json(changed=False, instance=instance._raw_json) elif module.params['state'] == 'present' and instance is None: instance_json = create_linode( module, client, authorized_keys=module.params['authorized_keys'], group=module.params['group'], image=module.params['image'], label=module.params['label'], region=module.params['region'], root_pass=module.params['root_pass'], tags=module.params['tags'], ltype=module.params['type'], stackscript_id=module.params['stackscript_id'], ) module.exit_json(changed=True, instance=instance_json) elif module.params['state'] == 
'absent' and instance is not None: instance.delete() module.exit_json(changed=True, instance=instance._raw_json) elif module.params['state'] == 'absent' and instance is None: module.exit_json(changed=False, instance={}) if __name__ == "__main__": main()
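A sketch of creating an instance with this module, optionally seeded by the StackScript registered above; the region, type, image, and key path are illustrative. Note that `region`, `image`, and `type` must be supplied together, per the `required_together` constraint in the argument spec:

```
- name: Create a Linode instance
  linode_v4:
    access_token: "{{ linode_token }}"
    label: my-server
    state: present
    region: us-east
    image: linode/ubuntu22.04
    type: g6-nanode-1
    authorized_keys:
      - "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
    stackscript_id: "{{ stackscript.stackscript.id }}"
  register: server
```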
Python
algo/library/scaleway_compute.py
#!/usr/bin/python # # Scaleway Compute management module # # Copyright (C) 2018 Online SAS. # https://www.scaleway.com # # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) from __future__ import absolute_import, division, print_function __metaclass__ = type ANSIBLE_METADATA = { 'metadata_version': '1.1', 'status': ['preview'], 'supported_by': 'community' } DOCUMENTATION = ''' --- module: scaleway_compute short_description: Scaleway compute management module version_added: "2.6" author: Remy Leone (@sieben) description: - "This module manages compute instances on Scaleway." extends_documentation_fragment: scaleway options: public_ip: description: - Manage public IP on a Scaleway server - Could be Scaleway IP address UUID - C(dynamic) Means that IP is destroyed at the same time the host is destroyed - C(absent) Means no public IP at all version_added: '2.8' default: absent enable_ipv6: description: - Enable public IPv6 connectivity on the instance default: false type: bool boot_type: description: - Boot method default: bootscript choices: - bootscript - local image: description: - Image identifier used to start the instance with required: true name: description: - Name of the instance organization: description: - Organization identifier required: true state: description: - Indicate desired state of the instance. default: present choices: - present - absent - running - restarted - stopped tags: description: - List of tags to apply to the instance (5 max) required: false default: [] region: description: - Scaleway compute zone required: true choices: - ams1 - EMEA-NL-EVS - par1 - EMEA-FR-PAR1 commercial_type: description: - Commercial name of the compute node required: true wait: description: - Wait for the instance to reach its desired state before returning. 
type: bool default: 'no' wait_timeout: description: - Time to wait for the server to reach the expected state required: false default: 300 wait_sleep_time: description: - Time to wait before every attempt to check the state of the server required: false default: 3 security_group: description: - Security group unique identifier - If no value provided, the default security group or current security group will be used required: false version_added: "2.8" ''' EXAMPLES = ''' - name: Create a server scaleway_compute: name: foobar state: present image: 89ee4018-f8c3-4dc4-a6b5-bca14f985ebe organization: 951df375-e094-4d26-97c1-ba548eeb9c42 region: ams1 commercial_type: VC1S tags: - test - www - name: Create a server attached to a security group scaleway_compute: name: foobar state: present image: 89ee4018-f8c3-4dc4-a6b5-bca14f985ebe organization: 951df375-e094-4d26-97c1-ba548eeb9c42 region: ams1 commercial_type: VC1S security_group: 4a31b633-118e-4900-bd52-facf1085fc8d tags: - test - www - name: Destroy it right after scaleway_compute: name: foobar state: absent image: 89ee4018-f8c3-4dc4-a6b5-bca14f985ebe organization: 951df375-e094-4d26-97c1-ba548eeb9c42 region: ams1 commercial_type: VC1S ''' RETURN = ''' ''' import datetime import time from ansible.module_utils.basic import AnsibleModule from ansible.module_utils.six.moves.urllib.parse import quote as urlquote from ansible.module_utils.scaleway import SCALEWAY_LOCATION, scaleway_argument_spec, Scaleway SCALEWAY_SERVER_STATES = ( 'stopped', 'stopping', 'starting', 'running', 'locked' ) SCALEWAY_TRANSITIONS_STATES = ( "stopping", "starting", "pending" ) def check_image_id(compute_api, image_id): response = compute_api.get(path="images") if response.ok and response.json: image_ids = [image["id"] for image in response.json["images"]] if image_id not in image_ids: compute_api.module.fail_json(msg='Error in getting image %s on %s' % (image_id, compute_api.module.params.get('api_url'))) else: compute_api.module.fail_json(msg="Error in getting images from: %s" % compute_api.module.params.get('api_url')) def fetch_state(compute_api, server): compute_api.module.debug("fetch_state of server: %s" % server["id"]) response = compute_api.get(path="servers/%s" % server["id"]) if response.status_code == 404: return "absent" if not response.ok: msg = 'Error during state fetching: (%s) %s' % (response.status_code, response.json) compute_api.module.fail_json(msg=msg) try: compute_api.module.debug("Server %s in state: %s" % (server["id"], response.json["server"]["state"])) return response.json["server"]["state"] except KeyError: compute_api.module.fail_json(msg="Could not fetch state in %s" % response.json) def wait_to_complete_state_transition(compute_api, server): wait = compute_api.module.params["wait"] if not wait: return wait_timeout = compute_api.module.params["wait_timeout"] wait_sleep_time = compute_api.module.params["wait_sleep_time"] start = datetime.datetime.utcnow() end = start + datetime.timedelta(seconds=wait_timeout) while datetime.datetime.utcnow() < end: compute_api.module.debug("We are going to wait for the server to finish its transition") if fetch_state(compute_api, server) not in SCALEWAY_TRANSITIONS_STATES: compute_api.module.debug("It seems that the server is not in transition anymore.") compute_api.module.debug("Server in state: %s" % fetch_state(compute_api, server)) break time.sleep(wait_sleep_time) else: compute_api.module.fail_json(msg="Server takes too long to finish its transition") def public_ip_payload(compute_api, public_ip): # We 
don't want a public ip if public_ip in ("absent",): return {"dynamic_ip_required": False} # IP is only attached to the instance and is released as soon as the instance terminates if public_ip in ("dynamic", "allocated"): return {"dynamic_ip_required": True} # We check that the IP we want to attach exists, if so its ID is returned response = compute_api.get("ips") if not response.ok: msg = 'Error during public IP validation: (%s) %s' % (response.status_code, response.json) compute_api.module.fail_json(msg=msg) ip_list = [] try: ip_list = response.json["ips"] except KeyError: compute_api.module.fail_json(msg="Error in getting the IP information from: %s" % response.json) lookup = [ip["id"] for ip in ip_list] if public_ip in lookup: return {"public_ip": public_ip} # Fail explicitly instead of falling through and returning None, which would crash wished_server.update() later compute_api.module.fail_json(msg="Public IP %s does not exist on this account" % public_ip) def create_server(compute_api, server): compute_api.module.debug("Starting a create_server") target_server = None data = {"enable_ipv6": server["enable_ipv6"], "tags": server["tags"], "commercial_type": server["commercial_type"], "image": server["image"], "dynamic_ip_required": server["dynamic_ip_required"], "name": server["name"], "organization": server["organization"] } if server["boot_type"]: data["boot_type"] = server["boot_type"] if server["security_group"]: data["security_group"] = server["security_group"] response = compute_api.post(path="servers", data=data) if not response.ok: msg = 'Error during server creation: (%s) %s' % (response.status_code, response.json) compute_api.module.fail_json(msg=msg) try: target_server = response.json["server"] except KeyError: compute_api.module.fail_json(msg="Error in getting the server information from: %s" % response.json) wait_to_complete_state_transition(compute_api=compute_api, server=target_server) return target_server def restart_server(compute_api, server): return perform_action(compute_api=compute_api, server=server, action="reboot") def stop_server(compute_api, server): return perform_action(compute_api=compute_api, server=server, action="poweroff") def start_server(compute_api, server): return perform_action(compute_api=compute_api, server=server, action="poweron") def perform_action(compute_api, server, action): response = compute_api.post(path="servers/%s/action" % server["id"], data={"action": action}) if not response.ok: msg = 'Error during server %s: (%s) %s' % (action, response.status_code, response.json) compute_api.module.fail_json(msg=msg) wait_to_complete_state_transition(compute_api=compute_api, server=server) return response def remove_server(compute_api, server): compute_api.module.debug("Starting remove server strategy") response = compute_api.delete(path="servers/%s" % server["id"]) if not response.ok: msg = 'Error during server deletion: (%s) %s' % (response.status_code, response.json) compute_api.module.fail_json(msg=msg) wait_to_complete_state_transition(compute_api=compute_api, server=server) return response def present_strategy(compute_api, wished_server): compute_api.module.debug("Starting present strategy") changed = False query_results = find(compute_api=compute_api, wished_server=wished_server, per_page=1) if not query_results: changed = True if compute_api.module.check_mode: return changed, {"status": "A server would be created."} target_server = create_server(compute_api=compute_api, server=wished_server) else: target_server = query_results[0] if server_attributes_should_be_changed(compute_api=compute_api, target_server=target_server, wished_server=wished_server): changed = True if compute_api.module.check_mode: return changed, {"status": "Server %s 
attributes would be changed." % target_server["id"]} target_server = server_change_attributes(compute_api=compute_api, target_server=target_server, wished_server=wished_server) return changed, target_server def absent_strategy(compute_api, wished_server): compute_api.module.debug("Starting absent strategy") changed = False target_server = None query_results = find(compute_api=compute_api, wished_server=wished_server, per_page=1) if not query_results: return changed, {"status": "Server already absent."} else: target_server = query_results[0] changed = True if compute_api.module.check_mode: return changed, {"status": "Server %s would be made absent." % target_server["id"]} # A server MUST be stopped to be deleted. while fetch_state(compute_api=compute_api, server=target_server) != "stopped": wait_to_complete_state_transition(compute_api=compute_api, server=target_server) response = stop_server(compute_api=compute_api, server=target_server) if not response.ok: err_msg = 'Error while stopping a server before removing it [{0}: {1}]'.format(response.status_code, response.json) compute_api.module.fail_json(msg=err_msg) wait_to_complete_state_transition(compute_api=compute_api, server=target_server) response = remove_server(compute_api=compute_api, server=target_server) if not response.ok: err_msg = 'Error while removing server [{0}: {1}]'.format(response.status_code, response.json) compute_api.module.fail_json(msg=err_msg) return changed, {"status": "Server %s deleted" % target_server["id"]} def running_strategy(compute_api, wished_server): compute_api.module.debug("Starting running strategy") changed = False query_results = find(compute_api=compute_api, wished_server=wished_server, per_page=1) if not query_results: changed = True if compute_api.module.check_mode: return changed, {"status": "A server would be created before being run."} target_server = create_server(compute_api=compute_api, server=wished_server) else: target_server = query_results[0] if server_attributes_should_be_changed(compute_api=compute_api, target_server=target_server, wished_server=wished_server): changed = True if compute_api.module.check_mode: return changed, {"status": "Server %s attributes would be changed before running it." % target_server["id"]} target_server = server_change_attributes(compute_api=compute_api, target_server=target_server, wished_server=wished_server) current_state = fetch_state(compute_api=compute_api, server=target_server) if current_state not in ("running", "starting"): compute_api.module.debug("running_strategy: Server in state: %s" % current_state) changed = True if compute_api.module.check_mode: return changed, {"status": "Server %s would be started." 
% target_server["id"]} response = start_server(compute_api=compute_api, server=target_server) if not response.ok: msg = 'Error while running server [{0}: {1}]'.format(response.status_code, response.json) compute_api.module.fail_json(msg=msg) return changed, target_server def stop_strategy(compute_api, wished_server): compute_api.module.debug("Starting stop strategy") query_results = find(compute_api=compute_api, wished_server=wished_server, per_page=1) changed = False if not query_results: if compute_api.module.check_mode: return changed, {"status": "A server would be created before being stopped."} target_server = create_server(compute_api=compute_api, server=wished_server) changed = True else: target_server = query_results[0] compute_api.module.debug("stop_strategy: Servers are found.") if server_attributes_should_be_changed(compute_api=compute_api, target_server=target_server, wished_server=wished_server): changed = True if compute_api.module.check_mode: return changed, { "status": "Server %s attributes would be changed before stopping it." % target_server["id"]} target_server = server_change_attributes(compute_api=compute_api, target_server=target_server, wished_server=wished_server) wait_to_complete_state_transition(compute_api=compute_api, server=target_server) current_state = fetch_state(compute_api=compute_api, server=target_server) if current_state not in ("stopped",): compute_api.module.debug("stop_strategy: Server in state: %s" % current_state) changed = True if compute_api.module.check_mode: return changed, {"status": "Server %s would be stopped." % target_server["id"]} response = stop_server(compute_api=compute_api, server=target_server) compute_api.module.debug(response.json) compute_api.module.debug(response.ok) if not response.ok: msg = 'Error while stopping server [{0}: {1}]'.format(response.status_code, response.json) compute_api.module.fail_json(msg=msg) return changed, target_server def restart_strategy(compute_api, wished_server): compute_api.module.debug("Starting restart strategy") changed = False query_results = find(compute_api=compute_api, wished_server=wished_server, per_page=1) if not query_results: changed = True if compute_api.module.check_mode: return changed, {"status": "A server would be created before being rebooted."} target_server = create_server(compute_api=compute_api, server=wished_server) else: target_server = query_results[0] if server_attributes_should_be_changed(compute_api=compute_api, target_server=target_server, wished_server=wished_server): changed = True if compute_api.module.check_mode: return changed, { "status": "Server %s attributes would be changed before rebooting it." % target_server["id"]} target_server = server_change_attributes(compute_api=compute_api, target_server=target_server, wished_server=wished_server) changed = True if compute_api.module.check_mode: return changed, {"status": "Server %s would be rebooted." 
% target_server["id"]} wait_to_complete_state_transition(compute_api=compute_api, server=target_server) if fetch_state(compute_api=compute_api, server=target_server) in ("running",): response = restart_server(compute_api=compute_api, server=target_server) wait_to_complete_state_transition(compute_api=compute_api, server=target_server) if not response.ok: msg = 'Error while restarting server that was running [{0}: {1}].'.format(response.status_code, response.json) compute_api.module.fail_json(msg=msg) if fetch_state(compute_api=compute_api, server=target_server) in ("stopped",): response = restart_server(compute_api=compute_api, server=target_server) wait_to_complete_state_transition(compute_api=compute_api, server=target_server) if not response.ok: msg = 'Error while restarting server that was stopped [{0}: {1}].'.format(response.status_code, response.json) compute_api.module.fail_json(msg=msg) return changed, target_server state_strategy = { "present": present_strategy, "restarted": restart_strategy, "stopped": stop_strategy, "running": running_strategy, "absent": absent_strategy } def find(compute_api, wished_server, per_page=1): compute_api.module.debug("Getting inside find") # Only the name attribute is accepted in the Compute query API response = compute_api.get("servers", params={"name": wished_server["name"], "per_page": per_page}) if not response.ok: msg = 'Error during server search: (%s) %s' % (response.status_code, response.json) compute_api.module.fail_json(msg=msg) search_results = response.json["servers"] return search_results PATCH_MUTABLE_SERVER_ATTRIBUTES = ( "ipv6", "tags", "name", "dynamic_ip_required", "security_group", ) def server_attributes_should_be_changed(compute_api, target_server, wished_server): compute_api.module.debug("Checking if server attributes should be changed") compute_api.module.debug("Current Server: %s" % target_server) compute_api.module.debug("Wished Server: %s" % wished_server) debug_dict = dict((x, (target_server[x], wished_server[x])) for x in PATCH_MUTABLE_SERVER_ATTRIBUTES if x in target_server and x in wished_server) compute_api.module.debug("Debug dict %s" % debug_dict) try: for key in PATCH_MUTABLE_SERVER_ATTRIBUTES: if key in target_server and key in wished_server: # When you are working with dict, only ID matter as we ask user to put only the resource ID in the playbook if isinstance(target_server[key], dict) and wished_server[key] and "id" in target_server[key].keys( ) and target_server[key]["id"] != wished_server[key]: return True # Handling other structure compare simply the two objects content elif not isinstance(target_server[key], dict) and target_server[key] != wished_server[key]: return True return False except AttributeError: compute_api.module.fail_json(msg="Error while checking if attributes should be changed") def server_change_attributes(compute_api, target_server, wished_server): compute_api.module.debug("Starting patching server attributes") patch_payload = dict() for key in PATCH_MUTABLE_SERVER_ATTRIBUTES: if key in target_server and key in wished_server: # When you are working with dict, only ID matter as we ask user to put only the resource ID in the playbook if isinstance(target_server[key], dict) and "id" in target_server[key] and wished_server[key]: # Setting all key to current value except ID key_dict = dict((x, target_server[key][x]) for x in target_server[key].keys() if x != "id") # Setting ID to the user specified ID key_dict["id"] = wished_server[key] patch_payload[key] = key_dict elif not 
isinstance(target_server[key], dict): patch_payload[key] = wished_server[key] response = compute_api.patch(path="servers/%s" % target_server["id"], data=patch_payload) if not response.ok: msg = 'Error during server attributes patching: (%s) %s' % (response.status_code, response.json) compute_api.module.fail_json(msg=msg) try: target_server = response.json["server"] except KeyError: compute_api.module.fail_json(msg="Error in getting the server information from: %s" % response.json) wait_to_complete_state_transition(compute_api=compute_api, server=target_server) return target_server def core(module): region = module.params["region"] wished_server = { "state": module.params["state"], "image": module.params["image"], "name": module.params["name"], "commercial_type": module.params["commercial_type"], "enable_ipv6": module.params["enable_ipv6"], "boot_type": module.params["boot_type"], "tags": module.params["tags"], "organization": module.params["organization"], "security_group": module.params["security_group"] } module.params['api_url'] = SCALEWAY_LOCATION[region]["api_endpoint"] compute_api = Scaleway(module=module) check_image_id(compute_api, wished_server["image"]) # IP parameters of the wished server depends on the configuration ip_payload = public_ip_payload(compute_api=compute_api, public_ip=module.params["public_ip"]) wished_server.update(ip_payload) changed, summary = state_strategy[wished_server["state"]](compute_api=compute_api, wished_server=wished_server) module.exit_json(changed=changed, msg=summary) def main(): argument_spec = scaleway_argument_spec() argument_spec.update(dict( image=dict(required=True), name=dict(), region=dict(required=True, choices=SCALEWAY_LOCATION.keys()), commercial_type=dict(required=True), enable_ipv6=dict(default=False, type="bool"), boot_type=dict(choices=['bootscript', 'local']), public_ip=dict(default="absent"), state=dict(choices=state_strategy.keys(), default='present'), tags=dict(type="list", default=[]), organization=dict(required=True), wait=dict(type="bool", default=False), wait_timeout=dict(type="int", default=300), wait_sleep_time=dict(type="int", default=3), security_group=dict(), )) module = AnsibleModule( argument_spec=argument_spec, supports_check_mode=True, ) core(module) if __name__ == '__main__': main()
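For reference, this module is driven entirely by its `state` parameter, which `core()` dispatches through the `state_strategy` table above. A minimal sketch of a standalone playbook exercising it, with the `wait` polling knobs made explicit (the image, organization, and server name values below are placeholders, reused from the module's own EXAMPLES):

```yaml
- name: Scaleway compute smoke test (sketch)
  hosts: localhost
  tasks:
    - name: Ensure the instance exists and is running
      scaleway_compute:
        name: algo-test                                       # placeholder name
        state: running                                        # dispatched to running_strategy above
        image: 89ee4018-f8c3-4dc4-a6b5-bca14f985ebe           # placeholder UUID
        organization: 951df375-e094-4d26-97c1-ba548eeb9c42    # placeholder UUID
        region: ams1
        commercial_type: VC1S
        public_ip: dynamic        # IP is released when the host is destroyed
        wait: true                # poll fetch_state() until the transition ends
        wait_timeout: 300
        wait_sleep_time: 3
```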
YAML
algo/playbooks/cloud-post.yml
--- - name: Set subjectAltName as a fact set_fact: IP_subject_alt_name: "{{ (IP_subject_alt_name if algo_provider == 'local' else cloud_instance_ip) | lower }}" - name: Add the server to an inventory group add_host: name: "{% if cloud_instance_ip == 'localhost' %}localhost{% else %}{{ cloud_instance_ip }}{% endif %}" groups: vpn-host ansible_connection: "{% if cloud_instance_ip == 'localhost' %}local{% else %}ssh{% endif %}" ansible_ssh_user: "{{ ansible_ssh_user|default('root') }}" ansible_ssh_port: "{{ ansible_ssh_port|default(22) }}" ansible_python_interpreter: /usr/bin/python3 algo_provider: "{{ algo_provider }}" algo_server_name: "{{ algo_server_name }}" algo_ondemand_cellular: "{{ algo_ondemand_cellular }}" algo_ondemand_wifi: "{{ algo_ondemand_wifi }}" algo_ondemand_wifi_exclude: "{{ algo_ondemand_wifi_exclude }}" algo_dns_adblocking: "{{ algo_dns_adblocking }}" algo_ssh_tunneling: "{{ algo_ssh_tunneling }}" algo_store_pki: "{{ algo_store_pki }}" IP_subject_alt_name: "{{ IP_subject_alt_name }}" alternative_ingress_ip: "{{ alternative_ingress_ip | default(omit) }}" cloudinit: "{{ cloudinit|default(false) }}" - name: Additional variables for the server add_host: name: "{% if cloud_instance_ip == 'localhost' %}localhost{% else %}{{ cloud_instance_ip }}{% endif %}" ansible_ssh_private_key_file: "{{ SSH_keys.private_tmp }}" when: algo_provider != 'local' - name: Wait until SSH becomes ready... wait_for: port: "{{ ansible_ssh_port|default(22) }}" host: "{{ cloud_instance_ip }}" search_regex: OpenSSH delay: 10 timeout: 320 state: present when: cloud_instance_ip != "localhost" - name: Mount tmpfs import_tasks: tmpfs/main.yml when: - pki_in_tmpfs - not algo_store_pki - ansible_system == "Darwin" or ansible_system == "Linux" - debug: var: IP_subject_alt_name - name: Wait 600 seconds for target connection to become reachable/usable wait_for_connection: delegate_to: "{{ item }}" loop: "{{ groups['vpn-host'] }}"
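The `add_host` calls above are the core trick of this playbook: the freshly created server is injected into the in-memory inventory so that later plays can target it without a static hosts file. A stripped-down sketch of the same pattern (the address and user below are illustrative):

```yaml
- hosts: localhost
  tasks:
    - name: Register the new server in the in-memory inventory
      add_host:
        name: 203.0.113.10        # illustrative address
        groups: vpn-host
        ansible_ssh_user: algo

- hosts: vpn-host                 # resolves to the host added above
  tasks:
    - ping:                       # simple connectivity smoke test
```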
YAML
algo/playbooks/cloud-pre.yml
--- - block: - name: Display the invocation environment shell: > ./algo-showenv.sh \ 'algo_provider "{{ algo_provider }}"' \ {% if ipsec_enabled %} 'algo_ondemand_cellular "{{ algo_ondemand_cellular }}"' \ 'algo_ondemand_wifi "{{ algo_ondemand_wifi }}"' \ 'algo_ondemand_wifi_exclude "{{ algo_ondemand_wifi_exclude }}"' \ {% endif %} 'algo_dns_adblocking "{{ algo_dns_adblocking }}"' \ 'algo_ssh_tunneling "{{ algo_ssh_tunneling }}"' \ 'wireguard_enabled "{{ wireguard_enabled }}"' \ 'dns_encryption "{{ dns_encryption }}"' \ > /dev/tty || true tags: debug - name: Install the requirements pip: state: present name: - pyOpenSSL>=0.15 - segno tags: - always - skip_ansible_lint delegate_to: localhost become: false - block: - name: Generate the SSH private key openssl_privatekey: path: "{{ SSH_keys.private }}" size: 2048 mode: "0600" type: RSA - name: Generate the SSH public key openssl_publickey: path: "{{ SSH_keys.public }}" privatekey_path: "{{ SSH_keys.private }}" format: OpenSSH - name: Copy the private SSH key to /tmp copy: src: "{{ SSH_keys.private }}" dest: "{{ SSH_keys.private_tmp }}" force: true mode: "0600" delegate_to: localhost become: false when: algo_provider != "local"
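These tasks assume an `SSH_keys` dictionary defined elsewhere in Algo's configuration. A hypothetical sketch of its expected shape, inferred only from the fields referenced here and in the roles below (`private`, `private_tmp`, `public`, and the `comment` used by the DigitalOcean role); the values are made up:

```yaml
SSH_keys:
  comment: algo@ssh                    # hypothetical value
  private: configs/algo.pem            # hypothetical path
  private_tmp: /dev/shm/algo-ssh.pem   # hypothetical path
  public: configs/algo.pem.pub         # hypothetical path
```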
YAML
algo/playbooks/tmpfs/linux.yml
--- - name: Linux | set OS specific facts set_fact: tmpfs_volume_name: AlgoVPN-{{ IP_subject_alt_name }} tmpfs_volume_path: /dev/shm
YAML
algo/playbooks/tmpfs/macos.yml
--- - name: MacOS | set OS specific facts set_fact: tmpfs_volume_name: AlgoVPN-{{ IP_subject_alt_name }} tmpfs_volume_path: /Volumes - name: MacOS | mount a ram disk shell: > /usr/sbin/diskutil info "/{{ tmpfs_volume_path }}/{{ tmpfs_volume_name }}/" || /usr/sbin/diskutil erasevolume HFS+ "{{ tmpfs_volume_name }}" $(hdiutil attach -nomount ram://64000) args: creates: /{{ tmpfs_volume_path }}/{{ tmpfs_volume_name }}
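The `ram://64000` argument to `hdiutil attach` is a sector count, not a byte count: 64000 sectors × 512 bytes = 32,768,000 bytes, roughly 31 MiB, which is plenty for a short-lived PKI. A sketch of the same task with the size factored out (the `ramdisk_sectors` variable name is made up for illustration):

```yaml
- name: MacOS | mount a ram disk of a configurable size
  vars:
    ramdisk_sectors: 64000   # 64000 * 512 bytes = 32,768,000 bytes (~31 MiB)
  shell: >
    /usr/sbin/diskutil info "/{{ tmpfs_volume_path }}/{{ tmpfs_volume_name }}/" ||
    /usr/sbin/diskutil erasevolume HFS+ "{{ tmpfs_volume_name }}"
    $(hdiutil attach -nomount ram://{{ ramdisk_sectors }})
  args:
    creates: /{{ tmpfs_volume_path }}/{{ tmpfs_volume_name }}
```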
YAML
algo/playbooks/tmpfs/main.yml
--- - name: Include tasks for MacOS import_tasks: macos.yml when: ansible_system == "Darwin" - name: Include tasks for Linux import_tasks: linux.yml when: ansible_system == "Linux" - name: Set config paths as facts set_fact: ipsec_pki_path: /{{ tmpfs_volume_path }}/{{ tmpfs_volume_name }}/IPsec/ - name: Update config paths add_host: name: "{{ 'localhost' if cloud_instance_ip == 'localhost' else cloud_instance_ip }}" ipsec_pki_path: "{{ ipsec_pki_path }}"
YAML
algo/playbooks/tmpfs/umount.yml
--- - name: Linux | Delete the PKI directory file: path: /{{ facts.tmpfs_volume_path }}/{{ facts.tmpfs_volume_name }}/ state: absent when: facts.ansible_system == "Linux" - block: - name: MacOS | check if the ramdisk exists command: /usr/sbin/diskutil info "{{ facts.tmpfs_volume_name }}" ignore_errors: true changed_when: false register: diskutil_info - name: MacOS | unmount and eject the ram disk shell: > /usr/sbin/diskutil umount force "/{{ facts.tmpfs_volume_path }}/{{ facts.tmpfs_volume_name }}/" && /usr/sbin/diskutil eject "{{ facts.tmpfs_volume_name }}" changed_when: false when: diskutil_info.rc == 0 register: result until: result.rc == 0 retries: 5 delay: 3 when: - facts.ansible_system == "Darwin"
YAML
algo/roles/client/tasks/main.yml
--- - name: Gather Facts setup: - name: Include system based facts and tasks import_tasks: systems/main.yml - name: Install prerequisites package: name="{{ item }}" state=present with_items: - "{{ prerequisites }}" register: result until: result is succeeded retries: 10 delay: 3 - name: Install strongSwan package: name=strongswan state=present register: result until: result is succeeded retries: 10 delay: 3 - name: Setup the ipsec config template: src: roles/strongswan/templates/client_ipsec.conf.j2 dest: "{{ configs_prefix }}/ipsec.{{ IP_subject_alt_name }}.conf" mode: "0644" with_items: - "{{ vpn_user }}" notify: - restart strongswan - name: Setup the ipsec secrets template: src: roles/strongswan/templates/client_ipsec.secrets.j2 dest: "{{ configs_prefix }}/ipsec.{{ IP_subject_alt_name }}.secrets" mode: "0600" with_items: - "{{ vpn_user }}" notify: - restart strongswan - name: Include additional ipsec config lineinfile: dest: "{{ item.dest }}" line: "{{ item.line }}" create: true with_items: - dest: "{{ configs_prefix }}/ipsec.conf" line: include ipsec.{{ IP_subject_alt_name }}.conf - dest: "{{ configs_prefix }}/ipsec.secrets" line: include ipsec.{{ IP_subject_alt_name }}.secrets notify: - restart strongswan - name: Configure libstrongswan to relax CA constraints copy: src: libstrongswan-relax-constraints.conf dest: "{{ configs_prefix }}/strongswan.d/relax-ca-constraints.conf" owner: root group: root mode: 0644 - name: Setup the certificates and keys template: src: "{{ item.src }}" dest: "{{ item.dest }}" with_items: - src: configs/{{ IP_subject_alt_name }}/ipsec/.pki/certs/{{ vpn_user }}.crt dest: "{{ configs_prefix }}/ipsec.d/certs/{{ vpn_user }}.crt" - src: configs/{{ IP_subject_alt_name }}/ipsec/.pki/cacert.pem dest: "{{ configs_prefix }}/ipsec.d/cacerts/{{ IP_subject_alt_name }}.pem" - src: configs/{{ IP_subject_alt_name }}/ipsec/.pki/private/{{ vpn_user }}.key dest: "{{ configs_prefix }}/ipsec.d/private/{{ vpn_user }}.key" notify: - restart strongswan
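Several tasks above notify a `restart strongswan` handler that is not part of this listing; it would live in the role's `handlers/main.yml`. A minimal sketch of what such a handler could look like (the service name is an assumption and may differ per distribution):

```yaml
# handlers/main.yml (sketch; not taken from this listing)
- name: restart strongswan
  service:
    name: strongswan     # assumed service name
    state: restarted
```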
YAML
algo/roles/client/tasks/systems/CentOS.yml
--- - name: Set OS specific facts set_fact: prerequisites: - epel-release configs_prefix: /etc/strongswan
YAML
algo/roles/client/tasks/systems/Debian.yml
--- - name: Set OS specific facts set_fact: prerequisites: - libstrongswan-standard-plugins configs_prefix: /etc
YAML
algo/roles/client/tasks/systems/Fedora.yml
--- - name: Set OS specific facts set_fact: prerequisites: - libselinux-python configs_prefix: /etc/strongswan
YAML
algo/roles/client/tasks/systems/main.yml
--- - include_tasks: Debian.yml when: ansible_distribution == 'Debian' - include_tasks: Ubuntu.yml when: ansible_distribution == 'Ubuntu' - include_tasks: CentOS.yml when: ansible_distribution == 'CentOS' - include_tasks: Fedora.yml when: ansible_distribution == 'Fedora'
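Since each task file is named after the distribution it serves, the four conditional includes above could equivalently be collapsed into a single lookup. A sketch of that alternative; the behavioral difference is that an unsupported distribution would fail on a missing file instead of silently matching nothing:

```yaml
- include_tasks: "{{ ansible_distribution }}.yml"
```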
YAML
algo/roles/client/tasks/systems/Ubuntu.yml
--- - name: Set OS specific facts set_fact: prerequisites: - libstrongswan-standard-plugins configs_prefix: /etc
YAML
algo/roles/cloud-azure/defaults/main.yml
--- # az account list-locations --query 'sort_by([].{name:name,displayName:displayName,regionalDisplayName:regionalDisplayName}, &name)' -o yaml azure_regions: - displayName: Asia name: asia regionalDisplayName: Asia - displayName: Asia Pacific name: asiapacific regionalDisplayName: Asia Pacific - displayName: Australia name: australia regionalDisplayName: Australia - displayName: Australia Central name: australiacentral regionalDisplayName: (Asia Pacific) Australia Central - displayName: Australia Central 2 name: australiacentral2 regionalDisplayName: (Asia Pacific) Australia Central 2 - displayName: Australia East name: australiaeast regionalDisplayName: (Asia Pacific) Australia East - displayName: Australia Southeast name: australiasoutheast regionalDisplayName: (Asia Pacific) Australia Southeast - displayName: Brazil name: brazil regionalDisplayName: Brazil - displayName: Brazil South name: brazilsouth regionalDisplayName: (South America) Brazil South - displayName: Brazil Southeast name: brazilsoutheast regionalDisplayName: (South America) Brazil Southeast - displayName: Canada name: canada regionalDisplayName: Canada - displayName: Canada Central name: canadacentral regionalDisplayName: (Canada) Canada Central - displayName: Canada East name: canadaeast regionalDisplayName: (Canada) Canada East - displayName: Central India name: centralindia regionalDisplayName: (Asia Pacific) Central India - displayName: Central US name: centralus regionalDisplayName: (US) Central US - displayName: Central US EUAP name: centraluseuap regionalDisplayName: (US) Central US EUAP - displayName: Central US (Stage) name: centralusstage regionalDisplayName: (US) Central US (Stage) - displayName: East Asia name: eastasia regionalDisplayName: (Asia Pacific) East Asia - displayName: East Asia (Stage) name: eastasiastage regionalDisplayName: (Asia Pacific) East Asia (Stage) - displayName: East US name: eastus regionalDisplayName: (US) East US - displayName: East US 2 name: eastus2 regionalDisplayName: (US) East US 2 - displayName: East US 2 EUAP name: eastus2euap regionalDisplayName: (US) East US 2 EUAP - displayName: East US 2 (Stage) name: eastus2stage regionalDisplayName: (US) East US 2 (Stage) - displayName: East US (Stage) name: eastusstage regionalDisplayName: (US) East US (Stage) - displayName: Europe name: europe regionalDisplayName: Europe - displayName: France Central name: francecentral regionalDisplayName: (Europe) France Central - displayName: France South name: francesouth regionalDisplayName: (Europe) France South - displayName: Germany North name: germanynorth regionalDisplayName: (Europe) Germany North - displayName: Germany West Central name: germanywestcentral regionalDisplayName: (Europe) Germany West Central - displayName: Global name: global regionalDisplayName: Global - displayName: India name: india regionalDisplayName: India - displayName: Japan name: japan regionalDisplayName: Japan - displayName: Japan East name: japaneast regionalDisplayName: (Asia Pacific) Japan East - displayName: Japan West name: japanwest regionalDisplayName: (Asia Pacific) Japan West - displayName: Jio India Central name: jioindiacentral regionalDisplayName: (Asia Pacific) Jio India Central - displayName: Jio India West name: jioindiawest regionalDisplayName: (Asia Pacific) Jio India West - displayName: Korea Central name: koreacentral regionalDisplayName: (Asia Pacific) Korea Central - displayName: Korea South name: koreasouth regionalDisplayName: (Asia Pacific) Korea South - displayName: North Central US name: 
northcentralus regionalDisplayName: (US) North Central US - displayName: North Central US (Stage) name: northcentralusstage regionalDisplayName: (US) North Central US (Stage) - displayName: North Europe name: northeurope regionalDisplayName: (Europe) North Europe - displayName: Norway East name: norwayeast regionalDisplayName: (Europe) Norway East - displayName: Norway West name: norwaywest regionalDisplayName: (Europe) Norway West - displayName: Qatar Central name: qatarcentral regionalDisplayName: (Europe) Qatar Central - displayName: South Africa North name: southafricanorth regionalDisplayName: (Africa) South Africa North - displayName: South Africa West name: southafricawest regionalDisplayName: (Africa) South Africa West - displayName: South Central US name: southcentralus regionalDisplayName: (US) South Central US - displayName: South Central US (Stage) name: southcentralusstage regionalDisplayName: (US) South Central US (Stage) - displayName: Southeast Asia name: southeastasia regionalDisplayName: (Asia Pacific) Southeast Asia - displayName: Southeast Asia (Stage) name: southeastasiastage regionalDisplayName: (Asia Pacific) Southeast Asia (Stage) - displayName: South India name: southindia regionalDisplayName: (Asia Pacific) South India - displayName: Sweden Central name: swedencentral regionalDisplayName: (Europe) Sweden Central - displayName: Sweden South name: swedensouth regionalDisplayName: (Europe) Sweden South - displayName: Switzerland North name: switzerlandnorth regionalDisplayName: (Europe) Switzerland North - displayName: Switzerland West name: switzerlandwest regionalDisplayName: (Europe) Switzerland West - displayName: UAE Central name: uaecentral regionalDisplayName: (Middle East) UAE Central - displayName: UAE North name: uaenorth regionalDisplayName: (Middle East) UAE North - displayName: United Kingdom name: uk regionalDisplayName: United Kingdom - displayName: UK South name: uksouth regionalDisplayName: (Europe) UK South - displayName: UK West name: ukwest regionalDisplayName: (Europe) UK West - displayName: United States name: unitedstates regionalDisplayName: United States - displayName: West Central US name: westcentralus regionalDisplayName: (US) West Central US - displayName: West Europe name: westeurope regionalDisplayName: (Europe) West Europe - displayName: West India name: westindia regionalDisplayName: (Asia Pacific) West India - displayName: West US name: westus regionalDisplayName: (US) West US - displayName: West US 2 name: westus2 regionalDisplayName: (US) West US 2 - displayName: West US 2 (Stage) name: westus2stage regionalDisplayName: (US) West US 2 (Stage) - displayName: West US 3 name: westus3 regionalDisplayName: (US) West US 3 - displayName: West US (Stage) name: westusstage regionalDisplayName: (US) West US (Stage)
JSON
algo/roles/cloud-azure/files/deployment.json
{ "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json", "contentVersion": "1.0.0.0", "parameters": { "sshKeyData": { "type": "string" }, "WireGuardPort": { "type": "int" }, "vmSize": { "type": "string" }, "imageReferencePublisher": { "type": "string" }, "imageReferenceOffer": { "type": "string" }, "imageReferenceSku": { "type": "string" }, "imageReferenceVersion": { "type": "string" }, "osDiskType": { "type": "string" }, "SshPort": { "type": "int" }, "UserData": { "type": "string" } }, "variables": { "vnetID": "[resourceId('Microsoft.Network/virtualNetworks', resourceGroup().name)]", "subnet1Ref": "[concat(variables('vnetID'),'/subnets/', resourceGroup().name)]" }, "resources": [ { "apiVersion": "2015-06-15", "type": "Microsoft.Network/networkSecurityGroups", "name": "[resourceGroup().name]", "location": "[resourceGroup().location]", "properties": { "securityRules": [ { "name": "AllowSSH", "properties": { "description": "Allow SSH", "protocol": "Tcp", "sourcePortRange": "*", "destinationPortRange": "[parameters('SshPort')]", "sourceAddressPrefix": "*", "destinationAddressPrefix": "*", "access": "Allow", "priority": 100, "direction": "Inbound" } }, { "name": "AllowIPSEC500", "properties": { "description": "Allow UDP to port 500", "protocol": "Udp", "sourcePortRange": "*", "destinationPortRange": "500", "sourceAddressPrefix": "*", "destinationAddressPrefix": "*", "access": "Allow", "priority": 110, "direction": "Inbound" } }, { "name": "AllowIPSEC4500", "properties": { "description": "Allow UDP to port 4500", "protocol": "Udp", "sourcePortRange": "*", "destinationPortRange": "4500", "sourceAddressPrefix": "*", "destinationAddressPrefix": "*", "access": "Allow", "priority": 120, "direction": "Inbound" } }, { "name": "AllowWireGuard", "properties": { "description": "Locks inbound down to ssh default port 22.", "protocol": "Udp", "sourcePortRange": "*", "destinationPortRange": "[parameters('WireGuardPort')]", "sourceAddressPrefix": "*", "destinationAddressPrefix": "*", "access": "Allow", "priority": 130, "direction": "Inbound" } } ] } }, { "apiVersion": "2015-06-15", "type": "Microsoft.Network/publicIPAddresses", "name": "[resourceGroup().name]", "location": "[resourceGroup().location]", "properties": { "publicIPAllocationMethod": "Static" } }, { "apiVersion": "2015-06-15", "type": "Microsoft.Network/virtualNetworks", "name": "[resourceGroup().name]", "location": "[resourceGroup().location]", "properties": { "addressSpace": { "addressPrefixes": [ "10.10.0.0/16" ] }, "subnets": [ { "name": "[resourceGroup().name]", "properties": { "addressPrefix": "10.10.0.0/24" } } ] } }, { "apiVersion": "2015-06-15", "type": "Microsoft.Network/networkInterfaces", "name": "[resourceGroup().name]", "location": "[resourceGroup().location]", "dependsOn": [ "[concat('Microsoft.Network/networkSecurityGroups/', resourceGroup().name)]", "[concat('Microsoft.Network/publicIPAddresses/', resourceGroup().name)]", "[concat('Microsoft.Network/virtualNetworks/', resourceGroup().name)]" ], "properties": { "networkSecurityGroup": { "id": "[resourceId('Microsoft.Network/networkSecurityGroups', resourceGroup().name)]" }, "ipConfigurations": [ { "name": "ipconfig1", "properties": { "privateIPAllocationMethod": "Dynamic", "publicIPAddress": { "id": "[resourceId('Microsoft.Network/publicIPAddresses', resourceGroup().name)]" }, "subnet": { "id": "[variables('subnet1Ref')]" } } } ] } }, { "apiVersion": "2016-04-30-preview", "type": "Microsoft.Compute/virtualMachines", "name": 
"[resourceGroup().name]", "location": "[resourceGroup().location]", "dependsOn": [ "[concat('Microsoft.Network/networkInterfaces/', resourceGroup().name)]" ], "properties": { "hardwareProfile": { "vmSize": "[parameters('vmSize')]" }, "osProfile": { "computerName": "[resourceGroup().name]", "customData": "[parameters('UserData')]", "adminUsername": "algo", "linuxConfiguration": { "disablePasswordAuthentication": true, "ssh": { "publicKeys": [ { "path": "/home/algo/.ssh/authorized_keys", "keyData": "[parameters('sshKeyData')]" } ] } } }, "storageProfile": { "imageReference": { "publisher": "[parameters('imageReferencePublisher')]", "offer": "[parameters('imageReferenceOffer')]", "sku": "[parameters('imageReferenceSku')]", "version": "[parameters('imageReferenceVersion')]" }, "osDisk": { "createOption": "FromImage", "managedDisk": { "storageAccountType": "[parameters('osDiskType')]" } } }, "networkProfile": { "networkInterfaces": [ { "id": "[resourceId('Microsoft.Network/networkInterfaces', resourceGroup().name)]" } ] } } } ], "outputs": { "publicIPAddresses": { "type": "string", "value": "[reference(resourceId('Microsoft.Network/publicIPAddresses',resourceGroup().name),providers('Microsoft.Network', 'publicIPAddresses').apiVersions[0]).ipAddress]", } } }
YAML
algo/roles/cloud-azure/tasks/main.yml
--- - name: Build python virtual environment import_tasks: venv.yml - name: Include prompts import_tasks: prompts.yml - set_fact: algo_region: >- {% if region is defined %}{{ region }} {%- elif _algo_region.user_input %}{{ azure_regions[_algo_region.user_input | int -1 ]['name'] }} {%- else %}{{ azure_regions[default_region | int - 1]['name'] }}{% endif %} - name: Create AlgoVPN Server azure_rm_deployment: state: present deployment_name: "{{ algo_server_name }}" template: "{{ lookup('file', role_path + '/files/deployment.json') }}" secret: "{{ secret }}" tenant: "{{ tenant }}" client_id: "{{ client_id }}" subscription_id: "{{ subscription_id }}" resource_group_name: "{{ algo_server_name }}" location: "{{ algo_region }}" parameters: sshKeyData: value: "{{ lookup('file', '{{ SSH_keys.public }}') }}" WireGuardPort: value: "{{ wireguard_port }}" vmSize: value: "{{ cloud_providers.azure.size }}" imageReferencePublisher: value: "{{ cloud_providers.azure.image.publisher }}" imageReferenceOffer: value: "{{ cloud_providers.azure.image.offer }}" imageReferenceSku: value: "{{ cloud_providers.azure.image.sku }}" imageReferenceVersion: value: "{{ cloud_providers.azure.image.version }}" osDiskType: value: "{{ cloud_providers.azure.osDisk.type }}" SshPort: value: "{{ ssh_port }}" UserData: value: "{{ lookup('template', 'files/cloud-init/base.yml') | b64encode }}" register: azure_rm_deployment - set_fact: cloud_instance_ip: "{{ azure_rm_deployment.deployment.outputs.publicIPAddresses.value }}" ansible_ssh_user: algo ansible_ssh_port: "{{ ssh_port }}" cloudinit: true
YAML
algo/roles/cloud-azure/tasks/prompts.yml
--- - set_fact: secret: "{{ azure_secret | default(lookup('env','AZURE_SECRET'), true) }}" tenant: "{{ azure_tenant | default(lookup('env','AZURE_TENANT'), true) }}" client_id: "{{ azure_client_id | default(lookup('env','AZURE_CLIENT_ID'), true) }}" subscription_id: "{{ azure_subscription_id | default(lookup('env','AZURE_SUBSCRIPTION_ID'), true) }}" - block: - name: Set the default region set_fact: default_region: >- {% for r in azure_regions %} {%- if r['name'] == "eastus" %}{{ loop.index }}{% endif %} {%- endfor %} - pause: prompt: | What region should the server be located in? {% for r in azure_regions %} {{ loop.index }}. {{ r['regionalDisplayName'] }} {% endfor %} Enter the number of your desired region [{{ default_region }}] register: _algo_region when: region is undefined
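Because each credential falls back to a variable and then to an environment variable, these prompts can be skipped entirely for unattended runs. A hypothetical vars file (placeholder values) that could be passed with `-e @azure-vars.yml`, using the variable names from the `set_fact` above:

```yaml
# azure-vars.yml (hypothetical filename, placeholder values)
azure_secret: "00000000-0000-0000-0000-000000000000"
azure_tenant: "00000000-0000-0000-0000-000000000000"
azure_client_id: "00000000-0000-0000-0000-000000000000"
azure_subscription_id: "00000000-0000-0000-0000-000000000000"
region: eastus   # also suppresses the region prompt
```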
YAML
algo/roles/cloud-azure/tasks/venv.yml
--- - name: Install requirements pip: requirements: https://raw.githubusercontent.com/ansible-collections/azure/v1.13.0/requirements-azure.txt state: latest virtualenv_python: python3
YAML
algo/roles/cloud-cloudstack/tasks/main.yml
--- - name: Build python virtual environment import_tasks: venv.yml - name: Include prompts import_tasks: prompts.yml - block: - set_fact: algo_region: >- {% if region is defined %}{{ region }} {%- elif _algo_region.user_input is defined and _algo_region.user_input | length > 0 %}{{ cs_zones[_algo_region.user_input | int -1 ]['name'] }} {%- else %}{{ cs_zones[default_zone | int - 1]['name'] }}{% endif %} - name: Security group created cs_securitygroup: name: "{{ algo_server_name }}-security_group" description: AlgoVPN security group register: cs_security_group - name: Security rules created cs_securitygroup_rule: security_group: "{{ cs_security_group.name }}" protocol: "{{ item.proto }}" start_port: "{{ item.start_port }}" end_port: "{{ item.end_port }}" cidr: "{{ item.range }}" with_items: - { proto: tcp, start_port: "{{ ssh_port }}", end_port: "{{ ssh_port }}", range: 0.0.0.0/0 } - { proto: udp, start_port: 4500, end_port: 4500, range: 0.0.0.0/0 } - { proto: udp, start_port: 500, end_port: 500, range: 0.0.0.0/0 } - { proto: udp, start_port: "{{ wireguard_port }}", end_port: "{{ wireguard_port }}", range: 0.0.0.0/0 } - name: Set facts set_fact: image_id: "{{ cloud_providers.cloudstack.image }}" size: "{{ cloud_providers.cloudstack.size }}" disk: "{{ cloud_providers.cloudstack.disk }}" - name: Server created cs_instance: name: "{{ algo_server_name }}" root_disk_size: "{{ disk }}" template: "{{ image_id }}" security_groups: "{{ cs_security_group.name }}" zone: "{{ algo_region }}" service_offering: "{{ size }}" user_data: "{{ lookup('template', 'files/cloud-init/base.yml') }}" register: cs_server - set_fact: cloud_instance_ip: "{{ cs_server.default_ip }}" ansible_ssh_user: algo ansible_ssh_port: "{{ ssh_port }}" cloudinit: true environment: CLOUDSTACK_KEY: "{{ algo_cs_key }}" CLOUDSTACK_SECRET: "{{ algo_cs_token }}" CLOUDSTACK_ENDPOINT: "{{ algo_cs_url }}"
YAML
algo/roles/cloud-cloudstack/tasks/prompts.yml
--- - block: - pause: prompt: | Enter the API key (https://trailofbits.github.io/algo/cloud-cloudstack.html): echo: false register: _cs_key when: - cs_key is undefined - lookup('env','CLOUDSTACK_KEY')|length <= 0 - pause: prompt: | Enter the API secret (https://trailofbits.github.io/algo/cloud-cloudstack.html): echo: false register: _cs_secret when: - cs_secret is undefined - lookup('env','CLOUDSTACK_SECRET')|length <= 0 - pause: prompt: | Enter the API endpoint (https://trailofbits.github.io/algo/cloud-cloudstack.html) [https://api.exoscale.com/compute] register: _cs_url when: - cs_url is undefined - lookup('env', 'CLOUDSTACK_ENDPOINT') | length <= 0 - set_fact: algo_cs_key: "{{ cs_key | default(_cs_key.user_input|default(None)) | default(lookup('env', 'CLOUDSTACK_KEY'), true) }}" algo_cs_token: "{{ cs_secret | default(_cs_secret.user_input|default(None)) | default(lookup('env', 'CLOUDSTACK_SECRET'), true) }}" algo_cs_url: "{{ cs_url | default(_cs_url.user_input|default(None)) | default(lookup('env', 'CLOUDSTACK_ENDPOINT'), true) | default('https://api.exoscale.com/compute',\ \ true) }}" - name: Get zones on cloud cs_zone_info: register: _cs_zones environment: CLOUDSTACK_KEY: "{{ algo_cs_key }}" CLOUDSTACK_SECRET: "{{ algo_cs_token }}" CLOUDSTACK_ENDPOINT: "{{ algo_cs_url }}" - name: Extract zones from output set_fact: cs_zones: "{{ _cs_zones['zones'] | sort(attribute='name') }}" - name: Set the default zone set_fact: default_zone: >- {% for z in cs_zones %} {%- if z['name'] == "ch-gva-2" %}{{ loop.index }}{% endif %} {%- endfor %} - pause: prompt: | What zone should the server be located in? {% for z in cs_zones %} {{ loop.index }}. {{ z['name'] }} {% endfor %} Enter the number of your desired zone [{{ default_zone }}] register: _algo_region when: region is undefined
YAML
algo/roles/cloud-cloudstack/tasks/venv.yml
--- - name: Install requirements pip: name: - cs - sshpubkeys state: latest virtualenv_python: python3
YAML
algo/roles/cloud-digitalocean/tasks/main.yml
--- - name: Include prompts import_tasks: prompts.yml - name: Upload the SSH key digital_ocean_sshkey: oauth_token: "{{ algo_do_token }}" name: "{{ SSH_keys.comment }}" ssh_pub_key: "{{ lookup('file', '{{ SSH_keys.public }}') }}" register: do_ssh_key - name: Creating a droplet... digital_ocean_droplet: state: present name: "{{ algo_server_name }}" oauth_token: "{{ algo_do_token }}" size: "{{ cloud_providers.digitalocean.size }}" region: "{{ algo_do_region }}" image: "{{ cloud_providers.digitalocean.image }}" wait_timeout: 300 unique_name: true ipv6: true ssh_keys: "{{ do_ssh_key.data.ssh_key.id }}" user_data: "{{ lookup('template', 'files/cloud-init/base.yml') }}" tags: - Environment:Algo register: digital_ocean_droplet # Return data is not idempotent - set_fact: droplet: "{{ digital_ocean_droplet.data.droplet | default(digital_ocean_droplet.data) }}" - block: - name: Create a Floating IP digital_ocean_floating_ip: state: present oauth_token: "{{ algo_do_token }}" droplet_id: "{{ droplet.id }}" register: digital_ocean_floating_ip - name: Set the static ip as a fact set_fact: cloud_alternative_ingress_ip: "{{ digital_ocean_floating_ip.data.floating_ip.ip }}" when: alternative_ingress_ip - set_fact: cloud_instance_ip: "{{ (droplet.networks.v4 | selectattr('type', '==', 'public')).0.ip_address }}" ansible_ssh_user: algo ansible_ssh_port: "{{ ssh_port }}" cloudinit: true
YAML
algo/roles/cloud-digitalocean/tasks/prompts.yml
--- - pause: prompt: | Enter your API token. The token must have read and write permissions (https://cloud.digitalocean.com/settings/api/tokens): echo: false register: _do_token when: - do_token is undefined - lookup('env','DO_API_TOKEN')|length <= 0 - name: Set the token as a fact set_fact: algo_do_token: "{{ do_token | default(_do_token.user_input|default(None)) | default(lookup('env','DO_API_TOKEN'), true) }}" - name: Get regions uri: url: https://api.digitalocean.com/v2/regions method: GET status_code: 200 headers: Content-Type: application/json Authorization: Bearer {{ algo_do_token }} register: _do_regions - name: Set facts about the regions set_fact: do_regions: "{{ _do_regions.json.regions | selectattr('available', 'true') | sort(attribute='slug') }}" - name: Set default region set_fact: default_region: >- {% for r in do_regions %} {%- if r['slug'] == "nyc3" %}{{ loop.index }}{% endif %} {%- endfor %} - pause: prompt: | What region should the server be located in? {% for r in do_regions %} {{ loop.index }}. {{ r['slug'] }} {{ r['name'] }} {% endfor %} Enter the number of your desired region [{{ default_region }}] register: _algo_region when: region is undefined - name: Set additional facts set_fact: algo_do_region: >- {% if region is defined %}{{ region }} {%- elif _algo_region.user_input %}{{ do_regions[_algo_region.user_input | int -1 ]['slug'] }} {%- else %}{{ do_regions[default_region | int - 1]['slug'] }}{% endif %}
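For context, the `selectattr('available', 'true')` and `sort(attribute='slug')` filters above assume region objects shaped roughly like the following excerpt of the DigitalOcean `/v2/regions` response (field names are taken from the filters in this file; the values are illustrative):

```yaml
# hypothetical excerpt of _do_regions.json.regions
- slug: ams3
  name: Amsterdam 3
  available: true
- slug: nyc3
  name: New York 3
  available: true
```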
YAML
algo/roles/cloud-ec2/defaults/main.yml
--- encrypted: "{{ cloud_providers.ec2.encrypted }}" ec2_vpc_nets: cidr_block: 172.16.0.0/16 subnet_cidr: 172.16.254.0/23 existing_eip: ""
YAML
algo/roles/cloud-ec2/files/stack.yaml
--- AWSTemplateFormatVersion: '2010-09-09' Description: 'Algo VPN stack' Parameters: InstanceTypeParameter: Type: String Default: t2.micro PublicSSHKeyParameter: Type: String ImageIdParameter: Type: String WireGuardPort: Type: String UseThisElasticIP: Type: String Default: '' EbsEncrypted: Type: String UserData: Type: String SshPort: Type: String InstanceMarketTypeParameter: Description: Launch a Spot instance or standard on-demand instance Type: String Default: on-demand AllowedValues: - spot - on-demand Conditions: AllocateNewEIP: !Equals [!Ref UseThisElasticIP, ''] AssociateExistingEIP: !Not [!Equals [!Ref UseThisElasticIP, '']] InstanceIsSpot: !Equals [spot, !Ref InstanceMarketTypeParameter] Resources: VPC: Type: AWS::EC2::VPC Properties: CidrBlock: 172.16.0.0/16 EnableDnsSupport: true EnableDnsHostnames: true InstanceTenancy: default Tags: - Key: Name Value: !Ref AWS::StackName VPCIPv6: Type: AWS::EC2::VPCCidrBlock Properties: AmazonProvidedIpv6CidrBlock: true VpcId: !Ref VPC InternetGateway: Type: AWS::EC2::InternetGateway Properties: Tags: - Key: Name Value: !Ref AWS::StackName Subnet: Type: AWS::EC2::Subnet Properties: CidrBlock: 172.16.254.0/23 MapPublicIpOnLaunch: false VpcId: !Ref VPC Tags: - Key: Name Value: !Ref AWS::StackName VPCGatewayAttachment: Type: AWS::EC2::VPCGatewayAttachment Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway RouteTable: Type: AWS::EC2::RouteTable Properties: VpcId: !Ref VPC Tags: - Key: Name Value: !Ref AWS::StackName Route: Type: AWS::EC2::Route DependsOn: - InternetGateway - RouteTable - VPCGatewayAttachment Properties: RouteTableId: !Ref RouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway RouteIPv6: Type: AWS::EC2::Route DependsOn: - InternetGateway - RouteTable - VPCGatewayAttachment Properties: RouteTableId: !Ref RouteTable DestinationIpv6CidrBlock: "::/0" GatewayId: !Ref InternetGateway SubnetIPv6: Type: AWS::EC2::SubnetCidrBlock DependsOn: - RouteIPv6 - VPC - VPCIPv6 Properties: Ipv6CidrBlock: "Fn::Join": - "" - - !Select [0, !Split [ "::", !Select [0, !GetAtt VPC.Ipv6CidrBlocks] ]] - "::dead:beef/64" SubnetId: !Ref Subnet RouteSubnet: Type: "AWS::EC2::SubnetRouteTableAssociation" DependsOn: - RouteTable - Subnet - Route Properties: RouteTableId: !Ref RouteTable SubnetId: !Ref Subnet InstanceSecurityGroup: Type: AWS::EC2::SecurityGroup DependsOn: - Subnet Properties: VpcId: !Ref VPC GroupDescription: Enable SSH and IPsec SecurityGroupIngress: - IpProtocol: tcp FromPort: !Ref SshPort ToPort: !Ref SshPort CidrIp: 0.0.0.0/0 - IpProtocol: udp FromPort: '500' ToPort: '500' CidrIp: 0.0.0.0/0 - IpProtocol: udp FromPort: '4500' ToPort: '4500' CidrIp: 0.0.0.0/0 - IpProtocol: udp FromPort: !Ref WireGuardPort ToPort: !Ref WireGuardPort CidrIp: 0.0.0.0/0 Tags: - Key: Name Value: !Ref AWS::StackName EC2LaunchTemplate: Type: AWS::EC2::LaunchTemplate Condition: InstanceIsSpot # Only create this template if requested Properties: # a spot instance_market_type in config.cfg LaunchTemplateName: !Ref AWS::StackName LaunchTemplateData: InstanceMarketOptions: MarketType: spot EC2Instance: Type: AWS::EC2::Instance DependsOn: - SubnetIPv6 - Subnet - InstanceSecurityGroup Properties: InstanceType: Ref: InstanceTypeParameter BlockDeviceMappings: - DeviceName: /dev/sda1 Ebs: DeleteOnTermination: true VolumeSize: 8 Encrypted: !Ref EbsEncrypted InstanceInitiatedShutdownBehavior: terminate SecurityGroupIds: - Ref: InstanceSecurityGroup ImageId: Ref: ImageIdParameter SubnetId: !Ref Subnet Ipv6AddressCount: 1 UserData: !Ref UserData 
LaunchTemplate: !If # Only if Conditions created "EC2LaunchTemplate" - InstanceIsSpot - LaunchTemplateId: !Ref EC2LaunchTemplate Version: 1 - !Ref AWS::NoValue # Else this LaunchTemplate not set Tags: - Key: Name Value: !Ref AWS::StackName ElasticIP: Type: AWS::EC2::EIP Condition: AllocateNewEIP Properties: Domain: vpc InstanceId: !Ref EC2Instance DependsOn: - EC2Instance - VPCGatewayAttachment ElasticIPAssociation: Type: AWS::EC2::EIPAssociation Condition: AssociateExistingEIP Properties: AllocationId: !Ref UseThisElasticIP InstanceId: !Ref EC2Instance Outputs: ElasticIP: Value: !GetAtt [EC2Instance, PublicIp]
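The `SubnetIPv6` resource above derives the subnet's /64 from the Amazon-provided VPC block with string surgery. A worked trace, assuming a hypothetical VPC assignment of `2600:1f16:aa00:1200::/56`:

```yaml
# !GetAtt VPC.Ipv6CidrBlocks             -> ["2600:1f16:aa00:1200::/56"]
# !Select [0, !Split ["::", ...]]        -> "2600:1f16:aa00:1200"
# !Join ["", [..., "::dead:beef/64"]]    -> "2600:1f16:aa00:1200::dead:beef/64"
Ipv6CidrBlock: "2600:1f16:aa00:1200::dead:beef/64"
```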
YAML
algo/roles/cloud-ec2/tasks/cloudformation.yml
--- - name: Deploy the template cloudformation: aws_access_key: "{{ access_key }}" aws_secret_key: "{{ secret_key }}" stack_name: "{{ stack_name }}" state: present region: "{{ algo_region }}" template: roles/cloud-ec2/files/stack.yaml template_parameters: InstanceTypeParameter: "{{ cloud_providers.ec2.size }}" PublicSSHKeyParameter: "{{ lookup('file', SSH_keys.public) }}" ImageIdParameter: "{{ ami_image }}" WireGuardPort: "{{ wireguard_port }}" UseThisElasticIP: "{{ existing_eip }}" EbsEncrypted: "{{ encrypted }}" UserData: "{{ lookup('template', 'files/cloud-init/base.yml') | b64encode }}" SshPort: "{{ ssh_port }}" InstanceMarketTypeParameter: "{{ cloud_providers.ec2.instance_market_type }}" tags: Environment: Algo register: stack
YAML
algo/roles/cloud-ec2/tasks/main.yml
--- - name: Build python virtual environment import_tasks: venv.yml - name: Include prompts import_tasks: prompts.yml - name: Locate official AMI for region ec2_ami_info: aws_access_key: "{{ access_key }}" aws_secret_key: "{{ secret_key }}" owners: "{{ cloud_providers.ec2.image.owner }}" region: "{{ algo_region }}" filters: architecture: "{{ cloud_providers.ec2.image.arch }}" name: ubuntu/images/hvm-ssd/{{ cloud_providers.ec2.image.name }}-*64-server-* register: ami_search - name: Set the ami id as a fact set_fact: ami_image: "{{ (ami_search.images | sort(attribute='creation_date') | last)['image_id'] }}" - name: Deploy the stack import_tasks: cloudformation.yml - set_fact: cloud_instance_ip: "{{ stack.stack_outputs.ElasticIP }}" ansible_ssh_user: algo ansible_ssh_port: "{{ ssh_port }}" cloudinit: true
YAML
algo/roles/cloud-ec2/tasks/prompts.yml
--- - pause: prompt: | Enter your AWS Access Key ID (http://docs.aws.amazon.com/general/latest/gr/managing-aws-access-keys.html) Note: Make sure to use an IAM user with an acceptable policy attached (see https://github.com/trailofbits/algo/blob/master/docs/deploy-from-ansible.md) echo: false register: _aws_access_key when: - aws_access_key is undefined - lookup('env','AWS_ACCESS_KEY_ID')|length <= 0 - pause: prompt: | Enter your AWS Secret Access Key (http://docs.aws.amazon.com/general/latest/gr/managing-aws-access-keys.html) echo: false register: _aws_secret_key when: - aws_secret_key is undefined - lookup('env','AWS_SECRET_ACCESS_KEY')|length <= 0 - set_fact: access_key: "{{ aws_access_key | default(_aws_access_key.user_input|default(None)) | default(lookup('env','AWS_ACCESS_KEY_ID'), true) }}" secret_key: "{{ aws_secret_key | default(_aws_secret_key.user_input|default(None)) | default(lookup('env','AWS_SECRET_ACCESS_KEY'), true) }}" - block: - name: Get regions aws_region_info: aws_access_key: "{{ access_key }}" aws_secret_key: "{{ secret_key }}" region: us-east-1 register: _aws_regions - name: Set facts about the regions set_fact: aws_regions: "{{ _aws_regions.regions | sort(attribute='region_name') }}" - name: Set the default region set_fact: default_region: >- {% for r in aws_regions %} {%- if r['region_name'] == "us-east-1" %}{{ loop.index }}{% endif %} {%- endfor %} - pause: prompt: | What region should the server be located in? (https://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region) {% for r in aws_regions %} {{ loop.index }}. {{ r['region_name'] }} {% endfor %} Enter the number of your desired region [{{ default_region }}] register: _algo_region when: region is undefined - name: Set algo_region and stack_name facts set_fact: algo_region: >- {% if region is defined %}{{ region }} {%- elif _algo_region.user_input %}{{ aws_regions[_algo_region.user_input | int -1 ]['region_name'] }} {%- else %}{{ aws_regions[default_region | int - 1]['region_name'] }}{% endif %} stack_name: "{{ algo_server_name | replace('.', '-') }}" - block: - name: Get existing available Elastic IPs ec2_eip_info: aws_access_key: "{{ access_key }}" aws_secret_key: "{{ secret_key }}" region: "{{ algo_region }}" register: raw_eip_addresses - set_fact: available_eip_addresses: "{{ raw_eip_addresses.addresses | selectattr('association_id', 'undefined') | list }}" - pause: prompt: >- What Elastic IP would you like to use? {% for eip in available_eip_addresses %} {{ loop.index }}. {{ eip['public_ip'] }} {% endfor %} Enter the number of your desired Elastic IP register: _use_existing_eip - set_fact: existing_eip: "{{ available_eip_addresses[_use_existing_eip.user_input | int -1 ]['allocation_id'] }}" when: cloud_providers.ec2.use_existing_eip
YAML
algo/roles/cloud-ec2/tasks/venv.yml
--- - name: Install requirements pip: name: - boto>=2.5 - boto3 state: latest virtualenv_python: python3
YAML
algo/roles/cloud-gce/tasks/main.yml
--- - name: Build python virtual environment import_tasks: venv.yml - name: Include prompts import_tasks: prompts.yml - name: Network configured gcp_compute_network: auth_kind: serviceaccount service_account_file: "{{ credentials_file_path }}" project: "{{ project_id }}" name: algovpn auto_create_subnetworks: true routing_config: routing_mode: REGIONAL register: gcp_compute_network - name: Firewall configured gcp_compute_firewall: auth_kind: serviceaccount service_account_file: "{{ credentials_file_path }}" project: "{{ project_id }}" name: algovpn network: "{{ gcp_compute_network }}" direction: INGRESS allowed: - ip_protocol: udp ports: - "500" - "4500" - "{{ wireguard_port|string }}" - ip_protocol: tcp ports: - "{{ ssh_port }}" - ip_protocol: icmp - block: - name: External IP allocated gcp_compute_address: auth_kind: serviceaccount service_account_file: "{{ credentials_file_path }}" project: "{{ project_id }}" name: "{{ algo_server_name }}" region: "{{ algo_region }}" register: gcp_compute_address - name: Set External IP as a fact set_fact: external_ip: "{{ gcp_compute_address.address }}" when: cloud_providers.gce.external_static_ip - name: Instance created gcp_compute_instance: auth_kind: serviceaccount service_account_file: "{{ credentials_file_path }}" project: "{{ project_id }}" name: "{{ algo_server_name }}" zone: "{{ algo_zone }}" machine_type: "{{ cloud_providers.gce.size }}" disks: - auto_delete: true boot: true initialize_params: source_image: projects/ubuntu-os-cloud/global/images/family/{{ cloud_providers.gce.image }} metadata: ssh-keys: algo:{{ ssh_public_key_lookup }} user-data: "{{ lookup('template', 'files/cloud-init/base.yml') }}" network_interfaces: - network: "{{ gcp_compute_network }}" access_configs: - name: "{{ algo_server_name }}" nat_ip: "{{ gcp_compute_address|default(None) }}" type: ONE_TO_ONE_NAT tags: items: - environment-algo register: gcp_compute_instance - set_fact: cloud_instance_ip: "{{ gcp_compute_instance.networkInterfaces[0].accessConfigs[0].natIP }}" ansible_ssh_user: algo ansible_ssh_port: "{{ ssh_port }}" cloudinit: true
YAML
algo/roles/cloud-gce/tasks/prompts.yml
--- - pause: prompt: | Enter the local path to your credentials JSON file (https://support.google.com/cloud/answer/6158849?hl=en&ref_topic=6262490#serviceaccounts) register: _gce_credentials_file when: - gce_credentials_file is undefined - lookup('env','GCE_CREDENTIALS_FILE_PATH')|length <= 0 - set_fact: credentials_file_path: "{{ gce_credentials_file | default(_gce_credentials_file.user_input|default(None)) | default(lookup('env','GCE_CREDENTIALS_FILE_PATH'),\ \ true) }}" ssh_public_key_lookup: "{{ lookup('file', '{{ SSH_keys.public }}') }}" - set_fact: credentials_file_lookup: "{{ lookup('file', '{{ credentials_file_path }}') }}" - set_fact: service_account_email: "{{ credentials_file_lookup.client_email | default(lookup('env','GCE_EMAIL')) }}" project_id: "{{ credentials_file_lookup.project_id | default(lookup('env','GCE_PROJECT')) }}" - block: - name: Get regions gcp_compute_location_info: auth_kind: serviceaccount service_account_file: "{{ credentials_file_path }}" project: "{{ project_id }}" scope: regions filters: status=UP register: gcp_compute_regions_info - name: Set facts about the regions set_fact: gce_regions: >- [{%- for region in gcp_compute_regions_info.resources | sort(attribute='name') -%} '{{ region.name }}'{% if not loop.last %},{% endif %} {%- endfor -%}] - name: Set facts about the default region set_fact: default_region: >- {% for region in gce_regions %} {%- if region == "us-east1" %}{{ loop.index }}{% endif %} {%- endfor %} - pause: prompt: | What region should the server be located in? (https://cloud.google.com/compute/docs/regions-zones/#locations) {% for r in gce_regions %} {{ loop.index }}. {{ r }} {% endfor %} Enter the number of your desired region [{{ default_region }}] register: _gce_region when: region is undefined - name: Set region as a fact set_fact: algo_region: >- {% if region is defined %}{{ region }} {%- elif _gce_region.user_input %}{{ gce_regions[_gce_region.user_input | int -1 ] }} {%- else %}{{ gce_regions[default_region | int - 1] }}{% endif %} - name: Get zones gcp_compute_location_info: auth_kind: serviceaccount service_account_file: "{{ credentials_file_path }}" project: "{{ project_id }}" scope: zones filters: - name={{ algo_region }}-* - status=UP register: gcp_compute_zone_info - name: Set random available zone as a fact set_fact: algo_zone: "{{ (gcp_compute_zone_info.resources | random(seed=algo_server_name + algo_region + project_id) ).name }}"
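Note the seeded `random` filter in the last task: seeding with the server name, region, and project ID makes the zone choice stable across replays of the same deployment while still spreading distinct deployments over zones. A self-contained demonstration (the zone list and seed values are illustrative):

```yaml
- hosts: localhost
  gather_facts: false
  tasks:
    - debug:
        msg: "{{ ['us-east1-b', 'us-east1-c', 'us-east1-d'] | random(seed='algo' + 'us-east1' + 'my-project') }}"
      # prints the same zone on every run for this seed
```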
YAML
algo/roles/cloud-gce/tasks/venv.yml
---
- name: Install requirements
  pip:
    name:
      - requests>=2.18.4
      - google-auth>=1.3.0
    state: latest
    virtualenv_python: python3
YAML
algo/roles/cloud-hetzner/tasks/main.yml
---
- name: Build python virtual environment
  import_tasks: venv.yml

- name: Include prompts
  import_tasks: prompts.yml

- name: Create an ssh key
  hcloud_ssh_key:
    name: algo-{{ 999999 | random(seed=lookup('file', SSH_keys.public)) }}
    public_key: "{{ lookup('file', SSH_keys.public) }}"
    state: present
    api_token: "{{ algo_hcloud_token }}"
  register: hcloud_ssh_key

- name: Create a server...
  hcloud_server:
    name: "{{ algo_server_name }}"
    location: "{{ algo_hcloud_region }}"
    server_type: "{{ cloud_providers.hetzner.server_type }}"
    image: "{{ cloud_providers.hetzner.image }}"
    state: present
    api_token: "{{ algo_hcloud_token }}"
    ssh_keys: "{{ hcloud_ssh_key.hcloud_ssh_key.name }}"
    user_data: "{{ lookup('template', 'files/cloud-init/base.yml') }}"
    labels:
      Environment: algo
  register: hcloud_server

- set_fact:
    cloud_instance_ip: "{{ hcloud_server.hcloud_server.ipv4_address }}"
    ansible_ssh_user: algo
    ansible_ssh_port: "{{ ssh_port }}"
    cloudinit: true
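A minimal non-interactive Hetzner run might look like this; `HCLOUD_TOKEN` is the environment variable the prompts task below checks, and the region value is illustrative:

```shell
# Hypothetical one-shot Hetzner Cloud run; nbg1 is just an example location.
export HCLOUD_TOKEN=your-api-token
ansible-playbook main.yml -e "provider=hetzner server_name=algo region=nbg1"
```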
YAML
algo/roles/cloud-hetzner/tasks/prompts.yml
---
- pause:
    prompt: |
      Enter your API token (https://trailofbits.github.io/algo/cloud-hetzner.html#api-token):
    echo: false
  register: _hcloud_token
  when:
    - hcloud_token is undefined
    - lookup('env','HCLOUD_TOKEN')|length <= 0

- name: Set the token as a fact
  set_fact:
    algo_hcloud_token: "{{ hcloud_token | default(_hcloud_token.user_input|default(None)) | default(lookup('env','HCLOUD_TOKEN'), true) }}"

- name: Get regions
  hcloud_datacenter_facts:
    api_token: "{{ algo_hcloud_token }}"
  register: _hcloud_regions

- name: Set facts about the regions
  set_fact:
    hcloud_regions: "{{ hcloud_datacenter_facts | sort(attribute='location') }}"

- name: Set default region
  set_fact:
    default_region: >-
      {% for r in hcloud_regions %}
      {%- if r['location'] == "nbg1" %}{{ loop.index }}{% endif %}
      {%- endfor %}

- pause:
    prompt: |
      What region should the server be located in?
      {% for r in hcloud_regions %}
      {{ loop.index }}. {{ r['location'] }}  {{ r['description'] }}
      {% endfor %}

      Enter the number of your desired region
      [{{ default_region }}]
  register: _algo_region
  when: region is undefined

- name: Set additional facts
  set_fact:
    algo_hcloud_region: >-
      {% if region is defined %}{{ region }}
      {%- elif _algo_region.user_input %}{{ hcloud_regions[_algo_region.user_input | int -1 ]['location'] }}
      {%- else %}{{ hcloud_regions[default_region | int - 1]['location'] }}{% endif %}
YAML
algo/roles/cloud-hetzner/tasks/venv.yml
---
- name: Install requirements
  pip:
    name:
      - hcloud
    state: latest
    virtualenv_python: python3
YAML
algo/roles/cloud-lightsail/files/stack.yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Algo VPN stack (LightSail)'
Parameters:
  InstanceTypeParameter:
    Type: String
    Default: 'nano_2_0'
  ImageIdParameter:
    Type: String
    Default: 'ubuntu_20_04'
  WireGuardPort:
    Type: String
    Default: '51820'
  SshPort:
    Type: String
    Default: '4160'
  UserData:
    Type: String
    Default: 'true'
Resources:
  Instance:
    Type: AWS::Lightsail::Instance
    Properties:
      BlueprintId:
        Ref: ImageIdParameter
      BundleId:
        Ref: InstanceTypeParameter
      InstanceName: !Ref AWS::StackName
      Networking:
        Ports:
          - AccessDirection: inbound
            Cidrs: ['0.0.0.0/0']
            Ipv6Cidrs: ['::/0']
            CommonName: SSH
            FromPort: !Ref SshPort
            ToPort: !Ref SshPort
            Protocol: tcp
          - AccessDirection: inbound
            Cidrs: ['0.0.0.0/0']
            Ipv6Cidrs: ['::/0']
            CommonName: WireGuard
            FromPort: !Ref WireGuardPort
            ToPort: !Ref WireGuardPort
            Protocol: udp
          - AccessDirection: inbound
            Cidrs: ['0.0.0.0/0']
            Ipv6Cidrs: ['::/0']
            CommonName: IPSec-4500
            FromPort: 4500
            ToPort: 4500
            Protocol: udp
          - AccessDirection: inbound
            Cidrs: ['0.0.0.0/0']
            Ipv6Cidrs: ['::/0']
            CommonName: IPSec-500
            FromPort: 500
            ToPort: 500
            Protocol: udp
      Tags:
        - Key: Name
          Value: !Ref AWS::StackName
      UserData: !Ref UserData
  StaticIP:
    Type: AWS::Lightsail::StaticIp
    Properties:
      AttachedTo: !Ref Instance
      StaticIpName: !Join [ "-", [ !Ref AWS::StackName, "ip" ] ]
    DependsOn:
      - Instance
Outputs:
  IpAddress:
    Value: !GetAtt [StaticIP, IpAddress]
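Before the role deploys this template, it can be sanity-checked locally, assuming the AWS CLI is installed and configured:

```shell
# Validate the Lightsail CloudFormation template without creating a stack.
aws cloudformation validate-template \
  --template-body file://roles/cloud-lightsail/files/stack.yaml
```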
YAML
algo/roles/cloud-lightsail/tasks/cloudformation.yml
---
- name: Deploy the template
  cloudformation:
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"
    stack_name: "{{ stack_name }}"
    state: present
    region: "{{ algo_region }}"
    template: roles/cloud-lightsail/files/stack.yaml
    template_parameters:
      InstanceTypeParameter: "{{ cloud_providers.lightsail.size }}"
      ImageIdParameter: "{{ cloud_providers.lightsail.image }}"
      WireGuardPort: "{{ wireguard_port }}"
      SshPort: "{{ ssh_port }}"
      UserData: "{{ lookup('template', 'files/cloud-init/base.sh') }}"
    tags:
      Environment: Algo
      Lightsail: true
  register: stack
YAML
algo/roles/cloud-lightsail/tasks/main.yml
---
- name: Build python virtual environment
  import_tasks: venv.yml

- name: Include prompts
  import_tasks: prompts.yml

- name: Deploy the stack
  import_tasks: cloudformation.yml

- set_fact:
    cloud_instance_ip: "{{ stack.stack_outputs.IpAddress }}"
    ansible_ssh_user: algo
    ansible_ssh_port: "{{ ssh_port }}"
    cloudinit: true
YAML
algo/roles/cloud-lightsail/tasks/prompts.yml
---
- pause:
    prompt: |
      Enter your aws_access_key (http://docs.aws.amazon.com/general/latest/gr/managing-aws-access-keys.html)
      Note: Make sure to use an IAM user with an acceptable policy attached (see https://github.com/trailofbits/algo/blob/master/docs/deploy-from-ansible.md)
    echo: false
  register: _aws_access_key
  when:
    - aws_access_key is undefined
    - lookup('env','AWS_ACCESS_KEY_ID')|length <= 0

- pause:
    prompt: |
      Enter your aws_secret_key (http://docs.aws.amazon.com/general/latest/gr/managing-aws-access-keys.html)
    echo: false
  register: _aws_secret_key
  when:
    - aws_secret_key is undefined
    - lookup('env','AWS_SECRET_ACCESS_KEY')|length <= 0

- set_fact:
    access_key: "{{ aws_access_key | default(_aws_access_key.user_input|default(None)) | default(lookup('env','AWS_ACCESS_KEY_ID'), true) }}"
    secret_key: "{{ aws_secret_key | default(_aws_secret_key.user_input|default(None)) | default(lookup('env','AWS_SECRET_ACCESS_KEY'), true) }}"

- block:
    - name: Get regions
      lightsail_region_facts:
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        region: us-east-1
      register: _lightsail_regions

    - name: Set facts about the regions
      set_fact:
        lightsail_regions: "{{ _lightsail_regions.data.regions | sort(attribute='name') }}"

    - name: Set the default region
      set_fact:
        default_region: >-
          {% for r in lightsail_regions %}
          {%- if r['name'] == "us-east-1" %}{{ loop.index }}{% endif %}
          {%- endfor %}

    - pause:
        prompt: |
          What region should the server be located in?
          (https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/)
          {% for r in lightsail_regions %}
          {{ (loop.index|string + '.').ljust(3) }} {{ r['name'].ljust(20) }} {{ r['displayName'] }}
          {% endfor %}

          Enter the number of your desired region
          [{{ default_region }}]
      register: _algo_region
      when: region is undefined

    - set_fact:
        stack_name: "{{ algo_server_name | replace('.', '-') }}"
        algo_region: >-
          {% if region is defined %}{{ region }}
          {%- elif _algo_region.user_input %}{{ lightsail_regions[_algo_region.user_input | int -1 ]['name'] }}
          {%- else %}{{ lightsail_regions[default_region | int - 1]['name'] }}{% endif %}
YAML
algo/roles/cloud-lightsail/tasks/venv.yml
---
- name: Install requirements
  pip:
    name:
      - boto>=2.5
      - boto3
    state: latest
    virtualenv_python: python3
YAML
algo/roles/cloud-linode/tasks/main.yml
---
- name: Build python virtual environment
  import_tasks: venv.yml

- name: Include prompts
  import_tasks: prompts.yml

- name: Set facts
  set_fact:
    stackscript: |
      {{ lookup('template', 'files/cloud-init/base.sh') }}
      mkdir -p /var/lib/cloud/data/ || true
      touch /var/lib/cloud/data/result.json

- name: Create a stackscript
  linode_stackscript_v4:
    access_token: "{{ algo_linode_token }}"
    label: "{{ algo_server_name }}"
    state: present
    description: Environment:Algo
    images:
      - "{{ cloud_providers.linode.image }}"
    script: |
      {{ stackscript }}
  register: _linode_stackscript

- name: Update the stackscript
  uri:
    url: https://api.linode.com/v4/linode/stackscripts/{{ _linode_stackscript.stackscript.id }}
    method: PUT
    body_format: json
    body:
      script: |
        {{ stackscript }}
    headers:
      Content-Type: application/json
      Authorization: Bearer {{ algo_linode_token }}
  when: (_linode_stackscript.stackscript.script | hash('md5')) != (stackscript | hash('md5'))

- name: Creating an instance...
  linode_v4:
    access_token: "{{ algo_linode_token }}"
    label: "{{ algo_server_name }}"
    state: present
    region: "{{ algo_linode_region }}"
    image: "{{ cloud_providers.linode.image }}"
    type: "{{ cloud_providers.linode.type }}"
    authorized_keys: "{{ public_key }}"
    stackscript_id: "{{ _linode_stackscript.stackscript.id }}"
  register: _linode

- set_fact:
    cloud_instance_ip: "{{ _linode.instance.ipv4[0] }}"
    ansible_ssh_user: algo
    ansible_ssh_port: "{{ ssh_port }}"
    cloudinit: true
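A sketch of a non-interactive Linode run; `LINODE_API_TOKEN` is the variable the prompts task below reads, and the region id is illustrative:

```shell
# Hypothetical one-shot Linode run; us-east is just an example region id.
export LINODE_API_TOKEN=your-access-token
ansible-playbook main.yml -e "provider=linode server_name=algo region=us-east"
```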
YAML
algo/roles/cloud-linode/tasks/prompts.yml
---
- pause:
    prompt: |
      Enter your ACCESS token. (https://developers.linode.com/api/v4/#access-and-authentication):
    echo: false
  register: _linode_token
  when:
    - linode_token is undefined
    - lookup('env','LINODE_API_TOKEN')|length <= 0

- name: Set the token as a fact
  set_fact:
    algo_linode_token: "{{ linode_token | default(_linode_token.user_input|default(None)) | default(lookup('env','LINODE_API_TOKEN'), true) }}"

- name: Get regions
  uri:
    url: https://api.linode.com/v4/regions
    method: GET
    status_code: 200
  register: _linode_regions

- name: Set facts about the regions
  set_fact:
    linode_regions: "{{ _linode_regions.json.data | sort(attribute='id') }}"

- name: Set default region
  set_fact:
    default_region: >-
      {% for r in linode_regions %}
      {%- if r['id'] == "us-east" %}{{ loop.index }}{% endif %}
      {%- endfor %}

- pause:
    prompt: |
      What region should the server be located in?
      {% for r in linode_regions %}
      {{ loop.index }}. {{ r['id'] }}
      {% endfor %}

      Enter the number of your desired region
      [{{ default_region }}]
  register: _algo_region
  when: region is undefined

- name: Set additional facts
  set_fact:
    algo_linode_region: >-
      {% if region is defined %}{{ region }}
      {%- elif _algo_region.user_input %}{{ linode_regions[_algo_region.user_input | int -1 ]['id'] }}
      {%- else %}{{ linode_regions[default_region | int - 1]['id'] }}{% endif %}
    public_key: "{{ lookup('file', '{{ SSH_keys.public }}') }}"
YAML
algo/roles/cloud-linode/tasks/venv.yml
---
- name: Install requirements
  pip:
    name:
      - linode_api4
    state: latest
    virtualenv_python: python3
YAML
algo/roles/cloud-openstack/tasks/main.yml
---
- fail:
    msg: >-
      OpenStack credentials are not set. Download the RC file from the OpenStack dashboard
      (Compute -> API Access) and source it in your shell (eg: source /tmp/dhc-openrc.sh)
  when: lookup('env', 'OS_AUTH_URL')|length <= 0

- name: Build python virtual environment
  import_tasks: venv.yml

- name: Security group created
  openstack.cloud.security_group:
    state: "{{ state|default('present') }}"
    name: "{{ algo_server_name }}-security_group"
    description: AlgoVPN security group
  register: os_security_group

- name: Security rules created
  openstack.cloud.security_group_rule:
    state: "{{ state|default('present') }}"
    security_group: "{{ os_security_group.id }}"
    protocol: "{{ item.proto }}"
    port_range_min: "{{ item.port_min }}"
    port_range_max: "{{ item.port_max }}"
    remote_ip_prefix: "{{ item.range }}"
  with_items:
    - { proto: tcp, port_min: "{{ ssh_port }}", port_max: "{{ ssh_port }}", range: 0.0.0.0/0 }
    - { proto: icmp, port_min: -1, port_max: -1, range: 0.0.0.0/0 }
    - { proto: udp, port_min: 4500, port_max: 4500, range: 0.0.0.0/0 }
    - { proto: udp, port_min: 500, port_max: 500, range: 0.0.0.0/0 }
    - { proto: udp, port_min: "{{ wireguard_port }}", port_max: "{{ wireguard_port }}", range: 0.0.0.0/0 }

- name: Gather facts about flavors
  openstack.cloud.compute_flavor_info:
    ram: "{{ cloud_providers.openstack.flavor_ram }}"
  register: os_flavor

- name: Gather facts about images
  openstack.cloud.image_info:
  register: os_image

- name: Set image as a fact
  set_fact:
    image_id: "{{ item.id }}"
  loop: "{{ os_image.openstack_image }}"
  when:
    - item.name == cloud_providers.openstack.image
    - item.status == "active"

- name: Gather facts about public networks
  openstack.cloud.networks_info:
  register: os_network

- name: Set the network as a fact
  set_fact:
    public_network_id: "{{ item.id }}"
  when:
    - item['router:external']|default(omit)
    - item['admin_state_up']|default(omit)
    - item['status'] == 'ACTIVE'
  with_items: "{{ os_network.openstack_networks }}"

- name: Set facts
  set_fact:
    flavor_id: "{{ (os_flavor.openstack_flavors | sort(attribute='ram'))[0]['id'] }}"
    security_group_name: "{{ os_security_group['secgroup']['name'] }}"

- name: Server created
  openstack.cloud.server:
    state: "{{ state|default('present') }}"
    name: "{{ algo_server_name }}"
    image: "{{ image_id }}"
    flavor: "{{ flavor_id }}"
    security_groups: "{{ security_group_name }}"
    userdata: "{{ lookup('template', 'files/cloud-init/base.yml') }}"
    nics:
      - net-id: "{{ public_network_id }}"
  register: os_server

- set_fact:
    cloud_instance_ip: "{{ os_server['openstack']['public_v4'] }}"
    ansible_ssh_user: algo
    ansible_ssh_port: "{{ ssh_port }}"
    cloudinit: true
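Since this role authenticates purely via the `OS_*` environment variables, a non-interactive run is just a matter of sourcing the RC file first; the RC file path below matches the example in the failure message:

```shell
# Source the OpenStack credentials, then deploy.
. /tmp/dhc-openrc.sh
ansible-playbook main.yml -e "provider=openstack server_name=algo"
```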
YAML
algo/roles/cloud-openstack/tasks/venv.yml
---
- name: Install requirements
  pip:
    name: shade
    state: latest
    virtualenv_python: python3
YAML
algo/roles/cloud-scaleway/tasks/main.yml
---
- name: Include prompts
  import_tasks: prompts.yml

- block:
    - name: Gather Scaleway organizations facts
      scaleway_organization_info:
      register: scaleway_org

    - name: Get images
      scaleway_image_info:
        region: "{{ algo_region }}"
      register: scaleway_image

    - name: Set cloud specific facts
      set_fact:
        organization_id: "{{ scaleway_org.scaleway_organization_info[0]['id'] }}"
        images: >-
          [{% for i in scaleway_image.scaleway_image_info -%}
          {% if i.name == cloud_providers.scaleway.image and i.arch == cloud_providers.scaleway.arch -%}
          '{{ i.id }}'{% if not loop.last %},{% endif %}
          {%- endif -%}
          {%- endfor -%}]

    - name: Create a server
      scaleway_compute:
        name: "{{ algo_server_name }}"
        enable_ipv6: true
        public_ip: dynamic
        boot_type: local
        state: present
        image: "{{ images[0] }}"
        organization: "{{ organization_id }}"
        region: "{{ algo_region }}"
        commercial_type: "{{ cloud_providers.scaleway.size }}"
        wait: true
        tags:
          - Environment:Algo
          - AUTHORIZED_KEY={{ lookup('file', SSH_keys.public)|regex_replace(' ', '_') }}
      register: scaleway_compute

    - name: Patch the cloud-init
      uri:
        url: https://cp-{{ algo_region }}.scaleway.com/servers/{{ scaleway_compute.msg.id }}/user_data/cloud-init
        method: PATCH
        body: "{{ lookup('template', 'files/cloud-init/base.yml') }}"
        status_code: 204
        headers:
          Content-Type: text/plain
          X-Auth-Token: "{{ algo_scaleway_token }}"

    - name: Start the server
      scaleway_compute:
        name: "{{ algo_server_name }}"
        enable_ipv6: true
        public_ip: dynamic
        boot_type: local
        state: running
        image: "{{ images[0] }}"
        organization: "{{ organization_id }}"
        region: "{{ algo_region }}"
        commercial_type: "{{ cloud_providers.scaleway.size }}"
        wait: true
        tags:
          - Environment:Algo
          - AUTHORIZED_KEY={{ lookup('file', SSH_keys.public)|regex_replace(' ', '_') }}
      register: algo_instance
      until: algo_instance.msg.public_ip
      retries: 3
      delay: 3
  environment:
    SCW_TOKEN: "{{ algo_scaleway_token }}"

- set_fact:
    cloud_instance_ip: "{{ algo_instance.msg.public_ip.address }}"
    ansible_ssh_user: algo
    ansible_ssh_port: "{{ ssh_port }}"
    cloudinit: true
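A sketch of a non-interactive Scaleway run; `SCW_TOKEN` is the variable the prompts task below checks, and the region alias is a placeholder that must match one of the entries in `scaleway_regions`:

```shell
# Hypothetical one-shot Scaleway run; par1 is only an example region alias.
export SCW_TOKEN=your-secret-key
ansible-playbook main.yml -e "provider=scaleway server_name=algo region=par1"
```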
YAML
algo/roles/cloud-scaleway/tasks/prompts.yml
---
- pause:
    prompt: |
      Enter your auth token (https://trailofbits.github.io/algo/cloud-scaleway.html)
    echo: false
  register: _scaleway_token
  when:
    - scaleway_token is undefined
    - lookup('env','SCW_TOKEN')|length <= 0

- pause:
    prompt: |
      What region should the server be located in?
      {% for r in scaleway_regions %}
      {{ loop.index }}. {{ r['alias'] }}
      {% endfor %}

      Enter the number of your desired region
      [{{ scaleway_regions.0.alias }}]
  register: _algo_region
  when: region is undefined

- name: Set scaleway facts
  set_fact:
    algo_scaleway_token: "{{ scaleway_token | default(_scaleway_token.user_input) | default(lookup('env','SCW_TOKEN'), true) }}"
    algo_region: >-
      {% if region is defined %}{{ region }}
      {%- elif _algo_region.user_input %}{{ scaleway_regions[_algo_region.user_input | int -1 ]['alias'] }}
      {%- else %}{{ scaleway_regions.0.alias }}{% endif %}
YAML
algo/roles/cloud-vultr/tasks/main.yml
---
- name: Include prompts
  import_tasks: prompts.yml

- block:
    - name: Creating a firewall group
      vultr_firewall_group:
        name: "{{ algo_server_name }}"

    - name: Creating firewall rules
      vultr_firewall_rule:
        group: "{{ algo_server_name }}"
        protocol: "{{ item.protocol }}"
        port: "{{ item.port }}"
        ip_version: "{{ item.ip }}"
        cidr: "{{ item.cidr }}"
      with_items:
        - { protocol: tcp, port: "{{ ssh_port }}", ip: v4, cidr: 0.0.0.0/0 }
        - { protocol: tcp, port: "{{ ssh_port }}", ip: v6, cidr: "::/0" }
        - { protocol: udp, port: 500, ip: v4, cidr: 0.0.0.0/0 }
        - { protocol: udp, port: 500, ip: v6, cidr: "::/0" }
        - { protocol: udp, port: 4500, ip: v4, cidr: 0.0.0.0/0 }
        - { protocol: udp, port: 4500, ip: v6, cidr: "::/0" }
        - { protocol: udp, port: "{{ wireguard_port }}", ip: v4, cidr: 0.0.0.0/0 }
        - { protocol: udp, port: "{{ wireguard_port }}", ip: v6, cidr: "::/0" }

    - name: Upload the startup script
      vultr_startup_script:
        name: algo-startup
        script: |
          {{ lookup('template', 'files/cloud-init/base.yml') }}

    - name: Creating a server
      vultr_server:
        name: "{{ algo_server_name }}"
        startup_script: algo-startup
        hostname: "{{ algo_server_name }}"
        os: "{{ cloud_providers.vultr.os }}"
        plan: "{{ cloud_providers.vultr.size }}"
        region: "{{ algo_vultr_region }}"
        firewall_group: "{{ algo_server_name }}"
        state: started
        tag: Environment:Algo
        ipv6_enabled: true
        auto_backup_enabled: false
        notify_activate: false
      register: vultr_server

    - set_fact:
        cloud_instance_ip: "{{ vultr_server.vultr_server.v4_main_ip }}"
        ansible_ssh_user: algo
        ansible_ssh_port: "{{ ssh_port }}"
        cloudinit: true
  environment:
    VULTR_API_CONFIG: "{{ algo_vultr_config }}"
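A sketch of a non-interactive Vultr run; `VULTR_API_CONFIG` is the variable the prompts task below checks, pointing at the INI file described earlier:

```shell
# Hypothetical one-shot Vultr run using the API config file.
export VULTR_API_CONFIG=~/.vultr.ini
ansible-playbook main.yml -e "provider=vultr server_name=algo"
```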
YAML
algo/roles/cloud-vultr/tasks/prompts.yml
---
- pause:
    prompt: |
      Enter the local path to your configuration INI file
      (https://trailofbits.github.io/algo/cloud-vultr.html):
  register: _vultr_config
  when:
    - vultr_config is undefined
    - lookup('env','VULTR_API_CONFIG')|length <= 0

- name: Set the token as a fact
  set_fact:
    algo_vultr_config: "{{ vultr_config | default(_vultr_config.user_input) | default(lookup('env','VULTR_API_CONFIG'), true) }}"

- name: Get regions
  uri:
    url: https://api.vultr.com/v1/regions/list
    method: GET
    status_code: 200
  register: _vultr_regions

- name: Format regions
  set_fact:
    regions: >-
      [ {% for k, v in _vultr_regions.json.items() %}
      {{ v }}{% if not loop.last %},{% endif %}
      {% endfor %} ]

- name: Set regions as a fact
  set_fact:
    vultr_regions: "{{ regions | sort(attribute='country') }}"

- name: Set default region
  set_fact:
    default_region: >-
      {% for r in vultr_regions %}
      {%- if r['DCID'] == "1" %}{{ loop.index }}{% endif %}
      {%- endfor %}

- pause:
    prompt: |
      What region should the server be located in?
      (https://www.vultr.com/locations/):
      {% for r in vultr_regions %}
      {{ loop.index }}. {{ r['name'] }}
      {% endfor %}

      Enter the number of your desired region
      [{{ default_region }}]
  register: _algo_region
  when: region is undefined

- name: Set the desired region as a fact
  set_fact:
    algo_vultr_region: >-
      {% if region is defined %}{{ region }}
      {%- elif _algo_region.user_input %}{{ vultr_regions[_algo_region.user_input | int -1 ]['name'] }}
      {%- else %}{{ vultr_regions[default_region | int - 1]['name'] }}{% endif %}
YAML
algo/roles/common/defaults/main.yml
---
install_headers: false
aip_supported_providers:
  - digitalocean
snat_aipv4: false
ipv6_default: "{{ ansible_default_ipv6.address + '/' + ansible_default_ipv6.prefix }}"
ipv6_subnet_size: "{{ ipv6_default | ipaddr('size') }}"
ipv6_egress_ip: >-
  {{ (ipv6_default | next_nth_usable(15 | random(seed=algo_server_name + ansible_fqdn))) + '/124'
    if ipv6_subnet_size|int > 1 else ipv6_default }}
YAML
algo/roles/common/handlers/main.yml
---
- name: restart rsyslog
  service: name=rsyslog state=restarted

- name: restart ipfw
  service: name=ipfw state=restarted

- name: flush routing cache
  shell: echo 1 > /proc/sys/net/ipv4/route/flush

- name: restart systemd-networkd
  systemd:
    name: systemd-networkd
    state: restarted
    daemon_reload: true

- name: restart systemd-resolved
  systemd:
    name: systemd-resolved
    state: restarted

- name: restart loopback bsd
  shell: >
    ifconfig lo100 destroy || true &&
    ifconfig lo100 create &&
    ifconfig lo100 inet {{ local_service_ip }} netmask 255.255.255.255 &&
    ifconfig lo100 inet6 {{ local_service_ipv6 }}/128;
    echo $?

- name: restart iptables
  service: name=netfilter-persistent state=restarted

- name: netplan apply
  command: netplan apply
YAML
algo/roles/common/tasks/facts.yml
---
- name: Define facts
  set_fact:
    p12_export_password: "{{ p12_password|default(lookup('password', '/dev/null length=9 chars=ascii_letters,digits,_,@')) }}"
  tags: update-users

- name: Set facts
  set_fact:
    CA_password: "{{ ca_password|default(lookup('password', '/dev/null length=16 chars=ascii_letters,digits,_,@')) }}"
    IP_subject_alt_name: "{{ IP_subject_alt_name }}"

- name: Set IPv6 support as a fact
  set_fact:
    ipv6_support: "{% if ansible_default_ipv6['gateway'] is defined %}true{% else %}false{% endif %}"
  tags: always

- name: Check size of MTU
  set_fact:
    reduce_mtu: "{{ 1500 - ansible_default_ipv4['mtu']|int if reduce_mtu|int == 0 and ansible_default_ipv4['mtu']|int < 1500 else reduce_mtu|int }}"
  tags: always
YAML
algo/roles/common/tasks/freebsd.yml
---
- name: FreeBSD | Install prerequisites
  package:
    name:
      - python3
      - sudo
  vars:
    ansible_python_interpreter: /usr/local/bin/python2.7

- name: Set python3 as the interpreter to use
  set_fact:
    ansible_python_interpreter: /usr/local/bin/python3

- name: Gather facts
  setup:

- name: Gather additional facts
  import_tasks: facts.yml

- name: Set OS specific facts
  set_fact:
    config_prefix: /usr/local/
    strongswan_shell: /usr/sbin/nologin
    strongswan_home: /var/empty
    root_group: wheel
    ssh_service_name: sshd
    apparmor_enabled: false
    strongswan_additional_plugins:
      - kernel-pfroute
      - kernel-pfkey
    tools:
      - git
      - subversion
      - screen
      - coreutils
      - openssl
      - bash
      - wget
    sysctl:
      - item: net.inet.ip.forwarding
        value: 1
      - item: "{{ 'net.inet6.ip6.forwarding' if ipv6_support else none }}"
        value: 1

- name: Install tools
  package: name="{{ item }}" state=present
  with_items:
    - "{{ tools|default([]) }}"

- name: Loopback included into the rc config
  blockinfile:
    dest: /etc/rc.conf
    create: true
    block: |
      cloned_interfaces="lo100"
      ifconfig_lo100="inet {{ local_service_ip }} netmask 255.255.255.255"
      ifconfig_lo100_ipv6="inet6 {{ local_service_ipv6 }}/128"
  notify:
    - restart loopback bsd

- name: Enable the gateway features
  lineinfile: dest=/etc/rc.conf regexp='^{{ item.param }}.*' line='{{ item.param }}={{ item.value }}'
  with_items:
    - { param: firewall_enable, value: '"YES"' }
    - { param: firewall_type, value: '"open"' }
    - { param: gateway_enable, value: '"YES"' }
    - { param: natd_enable, value: '"YES"' }
    - { param: natd_interface, value: '"{{ ansible_default_ipv4.device|default() }}"' }
    - { param: natd_flags, value: '"-dynamic -m"' }
  notify:
    - restart ipfw

- name: FreeBSD | Activate IPFW
  shell: >
    kldstat -n ipfw.ko || kldload ipfw ;
    sysctl net.inet.ip.fw.enable=0 &&
    bash /etc/rc.firewall &&
    sysctl net.inet.ip.fw.enable=1
  changed_when: false

- meta: flush_handlers
YAML
algo/roles/common/tasks/iptables.yml
---
- name: Iptables configured
  template:
    src: "{{ item.src }}"
    dest: "{{ item.dest }}"
    owner: root
    group: root
    mode: 0640
  with_items:
    - { src: rules.v4.j2, dest: /etc/iptables/rules.v4 }
  notify:
    - restart iptables

- name: Iptables (IPv6) configured
  template:
    src: "{{ item.src }}"
    dest: "{{ item.dest }}"
    owner: root
    group: root
    mode: 0640
  when: ipv6_support
  with_items:
    - { src: rules.v6.j2, dest: /etc/iptables/rules.v6 }
  notify:
    - restart iptables
YAML
algo/roles/common/tasks/main.yml
---
- name: Check the system
  raw: uname -a
  register: OS
  changed_when: false
  tags:
    - update-users

- fail:
  when: cloud_test|default(false)|bool

- include_tasks: ubuntu.yml
  when: '"Ubuntu" in OS.stdout or "Linux" in OS.stdout'
  tags:
    - update-users

- include_tasks: freebsd.yml
  when: '"FreeBSD" in OS.stdout'
  tags:
    - update-users

- name: Sysctl tuning
  sysctl: name="{{ item.item }}" value="{{ item.value }}"
  when: item.item
  with_items:
    - "{{ sysctl|default([]) }}"
  tags:
    - always

- meta: flush_handlers
YAML
algo/roles/common/tasks/ubuntu.yml
---
- name: Gather facts
  setup:

- name: Cloud only tasks
  block:
    - name: Install software updates
      apt:
        update_cache: true
        install_recommends: true
        upgrade: dist
      register: result
      until: result is succeeded
      retries: 30
      delay: 10

    - name: Check if reboot is required
      shell: >
        if [[ -e /var/run/reboot-required ]]; then echo "required"; else echo "no"; fi
      args:
        executable: /bin/bash
      register: reboot_required

    - name: Reboot
      shell: sleep 2 && shutdown -r now "Ansible updates triggered"
      async: 1
      poll: 0
      when: reboot_required is defined and reboot_required.stdout == 'required'
      ignore_errors: true

    - name: Wait until the server becomes ready...
      wait_for_connection:
        delay: 20
        timeout: 320
      when: reboot_required is defined and reboot_required.stdout == 'required'
      become: false
  when: algo_provider != "local"

- name: Include unattended upgrades configuration
  import_tasks: unattended-upgrades.yml

- name: Disable MOTD on login and SSHD
  replace: dest="{{ item.file }}" regexp="{{ item.regexp }}" replace="{{ item.line }}"
  with_items:
    - { regexp: ^session.*optional.*pam_motd.so.*, line: "# MOTD DISABLED", file: /etc/pam.d/login }
    - { regexp: ^session.*optional.*pam_motd.so.*, line: "# MOTD DISABLED", file: /etc/pam.d/sshd }

- name: Ensure fallback resolvers are set
  ini_file:
    path: /etc/systemd/resolved.conf
    section: Resolve
    option: FallbackDNS
    value: "{{ dns_servers.ipv4 | join(' ') }}"
  notify:
    - restart systemd-resolved

- name: Loopback for services configured
  template:
    src: 10-algo-lo100.network.j2
    dest: /etc/systemd/network/10-algo-lo100.network
  notify:
    - restart systemd-networkd

- name: systemd services enabled and started
  systemd:
    name: "{{ item }}"
    state: started
    enabled: true
    daemon_reload: true
  with_items:
    - systemd-networkd
    - systemd-resolved

- meta: flush_handlers

- name: Check apparmor support
  command: apparmor_status
  ignore_errors: true
  changed_when: false
  register: apparmor_status

- name: Set fact if apparmor enabled
  set_fact:
    apparmor_enabled: true
  when: '"profiles are in enforce mode" in apparmor_status.stdout'

- name: Gather additional facts
  import_tasks: facts.yml

- name: Set OS specific facts
  set_fact:
    tools:
      - git
      - screen
      - apparmor-utils
      - uuid-runtime
      - coreutils
      - iptables-persistent
      - cgroup-tools
      - openssl
      - gnupg2
    sysctl:
      - item: net.ipv4.ip_forward
        value: 1
      - item: net.ipv4.conf.all.forwarding
        value: 1
      - item: "{{ 'net.ipv6.conf.all.forwarding' if ipv6_support else none }}"
        value: 1

- name: Install tools
  apt:
    name: "{{ tools|default([]) }}"
    state: present
    update_cache: true

- name: Install headers
  apt:
    name:
      - linux-headers-generic
      - linux-headers-{{ ansible_kernel }}
    state: present
  when: install_headers | bool

- name: Configure the alternative ingress ip
  include_tasks: aip/main.yml
  when: alternative_ingress_ip

- include_tasks: iptables.yml
  tags: iptables
YAML
algo/roles/common/tasks/unattended-upgrades.yml
---
- name: Install unattended-upgrades
  apt:
    name: unattended-upgrades
    state: present

- name: Configure unattended-upgrades
  template:
    src: 50unattended-upgrades.j2
    dest: /etc/apt/apt.conf.d/50unattended-upgrades
    owner: root
    group: root
    mode: 0644

- name: Periodic upgrades configured
  template:
    src: 10periodic.j2
    dest: /etc/apt/apt.conf.d/10periodic
    owner: root
    group: root
    mode: 0644
YAML
algo/roles/common/tasks/aip/digitalocean.yml
---
- name: Get the anchor IP
  uri:
    url: http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address
    return_content: true
  register: anchor_ipv4
  until: anchor_ipv4 is succeeded
  retries: 30
  delay: 10

- name: Set SNAT IP as a fact
  set_fact:
    snat_aipv4: "{{ anchor_ipv4.content }}"

- name: IPv6 egress alias configured
  template:
    src: 99-algo-ipv6-egress.yaml.j2
    dest: /etc/netplan/99-algo-ipv6-egress.yaml
  when:
    - ipv6_support
    - ipv6_subnet_size|int > 1
  notify:
    - netplan apply
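The anchor address can also be inspected by hand with the same metadata endpoint the task queries (this only works from inside a droplet, since 169.254.169.254 is the DigitalOcean metadata service):

```shell
# Print the droplet's anchor IPv4 address.
curl -s http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address
```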
YAML
algo/roles/common/tasks/aip/main.yml
---
- name: Verify the provider
  assert:
    that: algo_provider in aip_supported_providers
    msg: Algo does not support Alternative Ingress IP for {{ algo_provider }}

- name: Include alternative ingress ip configuration
  include_tasks:
    file: "{{ algo_provider if algo_provider in aip_supported_providers else 'placeholder' }}.yml"
  when: algo_provider in aip_supported_providers

- name: Verify SNAT IPv4 found
  assert:
    that: snat_aipv4 | ipv4
    msg: The SNAT IPv4 address was not found. Cannot proceed with the alternative ingress ip.
algo/roles/common/templates/10-algo-lo100.network.j2
[Match]
Name=lo

[Network]
Description=lo:100
Address={{ local_service_ip }}/32
Address={{ local_service_ipv6 }}/128
algo/roles/common/templates/10periodic.j2
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "7";
APT::Periodic::Unattended-Upgrade "1";
algo/roles/common/templates/50unattended-upgrades.j2
// Automatically upgrade packages from these (origin:archive) pairs
//
// Note that in Ubuntu security updates may pull in new dependencies
// from non-security sources (e.g. chromium). By allowing the release
// pocket these get automatically pulled in.
Unattended-Upgrade::Allowed-Origins {
    "${distro_id}:${distro_codename}-security";
    // Extended Security Maintenance; doesn't necessarily exist for
    // every release and this system may not have it installed, but if
    // available, the policy for updates is such that unattended-upgrades
    // should also install from here by default.
    "${distro_id}ESM:${distro_codename}";
    "${distro_id}:${distro_codename}-updates";
    // "${distro_id}:${distro_codename}-proposed";
    // "${distro_id}:${distro_codename}-backports";
};

// List of packages to not update (regexp are supported)
Unattended-Upgrade::Package-Blacklist {
    // "vim";
    // "libc6";
    // "libc6-dev";
    // "libc6-i686";
};

// This option controls whether the development release of Ubuntu will be
// upgraded automatically.
Unattended-Upgrade::DevRelease "false";

// This option allows you to control if on an unclean dpkg exit
// unattended-upgrades will automatically run
//   dpkg --force-confold --configure -a
// The default is true, to ensure updates keep getting installed
Unattended-Upgrade::AutoFixInterruptedDpkg "true";

// Split the upgrade into the smallest possible chunks so that
// they can be interrupted with SIGTERM. This makes the upgrade
// a bit slower but it has the benefit that shutdown while an upgrade
// is running is possible (with a small delay)
Unattended-Upgrade::MinimalSteps "true";

// Install all unattended-upgrades when the machine is shutting down
// instead of doing it in the background while the machine is running
// This will (obviously) make shutdown slower
//Unattended-Upgrade::InstallOnShutdown "true";

// Send email to this address for problems or packages upgrades
// If empty or unset then no email is sent, make sure that you
// have a working mail setup on your system. A package that provides
// 'mailx' must be installed. E.g. "user@example.com"
//Unattended-Upgrade::Mail "root";

// Set this value to "true" to get emails only on errors. Default
// is to always send a mail if Unattended-Upgrade::Mail is set
//Unattended-Upgrade::MailOnlyOnError "true";

// Remove unused automatically installed kernel-related packages
// (kernel images, kernel headers and kernel version locked tools).
Unattended-Upgrade::Remove-Unused-Kernel-Packages "true";

// Do automatic removal of new unused dependencies after the upgrade
// (equivalent to apt-get autoremove)
Unattended-Upgrade::Remove-Unused-Dependencies "true";

// Automatically reboot *WITHOUT CONFIRMATION*
// if the file /var/run/reboot-required is found after the upgrade
Unattended-Upgrade::Automatic-Reboot "{{ unattended_reboot.enabled|lower }}";

// If automatic reboot is enabled and needed, reboot at the specific
// time instead of immediately
// Default: "now"
Unattended-Upgrade::Automatic-Reboot-Time "{{ unattended_reboot.time }}";

// Use apt bandwidth limit feature, this example limits the download
// speed to 70kb/sec
//Acquire::http::Dl-Limit "70";

// Enable logging to syslog. Default is False
Unattended-Upgrade::SyslogEnable "true";

// Specify syslog facility. Default is daemon
// Unattended-Upgrade::SyslogFacility "daemon";

// Download and install upgrades only on AC power
// (i.e. skip or gracefully stop updates on battery)
// Unattended-Upgrade::OnlyOnACPower "true";

// Download and install upgrades only on non-metered connection
// (i.e. skip or gracefully stop updates on a metered connection)
// Unattended-Upgrade::Skip-Updates-On-Metered-Connections "true";

// Keep the custom conffile when upgrading
Dpkg::Options {
    "--force-confdef";
    "--force-confold";
};
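Once this template is rendered onto the server, the resulting policy can be exercised without waiting for the timer; `--dry-run` and `--debug` are standard flags of the `unattended-upgrade` tool:

```shell
# Simulate an unattended upgrade run and print which origins and packages would apply.
sudo unattended-upgrade --dry-run --debug
```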
algo/roles/common/templates/99-algo-ipv6-egress.yaml.j2
network:
  version: 2
  ethernets:
    {{ ansible_default_ipv6.interface }}:
      addresses:
        - {{ ipv6_egress_ip }}
algo/roles/common/templates/rules.v4.j2
{% set subnets = ([strongswan_network] if ipsec_enabled else []) + ([wireguard_network_ipv4] if wireguard_enabled else []) %}
{% set ports = (['500', '4500'] if ipsec_enabled else []) + ([wireguard_port] if wireguard_enabled else []) + ([wireguard_port_actual] if wireguard_enabled and wireguard_port|int == wireguard_port_avoid|int else []) %}

#### The mangle table
# This table allows us to modify packet headers
# Packets enter this table first
#
*mangle

:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]

{% if reduce_mtu|int > 0 and ipsec_enabled %}
-A FORWARD -s {{ strongswan_network }} -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss {{ 1360 - reduce_mtu|int }}
{% endif %}

COMMIT

#### The nat table
# This table enables Network Address Translation
# (This is technically a type of packet mangling)
#
*nat

:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]

{% if wireguard_enabled and wireguard_port|int == wireguard_port_avoid|int %}
# Handle the special case of allowing access to WireGuard over an already used
# port like 53
-A PREROUTING -s {{ subnets|join(',') }} -p udp --dport {{ wireguard_port_avoid }} -j RETURN
-A PREROUTING --in-interface {{ ansible_default_ipv4['interface'] }} -p udp --dport {{ wireguard_port_avoid }} -j REDIRECT --to-port {{ wireguard_port_actual }}
{% endif %}

# Allow traffic from the VPN network to the outside world, and replies
-A POSTROUTING -s {{ subnets|join(',') }} -m policy --pol none --dir out {{ '-j SNAT --to ' + snat_aipv4 if snat_aipv4 else '-j MASQUERADE' }}

COMMIT

#### The filter table
# The default ipfilter table
#
*filter

# By default, drop packets that are destined for this server
:INPUT DROP [0:0]
# By default, drop packets that request to be forwarded by this server
:FORWARD DROP [0:0]
# By default, accept any packets originating from this server
:OUTPUT ACCEPT [0:0]

# Accept packets destined for localhost
-A INPUT -i lo -j ACCEPT

# Accept any packet from an open TCP connection
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

# Accept packets using the encapsulation protocol
-A INPUT -p esp -j ACCEPT
-A INPUT -p ah -j ACCEPT

# rate limit ICMP traffic per source
-A INPUT -p icmp --icmp-type echo-request -m hashlimit --hashlimit-upto 5/s --hashlimit-mode srcip --hashlimit-srcmask 32 --hashlimit-name icmp-echo-drop -j ACCEPT

# Accept IPSEC/WireGuard traffic to ports {{ ports|join(',') }}
-A INPUT -p udp -m multiport --dports {{ ports|join(',') }} -j ACCEPT

# Allow new traffic to port {{ ansible_ssh_port }} (SSH)
-A INPUT -p tcp --dport {{ ansible_ssh_port }} -m conntrack --ctstate NEW -j ACCEPT

{% if ipsec_enabled %}
# Allow any traffic from the IPsec VPN
-A INPUT -p ipencap -m policy --dir in --pol ipsec --proto esp -j ACCEPT
{% endif %}

# TODO:
# The IP of the resolver should be bound to a DUMMY interface.
# DUMMY interfaces are the proper way to install IPs without assigning them any
# particular virtual (tun,tap,...) or physical (ethernet) interface.

# Accept DNS traffic to the local DNS resolver
-A INPUT -d {{ local_service_ip }} -p udp --dport 53 -j ACCEPT

# Drop traffic between VPN clients
-A FORWARD -s {{ subnets|join(',') }} -d {{ subnets|join(',') }} -j {{ "DROP" if BetweenClients_DROP else "ACCEPT" }}
# Drop traffic to VPN clients from SSH tunnels
-A OUTPUT -d {{ subnets|join(',') }} -m owner --gid-owner 15000 -j {{ "DROP" if BetweenClients_DROP else "ACCEPT" }}

# Drop traffic to the link-local network
-A FORWARD -s {{ subnets|join(',') }} -d 169.254.0.0/16 -j DROP
# Drop traffic to the link-local network from SSH tunnels
-A OUTPUT -d 169.254.0.0/16 -m owner --gid-owner 15000 -j DROP

# Forward any packet that's part of an established connection
-A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

# Drop SMB/CIFS traffic that requests to be forwarded
-A FORWARD -p tcp --dport 445 -j {{ "DROP" if block_smb else "ACCEPT" }}
# Drop NETBIOS traffic that requests to be forwarded
-A FORWARD -p udp -m multiport --ports 137,138 -j {{ "DROP" if block_netbios else "ACCEPT" }}
-A FORWARD -p tcp -m multiport --ports 137,139 -j {{ "DROP" if block_netbios else "ACCEPT" }}

{% if ipsec_enabled %}
# Forward any IPSEC traffic from the VPN network
-A FORWARD -m conntrack --ctstate NEW -s {{ strongswan_network }} -m policy --pol ipsec --dir in -j ACCEPT
{% endif %}

{% if wireguard_enabled %}
# Forward any traffic from the WireGuard VPN network
-A FORWARD -m conntrack --ctstate NEW -s {{ wireguard_network_ipv4 }} -m policy --pol none --dir in -j ACCEPT
{% endif %}

COMMIT
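A rendered ruleset can be syntax-checked on the server before it is committed; `iptables-restore --test` parses without applying (available in reasonably recent iptables releases), and the `netfilter-persistent` service is the same one the "restart iptables" handler uses:

```shell
# Parse the rendered rules without applying them, then reload via the service.
sudo iptables-restore --test < /etc/iptables/rules.v4
sudo systemctl restart netfilter-persistent
```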
algo/roles/common/templates/rules.v6.j2
{% set subnets = ([strongswan_network_ipv6] if ipsec_enabled else []) + ([wireguard_network_ipv6] if wireguard_enabled else []) %}
{% set ports = (['500', '4500'] if ipsec_enabled else []) + ([wireguard_port] if wireguard_enabled else []) + ([wireguard_port_actual] if wireguard_enabled and wireguard_port|int == wireguard_port_avoid|int else []) %}

#### The mangle table
# This table allows us to modify packet headers
# Packets enter this table first
#
*mangle

:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]

{% if reduce_mtu|int > 0 and ipsec_enabled %}
-A FORWARD -s {{ strongswan_network_ipv6 }} -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss {{ 1340 - reduce_mtu|int }}
{% endif %}

COMMIT

#### The nat table
# This table enables Network Address Translation
# (This is technically a type of packet mangling)
#
*nat

:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]

{% if wireguard_enabled and wireguard_port|int == wireguard_port_avoid|int %}
# Handle the special case of allowing access to WireGuard over an already used
# port like 53
-A PREROUTING -s {{ subnets|join(',') }} -p udp --dport {{ wireguard_port_avoid }} -j RETURN
-A PREROUTING --in-interface {{ ansible_default_ipv6['interface'] }} -p udp --dport {{ wireguard_port_avoid }} -j REDIRECT --to-port {{ wireguard_port_actual }}
{% endif %}

# Allow traffic from the VPN network to the outside world, and replies
-A POSTROUTING -s {{ subnets|join(',') }} -m policy --pol none --dir out {{ '-j SNAT --to ' + ipv6_egress_ip | ipaddr('address') if alternative_ingress_ip else '-j MASQUERADE' }}

COMMIT

#### The filter table
# The default ipfilter table
#
*filter

# By default, drop packets that are destined for this server
:INPUT DROP [0:0]
# By default, drop packets that request to be forwarded by this server
:FORWARD DROP [0:0]
# By default, accept any packets originating from this server
:OUTPUT ACCEPT [0:0]

# Create the ICMPV6-CHECK chain and its log chain
# These chains are used later to prevent a type of bug that would
# allow malicious traffic to reach over the server into the private network
# An instance of such a bug on Cisco software is described here:
# https://www.insinuator.net/2016/05/cve-2016-1409-ipv6-ndp-dos-vulnerability-in-cisco-software/
# other software implementations might be at least as broken as the one in CISCO gear.
:ICMPV6-CHECK - [0:0]
:ICMPV6-CHECK-LOG - [0:0]

# Accept packets destined for localhost
-A INPUT -i lo -j ACCEPT

# Accept any packet from an open TCP connection
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

# Accept packets using the encapsulation protocol
-A INPUT -p esp -j ACCEPT
-A INPUT -m ah -j ACCEPT

# rate limit ICMP traffic per source
-A INPUT -p icmpv6 --icmpv6-type echo-request -m hashlimit --hashlimit-upto 5/s --hashlimit-mode srcip --hashlimit-srcmask 32 --hashlimit-name icmp-echo-drop -j ACCEPT

# Accept IPSEC/WireGuard traffic to ports {{ ports|join(',') }}
-A INPUT -p udp -m multiport --dports {{ ports|join(',') }} -j ACCEPT

# Allow new traffic to port {{ ansible_ssh_port }} (SSH)
-A INPUT -p tcp --dport {{ ansible_ssh_port }} -m conntrack --ctstate NEW -j ACCEPT

# Accept properly formatted Neighbor Discovery Protocol packets
-A INPUT -p icmpv6 --icmpv6-type router-advertisement -m hl --hl-eq 255 -j ACCEPT
-A INPUT -p icmpv6 --icmpv6-type neighbor-solicitation -m hl --hl-eq 255 -j ACCEPT
-A INPUT -p icmpv6 --icmpv6-type neighbor-advertisement -m hl --hl-eq 255 -j ACCEPT
-A INPUT -p icmpv6 --icmpv6-type redirect -m hl --hl-eq 255 -j ACCEPT

# DHCP in AWS
-A INPUT -m conntrack --ctstate NEW -m udp -p udp --dport 546 -d fe80::/64 -j ACCEPT

# TODO:
# The IP of the resolver should be bound to a DUMMY interface.
# DUMMY interfaces are the proper way to install IPs without assigning them any
# particular virtual (tun,tap,...) or physical (ethernet) interface.

# Accept DNS traffic to the local DNS resolver
-A INPUT -d {{ local_service_ipv6 }}/128 -p udp --dport 53 -j ACCEPT

# Drop traffic between VPN clients
-A FORWARD -s {{ subnets|join(',') }} -d {{ subnets|join(',') }} -j {{ "DROP" if BetweenClients_DROP else "ACCEPT" }}
# Drop traffic to VPN clients from SSH tunnels
-A OUTPUT -d {{ subnets|join(',') }} -m owner --gid-owner 15000 -j {{ "DROP" if BetweenClients_DROP else "ACCEPT" }}

-A FORWARD -j ICMPV6-CHECK
-A FORWARD -p tcp --dport 445 -j {{ "DROP" if block_smb else "ACCEPT" }}
-A FORWARD -p udp -m multiport --ports 137,138 -j {{ "DROP" if block_netbios else "ACCEPT" }}
-A FORWARD -p tcp -m multiport --ports 137,139 -j {{ "DROP" if block_netbios else "ACCEPT" }}
-A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

{% if ipsec_enabled %}
-A FORWARD -m conntrack --ctstate NEW -s {{ strongswan_network_ipv6 }} -m policy --pol ipsec --dir in -j ACCEPT
{% endif %}

{% if wireguard_enabled %}
-A FORWARD -m conntrack --ctstate NEW -s {{ wireguard_network_ipv6 }} -m policy --pol none --dir in -j ACCEPT
{% endif %}

# Use the ICMPV6-CHECK chain, described above
-A ICMPV6-CHECK -p icmpv6 -m hl ! --hl-eq 255 --icmpv6-type router-solicitation -j ICMPV6-CHECK-LOG
-A ICMPV6-CHECK -p icmpv6 -m hl ! --hl-eq 255 --icmpv6-type router-advertisement -j ICMPV6-CHECK-LOG
-A ICMPV6-CHECK -p icmpv6 -m hl ! --hl-eq 255 --icmpv6-type neighbor-solicitation -j ICMPV6-CHECK-LOG
-A ICMPV6-CHECK -p icmpv6 -m hl ! --hl-eq 255 --icmpv6-type neighbor-advertisement -j ICMPV6-CHECK-LOG
-A ICMPV6-CHECK-LOG -j LOG --log-prefix "ICMPV6-CHECK-LOG DROP "
-A ICMPV6-CHECK-LOG -j DROP

COMMIT
YAML
algo/roles/dns/defaults/main.yml
---
algo_dns_adblocking: false
apparmor_enabled: true
dns_encryption: true
ipv6_support: false
dnscrypt_servers:
  ipv4:
    - cloudflare
  ipv6:
    - cloudflare-ipv6
algo/roles/dns/files/50-dnscrypt-proxy-unattended-upgrades
// Automatically upgrade packages from these (origin:archive) pairs
Unattended-Upgrade::Allowed-Origins {
    "LP-PPA-shevchuk-dnscrypt-proxy:${distro_codename}";
};
algo/roles/dns/files/apparmor.profile.dnscrypt-proxy
#include <tunables/global>

/usr/{s,}bin/dnscrypt-proxy flags=(attach_disconnected) {
  #include <abstractions/base>
  #include <abstractions/nameservice>
  #include <abstractions/openssl>

  capability chown,
  capability dac_override,
  capability net_bind_service,
  capability setgid,
  capability setuid,
  capability sys_resource,

  /etc/dnscrypt-proxy/** r,
  /usr/bin/dnscrypt-proxy mr,
  /var/cache/{private/,}dnscrypt-proxy/** rw,
  /tmp/*.tmp w,
  owner /tmp/*.tmp r,

  /run/systemd/notify rw,

  /lib/x86_64-linux-gnu/ld-*.so mr,

  @{PROC}/sys/kernel/hostname r,
  @{PROC}/sys/net/core/somaxconn r,

  /etc/ld.so.cache r,
  /usr/local/lib/{@{multiarch}/,}libldns.so* mr,
  /usr/local/lib/{@{multiarch}/,}libsodium.so* mr,
}
YAML
algo/roles/dns/handlers/main.yml
---
- name: daemon reload
  systemd:
    daemon_reload: true

- name: restart dnscrypt-proxy
  systemd:
    name: dnscrypt-proxy
    state: restarted
    daemon_reload: true
  when: ansible_distribution == 'Ubuntu'

- name: restart dnscrypt-proxy
  service:
    name: dnscrypt-proxy
    state: restarted
  when: ansible_distribution == 'FreeBSD'
YAML
algo/roles/dns/tasks/dns_adblocking.yml
---
- name: Adblock script created
  template:
    src: adblock.sh.j2
    dest: /usr/local/sbin/adblock.sh
    owner: root
    group: "{{ root_group|default('root') }}"
    mode: 0755

- name: Adblock script added to cron
  cron:
    name: Adblock hosts update
    minute: "{{ range(0, 60) | random }}"
    hour: "{{ range(0, 24) | random }}"
    job: /usr/local/sbin/adblock.sh
    user: root

- name: Update adblock hosts
  command: /usr/local/sbin/adblock.sh
  changed_when: false
YAML
algo/roles/dns/tasks/freebsd.yml
---
- name: Install dnscrypt-proxy
  package:
    name: dnscrypt-proxy2

- name: Enable mac_portacl
  lineinfile:
    path: /etc/rc.conf
    line: dnscrypt_proxy_mac_portacl_enable="YES"
YAML
algo/roles/dns/tasks/main.yml
---
- name: Include tasks for Debian and Ubuntu
  include_tasks: ubuntu.yml
  when: ansible_distribution == 'Debian' or ansible_distribution == 'Ubuntu'

- name: Include tasks for FreeBSD
  include_tasks: freebsd.yml
  when: ansible_distribution == 'FreeBSD'

- name: dnscrypt-proxy ip-blacklist configured
  template:
    src: ip-blacklist.txt.j2
    dest: "{{ config_prefix|default('/') }}etc/dnscrypt-proxy/ip-blacklist.txt"
  notify:
    - restart dnscrypt-proxy

- name: dnscrypt-proxy configured
  template:
    src: dnscrypt-proxy.toml.j2
    dest: "{{ config_prefix|default('/') }}etc/dnscrypt-proxy/dnscrypt-proxy.toml"
  notify:
    - restart dnscrypt-proxy

- name: Include DNS adblocking tasks
  import_tasks: dns_adblocking.yml
  when: algo_dns_adblocking

- meta: flush_handlers

- name: dnscrypt-proxy enabled and started
  service:
    name: dnscrypt-proxy
    state: started
    enabled: true
YAML
algo/roles/dns/tasks/ubuntu.yml
---
- block:
    - name: Add the repository
      apt_repository:
        state: present
        codename: "{{ ansible_distribution_release }}"
        repo: ppa:shevchuk/dnscrypt-proxy
      register: result
      until: result is succeeded
      retries: 10
      delay: 3

    - name: Configure unattended-upgrades
      copy:
        src: 50-dnscrypt-proxy-unattended-upgrades
        dest: /etc/apt/apt.conf.d/50-dnscrypt-proxy-unattended-upgrades
        owner: root
        group: root
        mode: 0644
  when: ansible_facts['distribution_version'] is version('20.04', '<')

- name: Install dnscrypt-proxy
  apt:
    name: dnscrypt-proxy
    state: present
    update_cache: true

- block:
    - name: Ubuntu | Configure AppArmor policy for dnscrypt-proxy
      copy:
        src: apparmor.profile.dnscrypt-proxy
        dest: /etc/apparmor.d/usr.bin.dnscrypt-proxy
        owner: root
        group: root
        mode: 0600
      notify: restart dnscrypt-proxy

    - name: Ubuntu | Enforce the dnscrypt-proxy AppArmor policy
      command: aa-enforce usr.bin.dnscrypt-proxy
      changed_when: false
  tags: apparmor
  when: apparmor_enabled|default(false)|bool

- name: Ubuntu | Ensure that the dnscrypt-proxy service directory exists
  file:
    path: /etc/systemd/system/dnscrypt-proxy.service.d/
    state: directory
    mode: 0755
    owner: root
    group: root

- name: Ubuntu | Add custom requirements to successfully start the unit
  copy:
    dest: /etc/systemd/system/dnscrypt-proxy.service.d/99-algo.conf
    content: |
      [Unit]
      After=systemd-resolved.service
      Requires=systemd-resolved.service

      [Service]
      AmbientCapabilities=CAP_NET_BIND_SERVICE
  notify:
    - restart dnscrypt-proxy
algo/roles/dns/templates/adblock.sh.j2
#!/bin/sh
# Block ads, malware, etc..

TEMP="$(mktemp)"
TEMP_SORTED="$(mktemp)"

WHITELIST="/etc/dnscrypt-proxy/white.list"
BLACKLIST="/etc/dnscrypt-proxy/black.list"
BLOCKHOSTS="{{ config_prefix|default('/') }}etc/dnscrypt-proxy/blacklist.txt"
BLOCKLIST_URLS="{% for url in adblock_lists %}{{ url }} {% endfor %}"

# Delete the old blocklist to make room for the updates
rm -f $BLOCKHOSTS

echo 'Downloading hosts lists...'
# Download and process the files needed to make the lists (enable/add more, if you want)
for url in $BLOCKLIST_URLS; do
  wget --timeout=2 --tries=3 -qO- "$url" | grep -Ev "(localhost)" | grep -Ev "#" | sed -E "s/(0.0.0.0 |127.0.0.1 |255.255.255.255 )//" >> "$TEMP"
done

# Add the black list, if non-empty
if [ -s "$BLACKLIST" ]
then
  echo 'Adding blacklist...'
  cat $BLACKLIST >> "$TEMP"
fi

# Sort the downloaded/black lists
awk '/^[^#]/ { print $1 }' "$TEMP" | sort -u > "$TEMP_SORTED"

# Filter (if applicable)
if [ -s "$WHITELIST" ]
then
  # Filter the blacklist, suppressing whitelist matches
  # This is relatively slow =-(
  echo 'Filtering white list...'
  grep -v -E "^[[:space:]]*$" $WHITELIST | awk '/^[^#]/ {sub(/\r$/,"");print $1}' | grep -vf - "$TEMP_SORTED" > $BLOCKHOSTS
else
  cat "$TEMP_SORTED" > $BLOCKHOSTS
fi

# Remove the temporary files now that the blocklist has been written
rm -f "$TEMP" "$TEMP_SORTED"

echo 'Restarting dns service...'
# Restart the dns service
systemctl restart dnscrypt-proxy.service

exit 0
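The cron entry installed above fires at a randomized time, but the script can also be run by hand to confirm a blocklist is produced; the output path assumes the default `config_prefix`:

```shell
# Regenerate the blocklist immediately, then check that it is non-empty.
sudo /usr/local/sbin/adblock.sh
wc -l /etc/dnscrypt-proxy/blacklist.txt
```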
algo/roles/dns/templates/dnscrypt-proxy.toml.j2
############################################## # # # dnscrypt-proxy configuration # # # ############################################## ## This is an example configuration file. ## You should adjust it to your needs, and save it as "dnscrypt-proxy.toml" ## ## Online documentation is available here: https://dnscrypt.info/doc ################################## # Global settings # ################################## ## List of servers to use ## ## Servers from the "public-resolvers" source (see down below) can ## be viewed here: https://dnscrypt.info/public-servers ## ## If this line is commented, all registered servers matching the require_* filters ## will be used. ## ## The proxy will automatically pick the fastest, working servers from the list. ## Remove the leading # first to enable this; lines starting with # are ignored. {# Allow either list to be empty. Output nothing if both are empty. #} {% set servers = [] %} {% if dnscrypt_servers.ipv4 %}{% set servers = dnscrypt_servers.ipv4 %}{% endif %} {% if ipv6_support and dnscrypt_servers.ipv6 %}{% set servers = servers + dnscrypt_servers.ipv6 %}{% endif %} {% if servers %}server_names = ['{{ servers | join("', '") }}']{% endif %} ## List of local addresses and ports to listen to. Can be IPv4 and/or IPv6. ## Note: When using systemd socket activation, choose an empty set (i.e. [] ). listen_addresses = [ '{{ local_service_ip }}:53'{% if ipv6_support %}, '[{{ local_service_ipv6 }}]:53'{% endif %} ] ## Maximum number of simultaneous client connections to accept max_clients = 250 ## Switch to a different system user after listening sockets have been created. ## Note (1): this feature is currently unsupported on Windows. ## Note (2): this feature is not compatible with systemd socket activation. ## Note (3): when using -pidfile, the PID file directory must be writable by the new user # user_name = 'nobody' ## Require servers (from static + remote sources) to satisfy specific properties # Use servers reachable over IPv4 ipv4_servers = true # Use servers reachable over IPv6 -- Do not enable if you don't have IPv6 connectivity ipv6_servers = {{ ipv6_support | bool | lower }} # Use servers implementing the DNSCrypt protocol dnscrypt_servers = true # Use servers implementing the DNS-over-HTTPS protocol doh_servers = true ## Require servers defined by remote sources to satisfy specific properties # Server must support DNS security extensions (DNSSEC) require_dnssec = true # Server must not log user queries (declarative) require_nolog = true # Server must not enforce its own blacklist (for parental control, ads blocking...) require_nofilter = true # Server names to avoid even if they match all criteria disabled_server_names = [] ## Always use TCP to connect to upstream servers. ## This can be useful if you need to route everything through Tor. ## Otherwise, leave this to `false`, as it doesn't improve security ## (dnscrypt-proxy will always encrypt everything even using UDP), and can ## only increase latency. force_tcp = false ## SOCKS proxy ## Uncomment the following line to route all TCP connections to a local Tor node ## Tor doesn't support UDP, so set `force_tcp` to `true` as well. # proxy = "socks5://127.0.0.1:9050" ## HTTP/HTTPS proxy ## Only for DoH servers # http_proxy = "http://127.0.0.1:8888" ## How long a DNS query will wait for a response, in milliseconds timeout = 2500 ## Keepalive for HTTP (HTTPS, HTTP/2) queries, in seconds keepalive = 30 ## Response for blocked queries. Options are `refused`, `hinfo` (default) or ## an IP response. 
To give an IP response, use the format `a:<IPv4>,aaaa:<IPv6>`. ## Using the `hinfo` option means that some responses will be lies. ## Unfortunately, the `hinfo` option appears to be required for Android 8+ # blocked_query_response = 'refused' ## Load-balancing strategy: 'p2' (default), 'ph', 'first' or 'random' lb_strategy = 'p2' ## Set to `true` to constantly try to estimate the latency of all the resolvers ## and adjust the load-balancing parameters accordingly, or to `false` to disable. # lb_estimator = true ## Log level (0-6, default: 2 - 0 is very verbose, 6 only contains fatal errors) log_level = 2 ## log file for the application # log_file = 'dnscrypt-proxy.log' ## Use the system logger (syslog on Unix, Event Log on Windows) use_syslog = true ## Delay, in minutes, after which certificates are reloaded cert_refresh_delay = 240 ## DNSCrypt: Create a new, unique key for every single DNS query ## This may improve privacy but can also have a significant impact on CPU usage ## Only enable if you don't have a lot of network load dnscrypt_ephemeral_keys = true ## DoH: Disable TLS session tickets - increases privacy but also latency tls_disable_session_tickets = true ## DoH: Use a specific cipher suite instead of the server preference ## 49199 = TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 ## 49195 = TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 ## 52392 = TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 ## 52393 = TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 ## 4865 = TLS_AES_128_GCM_SHA256 ## 4867 = TLS_CHACHA20_POLY1305_SHA256 ## ## On non-Intel CPUs such as MIPS routers and ARM systems (Android, Raspberry Pi...), ## the following suite improves performance. ## This may also help on Intel CPUs running 32-bit operating systems. ## ## Keep tls_cipher_suite empty if you have issues fetching sources or ## connecting to some DoH servers. Google and Cloudflare are fine with it. # tls_cipher_suite = [52392, 49199] ## Fallback resolver ## This is a normal, non-encrypted DNS resolver, that will be only used ## for one-shot queries when retrieving the initial resolvers list, and ## only if the system DNS configuration doesn't work. ## No user application queries will ever be leaked through this resolver, ## and it will not be used after IP addresses of resolvers URLs have been found. ## It will never be used if lists have already been cached, and if stamps ## don't include host names without IP addresses. ## It will not be used if the configured system DNS works. ## A resolver supporting DNSSEC is recommended. This may become mandatory. ## ## People in China may need to use 114.114.114.114:53 here. ## Other popular options include 8.8.8.8 and 1.1.1.1. fallback_resolver = '{% if ansible_distribution == "FreeBSD" %}{{ ansible_dns.nameservers.0 }}:53{% else %}127.0.0.53:53{% endif %}' ## Never let dnscrypt-proxy try to use the system DNS settings; ## unconditionally use the fallback resolver. ignore_system_dns = true ## Maximum time (in seconds) to wait for network connectivity before ## initializing the proxy. ## Useful if the proxy is automatically started at boot, and network ## connectivity is not guaranteed to be immediately available. ## Use 0 to not test for connectivity at all (not recommended), ## and -1 to wait as much as possible. netprobe_timeout = 60 ## Address and port to try initializing a connection to, just to check ## if the network is up. It can be any address and any port, even if ## there is nothing answering these on the other side. 
Just don't use ## a local address, as the goal is to check for Internet connectivity. ## On Windows, a datagram with a single, nul byte will be sent, only ## when the system starts. ## On other operating systems, the connection will be initialized ## but nothing will be sent at all. netprobe_address = "1.1.1.1:53" ## Offline mode - Do not use any remote encrypted servers. ## The proxy will remain fully functional to respond to queries that ## plugins can handle directly (forwarding, cloaking, ...) # offline_mode = false ## Automatic log files rotation # Maximum log files size in MB log_files_max_size = 10 # How long to keep backup files, in days log_files_max_age = 7 # Maximum log files backups to keep (or 0 to keep all backups) log_files_max_backups = 1 ######################### # Filters # ######################### ## Immediately respond to IPv6-related queries with an empty response ## This makes things faster when there is no IPv6 connectivity, but can ## also cause reliability issues with some stub resolvers. ## Do not enable if you added a validating resolver such as dnsmasq in front ## of the proxy. block_ipv6 = false ################################################################################## # Route queries for specific domains to a dedicated set of servers # ################################################################################## ## Example map entries (one entry per line): ## example.com 9.9.9.9 ## example.net 9.9.9.9,8.8.8.8,1.1.1.1 # forwarding_rules = 'forwarding-rules.txt' ############################### # Cloaking rules # ############################### ## Cloaking returns a predefined address for a specific name. ## In addition to acting as a HOSTS file, it can also return the IP address ## of a different name. It will also do CNAME flattening. ## ## Example map entries (one entry per line) ## example.com 10.1.1.1 ## www.google.com forcesafesearch.google.com # cloaking_rules = 'cloaking-rules.txt' ########################### # DNS cache # ########################### ## Enable a DNS cache to reduce latency and outgoing traffic cache = true ## Cache size cache_size = 4096 ## Minimum TTL for cached entries cache_min_ttl = 2400 ## Maximum TTL for cached entries cache_max_ttl = 86400 ## Minimum TTL for negatively cached entries cache_neg_min_ttl = 60 ## Maximum TTL for negatively cached entries cache_neg_max_ttl = 600 ############################### # Query logging # ############################### ## Log client queries to a file [query_log] ## Path to the query log file (absolute, or relative to the same directory as the executable file) # file = 'query.log' ## Query log format (currently supported: tsv and ltsv) format = 'tsv' ## Do not log these query types, to reduce verbosity. Keep empty to log everything. # ignored_qtypes = ['DNSKEY', 'NS'] ############################################ # Suspicious queries logging # ############################################ ## Log queries for nonexistent zones ## These queries can reveal the presence of malware, broken/obsolete applications, ## and devices signaling their presence to 3rd parties. [nx_log] ## Path to the query log file (absolute, or relative to the same directory as the executable file) # file = 'nx.log' ## Query log format (currently supported: tsv and ltsv) format = 'tsv' ###################################################### # Pattern-based blocking (blacklists) # ###################################################### ## Blacklists are made of one pattern per line. 
###############################
#        Query logging        #
###############################

## Log client queries to a file

[query_log]

## Path to the query log file (absolute, or relative to the same directory as the executable file)

# file = 'query.log'

## Query log format (currently supported: tsv and ltsv)

format = 'tsv'

## Do not log these query types, to reduce verbosity. Keep empty to log everything.

# ignored_qtypes = ['DNSKEY', 'NS']


############################################
#        Suspicious queries logging        #
############################################

## Log queries for nonexistent zones
## These queries can reveal the presence of malware, broken/obsolete applications,
## and devices signaling their presence to 3rd parties.

[nx_log]

## Path to the query log file (absolute, or relative to the same directory as the executable file)

# file = 'nx.log'

## Query log format (currently supported: tsv and ltsv)

format = 'tsv'


######################################################
#        Pattern-based blocking (blacklists)         #
######################################################

## Blacklists are made of one pattern per line. Example of valid patterns:
##
##      example.com
##      =example.com
##      *sex*
##      ads.*
##      ads*.example.*
##      ads*.example[0-9]*.com
##
## Example blacklist files can be found at https://download.dnscrypt.info/blacklists/
## A script to build blacklists from public feeds can be found in the
## `utils/generate-domains-blacklists` directory of the dnscrypt-proxy source code.

[blacklist]

## Path to the file of blocking rules (absolute, or relative to the same directory as the executable file)

{{ "blacklist_file = 'blacklist.txt'" if algo_dns_adblocking else "" }}

## Optional path to a file logging blocked queries

# log_file = 'blocked.log'

## Optional log format: tsv or ltsv (default: tsv)

# log_format = 'tsv'


###########################################################
#        Pattern-based IP blocking (IP blacklists)        #
###########################################################

## IP blacklists are made of one pattern per line. Example of valid patterns:
##
##      127.*
##      fe80:abcd:*
##      192.168.1.4

[ip_blacklist]

## Path to the file of blocking rules (absolute, or relative to the same directory as the executable file)

blacklist_file = 'ip-blacklist.txt'

## Optional path to a file logging blocked queries

# log_file = 'ip-blocked.log'

## Optional log format: tsv or ltsv (default: tsv)

# log_format = 'tsv'


######################################################
#   Pattern-based whitelisting (blacklists bypass)   #
######################################################

## Whitelists support the same patterns as blacklists
## If a name matches a whitelist entry, the corresponding session
## will bypass both the name-based and IP-based filters.
##
## Time-based rules are also supported to make some websites only accessible at specific times of the day.

[whitelist]

## Path to the file of whitelisting rules (absolute, or relative to the same directory as the executable file)

# whitelist_file = 'whitelist.txt'

## Optional path to a file logging whitelisted queries

# log_file = 'whitelisted.log'

## Optional log format: tsv or ltsv (default: tsv)

# log_format = 'tsv'


##########################################
#        Time access restrictions        #
##########################################

## One or more weekly schedules can be defined here.
## Patterns in the name-based blacklist can optionally be followed by @schedule_name
## to apply the pattern only within the time ranges of the schedule 'schedule_name'.
##
## For example, the following rule in a blacklist file:
## *.youtube.* @time-to-sleep
## would block access to YouTube only on the days, and during the times of day,
## defined by the 'time-to-sleep' schedule.
##
## {after='21:00', before='7:00'}  matches 0:00-7:00 and 21:00-0:00
## {after='9:00',  before='18:00'} matches 9:00-18:00

[schedules]

# [schedules.'time-to-sleep']
# mon = [{after='21:00', before='7:00'}]
# tue = [{after='21:00', before='7:00'}]
# wed = [{after='21:00', before='7:00'}]
# thu = [{after='21:00', before='7:00'}]
# fri = [{after='23:00', before='7:00'}]
# sat = [{after='23:00', before='7:00'}]
# sun = [{after='21:00', before='7:00'}]

# [schedules.'work']
# mon = [{after='9:00', before='18:00'}]
# tue = [{after='9:00', before='18:00'}]
# wed = [{after='9:00', before='18:00'}]
# thu = [{after='9:00', before='18:00'}]
# fri = [{after='9:00', before='17:00'}]
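## For illustration, a hypothetical blacklist entry such as:
##      *.reddit.* @work
## would apply only during the time ranges of the (commented-out) 'work'
## schedule above.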
#########################
#        Servers        #
#########################

## Remote lists of available servers
## Multiple sources can be used simultaneously, but every source
## requires a dedicated cache file.
##
## Refer to the documentation for URLs of public sources.
##
## A prefix can be prepended to server names in order to
## avoid collisions if different sources share the same names
## for different servers. In that case, names listed in `server_names`
## must include the prefixes.
##
## If the `urls` property is missing, cache files and valid signatures
## must already be present; this doesn't prevent these cache files from
## expiring after `refresh_delay` hours.

[sources]

## An example of a remote source from https://github.com/DNSCrypt/dnscrypt-resolvers

[sources.'public-resolvers']
urls = ['https://raw.githubusercontent.com/DNSCrypt/dnscrypt-resolvers/master/v2/public-resolvers.md', 'https://download.dnscrypt.info/resolvers-list/v2/public-resolvers.md']
cache_file = '/var/cache/dnscrypt-proxy/public-resolvers.md'
minisign_key = 'RWQf6LRCGA9i53mlYecO4IzT51TGPpvWucNSCh1CBM0QTaLn73Y7GFO3'
prefix = ''

## Quad9 over DNSCrypt - https://quad9.net/

# [sources.quad9-resolvers]
# urls = ["https://www.quad9.net/quad9-resolvers.md"]
# minisign_key = "RWQBphd2+f6eiAqBsvDZEBXBGHQBJfeG6G+wJPPKxCZMoEQYpmoysKUN"
# cache_file = "quad9-resolvers.md"
# prefix = "quad9-"

## Another example source, with resolvers censoring some websites not appropriate for children
## This is a subset of the `public-resolvers` list, so enabling both is useless

# [sources.'parental-control']
# urls = ['https://raw.githubusercontent.com/DNSCrypt/dnscrypt-resolvers/master/v2/parental-control.md', 'https://download.dnscrypt.info/resolvers-list/v2/parental-control.md']
# cache_file = 'parental-control.md'
# minisign_key = 'RWQf6LRCGA9i53mlYecO4IzT51TGPpvWucNSCh1CBM0QTaLn73Y7GFO3'


## Optional, local, static list of additional servers
## Mostly useful for testing your own servers.

[static]
{% if custom_server_stamps %}{% for name, stamp in custom_server_stamps.items() %}
[static.'{{ name }}']
stamp = '{{ stamp }}'
{%- endfor %}{% endif %}

# [static.'myserver']
# stamp = 'sdns:AQcAAAAAAAAAAAAQMi5kbnNjcnlwdC1jZXJ0Lg'
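## For reference (hypothetical values, not part of the template): setting
## custom_server_stamps = {'mydoh': 'sdns:AgcAAAAAAAAA...'} in the Algo
## configuration would make the loop above render:
##
## [static.'mydoh']
## stamp = 'sdns:AgcAAAAAAAAA...'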
algo/roles/dns/templates/ip-blacklist.txt.j2
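{# The patterns below cover loopback, RFC 1918 private ranges, IPv4
   link-local, IPv6 unique-local (fd00::/8) and link-local (fe80::/10)
   prefixes, and their IPv4-mapped IPv6 forms. Answers pointing into these
   ranges are presumably blocked to blunt DNS rebinding attacks. This note
   is a Jinja comment, so it is stripped from the rendered file. #}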
0.0.0.0 10.* 127.* 169.254.* 172.16.* 172.17.* 172.18.* 172.19.* 172.20.* 172.21.* 172.22.* 172.23.* 172.24.* 172.25.* 172.26.* 172.27.* 172.28.* 172.29.* 172.30.* 172.31.* 192.168.* ::ffff:0.0.0.0 ::ffff:10.* ::ffff:127.* ::ffff:169.254.* ::ffff:172.16.* ::ffff:172.17.* ::ffff:172.18.* ::ffff:172.19.* ::ffff:172.20.* ::ffff:172.21.* ::ffff:172.22.* ::ffff:172.23.* ::ffff:172.24.* ::ffff:172.25.* ::ffff:172.26.* ::ffff:172.27.* ::ffff:172.28.* ::ffff:172.29.* ::ffff:172.30.* ::ffff:172.31.* ::ffff:192.168.* fd00::* fe80::*