ahembree / ansible-hms-docker
- Friday, January 14, 2022, 00:32:06
Ansible playbook for automated home media server setup
Ansible Playbook to setup an automated Home Media Server stack running on Docker across a variety of platforms with support for GPUs, SSL, DDNS, and more.
Requirements:
- root or sudo access
- 80/tcp (HTTP)
- 443/tcp (HTTPS)
- 32400/tcp (Plex)

This playbook assumes it is being run against a fresh install of an operating system that has not been configured yet. It should be safe to run on an existing system, BUT it may cause issues with Python since it installs Python 3.8, adds Docker repos, configures Nvidia GPU acceleration (if enabled), and configures network mounts (if enabled).

To ensure no conflicting changes are made to an existing system, you can run this playbook in "check mode" to see what changes would be made by supplying the additional `--check` flag (outlined again below with an example).

Setting up the individual container configurations, such as for Sonarr, Radarr, Overseerr, Prowlarr, etc., is outside the scope of this project. The purpose of this project is to ensure the necessary base containers are running.
By default, the content is laid out in the following directory structure:
Generated compose file location: /opt/hms-docker/docker-compose.yml
Container data directory: /opt/hms-docker/apps
Default mount path for local share: /opt/hms-docker/media_data/
Media folder that contains movie and tv show folders: <mount path>/_library
Movie folder: <media path>/Movies
TV Show folder: <media path>/TV_Shows
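Put together, the default layout looks roughly like this (a sketch assembled from the paths above):

```text
/opt/hms-docker/
├── docker-compose.yml      # generated compose file
├── apps/                   # container data directory
└── media_data/             # default mount path for local share
    └── _library/           # media folder
        ├── Movies/
        └── TV_Shows/
```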
It is recommended to read and follow this guide in its entirety, as there are many configuration options required to get the system running to its full potential.
Install git and clone the repository:
CentOS, Fedora, Alma, Rocky, Red Hat:
# Install git if not already installed
sudo yum install git -y
Ubuntu, Debian:
# Install git if not already installed
sudo apt-get install git -y
# Clone the repository and then go into the folder
git clone https://github.com/ahembree/ansible-hms-docker.git
cd ansible-hms-docker/
Install Ansible if not installed already:
CentOS, Fedora, Alma, Rocky, Red Hat
sudo yum install python38
### (If you wish to stay on Python 3.6, run `sudo yum install python3-pip` and then `pip3 install -U pip`)
sudo pip3 install ansible
Ubuntu, Debian
sudo apt-get install python3.8
### (If you wish to stay on Python 3.6, run `sudo apt-get install python3-pip` and then `pip3 install -U pip`)
sudo pip3 install ansible
Edit the `vars/default.yml` file to configure settings and variables used in the playbook.

Settings to configure:
- `plex_claim_token`: (optional) your Plex claim code from https://plex.tv/claim
- `hms_docker_domain`: the local domain name of the server to be used for proxy rules and SSL certificates (e.g. `home.local`)
- `transmission_vpn_user`: the username of the VPN user
- `transmission_vpn_pass`: the password of the VPN user
- `transmission_vpn_provider`: the VPN provider (e.g. `nordvpn`, see this page for the list of supported providers)
- `hms_docker_media_share_type`: the type of network share (`cifs`, `nfs`, `local`)

Required settings for wildcard SSL certificate generation:

- `traefik_ssl_enabled`: whether or not to generate a wildcard SSL certificate
- `traefik_ssl_dns_provider_zone`: the zone of the DNS provider (e.g. `example.com`; this will default to the `hms_docker_domain` if not modified)
- `traefik_ssl_dns_provider_code`: the code of the DNS provider (e.g. `cloudflare`, found at link above)
- `traefik_ssl_dns_provider_environment_vars`: the environment variables, along with their values, of the DNS provider you're using (e.g. `"CF_DNS_API_TOKEN": "<token>"` if using `cloudflare`, found at link above)
- `traefik_ssl_letsencrypt_email`: the email address to use for Let's Encrypt
- `traefik_ssl_use_letsencrypt_staging_url`: whether or not to use the Let's Encrypt staging URL for initial testing (`yes` or `no`) (default: `yes`). Set this to `no` to use the production server and get a valid certificate.

Required settings for the `hms_docker_media_share_type` of `cifs`:

- `nas_client_remote_cifs_path`: the path to the network share (e.g. `//nas.example.com/share`)
- `nas_client_cifs_username`: the username of the network share
- `nas_client_cifs_password`: the password of the network share
- `nas_client_cifs_opts`: the mount options for the network share (Google can help you find the correct options)

Required settings for the `hms_docker_media_share_type` of `nfs`:

- `nas_client_remote_nfs_path`: the path to the network share (e.g. `nas.example.com:/share`)
- `nas_client_nfs_opts`: the mount options for the network share (Google can help you find the correct options)

Required settings for using Cloudflare DDNS:

- `cloudflare_ddns_enabled`: `yes` or `no` to enable/disable Cloudflare DDNS (default: `no`)
- `cloudflare_api_token`: the API token of the Cloudflare account
- `cloudflare_zone`: the domain name of the Cloudflare zone (e.g. `example.com`)
- `cloudflare_ddns_subdomain`: the subdomain record (e.g. `overseerr` would be created as `overseerr.example.com`) (default: `overseerr`)
- `cloudflare_ddns_proxied`: `'true'` or `'false'` to enable/disable proxying the traffic through Cloudflare (default: `'true'`)

Optional settings to configure:
cp roles/hmsdocker/defaults/main.yml vars/default.yml
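As an illustration, a `vars/default.yml` for an NFS-backed setup with Cloudflare-issued certificates might look like the sketch below. Every value is a placeholder for your own environment, and the mount options shown are only an example:

```yaml
# Example vars/default.yml -- all values are placeholders
plex_claim_token: "claim-xxxxxxxxxxxxxxxxxxxx"   # from https://plex.tv/claim
hms_docker_domain: home.local

transmission_vpn_user: "vpn_username"
transmission_vpn_pass: "vpn_password"
transmission_vpn_provider: nordvpn

hms_docker_media_share_type: nfs
nas_client_remote_nfs_path: "nas.example.com:/share"
nas_client_nfs_opts: "rw,hard"                   # consult your NAS documentation

traefik_ssl_enabled: yes
traefik_ssl_dns_provider_zone: example.com
traefik_ssl_dns_provider_code: cloudflare
traefik_ssl_dns_provider_environment_vars:
  "CF_DNS_API_TOKEN": "<token>"
traefik_ssl_letsencrypt_email: admin@example.com
traefik_ssl_use_letsencrypt_staging_url: yes     # set to 'no' for a valid cert
```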
# If you're running against the local system, run the command below (to check what changes would be made first, add `--check` to the end of the command):
ansible-playbook -i inventory --connection local hms-docker.yml
# If you wish to run it against a remote host, add the host to the `inventory` file and then run:
ansible-playbook -i inventory hms-docker.yml
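For the remote case, a minimal `inventory` entry could look like the line below. The hostname, user, and privilege-escalation settings are placeholders for your own host:

```ini
; Example inventory entry (placeholder hostname and user)
mediaserver.home.local ansible_user=admin ansible_become=true
```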
Once the playbook has finished running, it may take up to a few minutes for the SSL certificate to be generated (if enabled).
If you do not already have a "wildcard" DNS record set up on your LOCAL DNS server for the domain you used (such as *.home.local
), create this record to point to the IP address of the server. If you enabled Cloudflare DDNS, a public "overseerr" A record will be created.
You can also create individual A records for each container listed in the table below.
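As one example, if your local DNS server happens to be dnsmasq, a wildcard record for `home.local` could be added with the `address` option (the IP address below is a placeholder for your server):

```
# dnsmasq.conf: resolve *.home.local (and home.local itself) to the server
address=/home.local/192.168.1.50
```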
If the above DNS requirements are met, you can then access the containers by using the following URLs (substituting {{ domain }}
for the domain you used):
Plex: https://plex.{{ domain }}
Sonarr: https://sonarr.{{ domain }}
Radarr: https://radarr.{{ domain }}
Bazarr: https://bazarr.{{ domain }}
Overseerr: https://overseerr.{{ domain }}
Prowlarr: https://prowlarr.{{ domain }}
Transmission: https://transmission.{{ domain }}
Tautulli: https://tautulli.{{ domain }}
Traefik: https://traefik.{{ domain }}
NZBGet: https://nzbget.{{ domain }}
When connecting Prowlarr to Sonarr, Radarr, etc., you can use the name of the container (e.g. `prowlarr` or `radarr`) and then define the container port to connect to (e.g. `prowlarr:9696` or `radarr:7878`).
If you choose to expose the container ports on the host (by setting `container_expose_ports: yes` in the `vars/default.yml` file), see below for which ports are mapped to which container on the host.
| Service Name | Container Name | Host Port (if enabled) | Container Port | Accessible via Traefik |
|---|---|---|---|---|
| Plex | plex | 32400 | 32400 | ✅ |
| Sonarr | sonarr | 8989 | 8989 | ✅ |
| Radarr | radarr | 7878 | 7878 | ✅ |
| Prowlarr | prowlarr | 9696 | 9696 | ✅ |
| Overseerr | overseerr | 5055 | 5055 | ✅ |
| Transmission | transmission | 9091 | 9091 | ✅ |
| Transmission Proxy | transmission-proxy | 8081 | 8080 | ☐ |
| Portainer | portainer | 9000 | 9000 | ✅ |
| Bazarr | bazarr | 6767 | 6767 | ✅ |
| Tautulli | tautulli | 8181 | 8181 | ✅ |
| Traefik | traefik | 8080 | 8080 | ✅ |
| NZBGet | nzbget | 6789 | 6789 | ✅ |
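These mappings correspond to standard Docker Compose port entries in the generated compose file. For example, the Transmission Proxy row above (host port 8081, container port 8080) would translate to roughly this illustrative fragment, not the exact generated file:

```yaml
services:
  transmission-proxy:
    ports:
      - "8081:8080"   # host port 8081 -> container port 8080
```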
If you only want to generate the config files for docker-compose and Traefik, you can run the following command:
ansible-playbook -i inventory --connection local generate-configs.yml
By default, it will output these configs into /opt/hms-docker/