Add docs for using the Kubernetes monitor on self-hosted instances #2755

Draft pull request: 3 commits into base: main

src/pages/docs/installation/automating-installation.mdx (1 addition, 1 deletion)
@@ -31,7 +31,7 @@ Here is an example of what the script might look like:
```bash
"[INSTALLLOCATION]\Octopus.Server.exe" create-instance --instance "<instance name>" --config "<new instance config path>" --serverNodeName "<machine name>"
"[INSTALLLOCATION]\Octopus.Server.exe" database --instance "<instance name>" --connectionString "<database connection string>" --create
"[INSTALLLOCATION]\Octopus.Server.exe" configure --instance "<instance name>" --upgradeCheck "True" --upgradeCheckWithStatistics "True" --usernamePasswordIsEnabled "True" --webForceSSL "False" --webListenPrefixes "<url to expose>" --commsListenPort "10943"
"[INSTALLLOCATION]\Octopus.Server.exe" configure --instance "<instance name>" --upgradeCheck "True" --upgradeCheckWithStatistics "True" --usernamePasswordIsEnabled "True" --webForceSSL "False" --webListenPrefixes "<url to expose>" --commsListenPort "10943" --grpcListenPort "8443"
"[INSTALLLOCATION]\Octopus.Server.exe" service --instance "<instance name>" --stop
"[INSTALLLOCATION]\Octopus.Server.exe" admin --instance "<instance name>" --username "<admin username>" --email "<admin email>" --password "<admin password>"
"[INSTALLLOCATION]\Octopus.Server.exe" license --instance "<instance name>" --licenseBase64 "<a very long license string>"
src/pages/docs/installation/index.mdx (6 additions, 1 deletion)
@@ -25,7 +25,12 @@ There are three components to an Octopus Deploy instance:
- **SQL Server Database** Most data used by the Octopus Server nodes is stored in this database. SQL Server 2016+ or Azure SQL is required.
- **Files or BLOB Storage** Some larger files - like [packages](/docs/packaging-applications/package-repositories), artifacts, and deployment task logs - aren't suitable to be stored in the database and are stored on the file system instead. This can be a local folder, a network file share, or a cloud provider's storage.

All inbound traffic to Octopus Deploy is either http/https (ports 80/443) or polling tentacles (port 10943). For production instances of Octopus Deploy, it is best to configure a [load balancer](/docs/installation/load-balancers) to route traffic to your instance. Leveraging a load balancer offers numerous benefits, such as redirecting users to a maintenance page while the instance is down for upgrading, as well as making it much easier to configure High Availability later.
All inbound traffic to Octopus Deploy is via:
- HTTP/HTTPS (ports 80/443)
- Polling Tentacles (port 10943)
- gRPC, used by the Kubernetes monitor (port 8443)

For production instances of Octopus Deploy, it is best to configure a [load balancer](/docs/installation/load-balancers) to route traffic to your instance. Leveraging a load balancer offers numerous benefits, such as redirecting users to a maintenance page while the instance is down for upgrading, as well as making it much easier to configure High Availability later.
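
If a host firewall or cloud security group sits in front of the instance, each of these ports needs an inbound rule. As a rough sketch only (assuming a Linux front end with `ufw`; substitute your own firewall or security-group tooling):

```bash
# Hypothetical inbound rules for the ports listed above; adjust to your environment.
sudo ufw allow 80/tcp      # HTTP portal and API
sudo ufw allow 443/tcp     # HTTPS portal and API
sudo ufw allow 10943/tcp   # Polling Tentacles
sudo ufw allow 8443/tcp    # gRPC (Kubernetes monitor)
```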

## Self-hosted Octopus Server

@@ -13,6 +13,7 @@ This example assumes:

- NGINX will terminate your SSL connections.
- [Polling Tentacles](/docs/infrastructure/deployment-targets/tentacle/tentacle-communication/#polling-tentacles) are not required.
- [Kubernetes monitors](/docs/kubernetes/targets/kubernetes-agent/kubernetes-monitor) are not required.

Our starting configuration:

@@ -46,6 +46,7 @@ services:
ports:
- 8080:8080
- 11111:10943
- 8443:8443
depends_on:
- db
volumes:
@@ -92,6 +92,7 @@ Read Docker [docs](https://docs.docker.com/engine/reference/commandline/run/#pub
|**8080**| Port for API and HTTP portal |
|**443**| SSL Port for API and HTTP portal |
|**10943**|Port for Polling Tentacles to contact the server|
|**8443**|Port for gRPC clients to contact the server|
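
As an illustrative sketch only (the admin credentials, connection string, and host port mappings are placeholders, not prescriptive values), a `docker run` invocation publishing all four ports might look like this:

```bash
# Hypothetical example: replace the environment values with your own settings.
# Ports: 80 -> API/HTTP portal, 443 -> SSL portal, 10943 -> Polling Tentacles, 8443 -> gRPC.
docker run --detach --name octopusdeploy \
  -e "ACCEPT_EULA=Y" \
  -e "ADMIN_USERNAME=admin" \
  -e "ADMIN_PASSWORD=<admin password>" \
  -e "DB_CONNECTION_STRING=<database connection string>" \
  -p 80:8080 -p 443:443 -p 10943:10943 -p 8443:8443 \
  octopusdeploy/octopusdeploy
```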


### Volume mounts
@@ -148,7 +148,7 @@ spec:

Unlike the Octopus Web Portal, Polling Tentacles must be able to connect to each Octopus node individually to pick up new tasks. Our Octopus HA cluster assumes two nodes; therefore, a load balancer is required for each node to allow direct access.

The following YAML creates load balancers with separate public IPs for each node. They direct web traffic to each node on port `80` and Polling Tentacle traffic on port `10943`.
The following YAML creates load balancers with separate public IPs for each node. They direct web traffic to each node on port `80`, Polling Tentacle traffic on port `10943`, and gRPC traffic on port `8443`.

The `octopus-0` load balancer:
```yaml
@@ -167,6 +167,10 @@
spec:
port: 10943
targetPort: 10943
protocol: TCP
- name: grpc
port: 8443
targetPort: 8443
protocol: TCP
selector:
statefulset.kubernetes.io/pod-name: octopus-0
```
@@ -189,6 +193,10 @@
```yaml
spec:
port: 10943
targetPort: 10943
protocol: TCP
- name: grpc
port: 8443
targetPort: 8443
protocol: TCP
selector:
statefulset.kubernetes.io/pod-name: octopus-1
```
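
After both manifests are applied, a quick sanity check (a sketch; the service names are whatever your manifests define, not values from this guide) is to confirm that each load balancer has an external IP and exposes ports `80`, `10943`, and `8443`:

```bash
# List services with their external IPs and exposed ports.
kubectl get services -o wide

# Inspect an individual node's load balancer in more detail (name is a placeholder).
kubectl describe service <octopus-0-service-name>
```
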
@@ -218,7 +226,7 @@ Most of the YAML in this guide can be used with any Kubernetes provider. However
To find out more about storage classes, refer to the [Kubernetes Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/) documentation.
:::

Whilst it is possible to mount external storage by manually defining Persistent Volume definitions in YAML, Cloud providers offering Kubernetes managed services typically include the option to dynamically provision file storage based on persistent volume claim definitions.
While it is possible to mount external storage by manually defining Persistent Volume definitions in YAML, Cloud providers offering Kubernetes managed services typically include the option to dynamically provision file storage based on persistent volume claim definitions.

The next sections describe how to create file storage for use with Octopus running in Kubernetes using different Kubernetes providers to dynamically provision file storage.

@@ -760,7 +768,7 @@ Once the old Stateful Set has been deleted, the new fresh copy of the Stateful S

It's recommended best practice to access your Octopus instance over a secure HTTPS connection.

Whilst this guide doesn't include specific instructions on how to configure access to Octopus Server in Kubernetes using an SSL/TLS certificate, there are many guides available.
While this guide doesn't include specific instructions on how to configure access to Octopus Server in Kubernetes using an SSL/TLS certificate, there are many guides available.

In Kubernetes, this can be configured using an [Ingress Controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/), for example [NGINX](https://kubernetes.github.io/ingress-nginx/user-guide/tls/).

@@ -28,7 +28,7 @@ TimeoutStartSec=0
RestartSec=5
Environment="HOME=/root"
SyslogIdentifier=docker-octopusdeploy
ExecStartPre=-/usr/bin/docker create --net octopus -m 0b -e "ADMIN_USERNAME=admin" -e "[email protected]" -e "ADMIN_PASSWORD=Password01!" -e "ACCEPT_EULA=Y" -e "DB_CONNECTION_STRING=Server=mssql,1433;Database=Octopus;User Id=SA;Password=Password01!;ConnectRetryCount=6" -e "MASTER_KEY=6EdU6IWsCtMEwk0kPKflQQ==" -e "DISABLE_DIND=Y" -p 80:8080 -p 10943:10943 --restart=always --name octopusdeploy octopusdeploy/octopusdeploy
ExecStartPre=-/usr/bin/docker create --net octopus -m 0b -e "ADMIN_USERNAME=admin" -e "[email protected]" -e "ADMIN_PASSWORD=Password01!" -e "ACCEPT_EULA=Y" -e "DB_CONNECTION_STRING=Server=mssql,1433;Database=Octopus;User Id=SA;Password=Password01!;ConnectRetryCount=6" -e "MASTER_KEY=6EdU6IWsCtMEwk0kPKflQQ==" -e "DISABLE_DIND=Y" -p 80:8080 -p 10943:10943 -p 8443:8443 --restart=always --name octopusdeploy octopusdeploy/octopusdeploy
ExecStart=/usr/bin/docker start -a octopusdeploy
ExecStop=-/usr/bin/docker stop --time=0 octopusdeploy

@@ -17,6 +17,12 @@ This page will help you diagnose and solve issues with Kubernetes Live Object St

Some firewalls may prevent the applications from making outbound connections over non-standard ports. If this is preventing the Kubernetes monitor from connecting to your Octopus Server, configure your environment to allow outbound connections.
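
A quick way to confirm the network path is open is a TCP probe against the gRPC port from a machine (or debug pod) subject to the same outbound rules as the cluster. A rough sketch, assuming a netcat build that supports `-vz` and using a placeholder hostname and port:

```bash
# Succeeds only if an outbound TCP connection to the Octopus Server gRPC port can be established.
nc -vz octopus.example.com 8443
```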

For customers running a self-hosted instance, ensure that Octopus Server's `grpcListenPort` parameter is set to `8443`, or that the Kubernetes monitor's `server-grpc-url` parameter has been updated to match.
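
For reference, the server-side listen port is set with the same `configure` command used during installation (a sketch based on the example earlier in these docs; the install location and instance name are placeholders):

```bash
"[INSTALLLOCATION]\Octopus.Server.exe" configure --instance "<instance name>" --grpcListenPort "8443"
```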

:::div{.warning}
The [Kubernetes monitor]() is not yet compatible with High Availability Octopus clusters; attempting to install it may result in unexpected behavior.
:::

## Runtime

### Failed to establish connection with Kubernetes Monitor \{#failed-to-establish-connection-with-kubernetes-monitor}
@@ -7,6 +7,7 @@ description: How to install/update the agent when running Octopus in an HA Clust
navOrder: 50
---


## Octopus Deploy HA Cluster

Similarly to Polling Tentacles, the Kubernetes agent must have a URL for each individual node in the HA Cluster so that it can receive commands from every node. These URLs must be provided when registering the agent, or some deployments may fail depending on which node the task executes on.
@@ -55,3 +56,9 @@
```bash
helm upgrade --atomic \
<agent-release-name> \
oci://registry-1.docker.io/octopusdeploy/kubernetes-agent
```

## Kubernetes Monitor

:::div{.warning}
The [Kubernetes monitor]() is not yet compatible with High Availability Octopus clusters; attempting to install it may result in unexpected behavior.
:::