You can find documentation related to Leafcloud Kubernetes at https://docs.leaf.cloud/en/latest/
You can manage and create Kubernetes clusters (shoots) using Gardener in two main ways: through the Gardener dashboard, or from the command line with gardenctl.
Both methods offer flexible and efficient cluster management. For detailed instructions, visit the Leafcloud Gardenctl Documentation.
Navigate to Projects on your dashboard. Select your project and tick the checkbox to enable managed Kubernetes.
Note: managed Kubernetes cannot be switched off, but it does not accrue additional costs.
Peerless auto-healing & scaling, easy updates and certificates, and great GPU support. Take a look at our Kubernetes page for more information.
Leafcloud is committed to staying close to the latest stable Kubernetes release. We currently run Kubernetes version 1.29 and offer a preview of 1.30.
Yes, Leafcloud Managed Kubernetes has excellent GPU support.
Yes, the base cluster price (80)
Yes, Gardener uses the standard Kubernetes autoscaling implementation.
When you create a PVC in Kubernetes, a storage volume is automatically created for you in OpenStack based on the specified storage class.
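For example, a minimal PVC sketch (the storage class name here is an assumption; run `kubectl get storageclass` to list the classes available in your cluster):
~~~
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: default  # assumption: replace with a class from your cluster
~~~
Once the PVC is bound, a matching volume appears in your OpenStack project.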
When you create a LoadBalancer service in Kubernetes, a load balancer is automatically provisioned for you in OpenStack and attached to the service.
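A minimal LoadBalancer service sketch (the names, labels, and ports below are placeholders):
~~~
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app       # placeholder: must match your pod labels
  ports:
    - port: 80        # port exposed by the load balancer
      targetPort: 8080  # port your container listens on
~~~
After applying this, `kubectl get service my-app` will show the external IP once the OpenStack load balancer is ready.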
Calico and Cilium
Gardener has its own command-line tool called gardenctl. You can use gardenctl to connect to your cluster and manage your garden.
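As a sketch, a typical gardenctl session looks like the following (the garden, project, and shoot names are placeholders; see the Leafcloud Gardenctl documentation for the exact values to use):
~~~
# Target your shoot cluster (names are placeholders)
gardenctl target --garden my-garden --project my-project --shoot my-shoot

# Point kubectl at the targeted shoot for this shell session
eval "$(gardenctl kubectl-env bash)"

# Verify the connection
kubectl get nodes
~~~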
The Kubernetes setup consists of several parts.
This setup ensures that the control plane is managed by us, while the customer's resources (worker nodes, network, and security groups) are provisioned in their own OpenStack project.
Yes, you can use Let's Encrypt with a Kubernetes load balancer by using ingress-nginx in combination with cert-manager, as described below.
First, run:
~~~
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.45.0/deploy/static/provider/cloud/deploy.yaml
~~~
This will create a new namespace called ingress-nginx, inside which the NGINX ingress controller is deployed.
Next, apply the following manifests, replacing the placeholders with your own values (this assumes cert-manager is already installed in your cluster):
~~~
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: <your-project>-ingress
  namespace: <your-project>
  annotations:
    cert-manager.io/issuer: "letsencrypt-prod"
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
    - hosts:
        - join.leaf.cloud
      secretName: <your-project>-ingress-tls
  rules:
    - host: join.leaf.cloud
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: <your-project>-entrypoint
                port:
                  number: 8000
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  namespace: <your-project>
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: my@email.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx
~~~
Yes, you can do this by configuring Leafcloud's S3-compatible Object Storage in pgo.yml. To do this, follow the steps below:
**Step 1: Create credentials for the EC2-compatible Leafcloud API**
First, you need to create credentials for the EC2-compatible Leafcloud API:
~~~
openstack ec2 credentials create
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| default_project_id | 17e5a06a7ce34sjfu5b6e0eejt00663c |
| domain_id | default |
| enabled | True |
| id | d407b0kckdu3994j5jjg8403c0973cf9 |
| name | myuser |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
~~~
Then, configure the S3 parameters in pgo.yml:
~~~
Cluster:
  BackrestS3Bucket: my-postgresql-backups-example
  BackrestS3Endpoint: leafcloud.store
  BackrestS3Region: europe-nl-ams1
  BackrestS3URIStyle: host
  BackrestS3VerifyTLS: true
~~~
Click [here](https://access.crunchydata.com/documentation/postgres-operator/4.7.2/architecture/disaster-recovery/#using-s3) for more information.
**Step 2: Schedule your backups**
Second, schedule your backups as follows:
~~~
pgo create schedule hacluster --schedule="0 1 * * *" \
--schedule-type=pgbackrest --pgbackrest-backup-type=full
pgo create schedule hacluster --schedule="0 */3 * * *" \
--schedule-type=pgbackrest --pgbackrest-backup-type=incr
~~~
Click [here](https://access.crunchydata.com/documentation/postgres-operator/4.7.2/pgo-client/common-tasks/#disaster-recovery-backups-restores) for more information.
The autohealer will replace instances that become unresponsive, for example because they have run out of memory. You can disable it by entering the following:
~~~
kubectl -n kube-system patch daemonset magnum-auto-healer -p '{"spec": {"template": {"spec": {"nodeSelector": {"non-existing": "true"}}}}}'
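To re-enable the autohealer later, you can remove the dummy node selector again. A sketch using a JSON patch (adjust the path if you use other node selectors on this daemonset):
~~~
kubectl -n kube-system patch daemonset magnum-auto-healer --type=json \
  -p '[{"op": "remove", "path": "/spec/template/spec/nodeSelector/non-existing"}]'
~~~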
~~~
You can disable your cluster autoscaler as follows:
~~~
kubectl -n kube-system scale deployment cluster-autoscaler --replicas=0
~~~
The user that you would probably need is:
~~~
core
~~~
To connect, enter the following command (replacing the IP address with that of your node):
~~~
ssh -i ~/.ssh/id_rsa core@45.135.56.227
~~~