FCP Setup
Table of Contents
As an FCP
Install Container Runtime Environment
Optional-Setup a docker registry server
Create a Kubernetes Cluster
Install the Network Plugin
Install the NVIDIA Plugin
Install the Ingress-nginx Controller
Prerequisites
Before you install the Computing Provider, make sure the following resources are available:
✅ Possess a public IP
✅ Have a domain name (*.example.com)
✅ Have an SSL certificate
✅ Have at least one GPU
✅ At least 8 vCPUs
✅ Minimum 50GB SSD storage
✅ Minimum 32GB memory
✅ Minimum 50MB bandwidth
✅ Go version must be 1.21.7+ (you can refer here)
✅ Kubernetes installed, version v1.24.0+
Install Container Runtime Environment
If you plan to run a Kubernetes cluster, you need to install a container runtime on each node in the cluster so that Pods can run there (refer to here). You only need to choose one of the options below to install the container runtime environment.
Option 1: Install Docker and cri-dockerd (Recommended)
To install the Docker container runtime and cri-dockerd, follow the steps below:
Install Docker: please refer to the official documentation here.
Install cri-dockerd: cri-dockerd is a CRI (Container Runtime Interface) implementation for Docker. You can install it by referring here.
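As a sketch of the cri-dockerd step (the release version and Ubuntu package name below are assumptions; pick the package matching your distribution from the cri-dockerd releases page):

```shell
# download and install a cri-dockerd release package (version/distro are assumptions)
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.4/cri-dockerd_0.3.4.3-0.ubuntu-jammy_amd64.deb
sudo dpkg -i cri-dockerd_0.3.4.3-0.ubuntu-jammy_amd64.deb

# enable the CRI socket so the kubelet can reach Docker through cri-dockerd
sudo systemctl enable --now cri-docker.socket
```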
Option 2: Install Docker and Containerd
Install Docker: please refer to the official documentation here.
Install Containerd: Containerd is an industry-standard container runtime that can be used as an alternative to Docker. To install containerd on your system, follow the instructions in Getting started with containerd.
Optional-Setup a docker registry server
If you are using Docker and have only one node, this step can be skipped.
If you have deployed a Kubernetes cluster with multiple nodes, it is recommended to set up a private Docker Registry to allow other nodes to quickly pull images within the intranet.
Create a directory /docker_repo on your Docker server. It will be mounted into the registry container as persistent storage for our Docker registry.
Launch the docker registry container:
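A minimal sketch of the two steps above, using the official registry:2 image (port 5000 is the registry default):

```shell
# create the persistent storage directory
sudo mkdir -p /docker_repo

# run the registry, mounting the directory as its storage backend
sudo docker run -d \
  --name registry \
  --restart=always \
  -p 5000:5000 \
  -v /docker_repo:/var/lib/registry \
  registry:2
```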
Add the registry server to the node
If you have installed Docker and cri-dockerd (Option 1), you can update every node's configuration:
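For example, /etc/docker/daemon.json on each node could contain the following (a sketch; merge it with any existing settings):

```json
{
  "insecure-registries": ["<Your_registry_server_IP>:5000"]
}
```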
Then restart the Docker service
If you have installed containerd (Option 2), you can update every node's configuration:
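For containerd 1.x, a sketch of the mirror entry in /etc/containerd/config.toml (merge it with your existing CRI plugin settings):

```toml
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."<Your_registry_server_IP>:5000"]
  endpoint = ["http://<Your_registry_server_IP>:5000"]
```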
Then restart the containerd service
<Your_registry_server_IP>: the intranet IP address of your registry server.
Finally, you can check the installation with the following command:
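For example, by querying the registry's catalog endpoint (assuming the default port 5000):

```shell
curl http://<Your_registry_server_IP>:5000/v2/_catalog
# an empty registry returns: {"repositories":[]}
```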
Create a Kubernetes Cluster
To create a Kubernetes cluster, you can use a cluster bootstrapping tool such as kubeadm. The steps below can be followed:
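A hedged sketch of bootstrapping with kubeadm (the pod CIDR and CRI socket are assumptions; the socket path shown applies if you chose Option 1 with cri-dockerd):

```shell
# initialize the control plane
sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket unix:///run/cri-dockerd.sock

# make kubectl work for your user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```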
Install the Network Plugin
Calico is an open-source networking and network security solution for containers, virtual machines, and native host-based workloads. Calico supports a broad range of platforms including Kubernetes, OpenShift, Mirantis Kubernetes Engine (MKE), OpenStack, and bare metal services.
To install Calico, you can follow the steps below; more information can be found here.
Step 1: Install the Tigera Calico operator and custom resource definitions
Step 2: Install Calico by creating the necessary custom resources
Step 3: Confirm that all of the pods are running with the following command
Step 4: Remove the taints on the control plane so that you can schedule pods on it
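The four steps above can be sketched as follows (the Calico version v3.26.1 is an assumption; use the current release):

```shell
# step 1: operator and custom resource definitions
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml

# step 2: the necessary custom resources
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml

# step 3: wait until all calico-system pods are Running
watch kubectl get pods -n calico-system

# step 4: allow scheduling on the control plane
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
```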
If you have installed it correctly, all pods listed by the command kubectl get po -A should be in the Running state.
Note: if you are running a single-node Kubernetes cluster, remember to remove the taint on the control-plane node; otherwise, tasks cannot be scheduled onto it.
Install the NVIDIA Plugin
If your computing provider wants to offer GPU resources, the NVIDIA device plugin should be installed; please follow the steps below:
The recommended NVIDIA Linux driver version is 470.xx or later.
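A sketch using the NVIDIA device plugin DaemonSet (the plugin version is an assumption; check the k8s-device-plugin releases page for the current one):

```shell
kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.14.1/nvidia-device-plugin.yml
```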
If you have installed it correctly, the plugin pods listed by the command kubectl get po -n kube-system should be in the Running state.
Install the Ingress-nginx Controller
The ingress-nginx controller is an Ingress controller for Kubernetes that uses NGINX as a reverse proxy and load balancer. You can run the following command to install it:
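For example (the controller version is an assumption; pick the manifest matching your Kubernetes version):

```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml
```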
If you have installed it correctly, you can verify the result with the following commands:
Run kubectl get po -n ingress-nginx
Run kubectl get svc -n ingress-nginx
Install and configure Nginx
Install the Nginx service on the server
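On a Debian/Ubuntu server, for instance (an assumption; use your distribution's package manager):

```shell
sudo apt update && sudo apt install -y nginx
```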
Add a configuration for your domain name. Assume your domain name is *.example.com
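A sketch of such a server block (the certificate paths and the upstream port are assumptions; <ingress_nginx_port> stands for the intranet port mapped to the ingress-nginx-controller service port 80):

```nginx
server {
    listen 443 ssl;
    server_name *.example.com;              # generic (wildcard) domain name
    ssl_certificate /path/to/fullchain.pem;     # certificate for HTTPS
    ssl_certificate_key /path/to/privkey.pem;   # key for HTTPS

    location / {
        # forward to the ingress-nginx-controller service port 80
        proxy_pass http://127.0.0.1:<ingress_nginx_port>;
        proxy_set_header Host $host;
    }
}
```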
Note:
server_name: a generic domain name
ssl_certificate and ssl_certificate_key: the certificate files for HTTPS
proxy_pass: the port should be the intranet port corresponding to the ingress-nginx-controller service port 80
Reload the Nginx configuration
Map your catch-all (wildcard) subdomain (*.example.com) to a public IP address
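For example, to validate and reload the configuration (assuming a systemd-managed Nginx):

```shell
sudo nginx -t && sudo systemctl reload nginx
```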
Install the Hardware resource-exporter
The resource-exporter plugin is developed to collect node resources continuously; the computing provider reports them to the Lagrange Auction Engine to match space requirements. To receive computing tasks, every node in the cluster must install the plugin. You just need to run the following command:
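A hedged sketch (the manifest file name is an assumption; use the resource-exporter manifest provided in the official CP repository):

```shell
# deploy the resource-exporter to every node as a DaemonSet
kubectl apply -f resource-exporter.yaml
```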
If you have installed it correctly, the resource-exporter pods listed by the command kubectl get po -n kube-system should be in the Running state.
Install Redis service
Install the redis-server
Run Redis service:
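On Debian/Ubuntu, for instance (an assumption; use your distribution's package manager):

```shell
sudo apt install -y redis-server
sudo systemctl enable --now redis-server
redis-cli ping   # should reply PONG
```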
Build and config the Computing Provider
Build the Computing Provider
First, clone the code to your local machine:
Then build the Computing Provider by following the steps below:
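Combined, the clone-and-build steps might look like this (the repository URL and build command are assumptions; check the project README for the exact steps):

```shell
git clone https://github.com/swanchain/go-computing-provider.git
cd go-computing-provider
go build -o computing-provider .
```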
Update Configuration: the computing provider's configuration sample is located at ./go-computing-provider/config.toml.sample
Edit the necessary configuration files according to your deployment requirements. These files may include settings for the computing-provider components, container runtime, Kubernetes, and other services.
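For example, start from the sample and then edit it:

```shell
cp ./go-computing-provider/config.toml.sample ./go-computing-provider/config.toml
```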
Note: an example WalletWhiteList hosted on GitHub can be found here.
Install AI Inference Dependency
It is necessary for the Computing Provider to deploy the AI inference endpoint; but if you do not want to support this feature, you can skip it.
Config and Receive UBI Tasks
Step 1: Prerequisites: Perform Filecoin Commit2 (fil-c2) UBI tasks.
Download the tool for Filecoin Commit2 task parameters:
Download parameters (specify path with FIL_PROOFS_PARAMETER_CACHE variable):
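For example, set the cache directory before downloading (the directory itself is your choice):

```shell
# choose where the Filecoin proof parameters will be cached
export FIL_PROOFS_PARAMETER_CACHE=/var/tmp/filecoin-proof-parameters
mkdir -p "$FIL_PROOFS_PARAMETER_CACHE"
```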
Configure environment variables in fil-c2.env under the CP repo ($CP_PATH):
Adjust the value of RUST_GPU_TOOLS_CUSTOM_GPU based on the GPU used by the CP's Kubernetes cluster for fil-c2 tasks. For more device choices, please refer to this page: https://github.com/filecoin-project/bellperson
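A sketch of fil-c2.env (the GPU entry is an example in bellperson's "name:cores" format; adjust it to your hardware):

```shell
# $CP_PATH/fil-c2.env
FIL_PROOFS_PARAMETER_CACHE=/var/tmp/filecoin-proof-parameters
RUST_GPU_TOOLS_CUSTOM_GPU="GeForce RTX 3080:8704"
```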
Step 2: Enable UBI tasks in the CP's config.toml:
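A hypothetical illustration of the setting (the section and key names below are assumptions; use the UBI-related entry shown in config.toml.sample):

```toml
# hypothetical; verify against config.toml.sample
[UBI]
EnableUbi = true
```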
Step 3: Initialize a Wallet and Deposit Swan-ETH
Generate a new wallet address:
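For example (assuming the wallet subcommand of the CP binary):

```shell
./computing-provider wallet new
```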
Example output:
Deposit Swan-ETH to the generated wallet address as a gas fee:
Example output:
Note: Follow this guide to claim Swan-ETH and bridge it to Swan Saturn Chain.
Step 4: Initialization
Deploy a contract with CP's basic info:
Output:
Step 5: Account Management
Use the computing-provider account subcommands to update CP details:
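To list the available subcommands (assuming the standard --help flag):

```shell
./computing-provider account --help
```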
Step 6: Check the Status of UBI-Task
To check the UBI task list, use the following command:
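A hypothetical invocation (the exact subcommand is an assumption; consult computing-provider --help):

```shell
./computing-provider ubi list
```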
Example output:
Start the Computing Provider
You can run computing-provider using the following command:
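A sketch of starting the service in the background (CP_PATH and the run subcommand are assumptions; check the project README):

```shell
export CP_PATH=<your_cp_repo_path>
nohup ./computing-provider run >> cp.log 2>&1 &
```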
CLI of Computing Provider
Check the current list of tasks running on the CP; display detailed information for tasks using -v
Retrieve detailed information for a specific task using its space_uuid
Delete a task by its space_uuid
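Hypothetical invocations of the three operations above (the subcommand names are assumptions; consult computing-provider task --help):

```shell
./computing-provider task list -v
./computing-provider task get <space_uuid>
./computing-provider task delete <space_uuid>
```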
Getting Help
For usage questions or issues, reach out to the Swan team in the Discord channel or open a new issue here on GitHub.
License
Apache