Lagrange Definition Language (LDL)
Lagrange Definition Language (LDL) is a YAML-based configuration language for defining deployment requirements on Swan Chain. Just as a Dockerfile defines a container build, an LDL file (`deploy.yaml`) specifies how your application should be deployed and run on the Swan Chain network.
LDL is a human-friendly data standard for declaring deployment attributes. The LDL file is a "form" to request resources from the Network. LDL is compatible with the YAML standard and similar to Docker Compose files.
Configuration files may end in `.yml` or `.yaml`. A complete deployment has the following sections:
networking
Networking - allowing connectivity to and between workloads - can be configured via the LDL file for a deployment. By default, workloads in a deployment group are isolated - nothing else is allowed to connect to them. This restriction can be relaxed.
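For instance, connectivity can be opened up through a service's `expose` stanza (described under services.expose below). A minimal sketch, with hypothetical service name, image, and ports:

```yaml
services:
  api:
    image: example/api:1.0   # hypothetical image
    expose:
      - port: 8080           # container port
        as: 80               # public-facing port
        to:
          - global: true     # assumed: relax the default isolation to allow outside connections
```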
version
Indicates the version of the configuration file. Currently, only version "2.0" is accepted.
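In `deploy.yaml` this is simply:

```yaml
version: "2.0"
```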
services
The top-level `services` entry contains a map of workloads to be run on the deployment. Each key is a service name; each value is a map containing the following keys:
| Name | Required | Meaning |
| --- | --- | --- |
| `image` | Yes | Docker image of the container |
| `expose` | Yes | Entities allowed to connect to the service. See services.expose |
| `depends-on` | No | Dependencies for a particular service; indicates that the service relies on or requires certain other services to function properly. See services.depends-on |
| `command` | No | Custom command to use when executing the container |
| `args` | No | Arguments to the custom command used when executing the container |
| `env` | No | Environment variables to set in the running container. See services.env |
| | No | NOTE - this field is marked for future use and currently has no impact on deployments |
| `model` | No | A configuration section that defines a list of models for the service. See service.model |
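A minimal `services` stanza might look like the following sketch; the service name, image, ports, and environment variable are illustrative assumptions:

```yaml
services:
  web:
    image: nginx:1.25        # hypothetical image
    expose:
      - port: 80
        as: 80
        to:
          - global: true     # assumed: allow public connections
    env:
      - APP_MODE=production  # hypothetical environment variable
```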
services.depends-on
`depends-on` specifies dependencies for a particular service; it indicates that the mentioned service relies on or requires certain other services to function properly.
services.env
A list of environment variables to expose to the running container.
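Assuming the usual `NAME=value` list form, a sketch with hypothetical variables:

```yaml
env:
  - API_BASE_URL=https://example.com   # hypothetical
  - LOG_LEVEL=info                     # hypothetical
```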
services.expose
Notes Regarding Port Use in the Expose Stanza
HTTPS is possible in Lagrange deployments, but only self-signed certificates are generated.
To use signed certificates, the deployment must be fronted by a solution such as Cloudflare.
You can expose any port other than 80 as the ingress (HTTP/HTTPS) port by using the `as: 80` directive, provided the application speaks HTTP/HTTPS. Example of exposing a React web app using this method:
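The sketch below assumes a React app listening on container port 3000; the image name and the `global: true` entry are illustrative assumptions.

```yaml
services:
  web:
    image: example/react-app:latest   # hypothetical image
    expose:
      - port: 3000        # port the React app listens on inside the container
        as: 80            # expose it as the standard HTTP ingress port
        to:
          - global: true  # assumed: allow connections from outside the deployment
```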
In the LDL it is only necessary to expose port 80 for web apps. With this specification, both ports 80 and 443 are exposed.
`expose` is a list describing what can connect to the service. Each entry is a map containing one or more of the following fields:
| Name | Required | Meaning |
| --- | --- | --- |
| `port` | Yes | Container port to expose |
| `as` | No | Port number to expose the container port as |
| `accept` | No | List of hosts to accept connections for |
| `proto` | No | Protocol type (`http`, `https`, or `tcp`) |
| `to` | No | List of entities allowed to connect. See services.expose.to |
The `as` value governs the default `proto` value as follows:

| `as` | default `proto` |
| --- | --- |
| 80 | http, https |
| all others | tcp |

NOTE - when `as` is not set, it defaults to the value of the mandatory `port` directive.
NOTE - when a port is exposed with `as: 80` (HTTP), the Kubernetes ingress controller makes the application available over HTTPS as well, though with the default self-signed ingress certificates.
services.expose.to
`expose.to` is a list of clients to accept connections from. Each item is a map with one or more of the following entries:
| Name | Value | Default | Description |
| --- | --- | --- | --- |
| `service` | A service in this deployment | | Allow the given service to connect |
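For example, to allow only another service in the same deployment (here a hypothetical `api` service) to connect to a database port:

```yaml
expose:
  - port: 5432        # hypothetical container port
    to:
      - service: api  # only the "api" service in this deployment may connect
```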
service.model
`model` is a configuration section that defines a list of models for the service. Each model in the list has the following properties:
- `name`: the name of the model.
- `url`: the URL from which the model's data can be downloaded.
- `dir`: the directory path within the container where the model's files will be stored after they are downloaded from the specified URL.
Example:
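A sketch, with placeholder model name, URL, and directory:

```yaml
services:
  inference:
    image: example/inference:latest   # hypothetical image
    model:
      - name: example-model                                 # placeholder model name
        url: https://example.com/models/example-model.bin   # placeholder download URL
        dir: /models                                        # where the downloaded files are stored in the container
```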
profiles
The `profiles` section contains named compute and placement profiles to be used in the deployment.
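A compute profile sketch, assuming an SDL-style `resources` block; the profile name and resource sizes are illustrative:

```yaml
profiles:
  compute:
    minesweeper:        # hypothetical profile name, referenced from the deployment section
      resources:
        cpu:
          units: 1
        memory:
          size: 2Gi
        storage:
          size: 10Gi
```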
deployment
The `deployment` section defines how to deploy the services. It is a mapping of service name to deployment configuration.
Each service to be deployed has an entry in `deployment`. This entry maps datacenter profiles to compute profiles to create the final desired configuration for the resources required by the service.
Example:
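A sketch, assuming an SDL-style mapping; the placement name `lagrange`, the profile name, and the instance count are assumptions:

```yaml
deployment:
  minesweeper:
    lagrange:               # assumed placement name
      profile: minesweeper  # compute profile defined under profiles
      count: 1              # number of instances to run
```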
This says that instances of the `minesweeper` service should be deployed to a Computing Provider within Lagrange.
The final `deploy.yaml` should look like this:
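A sketch of how the pieces fit together for the `minesweeper` example; the image name, resource sizes, placement name, and `global: true` entry are assumptions:

```yaml
version: "2.0"

services:
  minesweeper:
    image: example/minesweeper:latest   # hypothetical image
    expose:
      - port: 80
        as: 80
        to:
          - global: true                # assumed: allow public connections

profiles:
  compute:
    minesweeper:
      resources:
        cpu:
          units: 1
        memory:
          size: 2Gi
        storage:
          size: 10Gi

deployment:
  minesweeper:
    lagrange:                 # assumed placement name
      profile: minesweeper
      count: 1
```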
Check out here to interact with the sample.
Here is another sample `deploy.yaml`:
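For instance, a sketch of a service that also downloads a model at startup; all names, URLs, ports, and sizes are placeholders:

```yaml
version: "2.0"

services:
  inference:
    image: example/inference:latest   # hypothetical image
    expose:
      - port: 7860                    # hypothetical application port
        as: 80
        to:
          - global: true              # assumed: allow public connections
    model:
      - name: example-model                                 # placeholder
        url: https://example.com/models/example-model.bin   # placeholder
        dir: /models

profiles:
  compute:
    inference:
      resources:
        cpu:
          units: 2
        memory:
          size: 4Gi
        storage:
          size: 20Gi

deployment:
  inference:
    lagrange:               # assumed placement name
      profile: inference
      count: 1
```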
Check out here to interact with the sample.