Lagrange Definition Language (LDL)

Lagrange Definition Language (LDL) is essentially a YAML-based configuration language used for defining deployment requirements in Swan Chain. Similar to how Dockerfiles are used to define container builds, LDL files (deploy.yaml) are used to specify how your application should be deployed and run on the Swan Chain network.

LDL is a human-friendly data standard for declaring deployment attributes. The LDL file is a "form" to request resources from the Network. LDL is compatible with the YAML standard and similar to Docker Compose files.

Configuration files may end in .yml or .yaml. A complete deployment has the following sections:

networking

Networking - allowing connectivity to and between workloads - can be configured via the LDL file for a deployment. By default, workloads in a deployment group are isolated - nothing else is allowed to connect to them. This restriction can be relaxed.
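
For example (a minimal sketch; the service name web and the image are illustrative assumptions, not taken from the samples below), a workload opts in to outside connectivity through its expose stanza, described in detail under services.expose:

services:
  web:
    image: nginx                 # illustrative image
    expose:
      - port: 80
        as: 80
        to:
          - global: true         # allow connections from outside the deployment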

version

Indicates the version of the configuration file. Currently, only "2.0" is accepted.
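
For example, the samples later on this page all begin with:

version: "2.0"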

services

The top-level services entry contains a map of workloads to be run on the deployment. Each key is a service name; values are a map containing the following keys:

  • image (required): Docker image of the container. Best practice: avoid :latest image tags, as Computing Providers heavily cache images.

  • expose (required): Entities allowed to connect to the service. See services.expose.

  • depends-on (optional): Dependencies for this service; indicates that the service relies on one or more other services to function properly.

  • command (optional): Custom command to use when executing the container.

  • args (optional): Arguments to the custom command used when executing the container.

  • env (optional): Environment variables to set in the running container. See services.env.

  • ready (optional): NOTE - this field is marked for future use and currently has no impact on deployments.

  • model (optional): A configuration section that defines a list of models for the service.
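
Putting these keys together, a minimal sketch of a single service entry might look like the following (the service name api, the image, and the environment variable are illustrative assumptions, not part of the samples below):

services:
  api:
    image: example/api:v1.0      # hypothetical image; avoid :latest tags
    env:
      - LOG_LEVEL=info           # hypothetical environment variable
    depends-on:
      - db                       # requires a service named db in the same deployment
    expose:
      - port: 8080
        as: 80
        to:
          - global: true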

services.depends-on

depends-on specifies dependencies for a particular service, indicating that the service relies on one or more other services to function properly.

depends-on:
  - db

services.env

A list of environment variables to expose to the running container.

env:
- "GF_PATHS_CONFIG=/opt/grafana/grafana.ini"

services.expose

Notes Regarding Port Use in the Expose Stanza

  • HTTPS is possible in Lagrange deployments but only self-signed certs are generated.

  • To implement signed certs the deployment must be front-ended via a solution such as Cloudflare.

  • You can expose any port other than 80 as the ingress (HTTP, HTTPS) port by using the as: 80 directive, provided the app understands HTTP / HTTPS. Example of exposing a React web app using this method:

    expose:
      - port: 3000 
        as: 80
  • In the LDL it is only necessary to expose port 80 for web apps. With this specification, both ports 80 and 443 are exposed.

expose is a list describing what can connect to the service. Each entry is a map containing one or more of the following fields:

  • port (required): Container port to expose.

  • as (optional): Port number to expose the container port as.

  • accept (optional): List of hosts to accept connections for.

  • proto (optional): Protocol type (tcp, udp, or http).

  • to (optional): List of entities allowed to connect. See services.expose.to.
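
As a sketch combining these fields (the port numbers are illustrative assumptions):

expose:
  - port: 3000           # container port
    as: 80               # public port (proto defaults to http for 80)
    to:
      - global: true     # reachable from outside the deployment
  - port: 9000
    as: 9000
    proto: tcp           # non-80 ports default to tcp; shown here explicitly
    to:
      - global: true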

The as value governs the default proto value as follows:

  • port 80: http, https

  • all other ports: tcp

NOTE - when as is not set, it defaults to the value of the mandatory port directive.

NOTE - when a port is exposed with as: 80 (HTTP), the Kubernetes ingress controller also makes the application available over HTTPS, though with the default self-signed ingress certificates.

services.expose.to

expose.to is a list of clients to accept connections from. Each item is a map with one or more of the following entries:

  • service (value: a service in this deployment): Allow the given service to connect.
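
For instance (mirroring the PostgreSQL sample later on this page), restricting a port so that only a named service may connect looks like this:

expose:
  - port: 5432
    as: 5432
    to:
      - service: db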

services.model

model is a configuration section that defines a list of models for the service. Each model in the list has the following properties:

  • name: This property specifies the name of the model.

  • url: This property specifies the URL from which the model's data can be downloaded.

  • dir: This property specifies the directory path within the container where the model's files will be stored after they are downloaded from the specified URL.

Example:

services:
  stable-diffusion-ui:
    image: sonic868/stable-diffusion:v1.0
    models:
      - name: illustroV3.safetensors
        url: https://civitai.com/api/download/models/151490
        dir: "/easy-diffusion/models/stable-diffusion"

profiles

The profiles section contains named compute and placement profiles to be used in the deployment.

deployment

The deployment section defines how to deploy the services. It is a mapping of service name to deployment configuration.

Each service to be deployed has an entry in the deployment section. This entry maps datacenter profiles to compute profiles to create the final desired configuration for the resources required by the service.

Example:

deployment:
  minesweeper:
    lagrange:
      count: 1

This says that instances of the minesweeper service should be deployed to a Computing Provider within Lagrange.

The final deploy.yaml should look like this:

version: "2.0"

services:
  minesweeper:
    image: creepto/minesweeper
    expose:
      - port: 3000
        as: 80
    
deployment:
  minesweeper:
    lagrange:
      count: 1

Check out here to interact with the sample.

Here is another sample deploy.yaml:

version: "2.0"

services:
  db:
    image: postgres:11.6-alpine
    env:
      - POSTGRES_USER=codimd
      - POSTGRES_PASSWORD=rootadmin
      - POSTGRES_DB=codimd
    expose:
        - port: 5432
          as: 5432
          to:
            - service: db
    ready-cmd:
        - "psql"
        - "-w"
        - "-U"
        - "codimd"
        - "-d"
        - "codimd"
        - "-c"
        - "SELECT 1"
  codimd:
    image: hackmdio/hackmd:2.4.1
    env:
      - CMD_DB_URL=postgres://codimd:rootadmin@127.0.0.1:5432/codimd
      - CMD_USECDN=false
    depends-on:
      - db
    expose:
        - port: 3000
          as: 3000
          to:
            - global: true

deployment:
  db:
    lagrange:
      count: 1
  codimd:
    lagrange:
      count: 1

Check out here to interact with the sample.
