API extensions

The changes below were introduced to the LXD API after the 1.0 API was finalized.

They are all backward compatible and can be detected by client tools by looking at the api_extensions field in GET /1.0/.


A storage.zfs_remove_snapshots daemon configuration key was introduced.

It's a boolean that defaults to false and that, when set to true, instructs LXD to remove any snapshots that stand in the way when attempting to restore another one.

This is needed as ZFS will only let you restore the latest snapshot.
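Since ZFS can only roll back to the most recent snapshot, restoring an older one means destroying everything newer first. A minimal sketch of that constraint (function name hypothetical):

```python
def snapshots_to_remove(snapshots, target):
    """Given a container's snapshots ordered oldest to newest, return the
    newer snapshots that must be removed before `target` can be restored,
    since ZFS only lets you roll back to the latest snapshot."""
    if target not in snapshots:
        raise ValueError(f"unknown snapshot: {target}")
    return snapshots[snapshots.index(target) + 1:]
```

With storage.zfs_remove_snapshots set to true, LXD performs this cleanup automatically instead of refusing the restore.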


A boot.host_shutdown_timeout container configuration key was introduced.

It's an integer which indicates how long LXD should wait for the container to stop before killing it.

Its value is only used on clean LXD daemon shutdown. It defaults to 30s.


A boot.stop.priority container configuration key was introduced.

It's an integer which indicates the priority of a container during shutdown.

Containers will shut down starting with the highest priority level.

Containers with the same priority will shut down in parallel. It defaults to 0.
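The ordering rule above can be sketched as a small grouping function (name hypothetical): containers are batched by priority, highest first, with each batch stopped in parallel.

```python
from itertools import groupby

def shutdown_batches(priorities):
    """Order containers for shutdown: highest boot.stop.priority first;
    containers sharing a priority form one batch stopped in parallel."""
    ordered = sorted(priorities.items(), key=lambda kv: -kv[1])
    return [[name for name, _ in group]
            for _, group in groupby(ordered, key=lambda kv: kv[1])]
```

For example, a database at priority 10 stops before two web frontends at priority 5, which stop together before everything left at the default of 0.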


A number of new syscall-related container configuration keys were introduced.

  • security.syscalls.blacklist_default
  • security.syscalls.blacklist_compat
  • security.syscalls.blacklist
  • security.syscalls.whitelist

See configuration.md for how to use them.


This indicates support for PKI authentication mode.

In this mode, the client and server both must use certificates issued by the same PKI.

See security.md for details.


A last_used_at field was added to the GET /1.0/containers/<name> endpoint.

It is a timestamp of the last time the container was started.

If a container has been created but not started yet, the last_used_at field will be 1970-01-01T00:00:00Z


Add support for the ETag header on all relevant endpoints.

This adds the following HTTP header on answers to GET:

  • ETag (SHA-256 of user modifiable content)

And adds support for the following HTTP header on PUT requests:

  • If-Match (ETag value retrieved through previous GET)

This makes it possible to GET a LXD object, modify it and PUT it without risking a race condition in which LXD or another client modified the object in the meantime.
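The GET/modify/PUT flow can be sketched from the server's side as a conditional update (the exact content LXD hashes is internal; canonical JSON serialization here is an assumption of the sketch):

```python
import hashlib
import json

def compute_etag(obj):
    """SHA-256 over the user-modifiable content. The serialization LXD
    actually hashes is an implementation detail; canonical JSON is used
    here only for illustration."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def conditional_put(stored, new, if_match=None):
    """Apply `new` only if `if_match` (an ETag from a prior GET) still
    matches the stored object; otherwise signal 412 Precondition Failed."""
    if if_match is not None and if_match != compute_etag(stored):
        return 412, stored
    return 200, new
```

If another client changed the object between your GET and PUT, the stored ETag no longer matches and the update is rejected instead of silently clobbering the other change.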


Add support for the HTTP PATCH method.

PATCH allows for partial update of an object in place of PUT.


Add support for USB hotplug.


To use the LXD API from web browsers (e.g. via single-page applications), the client credentials (certificate) must be sent with each XHR; for this to happen, set the "withCredentials=true" flag on each XHR request.

Some browsers, such as Firefox and Safari, won't accept a server response that lacks the Access-Control-Allow-Credentials: true header. To ensure that the server returns a response with that header, set core.https_allowed_credentials=true.


This adds support for a compression_algorithm property when creating an image (POST /1.0/images).

Setting this property overrides the server default value (images.compression_algorithm).


This allows for creating and listing directories via the LXD API, and exports the file type via the X-LXD-type header, which can be either "file" or "directory" right now.


This adds support for retrieving cpu time for a running container.


Introduces a new server property storage.zfs_use_refquota which instructs LXD to set the "refquota" property instead of "quota" when setting a size limit on a container. LXD will also then use "usedbydataset" in place of "used" when being queried about disk utilization.

This effectively controls whether disk usage by snapshots should be considered as part of the container's disk space usage.


Adds a new storage.lvm_mount_options daemon configuration option which defaults to "discard" and allows the user to set additional mount options for the filesystem used by the LVM LV.


Network management API for LXD.

This includes:

  • Addition of the "managed" property on /1.0/networks entries
  • All the network configuration options (see configuration.md for details)
  • POST /1.0/networks (see RESTful API for details)
  • PUT /1.0/networks/<entry> (see RESTful API for details)
  • PATCH /1.0/networks/<entry> (see RESTful API for details)
  • DELETE /1.0/networks/<entry> (see RESTful API for details)
  • ipv4.address property on "nic" type devices (when nictype is "bridged")
  • ipv6.address property on "nic" type devices (when nictype is "bridged")
  • security.mac_filtering property on "nic" type devices (when nictype is "bridged")


Adds a new used_by field to profile entries listing the containers that are using it.


When a container is created in push mode, the client serves as a proxy between the source and target server. This is useful in cases where the target server is behind a NAT or firewall and cannot directly communicate with the source server, and therefore cannot operate in pull mode.


Introduces a new boolean "record-output" parameter for /1.0/containers/<name>/exec which, when set to "true" and combined with "wait-for-websocket" set to false, will record stdout and stderr to disk and make them available through the logs interface.

The URL to the recorded output is included in the operation metadata once the command is done running.

That output will expire similarly to other log files, typically after 48 hours.
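A sketch of the request body this involves (the field names match the exec API described above; the helper itself is hypothetical):

```python
def exec_request(command, record_output=True):
    """Body for POST /1.0/containers/<name>/exec that records stdout and
    stderr to disk instead of streaming them over websockets."""
    return {
        "command": command,
        "interactive": False,
        "wait-for-websocket": False,  # must be false when recording output
        "record-output": record_output,
    }
```

Once the operation completes, the operation metadata points at the recorded output, which is then subject to normal log expiry.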


Adds the following to the REST API:

  • ETag header on GET of a certificate
  • PUT of certificate entries
  • PATCH of certificate entries


Adds support to /1.0/containers/<name>/exec for forwarding signals sent to the client to the processes executing in the container. Currently SIGTERM and SIGHUP are forwarded; further signals may be added later.


Enables adding GPUs to a container.


Introduces a new image config key space. It is read-only and includes the properties of the parent image.


Transfer progress is now exported as part of the operation, on both sending and receiving ends. This shows up as a "fs_progress" attribute in the operation metadata.


Enables setting the security.idmap.isolated, security.idmap.size, and raw.idmap fields.


Adds two new keys, ipv4.firewall and ipv6.firewall, which when set to false will turn off the generation of iptables FORWARD rules. NAT rules will still be added as long as the matching ipv4.nat or ipv6.nat key is set to true.

Rules necessary for dnsmasq to work (DHCP/DNS) will always be applied if dnsmasq is enabled on the bridge.
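The interaction of these keys can be sketched as a small decision function (the defaults used here and the rule-set names are assumptions of the sketch, not LXD's internal representation):

```python
def bridge_rule_sets(config):
    """Which rule sets would be generated for a bridge given its
    ipv4/ipv6 firewall and NAT keys. Defaults assumed for the sketch:
    firewall on, NAT off."""
    rules = {"dhcp_dns"}  # always applied while dnsmasq runs on the bridge
    for family in ("ipv4", "ipv6"):
        if config.get(f"{family}.firewall", "true") == "true":
            rules.add(f"{family}_forward")
        if config.get(f"{family}.nat", "false") == "true":
            rules.add(f"{family}_nat")
    return rules
```

Note that disabling ipv4.firewall suppresses only the FORWARD rules for that family; NAT and dnsmasq rules are controlled independently.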


Introduces ipv4.routes and ipv6.routes which allow routing additional subnets to a LXD bridge.


Storage management API for LXD.

This includes:

  • GET /1.0/storage-pools
  • POST /1.0/storage-pools (see RESTful API for details)
  • GET /1.0/storage-pools/<name> (see RESTful API for details)
  • POST /1.0/storage-pools/<name> (see RESTful API for details)
  • PUT /1.0/storage-pools/<name> (see RESTful API for details)
  • PATCH /1.0/storage-pools/<name> (see RESTful API for details)
  • DELETE /1.0/storage-pools/<name> (see RESTful API for details)
  • GET /1.0/storage-pools/<name>/volumes (see RESTful API for details)
  • GET /1.0/storage-pools/<name>/volumes/<volume_type> (see RESTful API for details)
  • POST /1.0/storage-pools/<name>/volumes/<volume_type> (see RESTful API for details)
  • GET /1.0/storage-pools/<pool>/volumes/<volume_type>/<name> (see RESTful API for details)
  • POST /1.0/storage-pools/<pool>/volumes/<volume_type>/<name> (see RESTful API for details)
  • PUT /1.0/storage-pools/<pool>/volumes/<volume_type>/<name> (see RESTful API for details)
  • PATCH /1.0/storage-pools/<pool>/volumes/<volume_type>/<name> (see RESTful API for details)
  • DELETE /1.0/storage-pools/<pool>/volumes/<volume_type>/<name> (see RESTful API for details)
  • All storage configuration options (see configuration.md for details)


Implements DELETE in /1.0/containers/<name>/files


Implements the X-LXD-write header which can be one of overwrite or append.


Introduces ipv4.dhcp.expiry and ipv6.dhcp.expiry, which allow setting the DHCP lease expiry time.


Introduces the ability to rename a volume group by setting storage.lvm.vg_name.


Introduces the ability to rename a thin pool by setting storage.thinpool_name.


This adds a new vlan property to macvlan network devices.

When set, this will instruct LXD to attach to the specified VLAN. LXD will look for an existing interface for that VLAN on the host. If one can't be found it will create one itself and then use that as the macvlan parent.
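A sketch of that lookup (the `parent.vlan` interface naming convention used here is an assumption of this sketch; LXD also detects pre-existing interfaces for that VLAN):

```python
def vlan_parent(parent, vlan, host_interfaces):
    """Return the interface to use as the macvlan parent for a VLAN, plus
    whether it still needs to be created on the host."""
    candidate = f"{parent}.{vlan}"
    return candidate, candidate not in host_interfaces
```

If the VLAN interface already exists it is reused as-is; otherwise LXD creates it before attaching the macvlan device.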


Adds a new aliases field to POST /1.0/images allowing for aliases to be set at image creation/import time.


This introduces a new live attribute in POST /1.0/containers/<name>. Setting it to false tells LXD not to attempt running state transfer.


Introduces a new boolean container_only attribute. When set to true only the container will be copied or moved.


Introduces a new boolean storage_zfs_clone_copy property for ZFS storage pools. When set to false copying a container will be done through zfs send and receive. This will make the target container independent of its source container thus avoiding the need to keep dependent snapshots in the ZFS pool around. However, this also entails less efficient storage usage for the affected pool. The default value for this property is true, i.e. space-efficient snapshots will be used unless explicitly set to "false".


Introduces the ability to rename a unix-block/unix-char device inside the container by setting path, and adds the source attribute to specify the device on the host. If source is set without a path, path is assumed to be the same as source. If path is set without source and major/minor isn't set, source is assumed to be the same as path. At least one of them must be set.
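The fallback rules above can be sketched as (helper name hypothetical):

```python
def resolve_unix_device(dev):
    """Apply the path/source fallback rules for unix-block/unix-char
    device entries."""
    path, source = dev.get("path"), dev.get("source")
    if path is None and source is None:
        raise ValueError("at least one of path or source must be set")
    if source is None and "major" not in dev and "minor" not in dev:
        source = path  # host device assumed to match the container path
    if path is None:
        path = source  # keep the host name inside the container
    out = dict(dev)
    out["path"] = path
    if source is not None:
        out["source"] = source
    return out
```

When major/minor are given, the device node is created from those numbers, so no source fallback is needed.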


When rsync has to be invoked to transfer storage entities, setting rsync.bwlimit places an upper limit on the amount of socket I/O allowed.


This introduces a new tunnel.NAME.interface option for networks.

This key controls which host network interface is used for the VXLAN tunnel.


This introduces the btrfs.mount_options property for btrfs storage pools.

This key controls what mount options will be used for the btrfs storage pool.


This adds descriptions to entities like containers, snapshots, networks, storage pools and volumes.


This allows forcing a refresh for an existing image.


This introduces the ability to resize logical volumes by setting the size property in the container's root disk device.


This introduces a new security.idmap.base key, allowing the user to skip the map auto-selection process for isolated containers and specify what host uid/gid to use as the base.


This adds support for transferring symlinks through the file API. X-LXD-type can now be "symlink" with the request content being the target path.


This adds the target field to POST /1.0/containers/<name> which can be used to have the source LXD host connect to the target during migration.


Allows use of vlan property with physical network devices.

When set, this will instruct LXD to attach to the specified VLAN on the parent interface. LXD will look for an existing interface for that parent and VLAN on the host. If one can't be found it will create one itself. Then, LXD will directly attach this interface to the container.


This enables the storage API to delete storage volumes for images from a specific storage pool.


This adds support for editing a container's metadata.yaml and related templates via the API, by accessing URLs under /1.0/containers/<name>/metadata. It can be used to edit a container before publishing an image from it.


This enables migrating stateful container snapshots to new containers.


This adds a ceph storage driver.


This adds the ability to specify the ceph user.


This adds the instance_type field to the container creation request. Its value is expanded to LXD resource limits.


This records the actual source passed to LXD during storage pool creation.


This introduces the ceph.osd.force_reuse property for the ceph storage driver. When set to true, LXD will reuse an OSD storage pool that is already in use by another LXD instance.


This adds support for btrfs as a storage volume filesystem, in addition to ext4 and xfs.


This adds support for querying an LXD daemon for the system resources it has available.


This adds support for setting process limits such as maximum number of open files for the container via nofile. The format is limits.kernel.[limit name].
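A small sketch of the key format (the validity check against the host's rlimits is an assumption; the set of limits LXD actually accepts may differ):

```python
import resource

def kernel_limit_key(limit):
    """Config key for a process limit, per the limits.kernel.[limit name]
    format, e.g. 'nofile' -> 'limits.kernel.nofile'."""
    return f"limits.kernel.{limit}"

def is_known_rlimit(limit):
    """Rough sanity check: does the host libc expose a matching RLIMIT_*
    constant? Illustrative only."""
    return hasattr(resource, f"RLIMIT_{limit.upper()}")
```

So raising the open-file limit for a container means setting limits.kernel.nofile to the desired value.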


This adds support for renaming custom storage volumes.


This adds support for external authentication via Macaroons.


This adds support for SR-IOV enabled network devices.


This adds support to interact with the container console device and console log.


A new security.devlxd container configuration key was introduced. The key controls whether the /dev/lxd interface is made available to the container. If set to false, this effectively prevents the container from interacting with the LXD daemon.


This adds support for optimized memory transfer during live migration.


This adds support to use infiniband network devices.


This adds support for MAAS network integration.

When configured at the daemon level, it's then possible to attach a "nic" device to a particular MAAS subnet.


This adds a websocket API to the devlxd socket.

When connecting to /1.0/events over the devlxd socket, you will now receive a stream of events over the websocket.


This adds a new proxy device type to containers, allowing forwarding of connections between the host and container.


Introduces a new ipv4.dhcp.gateway network config key to set an alternate gateway.


This makes it possible to retrieve symlinks using the file API.


Adds a new /1.0/networks/NAME/leases API endpoint to query the lease database on bridges which run a LXD-managed DHCP server.


This adds support for the "required" property for unix devices.


This adds the ability to copy and move custom storage volumes locally, both within a storage pool and between storage pools.


Adds a "description" field to all operations.


Clustering API for LXD.

This includes the following new endpoints (see RESTful API for details):

  • GET /1.0/cluster
  • UPDATE /1.0/cluster
  • GET /1.0/cluster/members
  • GET /1.0/cluster/members/<name>
  • POST /1.0/cluster/members/<name>
  • DELETE /1.0/cluster/members/<name>

The following existing endpoints have been modified:

  • POST /1.0/containers accepts a new target query parameter
  • POST /1.0/storage-pools accepts a new target query parameter
  • GET /1.0/storage-pools/<name> accepts a new target query parameter
  • POST /1.0/storage-pools/<pool>/volumes/<type> accepts a new target query parameter
  • GET /1.0/storage-pools/<pool>/volumes/<type>/<name> accepts a new target query parameter
  • POST /1.0/storage-pools/<pool>/volumes/<type>/<name> accepts a new target query parameter
  • PUT /1.0/storage-pools/<pool>/volumes/<type>/<name> accepts a new target query parameter
  • PATCH /1.0/storage-pools/<pool>/volumes/<type>/<name> accepts a new target query parameter
  • DELETE /1.0/storage-pools/<pool>/volumes/<type>/<name> accepts a new target query parameter
  • POST /1.0/networks accepts a new target query parameter
  • GET /1.0/networks/<name> accepts a new target query parameter


This adds a new lifecycle message type to the events API.


This adds the ability to copy and move custom storage volumes between remotes.


Adds an nvidia.runtime config option for containers; setting it to true will have the NVIDIA runtime and CUDA libraries passed to the container.


This introduces the new candid.api.url config option and removes core.macaroon.endpoint.


This introduces the config keys candid.domains and candid.expiry. The former allows specifying allowed/valid Candid domains, the latter makes the macaroon's expiry configurable. The lxc remote add command now has a --domain flag which allows specifying a Candid domain.


This introduces a new candid.api.key option which allows setting the expected public key for the endpoint, allowing for safe use of an HTTP-only Candid server.


As the name implies, the vendorid field on USB devices attached to containers has now been made optional, allowing for all USB devices to be passed to a container (similar to what's done for GPUs).


This introduces a new internal volatile.idmap.current key which is used to track the current mapping for the container.

This effectively gives us:

  • volatile.last_state.idmap => On-disk idmap
  • volatile.idmap.current => Current kernel map
  • volatile.idmap.next => Next on-disk idmap

This is required to implement environments where the on-disk map isn't changed but the kernel map is (e.g. shiftfs).
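A sketch of such a kernel-only remap (the key values here are illustrative strings, not real idmap encodings):

```python
def remap_kernel_only(config, new_map):
    """Kernel-only remap (e.g. shiftfs): the on-disk maps recorded in
    volatile.last_state.idmap and volatile.idmap.next stay put; only the
    live kernel map in volatile.idmap.current moves."""
    out = dict(config)
    out["volatile.idmap.current"] = new_map
    return out
```

Because the files on disk were never re-shifted, the on-disk map keys are untouched; only the record of what the kernel currently maps changes.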