Networking API and UX documentation

More doc updates will follow

Signed-off-by: Madhu Venugopal <madhu@docker.com>
master
Madhu Venugopal 2015-09-28 18:57:03 -07:00 committed by Tibor Vass
parent 0afb6cc862
commit da80c0929a
16 changed files with 250 additions and 892 deletions


@@ -15,6 +15,7 @@ weight = 6
Currently, you can extend Docker by adding a plugin. This section contains the following topics:
* [Understand Docker plugins](/extend/plugins)
* [Write a volume plugin](/extend/plugins_volume)
* [Docker plugin API](/extend/plugin_api)
* [Understand Docker plugins](/extend/plugins.md)
* [Write a volume plugin](/extend/plugins_volume.md)
* [Write a network plugin](/extend/plugins_network.md)
* [Docker plugin API](/extend/plugin_api.md)


@@ -17,8 +17,10 @@ plugins.
## Types of plugins
Plugins extend Docker's functionality. They come in specific types. For
example, a [volume plugin](/extend/plugins_volume) might enable Docker
volumes to persist across multiple Docker hosts.
example, a [volume plugin](/extend/plugins_volume.md) might enable Docker
volumes to persist across multiple Docker hosts, and a
[network plugin](/extend/plugins_network.md) might provide network plumbing
using your favorite networking technology, such as VXLAN overlay, ipvlan, or EVPN.
Currently Docker supports volume and network driver plugins. In the future it
will support additional plugin types.


@@ -1,4 +1,4 @@
# Experimental: Docker network driver plugins
# Docker network driver plugins
Docker supports network driver plugins via
[LibNetwork](https://github.com/docker/libnetwork). Network driver plugins are
@@ -21,7 +21,9 @@ commands. For example,
Some network driver plugins are listed in [plugins.md](/docs/extend/plugins.md)
The network thus created is owned by the plugin, so subsequent commands
referring to that network will also be run through the plugin.
referring to that network will also be run through the plugin. For example:

    docker run --net=mynet busybox top
## Network driver plugin protocol
@@ -36,10 +38,3 @@ Google Groups, or the IRC channel #docker-network.
- [#14083](https://github.com/docker/docker/issues/14083) Feedback on
experimental networking features
Other pertinent issues:
- [#13977](https://github.com/docker/docker/issues/13977) UI for using networks
- [#14023](https://github.com/docker/docker/pull/14023) --default-network option
- [#14051](https://github.com/docker/docker/pull/14051) --publish-service option
- [#13441](https://github.com/docker/docker/pull/13441) (Deprecated) Networks API & UI


@@ -0,0 +1,30 @@
<!--[metadata]>
+++
title = "network connect"
description = "The network connect command description and usage"
keywords = ["network, connect"]
[menu.main]
parent = "smn_cli"
+++
<![end-metadata]-->
# network connect
Usage: docker network connect [OPTIONS] NETWORK CONTAINER
Connects a container to a network
--help=false Print usage
Connects a running container to a network. Once connected, the container can communicate with the other containers that belong to the same network.
```
$ docker network create -d overlay multi-host-network
$ docker run -d --name=container1 busybox top
$ docker network connect multi-host-network container1
```
The container is connected to the network created and managed by the driver (the multi-host overlay driver in the example above) or by an external network plugin.

Multiple containers can be connected to the same network. Containers in the same network can then communicate with each other, and if the driver or plugin supports multi-host connectivity, containers connected to the same multi-host network can communicate seamlessly across hosts.
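As a sketch of how connectivity can be checked (the second container name and ping-by-name resolution are assumptions; name resolution depends on the driver and Docker version):

```
$ docker run -d --name=container2 busybox top
$ docker network connect multi-host-network container2
# assumes the network provides name resolution for connected containers
$ docker exec container2 ping -c 1 container1
```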


@@ -0,0 +1,32 @@
<!--[metadata]>
+++
title = "network create"
description = "The network create command description and usage"
keywords = ["network, create"]
[menu.main]
parent = "smn_cli"
+++
<![end-metadata]-->
# network create
Usage: docker network create [OPTIONS] NETWORK-NAME
Creates a new network with a name specified by the user
-d, --driver= Driver to manage the Network
--help=false Print usage
Creates a new network that containers can connect to. If the driver supports multi-host networking, the created network is made available across all the hosts in the cluster. The daemon does its best to identify network name conflicts, but it is the user's responsibility to make sure the network name is unique across the cluster. You create a network and then configure a container to use it, for example:
```
$ docker network create -d overlay multi-host-network
$ docker run -itd --net=multi-host-network busybox
```
The container is connected to the network created and managed by the driver (the multi-host overlay driver in the example above) or by an external network plugin.

Multiple containers can be connected to the same network. Containers in the same network can then communicate with each other, and if the driver or plugin supports multi-host connectivity, containers connected to the same multi-host network can communicate seamlessly across hosts.
*Note*: The UX needs enhancement to accept network options that can be passed through to the drivers.
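To confirm the network exists, you can list the networks the daemon knows about (a sketch; the IDs shown are illustrative and will differ on your host):

```
$ docker network ls
NETWORK ID          NAME                 DRIVER
7fca4eb8c647        bridge               bridge
9f904ee27bf5        none                 null
cf03ee007fb4        host                 host
b5c4f39d4f8c        multi-host-network   overlay
```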


@@ -0,0 +1,27 @@
<!--[metadata]>
+++
title = "network disconnect"
description = "The network disconnect command description and usage"
keywords = ["network, disconnect"]
[menu.main]
parent = "smn_cli"
+++
<![end-metadata]-->
# network disconnect
Usage: docker network disconnect [OPTIONS] NETWORK CONTAINER
Disconnects a container from a network
--help=false Print usage
Disconnects a running container from a network.
```
$ docker network create -d overlay multi-host-network
$ docker run -d --net=multi-host-network --name=container1 busybox top
$ docker network disconnect multi-host-network container1
```
The container is now disconnected from the network.
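To verify, you can inspect the network afterwards; the disconnected container should no longer appear in the `containers` section of the output (a sketch reusing the names above):

```
$ docker network inspect multi-host-network
```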


@@ -0,0 +1,49 @@
<!--[metadata]>
+++
title = "network inspect"
description = "The network inspect command description and usage"
keywords = ["network, inspect"]
[menu.main]
parent = "smn_cli"
+++
<![end-metadata]-->
# network inspect
Usage: docker network inspect [OPTIONS] NETWORK
Displays detailed information on a network
--help=false Print usage
Returns information about a network. By default, this command renders all results
in a JSON object.
Example output:
```
$ sudo docker run -itd --name=container1 busybox
f2870c98fd504370fb86e59f32cd0753b1ac9b69b7d80566ffc7192a82b3ed27
$ sudo docker run -itd --name=container2 busybox
bda12f8922785d1f160be70736f26c1e331ab8aaf8ed8d56728508f2e2fd4727
$ sudo docker network inspect bridge
{
"name": "bridge",
"id": "7fca4eb8c647e57e9d46c32714271e0c3f8bf8d17d346629e2820547b2d90039",
"driver": "bridge",
"containers": {
"bda12f8922785d1f160be70736f26c1e331ab8aaf8ed8d56728508f2e2fd4727": {
"endpoint": "e0ac95934f803d7e36384a2029b8d1eeb56cb88727aa2e8b7edfeebaa6dfd758",
"mac_address": "02:42:ac:11:00:03",
"ipv4_address": "172.17.0.3/16"
},
"f2870c98fd504370fb86e59f32cd0753b1ac9b69b7d80566ffc7192a82b3ed27": {
"endpoint": "31de280881d2a774345bbfb1594159ade4ae4024ebfb1320cb74a30225f6a8ae",
"mac_address": "02:42:ac:11:00:02",
"ipv4_address": "172.17.0.2/16"
}
}
}
```


@@ -0,0 +1,32 @@
<!--[metadata]>
+++
title = "network ls"
description = "The network ls command description and usage"
keywords = ["network, list"]
[menu.main]
parent = "smn_cli"
+++
<![end-metadata]-->
# docker network ls
Usage: docker network ls [OPTIONS]
Lists all the networks created by the user
--help=false Print usage
-l, --latest=false Show the latest network created
-n=-1 Show n last created networks
--no-trunc=false Do not truncate the output
-q, --quiet=false Only display numeric IDs
Lists all the networks Docker knows about. This includes the networks that span across multiple hosts in a cluster.
Example output:
```
$ sudo docker network ls
NETWORK ID NAME DRIVER
7fca4eb8c647 bridge bridge
9f904ee27bf5 none null
cf03ee007fb4 host host
```


@@ -0,0 +1,23 @@
<!--[metadata]>
+++
title = "network rm"
description = "the network rm command description and usage"
keywords = ["network, rm"]
[menu.main]
parent = "smn_cli"
+++
<![end-metadata]-->
# network rm
Usage: docker network rm [OPTIONS] NETWORK
Deletes a network
--help=false Print usage
Removes a network. You cannot remove a network that is in use by one or more containers.
```
$ docker network rm my-network
```
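If containers are still attached to the network, the removal fails; disconnect them first and then remove the network (a sketch with hypothetical names):

```
$ docker network disconnect my-network container1
$ docker network rm my-network
```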


@@ -132,6 +132,12 @@ namespaces, cgroups, capabilities, and filesystem access controls. It allows
you to manage the lifecycle of the container performing additional operations
after the container is created.
## libnetwork
libnetwork provides a native Go implementation for creating and managing container
network namespaces and other network resources. It manages the networking lifecycle
of the container, performing additional operations after the container is created.
## link
links provide an interface to connect Docker containers running on the same host
@@ -149,7 +155,12 @@ installs Docker on them, then configures the Docker client to talk to them.
*Also known as : docker-machine*
## overlay
## overlay network driver
The overlay network driver provides out-of-the-box multi-host network connectivity
for Docker containers in a cluster.
## overlay storage driver
OverlayFS is a [filesystem](#filesystem) service for Linux which implements a
[union mount](http://en.wikipedia.org/wiki/Union_mount) for other file systems.


@@ -245,11 +245,12 @@ of the containers.
## Network settings
--dns=[] : Set custom dns servers for the container
--net="bridge" : Set the Network mode for the container
--net="bridge" : Connects a container to a network
'bridge': creates a new network stack for the container on the docker bridge
'none': no networking for this container
'container:<name|id>': reuses another container network stack
'host': use the host network stack inside the container
'NETWORK': connects the container to a user-created network (using the `docker network create` command)
--add-host="" : Add a line to /etc/hosts (host:IP)
--mac-address="" : Sets the container's Ethernet device's MAC address
@@ -269,12 +270,12 @@ By default, the MAC address is generated using the IP address allocated to the
container. You can set the container's MAC address explicitly by providing a
MAC address via the `--mac-address` parameter (format:`12:34:56:78:9a:bc`).
Supported networking modes are:
Supported networks:
<table>
<thead>
<tr>
<th class="no-wrap">Mode</th>
<th class="no-wrap">Network</th>
<th>Description</th>
</tr>
</thead>
@@ -304,19 +305,25 @@ Supported networking modes are:
its *name* or *id*.
</td>
</tr>
<tr>
<td class="no-wrap"><strong>NETWORK</strong></td>
<td>
Connects the container to a user-created network (using the `docker network create` command)
</td>
</tr>
</tbody>
</table>
#### Mode: none
#### Network: none
With the networking mode set to `none` a container will not have a
With the network set to `none`, a container will not have
access to any external routes. The container will still have a
`loopback` interface enabled in the container, but it does not have any
routes to external traffic.
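For instance, a container started with `--net=none` shows only a loopback interface (a minimal sketch; exact output varies):

```
$ docker run --rm --net=none busybox ip addr show
# only the loopback interface (lo) is listed; there is no external interface
```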
#### Mode: bridge
#### Network: bridge
With the networking mode set to `bridge` a container will use docker's
With the network set to `bridge`, a container will use Docker's
default networking setup. A bridge is set up on the host, commonly named
`docker0`, and a pair of `veth` interfaces will be created for the
container. One side of the `veth` pair will remain on the host attached
@@ -325,9 +332,9 @@ container's namespaces in addition to the `loopback` interface. An IP
address will be allocated for containers on the bridge's network and
traffic will be routed through this bridge to the container.
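Because `bridge` is the default, the two commands below are equivalent (a minimal sketch):

```
$ docker run -d --name=c1 busybox top
$ docker run -d --net=bridge --name=c2 busybox top
```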
#### Mode: host
#### Network: host
With the networking mode set to `host` a container will share the host's
With the network set to `host`, a container will share the host's
network stack and all interfaces from the host will be available to the
container. The container's hostname will match the hostname on the host
system. Note that `--add-host` `--hostname` `--dns` `--dns-search`
@@ -343,9 +350,9 @@ or a High Performance Web Server.
> **Note**: `--net="host"` gives the container full access to local system
> services such as D-bus and is therefore considered insecure.
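As a quick illustration (the hostname shown is a placeholder), a container on the host network reports the host's own hostname:

```
$ hostname
myhost
$ docker run --rm --net=host busybox hostname
myhost
```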
#### Mode: container
#### Network: container
With the networking mode set to `container` a container will share the
With the network set to `container`, a container will share the
network stack of another container. The other container's name must be
provided in the format of `--net container:<name|id>`. Note that `--add-host`
`--hostname` `--dns` `--dns-search` `--dns-opt` and `--mac-address` are
@@ -360,6 +367,21 @@ running the `redis-cli` command and connecting to the Redis server over the
$ # use the redis container's network stack to access localhost
$ docker run --rm -it --net container:redis example/redis-cli -h 127.0.0.1
#### Network: User-Created NETWORK
In addition to all of the special networks above, users can create a network using
their favorite network driver or an external plugin. The driver used to create the
network takes care of all the network plumbing requirements for any container
connected to that network.
For example, create a network using the built-in overlay network driver and run
a container in the created network:
```
$ docker network create -d overlay multi-host-network
$ docker run --net=multi-host-network -itd --name=container3 busybox
```
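Connectivity between containers on the user-created network can then be verified, as a sketch (the additional container name and name-based resolution are assumptions that depend on the driver):

```
$ docker run --net=multi-host-network -itd --name=container4 busybox top
# assumes the driver provides name resolution between connected containers
$ docker exec container4 ping -c 1 container3
```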
### Managing /etc/hosts
Your container will have lines in `/etc/hosts` which define the hostname of the


@@ -71,11 +71,6 @@ to build a Docker binary with the experimental features enabled:
## Current experimental features
* [Network plugins](plugins_network.md)
* [Networking and Services UI](networking.md)
* [Native multi-host networking](network_overlay.md)
* [Compose, Swarm and networking integration](compose_swarm_networking.md)
## How to comment on an experimental feature
Each feature's documentation includes a list of proposal pull requests or PRs associated with the feature. If you want to comment on or suggest a change to a feature, please add it to the existing feature PR.


@@ -1,238 +0,0 @@
# Experimental: Compose, Swarm and Multi-Host Networking
The [experimental build of Docker](https://github.com/docker/docker/tree/master/experimental) has an entirely new networking system, which enables secure communication between containers on multiple hosts. In combination with Docker Swarm and Docker Compose, you can now run multi-container apps on multi-host clusters with the same tooling and configuration format you use to develop them locally.
> Note: This functionality is in the experimental stage, and contains some hacks and workarounds which will be removed as it matures.
## Prerequisites
Before you start, you'll need to install the experimental build of Docker, and the latest versions of Machine and Compose.
- To install the experimental Docker build on a Linux machine, follow the instructions [here](https://github.com/docker/docker/tree/master/experimental#install-docker-experimental).
- To install the experimental Docker build on a Mac, run these commands:
$ curl -L https://experimental.docker.com/builds/Darwin/x86_64/docker-latest > /usr/local/bin/docker
$ chmod +x /usr/local/bin/docker
- To install Machine, follow the instructions [here](http://docs.docker.com/machine/).
- To install Compose, follow the instructions [here](http://docs.docker.com/compose/install/).
You'll also need a [Docker Hub](https://hub.docker.com/account/signup/) account and a [Digital Ocean](https://www.digitalocean.com/) account.
It works with the amazonec2 driver as well (by adapting the commands accordingly), except you'll need to manually open the ports 8500 (consul) and 7946 (serf) by editing the inbound rules of the corresponding security group.
## Set up a swarm with multi-host networking
Set the `DIGITALOCEAN_ACCESS_TOKEN` environment variable to a valid Digital Ocean API token, which you can generate in the [API panel](https://cloud.digitalocean.com/settings/applications).
export DIGITALOCEAN_ACCESS_TOKEN=abc12345
Start a consul server:
docker-machine --debug create \
-d digitalocean \
--engine-install-url="https://experimental.docker.com" \
consul
docker $(docker-machine config consul) run -d \
-p "8500:8500" \
-h "consul" \
progrium/consul -server -bootstrap
(In a real world setting you'd set up a distributed consul, but that's beyond the scope of this guide!)
Create a Swarm token:
export SWARM_TOKEN=$(docker run swarm create)
Next, you create a Swarm master with Machine:
docker-machine --debug create \
-d digitalocean \
--digitalocean-image="ubuntu-14-04-x64" \
--engine-install-url="https://experimental.docker.com" \
--engine-opt="default-network=overlay:multihost" \
--engine-opt="kv-store=consul:$(docker-machine ip consul):8500" \
--engine-label="com.docker.network.driver.overlay.bind_interface=eth0" \
swarm-0
Usually Machine can create Swarms for you, but it doesn't fully support multi-host networks yet, so you'll have to start up the Swarm manually:
docker $(docker-machine config swarm-0) run -d \
--restart="always" \
--net="bridge" \
swarm:latest join \
--addr "$(docker-machine ip swarm-0):2376" \
"token://$SWARM_TOKEN"
docker $(docker-machine config swarm-0) run -d \
--restart="always" \
--net="bridge" \
-p "3376:3376" \
-v "/etc/docker:/etc/docker" \
swarm:latest manage \
--tlsverify \
--tlscacert="/etc/docker/ca.pem" \
--tlscert="/etc/docker/server.pem" \
--tlskey="/etc/docker/server-key.pem" \
-H "tcp://0.0.0.0:3376" \
--strategy spread \
"token://$SWARM_TOKEN"
Create a Swarm node:
docker-machine --debug create \
-d digitalocean \
--digitalocean-image="ubuntu-14-10-x64" \
--engine-install-url="https://experimental.docker.com" \
--engine-opt="default-network=overlay:multihost" \
--engine-opt="kv-store=consul:$(docker-machine ip consul):8500" \
--engine-label="com.docker.network.driver.overlay.bind_interface=eth0" \
--engine-label="com.docker.network.driver.overlay.neighbor_ip=$(docker-machine ip swarm-0)" \
swarm-1
docker $(docker-machine config swarm-1) run -d \
--restart="always" \
--net="bridge" \
swarm:latest join \
--addr "$(docker-machine ip swarm-1):2376" \
"token://$SWARM_TOKEN"
You can create more Swarm nodes if you want - it's best to give them sensible names (swarm-2, swarm-3, etc).
Finally, point Docker at your swarm:
export DOCKER_HOST=tcp://"$(docker-machine ip swarm-0):3376"
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="$HOME/.docker/machine/machines/swarm-0"
## Run containers and get them communicating
Now that you've got a swarm up and running, you can create containers on it just like a single Docker instance:
$ docker run busybox echo hello world
hello world
If you run `docker ps -a`, you can see what node that container was started on by looking at its name (here it's swarm-3):
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
41f59749737b busybox "echo hello world" 15 seconds ago Exited (0) 13 seconds ago swarm-3/trusting_leakey
As you start more containers, they'll be placed on different nodes across the cluster, thanks to Swarm's default “spread” scheduling strategy.
Every container started on this swarm will use the “overlay:multihost” network by default, meaning they can all intercommunicate. Each container gets an IP address on that network, and an `/etc/hosts` file which will be updated on-the-fly with every other container's IP address and name. That means that if you have a running container named foo, other containers can access it at the hostname foo.
Let's verify that multi-host networking is functioning. Start a long-running container:
$ docker run -d --name long-running busybox top
<container id>
If you start a new container and inspect its /etc/hosts file, you'll see the long-running container in there:
$ docker run busybox cat /etc/hosts
...
172.21.0.6 long-running
Verify that connectivity works between containers:
$ docker run busybox ping long-running
PING long-running (172.21.0.6): 56 data bytes
64 bytes from 172.21.0.6: seq=0 ttl=64 time=7.975 ms
64 bytes from 172.21.0.6: seq=1 ttl=64 time=1.378 ms
64 bytes from 172.21.0.6: seq=2 ttl=64 time=1.348 ms
^C
--- long-running ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 1.140/2.099/7.975 ms
## Run a Compose application
Here's an example of a simple Python + Redis app using multi-host networking on a swarm.
Create a directory for the app:
$ mkdir composetest
$ cd composetest
Inside this directory, create 2 files.
First, create `app.py` - a simple web app that uses the Flask framework and increments a value in Redis:
from flask import Flask
from redis import Redis
import os
app = Flask(__name__)
redis = Redis(host='composetest_redis_1', port=6379)
@app.route('/')
def hello():
redis.incr('hits')
return 'Hello World! I have been seen %s times.' % redis.get('hits')
if __name__ == "__main__":
app.run(host="0.0.0.0", debug=True)
Note that we're connecting to a host called `composetest_redis_1` - this is the name of the Redis container that Compose will start.
Second, create a Dockerfile for the app container:
FROM python:2.7
RUN pip install flask redis
ADD . /code
WORKDIR /code
CMD ["python", "app.py"]
Build the Docker image and push it to the Hub (you'll need a Hub account). Replace `<username>` with your Docker Hub username:
$ docker build -t <username>/counter .
$ docker push <username>/counter
Next, create a `docker-compose.yml`, which defines the configuration for the web and redis containers. Once again, replace `<username>` with your Hub username:
web:
image: <username>/counter
ports:
- "80:5000"
redis:
image: redis
Now start the app:
$ docker-compose up -d
Pulling web (username/counter:latest)...
swarm-0: Pulling username/counter:latest... : downloaded
swarm-2: Pulling username/counter:latest... : downloaded
swarm-1: Pulling username/counter:latest... : downloaded
swarm-3: Pulling username/counter:latest... : downloaded
swarm-4: Pulling username/counter:latest... : downloaded
Creating composetest_web_1...
Pulling redis (redis:latest)...
swarm-2: Pulling redis:latest... : downloaded
swarm-1: Pulling redis:latest... : downloaded
swarm-3: Pulling redis:latest... : downloaded
swarm-4: Pulling redis:latest... : downloaded
swarm-0: Pulling redis:latest... : downloaded
Creating composetest_redis_1...
Swarm has created containers for both web and redis, and placed them on different nodes, which you can check with `docker ps`:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
92faad2135c9 redis "/entrypoint.sh redi 43 seconds ago Up 42 seconds swarm-2/composetest_redis_1
adb809e5cdac username/counter "/bin/sh -c 'python 55 seconds ago Up 54 seconds 45.67.8.9:80->5000/tcp swarm-1/composetest_web_1
You can also see that the web container has exposed port 80 on its swarm node. If you curl that IP, you'll get a response from the container:
$ curl http://45.67.8.9
Hello World! I have been seen 1 times.
If you hit it repeatedly, the counter will increment, demonstrating that the web and redis containers are communicating:
$ curl http://45.67.8.9
Hello World! I have been seen 2 times.
$ curl http://45.67.8.9
Hello World! I have been seen 3 times.
$ curl http://45.67.8.9
Hello World! I have been seen 4 times.


@@ -1,14 +0,0 @@
# Native Multi-host networking
There is a lot to say about native multi-host networking and the `overlay` driver that makes it happen. The technical details are documented at https://github.com/docker/libnetwork/blob/master/docs/overlay.md.
Using the experimental UI described above (`docker network`, `docker service`, and `--publish-service`), the user can exercise the power of multi-host networking.
Since `network` and `service` objects are globally significant, this feature requires distributed state provided by the `libkv` project.
Using `libkv`, the user can plug in any of the supported key-value stores (such as consul, etcd or zookeeper).
The user can specify the key-value store of choice using the `--cluster-store` daemon flag, which takes a configuration value of the format `PROVIDER:URL`, where
`PROVIDER` is the name of the key-value store (such as consul, etcd or zookeeper) and
`URL` is the URL used to reach the key-value store.
Example : `docker daemon --cluster-store=consul://localhost:8500`
Send us feedback and comments on [#14083](https://github.com/docker/docker/issues/14083)
or on the usual Google Groups (docker-user, docker-dev) and IRC channels.


@@ -1,120 +0,0 @@
# Experimental: Networking and Services
In this feature:
- `network` and `service` become first class objects in the Docker UI
- one can now create networks, publish services on that network and attach containers to the services
- Native multi-host networking
- `network` and `service` objects are globally significant and provide multi-host container connectivity natively
- Inbuilt simple Service Discovery
- With multi-host networking and top-level `service` object, Docker now provides out of the box simple Service Discovery for containers running in a network
- Batteries included but removable
- Docker provides built-in native multi-host networking by default, and it can be swapped out for any remote driver provided by external plugins.
This is an experimental feature. For information on installing and using experimental features, see [the experimental feature overview](README.md).
## Using Networks
Usage: docker network [OPTIONS] COMMAND [OPTIONS] [arg...]
Commands:
create Create a network
rm Remove a network
ls List all networks
info Display information of a network
Run 'docker network COMMAND --help' for more information on a command.
--help=false Print usage
The `docker network` command is used to manage networks.
To create a network, run `docker network create foo`. You can also specify a driver
if you have loaded a networking plugin, e.g. `docker network create -d <plugin_name> foo`
$ docker network create foo
aae601f43744bc1f57c515a16c8c7c4989a2cad577978a32e6910b799a6bccf6
$ docker network create -d overlay bar
d9989793e2f5fe400a58ef77f706d03f668219688ee989ea68ea78b990fa2406
`docker network ls` is used to display the currently configured networks
$ docker network ls
NETWORK ID NAME TYPE
d367e613ff7f none null
bd61375b6993 host host
cc455abccfeb bridge bridge
aae601f43744 foo bridge
d9989793e2f5 bar overlay
To get detailed information on a network, you can use the `docker network info`
command.
$ docker network info foo
Network Id: aae601f43744bc1f57c515a16c8c7c4989a2cad577978a32e6910b799a6bccf6
Name: foo
Type: null
If you no longer need a network, you can delete it with `docker network rm`
$ docker network rm bar
bar
$ docker network ls
NETWORK ID NAME TYPE
aae601f43744 foo bridge
d367e613ff7f none null
bd61375b6993 host host
cc455abccfeb bridge bridge
## User-Defined default network
The Docker daemon supports a configuration flag `--default-network`, which takes a configuration value of the format `DRIVER:NETWORK`, where
`DRIVER` represents either the built-in drivers, such as bridge, overlay, container, host and none, or remote drivers via network plugins, and
`NETWORK` is the name of a network created using the `docker network create` command.
When a container is created and the network mode (`--net`) is not specified, this default network is used to connect
the container. If `--default-network` is not specified, the default network uses the `bridge` driver.
Example : `docker daemon --default-network=overlay:multihost`
## Using Services
Usage: docker service COMMAND [OPTIONS] [arg...]
Commands:
publish Publish a service
unpublish Remove a service
attach Attach a backend (container) to the service
detach Detach the backend from the service
ls Lists all services
info Display information about a service
Run 'docker service COMMAND --help' for more information on a command.
--help=false Print usage
Assuming we want to publish a service from container `a0ebc12d3e48` on network `foo` as `my-service`, we would use the following command:
$ docker service publish my-service.foo
ec56fd74717d00f968c26675c9a77707e49ae64b8e54832ebf78888eb116e428
$ docker service attach a0ebc12d3e48 my-service.foo
This would make the container `a0ebc12d3e48` accessible as `my-service` on network `foo`. Any other container in network `foo` can use DNS to resolve the address of `my-service`.
This can also be achieved by using the `--publish-service` flag for `docker run`:
docker run -itd --publish-service db.foo postgres
`db.foo` in this instance means "place the container on network `foo`, and allow other hosts on `foo` to discover it under the name `db`"
We can see the current services using the `docker service ls` command
$ docker service ls
SERVICE ID NAME NETWORK PROVIDER
ec56fd74717d my-service foo a0ebc12d3e48
To remove a service:
$ docker service detach a0ebc12d3e48 my-service.foo
$ docker service unpublish my-service.foo
Send us feedback and comments on [#14083](https://github.com/docker/docker/issues/14083)
or on the usual Google Groups (docker-user, docker-dev) and IRC channels.


@@ -1,489 +0,0 @@
# Networking API
### List networks
`GET /networks`
List networks
**Example request**:
GET /networks HTTP/1.1
**Example response**:
HTTP/1.1 200 OK
Content-Type: application/json
[
{
"name": "none",
"id": "8e4e55c6863ef4241c548c1c6fc77289045e9e5d5b5e4875401a675326981898",
"type": "null",
"endpoints": []
},
{
"name": "host",
"id": "062b6d9ea7913fde549e2d186ff0402770658f8c4e769958e1b943ff4e675011",
"type": "host",
"endpoints": []
},
{
"name": "bridge",
"id": "a87dd9a9d58f030962df1c15fb3fa142fbd9261339de458bc89be1895cef2c70",
"type": "bridge",
"endpoints": []
}
]
Query Parameters:
- **name** Filter results with the given name
- **partial-id** Filter results using the partial network ID
Status Codes:
- **200** no error
- **400** bad parameter
- **500** server error
### Create a Network
`POST /networks`
**Example request**
POST /networks HTTP/1.1
Content-Type: application/json
{
"name": "foo",
"network_type": "",
"options": {}
}
**Example Response**
HTTP/1.1 200 OK
"32fbf63200e2897f5de72cb2a4b653e4b1a523b15116e96e3d73f7849e583653",
Status Codes:
- **200** no error
- **400** bad request
- **500** server error
### Get a network
`GET /networks/<network_id>`
Get a network
**Example request**:
GET /networks/32fbf63200e2897f5de72cb2a4b653e4b1a523b15116e96e3d73f7849e583653 HTTP/1.1
**Example response**:
HTTP/1.1 200 OK
Content-Type: application/json
{
"name": "foo",
"id": "32fbf63200e2897f5de72cb2a4b653e4b1a523b15116e96e3d73f7849e583653",
"type": "bridge",
"endpoints": []
}
Status Codes:
- **200** no error
- **404** not found
- **500** server error
### List a network's endpoints
`GET /networks/<network_id>/endpoints`
**Example request**
GET /networks/32fbf63200e2897f5de72cb2a4b653e4b1a523b15116e96e3d73f7849e583653/endpoints HTTP/1.1
**Example Response**
HTTP/1.1 200 OK
Content-Type: application/json
[
{
"id": "7e0c116b882ee489a8a5345a2638c0129099aa47f4ba114edde34e75c1e4ae0d",
"name": "/lonely_pasteur",
"network": "foo"
}
]
Query Parameters:
- **name** Filter results with the given name
- **partial-id** Filter results using the partial network ID
Status Codes:
- **200** no error
- **400** bad parameter
- **500** server error
### Create an endpoint on a network
`POST /networks/<network_id>/endpoints`
**Example request**
POST /networks/32fbf63200e2897f5de72cb2a4b653e4b1a523b15116e96e3d73f7849e583653/endpoints HTTP/1.1
Content-Type: application/json
{
"name": "baz",
"exposed_ports": [
{
"proto": 6,
"port": 8080
}
],
"port_mapping": null
}
**Example Response**
HTTP/1.1 200 OK
Content-Type: application/json
"b18b795af8bad85cdd691ff24ffa2b08c02219d51992309dd120322689d2ab5a"
Status Codes:
- **200** no error
- **400** bad parameter
- **500** server error
### Get an endpoint
`GET /networks/<network_id>/endpoints/<endpoint_id>`
**Example request**
GET /networks/32fbf63200e2897f5de72cb2a4b653e4b1a523b15116e96e3d73f7849e583653/endpoints/b18b795af8bad85cdd691ff24ffa2b08c02219d51992309dd120322689d2ab5a HTTP/1.1
**Example Response**
HTTP/1.1 200 OK
Content-Type: application/json
{
"id": "b18b795af8bad85cdd691ff24ffa2b08c02219d51992309dd120322689d2ab5a",
"name": "baz",
"network": "foo"
}
Status Codes:
- **200** no error
- **404** - not found
- **500** server error
### Join an endpoint to a container
`POST /networks/<network_id>/endpoints/<endpoint_id>/containers`
**Example request**
POST /networks/32fbf63200e2897f5de72cb2a4b653e4b1a523b15116e96e3d73f7849e583653/endpoints/b18b795af8bad85cdd691ff24ffa2b08c02219d51992309dd120322689d2ab5a/containers HTTP/1.1
Content-Type: application/json
{
"container_id": "e76f406417031bd24c17aeb9bb2f5968b628b9fb6067da264b234544754bf857",
"host_name": null,
"domain_name": null,
"hosts_path": null,
"resolv_conf_path": null,
"dns": null,
"extra_hosts": null,
"parent_updates": null,
"use_default_sandbox": true
}
**Example response**
HTTP/1.1 200 OK
Content-Type: application/json
"/var/run/docker/netns/e76f40641703"
Status Codes:
- **200** no error
- **400** bad parameter
- **404** - not found
- **500** server error
### Detach an endpoint from a container
`DELETE /networks/<network_id>/endpoints/<endpoint_id>/containers/<container_id>`
**Example request**
DELETE /networks/32fbf63200e2897f5de72cb2a4b653e4b1a523b15116e96e3d73f7849e583653/endpoints/b18b795af8bad85cdd691ff24ffa2b08c02219d51992309dd120322689d2ab5a/containers/e76f406417031bd24c17aeb9bb2f5968b628b9fb6067da264b234544754bf857 HTTP/1.1
Content-Type: application/json
**Example response**
HTTP/1.1 200 OK
Status Codes:
- **200** no error
- **400** bad parameter
- **404** - not found
- **500** server error
### Delete an endpoint
`DELETE /networks/<network_id>/endpoints/<endpoint_id>`
**Example request**
DELETE /networks/32fbf63200e2897f5de72cb2a4b653e4b1a523b15116e96e3d73f7849e583653/endpoints/b18b795af8bad85cdd691ff24ffa2b08c02219d51992309dd120322689d2ab5a HTTP/1.1
**Example Response**
HTTP/1.1 200 OK
Status Codes:
- **200** no error
- **404** - not found
- **500** server error
### Delete a network
`DELETE /networks/<network_id>`
Delete a network
**Example request**:
DELETE /networks/0984d158bd8ae108e4d6bc8fcabedf51da9a174b32cc777026d4a29045654951 HTTP/1.1
**Example response**:
HTTP/1.1 200 OK
Status Codes:
- **200** no error
- **404** not found
- **500** server error
# Services API
### Publish a Service
`POST /services`
Publish a service
**Example Request**
POST /services HTTP/1.1
Content-Type: application/json
{
"name": "bar",
"network_name": "foo",
"exposed_ports": null,
"port_mapping": null
}
**Example Response**
HTTP/1.1 200 OK
Content-Type: application/json
"0aee0899e6c5e903cf3ef2bdc28a1c9aaf639c8c8c331fa4ae26344d9e32c1ff"
Status Codes:
- **200** no error
- **400** bad parameter
- **500** server error
### Get a Service
`GET /services/0aee0899e6c5e903cf3ef2bdc28a1c9aaf639c8c8c331fa4ae26344d9e32c1ff`
Get a service
**Example Request**:
GET /services/0aee0899e6c5e903cf3ef2bdc28a1c9aaf639c8c8c331fa4ae26344d9e32c1ff HTTP/1.1
**Example Response**:
HTTP/1.1 200 OK
Content-Type: application/json
{
"name": "bar",
"id": "0aee0899e6c5e903cf3ef2bdc28a1c9aaf639c8c8c331fa4ae26344d9e32c1ff",
"network": "foo"
}
Status Codes:
- **200** no error
- **400** bad parameter
- **404** - not found
- **500** server error
### Attach a backend to a service
`POST /services/0aee0899e6c5e903cf3ef2bdc28a1c9aaf639c8c8c331fa4ae26344d9e32c1ff/backend`
Attach a backend to a service
**Example Request**:
POST /services/0aee0899e6c5e903cf3ef2bdc28a1c9aaf639c8c8c331fa4ae26344d9e32c1ff/backend HTTP/1.1
Content-Type: application/json
{
"container_id": "98c5241f9475e9efc17e7198e931fb48166010b80f96d48df204e251378ca547",
"host_name": "",
"domain_name": "",
"hosts_path": "",
"resolv_conf_path": "",
"dns": null,
"extra_hosts": null,
"parent_updates": null,
"use_default_sandbox": false
}
**Example Response**:
HTTP/1.1 200 OK
Content-Type: application/json
"/var/run/docker/netns/98c5241f9475"
Status Codes:
- **200** no error
- **400** bad parameter
- **500** server error
### Get Backends for a Service
Get all backends for a given service
**Example Request**
GET /services/0aee0899e6c5e903cf3ef2bdc28a1c9aaf639c8c8c331fa4ae26344d9e32c1ff/backend HTTP/1.1
**Example Response**
HTTP/1.1 200 OK
Content-Type: application/json
[
{
"id": "98c5241f9475e9efc17e7198e931fb48166010b80f96d48df204e251378ca547"
}
]
Status Codes:
- **200** no error
- **400** bad parameter
- **500** server error
### List Services
`GET /services`
List services
**Example request**:
GET /services HTTP/1.1
**Example response**:
HTTP/1.1 200 OK
Content-Type: application/json
[
{
"name": "/stupefied_stallman",
"id": "c826b26bf736fb4a77db33f83562e59f9a770724e259ab9c3d50d948f8233ae4",
"network": "bridge"
},
{
"name": "bar",
"id": "0aee0899e6c5e903cf3ef2bdc28a1c9aaf639c8c8c331fa4ae26344d9e32c1ff",
"network": "foo"
}
]
Query Parameters:
- **name** Filter results with the given name
- **partial-id** Filter results using the partial network ID
- **network** - Filter results by the given network
Status Codes:
- **200** no error
- **400** bad parameter
- **500** server error
### Detach a Backend from a Service
`DELETE /services/0aee0899e6c5e903cf3ef2bdc28a1c9aaf639c8c8c331fa4ae26344d9e32c1ff/backend/98c5241f9475e9efc17e7198e931fb48166010b80f96d48df204e251378ca547`
Detach a backend from a service
**Example Request**
DELETE /services/0aee0899e6c5e903cf3ef2bdc28a1c9aaf639c8c8c331fa4ae26344d9e32c1ff/backend/98c5241f9475e9efc17e7198e931fb48166010b80f96d48df204e251378ca547 HTTP/1.1
**Example Response**
HTTP/1.1 200 OK
Status Codes:
- **200** no error
- **400** bad parameter
- **500** server error
### Un-Publish a Service
`DELETE /services/0aee0899e6c5e903cf3ef2bdc28a1c9aaf639c8c8c331fa4ae26344d9e32c1ff`
Unpublish a service
**Example Request**
DELETE /services/0aee0899e6c5e903cf3ef2bdc28a1c9aaf639c8c8c331fa4ae26344d9e32c1ff HTTP/1.1
**Example Response**
HTTP/1.1 200 OK
Status Codes:
- **200** no error
- **400** bad parameter
- **500** server error