
Table of Contents

Overview
What is the Azure Container Service?
Get Started
Deploy an ACS cluster
Deploy to ACS using the Azure CLI 2.0 Preview
Connect with an ACS cluster
Scale an ACS cluster
How To
Manage with DC/OS
Container management - DC/OS web UI
Container management - DC/OS REST API
Container management - DC/OS continuous integration
DC/OS Agent pools
Enable DC/OS public access
Load balance containers in DC/OS
App/User Specific Orchestrator in DC/OS
Monitor with OMS (DC/OS)
Monitor with Datadog (DC/OS)
Monitor with Sysdig (DC/OS)
Manage with Kubernetes
Manage with Docker Swarm
Reference
REST API
Resources
Region availability
Pricing
Service Updates

Azure Container Service introduction


11/15/2016 3 min to read

Azure Container Service makes it simpler for you to create, configure, and manage a cluster of virtual machines that
are preconfigured to run containerized applications. It uses an optimized configuration of popular open-source
scheduling and orchestration tools. This enables you to use your existing skills, or draw upon a large and growing
body of community expertise, to deploy and manage container-based applications on Microsoft Azure.

Azure Container Service leverages the Docker container format to ensure that your application containers are fully
portable. It also supports your choice of DC/OS (with Marathon) or Docker Swarm so that you can scale these
applications to thousands of containers, or even tens of thousands.
By using Azure Container Service, you can take advantage of the enterprise-grade features of Azure, while still
maintaining application portability--including portability at the orchestration layers.

Using Azure Container Service


Our goal with Azure Container Service is to provide a container hosting environment by using open-source tools
and technologies that are popular among our customers today. To this end, we expose the standard API endpoints
for your chosen orchestrator (DC/OS or Docker Swarm). By using these endpoints, you can leverage any software
that is capable of talking to those endpoints. For example, in the case of the Docker Swarm endpoint, you might
choose to use the Docker command-line interface (CLI). For DC/OS, you might choose to use the DC/OS CLI.
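
For example, once you have connectivity to the cluster endpoints (covered in Connect to an Azure Container Service cluster), you might point these tools at them as follows. This is an illustrative sketch; the exact host and port values depend on how you tunnel to the cluster:

# Swarm: point the Docker CLI at the tunneled Swarm endpoint, then use it as usual
export DOCKER_HOST=:2375
docker ps

# DC/OS: point the DC/OS CLI at the tunneled DC/OS endpoint
dcos config set core.dcos_url http://localhost
dcos marathon app list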

Creating a Docker cluster by using Azure Container Service


To begin using Azure Container Service, you deploy an Azure Container Service cluster via the portal (search for
'Azure Container Service'), by using an Azure Resource Manager template (Docker Swarm or DC/OS) or with the
CLI. The provided quickstart templates can be modified to include additional or advanced Azure configuration. For
more information on deploying an Azure Container Service cluster, see Deploy an Azure Container Service cluster.

Deploying an application
Azure Container Service provides a choice of either Docker Swarm or DC/OS for orchestration. How you deploy

your application depends on your choice of orchestrator.

Using DC/OS
DC/OS is a distributed operating system based on the Apache Mesos distributed systems kernel. Apache Mesos is
housed at the Apache Software Foundation and lists some of the biggest names in IT as users and contributors.

DC/OS and Apache Mesos include an impressive feature set:


Proven scalability
Fault-tolerant replicated master and slaves using Apache ZooKeeper
Support for Docker-formatted containers
Native isolation between tasks with Linux containers
Multiresource scheduling (memory, CPU, disk, and ports)
Java, Python, and C++ APIs for developing new parallel applications
A web UI for viewing cluster state
By default, DC/OS running on Azure Container Service includes the Marathon orchestration platform for scheduling workloads. However, the DC/OS deployment of ACS also includes the Mesosphere Universe of services that can be added to your cluster; these include Spark, Hadoop, Cassandra, and many more.
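
For example, with the DC/OS CLI connected to your cluster, adding a Universe package is typically a one-line operation. The commands below are a sketch; the package names available to you depend on your Universe configuration:

# Search the Universe for a package, then install it on the cluster
dcos package search spark
dcos package install spark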

Using Marathon

Marathon is a cluster-wide init and control system for services in cgroups--or, in the case of Azure Container
Service, Docker-formatted containers. Marathon provides a web UI from which you can deploy your applications.
You can access this at a URL that looks something like http://DNS_PREFIX.REGION.cloudapp.azure.com where
DNS_PREFIX and REGION are both defined at deployment time. Of course, you can also provide your own DNS

name. For more information on running a container using the Marathon web UI, see Container management
through the web UI.

You can also use the REST APIs for communicating with Marathon. There are a number of client libraries that are
available for each tool. They cover a variety of languages--and, of course, you can use the HTTP protocol in any
language. In addition, many popular DevOps tools provide support for Marathon. This provides maximum flexibility
for your operations team when you are working with an Azure Container Service cluster. For more information on
running a container by using the Marathon REST API, see Container management with the REST API.

Using Docker Swarm


Docker Swarm provides native clustering for Docker. Because Docker Swarm serves the standard Docker API, any
tool that already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts on
Azure Container Service.

Supported tools for managing containers on a Swarm cluster include, but are not limited to, the following:
Dokku
Docker CLI and Docker Compose
Krane
Jenkins

Videos

Getting started with Azure Container Service (101):

Building Applications Using the Azure Container Service (Build 2016)

Deploy an Azure Container Service cluster


11/15/2016 5 min to read

Azure Container Service provides rapid deployment of popular open-source container clustering and
orchestration solutions. By using Azure Container Service, you can deploy DC/OS and Docker Swarm clusters with
Azure Resource Manager templates or the Azure portal. You deploy these clusters by using Azure Virtual Machine
Scale Sets, and the clusters take advantage of Azure networking and storage offerings. To access Azure Container
Service, you need an Azure subscription. If you don't have one, then you can sign up for a free trial.
This document walks you through deploying an Azure Container Service cluster by using the Azure portal, the
Azure command-line interface (CLI), and the Azure PowerShell module.

Create a service by using the Azure portal


Sign in to the Azure portal, select New, and search the Azure Marketplace for Azure Container Service.

Select Azure Container Service, and click Create.

Enter the following information:


User name: This is the user name that will be used for an account on each of the virtual machines and virtual machine scale sets in the Azure Container Service cluster.
Subscription: Select an Azure subscription.
Resource group: Select an existing resource group, or create a new one.
Location: Select an Azure region for the Azure Container Service deployment.
SSH public key: Add the public key that will be used for authentication against Azure Container Service virtual machines. It is very important that this key contains no line breaks, and that it includes the 'ssh-rsa' prefix and the 'username@domain' postfix. It should look something like the following: ssh-rsa AAAAB3Nz...<...>...UcyupgH azureuser@linuxvm. For guidance on creating Secure Shell (SSH) keys, see the Linux and Windows articles.
Click OK when you're ready to proceed.

Select an Orchestration type. The options are:


DC/OS: Deploys a DC/OS cluster.
Swarm: Deploys a Docker Swarm cluster.
Click OK when you're ready to proceed.

Enter the following information:


Master count: The number of masters in the cluster.
Agent count: For Docker Swarm, this will be the initial number of agents in the agent scale set. For DC/OS, this will be the initial number of agents in a private scale set. Additionally, a public scale set is created, which contains a predetermined number of agents. The number of agents in this public scale set is determined by how many masters have been created in the cluster--one public agent for one master, and two public agents for three or five masters.
Agent virtual machine size: The size of the agent virtual machines.
DNS prefix: A globally unique name that will be used to prefix key parts of the fully qualified domain names for the service.
Click OK when you're ready to proceed.

Click OK after service validation has finished.

Click Create to start the deployment process.

If you've elected to pin the deployment to the Azure portal, you can see the deployment status.

When the deployment has completed, the Azure Container Service cluster is ready for use.

Create a service by using the Azure CLI


To create an instance of Azure Container Service by using the command line, you need an Azure subscription. If
you don't have one, then you can sign up for a free trial. You also need to have installed and configured the Azure
CLI.
To deploy a DC/OS or Docker Swarm cluster, select one of the following templates from GitHub. Note that both of

these templates are the same, with the exception of the default orchestrator selection.
DC/OS template
Swarm template
Next, make sure that the Azure CLI has been connected to an Azure subscription. You can do this by using the
following command:
azure account show

If an Azure account is not returned, use the following command to sign the CLI in to Azure.
azure login -u user@domain.com

Next, configure the Azure CLI tools to use Azure Resource Manager.
azure config mode arm

Create an Azure resource group and Container Service cluster with the following command, where:
RES OURCE_GROUP is the name of the resource group that you want to use for this service.
LOCATION is the Azure region where the resource group and Azure Container Service deployment will be
created.
TEMPLATE_URI is the location of the deployment file. Note that this must be the Raw file, not a pointer to the
GitHub UI. To find this URL, select the azuredeploy.json file in GitHub, and click the Raw button.
NOTE
When you run this command, the shell will prompt you for deployment parameter values.

azure group create -n RESOURCE_GROUP DEPLOYMENT_NAME -l LOCATION --template-uri TEMPLATE_URI

Provide template parameters


This version of the command requires you to define parameters interactively. If you want to provide parameters,
such as a JSON-formatted string, you can do so by using the -p switch. For example:
azure group deployment create RESOURCE_GROUP DEPLOYMENT_NAME --template-uri TEMPLATE_URI -p '{ "param1":
"value1" }'

Alternatively, you can provide a JSON-formatted parameters file by using the -e switch:

azure group deployment create RESOURCE_GROUP DEPLOYMENT_NAME --template-uri TEMPLATE_URI -e PATH/FILE.JSON

To see an example parameters file named azuredeploy.parameters.json, look for it with the Azure Container Service templates in GitHub.
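
For illustration only, a minimal parameters file might look like the following. The parameter names below are assumptions based on common ACS template versions; check the template's azuredeploy.json for the exact names it expects, and substitute your own values:

{
  "dnsNamePrefix": { "value": "myacs" },
  "agentCount": { "value": 2 },
  "agentVMSize": { "value": "Standard_D2" },
  "adminUsername": { "value": "azureuser" },
  "sshRSAPublicKey": { "value": "ssh-rsa AAAAB3Nz...UcyupgH azureuser@linuxvm" }
}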

Create a service by using PowerShell


You can also deploy an Azure Container Service cluster with PowerShell. This document is based on the version
1.0 Azure PowerShell module.
To deploy a DC/OS or Docker Swarm cluster, select one of the following templates. Note that both of these
templates are the same, with the exception of the default orchestrator selection.

DC/OS template
Swarm template
Before creating a cluster in your Azure subscription, verify that your PowerShell session has been signed in to
Azure. You can do this with the Get-AzureRmSubscription command:
Get-AzureRmSubscription

If you need to sign in to Azure, use the Login-AzureRmAccount command:

Login-AzureRmAccount

If you're deploying to a new resource group, you must first create the resource group. To create a new resource
group, use the New-AzureRmResourceGroup command, and specify a resource group name and destination region:
New-AzureRmResourceGroup -Name GROUP_NAME -Location REGION

After you create a resource group, you can create your cluster with the following command. The URI of the
desired template will be specified for the -TemplateUri parameter. When you run this command, PowerShell will
prompt you for deployment parameter values.
New-AzureRmResourceGroupDeployment -Name DEPLOYMENT_NAME -ResourceGroupName RESOURCE_GROUP_NAME -TemplateUri
TEMPLATE_URI

Provide template parameters


If you're familiar with PowerShell, you know that you can cycle through the available parameters for a cmdlet by
typing a minus sign (-) and then pressing the TAB key. This same functionality also works with parameters that
you define in your template. As soon as you type the template name, the cmdlet fetches the template, parses the
parameters, and adds the template parameters to the command dynamically. This makes it very easy to specify
the template parameter values. And, if you forget a required parameter value, PowerShell prompts you for the
value.
Below is the full command, with parameters included. You can provide your own values for the names of the resources.

New-AzureRmResourceGroupDeployment -ResourceGroupName RESOURCE_GROUP_NAME -TemplateUri TEMPLATE_URI -adminuser value1 -adminpassword value2 ....

Next steps
Now that you have a functioning cluster, see these documents for connection and management details:
Connect to an Azure Container Service cluster
Work with Azure Container Service and DC/OS
Work with Azure Container Service and Docker Swarm

Using the Azure CLI 2.0 Preview to create an Azure Container Service cluster

11/22/2016 1 min to read

To create an Azure Container Service cluster, you need:


an Azure account (get a free trial)
the Azure CLI v. 2.0 (Preview) installed
to be logged in to your Azure account (see below)

Log in to your account


az login

You will need to open the URL shown in the CLI output and authenticate with the device code provided.

Create a resource group


az resource group create -n acsrg1 -l "westus"

List of available Azure Container Service CLI commands


az acs -h

Create an Azure Container Service Cluster


ACS create usage in the CLI
az acs create -h

The name of the container service, the resource group created in the previous step, and a unique DNS name are mandatory. Other inputs are set to default values unless overridden with their respective switches (see the help output from az acs create -h).

Quick ACS create using defaults. If you do not have an SSH key, use the second command; the --generate-ssh-keys switch creates one for you.

az acs create -n acs-cluster -g acsrg1 -d applink789

az acs create -n acs-cluster -g acsrg1 -d applink789 --generate-ssh-keys

Please ensure that the dns-prefix (-d switch) is unique. If you get an error, please try again with a unique string.
After you type the preceding command, wait for about 10 minutes for the cluster to be created.

List ACS clusters


Under a subscription
az acs list --output table

In a specific resource group


az acs list -g acsrg1 --output table

Display details of a container service cluster


az acs show -g acsrg1 -n acs-cluster --output list

Scale the ACS cluster


Both scaling in and scaling out are allowed. The parameter new-agent-count is the new number of agents in the ACS cluster.
az acs scale -g acsrg1 -n acs-cluster --new-agent-count 4
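
To verify the result afterward, you can query just the agent count. The --query JMESPath option is a standard Azure CLI 2.0 feature, though the exact output shape can vary by CLI version:

# Returns the agent count of the first agent pool
az acs show -g acsrg1 -n acs-cluster --query "agentPoolProfiles[0].count"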

Delete a container service cluster


az acs delete -g acsrg1 -n acs-cluster

Note that this delete command does not delete all resources (such as network and storage) created while creating the container service. To ensure that all related resources are deleted and you are not charged for them, it is recommended that you create a single ACS cluster per resource group, and then delete the resource group itself when the cluster is no longer required.
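
Following that recommendation, cleanup becomes a single command against the resource group. This sketch assumes the same az resource group command group used earlier in this article:

# Deletes the resource group, the cluster, and all related network and storage resources
az resource group delete -n acsrg1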

Connect to an Azure Container Service cluster


11/15/2016 3 min to read

The DC/OS and Docker Swarm clusters that are deployed by Azure Container Service expose REST endpoints.
However, these endpoints are not open to the outside world. In order to manage these endpoints, you must create
a Secure Shell (SSH) tunnel. After an SSH tunnel has been established, you can run commands against the cluster
endpoints and view the cluster UI through a browser on your own system. This document walks you through
creating an SSH tunnel from Linux, OS X, and Windows.
NOTE
You can create an SSH session with a cluster management system. However, we don't recommend this. Working directly on a management system exposes you to the risk of inadvertent configuration changes.

Create an SSH tunnel on Linux or OS X


The first thing that you do when you create an SSH tunnel on Linux or OS X is to locate the public DNS name of the load-balanced masters. To do this, expand the resource group so that each resource is displayed. Locate and
select the public IP address of the master. This will open up a blade that contains information about the public IP
address, which includes the DNS name. Save this name for later use.

Now open a shell and run the following command where:


PORT is the port of the endpoint that you want to expose. For Swarm, this is 2375. For DC/OS, use port 80.
USERNAME is the user name that was provided when you deployed the cluster.
DNSPREFIX is the DNS prefix that you provided when you deployed the cluster.
REGION is the region in which your resource group is located.
PATH_TO_PRIVATE_KEY [OPTIONAL] is the path to the private key that corresponds to the public key you provided when you created the Container Service cluster. Use this option with the -i flag.

ssh -L PORT:localhost:PORT -f -N [USERNAME]@[DNSPREFIX]mgmt.[REGION].cloudapp.azure.com -p 2200

The SSH connection port is 2200--not the standard port 22.

DC/OS tunnel
To open a tunnel to the DC/OS-related endpoints, execute a command that is similar to the following:
sudo ssh -L 80:localhost:80 -f -N azureuser@acsexamplemgmt.japaneast.cloudapp.azure.com -p 2200

You can now access the DC/OS-related endpoints at:


DC/OS: http://localhost/
Marathon: http://localhost/marathon
Mesos: http://localhost/mesos
Similarly, you can reach the REST APIs for each application through this tunnel.

Swarm tunnel
To open a tunnel to the Swarm endpoint, execute a command that looks similar to the following:
ssh -L 2375:localhost:2375 -f -N azureuser@acsexamplemgmt.japaneast.cloudapp.azure.com -p 2200

Now you can set your DOCKER_HOST environment variable as follows. You can continue to use your Docker
command-line interface (CLI) as normal.
export DOCKER_HOST=:2375
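
With the tunnel up and DOCKER_HOST set, ordinary Docker commands are sent through the tunnel to the Swarm endpoint. For example:

# Both commands now run against the Swarm cluster rather than a local daemon
docker info
docker ps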

Create an SSH tunnel on Windows


There are multiple options for creating SSH tunnels on Windows. This document will describe how to use PuTTY
to do this.
Download PuTTY to your Windows system and run the application.
Enter a host name that is composed of the cluster admin user name and the public DNS name of the first master in the cluster. The Host Name will look like this: adminuser@PublicDNS. Enter 2200 for the Port.

Select SSH and Authentication. Add your private key file for authentication.

Select Tunnels and configure the following forwarded ports:


Source Port: Your preference--use 80 for DC/OS or 2375 for Swarm.
Destination: Use localhost:80 for DC/OS or localhost:2375 for Swarm.
The following example is configured for DC/OS, but will look similar for Docker Swarm.
NOTE
Port 80 must not be in use when you create this tunnel.

When you're finished, save the connection configuration, and connect the PuTTY session. When you connect, you
can see the port configuration in the PuTTY event log.

When you've configured the tunnel for DC/OS, you can access the related endpoint at:
DC/OS: http://localhost/
Marathon: http://localhost/marathon
Mesos: http://localhost/mesos
When you've configured the tunnel for Docker Swarm, you can access the Swarm cluster through the Docker CLI.
You will first need to configure a Windows environment variable named DOCKER_HOST with a value of :2375 .
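
For example, in a PowerShell session you might set the variable like this (cmd.exe users would use set instead):

# Route subsequent Docker CLI calls through the PuTTY tunnel
$env:DOCKER_HOST = ":2375"
docker info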

Next steps
Deploy and manage containers with DC/OS or Swarm:
Work with Azure Container Service and DC/OS
Work with the Azure Container Service and Docker Swarm

Scale an Azure Container Service


11/15/2016 2 min to read

You can scale out the number of nodes in your Azure Container Service (ACS) cluster by using the Azure CLI tool. When you use the Azure CLI to scale, the tool returns a new configuration representing the change applied to the container service.

About the command


The Azure CLI must be in Azure Resource Manager mode for you to interact with Azure Container Service. You can switch to Resource Manager mode by calling azure config mode arm. The acs command has a child command named scale that performs all the scale operations for a container service. You can get help about the various parameters used in the scale command by running azure acs scale --help, which outputs something similar to this:
azure acs scale --help
help:    The operation to scale a container service.
help:
help:    Usage: acs scale [options] <resource-group> <name> <new-agent-count>
help:
help:    Options:
help:      -h, --help                                output usage information
help:      -v, --verbose                             use verbose output
help:      -vv                                       more verbose with debug output
help:      --json                                    use json output
help:      -g, --resource-group <resource-group>     resource-group
help:      -n, --name <name>                         name
help:      -o, --new-agent-count <new-agent-count>   New agent count
help:      -s, --subscription <subscription>         The subscription identifier
help:
help:    Current Mode: arm (Azure Resource Management)

Use the command to scale


To scale a container service, you first need to know the resource group and the Azure Container Service (ACS) name, and also specify the new count of agents. By using a smaller or larger count, you can scale down or up, respectively.
You may want to know the current count of agents for a container service before you scale. Use the azure acs show <resource group> <ACS name> command to return the ACS config. Note the Count result.
See current count

azure acs show containers-test containerservice-containers-test

info:    Executing command acs show
data:    Id                  : /subscriptions/<guid>/resourceGroups/containerstest/providers/Microsoft.ContainerService/containerServices/containerservice-containers-test
data:    Name                : containerservice-containers-test
data:    Type                : Microsoft.ContainerService/ContainerServices
data:    Location            : westus
data:    ProvisioningState   : Succeeded
data:    OrchestratorProfile
data:      OrchestratorType  : DCOS
data:    MasterProfile
data:      Count             : 1
data:      DnsPrefix         : myprefixmgmt
data:      Fqdn              : myprefixmgmt.westus.cloudapp.azure.com
data:    AgentPoolProfiles
data:      #0
data:        Name            : agentpools
data:        Count           : 1
data:        VmSize          : Standard_D2
data:        DnsPrefix       : myprefixagents
data:        Fqdn            : myprefixagents.westus.cloudapp.azure.com
data:    LinuxProfile
data:      AdminUsername     : azureuser
data:      Ssh
data:        PublicKeys
data:          #0
data:            KeyData     : ssh-rsa <ENCODED VALUE>
data:    DiagnosticsProfile
data:      VmDiagnostics
data:        Enabled         : true
data:        StorageUri      : https://<storageid>.blob.core.windows.net/

Scale to new count

You can scale the container service by calling azure acs scale and supplying the resource group, ACS name, and agent count. When you scale a container service, the Azure CLI returns a JSON string representing the new configuration of the container service, including the new agent count.

azure acs scale containers-test containerservice-containers-test 10

info:    Executing command acs scale
data:    {
data:      id: '/subscriptions/<guid>/resourceGroups/containerstest/providers/Microsoft.ContainerService/containerServices/containerservice-containers-test',
data:      name: 'containerservice-containers-test',
data:      type: 'Microsoft.ContainerService/ContainerServices',
data:      location: 'westus',
data:      provisioningState: 'Succeeded',
data:      orchestratorProfile: { orchestratorType: 'DCOS' },
data:      masterProfile: {
data:        count: 1,
data:        dnsPrefix: 'myprefixmgmt',
data:        fqdn: 'myprefixmgmt.westus.cloudapp.azure.com'
data:      },
data:      agentPoolProfiles: [
data:        {
data:          name: 'agentpools',
data:          count: 10,
data:          vmSize: 'Standard_D2',
data:          dnsPrefix: 'myprefixagents',
data:          fqdn: 'myprefixagents.westus.cloudapp.azure.com'
data:        }
data:      ],
data:      linuxProfile: {
data:        adminUsername: 'azureuser',
data:        ssh: {
data:          publicKeys: [
data:            { keyData: 'ssh-rsa <ENCODED VALUE>' }
data:          ]
data:        }
data:      },
data:      diagnosticsProfile: {
data:        vmDiagnostics: { enabled: true, storageUri: 'https://<storageid>.blob.core.windows.net/' }
data:      }
data:    }
info:    acs scale command OK

Next steps
Deploy a cluster

Container management through the web UI


11/15/2016 2 min to read

DC/OS provides an environment for deploying and scaling clustered workloads, while abstracting the underlying
hardware. On top of DC/OS, there is a framework that manages scheduling and executing compute workloads.
While frameworks are available for many popular workloads, this document will describe how you can create and
scale container deployments with Marathon. Before working through these examples, you will need a DC/OS
cluster that is configured in Azure Container Service. You also need to have remote connectivity to this cluster. For
more information on these items, see the following articles:
Deploy an Azure Container Service cluster
Connect to an Azure Container Service cluster

Explore the DC/OS UI


With a Secure Shell (SSH) tunnel established, browse to http://localhost/. This loads the DC/OS web UI and shows
information about the cluster, such as used resources, active agents, and running services.

Explore the Marathon UI


To see the Marathon UI, browse to http://localhost/marathon. From this screen, you can start a new container or
another application on the Azure Container Service DC/OS cluster. You can also see information about running

containers and applications.

Deploy a Docker-formatted container


To deploy a new container by using Marathon, click the Create Application button, and enter the following information into the form:

FIELD       VALUE
ID          nginx
Image       nginx
Network     Bridged
Host Port   80
Protocol    TCP

If you want to statically map the container port to a port on the agent, you need to use JSON Mode. To do so, switch the New Application wizard to JSON Mode by using the toggle. Then enter the following under the portMappings section of the application definition. This example binds port 80 of the container to port 80 of the DC/OS agent. You can switch this wizard out of JSON Mode after you make this change.
"hostPort": 80,

The DC/OS cluster is deployed with a set of private and public agents. For applications on the cluster to be accessible from the Internet, you need to deploy them to a public agent. To do so, select the Optional tab of the New Application wizard and enter slave_public for the Accepted Resource Roles.

Back on the Marathon main page, you can see the deployment status for the container.

When you switch back to the DC/OS web UI (http://localhost/), you will see that a task (in this case, a Docker-formatted container) is running on the DC/OS cluster.

You can also see the cluster node that the task is running on.

Scale your containers


You can use the Marathon UI to scale the instance count of a container. To do so, navigate to the Marathon page, select the container that you want to scale, and click the Scale button. In the Scale Application dialog box, enter the number of container instances that you want, and select Scale Application.

After the scale operation finishes, you will see multiple instances of the same task spread across DC/OS agents.

Next steps
Work with DC/OS and the Marathon API
Deep dive on the Azure Container Service with Mesos

Container management through the REST API


11/15/2016 4 min to read

DC/OS provides an environment for deploying and scaling clustered workloads, while abstracting the underlying
hardware. On top of DC/OS, there is a framework that manages scheduling and executing compute workloads.
Although frameworks are available for many popular workloads, this document describes how you can create and
scale container deployments by using Marathon. Before working through these examples, you need a DC/OS
cluster that is configured in Azure Container Service. You also need to have remote connectivity to this cluster. For
more information on these items, see the following articles:
Deploying an Azure Container Service cluster
Connecting to an Azure Container Service cluster
After you are connected to the Azure Container Service cluster, you can access the DC/OS and related REST APIs
through http://localhost:local-port. The examples in this document assume that you are tunneling on port 80. For
example, the Marathon endpoint can be reached at http://localhost/marathon/v2/ . For more information on the
various APIs, see the Mesosphere documentation for the Marathon API and the Chronos API, and the Apache
documentation for the Mesos Scheduler API.

Gather information from DC/OS and Marathon


Before you deploy containers to the DC/OS cluster, gather some information about the DC/OS cluster, such as the
names and current status of the DC/OS agents. To do so, query the master/slaves endpoint of the DC/OS REST
API. If everything goes well, you will see a list of DC/OS agents and several properties for each.
curl http://localhost/mesos/master/slaves

Now, use the Marathon /apps endpoint to check for current application deployments to the DC/OS cluster. If this
is a new cluster, you will see an empty array for apps.
curl localhost/marathon/v2/apps
{"apps":[]}

Deploy a Docker-formatted container


You deploy Docker-formatted containers through Marathon by using a JSON file that describes the intended
deployment. The following sample will deploy the Nginx container, binding port 80 of the DC/OS agent to port 80
of the container. Also note that the acceptedResourceRoles property is set to slave_public. This will deploy the
container to an agent in the public-facing agent scale set.

{
  "id": "nginx",
  "cpus": 0.1,
  "mem": 16.0,
  "instances": 1,
  "acceptedResourceRoles": [
    "slave_public"
  ],
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "nginx",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 80, "hostPort": 80, "servicePort": 9000, "protocol": "tcp" }
      ]
    }
  }
}

In order to deploy a Docker-formatted container, create your own JSON file, or use the sample provided at Azure
Container Service demo. Store it in an accessible location. Next, to deploy the container, run the following
command. Specify the name of the JSON file.
curl -X POST http://localhost/marathon/v2/apps -d @marathon.json -H "Content-type: application/json"

The output will be similar to the following:


{"version":"2015-11-20T18:59:00.494Z","deploymentId":"b12f8a73-f56a-4eb1-9375-4ac026d6cdec"}

Now, if you query Marathon for applications, this new application will show in the output.
curl localhost/marathon/v2/apps

Scale your containers


You can also use the Marathon API to scale out or scale in application deployments. In the previous example, you
deployed one instance of an application. Let's scale this out to three instances of an application. To do so, create a
JSON file by using the following JSON text, and store it in an accessible location.
{ "instances": 3 }

Run the following command to scale out the application.


NOTE
The URI will be http://localhost/marathon/v2/apps/ and then the ID of the application to scale. If you are using the Nginx
sample that is provided here, the URI would be http://localhost/marathon/v2/apps/nginx.

curl http://localhost/marathon/v2/apps/nginx -H "Content-type: application/json" -X PUT -d @scale.json

Finally, query the Marathon endpoint for applications. You will see that there are now three of the Nginx
containers.

curl localhost/marathon/v2/apps

Marathon REST API interaction with PowerShell

You can perform these same actions by using PowerShell commands on a Windows system.
To gather information about the DC/OS cluster, such as agent names and agent status, run the following
command.
Invoke-WebRequest -Uri http://localhost/mesos/master/slaves
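
To work with the response as objects rather than raw JSON, you can pipe the content through the standard ConvertFrom-Json cmdlet. For example, to list agent hostnames and their status:

# Parse the Mesos response and project the fields of interest
(Invoke-WebRequest -Uri http://localhost/mesos/master/slaves).Content |
    ConvertFrom-Json |
    Select-Object -ExpandProperty slaves |
    Select-Object hostname, active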

You deploy Docker-formatted containers through Marathon by using a JSON file that describes the intended
deployment. The following sample will deploy the Nginx container, binding port 80 of the DC/OS agent to port 80
of the container.
{
  "id": "nginx",
  "cpus": 0.1,
  "mem": 16.0,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "nginx",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 80, "hostPort": 80, "servicePort": 9000, "protocol": "tcp" }
      ]
    }
  }
}

Create your own JSON file, or use the sample provided at Azure Container Service demo. Store it in an accessible
location. Next, to deploy the container, run the following command. Specify the name of the JSON file.
Invoke-WebRequest -Method Post -Uri http://localhost/marathon/v2/apps -ContentType application/json -InFile 'c:\marathon.json'

You can also use the Marathon API to scale out or scale in application deployments. In the previous example, you
deployed one instance of an application. Let's scale this out to three instances of an application. To do so, create a
JSON file by using the following JSON text, and store it in an accessible location.
{ "instances": 3 }

Run the following command to scale out the application.


NOTE
The URI will be http://localhost/marathon/v2/apps/ and then the ID of the application to scale. If you are using the Nginx
sample provided here, the URI would be http://localhost/marathon/v2/apps/nginx.

Invoke-WebRequest -Method Put -Uri http://localhost/marathon/v2/apps/nginx -ContentType application/json -InFile 'c:\scale.json'

Next steps
Read more about the Mesos HTTP endpoints.
Read more about the Marathon REST API.

Continuous Integration and Deployment of Multi-Container Docker Applications to Azure Container Service

11/22/2016 14 min to read

In this tutorial, we cover how to fully automate building and deploying a multi-container Docker app to an Azure
Container Service cluster running DC/OS. While the benefits of continuous integration and deployment (CI/CD) are
known, there are new considerations when integrating containers into your workflow. Using the new Azure
Container Registry and CLI commands, we set up an end-to-end flow, which you can customize.

Get started
You can run this walkthrough on OS X, Windows, or Linux.
You need an Azure subscription. If you don't have one, you can sign up for an account.
Install the Azure Command-line tools.

What we'll create


Let's touch on some key aspects of the app and its deployment flow that we are setting up:
The application is composed of multiple services. Docker assets (a Dockerfile per service and docker-compose.yml) define the services in our app, each running in separate containers. These enable parts of the app to scale independently, and each service can be written in a different programming language and framework. The app's code can be hosted across one or more Git source repositories (the tools currently support GitHub or Visual Studio Team Services).
The app runs in an ACS cluster configured with DC/OS. The container orchestrator can manage the health of our cluster and ensure our required number of container instances keep running.
The process of building and deploying container images is fully automated with zero downtime. We want developers on the team to 'git push' to a branch, which automatically triggers an integration process. That is, build and tag container images, run tests on each container, and push those images to a Docker private registry. From there, new images are automatically deployed to a shared pre-production environment on an ACS cluster for further testing.
Promote a release from one environment to the next, for example from Dev -> Test -> Staging -> Production. Each time we promote to a downstream environment, we will not need to rebuild our container images, ensuring we deploy the same images tested in a prior environment. This process is the concept of immutable services, and it reduces the likelihood of undetected errors creeping into production.
To most effectively utilize compute resources in our ACS cluster, we utilize the same cluster to run build
tasks fully containerizing build and deploy steps. The cluster also hosts our multiple dev/test/production
environments.

Create an Azure Container Service cluster configured with DC/OS

IMPORTANT
To create a secure cluster, you pass your SSH public key file when you call az acs create. Either you can have the
Azure CLI 2.0 generate the keys for you and pass them at the same time using the --generate-ssh-keys option, or you
can pass the path to your keys using the --ssh-key-value option (the default location on Linux is ~/.ssh/id_rsa.pub
and on Windows %HOMEPATH%\.ssh\id_rsa.pub , but this can be changed). To create SSH public and private key files on
Linux, see Create SSH keys on Linux and Mac. To create SSH public and private key files on Windows, see Create SSH keys on
Windows.

1. First, type the az login command in a terminal window to log in to your Azure subscription with the Azure
CLI:
az login

2. Create a resource group in which we place our cluster using az resource group create:
az resource group create --name myacs-rg --location westus

You may want to specify the Azure datacenter region closest to you.
3. Create an ACS cluster with default settings using az acs create and passing the path to your public SSH key file:
az acs create \
  --resource-group myacs-rg \
  --name myacs \
  --dns-prefix myacs \
  --ssh-key-value ~/.ssh/id_rsa.pub

This step takes several minutes, so feel free to read on. The acs create command returns information about the
newly created cluster (or you can list the ACS clusters in your subscription with az acs list ). For more ACS
configuration options, read more about creating and configuring an ACS cluster.

Set up sample code


While the cluster is being created, we can set up sample code that we deploy to ACS.
1. Fork the sample GitHub repository so that you have your own copy: https://github.com/azure-samples/container-service-dotnet-continuous-integration-multi-container.git. The app is essentially a multi-container version of "hello world."
2. Once you have created a fork in your own GitHub account, locally clone the repository on your computer:
git clone https://github.com/your-github-account/container-service-dotnet-continuous-integration-multi-container.git
cd container-service-dotnet-continuous-integration-multi-container

Let's take a closer look at the code:

/service-a is an Angular.js-based web app with a Node.js backend.
/service-b is a .NET Core service, and is called by service-a via REST.
Both service-a and service-b contain a Dockerfile in each of their directories that respectively describe Node.js- and .NET Core-based container images.
docker-compose.yml declares the set of services that are built and deployed.
In addition to service-a and service-b, a third service named cache runs a Redis cache that service-a can use. cache differs from the first two services in that we don't have code for it in our source repository. Instead, we fetch a pre-made redis:alpine image from Docker Hub and deploy it to ACS.
/service-a/server.js contains code where service-a calls both service-b and cache. Notice that service-a code references service-b and cache by how they are named in docker-compose.yml. If we run these services on our local machine via docker-compose, Docker ensures the services are all networked appropriately to find each other by name. Running the services in a cluster environment with load-balanced networking is typically much more complex than running locally. The good news is that the Azure CLI commands set up a CI/CD flow that ensures this straightforward service-discovery code continues to run as-is in ACS. A sketch of what such a compose file might look like follows this list.
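
For orientation, a compose file along these lines would declare the three services. This is a hypothetical sketch, not the sample's actual file; the build and image details in the repository are authoritative:

version: "2"
services:
  service-a:
    build: ./service-a    # Node.js front end, built from its own Dockerfile
    ports:
      - "80:80"
  service-b:
    build: ./service-b    # .NET Core service, reachable from service-a as http://service-b
  cache:
    image: redis:alpine   # pre-built image pulled from Docker Hub; no code in our repository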

Set up continuous integration and deployment


1. Ensure the ACS cluster is ready: run az acs list and confirm that our ACS cluster is listed. (Note: ACS must be
running DC/OS 1.8 or greater.)
2. Create a GitHub personal access token, granting it at least the repo scope. Don't forget to copy the token to
your clipboard, as we'll use it in the next command (it will set up a webhook on our GitHub repo).
3. Set your current directory to the root of your cloned source repository, and create a build and release pipeline, using the <GitHubPersonalAccessToken> that you just created:
cd container-service-dotnet-continuous-integration-multi-container

az container release create \
  --target-name myacs \
  --target-resource-group myacs-rg \
  --remote-access-token <GitHubPersonalAccessToken>

Where --target-name is the name of your ACS cluster, and --target-resource-group is the ACS cluster's resource group name.

On first run, this command may take a minute or so to complete. Once completed, important information is returned regarding the build and release pipeline it created:

sourceRepo: a webhook is configured for the source repository so that the build and release pipeline is automatically triggered whenever source code is pushed to it.
vstsProject: Visual Studio Team Services (VSTS) is configured to drive the workflow (the actual build and deployment tasks run within containers in ACS). If you would like to use a specific VSTS account and project, you can define them using the --vsts-account-name and --vsts-project-name parameters.
buildDefinition: defines the tasks that run for each build. Container images are produced for each service defined in the docker-compose.yml, and then pushed to a Docker container registry.
containerRegistry: the Azure Container Registry is a managed service that runs a Docker container registry. A new Azure Container Registry is created with a default name, or you can alternatively specify an Azure Container Registry name via the --registry-name parameter.
releaseDefinition: defines the tasks that are run for each deployment. Container images for the services defined in docker-compose.yml are pulled from the container registry, and deployed to the ACS cluster. By default, three environments are created: Dev, Test, and Production. The release definition is configured by default to automatically deploy to Dev each time a build completes successfully. A release can be promoted to Test or Production manually without requiring a rebuild. The default flow can be customized in VSTS.
containerService: the target ACS cluster (must be running DC/OS 1.8).
The following snippet is an example of the command you would type if you already have an existing Azure Container Registry named myregistry, and want to create the build and release definitions with a VSTS account at myvstsaccount.visualstudio.com and an existing VSTS project myvstsproject:
az container release create \
--target-name myacs \
--target-resource-group myacs-rg \
--registry-name myregistry \
--vsts-account-name myvstsaccount \
--vsts-project-name myvstsproject \
--remote-access-token <GitHubPersonalAccessToken>

View deployment pipeline progress


Once the pipeline is created, a first-time build and deployment is kicked off automatically. Subsequent builds are
triggered each time code is pushed to the source repository. You can check progress of a build and/or release by
opening your browser to the build definition or release definition URLs.
You can always find the release definition URL associated with an ACS cluster by running this command:
az container release list \
--target-name myacs \
--target-resource-group myacs-rg

VSTS screenshot showing CI results of our multi-container app

VSTS docker-compose release with multiple environments

View the application


At this point, our application is deployed to our shared dev environment and is not publicly exposed. In the meantime, use the DC/OS dashboard to view and manage our services: either create an SSH tunnel to the DC/OS-related endpoints, or run a convenience command provided by the Azure CLI.
IMPORTANT
On a first-time deployment, confirm the VSTS release successfully deployed before proceeding.

NOTE
Windows Only: You need to set up Pageant to complete this section.
Launch PuTTYgen and load the private SSH key used to create the ACS cluster (%homepath%\id_rsa).
Save the private SSH key as id_rsa.ppk in the same folder.
Launch Pageant - it will start running and display an icon in your bottom-right system tray.
Right-click the system tray icon and select Add Key.
Add the id_rsa.ppk file.

1. Open the ACS cluster's DC/OS dashboard using the Azure CLI convenience command:

az acs dcos browse -g myacs-rg -n myacs

-g is the resource group name of the target ACS cluster.
-n is the name of the target ACS cluster.

You may be prompted for your local account password, since this command requires administrator privilege. The command creates an SSH tunnel to a DC/OS endpoint, opens your default browser to that endpoint, and temporarily configures the browser's web proxy.

TIP
If you need to look up the name of your ACS cluster, you can list all ACS clusters in your subscription by
running az acs list .

2. In the DC/OS dashboard, click Services on the left navigation menu (http://localhost/#/services). Services deployed via our pipeline are grouped under a root folder named dev (named after the environment in the VSTS release definition).

You can perform many useful things in the DC/OS dashboard:

tracking deployment status for each service
viewing CPU and memory requirements
viewing logs
scaling the number of instances for each service

To view the web application for service-a: start at the dev root folder, then drill down the folder hierarchy until you reach service-a. This view lists the running tasks (or container instances) for service-a.

Click a task to open its view, then click one of its available endpoints.

Our simple web app calls service-a, which calls service-b, and returns a hello world message. A counter is incremented on Redis each time a request is made.

(Optional) Reaching a service from the command line

If you want to reach a service via curl from the command line:
1. Run az acs dcos browse --verbose -g myacs-rg -n myacs, and take note of the line that reads "Proxy running on ..." after you enter your password.
2. In a new terminal window, type:

export http_proxy=http://<web-proxy-service-ip>:<portnumber>

For example:

export http_proxy=http://127.0.0.1:55405

3. Now you can curl against your service endpoint, curl http://service-url, where service-url is the address you see when you navigate to your service endpoint from the Marathon UI. To unset the http_proxy variable from your command line, type unset http_proxy.

Scale services
While we're in the DC/OS dashboard, let's scale our services.
1. Navigate to the application in the dev subfolder.
2. Hover over service-b, click the gear icon, and select Scale.
3. Increase the number to 3 and click Scale Service.
4. Navigate back to the running web app, and repeatedly click the Say It Again button. Notice that service-b invocations begin to round-robin across a collection of hostnames, while the single instance of service-a continues to report the same host.

Promote a release to downstream environments without rebuilding container images

Our VSTS release pipeline set up three environments by default: Dev, Test, and Production. So far we've deployed to Dev. Let's look at how we can promote a release to the next downstream environment, Test, without rebuilding our container images. This workflow ensures we're deploying the exact same images we tested in the prior environment; it is the concept of immutable services, and it reduces the likelihood of undetected errors creeping into production.
1. In the VSTS web UI, navigate to Releases.

2. Open the most recent release.

3. In the release definition's menu bar, click Deploy, then select Test as the next environment we want to deploy to. This starts a new deployment, reusing the same images that were previously deployed to Dev. Click Logs if you want to follow the deployment in more detail.

Once deployment to Test has succeeded, a new root folder in the Marathon UI named test contains the running services for that environment.

Trigger a new build and deployment


Let's simulate what would happen if a developer on our team pushed a code change to the source repository.
1. Back in the code editor, open service-a/public/index.html.
2. Modify this line of code:

<h2>Server Says</h2>

to something like:

<h2>Server Says Hello</h2>

3. Save the file, then commit and push the code change to your source repository.
git commit -am 'updated title'
git push

The commit automatically kicks off a new build, and a new release to be deployed to Dev . Services in downstream
environments (Test or Production) remains unchanged until we decide to promote a specific release to that
environment.

If you open the build definition in VSTS, you'll see something like this:

Expose public endpoint for production


1. Add the following yaml code to a new file named docker-compose.env.production.yml at the root folder of your source repository. This adds a label that causes a public endpoint to be exposed for service-a.

version: "2"
services:
  service-a:
    labels:
      com.microsoft.acs.dcos.marathon.vhost: "<FQDN, or custom domain>"

For the label value, you can either specify the URL of your ACS agent's fully qualified domain name
(FQDN), or a custom domain (for example, app.contoso.com). To find your ACS agent's FQDN, run the
command az acs list , and check the property for agentPoolProfiles.fqdn . For example,
myacsagents.westus.cloudapp.azure.com .
By following the filename convention docker-compose.env.environment-name.yml, these settings only affect the named environment (in this case, the environment named Production). If you inspect the release definition in VSTS, you'll see that each environment's deployment task is set up to read from a docker-compose file named after this convention.
2. Commit and push the file to your master source repository to start another build.
git add .
git commit -am "expose public port for service-a"
git push

3. Wait until the update has been built and deployed to Dev , then promote it to Test, and then promote it to
Production. (For the purposes of this tutorial, you can deploy directly to Production but it is good to get in
the practice of only deploying to the next downstream environment.)
4. (Optional) If you specified a custom dom ain for vhost (for example, app.contoso.com), add a DNS record
in your domain provider's settings. Log in to your domain provider's administrative UI and add a DNS
record as follows:
Type: CNAME
Host: Your custom domain, for example, app.contoso.com
Answer: ACS agent FQDN, for example, myacsagents.westus.cloudapp.azure.com

TTL (Optional): Sometimes, your domain provider gives you the ability to edit the TTL. A lower value
results in a DNS record update to be propagated more quickly.
5. Once the release has been deployed to Production, that version is accessible to anyone. Open your browser to the URL you specified for the com.microsoft.acs.dcos.marathon.vhost label. (Note: releases to pre-production environments continue to be private.)

Summary
Congratulations! You learned how to create an ACS cluster with DC/OS, and set up a fully automated and
containerized build and deployment pipeline for a multi-container app.
Some next steps to explore:
Scale VSTS agents. If you need more throughput for running build and release tasks, you can increase the number of VSTS agent instances. Navigate to Services in the DC/OS Dashboard, open the vsts-agents folder, and experiment with scaling the number of VSTS agent instances.
Integrate unit tests. This GitHub repository shows how to make unit tests and integration tests run in
containers and include them in the build tasks: https://github.com/mindaro/sample-app.
Hint: look at these files in the repository: service-a/unit-tests.js , service-a/service-tests.js ,
docker-compose.ci.unit-tests.yml , and docker-compose.ci.service-tests.yml .

Clean up
To limit your compute charges related to this tutorial, run the following command and take note of the deployment
pipeline resources that are related to an ACS cluster:
az container release list --target-name myacs --target-resource-group myacs-rg

Delete the ACS cluster:


1. Sign in to the Azure portal.
2. Look up the resource group that contains your ACS cluster.
3. Open the resource group's blade UI, and click Delete in the blade's command bar.
Delete the Azure Container Registry:
1. In the Azure portal, search for the Azure Container Registry, and delete it.
The Visual Studio Team Services account offers free Basic Access Level for the first five users, but you can delete the
build and release definitions.
1. Delete the VSTS Build Definition:
Open the Build Definition URL in your browser, then click on the Build Definitions link (next to the
name of the build definition you are currently viewing).
Click the action menu beside the build definition you want to delete, and select Delete Definition

2. Delete the VSTS Release Definition:


Open the Release Definition URL in your browser.
In the Release Definitions list on the left-hand side, click the drop-down beside the release definition
you want to delete, and select Delete .

DC/OS Agent Pools for Azure Container Service


11/15/2016 1 min to read

Azure Container Service with DC/OS divides agents into public or private pools. A deployment can be made to either pool, affecting accessibility between machines in your container service. The machines can be exposed to the internet (public) or kept internal (private). This article gives a brief overview of why there are public and private pools.

Private agents
Private agent nodes run through a non-routable network. This network is only accessible from the admin zone or
through the public zone edge router. By default, DC/OS launches apps on private agent nodes. Consult the DC/OS
documentation for more information about network security.
Public agents
Public agent nodes run DC/OS apps and services through a publicly accessible network. Consult the DC/OS
documentation for more information about network security.

Using agent pools


By default, Marathon deploys any new application to the private agent nodes. You have to explicitly deploy the application to the public nodes during the creation of the application. Select the Optional tab and enter slave_public for the Accepted Resource Roles value. This process is documented here and in the DC/OS documentation.

Next steps
Read more information about managing your DC/OS containers.
Learn how to open the firewall provided by Azure to allow public access to your DC/OS container.

Enable public access to an Azure Container Service application

11/15/2016 2 min to read

Any DC/OS container in the ACS public agent pool is automatically exposed to the internet. By default, ports 80, 443, and 8080 are opened, and any (public) container listening on those ports is accessible. This article shows you how to open more ports for your applications in Azure Container Service.

Open a port (portal)


First, we need to open the port we want.
1. Log in to the portal.
2. Find the resource group that you deployed the Azure Container Service to.
3. Select the agent load balancer (which is named similar to XXXX-agent-lb-XXXX ).

4. Click Probes and then Add .

5. Fill out the probe form and click OK.

FIELD                 DESCRIPTION
Name                  A descriptive name of the probe.
Port                  The port of the container to test.
Path                  (When in HTTP mode) The relative website path to probe. HTTPS not supported.
Interval              The amount of time between probe attempts, in seconds.
Unhealthy threshold   Number of consecutive probe attempts before considering the container unhealthy.

6. Back at the properties of the agent load balancer, click Load balancing rules and then Add .

7. Fill out the load balancer form and click OK .


Name: A descriptive name for the load balancing rule.
Port: The public incoming port.
Backend port: The internal public port of the container to route traffic to.
Backend pool: The containers in this pool will be the targets for this load balancer.
Probe: The probe used to determine whether a target in the backend pool is healthy.
Session persistence: Determines how traffic from a client is handled for the duration of the session. None: successive requests from the same client can be handled by any container. Client IP: successive requests from the same client IP are handled by the same container. Client IP and protocol: successive requests from the same client IP and protocol combination are handled by the same container.
Idle timeout: (TCP only) The time, in minutes, to keep a TCP/HTTP connection open without relying on keep-alive messages.

Add a security rule (portal)


Next, we need to add a security rule that routes traffic from our opened port through the firewall.
1. Log in to the portal.
2. Find the resource group that you deployed the Azure Container Service to.
3. Select the public agent network security group (which is named similar to XXXX-agent-public-nsg-XXXX ).

4. Select Inbound security rules and then Add .

5. Fill out the firewall rule to allow your public port and click OK .
Name: A descriptive name for the firewall rule.
Priority: The priority rank for the rule. The lower the number, the higher the priority.
Source: Restricts the incoming IP address range allowed or denied by this rule. Use Any to not specify a restriction.
Service: Selects a set of predefined services this security rule applies to. Otherwise, use Custom to create your own.
Protocol: Restricts traffic based on TCP or UDP . Use Any to not specify a restriction.
Port range: When Service is Custom , specifies the range of ports that this rule affects. You can use a single port, such as 80 , or a range, such as 1024-1500 .
Action: Allow or deny traffic that meets the criteria.
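If you'd rather script this step, a rough sketch with the classic azure xplat CLI follows; the resource group, NSG name, rule name, and port are placeholders, and the exact option names may vary by CLI version:

azure network nsg rule create myacs-rg myacs-agent-public-nsg allow-8081 \
  --protocol Tcp --destination-port-range 8081 --access Allow --priority 400 --direction Inbound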

Next steps
Learn about the difference between public and private DC/OS agents.
Read more information about managing your DC/OS containers.

Load balance containers in an Azure Container Service cluster

11/15/2016 4 min to read Edit on GitHub

Contributors
Ross Gardler Kim Whitlatch (Beyondsoft Corporation) Tyson Nevil William Buchwalter katiecumming Neil Gat

In this article, we'll explore how to create an internal load balancer in a DC/OS-managed Azure Container Service cluster using Marathon-LB. This enables you to scale your applications horizontally. It also enables you to take advantage of the public and private agent clusters by placing your load balancers on the public cluster and your application containers on the private cluster.

Prerequisites
Deploy an instance of Azure Container Service with orchestrator type DC/OS and ensure that your client can
connect to your cluster.

Load balancing
There are two load-balancing layers in the Container Service cluster we will build:
1. Azure Load Balancer provides public entry points (the ones that end users hit). This is provided automatically by Azure Container Service and is, by default, configured to expose ports 80, 443, and 8080.
2. The Marathon Load Balancer (marathon-lb) routes inbound requests to container instances that service those
requests. As we scale the containers providing our web service, marathon-lb dynamically adapts. This load
balancer is not provided by default in your Container Service, but it is very easy to install.

Marathon Load Balancer


Marathon Load Balancer dynamically reconfigures itself based on the containers that you've deployed. It's also
resilient to the loss of a container or an agent - if this occurs, Apache Mesos will simply restart the container
elsewhere and marathon-lb will adapt.
To install the Marathon Load Balancer you can use either the DC/OS web UI or the command line.

Install Marathon-LB using DC/OS Web UI


1. Click 'Universe'
2. Search for 'Marathon-LB'
3. Click 'Install'

Install Marathon-LB using the DC/OS CLI


After installing the DC/OS CLI and ensuring you can connect to your cluster, run the following command from your
client machine:
dcos package install marathon-lb

This command automatically installs the load balancer on the public agents cluster.
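You can confirm the installation from the same client; marathon-lb should appear in the list of installed packages:

dcos package list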

Deploy A Load Balanced Web Application


Now that we have the marathon-lb package, we can deploy an application container that we wish to load balance.
For this example we will deploy a simple web server by using the following configuration:
{
"id": "web",
"container": {
"type": "DOCKER",
"docker": {
"image": "yeasy/simple-web",
"network": "BRIDGE",
"portMappings": [
{ "hostPort": 0, "containerPort": 80, "servicePort": 10000 }
],
"forcePullImage":true
}
},
"instances": 3,
"cpus": 0.1,
"mem": 65,
"healthChecks": [{
"protocol": "HTTP",
"path": "/",
"portIndex": 0,
"timeoutSeconds": 10,
"gracePeriodSeconds": 10,
"intervalSeconds": 2,
"maxConsecutiveFailures": 10
}],
"labels":{
"HAPROXY_GROUP":"external",
"HAPROXY_0_VHOST":"YOUR FQDN",
"HAPROXY_0_MODE":"http"
}
}

Set the value of HAPROXY_0_VHOST to the FQDN of the load balancer for your agents. This is in the form
<acsName>agents.<region>.cloudapp.azure.com . For example, if you create a Container Service cluster with name
myacs in region West US , the FQDN would be myacsagents.westus.cloudapp.azure.com . You can also find this by
looking for the load balancer with "agent" in the name when you're looking through the resources in the
resource group that you created for Container Service in the Azure portal.
Set the servicePort to a port >= 10,000. This identifies the service that is being run in this container; marathon-lb uses this to identify services that it should balance across.
Set the HAPROXY_GROUP label to "external".
Set hostPort to 0. This means that Marathon will arbitrarily allocate an available port.
Set instances to the number of instances you want to create. You can always scale these up and down later.

It is worth noting that, by default, Marathon deploys to the private cluster. This means that the above deployment will only be accessible via your load balancer, which is usually the behavior we want.

Deploy using the DC/OS Web UI


1. Visit the Marathon page at http://localhost/marathon (after setting up your SSH tunnel) and click Create Application .
2. In the New Application dialog, click JSON Mode in the upper right corner.
3. Paste the above JSON into the editor.
4. Click Create Application .

Deploy using the DC/OS CLI


To deploy this application with the DC/OS CLI, simply copy the above JSON into a file called hello-web.json , and run:

dcos marathon app add hello-web.json

Azure Load Balancer


By default, Azure Load Balancer exposes ports 80, 8080, and 443. If you're using one of these three ports (as we do
in the above example), then there is nothing you need to do. You should be able to hit your agent load balancer's
FQDN--and each time you refresh, you'll hit one of your three web servers in a round-robin fashion. However, if
you use a different port, you need to add a round-robin rule and a probe on the load balancer for the port that you
used. You can do this from the Azure CLI, with the commands azure network lb rule create and
azure network lb probe create . You can also do this using the Azure Portal.
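As a rough sketch, assuming resource group myacs-rg, an agent load balancer named myacs-agent-lb, and a new public port 8081 (option names may vary by CLI version):

azure network lb probe create myacs-rg myacs-agent-lb probe8081 --protocol Tcp --port 8081
azure network lb rule create myacs-rg myacs-agent-lb rule8081 --protocol Tcp \
  --frontend-port 8081 --backend-port 8081 --probe-name probe8081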

Additional scenarios
You could have a scenario where you use different domains to expose different services. For example:
mydomain1.com -> Azure LB:80 -> marathon-lb:10001 -> mycontainer1:33292
mydomain2.com -> Azure LB:80 -> marathon-lb:10002 -> mycontainer2:22321
To achieve this, check out virtual hosts, which provide a way to associate domains to specific marathon-lb paths.
Alternatively, you could expose different ports and remap them to the correct service behind marathon-lb. For
example:
Azure LB:80 -> marathon-lb:10001 -> mycontainer:233423
Azure LB:8080 -> marathon-lb:10002 -> mycontainer2:33432

Next steps
See the DC/OS documentation for more on marathon-lb.

Create an application or user-specific Marathon service

11/15/2016 2 min to read Edit on GitHub

Contributors
Ross Gardler Kim Whitlatch (Beyondsoft Corporation) Tyson Nevil katiecumming Neil Gat

Azure Container Service provides a set of master servers on which we preconfigure Apache Mesos and Marathon.
These can be used to orchestrate your applications on the cluster, but it's best not to use the master servers for this
purpose. For example, tweaking the configuration of Marathon requires logging into the master servers themselves
and making changes--this encourages unique master servers that are a little different from the standard and need
to be cared for and managed independently. Additionally, the configuration required by one team might not be the
optimal configuration for another team.
In this article, we'll explain how to add an application or user-specific Marathon service.
Because this service will belong to a single user or team, they are free to configure it in any way that they desire.
Also, Azure Container Service will ensure that the service continues to run. If the service fails, Azure Container
Service will restart it for you. Most of the time you won't even notice it had downtime.

Prerequisites
Deploy an instance of Azure Container Service with orchestrator type DC/OS and ensure that your client can
connect to your cluster. Also, do the following steps.
NOTE
This is for working with DC/OS-based ACS clusters. There is no need to do this for Swarm-based ACS clusters.

First, connect to your DC/OS-based ACS cluster. Once you have done this, you can install the DC/OS CLI on your
client machine with the commands below:
sudo pip install virtualenv
mkdir dcos && cd dcos
wget https://raw.githubusercontent.com/mesosphere/dcos-cli/master/bin/install/install-optout-dcos-cli.sh
chmod +x install-optout-dcos-cli.sh
./install-optout-dcos-cli.sh . http://localhost --add-path yes

If you are using an old version of Python, you may notice some "InsecurePlatformWarnings". You can safely ignore
these.
In order to get started without restarting your shell, run:
source ~/.bashrc

This step will not be necessary when you start new shells.
Now you can confirm that the CLI is installed:

dcos --help

Create an application or user-specific Marathon service


Begin by creating a JSON configuration file that defines the name of the application service that you want to create.
Here we use marathon-alice as the framework name. Save the file as something like marathon-alice.json :
{"marathon": {"framework-name": "marathon-alice" }}

Next, use the DC/OS CLI to install the Marathon instance with the options that are set in your configuration file:
dcos package install --options=marathon-alice.json marathon

You should now see your marathon-alice service running in the Services tab of your DC/OS UI. The UI is available at
http://<hostname>/service/marathon-alice/ if you want to access it directly.

Set the DC/OS CLI to access the service


You can optionally configure your DC/OS CLI to access this new service by setting the marathon.url property to
point to the marathon-alice instance as follows:

dcos config set marathon.url http://<hostname>/service/marathon-alice/

You can verify which instance of Marathon that your CLI is working against with the dcos config show command.
You can revert to using your master Marathon service with the command dcos config unset marathon.url .

Using OMS to monitor container applications on ACS DC/OS

11/22/2016 3 min to read Edit on GitHub

Contributors
Keiko Harada Ralph Squillace

Microsoft Operations Management Suite (OMS) is Microsoft's cloud-based IT management solution that helps you
manage and protect your on-premises and cloud infrastructure. Container Solution is a solution in OMS Log
Analytics that helps you view container inventory, performance, and logs in a single location. You can audit and
troubleshoot containers by viewing their logs in a centralized location, and find containers that are noisy and
consuming excess resources on a host.

For more information about Container Solution, please refer to the Container Solution Log Analytics.

Setting up OMS from the DC/OS universe


This article assumes that you have set up a DC/OS cluster and have deployed simple web container applications on the cluster.

Prerequisites
Microsoft Azure subscription - You can get this for free.
Microsoft OMS workspace setup - see "Step 3" below.
DC/OS CLI installed.
1. In the DC/OS dashboard, click Universe and search for OMS as shown below.

2. Click Install . You will see a pop-up with the OMS version information and an Install Package or Advanced
Installation button. Click Advanced Installation , which leads you to the OMS-specific configuration
properties page.

3. Here, you will be asked to enter the wsid (the OMS workspace ID) and wskey (the OMS primary key for the
workspace ID). To get both wsid and wskey you need to create an OMS account at
https://mms.microsoft.com. Please follow the steps to create an account. Once you are done creating the
account, obtain your wsid and wskey by clicking Settings , then Connected Sources , and
then Linux Servers , as shown below.

4. Select the number of OMS instances that you want and click the Review and Install button. Typically, you
will want the number of OMS instances to equal the number of VMs in your agent cluster.
The OMS Agent for Linux installs as an individual container on each VM from which it collects monitoring and
logging information.
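Because the DC/OS CLI is a prerequisite, you can also script this installation. Treat the following strictly as an unverified sketch: the package name msoms matches the uninstall command later in this article, but the option keys are assumptions based on the UI fields; check the real schema with dcos package describe msoms --config first.

# Sketch only: option keys "wsid"/"wskey" are assumed from the UI fields above
echo '{"msoms": {"wsid": "<YOUR_WORKSPACE_ID>", "wskey": "<YOUR_PRIMARY_KEY>"}}' > msoms.json
dcos package install --options=msoms.json msoms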

Setting up a simple OMS dashboard


Once you have installed the OMS Agent for Linux on the VMs, the next step is to set up the OMS dashboard. There are
two ways to do this: the OMS portal or the Azure portal.

OMS Portal
Log in to the OMS portal (https://mms.microsoft.com) and go to the Solution Gallery .

Once you are in the Solution Gallery , select Containers .

Once you've selected the Container Solution, you will see the tile on the OMS Overview Dashboard page. Once the
ingested container data is indexed, you will see the tile populated with information on the solution view tiles.

Azure Portal
Log in to the Azure portal at https://portal.microsoft.com/. Go to Marketplace , select Monitoring + management ,
and click See All . Then type "containers" in the search box. You will see "containers" in the search results. Select
Containers and click Create .

Once you click Create , you are asked for your workspace. Select your workspace or, if you do not have one, create a
new workspace.

Once you've selected your workspace, click Create .

For more information about the OMS Container Solution, please refer to the Container Solution Log Analytics.

How to scale OMS Agent with ACS DC/OS


If the number of installed OMS agents falls short of the actual node count, or if you scale up the VM scale set by
adding more VMs, you can scale the msoms service. You can either go to Marathon or the DC/OS UI Services tab
and scale up your instance count.

This deploys the agent to the nodes that have not yet had the OMS agent deployed.
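From the DC/OS CLI, a one-line sketch that does the same, assuming the service is registered as msoms and you want five agent instances:

dcos marathon app update /msoms instances=5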

Uninstall MS OMS
To uninstall MS OMS enter the following command:
$ dcos package uninstall msoms

Let us know!!!
What works? What is missing? What else do you need for this to be useful for you? Let us know at OMSContainers.

Next steps
Now that you have set up OMS to monitor your containers, see your container dashboard.

Monitor an Azure Container Service cluster with Datadog

11/15/2016 1 min to read Edit on GitHub

Contributors
rbitia Kim Whitlatch (Beyondsoft Corporation) Tyson Nevil

In this article we will deploy Datadog agents to all the agent nodes in your Azure Container Service cluster. You will
need an account with Datadog for this configuration.

Prerequisites
Deploy and connect a cluster configured by Azure Container Service. Explore the Marathon UI. Go to
http://datadoghq.com to set up a Datadog account.

Datadog
Datadog is a monitoring service that gathers monitoring data from your containers within your Azure Container
Service cluster. Datadog has a Docker Integration Dashboard where you can see specific metrics within your
containers. Metrics gathered from your containers are organized by CPU, Memory, Network and I/O. Datadog splits
metrics into containers and images. An example of what the UI looks like for CPU usage is below.

Configure a Datadog deployment with Marathon


These steps will show you how to configure and deploy Datadog applications to your cluster with Marathon.
Access your DC/OS UI via http://localhost:80/. Once in the DC/OS UI, navigate to Universe in the bottom left, then search for "Datadog" and click Install .

Now, to complete the configuration, you will need a Datadog account or a free trial account. Once you're logged in
to the Datadog website, go to Integrations , then APIs .

Next enter your API key into the Datadog configuration within the DC/OS Universe.

In the above configuration, instances are set to 10000000, so whenever a new node is added to the cluster, Datadog
automatically deploys an agent to that node. This is an interim solution. Once you've installed the package,
navigate back to the Datadog website and find "Dashboards." From there you will see Custom and
Integration Dashboards. The Docker Integration Dashboard has all the container metrics you need for
monitoring your cluster.

Monitor an Azure Container Service cluster with Sysdig

11/15/2016 1 min to read Edit on GitHub

Contributors
rbitia Andy Pasic Kim Whitlatch (Beyondsoft Corporation) Tyson Nevil

In this article, we will deploy Sysdig agents to all the agent nodes in your Azure Container Service cluster. You need
an account with Sysdig for this configuration.

Prerequisites
Deploy and connect a cluster configured by Azure Container Service. Explore the Marathon UI. Go to
http://app.sysdigcloud.com to set up a Sysdig cloud account.

Sysdig
Sysdig is a monitoring service that allows you to monitor your containers within your cluster. Sysdig is known to
help with troubleshooting but it also has your basic monitoring metrics for CPU, Networking, Memory, and I/O.
Sysdig makes it easy to see which containers are working the hardest or essentially using the most memory and
CPU. This view is in the Overview section, which is currently in beta.

Configure a Sysdig deployment with Marathon


These steps will show you how to configure and deploy Sysdig applications to your cluster with Marathon.
Access your DC/OS UI via http://localhost:80/. Once in the DC/OS UI, navigate to Universe in the bottom left, then search for "Sysdig."

Now to complete the configuration you need a Sysdig cloud account or a free trial account. Once you're logged in
to the Sysdig cloud website, click on your user name, and on the page you should see your "Access Key."

Next enter your Access Key into the Sysdig configuration within the DC/OS Universe.

Now set the instances to 10000000, so that whenever a new node is added to the cluster, Sysdig automatically
deploys an agent to that new node. This is an interim solution to make sure Sysdig deploys to all new agents
within the cluster.

Once you've installed the package navigate back to the Sysdig UI and you'll be able to explore the different usage
metrics for the containers within your cluster.

Microsoft Azure Container Service Engine Kubernetes Walkthrough


11/22/2016 4 min to read Edit on GitHub

Contributors
anhowe Ralph Squillace Saurya Das

Deployment
Here are the steps to deploy a simple Kubernetes cluster:
1. Generate your SSH key.
2. Generate your service principal.
3. Click the Deploy to Azure button in the README and fill in the fields.

Walkthrough
Once your Kubernetes cluster has been created you will have a resource group containing:
1. One master accessible by SSH on port 22 or kubectl on port 443.
2. A set of nodes in an availability set. The nodes can be accessed through a master. See agent forwarding for
an example of how to do this.
The following image shows the architecture of a container service cluster with one master and two agents:

In the image above, you can see the following parts:


1. Master Components - The master runs the Kubernetes scheduler, API server, and controller manager. Port 443
is exposed for remote management with the kubectl CLI.
2. Nodes - The Kubernetes nodes run in an availability set. Azure load balancers are dynamically added to the
cluster depending on exposed services.
3. Common Components - All VMs run a kubelet, Docker, and a proxy.
4. Networking - All VMs are assigned an IP address in the 10.240.0.0/16 network. Each VM is assigned a /24
subnet for its pod CIDR, enabling an IP per pod. The proxy running on each VM implements the service network
10.0.0.0/16.

All VMs are in the same private VNET and are fully accessible to each other.
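Once connected, you can verify these assignments with kubectl. The following is a small sketch using standard Kubernetes node fields (nothing here is specific to ACS):

# Print each node's name alongside the /24 pod CIDR assigned to it
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'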

Create your First Kubernetes Service


After completing this walkthrough you will know how to:
access the Kubernetes cluster via SSH,
deploy a simple Docker application and expose it to the world,
find the location of the kube config file and access the Kubernetes cluster remotely,
use kubectl exec to run commands in a container,
and finally access the Kubernetes dashboard.
1. After successfully deploying the template, write down the master FQDN (fully qualified domain name).
a. If using PowerShell or the CLI, the output parameter is in the OutputsString section, named 'masterFQDN'.
b. If using the portal, browse to the Overview blade of the ContainerService resource to copy the "Master
FQDN":

2. SSH to the master FQDN obtained in step 1.


3. Explore your nodes and running pods:
a. To see a list of your nodes, type kubectl get nodes . If you want full detail of the nodes, add -o yaml so
the command becomes kubectl get nodes -o yaml .
b. To see a list of running pods, type kubectl get pods --all-namespaces .
4. Start your first Docker image by typing kubectl run nginx --image nginx . This will start the nginx Docker
container in a pod on one of the nodes.

5. Type kubectl get pods -o yaml to see the full details of the nginx deployment. You can see the host IP and
the podIP. The pod IP is assigned from the pod CIDR on the host. Run curl to the pod IP to see the nginx
output, e.g. curl 10.244.1.4

6. The next step is to expose the nginx deployment as a Kubernetes service on the private service network
10.0.0.0/16:
a. Expose the service with the command kubectl expose deployment nginx --port=80
b. Get the service IP: kubectl get service
c. Run curl to the IP, e.g. curl 10.0.105.199

7. The final step is to expose the service to the world. This is done by changing the service type from ClusterIP
to LoadBalancer :
a. Edit the service: kubectl edit svc/nginx
b. Change the type from ClusterIP to LoadBalancer and save it. This causes Kubernetes to create an
Azure Load Balancer with a public IP.
c. The change takes about 2-3 minutes. To watch the service change from "pending" to an external IP,
type watch 'kubectl get svc'
d. Once you see the external IP, you can browse to it in your browser. (A declarative alternative to this step
is sketched below.)
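Equivalently, you could skip the interactive edit and define the service declaratively. The following is a minimal sketch of such a manifest; the name nginx-lb is illustrative, and it assumes the pods carry the run: nginx label that kubectl run applies by default:

apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
spec:
  type: LoadBalancer
  ports:
  - port: 80        # public port; also used as the target port by default
  selector:
    run: nginx      # matches the pods created by 'kubectl run nginx'

Save it as nginx-lb.yaml and create it with kubectl create -f nginx-lb.yaml .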

8. The next step in this walkthrough is to show you how to remotely manage your Kubernetes cluster. First
download Kubectl to your machine and put it in your path:
Windows Kubectl
OSX Kubectl
Linux
9. The Kubernetes master contains the kube config file for remote access under the home directory
~/.kube/config. Download this file to your machine, set the KUBECONFIG environment variable, and run
kubectl to verify that you can connect to the cluster:
On Windows, use pscp from PuTTY. Ensure you have your certificate exposed through pageant:
# MASTERFQDN is obtained in step 1
pscp azureuser@MASTERFQDN:.kube/config .
SET KUBECONFIG=%CD%\config
kubectl get nodes

OS X or Linux:
# MASTERFQDN is obtained in step 1
scp azureuser@MASTERFQDN:.kube/config .
export KUBECONFIG=`pwd`/config
kubectl get nodes

10. The next step is to show you how to remotely run commands in a remote Docker container:
a. Run kubectl get pods to show the name of your nginx pod.
b. Using your pod name, you can run a remote command on your pod, e.g.
kubectl exec nginx-701339712-retbj date
c. Try running a remote bash session, e.g. kubectl exec nginx-701339712-retbj -it bash . The following
screen shot shows these commands:

11. The final step of this tutorial is to show you the dashboard:
a. Run kubectl proxy to directly connect to the proxy.
b. In your browser, browse to the dashboard.
c. Browse around and explore your pods and services.

Learning More
Here are recommended links to learn more about Kubernetes:
1. Azure Kubernetes documentation

Kubernetes Community Documentation


1. Kubernetes Bootcamp - shows you how to deploy, scale, update, and debug containerized applications.
2. Kubernetes Userguide - provides information on running programs in an existing Kubernetes cluster.
3. Kubernetes Examples - provides a number of examples on how to run real applications with Kubernetes.

Container management with Docker Swarm


11/15/2016 2 min to read Edit on GitHub

Contributors
Neil Peterson Kim Whitlatch (Beyondsoft Corporation) Tyson Nevil Dan Lepow Ross Gardler Neil Gat katiecumming

Docker Swarm provides an environment for deploying containerized workloads across a pooled set of Docker
hosts. Docker Swarm uses the native Docker API. The workflow for managing containers on a Docker Swarm is
almost identical to what it would be on a single container host. This document provides simple examples of
deploying containerized workloads in an Azure Container Service instance of Docker Swarm. For more in-depth
documentation on Docker Swarm, see Docker Swarm on Docker.com.
Prerequisites to the exercises in this document:
Create a Swarm cluster in Azure Container Service
Connect with the Swarm cluster in Azure Container Service
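For reference, the SSH tunnel typically looks like the following sketch. It assumes the default ACS admin user azureuser and the Swarm master SSH endpoint on port 2200, as in the connection article; substitute your own user name and master FQDN:

# Forward the Swarm endpoint to your local machine, then point the Docker client at the tunnel
ssh -fNL 2375:localhost:2375 -p 2200 azureuser@<master-fqdn>
export DOCKER_HOST=:2375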

Deploy a new container


To create a new container in the Docker Swarm, use the docker run command (ensuring that you have opened an
SSH tunnel to the masters as per the prerequisites above). This example creates a container from the
yeasy/simple-web image:
user@ubuntu:~$ docker run -d -p 80:80 yeasy/simple-web
4298d397b9ab6f37e2d1978ef3c8c1537c938e98a8bf096ff00def2eab04bf72

After the container has been created, use docker ps to return information about the container. Notice here that
the Swarm agent that is hosting the container is listed:
user@ubuntu:~$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                 NAMES
4298d397b9ab        yeasy/simple-web    "/bin/sh -c 'python i"   31 seconds ago      Up 9 seconds        10.0.0.5:80->80/tcp   swarm-agent-34A73819-1/happy_allen

You can now access the application that is running in this container through the public DNS name of the Swarm
agent load balancer. You can find this information in the Azure portal:

By default, the load balancer has ports 80, 8080, and 443 open. If you want to connect on another port, you will
need to open that port on the Azure load balancer for the agent pool.

Deploy multiple containers


As multiple containers are started by executing docker run multiple times, you can use the docker ps command
to see which hosts the containers are running on. In the example below, three containers are spread evenly across
the three Swarm agents:
user@ubuntu:~$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                 NAMES
11be062ff602        yeasy/simple-web    "/bin/sh -c 'python i"   11 seconds ago      Up 10 seconds       10.0.0.6:83->80/tcp   swarm-agent-34A73819-2/clever_banach
1ff421554c50        yeasy/simple-web    "/bin/sh -c 'python i"   49 seconds ago      Up 48 seconds       10.0.0.4:82->80/tcp   swarm-agent-34A73819-0/stupefied_ride
4298d397b9ab        yeasy/simple-web    "/bin/sh -c 'python i"   2 minutes ago       Up 2 minutes        10.0.0.5:80->80/tcp   swarm-agent-34A73819-1/happy_allen

Deploy containers by using Docker Compose


You can use Docker Compose to automate the deployment and configuration of multiple containers. To do so,
ensure that a Secure Shell (SSH) tunnel has been created and that the DOCKER_HOST variable has been set (see
the prerequisites above).
Create a docker-compose.yml file on your local system by using the following sample:
web:
image: adtd/web:0.1
ports:
- "80:80"
links:
- rest:rest-demo-azure.marathon.mesos
rest:
image: adtd/rest:0.1
ports:
- "8080:8080"

Run docker-compose up -d to start the container deployments:
user@ubuntu:~/compose$ docker-compose up -d
Pulling rest (adtd/rest:0.1)...
swarm-agent-3B7093B8-0: Pulling adtd/rest:0.1... : downloaded
swarm-agent-3B7093B8-2: Pulling adtd/rest:0.1... : downloaded
swarm-agent-3B7093B8-3: Pulling adtd/rest:0.1... : downloaded
Creating compose_rest_1
Pulling web (adtd/web:0.1)...
swarm-agent-3B7093B8-3: Pulling adtd/web:0.1... : downloaded
swarm-agent-3B7093B8-0: Pulling adtd/web:0.1... : downloaded
swarm-agent-3B7093B8-2: Pulling adtd/web:0.1... : downloaded
Creating compose_web_1

Finally, the list of running containers will be returned. This list reflects the containers that were deployed by using
Docker Compose:

user@ubuntu:~/compose$ docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS                     NAMES
caf185d221b7        adtd/web:0.1        "apache2-foreground"   2 minutes ago       Up About a minute   10.0.0.4:80->80/tcp       swarm-agent-3B7093B8-0/compose_web_1
040efc0ea937        adtd/rest:0.1       "catalina.sh run"      3 minutes ago       Up 2 minutes        10.0.0.4:8080->8080/tcp   swarm-agent-3B7093B8-0/compose_rest_1

Naturally, you can use docker-compose ps to examine only the containers defined in your docker-compose.yml file.

Next steps
Learn more about Docker Swarm
