Overview
What is the Azure Container Service?
Get Started
Deploy an ACS cluster
Deploy to ACS using the Azure CLI 2.0 Preview
Connect with an ACS cluster
Scale an ACS cluster
How To
Manage with DC/OS
Container management - DC/OS web UI
Container management - DC/OS REST API
Container management - DC/OS continuous integration
DC/OS Agent pools
Enable DC/OS public access
Load balance containers in DC/OS
App/User Specific Orchestrator in DC/OS
Monitor with OMS (DC/OS)
Monitor with Datadog (DC/OS)
Monitor with Sysdig (DC/OS)
Manage with Kubernetes
Manage with Docker Swarm
Reference
REST API
Resources
Region availability
Pricing
Service Updates
Contributors
Ross Gardler Kim Whitlatch (Beyondsoft Corporation) Tyson Nevil Neil Peterson Andy De George alexandair esell Neil Gat
Razi Rais katiecumming Justin Luk ciphertxt
Azure Container Service makes it simpler for you to create, configure, and manage a cluster of virtual machines that
are preconfigured to run containerized applications. It uses an optimized configuration of popular open-source
scheduling and orchestration tools. This enables you to use your existing skills, or draw upon a large and growing
body of community expertise, to deploy and manage container-based applications on Microsoft Azure.
Azure Container Service leverages the Docker container format to ensure that your application containers are fully
portable. It also supports your choice of Marathon and DC/OS or Docker Swarm so that you can scale these
applications to thousands of containers, or even tens of thousands.
By using Azure Container Service, you can take advantage of the enterprise-grade features of Azure, while still
maintaining application portability--including portability at the orchestration layers.
Deploying an application
Azure Container Service provides a choice of either Docker Swarm or DC/OS for orchestration. How you deploy your application depends on the orchestrator that you choose.
Using DC/OS
DC/OS is a distributed operating system based on the Apache Mesos distributed systems kernel. Apache Mesos is
housed at the Apache Software Foundation and lists some of the biggest names in IT as users and contributors.
Using Marathon
Marathon is a cluster-wide init and control system for services in cgroups--or, in the case of Azure Container
Service, Docker-formatted containers. Marathon provides a web UI from which you can deploy your applications.
You can access this at a URL that looks something like http://DNS_PREFIX.REGION.cloudapp.azure.com where
DNS_PREFIX and REGION are both defined at deployment time. Of course, you can also provide your own DNS
name. For more information on running a container using the Marathon web UI, see Container management
through the web UI.
You can also use the REST APIs for communicating with Marathon. There are a number of client libraries that are
available for each tool. They cover a variety of languages--and, of course, you can use the HTTP protocol in any
language. In addition, many popular DevOps tools provide support for Marathon. This provides maximum flexibility
for your operations team when you are working with an Azure Container Service cluster. For more information on
running a container by using the Marathon REST API, see Container management with the REST API.
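As the paragraph above notes, you can drive Marathon over plain HTTP in any language. The following Python sketch shows the general shape of such a client. It assumes an SSH tunnel is already forwarding local port 80 to the cluster; the app definition mirrors the Nginx sample used later in this document, and the helper names are our own, not part of any official library.

```python
import json
import urllib.request

# Assumes an SSH tunnel forwarding local port 80 to the DC/OS master.
MARATHON = "http://localhost/marathon/v2"

def nginx_app(instances=1):
    """Build a minimal Marathon app definition for an Nginx container."""
    return {
        "id": "nginx",
        "cpus": 0.1,
        "mem": 16.0,
        "instances": instances,
        "container": {
            "type": "DOCKER",
            "docker": {"image": "nginx", "network": "BRIDGE"},
        },
    }

def deploy(app):
    """POST an app definition to the Marathon /apps endpoint."""
    req = urllib.request.Request(
        MARATHON + "/apps",
        data=json.dumps(app).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

if __name__ == "__main__":
    # Print the definition; call deploy(nginx_app()) once the tunnel is open.
    print(json.dumps(nginx_app(), indent=2))
```

The same definition can be posted with curl or any DevOps tool; the JSON body is what matters, not the client.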
Using Docker Swarm
Supported tools for managing containers on a Swarm cluster include, but are not limited to, the following:
Dokku
Docker CLI and Docker Compose
Krane
Jenkins
Contributors
Ross Gardler Andy Pasic Kim Whitlatch (Beyondsoft Corporation) Tyson Nevil Stuart Leeks Neil Peterson 4c74356b41
katiecumming Neil Gat Cynthia Nottingham [MSFT]
Azure Container Service provides rapid deployment of popular open-source container clustering and
orchestration solutions. By using Azure Container Service, you can deploy DC/OS and Docker Swarm clusters with
Azure Resource Manager templates or the Azure portal. You deploy these clusters by using Azure Virtual Machine
Scale Sets, and the clusters take advantage of Azure networking and storage offerings. To access Azure Container
Service, you need an Azure subscription. If you don't have one, then you can sign up for a free trial.
This document walks you through deploying an Azure Container Service cluster by using the Azure portal, the
Azure command-line interface (CLI), and the Azure PowerShell module.
If you've elected to pin the deployment to the Azure portal, you can see the deployment status.
When the deployment has completed, the Azure Container Service cluster is ready for use.
Azure Container Service templates are available for each orchestrator. These templates are the same, with the exception of the default orchestrator selection:
DC/OS template
Swarm template
Next, make sure that the Azure CLI has been connected to an Azure subscription. You can do this by using the
following command:
azure account show
If an Azure account is not returned, use the following command to sign the CLI in to Azure.
azure login -u user@domain.com
Next, configure the Azure CLI tools to use Azure Resource Manager.
azure config mode arm
Create an Azure resource group and Container Service cluster with the following command, where:
RESOURCE_GROUP is the name of the resource group that you want to use for this service.
LOCATION is the Azure region where the resource group and Azure Container Service deployment will be
created.
TEMPLATE_URI is the location of the deployment file. Note that this must be the Raw file, not a pointer to the
GitHub UI. To find this URL, select the azuredeploy.json file in GitHub, and click the Raw button.
NOTE
When you run this command, the shell prompts you for deployment parameter values. Alternatively, you can supply a parameters file, such as azuredeploy.parameters.json, by using the -e switch.
DC/OS template
Swarm template
Before creating a cluster in your Azure subscription, verify that your PowerShell session has been signed in to Azure. You can do this with the Get-AzureRmSubscription command:
Get-AzureRmSubscription
If you need to sign in, use the Login-AzureRmAccount command:
Login-AzureRmAccount
If you're deploying to a new resource group, you must first create the resource group. To create a new resource
group, use the New-AzureRmResourceGroup command, and specify a resource group name and destination region:
New-AzureRmResourceGroup -Name GROUP_NAME -Location REGION
After you create a resource group, you can create your cluster with the following command. The URI of the
desired template will be specified for the -TemplateUri parameter. When you run this command, PowerShell will
prompt you for deployment parameter values.
New-AzureRmResourceGroupDeployment -Name DEPLOYMENT_NAME -ResourceGroupName RESOURCE_GROUP_NAME -TemplateUri TEMPLATE_URI
Next steps
Now that you have a functioning cluster, see these documents for connection and management details:
Connect to an Azure Container Service cluster
Work with Azure Container Service and DC/OS
Work with Azure Container Service and Docker Swarm
Contributors
Saurya Das Ralph Squillace
To authenticate, open the login URL shown in the CLI output in a browser and enter the device code that the CLI provides.
The name of the container service, the resource group created in the previous step, and a unique DNS name are mandatory. Other inputs are set to default values unless overridden with their respective switches.
The following commands quickly create an ACS cluster using default values. If you do not have an SSH key, use the second command; its --generate-ssh-keys switch creates one for you.
Ensure that the DNS prefix (-d switch) is unique. If you get an error, try again with a unique string. After you type the preceding command, wait about 10 minutes for the cluster to be created.
Note that this delete command does not delete all of the resources (network and storage) created along with the container service. To ensure that all related resources are deleted and you are not charged for them, create a single ACS cluster per resource group, and then delete the resource group itself when the cluster is no longer required.
Contributors
Ross Gardler Kim Whitlatch (Beyondsoft Corporation) Tyson Nevil Neil Peterson katiecumming Neil Gat Javier Moreno
Mark Anderson
The DC/OS and Docker Swarm clusters that are deployed by Azure Container Service expose REST endpoints.
However, these endpoints are not open to the outside world. In order to manage these endpoints, you must create
a Secure Shell (SSH) tunnel. After an SSH tunnel has been established, you can run commands against the cluster
endpoints and view the cluster UI through a browser on your own system. This document walks you through
creating an SSH tunnel from Linux, OS X, and Windows.
NOTE
You can create an SSH session with a cluster management system. However, we don't recommend this. Working directly on a management system exposes the cluster to the risk of inadvertent configuration changes.
DC/OS tunnel
To open a tunnel to the DC/OS-related endpoints, execute a command that is similar to the following:
sudo ssh -L 80:localhost:80 -f -N azureuser@acsexamplemgmt.japaneast.cloudapp.azure.com -p 2200
Swarm tunnel
To open a tunnel to the Swarm endpoint, execute a command that looks similar to the following:
ssh -L 2375:localhost:2375 -f -N azureuser@acsexamplemgmt.japaneast.cloudapp.azure.com -p 2200
Now you can set your DOCKER_HOST environment variable as follows. You can continue to use your Docker
command-line interface (CLI) as normal.
export DOCKER_HOST=:2375
Select SSH and then Authentication. Add your private key file for authentication.
When you're finished, save the connection configuration, and connect the PuTTY session. When you connect, you
can see the port configuration in the PuTTY event log.
When you've configured the tunnel for DC/OS, you can access the related endpoint at:
DC/OS: http://localhost/
Marathon: http://localhost/marathon
Mesos: http://localhost/mesos
When you've configured the tunnel for Docker Swarm, you can access the Swarm cluster through the Docker CLI.
You will first need to configure a Windows environment variable named DOCKER_HOST with a value of :2375 .
Next steps
Deploy and manage containers with DC/OS or Swarm:
Work with Azure Container Service and DC/OS
Work with the Azure Container Service and Docker Swarm
Contributors
Andy De George Kim Whitlatch (Beyondsoft Corporation) Tyson Nevil Dan Lepow
You can scale the number of nodes in your Azure Container Service (ACS) cluster by using the Azure CLI tool. When you use the Azure CLI to scale, the tool returns a new configuration representing the change applied to the container service.
You can scale the container service by calling azure acs scale and supplying the resource group, ACS name, and new agent count. When you scale a container service, the Azure CLI returns a JSON string representing the new configuration of the container service, including the new agent count.
Next steps
Deploy a cluster
Contributors
Neil Peterson Kim Whitlatch (Beyondsoft Corporation) Tyson Nevil Glenn Gailey Dan Lepow Andy De George katiecumming
Neil Gat Ross Gardler
DC/OS provides an environment for deploying and scaling clustered workloads, while abstracting the underlying
hardware. On top of DC/OS, there is a framework that manages scheduling and executing compute workloads.
While frameworks are available for many popular workloads, this document will describe how you can create and
scale container deployments with Marathon. Before working through these examples, you will need a DC/OS
cluster that is configured in Azure Container Service. You also need to have remote connectivity to this cluster. For
more information on these items, see the following articles:
Deploy an Azure Container Service cluster
Connect to an Azure Container Service cluster
In the New Application wizard, use the following values:
ID: nginx
Image: nginx
Network: Bridged
Host Port: 80
Protocol: TCP
If you want to statically map the container port to a port on the agent, you need to use JSON Mode. To do so, switch the New Application wizard to JSON Mode by using the toggle. Then enter the following under the portMappings section of the application definition. This example binds port 80 of the container to port 80 of the DC/OS agent. You can switch this wizard out of JSON Mode after you make this change.
"hostPort": 80,
The DC/OS cluster is deployed with a set of private and public agents. For applications on the cluster to be accessible from the Internet, you need to deploy them to a public agent. To do so, select the Optional tab of the New Application wizard and enter slave_public for Accepted Resource Roles.
Back on the Marathon main page, you can see the deployment status for the container.
When you switch back to the DC/OS web UI (http://localhost/), you will see that a task (in this case, a Docker-formatted container) is running on the DC/OS cluster.
You can also see the cluster node that the task is running on.
After the scale operation finishes, you will see multiple instances of the same task spread across DC/OS agents.
Next steps
Work with DC/OS and the Marathon API
Deep dive on the Azure Container Service with Mesos
Contributors
Neil Peterson Andy Pasic Kim Whitlatch (Beyondsoft Corporation) Tyson Nevil Dan Lepow katiecumming Richard Watson
Neil Gat Ross Gardler
DC/OS provides an environment for deploying and scaling clustered workloads, while abstracting the underlying
hardware. On top of DC/OS, there is a framework that manages scheduling and executing compute workloads.
Although frameworks are available for many popular workloads, this document describes how you can create and
scale container deployments by using Marathon. Before working through these examples, you need a DC/OS
cluster that is configured in Azure Container Service. You also need to have remote connectivity to this cluster. For
more information on these items, see the following articles:
Deploying an Azure Container Service cluster
Connecting to an Azure Container Service cluster
After you are connected to the Azure Container Service cluster, you can access the DC/OS and related REST APIs
through http://localhost:local-port. The examples in this document assume that you are tunneling on port 80. For
example, the Marathon endpoint can be reached at http://localhost/marathon/v2/ . For more information on the
various APIs, see the Mesosphere documentation for the Marathon API and the Chronos API, and the Apache
documentation for the Mesos Scheduler API.
Now, use the Marathon /apps endpoint to check for current application deployments to the DC/OS cluster. If this
is a new cluster, you will see an empty array for apps.
curl localhost/marathon/v2/apps
{"apps":[]}
The following JSON file describes an Nginx application deployment that targets a public agent (slave_public):
{
"id": "nginx",
"cpus": 0.1,
"mem": 16.0,
"instances": 1,
"acceptedResourceRoles": [
"slave_public"
],
"container": {
"type": "DOCKER",
"docker": {
"image": "nginx",
"network": "BRIDGE",
"portMappings": [
{ "containerPort": 80, "hostPort": 80, "servicePort": 9000, "protocol": "tcp" }
]
}
}
}
In order to deploy a Docker-formatted container, create your own JSON file, or use the sample provided at Azure
Container Service demo. Store it in an accessible location. Next, to deploy the container, run the following
command. Specify the name of the JSON file.
curl -X POST http://localhost/marathon/v2/apps -d @marathon.json -H "Content-type: application/json"
Now, if you query Marathon for applications, this new application will show in the output.
curl localhost/marathon/v2/apps
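The scale-out step itself is an HTTP PUT of a new instances value to the app's endpoint. Continuing in the same any-language spirit, here is a hedged Python sketch; the helper name is our own, and the endpoint path follows the /v2/apps examples above.

```python
import json
import urllib.request

def scale_request(app_id, instances):
    """Build a PUT request that scales a Marathon app to the given instance count."""
    return urllib.request.Request(
        "http://localhost/marathon/v2/apps/" + app_id,
        data=json.dumps({"instances": instances}).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

# Scale the nginx app from the earlier example to three instances.
req = scale_request("nginx", 3)
# urllib.request.urlopen(req)  # uncomment once an SSH tunnel to the cluster is open
```

An equivalent curl call would PUT the same {"instances": 3} body to the app's /v2/apps path.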
Finally, query the Marathon endpoint for applications. You will see that there are now three of the Nginx
containers.
curl localhost/marathon/v2/apps
Marathon REST API interaction with PowerShell
You can perform these same actions by using PowerShell commands on a Windows system.
To gather information about the DC/OS cluster, such as agent names and agent status, run the following
command.
Invoke-WebRequest -Uri http://localhost/mesos/master/slaves
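The slaves endpoint returns JSON that you can post-process in any language. As an illustration (our own helper, not part of the original walkthrough), the following Python snippet pulls active agent hostnames out of a response shaped like the standard Mesos /master/slaves payload; the response shape shown is an assumption to verify against your cluster's actual output.

```python
def active_agents(slaves_response):
    """Return hostnames of active agents from a /mesos/master/slaves response."""
    return [
        s["hostname"]
        for s in slaves_response.get("slaves", [])
        if s.get("active")
    ]

# Canned response of the kind the endpoint returns (hostnames are placeholders):
sample = {
    "slaves": [
        {"hostname": "10.0.0.4", "active": True},
        {"hostname": "10.0.0.5", "active": False},
    ]
}
print(active_agents(sample))  # ['10.0.0.4']
```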
You deploy Docker-formatted containers through Marathon by using a JSON file that describes the intended
deployment. The following sample will deploy the Nginx container, binding port 80 of the DC/OS agent to port 80
of the container.
{
"id": "nginx",
"cpus": 0.1,
"mem": 16.0,
"instances": 1,
"container": {
"type": "DOCKER",
"docker": {
"image": "nginx",
"network": "BRIDGE",
"portMappings": [
{ "containerPort": 80, "hostPort": 80, "servicePort": 9000, "protocol": "tcp" }
]
}
}
}
Create your own JSON file, or use the sample provided at Azure Container Service demo. Store it in an accessible
location. Next, to deploy the container, run the following command. Specify the name of the JSON file.
Invoke-WebRequest -Method Post -Uri http://localhost/marathon/v2/apps -ContentType application/json -InFile 'c:\marathon.json'
You can also use the Marathon API to scale out or scale in application deployments. In the previous example, you
deployed one instance of an application. Let's scale this out to three instances of an application. To do so, create a
JSON file by using the following JSON text, and store it in an accessible location.
{ "instances": 3 }
Next steps
Read more about the Mesos HTTP endpoints.
Read more about the Marathon REST API.
Contributors
Shayne Boyer katiecumming Iain Foulds Ralph Squillace
In this tutorial, we cover how to fully automate building and deploying a multi-container Docker app to an Azure
Container Service cluster running DC/OS. While the benefits of continuous integration and deployment (CI/CD) are
known, there are new considerations when integrating containers into your workflow. Using the new Azure
Container Registry and CLI commands, we set up an end-to-end flow, which you can customize.
Get started
You can run this walkthrough on OS X, Windows, or Linux.
You need an Azure subscription. If you don't have one, you can sign up for an account.
Install the Azure Command-line tools.
IMPORTANT
To create a secure cluster, you pass your SSH public key file when you call az acs create . Either you can have the Azure CLI 2.0 generate the keys for you and pass them at the same time by using the --generate-ssh-keys option, or you can pass the path to your keys by using the --ssh-key-value option (the default location is ~/.ssh/id_rsa.pub on Linux and %HOMEPATH%\.ssh\id_rsa.pub on Windows, but this can be changed). To create SSH public and private key files on Linux, see Create SSH keys on Linux and Mac. To create SSH public and private key files on Windows, see Create SSH keys on Windows.
1. First, type the az login command in a terminal window to log in to your Azure subscription with the Azure
CLI:
az login
2. Create a resource group in which we place our cluster using az resource group create:
az resource group create --name myacs-rg --location westus
You may want to specify the Azure datacenter region closest to you.
3. Create an ACS cluster with default settings by using az acs create, passing the path to your public SSH key file:
az acs create \
--resource-group myacs-rg \
--name myacs \
--dns-prefix myacs \
--ssh-key-value ~/.ssh/id_rsa.pub
This step takes several minutes, so feel free to read on. The acs create command returns information about the
newly created cluster (or you can list the ACS clusters in your subscription with az acs list ). For more ACS
configuration options, read more about creating and configuring an ACS cluster.
In this tutorial, we fetch a pre-made sample app made up of service-a , service-b , and a redis:alpine cache . The sample contains code where service-a calls both service-b and cache (see /service-a/server.js ). Notice that the service-a code references service-b and cache by how they are named in docker-compose.yml . If we run these services on our local machine via docker-compose , Docker ensures the services are all networked appropriately to find each other by name. Running the services in a cluster environment with load-balanced networking is typically much more complex than running locally. The good news is that the Azure CLI commands set up a CI/CD flow that ensures this straightforward service discovery code continues to run as-is in ACS.
To create the build and release pipeline, run the az container release create command, identifying the target ACS cluster with the --target-name and --target-resource-group parameters.
On first run, this command may take a minute or so to complete. Once completed, important information is returned regarding the build and release pipeline it created:
sourceRepo : a webhook is configured for the source repository so that the build and release pipeline is automatically triggered whenever source code is pushed to it.
vstsProject : Visual Studio Team Services (VSTS) is configured to drive the workflow (the actual build and deployment tasks run within containers in ACS). If you would like to use a specific VSTS account and project, you can specify them by using the --vsts-account-name and --vsts-project-name parameters.
buildDefinition : defines the tasks that run for each build. Container images are produced for each service
defined in the docker-compose.yml, and then pushed to a Docker container registry.
containerRegistry : The Azure Container Registry is a managed service that runs a Docker container registry. A
new Azure Container Registry is created with a default name or you can alternatively specify an Azure Container
Registry name via the --registry-name parameter.
releaseDefinition : defines the tasks that are run for each deployment. Container images for the services defined in docker-compose.yml are pulled from the container registry and deployed to the ACS cluster. By default, three environments are created: Dev, Test, and Production. The release definition is configured by default to automatically deploy to Dev each time a build completes successfully. A release can be promoted to Test or Production manually without requiring a rebuild. The default flow can be customized in VSTS.
containerService : the target ACS cluster (must be running DC/OS 1.8).
The following snippet is an example of the command you would type if you already have an existing Azure Container Registry named myregistry , a VSTS account at myvstsaccount.visualstudio.com , and an existing VSTS project myvstsproject :
az container release create \
--target-name myacs \
--target-resource-group myacs-rg \
--registry-name myregistry \
--vsts-account-name myvstsaccount \
--vsts-project-name myvstsproject \
--remote-access-token <GitHubPersonalAccessToken>
NOTE
Windows Only: You need to set up Pageant to complete this section.
Launch PuttyGen and load the private SSH key used to create the ACS cluster (%homepath%\id_rsa).
Save the private SSH key as id_rsa.ppk in the same folder.
Launch Pageant - it will start running and display an icon in your bottom-right system tray.
Right-click the system tray icon and select Add Key.
Add the id_rsa.ppk file.
1. Open the ACS cluster's DC/OS dashboard using the Azure CLI convenience command:
az acs dcos browse -g myacs-rg -n myacs
where -g specifies the resource group and -n specifies the cluster name. You may be prompted for your local account password, since this command requires administrator privilege. The command creates an SSH tunnel to a DC/OS endpoint, opens your default browser to that endpoint, and temporarily configures the browser's web proxy.
TIP
If you need to look up the name of your ACS cluster, you can list all ACS clusters in your subscription by
running az acs list .
2. In the DC/OS dashboard, click Services on the left navigation menu (http://localhost/#/services). Services deployed via our pipeline are grouped under a root folder named dev (named after the environment in the VSTS release definition).
Click a task to open its view, then click one of its available endpoints.
Our simple web app calls service-a , which calls service-b , and returns a hello world message. A counter is
incremented on Redis each time a request is made.
For example:
export http_proxy=http://127.0.0.1:55405
3. Now you can curl against your service endpoint, curl http://service-url , where service-url is the
address you see when you navigate to your service endpoint from Marathon UI. To unset the http_proxy
variable from your command line, type unset http_proxy .
Scale services
While we're in the DC/OS dashboard, let's scale our services.
1. Navigate to the application in the dev subfolder.
2. Hover over service-b and use the scale control to increase its number of instances.
3. Navigate back to the running web app, and repeatedly click the Say It Again button. Notice that service-b invocations begin to round-robin across a collection of hostnames, while the single instance of service-a continues to report the same host.
To promote the release to the Test environment, open the release in VSTS and click Deploy to start a new deployment, reusing the same images that were previously deployed to Dev . Click Logs if you want to follow the deployment in more detail.
Once deployment to Test has succeeded, a new root folder in the Marathon UI named test contains the running services for that environment.
2. Change the heading in service-a/public/index.html to something like:
<h2>Server Says Hello</h2>
3. Save the file, then commit and push the code change to your source repository.
git commit -am 'updated title'
git push
The commit automatically kicks off a new build and a new release that is deployed to Dev . Services in downstream environments (Test or Production) remain unchanged until we decide to promote a specific release to that environment.
If you open the build definition in VSTS, you'll see something like this:
For the label value, you can either specify the URL of your ACS agent's fully qualified domain name
(FQDN), or a custom domain (for example, app.contoso.com). To find your ACS agent's FQDN, run the
command az acs list , and check the property for agentPoolProfiles.fqdn . For example,
myacsagents.westus.cloudapp.azure.com .
By following the file name convention docker-compose.env.environment-name.yml, these settings only affect the named environment (in this case, the environment named Production). If you inspect the release definition in VSTS, you'll see that each environment's deployment task is set up to read from a docker-compose file named after this convention.
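As an illustration of that convention, a production-only override might look like the following sketch. The file name, the service-a service, and the vhost label come from this tutorial; the version key and exact structure are assumptions to check against your generated docker-compose.yml.

```yaml
# docker-compose.env.production.yml (hypothetical Production-only override)
version: "2"
services:
  service-a:
    labels:
      com.microsoft.acs.dcos.marathon.vhost: "app.contoso.com"
```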
2. Commit and push the file to your master source repository to start another build.
git add .
git commit -am "expose public port for service-a"
git push
3. Wait until the update has been built and deployed to Dev , then promote it to Test, and then promote it to
Production. (For the purposes of this tutorial, you can deploy directly to Production but it is good to get in
the practice of only deploying to the next downstream environment.)
4. (Optional) If you specified a custom domain for vhost (for example, app.contoso.com), add a DNS record in your domain provider's settings. Log in to your domain provider's administrative UI and add a DNS record as follows:
Type: CNAME
Host: Your custom domain, for example, app.contoso.com
Answer: ACS agent FQDN, for example, myacsagents.westus.cloudapp.azure.com
TTL (Optional): Sometimes, your domain provider gives you the ability to edit the TTL. A lower value causes an updated DNS record to propagate more quickly.
5. Once the release has been deployed to Production, that version is accessible to anyone. Open your browser to the URL you specified for the com.microsoft.acs.dcos.marathon.vhost label. (Note: Releases to pre-production environments continue to be private.)
Summary
Congratulations! You learned how to create an ACS cluster with DC/OS, and set up a fully automated and
containerized build and deployment pipeline for a multi-container app.
Some next steps to explore:
Scale VSTS agents. If you need more throughput for running build and release tasks, you can increase the number of VSTS agent instances. Navigate to Services in the DC/OS Dashboard, open the vsts-agents folder, and experiment with scaling the number of VSTS agent instances.
Integrate unit tests. This GitHub repository shows how to make unit tests and integration tests run in
containers and include them in the build tasks: https://github.com/mindaro/sample-app.
Hint: look at these files in the repository: service-a/unit-tests.js , service-a/service-tests.js ,
docker-compose.ci.unit-tests.yml , and docker-compose.ci.service-tests.yml .
Clean up
To limit your compute charges related to this tutorial, run the following command and take note of the deployment
pipeline resources that are related to an ACS cluster:
az container release list --resource-name myacs --resource-group myacs-rg
Contributors
Andy De George Kim Whitlatch (Beyondsoft Corporation) Tyson Nevil Dan Lepow
Azure Container Service with DC/OS divides agents into public and private pools. A deployment can be made to either pool, affecting accessibility between machines in your container service. The machines can be exposed to the internet (public) or kept internal (private). This article gives a brief overview of why there are public and private pools.
Private agents
Private agent nodes run through a non-routable network. This network is only accessible from the admin zone or
through the public zone edge router. By default, DC/OS launches apps on private agent nodes. Consult the DC/OS
documentation for more information about network security.
Public agents
Public agent nodes run DC/OS apps and services through a publicly accessible network. Consult the DC/OS
documentation for more information about network security.
Next steps
Read more information about managing your DC/OS containers.
Learn how to open the firewall provided by Azure to allow public access to your DC/OS container.
Contributors
Andy De George Kim Whitlatch (Beyondsoft Corporation) Tyson Nevil Dan Lepow
Any DC/OS container in the ACS public agent pool is automatically exposed to the internet. By default, ports 80 , 443 , and 8080 are opened, and any (public) container listening on those ports is accessible. This article shows you how to open more ports for your applications in Azure Container Service.
Fill in the following fields for the health probe:
Name
Port
Path
Interval
Unhealthy threshold
6. Back at the properties of the agent load balancer, click Load balancing rules and then Add.
Fill in the following fields for the load balancing rule:
Name
Port
Backend port
Backend pool: The containers in this pool will be the target for this load balancer.
Probe
Session persistence
Idle timeout
Idle timeout
5. Fill out the firewall rule to allow your public port, and click OK.
Fill in the following fields for the firewall rule:
Name
Priority: Priority rank for the rule. The lower the number, the higher the priority.
Source
Service
Protocol
Port range
Action
Next steps
Learn about the difference between public and private DC/OS agents.
Read more information about managing your DC/OS containers.
Contributors
Ross Gardler Kim Whitlatch (Beyondsoft Corporation) Tyson Nevil William Buchwalter katiecumming Neil Gat
In this article, we'll explore how to create an internal load balancer in a DC/OS-managed Azure Container Service by using Marathon-LB. This enables you to scale your applications horizontally. It also enables you to take advantage of the public and private agent clusters by placing your load balancers on the public cluster and your application containers on the private cluster.
Prerequisites
Deploy an instance of Azure Container Service with orchestrator type DC/OS and ensure that your client can
connect to your cluster.
Load balancing
There are two load-balancing layers in the Container Service cluster we will build:
1. The Azure load balancer provides public entry points (the ones that end users hit). This is provided automatically by Azure Container Service and is, by default, configured to expose ports 80, 443, and 8080.
2. The Marathon Load Balancer (marathon-lb) routes inbound requests to container instances that service those
requests. As we scale the containers providing our web service, marathon-lb dynamically adapts. This load
balancer is not provided by default in your Container Service, but it is very easy to install.
To install the load balancer, run dcos package install marathon-lb . This command automatically installs the load balancer on the public agents cluster.
Set the HAPROXY_0_VHOST label to the FQDN of the load balancer for your agents. This is in the form
<acsName>agents.<region>.cloudapp.azure.com . For example, if you create a Container Service cluster with name
myacs in region West US , the FQDN would be myacsagents.westus.cloudapp.azure.com . You can also find this by
looking for the load balancer with "agent" in the name when you're looking through the resources in the
resource group that you created for Container Service in the Azure portal.
Set the servicePort to a port >= 10,000. This identifies the service that is being run in this container; marathon-lb uses this to identify services that it should balance across.
Set the HAPROXY_GROUP label to "external".
Set hostPort to 0. This means that Marathon will arbitrarily allocate an available port.
Set instances to the number of instances you want to create. You can always scale these up and down later.
It is worth noting that, by default, Marathon deploys to the private cluster. This means that the above deployment
will be accessible only via your load balancer, which is usually the behavior we want.
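Putting these settings together: the sketch below constructs the agents FQDN from the example values in the text and writes a hypothetical hello-web.json Marathon app definition. The image name, resource sizes, and servicePort value are illustrative assumptions, not values from this article.

```shell
# Construct the agents FQDN from the example cluster name and region.
acsName=myacs
region=westus
fqdn="${acsName}agents.${region}.cloudapp.azure.com"
echo "$fqdn"   # myacsagents.westus.cloudapp.azure.com

# Write a minimal Marathon app definition using the settings above.
# The image, cpus/mem, and servicePort are assumptions for illustration.
cat > hello-web.json <<EOF
{
  "id": "hello-web",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "yeasy/simple-web",
      "network": "BRIDGE",
      "portMappings": [
        { "hostPort": 0, "containerPort": 80, "servicePort": 10000 }
      ]
    }
  },
  "instances": 3,
  "cpus": 0.1,
  "mem": 64,
  "labels": {
    "HAPROXY_GROUP": "external",
    "HAPROXY_0_VHOST": "${fqdn}"
  }
}
EOF
```

You would deploy a file like this with dcos marathon app add hello-web.json; because Marathon places apps on the private cluster by default, the app is then reachable only through marathon-lb.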
Additional scenarios
You could have a scenario where you use different domains to expose different services. For example:
mydomain1.com -> Azure LB:80 -> marathon-lb:10001 -> mycontainer1:33292
mydomain2.com -> Azure LB:80 -> marathon-lb:10002 -> mycontainer2:22321
To achieve this, check out virtual hosts, which provide a way to associate domains to specific marathon-lb paths.
Alternatively, you could expose different ports and remap them to the correct service behind marathon-lb. For
example:
Azure LB:80 -> marathon-lb:10001 -> mycontainer:233423
Azure LB:8080 -> marathon-lb:10002 -> mycontainer2:33432
Next steps
See the DC/OS documentation for more on marathon-lb.
Contributors
Ross Gardler Kim Whitlatch (Beyondsoft Corporation) Tyson Nevil katiecumming Neil Gat
Azure Container Service provides a set of master servers on which we preconfigure Apache Mesos and Marathon.
These can be used to orchestrate your applications on the cluster, but it's best not to use the master servers for this
purpose. For example, tweaking the configuration of Marathon requires logging into the master servers themselves
and making changes--this encourages unique master servers that are a little different from the standard and need
to be cared for and managed independently. Additionally, the configuration required by one team might not be the
optimal configuration for another team.
In this article, we'll explain how to add an application or user-specific Marathon service.
Because this service will belong to a single user or team, they are free to configure it in any way that they desire.
Also, Azure Container Service will ensure that the service continues to run. If the service fails, Azure Container
Service will restart it for you. Most of the time you won't even notice it had downtime.
Prerequisites
Deploy an instance of Azure Container Service with orchestrator type DC/OS and ensure that your client can
connect to your cluster. Also, do the following steps.
NOTE
This is for working with DC/OS-based ACS clusters. There is no need to do this for Swarm-based ACS clusters.
First, connect to your DC/OS-based ACS cluster. Once you have done this, you can install the DC/OS CLI on your
client machine with the commands below:
sudo pip install virtualenv
mkdir dcos && cd dcos
wget https://raw.githubusercontent.com/mesosphere/dcos-cli/master/bin/install/install-optout-dcos-cli.sh
chmod +x install-optout-dcos-cli.sh
./install-optout-dcos-cli.sh . http://localhost --add-path yes
If you are using an old version of Python, you may notice some "InsecurePlatformWarnings". You can safely ignore
these.
In order to get started without restarting your shell, run:
source ~/.bashrc
This step will not be necessary when you start new shells.
Now you can confirm that the CLI is installed:
dcos --help
Next, use the DC/OS CLI to install the Marathon instance with the options that are set in your configuration file:
dcos package install --options=marathon-alice.json marathon
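The marathon-alice.json options file referenced above is not shown in this article. A minimal sketch follows; the property names are an assumption based on the DC/OS Marathon package schema, in which the framework name determines the /service/marathon-alice/ path:

```shell
# Write an options file naming the new Marathon instance "marathon-alice".
# The property names are assumed from the DC/OS Marathon package schema.
cat > marathon-alice.json <<'EOF'
{
  "marathon": {
    "framework-name": "marathon-alice"
  }
}
EOF
```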
After installation, you can see the marathon-alice service running in the Services tab of your DC/OS UI. It is available at
http://<hostname>/service/marathon-alice/ if you want to access it directly.
To point the DC/OS CLI at the marathon-alice instance instead of the master Marathon, set the marathon.url
property to http://<hostname>/service/marathon-alice/ :
dcos config set marathon.url http://<hostname>/service/marathon-alice/
You can verify which instance of Marathon that your CLI is working against with the dcos config show command.
You can revert to using your master Marathon service with the command dcos config unset marathon.url .
Contributors
Keiko Harada Ralph Squillace
Microsoft Operations Management Suite (OMS) is Microsoft's cloud-based IT management solution that helps you
manage and protect your on-premises and cloud infrastructure. Container Solution is a solution in OMS Log
Analytics that helps you view the container inventory, performance, and logs in a single location. You can audit and
troubleshoot containers by viewing their logs in a centralized location, and find noisy containers that consume
excess resources on a host.
For more information about Container Solution, please refer to the Container Solution Log Analytics.
Prerequisites
Microsoft Azure Subscription - You can get this for free.
Microsoft OMS Workspace Setup - see "Step 3" below
DC/OS CLI installed.
1. In the DC/OS dashboard, click on Universe and search for OMS as shown below.
1. Click Install . You will see a pop-up with the OMS version information and an Install Package or Advanced
Installation button. Clicking Advanced Installation leads you to the OMS-specific
configuration properties page.
1. Here, you will be asked to enter the wsid (the OMS workspace ID) and wskey (the OMS primary key for the
workspace). To get both the wsid and wskey, you need to create an OMS account at
https://mms.microsoft.com. Follow the steps to create an account. Once you are done creating the
account, obtain your wsid and wskey by clicking Settings , then Connected Sources , and
then Linux Servers , as shown below.
2. Select the number of OMS instances that you want and click the Review and Install button. Typically, you
will want the number of OMS instances to equal the number of VMs in your agent cluster. The
OMS Agent for Linux installs as an individual container on each VM from which it collects information.
OMS Portal
Log in to the OMS portal (https://mms.microsoft.com) and go to the Solution Gallery .
Once you've selected the Container Solution, you will see its tile on the OMS Overview Dashboard page. Once the
ingested container data is indexed, the tile is populated with information in the solution view tiles.
Azure Portal
Log in to the Azure portal at https://portal.microsoft.com/. Go to Marketplace , select Monitoring + management ,
and click See All . Then type containers in the search box. You will see "containers" in the search results. Select
Containers and click Create .
Once you click Create , it will ask you for your workspace. Select your workspace or if you do not have one, create a
new workspace.
For more information about the OMS Container Solution, please refer to the Container Solution Log Analytics.
This will deploy the agent to any nodes that do not yet have the OMS agent.
Uninstall MS OMS
To uninstall MS OMS enter the following command:
$ dcos package uninstall msoms
Let us know!
What works? What is missing? What else do you need for this to be useful for you? Let us know at OMSContainers.
Next steps
Now that you have set up OMS to monitor your containers, see your container dashboard.
Contributors
rbitia Kim Whitlatch (Beyondsoft Corporation) Tyson Nevil
In this article we will deploy Datadog agents to all the agent nodes in your Azure Container Service cluster. You will
need an account with Datadog for this configuration.
Prerequisites
Deploy and connect a cluster configured by Azure Container Service. Explore the Marathon UI. Go to
http://datadoghq.com to set up a Datadog account.
Datadog
Datadog is a monitoring service that gathers monitoring data from your containers within your Azure Container
Service cluster. Datadog has a Docker Integration Dashboard where you can see specific metrics within your
containers. Metrics gathered from your containers are organized by CPU, Memory, Network and I/O. Datadog splits
metrics into containers and images. An example of what the UI looks like for CPU usage is below.
To complete the configuration, you will need a Datadog account or a free trial account. Once you're logged in
to the Datadog website, go to Integrations on the left, and then APIs .
Next enter your API key into the Datadog configuration within the DC/OS Universe.
In the above configuration, instances is set to 10000000 so that whenever a new node is added to the cluster, Datadog
automatically deploys an agent to that node. This is an interim solution. Once you've installed the package,
navigate back to the Datadog website and find "Dashboards." From there you will see Custom and
Integration Dashboards. The Docker Integration Dashboard has all the container metrics you need for
monitoring your cluster.
Contributors
rbitia Andy Pasic Kim Whitlatch (Beyondsoft Corporation) Tyson Nevil
In this article, we will deploy Sysdig agents to all the agent nodes in your Azure Container Service cluster. You need
an account with Sysdig for this configuration.
Prerequisites
Deploy and connect a cluster configured by Azure Container Service. Explore the Marathon UI. Go to
http://app.sysdigcloud.com to set up a Sysdig cloud account.
Sysdig
Sysdig is a monitoring service that allows you to monitor your containers within your cluster. Sysdig is known to
help with troubleshooting but it also has your basic monitoring metrics for CPU, Networking, Memory, and I/O.
Sysdig makes it easy to see which containers are working the hardest or essentially using the most memory and
CPU. This view is in the Overview section, which is currently in beta.
To complete the configuration, you need a Sysdig cloud account or a free trial account. Once you're logged in
to the Sysdig cloud website, click your user name; on that page you should see your "Access Key."
Next enter your Access Key into the Sysdig configuration within the DC/OS Universe.
Now set instances to 10000000 so that whenever a new node is added to the cluster, Sysdig automatically
deploys an agent to that new node. This is an interim solution to make sure Sysdig deploys to all new agents
within the cluster.
Once you've installed the package navigate back to the Sysdig UI and you'll be able to explore the different usage
metrics for the containers within your cluster.
Contributors
anhowe Ralph Squillace Saurya Das
Deployment
Here are the steps to deploy a simple Kubernetes cluster:
1. Generate your SSH key.
2. Generate your service principal.
3. Click the Deploy to Azure button in the README and fill in the fields.
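The first two steps can be sketched in shell. The key file name here is arbitrary, and the service principal command (assumed to be the Azure CLI's az ad sp create-for-rbac) is shown only as a comment because it requires a live Azure subscription:

```shell
# Step 1: generate an SSH key pair for the cluster (no passphrase, for brevity).
ssh-keygen -t rsa -b 2048 -f ./acs_k8s_rsa -N "" -q

# The public key goes into the deployment template's SSH field.
cat ./acs_k8s_rsa.pub

# Step 2: create a service principal (requires the Azure CLI and a live
# subscription, so it is left commented out here):
# az ad sp create-for-rbac --role Contributor
```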
Walkthrough
Once your Kubernetes cluster has been created, you will have a resource group containing:
1. One master, accessible via SSH on port 22 or kubectl on port 443.
2. A set of nodes in an availability set. The nodes can be accessed through the master. See agent forwarding for
an example of how to do this.
The following image shows the architecture of a container service cluster with one master and two agents:
All VMs are in the same private VNET and are fully accessible to each other.
5. Type kubectl get pods -o yaml to see the full details of the nginx deployment, including the host IP and
the pod IP. The pod IP is assigned from the pod CIDR on the host. Run curl against the pod IP to see the nginx
output, e.g. curl 10.244.1.4
6. The next step is to expose the nginx deployment as a Kubernetes service on the private service network
10.0.0.0/16:
a. expose the service with command kubectl
b. get the service IP kubectl get service
c. run curl to the IP, eg. curl 10.0.105.199
7. The final step is to expose the service to the world. This is done by changing the service type from ClusterIP
to LoadBalancer :
a. once you see the external IP, you can browse to it in your browser:
8. The next step in this walkthrough is to show you how to remotely manage your Kubernetes cluster. First
download Kubectl to your machine and put it in your path:
Windows Kubectl
OSX Kubectl
Linux
9. The Kubernetes master contains the kube config file for remote access under the home directory at
~/.kube/config. Download this file to your machine, set the KUBECONFIG environment variable, and run
kubectl to verify that you can connect to the cluster:
On Windows, use pscp from PuTTY. Ensure you have your certificate exposed through Pageant:
# MASTERFQDN is obtained in step 1
pscp azureuser@MASTERFQDN:.kube/config .
SET KUBECONFIG=%CD%\config
kubectl get nodes
On OS X or Linux:
# MASTERFQDN is obtained in step 1
scp azureuser@MASTERFQDN:.kube/config .
export KUBECONFIG=`pwd`/config
kubectl get nodes
10. The next step is to show you how to remotely run commands in a Docker container:
a. Run kubectl get pods to show the name of your nginx pod.
b. Using your pod name, you can run a remote command on your pod, e.g.:
kubectl exec nginx-701339712-retbj date
11. The final step of this tutorial is to show you the dashboard:
a. Run kubectl proxy to directly connect to the proxy.
b. In your browser, browse to the dashboard.
c. Browse around and explore your pods and services.
Learning More
Here are recommended links to learn more about Kubernetes:
1. Azure Kubernetes documentation
Contributors
Neil Peterson Kim Whitlatch (Beyondsoft Corporation) Tyson Nevil Dan Lepow Ross Gardler Neil Gat katiecumming
Docker Swarm provides an environment for deploying containerized workloads across a pooled set of Docker
hosts. Docker Swarm uses the native Docker API. The workflow for managing containers on a Docker Swarm is
almost identical to what it would be on a single container host. This document provides simple examples of
deploying containerized workloads in an Azure Container Service instance of Docker Swarm. For more in-depth
documentation on Docker Swarm, see Docker Swarm on Docker.com.
Prerequisites to the exercises in this document:
Create a Swarm cluster in Azure Container Service
Connect with the Swarm cluster in Azure Container Service
After the container has been created, use docker ps to return information about the container. Notice here that
the Swarm agent that is hosting the container is listed:
user@ubuntu:~$ docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED          STATUS         PORTS                 NAMES
4298d397b9ab   yeasy/simple-web   "/bin/sh -c 'python i"   31 seconds ago   Up 9 seconds   10.0.0.5:80->80/tcp   swarm-agent-34A73819-1/happy_allen
You can now access the application that is running in this container through the public DNS name of the Swarm
agent load balancer. You can find this information in the Azure portal:
By default, the load balancer has ports 80, 8080, and 443 open. If you want to connect on another port, you will
need to open that port on the Azure load balancer for the agent pool.
IMAGE              COMMAND                  CREATED          STATUS          NAMES
yeasy/simple-web   "/bin/sh -c 'python i"   11 seconds ago   Up 10 seconds   swarm-agent-34A73819-2/clever_banach
yeasy/simple-web   "/bin/sh -c 'python i"   49 seconds ago   Up 48 seconds   swarm-agent-34A73819-0/stupefied_ride
yeasy/simple-web   "/bin/sh -c 'python i"   2 minutes ago    Up 2 minutes    swarm-agent-34A73819-1/happy_allen
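The compose run below assumes a docker-compose.yml along the following lines. The service names and image tags are inferred from the output that follows; the published ports are assumptions:

```shell
# Write a minimal docker-compose.yml matching the services in the output below.
# Image tags come from the compose output; the port mappings are assumptions.
cat > docker-compose.yml <<'EOF'
web:
  image: adtd/web:0.1
  ports:
    - "80:80"
rest:
  image: adtd/rest:0.1
  ports:
    - "8080:8080"
EOF
```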
Run docker-compose up -d :
user@ubuntu:~/compose$ docker-compose up -d
Pulling rest (adtd/rest:0.1)...
swarm-agent-3B7093B8-0: Pulling adtd/rest:0.1... : downloaded
swarm-agent-3B7093B8-2: Pulling adtd/rest:0.1... : downloaded
swarm-agent-3B7093B8-3: Pulling adtd/rest:0.1... : downloaded
Creating compose_rest_1
Pulling web (adtd/web:0.1)...
swarm-agent-3B7093B8-3: Pulling adtd/web:0.1... : downloaded
swarm-agent-3B7093B8-0: Pulling adtd/web:0.1... : downloaded
swarm-agent-3B7093B8-2: Pulling adtd/web:0.1... : downloaded
Creating compose_web_1
Finally, the list of running containers will be returned. This list reflects the containers that were deployed by using
Docker Compose:
user@ubuntu:~/compose$ docker ps
CONTAINER ID   IMAGE           COMMAND                CREATED         STATUS              PORTS                     NAMES
caf185d221b7   adtd/web:0.1    "apache2-foreground"   2 minutes ago   Up About a minute   10.0.0.4:80->80/tcp       swarm-agent-3B7093B8-0/compose_web_1
040efc0ea937   adtd/rest:0.1   "catalina.sh run"      3 minutes ago   Up 2 minutes        10.0.0.4:8080->8080/tcp   swarm-agent-3B7093B8-0/compose_rest_1
To check the status of these containers, you can also run docker-compose ps .
Next steps
Learn more about Docker Swarm