
O F F I C I A L M I C R O S O F T L E A R N I N G P R O D U C T

20487B
Developing Windows Azure and Web
Services
xiv Developing Windows Azure and Web Services

Contents
Module 1: Overview of Service and Cloud Technologies
Lesson 1: Key Components of Distributed Applications 1-2
Lesson 2: Data and Data Access Technologies 1-6
Lesson 3: Service Technologies 1-9
Lesson 4: Cloud Computing 1-13
Lesson 5: Exploring the Blue Yonder Airlines Travel Companion Application 1-22
Lab: Exploring the Work Environment 1-26

Module 2: Querying and Manipulating Data Using Entity Framework


Lesson 1: ADO.NET Overview 2-2
Lesson 2: Creating an Entity Data Model 2-6
Lesson 3: Querying Data 2-20
Lesson 4: Manipulating Data 2-26
Lab: Creating a Data Access Layer by Using Entity Framework 2-34

Module 3: Creating and Consuming ASP.NET Web API Services


Lesson 1: HTTP Services 3-2
Lesson 2: Creating an ASP.NET Web API Service 3-13
Lesson 3: Handling HTTP Requests and Responses 3-20
Lesson 4: Hosting and Consuming ASP.NET Web API Services 3-24
Lab: Creating the Travel Reservation ASP.NET Web API Service 3-31

Module 4: Extending and Securing ASP.NET Web API Services


Lesson 1: The ASP.NET Web API Pipeline 4-2
Lesson 2: Creating OData Services 4-13
Lesson 3: Implementing Security in ASP.NET Web API Services 4-18
Lesson 4: Injecting Dependencies into Controllers 4-32
Lab: Extending Travel Companion's ASP.NET Web API Services 4-34

Module 5: Creating WCF Services


Lesson 1: Advantages of Creating Services with WCF 5-3
Lesson 2: Creating and Implementing a Contract 5-6
Lesson 3: Configuring and Hosting WCF Services 5-14
Lesson 4: Consuming WCF Services 5-29
Lab: Creating and Consuming the WCF Booking Service 5-35

Module 6: Hosting Services


Lesson 1: Hosting Services On-Premises 6-3
Lesson 2: Hosting Services in Windows Azure 6-13
Lab: Hosting Services 6-22

Module 7: Windows Azure Service Bus


Lesson 1: What Are Windows Azure Service Bus Relays? 7-2
Lesson 2: Windows Azure Service Bus Queues 7-12
Lesson 3: Windows Azure Service Bus Topics 7-23
Lab: Windows Azure Service Bus 7-32

Module 8: Deploying Services

Lesson 1: Web Deployment with Visual Studio 2012 8-3


Lesson 2: Creating and Deploying Web Application Packages 8-7
Lesson 3: Command-Line Tools for Web Deploy 8-11
Lesson 4: Deploying Web and Service Applications to Windows Azure 8-16
Lesson 5: Continuous Delivery with TFS and Git 8-21
Lesson 6: Best Practices for Production Deployment 8-25
Lab: Deploying Services 8-33

Module 9: Windows Azure Storage


Lesson 1: Introduction to Windows Azure Storage 9-3
Lesson 2: Windows Azure Blob Storage 9-8
Lesson 3: Windows Azure Table Storage 9-19
Lesson 4: Windows Azure Queue Storage 9-25
Lesson 5: Restricting Access to Windows Azure Storage 9-32
Lab: Windows Azure Storage 9-37

Module 10: Monitoring and Diagnostics


Lesson 1: Performing Diagnostics Using Tracing 10-2
Lesson 2: Configuring Service Diagnostics 10-7
Lesson 3: Monitoring Services Using Windows Azure Diagnostics 10-18
Lesson 4: Collecting Windows Azure Metrics 10-29
Lab: Monitoring and Diagnostics 10-35

Module 11: Identity Management and Access Control


Lesson 1: Claims-based Identity Concepts 11-2
Lesson 2: Using the Windows Azure Access Control Service 11-8
Lesson 3: Configuring Services to Use Federated Identities 11-12
Lab: Identity Management and Access Control 11-18

Module 12: Scaling Services


Lesson 1: Introduction to Scalability 12-2
Lesson 2: Load Balancing 12-5
Lesson 3: Scaling On-Premises Services with Distributed Cache 12-8
Lesson 4: Windows Azure Caching 12-15
Lesson 5: Scaling Globally 12-22

Lab: Scalability 12-25

Appendix A: Designing and Extending WCF Services


Lesson 1: Applying Design Principles to Service Contracts 13-2
Lesson 2: Handling Distributed Transactions 13-14
Lesson 3: Extending the WCF Pipeline 13-21
Lab: Designing and Extending WCF Services 13-37

Appendix B: Implementing Security in WCF Services


Lesson 1: Introduction to Web Services Security 14-2
Lesson 2: Transport Security 14-6
Lesson 3: Message Security 14-15
Lesson 4: Configuring Service Authentication and Authorization 14-25
Lab: Securing a WCF Service 14-34

Lab Answer Keys


Module 1 Lab: Exploring the Work Environment L1-1
Module 2 Lab: Creating a Data Access Layer by Using Entity Framework L2-1
Module 3 Lab: Creating the Travel Reservation ASP.NET Web API Service L3-1
Module 4 Lab: Extending Travel Companion's ASP.NET Web API Services L4-1
Module 5 Lab: Creating and Consuming the WCF Booking Service L5-1
Module 6 Lab: Hosting Services L6-1
Module 7 Lab: Windows Azure Service Bus L7-1
Module 8 Lab: Deploying Services L8-1
Module 9 Lab: Windows Azure Storage L9-1
Module 10 Lab: Monitoring and Diagnostics L10-1
Module 11 Lab: Identity Management and Access Control L11-1
Module 12 Lab: Scalability L12-1
Appendix A Lab: Designing and Extending WCF Services L13-1
Appendix B Lab: Securing a WCF Service L14-1

Module 1
Overview of Service and Cloud Technologies
Contents:
Module Overview 1-1

Lesson 1: Key Components of Distributed Applications 1-2

Lesson 2: Data and Data Access Technologies 1-6


Lesson 3: Service Technologies 1-9

Lesson 4: Cloud Computing 1-13


Lesson 5: Exploring the Blue Yonder Airlines Travel Companion Application 1-22
Lab: Exploring the Work Environment 1-26

Module Review and Takeaways 1-30

Module Overview
This module provides an overview of service and cloud technologies using the Microsoft .NET Framework
and the Windows Azure cloud. The first lesson, Key Components of Distributed Applications, discusses
characteristics that are common to distributed systems, regardless of the technologies they use. Lesson 2,
Data and Data Access Technologies, describes how data is used in distributed applications. Lesson 3,
Service Technologies, discusses two of the most common protocols in distributed systems and the .NET
Framework technologies used to develop services based on those protocols. Lesson 4, Cloud
Computing, describes cloud computing and how it is implemented in the Windows Azure platform.
Lesson 5, Exploring the Blue Yonder Airlines Travel Companion Application, describes the Blue Yonder
Airlines Travel Companion application that you will develop throughout the labs in this course.

Note: The Management Portal UI and Windows Azure dialog boxes in Visual Studio 2012
are updated frequently when new Windows Azure components and SDKs for .NET are released.
Therefore, it is possible that some differences will exist between screen shots and steps shown in
this module, and the actual UI you encounter in the Management Portal and Visual Studio 2012.

Objectives
After completing this module, you will be able to:

Describe the key components of distributed applications.

Describe data and data access technologies.

Explain service technologies.

Describe the features and functionalities of cloud computing.

Describe the architecture and working of the Blue Yonder Airlines Travel Companion application.

Lesson 1
Key Components of Distributed Applications
Users today expect applications to present and process information from varied data sources, which might
be geographically distributed. Modern applications must also support different platforms such as mobile
and desktop, in addition to providing up-to-date information and an appealing UI.

Designing such applications is not a trivial task, and involves collaboration and integration between
several groups of components.

This lesson describes the key components and architecture of modern distributed applications.

Lesson Objectives
After completing this lesson, you will be able to:

Describe the basic characteristics of distributed applications.


Describe the logical layers that constitute a distributed application.

Characteristics of Distributed Applications


Today's data is distributed by nature. People share
data with family, friends, and colleagues.
Companies share data with partners and
customers, and applications share data on the
web.
Customers expect applications to be always
connected and to fetch all the information they
need, when they need it.
The virtual world does not have borders and data
must be available across technologies and
platforms.
Modern applications can run multiple instances on a variety of different platforms, yet they are expected
to have access to the same data and always stay in sync.

Data is distributed between data centers, private computers, and mobile devices. Data should be secure
and private, but at the same time available to its owners and legitimate customers. Today, both data and
the number of users have increased exponentially. Applications must provide services to access data and
maintain high-quality standards in terms of availability and performance.

The only way to achieve availability and performance is by collaboration and distribution of load. An
application can achieve its performance requirements by distributing the computing load across multiple
servers. By using a large number of web servers that are geographically distributed, you also increase the
high availability of your applications. Applications also consume data from a variety of data sources to
provide a rich set of functionality, and share their data with other applications. Finally, applications
replicate, cache, and centralize data to provide the best user experience.

It is simply impossible to provide a modern, high-scale application within the borders of a traditional
single computer. Today, data and computing distribution is a necessity.

Distributed systems can be described by the following basic characteristics:


Scalability

Availability

Latency

Reliability

Security and privacy

Scalability
Distributed systems provide value by using the collaboration of a group of services and clients that are
geographically distributed. Each service has to serve a large number of requests originating from different
clients. A scalable service can serve a growing number of clients. Scalability is measured by the ratio
between the growth in the number of customers and the growth in the required infrastructure. You can
achieve scalability by using an appropriate design, such as designing stateless services so you can run
them on multiple computers, and integrating distributed cache solutions for services that need to share
their state between computers.

Availability
Today's systems serve a global audience, located around the world in different time zones. Services must
be available 100 percent of the time and be resilient to connectivity and performance issues. You can
achieve high availability in a distributed environment by using design guidelines such as failover services
and appropriate decoupling between services.

Latency
Latency is the delay introduced by a system when responding to a single request. Users expect
applications to present valuable information without any unnecessary delays. The information must always
be available, the application must be responsive, and user experience must be smooth. To provide a
seamless user experience, services must have a short response time. If the service introduces a long delay,
the experience is not considered to be a smooth one. When designing a system to have low latency, you
should consider concepts such as caching data, parallelizing tasks, and reducing the size of payloads for
both requests and responses.

Reliability
Information is a valuable asset. Clients expect distributed applications to store their data reliably and
make sure that it is never lost or damaged. Keeping data consistent might not be trivial in a distributed
environment where multiple instances might handle the same piece of data concurrently. Data must be
replicated and geographically distributed to handle the risk of hardware failure of any kind.

Security and Privacy


One of the greatest concerns when dealing with distributed systems is security and privacy.

The fact that the system is distributed means that data will be distributed as well. Yet the system has to
ensure that only legitimate stakeholders get access to it at any time. Often distributed systems have no
boundaries and are accessible to anyone through the Internet. This can include potential attackers who
wish to harm the system and disturb its normal behavior. Proper security design that incorporates
concepts such as communication encryption, authentication, and authorization, can reduce the risk of
information disclosure, denial of service, and data theft.

Question: What could be the consequences if a system cannot be scaled?



Logical Layers of Distributed Applications


As a developer of distributed systems, you are
often required to troubleshoot complicated issues.
It is often simpler to break down a complicated
task into smaller ones that are easier to resolve.
Separation of concerns is an approach that can
help you simplify the issues by splitting the larger
process into simpler tasks.

Separating the responsibilities between different components helps you achieve better maintainability,
testability, and agility. It is easier to test each layer separately than to test the whole system as one unit.
Systems that are hard to test often are not tested at all, and applications that are poorly designed and
poorly tested will fail in production. Maintenance complexity is relative to application complexity, so
proper separation can help. At the same time, integration introduces its own maintenance challenges,
and these too have to be taken into account.
The responsibilities in a distributed system can be divided as follows:

Data layer

Execution layer

Service layer

User interface layer

Data Layer
The data layer is responsible for storing and accessing data. Data not only has to be stored, but also
queried and updated. The data layer stores, queries, updates, and deletes the data as required, while
maintaining reasonable performance. This can be a complicated task when you are dealing with a large
set of data distributed across a number of data sources.
The data manipulation policy depends on the data type and its properties. Data can be replicated,
distributed, and handled according to its characteristics. For example, client contacts can be replicated
across the data center because they change slowly. However, information about stocks must always be
accurate and therefore must be read from a single source.

Execution Layer
The execution layer contains the business logic and is responsible for carrying out the use-case scenarios
of the application. In other words, the execution layer implements the logic of the application. The
business logic uses the data layer to read and store data, and the UI layer to interact with the client. The
execution layer contains all the algorithms and logic of the application and is considered the brain of the
application.

Service Layer
The service layer exposes some of the capabilities of the application to the world as services. Other
applications might consume these services and use them as a data source or as a remote execution
engine.

The service layer acts as the interface for other applications, in contrast to the user interface layer, which
targets humans. The service layer drives collaboration of applications and enables distribution of
computing load and data. It is responsible for defining a contract that consumers must maintain to use
the service. It enforces security policies, validates incoming requests, and maintains the application
resources.

User Interface Layer


The user interface layer is the layer through which users interact with the application. It effectively
visualizes the data and operations of the application, and provides users with a simplified medium for
consuming the application data. While designing the UI, developers must consider the varying
expectations of different people and cultures. The UI should always be responsive, yet keeping it
responsive can be CPU-intensive, especially when using modern interfaces such as touchscreens.
The UI can be displayed on a variety of different devices, some of which might have extreme limitations
such as screen size and resolution. Nevertheless, the UI must be effective and present a useful
visualization. The UI has to provide simple, yet effective methods for the user to enter data and activate
the business and data layers to store and process it. Proper UI design is crucial because if the UI is not
user friendly, the application will not be used.

Lesson 2
Data and Data Access Technologies
Our identities, financial status, commercial activities, professional and social relations, and more are
persisted as data located across various data sources.

Applications access data, process it to provide value, and finally produce some more data for future use.
In this lesson, you will be introduced to various database technologies, along with .NET data access
technologies.

Lesson Objectives
After completing this lesson, you will be able to:

Describe common database technologies.


Describe data access technologies in the .NET Framework.

Data Storage Strategies


Data can be persisted in a variety of different
formats and in a wide range of infrastructures.
Each infrastructure and format is designed for
different scenarios and data types. Some storage
infrastructures are used to store a huge amount of
data and others have limited capacity. Some
storage infrastructures can execute complex
queries and others cannot. Some can access data
very quickly while others introduce long delays.
Data entities can be organized in different types
of models, such as relational, hierarchical, and the
object-oriented model. In a relational model,
entities are persisted as tables consisting of rows and columns with predefined relations between them.
The relational model is the conceptual basis of relational databases such as Microsoft SQL Server. In a
hierarchical model, entities are organized in a tree. Each entity might have one parent and multiple
children. Trees are easy to express in text, for example, in XML or JavaScript Object Notation (JSON)
documents. The object-oriented model is used to process data entities inside applications, as most
modern languages are based on the object-oriented programming (OOP) approach.
Data can be persisted in a wide range of data sources such as relational databases, file-systems,
distributed file systems, distributed caches, NoSQL databases, cloud-storage, and in-memory stores.

Relational Databases
SQL Server databases and the Windows Azure SQL Database are the traditional large-scale data sources.
They are designed to store relational data and can execute complex queries and user-defined functions.
Queries are written declaratively in languages such as T-SQL and can execute Create, Read, Update, and
Delete (CRUD) operations.

File System
A file system is used to store and retrieve unstructured data on disks. The basic unit of data is a file. Files
are organized in a tree of directories that has a volume as its root. Operating systems such as Windows
and Linux use file systems as their basic storage system.

Distributed File System


A distributed file system provides the simplicity and data model of a file system, and at the same time
overcomes the size limitation imposed by a single disk. A distributed file system is an arrangement of
networked computers that store data files, while users are exposed only to the abstraction of a single file
system. The distribution itself is transparent to the user.

Distributed Caches
Data access from relational databases is considered a slow operation. To reduce latency, some data can be
cached in memory, yet the size of such a cache is limited. A distributed in-memory cache solves the size
limitation by using an arrangement of networked computers, which store in-memory data as key-value
pairs and provide the experience of a single cache to the end user. Distributed caches will be
discussed in Module 12, "Scaling Services" of Course 20487.

NoSQL Databases
NoSQL databases are an umbrella term for many types of data stores, all of which store data in a
non-relational fashion. NoSQL databases are often used to store large amounts of data. These data stores
are schema-free, but data can be organized in a variety of different models, such as document databases,
key-value stores, or graph databases.

Cloud-Storage
Infrastructures such as Windows Azure Storage enable cloud and on-premises applications to store their
data, which can be structured or unstructured, in a highly scalable and persistent data store. Windows Azure
Storage exposes an interoperable API based on HTTP that can be used by any application running on any
platform.
Windows Azure Table Storage can be referred to as a key-value NoSQL database in the cloud, and
Windows Azure Blob Storage is similar to a huge file system in the cloud. Windows Azure Storage will be
discussed in Module 9, "Windows Azure Storage" of Course 20487.

In-Memory Stores
In-memory stores are the fastest data stores but are limited in size, not persistent, and hard to use in a
multi-server environment. In-Memory stores are used to store temporary data, local volatile data, or
replication of data that was retrieved from an external data source.

.NET Data Technologies


Applications written by using the .NET Framework
also have to access data. The .NET Framework
provides a variety of data access technologies:

System.IO contains all the infrastructure required to access data persisted on a file system.
FileStream provides the basic read/write operations, and classes such as FileInfo and
DirectoryInfo provide the required metadata.

ADO.NET is the basic SQL data-access technology. By using ADO.NET, it is possible to open a
connection to relational databases and execute SQL statements and stored procedures. A stored
procedure is code, usually SQL-based, which is stored and executed on the database itself. The
retrieved data can be saved as a collection of rows and columns that reflects the relational model in
which the data is stored in the database. ADO.NET provides several techniques for fetching and
manipulating data by using self-managed cursors and iterators, and relational and object-oriented
models for storing the data in the application memory.
Entity Framework (EF) is an Object Relational Mapper (ORM) infrastructure. Applications use the
object-oriented approach to represent data entities and thus collections of rows and columns are not
a natural representation for a running program. Data has to be converted from the relational model
to the object-oriented model. This is the role of an ORM infrastructure. EF was introduced in the .NET
Framework 3.5 and provides an infrastructure where queries are written in C#, executed against
relational databases, and produce results as collections of C# objects. At the core of the EF is a model
that represents the mapping between the relational and object-oriented representations. Entity
Framework will be discussed in Module 2, "Querying and Manipulating Data Using Entity Framework"
of Course 20487.

ASP.NET (the System.Web assembly) introduces a powerful in-memory cache that can be used by
any .NET application.

Distributed cache solutions, such as Windows AppFabric Cache and Windows Azure Caching, are an
in-memory store for almost any .NET Framework type, which negates the memory size limitation of
in-memory caches by distributing cache objects over several servers. Using distributed cache provides
scalability, and enhances the durability of cache items by saving copies of the cache items on
participating nodes and by avoiding the need to re-create cache items after a temporary server failure.
Distributed cache requires cached objects to be serializable so that they can be transported to other
nodes in the cache cluster.
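The ADO.NET approach described earlier in this list can be sketched as follows. This is a minimal, hypothetical example: the connection string, the BlueYonder database, and the Locations table are assumptions for illustration only, not part of the course labs.

```csharp
using System;
using System.Data.SqlClient;

class AdoNetSketch
{
    static void Main()
    {
        // Hypothetical connection string; replace with your own database.
        const string connectionString =
            @"Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=BlueYonder;Integrated Security=True";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT Id, Name FROM Locations", connection))
        {
            connection.Open();

            // The reader exposes the result set as rows and columns,
            // reflecting the relational model of the database.
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetString(1));
                }
            }
        }
    }
}
```

Note how the code works directly with rows and column ordinals; an ORM such as Entity Framework removes this manual mapping by returning objects instead.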

HTTP-Based APIs
A vast variety of technologies are used to create client applications that consume data from services. This
illustrates the importance of exposing data in standard and widespread protocols such as HTTP, which
provides easy, standard, resource-based access to data.
Both Windows Communication Foundation (WCF) Data Services and ASP.NET Web API provide the ability
to perform CRUD operations in services by using the OData open protocol. OData uses URIs and standard
HTTP requests. Using OData in ASP.NET Web API will be discussed in Lesson 2, "Creating OData Services",
of Module 4, "Extending and Securing ASP.NET Web API Services" in Course 20487.

Windows Azure Storage provides both HTTP and Managed APIs to access large unstructured data objects,
such as videos and images. Azure Table Service provides a NoSQL, key-value store for storing small
objects, up to 1 megabyte (MB) per entity. Objects can also be stored by using Blob Service as binary
blocks of data with a size limit of 200 gigabytes (GB) per object.

LINQ

LINQ is a .NET Framework infrastructure used for querying in a declarative fashion. LINQ technology can
be used to support any kind of data source, and provides a standard, consistent way to integrate data
from different sources.
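To illustrate the declarative style, the following is a small, hypothetical LINQ to Objects query over an in-memory array; the same query syntax also applies to other LINQ providers, such as LINQ to Entities.

```csharp
using System;
using System.Linq;

class LinqSketch
{
    static void Main()
    {
        // A hypothetical in-memory data source (flight durations in minutes).
        int[] flightDurations = { 95, 210, 45, 180, 60 };

        // Declarative query: flights shorter than two hours, ordered by duration.
        var shortFlights = flightDurations
            .Where(minutes => minutes < 120)
            .OrderBy(minutes => minutes);

        // Prints: 45, 60, 95
        Console.WriteLine(string.Join(", ", shortFlights));
    }
}
```

The query describes *what* data is wanted, not *how* to iterate and filter it; the LINQ provider decides how to execute it against the underlying data source.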

Question: Why is it important for applications to support HTTP for data access?

Lesson 3
Service Technologies
Services constitute a layer in the application architecture that exposes business logic capabilities to other
application components, improving modularity and reusability.

Services are the core of distributed applications, providing access to data and making it possible for users
to interact with other applications.
Services provide distributed applications with the ability to scale and meet the growing demands for
better performance, robustness, and interoperability for various consumers, whether it is a web
application, a mobile application, or even another service.

Using services as a layer for the application business logic also contributes to the maintainability and
testability of the application, thereby improving the application's quality. Separation of layers helps to
apply the Single Responsibility Principle (SRP), making it possible to test each layer as an independent
unit.

In this lesson, you will learn about services and how they are integrated into application architecture,
and about service technologies in general and .NET service technologies in particular.

Lesson Objectives
After completing this lesson, you will be able to:
Explain the differences between SOAP and HTTP-based services.

Explain the differences between ASP.NET Web API and WCF.

SOAP and HTTP-Based Services


A web service is a method of data transfer between software components based on web technologies.
Web technologies are mostly based on plain-text data formats sent between computers, which improves
interoperability and eases integration.

Web services are based on standards and use application-level protocols to communicate with each
other. Although a variety of protocols exist, two protocols are most often used by web services: SOAP
and HTTP.

SOAP-Based Services

The SOAP protocol is based on XML and uses structured elements to assemble a message. Messages in
SOAP are XML documents enclosed by the <Envelope> root element. SOAP request messages describe
the required action to be performed on the remote computer by name, and provide additional arguments
to be passed to the relevant action. SOAP response messages describe the result of the action initiated by
the request.

The following is an example of a SOAP message describing a request for a Calculator service that provides
an Add method for adding two numbers. The numbers 1 and 2 are provided as arguments.

SOAP request message


<soap:Envelope xmlns:soap="http://www.w3.org/2001/12/soap-envelope"
soap:encodingStyle="http://www.w3.org/2001/12/soap-encoding">
<soap:Body xmlns:m="http://www.example.org/calculator">
<m:Add>
<m:FirstNumber>1</m:FirstNumber>
<m:SecondNumber>2</m:SecondNumber>
</m:Add>
</soap:Body>
</soap:Envelope>

The following is an example of a SOAP message describing the reply received by calling the Add
method on the Calculator service with the arguments 1 and 2.

SOAP response message


<soap:Envelope xmlns:soap="http://www.w3.org/2001/12/soap-envelope"
soap:encodingStyle="http://www.w3.org/2001/12/soap-encoding">
<soap:Body xmlns:m="http://www.example.org/calculator">
<m:AddResponse>
<m:Result>3</m:Result>
</m:AddResponse>
</soap:Body>
</soap:Envelope>

SOAP-based web services provide the infrastructure for handling SOAP messages. SOAP-based web
services also provide both client and server an easy method to integrate their code and establish
communication between nodes. SOAP web services use standard web service specifications to simplify
application development. Developers can use specifications such as Web Service Description Language
(WSDL) for describing service methods and arguments in a self-descriptive XML format that is published
along with the service itself.

By using WSDL, you enable other programming technologies to easily consume SOAP web services,
because client tools can use the WSDL document to automatically generate proxy classes that expose
methods corresponding to the web service methods. Proxies take care of establishing a connection to
the remote node, serializing and deserializing data, and additional concerns.

SOAP-based web services support protocol extensions for security negotiation, reliable messaging,
transaction support, and more. The protocol extensions are referred to as the WS-* standard. You will
learn more about the SOAP specification in Module 5, "Creating WCF Services", Lesson 1, "Advantages of
Creating Services with WCF" in Course 20487. In the .NET Framework, you implement SOAP-based
services by using WCF.

Note: You can also implement SOAP-based web services with the ASP.NET Web Services
(ASMX) framework. However, this technology is considered deprecated, and is fully replaceable
by WCF.

Although web services are often considered to only use HTTP and run in the Internet, SOAP-based web
services are not required to use HTTP as their transport protocol. You can create SOAP-based web services
for both Internet and intranet environments. You can use the SOAP protocol over transports other than
HTTP, such as SMTP, UDP, and Advanced Message Queuing Protocol (AMQP). There are also proprietary
implementations of SOAP over other transports, such as TCP and named pipes; however, these
implementations are not fully interoperable.
For additional information about SOAP and its implementation, see:

Understanding SOAP
http://go.microsoft.com/fwlink/?LinkID=313726

HTTP-based Services

HTTP is an application-layer protocol that defines a set of characteristics for establishing request-
response communication between two networked nodes. These characteristics include methods (usually
referred to as verbs) that can be performed on a remote resource, security extensions (HTTPS),
authentication, status code conventions, and more.

HTTP-based web services are mostly used to manage resources that are a part of the HTTP paradigm:
custom structured textual resources, images, and more. Managing resources by using HTTP web services
is natural, and is based on URIs for resource identification and verbs for performing operations on the
selected resource.
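For example, a hypothetical flight-catalog service might map verbs and URIs to operations as follows (the URIs are illustrative only):

```
GET    http://example.org/api/flights       Retrieve the list of flights
GET    http://example.org/api/flights/42    Retrieve the flight with ID 42
POST   http://example.org/api/flights       Create a new flight
PUT    http://example.org/api/flights/42    Update the flight with ID 42
DELETE http://example.org/api/flights/42    Delete the flight with ID 42
```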

You can use WCF to create both SOAP-based services and HTTP-based services; however, the ASP.NET
Web API offers a richer, more testable, and more customizable environment for creating HTTP-based
services.
HTTP-based services are covered in Module 3, "Creating and Consuming ASP.NET Web API Services",
Lesson 1, "HTTP Services" in Course 20487.

ASP.NET Web API and WCF


The variety of application platforms and
development technologies, along with different
operating systems and mobile devices, highlights
the need for simplicity and interoperability when
developing public or private services.
Choosing the proper set of technologies affects
business aspects such as accessibility and the
ways a service can be consumed by various
operating systems, mobile devices, and
development platforms.
Over the past decade, Microsoft has released
several technologies for creating web services.
Among them are WCF and ASP.NET Web API, which will be introduced in this lesson.

ASP.NET Web API


With ASP.NET Web API, you can create HTTP services that use HTTP verbs and URIs. Because HTTP is
widely supported across different environments, these services are fully interoperable and can be
consumed by many platforms.

Building on HTTP characteristics, ASP.NET Web API uses HTTP headers to let consumers specify the
format of the data they expect to get back from the service. A single service implementation can generate
responses in JSON (a human-readable, text-based standard), XML, and other formats, without special
handling on the service side.
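For example, a client states the representation it expects by sending the standard Accept header. The URI and payload below are hypothetical:

```
GET /api/destinations HTTP/1.1
Host: services.example.com
Accept: application/json

HTTP/1.1 200 OK
Content-Type: application/json

[{"id":1,"city":"New York"},{"id":2,"city":"Seattle"}]
```

Sending Accept: application/xml instead would return the same data serialized as XML, without any change to the service implementation.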

The ASP.NET Web API is covered in depth in Module 3, "Creating and Consuming ASP.NET Web API
Services", and Module 4, "Extending and Securing ASP.NET Web API Services" of Course 20487.

WCF

WCF is a communication framework that was introduced in .NET Framework 3.0. WCF's primary goal is to
support SOAP and comply with the WS-* specifications.

By using WCF, you can build robust services that can serve numerous types of client applications. WCF
provides support for various transport technologies that can be used in different scenarios. Some
examples are:

HTTP, for exposing services publicly.

TCP, for WCF-to-WCF communication.

IPC (inter-process communication) using named pipes, for providing a high-performance
communication mechanism between processes on the same computer.

MSMQ (Microsoft Message Queuing), for providing reliability.

WCF also provides flexibility to support different technologies by separating the service logic, which is
implemented in .NET classes, from the communication technology, which is configured using
configuration files and code.
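As a simplified sketch of this separation (the contract name, operation, and endpoint address below are hypothetical, not the contracts used in this course), the service logic is an ordinary .NET class, and the transport is selected in configuration:

```csharp
using System.ServiceModel;

// Service logic: a plain .NET contract and implementation,
// with no transport details in the code.
[ServiceContract]
public interface IBookingService
{
    [OperationContract]
    string BookFlight(string flightNumber);
}

public class BookingService : IBookingService
{
    public string BookFlight(string flightNumber)
    {
        return "Booked flight " + flightNumber;
    }
}
```

The communication technology is then chosen in the host's configuration file; switching from HTTP to TCP, for instance, is a matter of changing the binding, not the code:

```xml
<system.serviceModel>
  <services>
    <service name="BookingService">
      <!-- Replace basicHttpBinding with netTcpBinding to switch transports -->
      <endpoint address="http://localhost:8080/booking"
                binding="basicHttpBinding"
                contract="IBookingService" />
    </service>
  </services>
</system.serviceModel>
```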

WCF supports hosting on various infrastructures, including Internet Information Services
(IIS), Windows services, console applications, and other .NET executables. WCF will be covered in Module
5, "Creating WCF Services", Appendix A, "Designing and Extending WCF Services", and in Appendix B,
"Implementing Security in WCF Services" of Course 20487.

Question: What is the benefit of WCF's support of multiple protocols?



Lesson 4
Cloud Computing
Cloud computing is revolutionizing the way you develop services and applications. The on-demand model
of cloud computing provides new ways to scale and provide better availability of services.

The continuous growth of data, platforms, and users requires a more robust platform with virtually
unlimited capacity to handle the expected load.
In this lesson, you will learn about cloud computing and its benefits, some architectural considerations for
setting up cloud computing, and the cloud computing products from Microsoft that are based on
Windows Azure.

Lesson Objectives
After completing this lesson, you will be able to:
Explain what cloud computing is.

Describe the benefits of cloud computing.

Explain the difference between IaaS, PaaS, and SaaS.


Describe the Windows Azure cloud platform.

Explain how to use Windows Azure to host your application.

Describe the components of the Windows Azure ecosystem.


Explain how to create new virtual machines with Windows Azure.

Introduction to Cloud Computing


Setting up a data center for hosting applications is
a complicated and costly task. Choosing the right
hardware, installing operating systems, planning
network components, and configuring load-
balancing solutions are only a portion of the
required tasks. Maintaining the data center is
equally complicated, because it includes time-
consuming tasks such as upgrading obsolete
hardware, installing software updates, backing up
applications and data, and replacing
malfunctioning hardware.

When designing a data center, you must consider the expected capacity, usage during peak times, and
expected growth. You also need to take into account that part of the data center will be idle most of the
time while still consuming electricity and requiring maintenance.
Cloud computing can handle these issues by using an on-demand approach for computing. In cloud
computing, you lease computer resources from a cloud vendor, based on your current needs.

Typically, cloud services consist of a group of servers and storage resources scattered in different physical
locations. Cloud services share resources to provide hosted application high-availability, flexibility, and
maximum utilization of hardware.

Benefits of Cloud Computing


Setting up data centers for hosting applications
and services is often costly and requires high
maintenance.

Owning a data center introduces challenges to IT departments in addition to the establishment costs.
Some of the challenges include:

Supporting redundancy and high availability, which requires adding servers to the data center in case
of a hardware failure.

Handling unpredicted loads of incoming traffic, which can cause temporary unavailability and loss of
potential customers.

Preparing for unpredicted load, which leads to adding more hardware that will be under-utilized
most of the time, thereby making the data center even less efficient.
The following illustration demonstrates the utilization of resources in hosting a service or an application
on a local data center compared to the cloud.

While cloud provisioning maintains a level of resources slightly above the application usage, as shown in
the preceding graph, on-premises provisioning fails to keep up with the application usage needs in two
scenarios. When the application grows rapidly, static on-premises provisioning causes under-provisioning.
When the application usage drops, on-premises provisioning cannot scale down, which causes
over-provisioning.

Cloud computing provides virtually unlimited scaling in case of unpredicted load, and enhances high
availability and performance by taking advantage of the large capacity of available bandwidth, storage,
and computing resources.

Hosting applications and services in the cloud also improves utilization of resources by using an elastic
approach: scaling resources out to meet growing demand when needed, and scaling them back down
when demand drops. This improves flexibility and reduces operational costs.

The following illustration shows some of the growth patterns that are common in modern applications
and can benefit from using cloud computing.

Cloud computing vendors, such as Windows Azure, also provide a wide range of features for hosted
services and applications, for data storage, caching, and more. Windows Azure features will be covered in
detail in later modules and lessons.

Cloud Computing Strategies


Cloud computing can be customized to meet
different needs of software vendors. When
considering a cloud-based solution, select from
the following strategies:
Infrastructure as a Service (IaaS). With IaaS,
the cloud platform provides the ability to
create virtual machines located in the cloud,
manually manage the IT aspects of an
application, install different operating
systems, configure load balancers, and
manage high-availability policies. The cloud
provides on-demand provisioned servers as
needed. IaaS is the fundamental building block of the cloud platform.

Platform as a Service (PaaS). With PaaS, the cloud platform provides a ready-to-use infrastructure,
which includes an operating system, storage, databases, an auto-configured load balancer, backup,
replication, and more. The software vendor can focus on creating the required database schema and
data, and on deploying the application. The platform takes care of the rest, providing an on-demand
application-hosting environment that can be cloned and scaled automatically.
Software as a Service (SaaS). With SaaS, software vendors can provide their users with ready-to-use,
on-demand software that benefits from the inherent capabilities of a cloud platform. SaaS provides
business flexibility by building on cloud platform features such as scalability, high availability,
self-management, and backup.

Examples of SaaS include Outlook.com web email and Office 365.

The following diagram shows the difference between the various cloud computing strategies.

Introduction to Windows Azure


Windows Azure is the cloud computing platform
offered by Microsoft. Windows Azure consists of
pairs of data centers located in key areas in
North America, Europe, and Asia.
Windows Azure data centers are physically
secured, protected from power failures and
network outages, and designed to operate
continually, thereby providing reliability and
operational sustainability.

Windows Azure also provides geo-replication, a disaster recovery solution that automatically backs up
data to a remote location in case of a disaster at the hosting data center, providing durability for your
applications and services.

As a complete cloud computing solution, Windows Azure provides an on-demand, scalable, self-service
computing and storage platform for hosting services and applications built with a wide variety of
technologies, such as the .NET Framework, Java, Python, and PHP, using SQL databases or MySQL, and
hosted on Windows or Linux operating systems.

Windows Azure supports a wide variety of platforms and technologies making it possible to host whole
solutions and not only standalone services.

Windows Azure also offers a set of building-block services for managing identities, communication, and
media.

Windows Azure also includes inherent features for scalability, replication and backup, and advanced
storage types, which will be introduced in the following modules.

Windows Azure Cloud Services


Windows Azure Cloud Services is the Microsoft
offering for PaaS. Cloud Services provides a set of
hosting options, referred to as Roles, for meeting
various needs and constraints when you plan
migration to the cloud.

Windows Azure roles are stateless, provision-ready servers that provide additional features that
together form the PaaS base for hosting applications. Cloud Services offers two roles:
a web role and a worker role.

Web Role
This role is designed to run IIS-hosted applications and services in multi-tiered applications. Web roles
are exposed to the Internet and are located behind a load balancer, making them a prime choice to host
the front-end part of the application. Web roles support advanced management capabilities, including
Remote Desktop access to the underlying virtual machine, network isolation, and running code with
elevated privileges.
Because a web role hosts applications in IIS, it is possible to host applications written for various platforms,
including ASP.NET, WCF, PHP, Node.js, and more.

Worker Role
This role meets the need for running background processing asynchronously and can also serve as the
back end for a web role. A worker role provides the same advanced management capabilities as a web
role, but does not have IIS pre-installed on the machine. A worker role can also be used without a web
role as a front end, acting as a scalable computing resource in the cloud. Extending web roles with a
worker role as a back end distributes application logic into independent portions that can be scaled and
load-balanced separately, making maximum utilization of resources possible.
Because roles are designed to be stateless virtual machines, Windows Azure can automatically replace a
malfunctioning machine with a new machine, and then deploy the package containing your application to
the new machine. Therefore, stateful applications, such as SQL Server and SharePoint, are not suitable
for hosting in Windows Azure roles. For these types of applications, you should consider using Windows
Azure IaaS solutions.
Windows Azure Cloud Services is covered in depth in Module 6, "Hosting Services" in Course 20487.

Windows Azure Application Components


Windows Azure consists of several components
that simplify the development of cloud services.

Windows Azure Storage


Due to the unpredictable nature of cloud services,
data cannot be persisted reliably on the virtual
machines. Windows Azure storage is a cloud-
based, large-scale data store for persisting data in
the cloud. Windows Azure storage provides an
HTTP API for storing data objects. Data objects
can be stored in three available types of
storage: Blob storage, Table storage, and Queue storage.

There are three types of storage services in Windows Azure:

Blob storage. This type of storage is a non-structured collection of objects that can be accessed by
using a resource identifier and can be used for storing files, such as images, videos, large texts, and
other non-structured data.

Table Storage. This type of storage is a semi-structured collection of objects that can have fields, but
cannot have relations between objects. The fields are not bound to a schema structure, and different
objects can have different fields within the same collection. Table storage also provides a queryable
API access to find objects.
Queue Storage. This type of storage provides a persistent messaging queue.
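Because all three storage services are exposed through an HTTP API, data objects are addressable by URI. For example, a public blob could be retrieved with a plain HTTP GET; the account, container, and blob names below are hypothetical:

```
GET /photos/trip-001.jpg HTTP/1.1
Host: myaccount.blob.core.windows.net
```

Blobs in private containers additionally require an Authorization header or a shared access signature appended to the URI.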
Windows Azure Storage is covered in depth in Module 9, "Windows Azure Storage" in Course 20487.

Windows Azure Service Bus


Integrating hybrid services (running on-premises and in the cloud) can be challenging, because
connecting systems across network boundaries in a secure and reliable manner is not an easy task.
Windows Azure Service Bus provides a messaging infrastructure for exchanging messages between
applications, whether they run in the cloud or outside it.
Windows Azure Service Bus is covered in depth in Module 7, "Windows Azure Service Bus" in Course
20487.

Windows Azure Access Control Services (ACS)


Windows Azure ACS is an infrastructure for authentication, authorization, and identity management.

ACS has integrated support for Windows Identity Foundation (WIF) and industry standards such as OAuth
(Open Standard for Authorization), WS-Federation, SAML (Security Assertion Markup Language), and
more.

ACS leverages out-of-the-box identity providers, integrating with corporate identity providers such
as Active Directory Federation Services (AD FS 2.0) and social identity providers such as Windows Live,
Facebook, and Google, and can be extended to support custom identity providers. Using ACS saves you
from implementing complicated mechanisms for user management, identities, and integration
with other applications.
ACS is covered in depth in Module 11, "Identity Management with Windows Azure Access Control
Services" in Course 20487.

Windows Azure Caching


Windows Azure Caching provides a low-latency, high-throughput store for data objects in a local or
distributed shared in-memory cache. Windows Azure Caching improves performance by reducing the
load on the database and helps applications scale.

Windows Azure Caching simplifies migration for applications that use on-premises in-memory or
distributed cache solutions. You can also use Windows Azure Caching to replace the ASP.NET session
state and output cache providers.
Windows Azure Cache is covered in depth in Module 12, "Scaling Services" in Course 20487.

Windows Azure Content Delivery Network (CDN)


Windows Azure CDN is a network of servers located around the world that provide static content
caching. CDN servers can store videos, images, and other static data that can be accessed by users who
are geographically close. Having scattered CDN data centers helps reduce latency and therefore
improves application performance. For example, a user from Asia can use cloud services hosted in
a North American data center for browsing metadata for movies, but will be redirected to geographically
closer CDN servers to download or watch a movie online.

CDN is covered in depth in Module 12, "Scaling Services" in Course 20487.

SQL Database
Windows Azure SQL Database is a cloud-based relational database as a service, based on Microsoft SQL
Server technologies. SQL Database is fully scalable, provides high availability, supports SQL Reporting,
and enables data replication between cloud and on-premises databases.

Windows Azure Mobile Services


Windows Azure Mobile Services is a platform for creating scalable back-end services for applications,
supporting structured data storage and access, authentication, and push notifications without
implementing server-side code.
Windows Azure Mobile Services are beyond the scope of this course.

Windows Azure IaaS


Hosting applications and services on the Windows
Azure platform is easy, and the power and
productivity of the Windows Azure PaaS
infrastructure enables you to meet most of the
requirements of common services. Sometimes,
however, you might need more fine-grained
control, need to host applications and services
on different operating systems such as Linux, or
need to host multi-server environments.

Windows Azure IaaS provides a platform for
hosting custom virtual machines in the cloud,
giving you better control over the hosting servers.
This makes it possible to manage every aspect of the desired solution, from the operating
system and virtual network configuration to complicated software pre-installation requirements and
local disk persistency.

Windows Azure provides a set of operating system images to choose from when creating a virtual
machine, including Linux distributions and partner solutions. You can also create a custom virtual
machine on-premises, upload it, and then deploy it to the cloud. Windows Azure provides various
ways to host all kinds of software and services.

You can migrate currently deployed applications by uploading a whole solution consisting of multiple
machines to the cloud for seamless continuation. Downloading virtual machines from Windows Azure to
be hosted on-premises is supported as well.

Windows Azure virtual machines use virtual hard disks (VHDs) that are stored in Windows Azure Storage.
By storing the VHDs in Windows Azure Storage, you get durability, because each disk is replicated in
three copies and saved in two different data centers.

Windows Azure provides deployment and management capabilities, both through PowerShell cmdlets
(scripts) and programmatically through an HTTP API, making it possible to create custom management
tools integrated into any software solution.
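As a brief sketch of the scripted management model, the following assumes the Windows Azure PowerShell module is installed and a subscription is already configured; the service and role names are hypothetical:

```powershell
# List the cloud services in the current subscription
Get-AzureService

# Scale the hypothetical web role "WebRole1" of the cloud
# service "MyService" to three instances
Set-AzureRole -ServiceName "MyService" -Slot Production -RoleName "WebRole1" -Count 3
```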

Question: When would you choose IaaS over PaaS?

Demonstration: Exploring the Windows Azure Management Portal


In this demonstration you will open the Windows Azure Management Portal, create a new Windows Azure
Cloud Service, and configure it by using the portal.

Demonstration Steps
1. Open Internet Explorer and browse to Windows Azure Management Portal at
https://manage.windowsazure.com.
2. Explore the management portal main screen. Observe the list of services you can manage on the
pane to the left.

3. Create a new Windows Azure Cloud Service named CloudServiceDemoYourInitials (YourInitials


contains your initials), and choose to create it in a region closest to your location. The cloud service
name you enter is going to be part of the URL you will use when connecting to the roles running in
the cloud service, and the region you select will determine the data center where the VMs will be
created.

4. Open the dashboard of the cloud service that you created in the previous step (the one named
CloudServiceDemoYourInitials, where YourInitials contains your initials). Currently the cloud service is
empty and has no roles, so none of the tabs is showing any content. In future modules, you will learn
how to deploy new roles to the cloud service and how to configure it.

5. Go over the different tabs:

a. DASHBOARD. Provides an overview for the state and configuration of the cloud service and its
roles.

b. MONITOR. Shows performance counter metrics for the roles, such as their CPU and memory
usage.

c. CONFIGURE. Controls settings such as monitoring capabilities, remote access, and selection of
the guest operating system.

d. SCALE. Controls the number of instances (VMs) used for each role.

e. INSTANCES. Controls the existing instances (shut down/start, reboot, and remote desktop).

f. LINKED RESOURCES. Manages the list of dependent resources, such as storage and databases. By
linking to resources, you can monitor and control the scale of all resources from the cloud service
configuration.

g. CERTIFICATES. Manages the certificates used by the roles, for example, for HTTPS communication.

6. Close the browser.



Lesson 5
Exploring the Blue Yonder Airlines Travel Companion
Application
In this course, you will learn how to create services and deploy them to hybrid environments by
developing parts of the Blue Yonder Airlines Travel Companion application. The Travel Companion
application is a large system that includes, among other components, a set of databases, two back-end
services, a front-end service, and a client application. Before starting to develop the various parts of the
application, you need to be familiar with the architecture of the system, to better understand the
purpose of each component.

In this lesson you will be introduced to the architecture of the Travel Companion application and learn
how the application works.

Lesson Objectives
After completing this lesson, you will be able to:
Describe the architecture of the Travel Companion application.

Describe how to use the Travel Companion application.

Architecture of the Travel Companion Distributed Application


In this course, you will develop different
components of the Blue Yonder Travel
Companion application. Travel Companion is a
new application that enables end users to book
flights to various destinations offered by Blue
Yonder Airlines, and then manage their
reservations, including printing boarding passes,
receiving flight and weather updates, and
publishing photos taken during the trip.

This new application is developed by two teams, client and server, and you are one of the developers on
the server team. During this course, you will develop the different server parts required by the application
by using various server technologies:

Front-end services for the client application, with ASP.NET Web API.

Back-end services that will handle the flight reservations, with WCF.

A data access layer, with Entity Framework.

In addition, you will use several Windows Azure components, such as Windows Azure Storage, Windows
Azure Service Bus, and Windows Azure ACS.
During the course, you will also deploy the server components to a hybrid environment that includes on-
premises servers and Windows Azure servers.

The following architectural diagram depicts the components that comprise the overall server solution:

FIGURE 1.1: BLUE YONDER COMPANION SYSTEM ARCHITECTURE

System Components
On-premises SQL Server. The on-premises SQL Server will hold all the reservation data that is
managed by the back-end WCF service.
On-premises WCF service. The WCF reservation service, which is deployed to on-premises servers, will
control the booking process, including validation against the internal systems of the airline, such as
payment approval.

SQL Database. The SQL Database in Windows Azure will hold all the data regarding traveling
destinations, flight schedules, frequent flyer members, and a history of booked flights.

Windows Azure Web Role. The Windows Azure Web Role will host the ASP.NET Web API front-end
services used by the Travel Companion client application. At the beginning of the course, these
services will be developed and hosted on-premises, and during the course they will be deployed to
Windows Azure. The following services will be created and hosted in the Windows Azure Web Role:
o Destinations service. Lists known destinations across the globe. This service supports basic search
operations.

o Flight schedules service. Returns a list of flight schedules according to origin and destination.
Supports different search criteria.

o Reservation service. Supports creating new flight reservations, linking reservations of frequent
flyer club members to their member account, and retrieving information for previously booked
flights.

o Photos service. Handles uploading photos to Windows Azure Storage and creating URLs with
shared access signatures for private blob storage containers.
Windows Azure Web Site. The Windows Azure Web Site will host the company's flight management
administration web application. This application will enable operators to change flights' departure
times. Changes made in this web application will be propagated to the client application by using push
notifications.
Windows Azure Caching. The destinations service will cache its list of destinations in Windows Azure
Caching.
Windows Azure Service Bus Relays. The communication between the ASP.NET Web API reservation
service and the WCF reservation service will be through a Windows Azure Service Bus Relay.
1-24 Overview of Service and Cloud Technologies

Windows Azure Storage. Windows Azure Storage is used for storing photos uploaded by users
from their trips, along with their related metadata. The blob storage holds both private and public photos,
in separate containers, whereas the table storage holds a list of all public photos, including information
such as the location where the photo was taken, the date and time it was taken, and the name of the
user who uploaded it. Users can use the Travel Companion client application to upload their
photos (public and private), get a list of public photos taken in a specific destination, or view their
own private photos. Uploading the photos is done by using the ASP.NET Web API photos service.

Windows Azure Service Bus Queue. Changes made to flight departures in the flight management
administration web application are not propagated to the clients automatically; instead, they are queued
by using Windows Azure Service Bus Queues.
Windows Azure Worker Role. The worker role is used to host a background process that pulls flight
update messages from the queue, locates which clients are affected by the changed flight departure
time, and propagates the flight updates to the clients' devices by using Windows Push Notification
Services (WNS).

Access Control Service. ACS will enable users to log in to the Travel Companion client application
with their Windows Live ID. After they are logged in, users will automatically authenticate against the
ASP.NET Web API services, and will be able to book flights and access their reservations.

Demonstration: Using the Travel Companion Application


This demonstration will provide you with a quick overview of the Blue Yonder Companion client app.

Demonstration Steps
1. In virtual machine 20487B-SEA-DEV-A, run the setupIIS.cmd file from
D:\AllFiles\Mod01\DemoFiles\BlueYonderDemo\Setup.

This script builds the server solutions and deploys them to the local IIS server.

2. In virtual machine 20487B-SEA-DEV-C, open the BlueYonder.Companion.Client.sln solution from


D:\AllFiles\Mod01\DemoFiles\BlueYonderDemo\BlueYonder.Companion.Client in a new Visual
Studio 2012 instance.
3. Run the application without debugging. The Blue Yonder Companion app is a travel reservation and
management app. It can help you search and book flights, manage your trip schedule, store and
manage pictures and video from trips, and provide weather information about trip destinations.

4. Open the app bar and search for flights to New York by typing New. The app now communicates
with the front-end service to retrieve a list of flights to locations that begin with New, for example,
New York. You will implement this search in future labs.

5. Select the trip from Seattle to New York and purchase it. Complete the purchase by entering your
personal information, and then click Purchase. The app will now send the purchase request to the
front-end service. The front-end service will save the purchase information and then send a separate
purchase request to the back-end service for additional processing. After the back-end and front-end
services complete their task, the client app will show a confirmation message. You will implement the
purchase feature in future labs, including the back-end service purchase.

6. Close the confirmation window to return to the Blue Yonder Companion page. Observe the weather
forecast, which is shown under New York at a Glance. The weather forecast is retrieved from the
front-end service. You will implement the weather service in future labs.

7. Select the current trip from Seattle to New York, and then select Media from the app bar.

8. Open the app bar and observe the available options. You can upload images and videos to Windows
Azure Storage, and share them with other clients. In future labs, you will implement both the upload
and download features.

Note: Do not click the upload buttons, as you have not created any Windows Azure
Storage accounts yet. If you click any of the upload buttons, the app will fail and close.

9. Close the client app, return to the 20487B-SEA-DEV-A virtual machine, and then run the
CleanIIS.cmd script from the D:\AllFiles\Mod01\DemoFiles\BlueYonderDemo\Setup folder.

Lab: Exploring the Work Environment


Scenario
In this lab you will explore several of the frameworks and platforms used for creating distributed
applications, such as Entity Framework, ASP.NET Web API, and Windows Azure.

Objectives
After completing this lab, you will be able to:

Create an entity data model by using the data model wizard.

Create an ASP.NET Web API service.

Create a Windows Azure SQL database.

Deploy a web application to a Windows Azure website.

Lab Setup
Estimated Time: 30 minutes
Virtual Machine: 20487B-SEA-DEV-A

User name: Administrator


Password: Pa$$w0rd
For this lab, you will use the available virtual machine environment. Before you begin this lab, you must
complete the following steps:

1. On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2. In Hyper-V Manager, click MSL-TMG1, and in the Action pane, click Start.

3. In Hyper-V Manager, click 20487B-SEA-DEV-A, and in the Action pane, click Start.

4. In the Action pane, click Connect. Wait until the virtual machine starts.
5. Sign in using the following credentials:

o User name: Administrator


o Password: Pa$$w0rd

6. Verify that you received credentials to log in to the Azure portal from your training provider. These
credentials and the Azure account will be used throughout the labs of this course.

Exercise 1: Creating a Windows Azure SQL Database


Scenario
In this exercise, you will use the Windows Azure Management Portal to create a new Windows Azure SQL
Database server. You will then connect to the new server with SQL Server Management Studio and create the
BlueYonder database from the .bacpac file.

The main tasks for this exercise are as follows:

1. Create a new Windows Azure SQL database server


2. Manage the Windows Azure SQL Database server from SQL Server Management Studio.
Developing Windows Azure and Web Services 1-27

Task 1: Create a new Windows Azure SQL database server


1. Open Internet Explorer and browse to the Windows Azure Management Portal at
https://manage.windowsazure.com.

2. From the SERVERS tab on the SQL DATABASES page, add a Windows Azure SQL Database server in
a region closest to you. Use the login name SQLAdmin and the password Pa$$w0rd. Wait for the server
to appear in the list of servers and for its status to change to Started. Write down the name of the newly
created server.

Task 2: Manage the Windows Azure SQL Database Server from SQL Server
Management Studio
1. On the Configure tab of the newly created SQL Database server, add a rule to allow access from any
IP address (0.0.0.0-255.255.255.255). Click Save to save the new rule.

Note: As a best practice, you should allow only your IP address, or your organization's IP
address range, to access the database server. However, in this course you will use this database
server for future labs, and your IP address might change in the meantime; therefore, you are
required to allow access from all IP addresses.

2. Open Microsoft SQL Server Management Studio 2012 and connect to the new server. Use the server
name SQLServerName.database.windows.net, and the login name and password you used in the
previous task (Replace SQLServerName with the server name you wrote down in the previous task).
3. In Object Explorer, right-click the Databases node, and then click Import Data Tier Application.
Import the BlueYonder.bacpac file from the D:\AllFiles\Mod01\LabFiles\Assets folder. Verify that
the BlueYonder database is created.

Results: After completing this exercise, you should have created a Windows Azure SQL Database in your
Windows Azure account.

Exercise 2: Creating an Entity Data model


Scenario
In this exercise, you will create a new Class Library project that connects to the Windows Azure SQL
database, and use the Entity Framework database-first development approach to generate classes for the
BlueYonder database.

The main tasks for this exercise are as follows:


1. Create an Entity Framework Model

Task 1: Create an Entity Framework Model


1. Open Visual Studio 2012 and create a new Class Library project. Name the project
BlueYonder.Model.
2. Add an ADO.NET Entity Data Model to the project.

o Connect to the SQLServerName.database.windows.net server with the login name and


password you used in the previous task (replace SQLServerName with the server name you have
written down in the previous exercise), and select the BlueYonder database.

o Make sure to check the option to include the database password in the connection string.

o Import the Locations and Travelers tables.



o Save the EDMX file after it opens and then close it.

Results: After completing this exercise, you should have created Entity Framework wrappers for the
BlueYonder database.

Exercise 3: Managing the Entity Framework Model with an ASP.NET Web


API Project
Scenario
Create a new ASP.NET MVC 4 web application that exposes a Web API for CRUD operations on the
BlueYonder database by using Entity Framework models.

The main tasks for this exercise are as follows:

1. Create an ASP.NET Web API Project

2. Add a Web API Controller with CRUD Actions, Using the Add Controller Wizard

Task 1: Create an ASP.NET Web API Project


1. Add a new ASP.NET MVC 4 Web Application project named BlueYonder.MVC with the Web API
template.

Task 2: Add a Web API Controller with CRUD Actions, Using the Add Controller
Wizard
1. In the BlueYonder.MVC project, add a reference to the BlueYonder.Model project.
2. Copy the connection string from the BlueYonder.Model project to the BlueYonder.MVC project
(from the App.config to the Web.config).

3. Build the solution, then open Server Explorer, and then refresh the Data Connections.
4. In Solution Explorer, in the BlueYonder.MVC project, right-click the Controllers folder, and add a
new controller named LocationsController.

o Create the new controller using the API controller with read/write actions, using Entity
Framework template.

o Select the Location model class.

o Select the BlueYonderEntities data context class.

Note: You now have a Web API controller for the Location model.

5. Run the BlueYonder.MVC web application, and in the browser, append the api/locations string to
the URL to get the list of locations. Open the downloaded file and verify that you see the list of locations
in JSON format.
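The connection string that you copy in this task is an Entity Framework connection string. As a rough sketch, the entry in Web.config might look similar to the following; the metadata resource names, server name, and credentials shown here are placeholders that depend on your model and the server you created:

```xml
<connectionStrings>
  <add name="BlueYonderEntities"
       providerName="System.Data.EntityClient"
       connectionString="metadata=res://*/Model.csdl|res://*/Model.ssdl|res://*/Model.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=SQLServerName.database.windows.net;Initial Catalog=BlueYonder;User ID=SQLAdmin;Password=Pa$$w0rd;Encrypt=True&quot;" />
</connectionStrings>
```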

Results: After completing this exercise, you will have a website that exposes the Web API for CRUD
operations on the BlueYonder database.

Exercise 4: Deploying a web application to Windows Azure


Scenario
In this exercise, you will create a Windows Azure Web Site to host the ASP.NET MVC 4 web application.

The main tasks for this exercise are as follows:



1. Create a New Windows Azure Web Site

2. Deploy the Web Application to the Windows Azure Web Site

3. Delete the Windows Azure SQL Server

Task 1: Create a New Windows Azure Web Site


1. Open the Windows Azure Management Portal at https://manage.windowsazure.com.

2. On the WEB SITES page, click NEW, and then click QUICK CREATE to create a Windows Azure Web
Site. Name the web site BlueYonderWebSiteYourInitials (Replace YourInitials with your initials) and
select the region closest to your location. After you create the web site, wait until its status changes to
Running.

3. On the web site's DASHBOARD page, click the link to download the web site's publish profile file.

Task 2: Deploy the Web Application to the Windows Azure Web Site
1. In Visual Studio 2012, right-click the BlueYonder.MVC project in Solution Explorer, and click Publish.
Import the profile settings file you downloaded, and use the wizard to deploy the Web application to
the Windows Azure Web Site you created. Wait for the deployment to finish, and for a browser to
open.

Note: At this point, you can simply click Next at every step of the wizard, and then click
Publish to start the publishing process. Later in the course you will learn how the deployment
process works and how to configure it.

2. In the browser, append the api/locations string to the URL to get the list of locations. Open the
downloaded file and verify that you see the list of locations in JSON format.

Task 3: Delete the Windows Azure SQL Server


1. Open Internet Explorer and browse to the Windows Azure Management Portal at
https://manage.windowsazure.com.

2. Open the SQL DATABASES page and on the SERVERS tab, click the STATUS column of the server
you created in the first exercise, and then click DELETE. Follow the instructions in the Delete Server
Confirmation dialog box to delete both the database and the server.

Note: Windows Azure free subscriptions have a resource limitation and a restriction on the
total working hours. To avoid exceeding those limitations, you have to delete the Windows Azure
SQL Databases.

Results: After completing this exercise, you should have hosted the web application in the Windows
Azure cloud by using a Windows Azure SQL Database and a Windows Azure Web Site.

Question: Why did you have to allow your machine's IP address when creating a new Windows Azure SQL Database server?

Module Review and Takeaways


In this module, you were introduced to the characteristics of distributed applications and the benefits
of using a distributed architecture. You became familiar with databases and data access technologies. You
also learned about service technologies, different service approaches, and the considerations for choosing
a service technology. Lastly, you were introduced to cloud computing concepts and the Windows Azure
cloud platform.

Best Practices: Plan your application architecture to suit the technical requirements, while
understanding the limitations of a distributed architecture.

Choose the database technology that will let you scale according to your application usage, and
combine different approaches (relational database, NoSQL) when appropriate.

Think of your consumers when choosing a service technology. Choose HTTP services for high
compatibility and resource-based communications. Choose SOAP services when transaction
management support, reliable messaging, security negotiations, and extended WS-* support are
needed.
Describe your software deployment and configuration in detail before choosing a cloud computing
strategy (IaaS, PaaS).

Review Question(s)
Question: In which scenarios will you use HTTP services and in which scenarios will you use
SOAP?

Module 2
Querying and Manipulating Data Using Entity Framework
Contents:
Module Overview 2-1

Lesson 1: ADO.NET Overview 2-2

Lesson 2: Creating an Entity Data Model 2-6


Lesson 3: Querying Data 2-20

Lesson 4: Manipulating Data 2-26


Lab: Creating a Data Access Layer by Using Entity Framework 2-34
Module Review and Takeaways 2-41

Module Overview
Most applications store some data in a database. Examples of such data include configuration
settings, application data, user information, and documents.
The .NET Framework provides a set of tools that helps you access and manipulate data that is stored in a
database. In this module, you will learn about the Entity Framework data model, and about how to create,
read, update, and delete data. Entity Framework is a rich object-relational mapper, which provides a
convenient and powerful application programming interface (API) to manipulate data.
This module focuses on the Code First approach with Entity Framework, but also explains other options for
creating the data model.

Objectives
After completing this module, you will be able to:

Describe basic objects in ADO.NET and explain how asynchronous operations work.
Create an Entity Framework data model.

Query data by using Entity Framework.

Insert, delete, and update entities by using Entity Framework.



Lesson 1
ADO.NET Overview
ADO.NET is the original low-level data access API in the .NET Framework. Although this module does not
focus on ADO.NET, understanding basic objects and operations from the ADO.NET library is essential for
using higher-level approaches, such as Entity Framework.

This lesson describes fundamental ADO.NET operations and its asynchronous support.

Lesson Objectives
After completing this lesson, you will be able to:

Describe the basic objects in ADO.NET.

Use asynchronous database operations with ADO.NET.

ADO.NET Basic Objects


ADO.NET is the basic data access API in the .NET
Framework. It contains a set of data providers that
support most free and commercial databases
available today. A data provider is responsible for
implementing database-specific protocols and
features, and at the same time presenting a
consistent API so that replacing the application's
data provider does not involve many code
changes.

ADO.NET contains the following data providers:


System.Data.SqlClient. For connecting to
Microsoft SQL Server databases and Windows
Azure SQL databases

System.Data.OleDb. For connecting to databases that support the OLE DB API


System.Data.Odbc. For connecting to databases that support the ODBC API

System.Data.OracleClient. For connecting to Oracle databases

For other databases, you can often find third-party data providers online, or you can implement your own
data provider.

For more information about ADO.NET Data Providers, see


http://go.microsoft.com/fwlink/?LinkID=298748

The rest of this topic focuses on fundamental ADO.NET concepts and classes. Each data provider has its
own classes, which implement a set of common interfaces.
Connection

Use the ADO.NET connection object to connect to your database. There are four types of ADO.NET
connection objects (one for each provider), which implement the IDbConnection interface:

SqlConnection

OleDbConnection

OdbcConnection

OracleConnection

A connection object is responsible for connecting to the database and initiating additional operations,
such as executing commands or managing transactions. Typically, you create a connection object with a
connection string, which is a locator for your database and may contain connection-related settings, such
as authentication credentials and timeout settings.

Command

Use the ADO.NET command object to send commands to the database. Commands can either return data,
such as the result of a select query or a stored procedure, or have no data returned, such as when you use
an insert or delete statement, or a DDL (Data Definition Language) query. There are four types of ADO.NET
command objects, which implement the IDbCommand interface:

SqlCommand
OleDbCommand

OdbcCommand

OracleCommand
A command object can represent a single command or a set of commands. Query commands return a set
of results, as a DataReader object or a DataSet object, or a single value, usually the result of an
aggregated action, such as a row count, or calculation of an average.
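For example, a single aggregated value can be retrieved with the ExecuteScalar method. The following is a minimal sketch, assuming a Students table exists and the connection string points to a reachable database:

```csharp
var connection = new SqlConnection(
    "Server=myServer;Database=StudentsDatabase;Trusted_Connection=True;");
using (connection)
{
    var command = new SqlCommand("SELECT COUNT(*) FROM Students", connection);
    connection.Open();

    // ExecuteScalar returns the first column of the first row of the result set
    int studentCount = (int)command.ExecuteScalar();
    Console.WriteLine("Number of students: {0}", studentCount);
}
```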

DataReader
Use the ADO.NET data reader to dynamically iterate a result set obtained from the database. If you use a
data reader to access data, you must maintain a live connection while you read from the database.
Additionally, data readers can only move forward while iterating the data. This data-access strategy is also
referred to as the connected architecture. There are four types of ADO.NET data reader objects, which
implement the IDataReader interface:
SqlDataReader

OleDbDataReader

OdbcDataReader
OracleDataReader

The following code example demonstrates how to query a database with a data reader.

Querying a Database with a Data Reader


var connection = new SqlConnection(
    "Server=myServer;Database=StudentsDatabase;Trusted_Connection=True;");
using (connection)
{
    var command = new SqlCommand(
        "SELECT StudentID, StudentName FROM Students", connection);
    connection.Open();

    SqlDataReader reader = command.ExecuteReader();

    if (reader.HasRows)
    {
        while (reader.Read())
        {
            Console.WriteLine("{0}\t{1}",
                reader.GetInt32(0),
                reader.GetString(1));
        }
    }
    else
    {
        Console.WriteLine("No data found.");
    }
    reader.Close();
}

When using a data reader, you can access only one database record at a time, as shown in the preceding
example. If you need multiple records at once, it is your responsibility to store them as you move along to
the next record. Although this seems like a major inconvenience, data readers are very efficient in terms of
memory utilization, because they do not require the entire result set to be fetched into memory.

DataAdapter

Use the ADO.NET data adapter to load a result set obtained from a database into the memory. After
loading the entire result set and caching it in the memory, you can access any of its rows, unlike the data
reader, which only provides forward iteration. You should use this data-access strategy, referred to as the
disconnected architecture, when you do not want to maintain a live connection to the database while
processing the data.

Data adapters store the results in a tabular format. You can also change the data after it is loaded, and use
the data adapter to apply the changes back to the database. There are four types of ADO.NET data
adapter objects, which implement the IDataAdapter interface:

SqlDataAdapter

OleDbDataAdapter
OdbcDataAdapter

OracleDataAdapter

Although data adapters are convenient to use (especially in conjunction with the DataSet class, which is
explained in the next section), they impose a larger overhead than data readers because the entire result
set must be fetched into memory before you can perform any operations.
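The write-back capability mentioned above can be sketched as follows. This is a minimal example, assuming a Students table; the SqlCommandBuilder class generates the INSERT, UPDATE, and DELETE commands that the Update method requires, based on the SELECT statement:

```csharp
var connection = new SqlConnection(connectionString);
var adapter = new SqlDataAdapter(
    "SELECT StudentID, StudentName FROM Students", connection);

// Generates the INSERT, UPDATE, and DELETE commands that Update requires
var builder = new SqlCommandBuilder(adapter);

var table = new DataTable();
adapter.Fill(table);

// Modify the cached result set in memory
table.Rows[0]["StudentName"] = "Dana";

// Apply the in-memory changes back to the database
adapter.Update(table);
```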

DataSet
The DataSet class is one of the most frequently used objects in ADO.NET. You use it to retrieve tabular
data from a database. Although you can fill a DataSet object manually with data, you typically load it by
using the DataAdapter class.

The following code example demonstrates how to load data to a DataSet object by using a data adapter.

Loading Data Into a DataSet Object with the SqlDataAdapter Class


string query = "SELECT * FROM Students WHERE StudentName LIKE 'a%'";
var connection = new SqlConnection(connectionString);
var adapter = new SqlDataAdapter(query, connection);
var data = new DataSet();
adapter.Fill(data);

You can use DataSet objects to hold information from more than one table at one time, and maintain
relationships between tables inside a DataSet object.
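A multi-table DataSet with a relationship between its tables can be sketched as follows; the table and column names here are assumptions for illustration:

```csharp
var data = new DataSet();
var studentsAdapter = new SqlDataAdapter("SELECT * FROM Students", connection);
var enrollmentsAdapter = new SqlDataAdapter("SELECT * FROM Enrollments", connection);
studentsAdapter.Fill(data, "Students");
enrollmentsAdapter.Fill(data, "Enrollments");

// Maintain a parent-child relationship between the two tables
data.Relations.Add("StudentEnrollments",
    data.Tables["Students"].Columns["StudentID"],
    data.Tables["Enrollments"].Columns["StudentID"]);
```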

Question: Why would you prefer using data readers to data adapters, and vice versa?

Asynchronous Operations with ADO.NET


Database operations can take a long time to
complete. For example, using a very complex
query, or running a complex stored procedure can
take a considerable amount of time. When
executing long-running queries in desktop
applications, it is common to run them in a
background thread, to prevent the UI thread from
becoming unresponsive. However, in server-side
applications, such as web applications or web
services, having too many managed threads
waiting for a database operation to complete can
adversely affect the performance of the
application. With ADO.NET, you can use asynchronous operations to execute long-running queries
without creating and blocking a managed thread. When the database returns results, one of your threads
continues execution, and you can execute your business logic for the returned data.

To execute a command asynchronously, you use the ExecuteXXAsync methods. For example, the
ExecuteReaderAsync is the asynchronous version of the ExecuteReader method. The asynchronous
methods return a Task<T> object, where the generic type parameter T is the type returned by the
corresponding synchronous method. For example, the ExecuteReaderAsync method returns a
Task<DbDataReader> object, whereas the corresponding synchronous method, ExecuteReader, returns
a DbDataReader object.

This code example demonstrates how to use an asynchronous data reader.

Using an Asynchronous Data Reader


using (var connection = new SqlConnection(connectionString))
{
await connection.OpenAsync();
using (var command = new SqlCommand(commandString, connection))
{
using (SqlDataReader reader = await command.ExecuteReaderAsync())
{
while (await reader.ReadAsync())
{
Console.WriteLine("{0}\t{1}",
reader.GetInt32(0),
reader.GetString(1));
}
}
}
}

In addition to the ExecuteXXAsync methods, you can also use the DbConnection.OpenAsync method
to open a database connection asynchronously. You can also use the DbDataReader.ReadAsync method,
as shown in the preceding example, to advance the reader asynchronously to the next row.

Note: The code in the preceding example uses the await keyword introduced in C# 5 to
schedule a continuation when the operation completes. You can also use the
Task.ContinueWith method to provide a delegate as the continuation of the task.

For additional examples of Asynchronous Programming in ADO.NET, see

http://go.microsoft.com/fwlink/?LinkID=298749&clcid=0x409

Lesson 2
Creating an Entity Data Model
This lesson describes how to create an Entity Framework model. You will learn about the different
approaches for accessing data with Entity Framework, including Model First and Code First.

Lesson Objectives
After completing this lesson, you will be able to:

Explain the need for object-relational mappers (ORMs).

Describe the ORM development approaches: database-first, model-first, and code-first.

Create an Entity Framework data context.

Map classes to tables with data annotations.

Map class properties to database foreign keys.

Map type hierarchies with inheritance to database tables.


Map classes to tables by using the Entity Framework Fluent API.

The Need for Object Relational Mappers


When you use the ADO.NET classes to interact
with your database, your application becomes
strongly coupled to the database. With ADO.NET,
you implement most of your data access as plain
text SQL statements or call stored procedures
implemented on the database server in some SQL
dialect. This approach is fragile and error-prone.
Additionally, it is not flexible. Changing the
database schema or database type altogether
might require considerable modifications to the
code of the application.
ORMs simplify your application's interaction with data, and present an abstraction that maps application
objects to database records. By accessing and
modifying objects, you modify the corresponding database records. Entity Framework keeps track of the
changes you make to the entities and only persists the changes to the database when you call the save
method. This introduces an abstraction layer between the database schema and the code of your
application, which makes your application more flexible. Additionally, because you access objects by using
strongly-typed code and query them with LINQ, you no longer have to rely on SQL statements. This
makes your code more robust.

Entity Framework is an ORM that provides a one-stop solution to interact with data that is stored in a
database. Instead of writing stored procedures and plain text SQL statements, you work with your own
domain classes, and you do not have to parse the results from a tabular structure to an object structure.

The Entity Data Model (EDM) is how Entity Framework maps database tables and relationships to objects
and properties. Visual Studio provides a visual designer for EDMs, the Entity Designer, which you can use
to modify the mapping. This module does not focus on EDM and the Entity Designer.

Development Approaches: Database-first, Model-first, and Code-first


Entity Framework provides three general
approaches to create your data access layer (DAL)
of the application:

Model-first
Database-first

Code-first

For each of these approaches, Entity Framework


can either create a new database, according to the
data model, or use an existing database.

Model-First and Database-First


In this approach, you generate the data model by using the Entity Designer tool, which stores the model
in an .edmx file, and then you generate the entity classes and the relationships from the model.

If you do not have a database already, after you design your data model, you can generate database
scripts from the model by using the Entity Designer tool. You can then run the script in a new database to
create the tables. On the other hand, if your database already existed prior to creating the data model,
you can use the Entity Designer to reverse engineer the data model from the database tables. This
procedure is also referred to as Database-First approach.

Code-First
In this approach, you do not use an .edmx file to design your model, and do not rely on the Entity
Designer tool. The domain model is simply a set of classes with properties that you provide.
In the code-first approach, Entity Framework scans your domain classes and their properties, and tries to
map them to the database based on naming conventions. Tables are named in the plural form of your
class name and columns should have names identical to those of your class properties. For example, for a
class named Car with a property named Model, its mapped table will be named Cars and it will have a
column named Model. There are several other conventions used by Entity Framework. For example, if the
class has a property named Id, it will be assigned as the table's primary key column. If you need to
customize these mappings, you can use special data annotation attributes or the Fluent API. These
customization options will be discussed later in this module.
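The naming conventions described above can be sketched with a minimal class. Entity Framework would map this class to a table named Cars, with the Id property mapped to the primary key column:

```csharp
public class Car
{
    // Mapped to the primary key column by the "Id" naming convention
    public int Id { get; set; }

    // Mapped to a column named Model in the Cars table
    public string Model { get; set; }
}
```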

You can use the code-first approach both with new databases, and with existing ones. If you do not have
a database, the default behavior of code-first will be to create the database for you the first time you run
your application. If your database already exists, Entity Framework will connect to it and use the defined
mappings between your model classes and the existing database tables.

In this course, you will focus on the code-first approach with Entity Framework.

For more information on ADO.NET Entity Framework, see


http://go.microsoft.com/fwlink/?LinkID=298750&clcid=0x409

For more information on Entity Framework Development Workflows, see


http://go.microsoft.com/fwlink/?LinkID=298751&clcid=0x409

Creating a DbContext
In Entity Framework, a context is how you access
the database, without the need for additional
wrappers or abstractions. Context is the glue
between your domain model (classes) and the
underlying framework that connects to the
database and maps object operations to database
commands.

Entity Framework provides two types of context


objects: the ObjectContext class and the
DbContext class, which is a lightweight wrapper
around the ObjectContext class that streamlines
many of the common tasks you have to perform
with the context. This module focuses on the DbContext class, and explains how it:

Handles the database generation for Entity Framework Code First.

Provides basic create, read, update, and delete (CRUD) operations, and simplifies the code that you
must write to execute these operations.
Handles the opening and closing of database connections.
Provides a change tracking mechanism.

Note: The DbContext class was introduced as a lightweight context instead of the
ObjectContext class. The ObjectContext class was also found to be less accommodating when
used in unit testing, because it lacked a base class or interface that you can use to create your
own implementation, as a mockup of the database. The DbContext class, which was created as a
wrapper around ObjectContext, was created with unit testing in mind, and offers a set of
interfaces that you can implement to create a mockup context for unit tests. Today, it is common
to use DbContext for both code-first and model-first approaches.

Initializing the DbContext Class


The DbContext class constructor accepts the name of a database and creates a connection string by
using SQL Express or LocalDb. If both SQL Express and LocalDb are installed, the DbContext class uses
SQL Express.

Note: SQL Express is the free, lightweight version of SQL Server that can be installed on
development machines and ships with Visual Studio 2010. LocalDb is an extension of SQL Express
that offers an easier way to create multiple database instances by using SQL Express. LocalDb
ships with Visual Studio 2012.

You can use a different database (that is not SQL Express or LocalDb) by providing a connection string in
your application configuration file (app.config or web.config). If you pass the name of that connection
string to the DbContext class constructor, it will use the connection string instead of the default database
engine.

The following code demonstrates how to put a connection string in your application configuration file,
and how to use it when creating an instance of the DbContext class.

DbContext with Named Connection Strings


XML
<configuration>
<connectionStrings>
<add name="StudentsDB" providerName="System.Data.SqlServerCe.4.0"
connectionString="Data Source=Students.sdf" />
</connectionStrings>
</configuration>

C#
DbContext context = new DbContext("StudentsDB");

Deriving from DbContext


Often, you will find it convenient to create your own class that derives from the DbContext class and
provides some helper methods or properties. It is common to derive from the DbContext class and
provide a property of type DbSet<T> for each entity type that is mapped to your database schema.

The following example demonstrates how to create a custom class that derives from the DbContext class.

Deriving from the DbContext Class


public class StudentsContext : DbContext
{
public StudentsContext() : base("StudentsDB") { }
public DbSet<Student> Students { get; set; }
public DbSet<Course> Courses { get; set; }
}

When you create an instance of the StudentsContext class depicted in the preceding code example,
Entity Framework will connect to the database and map the Students and Courses properties according
to the mapping information provided by the Student and Course classes.

Note: If you do not pass a database name or connection string name to the DbContext
class constructor, it will use the fully-qualified name of your custom DbContext-derived class as
the database name. For example, if the StudentsContext class depicted in the preceding code
example were in the StudentsManagement namespace, the database name would be
StudentsManagement.StudentsContext.

Creating the Database If It Does Not Exist


When the DbContext object is initialized, it detects whether the target database already exists. If the
database does not exist, Entity Framework can create the database based on the information in your
DbContext-derived class. To create the database, you can use the CreateDatabaseIfNotExists<T>
generic class.

The following example illustrates how to create the database, if it does not already exist, by using the
CreateDatabaseIfNotExists<T> generic class and a custom DbContext-derived class.

Creating the Database If It Doesn't Exist


var initializer = new CreateDatabaseIfNotExists<StudentsContext>();
initializer.InitializeDatabase(new StudentsContext());

Updating Databases with Code First Migrations


If your database was created by DbContext and you decided later on to change something in your
domain model classes, Entity Framework will not update the database automatically. You might encounter
exceptions while running queries or saving your changes to the database.

You can use Code First Migrations to update the database schema automatically to match the changes
you made in your classes without having to recreate the database.

With Code First Migrations, you define the initial state of your classes and your database. After you
change your classes and run Code First Migrations at design time, the set of changes you
performed on your classes is translated to the required migration steps for the database, and those
steps are generated as database instructions in code. You can apply the changes to the database at
design time, before deploying the new version of the application. Alternatively, you can have the
application execute the migration code after it starts. Code First Migrations is outside the scope of this
course, but you can read more about it on MSDN:

For more information on Code First Migrations, see


http://go.microsoft.com/fwlink/?LinkID=313728
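As a rough sketch of the workflow, Code First Migrations is typically enabled and applied from the Package Manager Console in Visual Studio (the migration name here is a placeholder):

```
PM> Enable-Migrations
PM> Add-Migration AddGraduationDate
PM> Update-Database
```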

Working with DbContext


After you initialize the DbContext object, you can
use it to perform CRUD operations on the
database by using the domain model classes you
authored. You will learn how to perform CRUD
operations in Lesson 3, "Querying Data", and
Lesson 4, "Manipulating Data", and learn how to
map domain classes to database tables later in
this lesson.
The following example illustrates how to query
the database by using the DbContext class,
retrieve a set of objects, manipulate them, and
save the results back to the database.

Querying and Manipulating the Database by Using the DbContext Class


using (var context = new StudentsContext())
{
Student student = context.Students.Find("Daniel");
student.GraduationDate = DateTime.Now;
context.SaveChanges();
}

In the preceding code example, the context.Students property returns an instance of the DbSet<T>
generic class. The DbSet<T> generic class represents a set of entities that you can use to perform CRUD
operations. You can think of it as the object representation of a database table. This class provides the
Find method, which can locate an object based on the database primary key. The example concludes by
calling the SaveChanges method of the DbContext class, which propagates the changes to the database.

Best Practice: It is very important to keep the number of concurrent DbContext objects in
your application low. Each object can open a connection to the database and keep it open for
some time. Too many open connections can cause performance issues, both in your application
and in your database. When declaring a DbContext instance, use a using statement.
This ensures that the database connection is closed, and that any in-memory caches for
objects you recently queried are purged from memory.

Change Tracking
When you query the database and retrieve objects by using Entity Framework, the DbContext class can
track changes you make to these objects to facilitate saving them back into the database easily. The Entity
Framework change tracking system supports two modes of operation:

Active change tracking. Every property informs the context if it was changed.

Passive change tracking. The context attempts to detect changes before it determines which
property to save.

When you call the SaveChanges method of the DbContext class, the context checks if active change
tracking is enabled. If only passive change tracking is available, the DbContext object calls the
DetectChanges method. This method enumerates all entities retrieved by the context and compares
every property of every entity to the original value it had when it was retrieved. Any changed properties
are updated in the database.

To support active change tracking, you should mark all your properties on your domain classes (such as
the Student class in the preceding code example) with the virtual keyword. If you do so, Entity
Framework will create proxies at run time that derive from your class and track assignments to the virtual
properties of your model.
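For example, a domain class prepared for active change tracking might look like the following sketch (the class and property names are assumptions for illustration):

```csharp
// All properties are virtual, so Entity Framework can create a run-time
// proxy that derives from Student and reports property assignments
// to the context (active change tracking).
public class Student
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
    public virtual DateTime GraduationDate { get; set; }
}
```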

Mapping Classes to Tables with Data Annotations


By default, Entity Framework code-first uses
conventions to name tables and columns. In
addition, Entity Framework has conventions for
identifying which property should be used as a
primary key, and how to name foreign key
columns.
If you already have an existing database, and the
tables or columns are not named according to the
convention, you will need to manually map the
tables and columns to classes and properties. For
example, your table names might use underscore
(_) to separate words, and have a prefix of T_ for
table names, such as T_Order_Details. If you do not yet have a database, but the database administrators
have their own convention for naming tables and columns, you will need to follow their convention, and
manually set the names for the new tables and columns that will be generated by Entity Framework.

If you need to map classes and properties manually to database schema objects such as tables, columns,
and keys, you can do so by using data annotation attributes. You can also use these attributes to specify
validation rules for your domain classes. Validation rules are outside the scope of this module. To use data
annotation attributes, add a reference to the System.ComponentModel.DataAnnotations assembly.

Note: With the new Entity Framework 6, you will be able to create custom conventions
instead of manually applying attributes to each class and property you want to map. This feature
is very useful for companies where the database administrators (DBAs) have their own set of
naming conventions. This module was written prior to the release of Entity Framework 6, and
therefore this topic is not covered in the module.

To map a class to a database table, add the [Table] attribute to the class declaration and specify the table
name. For example, [Table("Products")] maps the class to the Products table. To map a property to a
database column, add the [Column] attribute to the property declaration. For example,
[Column("ProductName")] maps the property to the ProductName column.

Note: By default, Entity Framework will use the plural form of the class name when
mapping a class to a database table. For example, the class Product will be mapped to a table
named Products, and properties will be mapped to database columns of the same name. You
should use the [Table] and [Column] attributes only if you want to customize these defaults.

The following example shows how to map a class to a database table by using code-first data annotations.

Mapping a Table by Using Data Annotations


using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

[Table("GlobalProducts")]
public class Product
{
    public int Id { get; set; }

    [Column("ProductName")]
    public string Name { get; set; }
}

In the preceding code example, the Product class is mapped to a database table named GlobalProducts,
the Id property is mapped implicitly to a database column named Id, and the Name property is mapped
to a database column named ProductName.
The Id property in the preceding code example will be set as the primary key of the table, because the
convention for primary keys is that the property is either named Id (or ID; casing is ignored) or named
after the class followed by Id, for example, ProductID.
When you map a property to a primary key column, by default, Entity Framework will set the value of the
column to be generated by the database automatically. For integer columns, the value will be auto-
incremented; for columns of type GUID, the database will generate a new GUID for each row. If you do
not want to use generated primary keys, and instead you want to provide the primary key value yourself
when creating the entity object, configure the primary key property with the
[DatabaseGenerated(DatabaseGeneratedOption.None)] attribute. To use the DatabaseGenerated
attribute, add a using directive to the System.ComponentModel.DataAnnotations.Schema
namespace.
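The following sketch (the class name is assumed for illustration) shows a primary key that is supplied by the application rather than generated by the database:

```csharp
using System.ComponentModel.DataAnnotations.Schema;

public class Product
{
    // The application assigns the key value when creating the entity;
    // the database will not generate it automatically.
    [DatabaseGenerated(DatabaseGeneratedOption.None)]
    public int Id { get; set; }

    public string Name { get; set; }
}
```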

Mapping Properties to Foreign Keys


Tables in a relational database cross-reference
each other by using foreign keys. Logically, foreign
keys represent one-to-many, many-to-one, or
many-to-many relationships. For example, a table
representing orders may have a foreign key
referencing a table of products. This relationship is
many-to-many, as each order may reference a
number of different products, and each product
may feature in a number of different orders.

As explained in the previous topic, you can map class properties to table columns by using data
annotations. You use data annotations to add foreign keys and to map object relationships between
instances of your classes to the database relationships. Specifically, you define two properties to express
the foreign key relationship: a foreign key property, whose type matches the database type of the foreign
key column, and an entity property, whose type is a class from your domain model.

You can map a foreign key relationship in two ways by using data annotations:

From the foreign key property to the entity property.

From the entity property to the foreign key property.

The following code example shows how to set a foreign key of a nested object to a property of your class
by using two approaches.

Mapping Foreign Keys by Using Data Annotations


// From the foreign key property to the entity property
[ForeignKey("Course")]
public Guid CourseId { get; set; }
public Course Course { get; set; }

// From the entity property to the foreign key property
public Guid CourseId { get; set; }
[ForeignKey("CourseId")]
public Course Course { get; set; }

Note: The preceding code example illustrates a to-one relationship (either one-to-one or
many-to-one) from the enclosing entity to the Course entity. To specify a to-many relationship
(either one-to-many, or many-to-many), change the type of the entity property to
ICollection<T> or IEnumerable<T>.
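For example, a to-many relationship such as the one described in the note might be declared as follows (the class names are assumptions for illustration):

```csharp
// Hypothetical one-to-many relationship: a Course references many Students
public class Course
{
    public Guid Id { get; set; }
    public string Name { get; set; }

    // A to-many entity property; declared as virtual ICollection<T>
    // so that lazy loading also works for this relationship
    public virtual ICollection<Student> Students { get; set; }
}
```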

By having both the foreign key property and entity property for each foreign key relationship, you gain
flexibility. If necessary, you can ask Entity Framework to fetch the referenced entity (as shown in the
Course class in the preceding code example) along with the enclosing entity, or you can refrain from
fetching it and rely only on its key, for performance reasons.

Note: If you do not use data annotations, and instead rely on the Code First convention for
foreign keys, you will need to make sure that the foreign key property is named after the entity
property, followed by Id (casing is ignored). In the preceding example, the entity property is
named Course and the foreign key property is named CourseId; therefore, the data annotation
attributes are not required.

For more information on Code First Conventions, see


http://go.microsoft.com/fwlink/?LinkID=313729

Demonstration: Creating Classes and Database By Using Code-first


In this demonstration, you will define a set of domain classes and use Entity Framework Code First to
generate a database based on those classes.

Demonstration Steps
1. Open Visual Studio 2012, create a new Console Application project, and name it MyFirstEF.

2. Add the Entity Framework NuGet package to the project.



3. Add a new class to the project, name it Product, and make it public. The class should include:

o A public property Id of type int.

o A public property Name of type string.

4. Add a new class to the project, name it Store, and make it public. The class should include:

o A public property Id of type int.

o A public property Name of type string.

o A public property Products of type IEnumerable<Product>.

5. Add a new class to the project, name the class MyDbContext, and make it public. The class should
inherit from DbContext, and include the following:
o A public property Products of type DbSet<Product>.

o A public property Stores of type DbSet<Store>.

6. In the Main method, use the CreateDatabaseIfNotExists generic class to create a new database
initializer, and use it to initialize the database.
7. Remove the <entityFramework> element and its content from the App.config file.

Note: This demonstration requires you to use SQL Server Express and not LocalDb, because with
LocalDb the newly created database will not show in SQL Server Management Studio (LocalDb
detaches the application's database after the application stops). The connection factory configured in
the <entityFramework> element uses LocalDb, so by deleting the <entityFramework> element, the
database will be created in the local SQL Server Express instance.

8. Run the application. The application creates a new database; this might take a couple of seconds.

9. Open SQL Server Management Studio, connect to the local SQL Server Express instance, and make
sure you see the newly created database named MyFirstEF.MyDbContext.
10. Explore the database structure and observe the newly created Product and Store tables, and related
columns.

Note: Database tables are usually named in the plural form, which is why Entity Framework
changed the names of the generated tables from Store and Product to Stores and Products.
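The completed demonstration code might look like the following sketch (it assumes the class and property names given in the steps above):

```csharp
using System.Collections.Generic;
using System.Data.Entity;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class Store
{
    public int Id { get; set; }
    public string Name { get; set; }
    public IEnumerable<Product> Products { get; set; }
}

public class MyDbContext : DbContext
{
    public DbSet<Product> Products { get; set; }
    public DbSet<Store> Stores { get; set; }
}

class Program
{
    static void Main()
    {
        // Create the database on first use if it does not already exist
        Database.SetInitializer(new CreateDatabaseIfNotExists<MyDbContext>());
        using (var context = new MyDbContext())
        {
            context.Database.Initialize(force: false);
        }
    }
}
```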

Question: How does the CreateDatabaseIfNotExists<T> generic class know which database schema to
create?

Mapping Type Inheritance to Tables


When you work in an object-oriented environment, you can use inheritance to reflect real-world
relationships. When you work with an ORM, the inheritance relationships should hold when you map
objects to database tables.

Entity Framework provides three approaches to represent inheritance:

TPT (Table per type)

TPH (Table per hierarchy)

TPC (Table per concrete type)


In the examples in this topic, you will see how to implement inheritance for the base class Person and two
inheriting classes: Student and Teacher.

TPT

In the TPT approach, a separate table represents each class. The derived class table has a foreign key
property that associates it with the base class table. The derived class table contains columns only for
properties declared in that class.

This image describes the TPT representation in the database.

FIGURE 2.1: ENTITY FRAMEWORK TABLE PER TYPE (TPT)


As you can see in the diagram, a table represents the Person class, and an additional table represents
each inherited type; each derived table adds more columns and contains a foreign key to the parent table.

To create such an object-relational mapping, use data annotations to give each class a different table
name.

TPH
In the TPH approach, a single table represents the entire inheritance hierarchy. All the inherited types are
represented in the same table. When you map the table to domain classes (such as the Teacher and
Student classes), you only map the relevant properties for each class. This means that the database
representation of a Teacher object will have a null value for the Grade column, which only the Student
class has.

This image describes the TPH representation in the database.

FIGURE 2.2: ENTITY FRAMEWORK TABLE PER HIERARCHY (TPH)
As you can see in the diagram, the database table holds all the properties without differentiating between
inherited types. All types are represented in a single table, and the differences are configured in the
mapping definition.

To create such an object-relational mapping, use data annotations to give all classes the same table name.
You can also remove the [Table] attribute from the classes, because this is the default behavior of Code
First for handling inheritance mapping.

Note: When creating the Person table, Entity Framework Code First will add a
discriminator column to the table and use the type names (Person, Student, and Teacher) to
indicate which object type is stored in each row. You need not be aware of the discriminator
column or use it directly.
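A TPH hierarchy can therefore be declared without any [Table] attributes, as in the following sketch (using the class names from this topic):

```csharp
// No [Table] attributes: all three types are stored in a single table
// with a discriminator column, which is the Code First default (TPH).
public abstract class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class Student : Person
{
    public int Grade { get; set; }
}

public class Teacher : Person
{
    public decimal Salary { get; set; }
}
```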

TPC

In the TPC approach, each concrete (non-abstract) class is represented in the database by its own
table. As a result, the database schema is not normalized, but mapping the tables to classes is much
easier.

This image describes the TPC representation in the database.



FIGURE 2.3: ENTITY FRAMEWORK TABLE PER CONCRETE TYPE (TPC)


To create such an object-relational mapping, make sure that your context class does not have a
DbSet<T> property for the abstract type, and configure each derived class to include the inherited
properties in its own table (with the Fluent API, described later in this lesson, this is done by calling
MapInheritedProperties). You can also use data annotations to set the table name for each of the
derived classes.

This example shows how to implement inheritance by using the TPT approach. The code defines three
classes named Person, Student, and Teacher. Student and Teacher inherit from Person, and every class
is mapped to a different database table.

TPT Example
public class MyDbContext : DbContext
{
    public DbSet<Person> Persons { get; set; }
    public DbSet<Student> Students { get; set; }
    public DbSet<Teacher> Teachers { get; set; }
}

[Table("Person")]
public abstract class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime DateOfBirth { get; set; }
}

[Table("Student")]
public class Student : Person
{
    public int Grade { get; set; }
}

[Table("Teacher")]
public class Teacher : Person
{
    public decimal Salary { get; set; }
}

Mapping Classes to Tables with the Fluent API


The Fluent API is a code-based declaration for
database mapping. It provides an efficient way to
map your whole database in one single file. If you
use the Fluent API, your domain classes are not
cluttered with data annotation attributes. In fact,
your domain classes can be declared in an
assembly that does not reference the Entity
Framework at all.

There are two ways to use the Fluent API:


Override the OnModelCreating method of
your DbContext-derived class.

Derive from the EntityTypeConfiguration class for each mapped class.


You can use the Fluent API by overriding the OnModelCreating method of your DbContext class. The
OnModelCreating method gives you access to a DbModelBuilder object, which you use to declare the
association between your domain classes and the database tables, columns, and keys.

The following code example shows how to map a class to a database table, then map the key field of the
class, and then map a property of the class to a database column.

Fluent API with the OnModelCreating Method


protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Product>().ToTable("GlobalProducts");
    modelBuilder.Entity<Product>().HasKey(c => c.Id);
    modelBuilder.Entity<Product>().Property(c => c.Name).HasColumnName("ProductName");
}

In the preceding code example, the DbModelBuilder object is used to map the Product class to the
GlobalProducts table to declare that its Id property is the primary key and to associate the Name
property with the ProductName database column. This achieves the same result as the data annotations
example illustrated in Topic 4, "Mapping Classes to Tables with Data Annotations".

You can also use the Fluent API by using a class that derives from the EntityTypeConfiguration class for
each domain class you have. You still need to associate the configuration classes with your DbContext-
derived class by using the OnModelCreating method.

Best Practice: You should consider creating a separate class that derives from the
EntityTypeConfiguration class if your model mapping is complex. By separating the mapping
into several types, you make your mapping layer more readable, and avoid littering the
OnModelCreating method with hundreds of lines of mapping code.

The following code example illustrates how to use the Fluent API with a class derived from the
EntityTypeConfiguration class.

Fluent API with the EntityTypeConfiguration Class


// In the DbContext-derived class:
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Configurations.Add(new ProductMapping());
}

public class ProductMapping : EntityTypeConfiguration<Product>
{
    public ProductMapping()
    {
        ToTable("GlobalProducts");
        HasKey(t => t.Id);
        Property(t => t.Id).HasColumnName("Id");
        Property(t => t.Name).HasColumnName("ProductName");
    }
}

The ProductMapping class in the preceding example derives from the EntityTypeConfiguration generic
class, and calls numerous methods in its constructor to associate the Product class with the
GlobalProducts table. This again achieves the same result as using data annotations.
For additional examples of Configuring/Mapping Properties and Types with the Fluent API,
see
http://go.microsoft.com/fwlink/?LinkID=313730

Question: Why would you use the Fluent API as opposed to data annotations?

Lesson 3
Querying Data
So far, you learned how to map domain classes in your application to database tables. This lesson explains
how to query data from a database by using SQL and Entity Framework.

Lesson Objectives
After completing this lesson, you will be able to:

Query data by using LINQ to Entities.


Query data by using Entity SQL.

Query data by using direct SQL statements.

Configure lazy and eager entity loading.

Query the Database By Using LINQ to Entities


Language Integrated Query (LINQ) is used to
query collections and objects in the .NET
Framework. LINQ to Entities provides a LINQ
wrapper to query your database and retrieve data
by using your domain classes. LINQ to Entities
resembles standard LINQ to Objects, but has a
few differences:
LINQ to Entities requires a DbContext object to communicate with a database.

LINQ to Objects queries return an object implementing the IEnumerable<T> interface, whereas
LINQ to Entities queries return an object implementing the IQueryable<T> interface. The
IQueryable<T> interface extends IEnumerable<T> by containing an expression tree object, which
represents the query you wrote. Entity Framework uses the expression tree to translate your LINQ
query to an SQL query.

LINQ to Objects queries execute in memory on a collection of items, whereas LINQ to Entities queries
are translated to SQL statements and executed in the database.

Note: Every LINQ to Entities query is translated to SQL statements and executed at the
database level as a plain SQL statement by using ADO.NET. This is extremely important for
performance reasons. Executing a LINQ to Objects query on a table with millions of records
requires fetching the entire table into memory, whereas executing a LINQ to Entities query on the
same table can be extremely fast because the query executes on the database server.

This example shows how to retrieve a list of students from the database and filter it by the name of the
student. The context variable is a reference to a custom DbContext-derived class instance, and its
Students property returns a reference to a DbSet<Student> object.

Querying Data by Using LINQ to Entities


var studentsQuery = from s in context.Students
                    where s.Name.ToLower().Contains("a")
                    select s;

There are some limitations as to which operators and methods you can use in your LINQ to Entities
queries. Because every LINQ to Entities query is translated to SQL and executed on the database server,
some LINQ features and .NET Framework methods are not supported by Entity Framework. For example,
you cannot use the String.IsNullOrWhiteSpace method or the Last LINQ query operator.

Best Practice: As with LINQ to Objects, queries written with LINQ to Entities are not
executed until they are enumerated, for example, by using foreach, or by calling the ToList or
FirstOrDefault extension methods. If you enumerate a LINQ to Entities query for the second
time, it will execute again in the database. For example, if you invoke the Count method of the
query several times, each invocation will execute the SQL statement again in the database.
Therefore, as a best practice, if you need to use the result of the query more than once, you
should store the result in a local variable.
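The best practice above can be sketched as follows:

```csharp
// Enumerating the query executes the SQL; ToList stores the result so
// later operations run in memory instead of re-querying the database.
var students = (from s in context.Students
                where s.Name.ToLower().Contains("a")
                select s).ToList();   // the SQL statement executes once, here

int count = students.Count;           // in-memory, no extra database round trip
foreach (var student in students)     // no extra database round trip
{
    // process the student...
}
```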

For more information on LINQ to Entities, see


http://go.microsoft.com/fwlink/?LinkID=298752&clcid=0x409

Demonstration: Using LINQ to Entities


In the following demonstration, you will create a DbContext object and use it to query a database by
using LINQ to Entities.

Demonstration Steps
1. Using Visual Studio, open the EF_CodeFirst solution located in
D:\Allfiles\Mod02\Democode\UsingLINQtoEntities\Begin folder.

2. In Program.cs, within the Main method, initialize a new SchoolContext object to access data in the
database.

Note: The context uses unmanaged resources, such as a database connection, so do not
forget to dispose the context when you finish using it.

3. Using LINQ to Entities, select all courses and save the results in a variable.
4. Print the courses list and the students in each course to the console window.

5. Run the project and observe the console windows for the course and student lists. Use the IntelliTrace
window in Visual Studio 2012 to view the list of queries executed by Entity Framework.
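The query and printing steps might look like the following sketch (the SchoolContext class and its Courses and Students properties are assumed from the demonstration solution):

```csharp
// Dispose the context when done, so the database connection is released
using (var context = new SchoolContext())
{
    // Select all courses; ToList executes the query once
    var courses = (from c in context.Courses
                   select c).ToList();

    foreach (var course in courses)
    {
        Console.WriteLine(course.Name);
        foreach (var student in course.Students)
        {
            Console.WriteLine("    {0}", student.Name);
        }
    }
}
```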

Note: IntelliTrace will be covered in Module 10, "Monitoring and Diagnostics" in Course
20487.

Question: What are the SQL statements executed by Entity Framework in the preceding
demonstration?

Query the Database By Using Entity SQL


Entity SQL resembles Structured Query Language
(SQL), but instead of using table or view names
inside the statement, you use domain class names.
Entity SQL statements can return entities and
scalar values, and should be used when the
corresponding LINQ to Entities statement
becomes too complex. Additionally, you can use
Entity SQL to construct a query statement
dynamically as a string, instead of committing to
its structure in code.

To issue an Entity SQL query, you need to obtain an ObjectContext instance from the DbContext
object (or derived class thereof). Then, you use the CreateQuery<T> generic method of the
ObjectContext class to execute the Entity SQL query and retrieve objects from the database.
The following example shows how to use Entity SQL to query a database.
The following example shows how to use Entity SQL to query a database.

Querying the Database with Entity SQL


var context = new StoreContext(); // derived from DbContext
var objectContext = (context as IObjectContextAdapter).ObjectContext;
string eSql = "SELECT VALUE prod FROM StoreContext.Product AS prod ORDER BY prod.productName";
var query = objectContext.CreateQuery<Product>(eSql);
List<Product> products = query.ToList();

In the preceding code example, calling the CreateQuery<T> generic method does not execute the query
yet; this method only prepares the query for execution. The query is executed and objects are returned
only when you enumerate the query variable by using the foreach statement or the ToList method.

For more information on Entity SQL, see


http://go.microsoft.com/fwlink/?LinkID=298753&clcid=0x409
Best Practice: You would not typically use Entity SQL if a LINQ to Entities query can work
for you. LINQ to Entities has the advantage of being type-safe and available at compile time.
However, at times you need the flexibility of complex queries constructed at run time, which is
when you would use Entity SQL.

Query the Database by Using Direct SQL Statements


SQL statements run at the database level. Your application can issue SQL statements or execute a
stored procedure in the database. You should issue SQL statements directly only if you cannot
express your query by using LINQ to Entities or Entity SQL. To run an SQL query statement from
Entity Framework, use the SqlQuery<T> generic method exposed by the Database property of the
DbContext class. Note that the SQL query should use the database table names, and not the names
of your domain classes. The difference between executing SQL statements directly with ADO.NET
and executing them with Entity Framework is that with Entity Framework, the result is automatically
translated to the domain classes, instead of returning a DbDataReader object.

The following code example demonstrates how to execute an SQL query statement with Entity Framework
to retrieve objects from the database.

Retrieving Objects by Using Direct SQL


string sql = "select * from Products where Price > 5000";
var products = context.Database.SqlQuery<Product>(sql).ToList();

Finally, you can also execute SQL statements that do not return a result set, such as insert, update, and
delete statements. For example, you could execute an insert statement to insert a new entity into the
database, or execute a stored procedure. To execute such an SQL statement, use the
ExecuteSqlCommand method of the Database property of the DbContext class. The method returns
the number of rows affected by the statement.

The following example demonstrates how to execute an SQL update statement by using the
ExecuteSqlCommand method.

Executing an SQL Statement by Using ExecuteSqlCommand


using (var context = new StudentsContext()) // derived from DbContext
{
    int rowsAffected = context.Database.ExecuteSqlCommand(
        "update Students set GraduationYear = 2016 where GraduationYear = 2015");
    // ExecuteSqlCommand returns the number of records updated, in case this information is necessary
}

For more information on Transact-SQL Reference, see


http://go.microsoft.com/fwlink/?LinkID=298754&clcid=0x409

Question: Why would you use Entity SQL or direct SQL instead of LINQ to Entities?

Demonstration: Running Stored Procedures with Entity Framework


In this demonstration, you will use Entity Framework to execute a stored procedure and retrieve
structured data from its execution.

Demonstration Steps
1. In the D:\Allfiles\Mod02\Democode\StoredProcedure\Begin folder, open the EF_CodeFirst
solution using Visual Studio.
2. In Program.cs, navigate to the Main method, and notice that a SchoolContext instance has been
created.

3. Observe the query being executed and assigned to the averageGradeInCourse local variable for
calculating the WCF course average grade, and printed to the console.
4. View the ExecuteSqlCommand statement, and observe that it executes a stored procedure for
updating the grade for the students in a course and providing it the course name and required grade
change as parameters.

5. Run the client application and notice that the average grade in the WCF course is recalculated and
printed to the console, and the average grade is changed by 10 points.
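The stored procedure call described in step 4 might look like the following sketch (the procedure name and parameter names are assumptions for illustration):

```csharp
using System.Data.SqlClient;
// ...

// Hypothetical stored procedure that shifts all grades in a course
// by a given amount; ExecuteSqlCommand returns the rows affected
int rows = context.Database.ExecuteSqlCommand(
    "exec UpdateCourseGrades @CourseName, @GradeChange",
    new SqlParameter("@CourseName", "WCF"),
    new SqlParameter("@GradeChange", 10));
```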

Question: When would you invoke stored procedures from your application instead of
performing object manipulations by using Entity Framework?

Load Entities by Using Lazy and Eager Loading


When you work with large databases, some
queries can take longer than others. Lazy loading
and eager loading refer to the number of round
trips Entity Framework makes to load the data
from the database.

When using lazy loading, only the top level of the data is returned, and the nested levels are retrieved on
demand. For example, if a Student entity has a Courses entity property referring to a list of courses in
which the student is enrolled, only the top-level Student entity information will be fetched, and the
Courses property will not be fetched. With lazy loading, each round trip to the database returns a
portion of the data, making queries faster and minimizing the amount of memory required to store the
results. Furthermore, the queries tend to be simpler because they do not require joins across multiple
database tables.
When using eager loading, Entity Framework returns the entire data set in one big round trip to the
database. Eager loading might take longer than multiple small round trips that return only part of the
result, depending on the complexity of the large query. The default behavior in Entity Framework is lazy
loading.

When issuing a query, call the Include method to specify which entities should be eagerly loaded with the
containing entity. This is the most flexible way to instruct Entity Framework when you want to use eager
loading, and is recommended.

The following code example demonstrates how to use eager loading with the Include method to retrieve
the property contents of the Courses entity along with the Student entity.

Eager Loading with Include


using (var context = new StudentsContext())
{
    var studentsWithCourses = context.Students.Include(s => s.Courses).ToList();
    // This example could also use Include("Courses") instead of the lambda expression
}

To enable lazy loading of your related entities, you need to declare your relationship properties, which
contain references to other entities, as virtual. If you reference a list of related entities, your virtual
property must be of type ICollection<T> or a derivative of it, such as IList<T>. You cannot use lazy
loading with IEnumerable<T>. By setting the properties to virtual, you ensure that Entity Framework
derives a new proxy class from the original class and adds the lazy load logic to the property.

If you have non-virtual properties, you can explicitly load them at run time by using the Load method.

The following code example shows how to load a non-virtual referenced entity explicitly.

Explicitly Loading a Non-Virtual Referenced Entity


// Load a referenced collection of entities
context.Entry(student).Collection(s => s.Courses).Load();
// Load a referenced entity
context.Entry(student).Reference(s => s.Department).Load();

In the preceding example, the Entry method returns a DbEntityEntry object, which you can use to access
information about the entity type, such as its original values and its state such as unmodified, deleted, and
so on. The DbEntityEntry provides information about the referenced entities and collections through
which you can explicitly load each relation. Similar to the Include method, the Collection and Reference
methods can also use a string parameter instead of the lambda expression.

If you have defined your reference and collection properties as virtual, and you want to temporarily turn
off lazy loading for an entire context, set the LazyLoadingEnabled property of the context's
Configuration property to false.

The following code example shows how to turn off lazy loading for the entire context. The context
variable refers to a DbContext object.

Turning Off Lazy Loading


context.Configuration.LazyLoadingEnabled = false;

For more information about Loading Related Data, see


http://go.microsoft.com/fwlink/?LinkID=313731
2-26 Querying and Manipulating Data Using Entity Framework

Lesson 4
Manipulating Data
So far, you have learned how to query data from a database by using LINQ to Entities, Entity SQL, and
even direct SQL statements. However, querying data is only half the story. This lesson explains how to
manipulate data by using Entity Framework.

Lesson Objectives
After you complete this lesson, you will be able to:

Enable change tracking for entities.


Insert an entity into a database.

Delete an entity from a database.

Update an entity in a database.


Use transactions with Entity Framework.

Change Tracking with Entity Framework


Entity Framework can track domain objects that
you retrieve from the database. Entity Framework
uses change tracking so that when you call the
SaveChanges method on the DbContext object,
it can synchronize your updates with the
database. You can check the status of any object
(such as whether it was modified), inspect the
history of your changes, and undo changes if you
see fit.

The DbContext object records the state of the


entity as it was when you retrieved it from the
database. The domain object itself contains the
current state of the entity. The DbContext can then determine whether the entity is:

Added. The entity was added to the context and did not exist in the database.

Modified. The entity was changed since it was retrieved from the database.

Unchanged. The entity was not changed since it was retrieved from the database.

Detached. The entity was detached from the context, so changes to it will not be reflected in the
database.

Deleted. The entity was deleted since it was retrieved from the database.

You can inspect the state of all the entities that have been changed in some way by using the
DbContext.ChangeTracker.Entries method. This could be useful for logging purposes or for reverting
certain changes in an overridden implementation of the SaveChanges method of the DbContext class.

The following code example demonstrates how you can enumerate all the objects that have been added,
modified, or deleted in an overriding implementation of the SaveChanges method.

Enumerating Changes to Objects


public class StudentsContext : DbContext
{
    public override int SaveChanges()
    {
        var changes = this.ChangeTracker.Entries().Where(entry => entry.State != EntityState.Unchanged);
        foreach (var change in changes)
        {
            var entity = change.Entity;
            //Inspect the object, the change, and possibly introduce additional changes
        }
        return base.SaveChanges();
    }
}

Furthermore, from an instance of the DbContext class you can retrieve and modify state information for
any entity that has been loaded into the context by using the Entry method. One use of this would be to
mark an entity as deleted; another use would be to replace the values of an entity with new values
provided externally to your API.
The following code example illustrates how you can modify state information for an entity and how you
can copy the values from one entity to another.

Modifying Entity State


using (var context = new StudentsContext())
{
//Delete the student's school:
Student student = context.Students.Find("Dave Barnett");
context.Entry(student.School).State = EntityState.Deleted;

//Copy student values over from another object:


context.Entry(student).CurrentValues.SetValues(otherStudent);
}

Finally, you can turn change tracking on and off globally by using the AutoDetectChangesEnabled
property of the Configuration property of the DbContext class.

The following code example shows how you can turn change tracking on and off.

Turning Change Tracking Off


context.Configuration.AutoDetectChangesEnabled = false;

If you use the preceding code to turn off automatic change tracking, you will have to call the
DbContext.ChangeTracker.DetectChanges method manually before you save any changes.
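A minimal sketch of this pattern, assuming the StudentsContext class shown earlier in this lesson:

```csharp
using (var context = new StudentsContext())
{
    context.Configuration.AutoDetectChangesEnabled = false;

    Student student = context.Students.Find("Dave Barnett");
    student.Name = "David Barnett";

    // With automatic detection turned off, the context does not notice
    // the modification until DetectChanges is called explicitly
    context.ChangeTracker.DetectChanges();
    context.SaveChanges();
}
```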

Note: Automatic change tracking is enabled by default. Entity Framework can create
change-tracking proxies only for classes whose properties are all marked as virtual; for other
classes, it detects changes by comparing the current property values against a snapshot taken
when the entity was loaded.

Inserting New Entities


To add an entity to the database, you use the
DbContext object. When you use the DbContext
object to add a new entity to a database, the
context marks the change tracking status of the
entity as Added. When you call the SaveChanges
method, the DbContext object adds the entity to
the database. No changes are applied to the data
until you call the SaveChanges method.

The following code example shows how to add a


new entity to a database by using the DbContext
object. The Persons property of the
MyDbContext class is of type DbSet<Person>.

Adding an Entity
using (var context = new MyDbContext())
{
context.Persons.Add(
new Person
{
DateOfBirth = new DateTime(1978, 7, 11),
Name = "John Doe"
});
context.SaveChanges();
}

Deleting Entities
To delete an entity from the database, you use the
DbContext object. When you delete an entity
from a database, the context marks the change
tracking status of the entity as Deleted. When you
call the SaveChanges method, the DbContext
object deletes the entity from the database.

The following code example shows how to delete


an entity from a database by using the
DbContext object. The Products property of the
ProductsContext class is of type
DbSet<Product>.

Deleting an Entity
using (var ctx = new ProductsContext())
{
var product = (from m in ctx.Products where m.Name == "Orange Juice" select
m).Single();
ctx.Products.Remove(product);
ctx.SaveChanges();
}

If you already know the primary key of the entity that you want to delete, you do not need to retrieve it
from the database to delete it. You can manually add an entity with the desired primary key to the
context, use the Entry method of the DbContext to access the state of the entity, and then mark it as
deleted.

The following code example shows how to delete an entity from a database without first retrieving it from
the database.

Deleting an Entity without First Retrieving it From the Database


using (var ctx = new ProductsContext())
{
var product = new Product {Id = 72};
ctx.Products.Add(product);
ctx.Entry(product).State = EntityState.Deleted;
ctx.SaveChanges();
}

Updating Entities
To update an entity in the database, you can use
the DbContext object and make changes in an
incremental fashion. When you update an entity,
the context marks the change tracking status of
the entity as Modified. When you call the
SaveChanges method, the DbContext object
updates the entity in the database. The exact
procedure of how these incremental updates are
performed depends on the change tracking status.

If active change tracking is enabled, the


DbContext object uses information maintained
internally to determine which columns must be
updated. If only passive change tracking is available, the DbContext object invokes the DetectChanges
method, which compares the entity to the snapshot that was taken when it was retrieved from the
database. In both cases, the DbContext object executes an SQL statement that updates only the columns
that were changed.

Note: For more information about change tracking, see Lesson 2, "Creating an Entity Data
Model", Topic 3, "Context and Entities" in Course 20487.

The following code example shows how to retrieve and update an entity by using the DbContext object.

Updating an Entity
using (var context = new MyDbContext())
{
var student = (from s in context.Students where s.Name.ToLower().Contains("john")
select s).Single();
student.Name = "Jonathan";
context.SaveChanges();
}

You can update an entity that is not tracked by the context, such as an entity you received as a method
parameter, by attaching the entity to the context, and then manually setting the entity's state to
Modified.

Note: Updating a detached entity is a common scenario when working with services,
because the updated entity is sent to the service and not loaded from the context.

The following code example shows how to update an entity that is not tracked by the context.

Updating a Non-Tracked Entity


using (var context = new MyDbContext())
{
context.Entry(updatedStudent).State = EntityState.Modified;
context.SaveChanges();
}

The preceding code uses the Entry method to attach the updatedStudent object to the context, and
then sets the entity's state to Modified. When the context tries to save the attached entity, it cannot
detect which properties were changed, because it does not know the original values of the properties.
Therefore, in this scenario, the SQL statement will update all the columns, even those that have not
changed.

If you are not sure whether the entity you want to update is already tracked by the context you are
using, such as when you receive the context as a parameter, do not use the Entry method. If your context
already tracks an instance of an entity, and you call the Entry method with a different instance of the
same entity, an exception is thrown because the context cannot track two instances of the same
entity. If you do not know whether an entity is tracked, you have two options:

1. Use the Find method to load the entity to the context, and then use the
DbEntityEntry<T>.CurrentValues.SetValues method to update the loaded entity with the values of
the updated entity instance. The Find method will first search the context for the entity and if not
found, will load the entity from the database.
2. Search only the entities already loaded by the context for the entity to update, by using the Local
property of the DbSet. If it is found, use the DbEntityEntry<T>.CurrentValues.SetValues method
to update the entity according to the values of the updated entity. If it is not found, use the Entry
method to attach the entity to the context, and then set its state to Modified. By using the Local
property, you can avoid accessing the database if the entity is not found in the context.

This example shows the two ways to update a detached entity, if you do not know whether the context
already tracks the entity or not.

Updating a Detached Entity with an Existing Context


// Option 1
var originalStudent = context.Students.Find(updatedStudent.StudentId);
context.Entry(originalStudent).CurrentValues.SetValues(updatedStudent);
context.SaveChanges();

// Option 2
var existingStudent = context.Students.Local.FirstOrDefault(r => r.StudentId ==
updatedStudent.StudentId);
if (existingStudent == null)
{
context.Entry(updatedStudent).State = EntityState.Modified;
}
else
{
context.Entry(existingStudent).CurrentValues.SetValues(updatedStudent);
}
context.SaveChanges();

For more information about Add/Attach and Entity States, see


http://go.microsoft.com/fwlink/?LinkID=313732

Demonstration: CRUD Operations in Entity Framework


Entity Framework provides a simple API for create, read, update, and delete (CRUD) operations, which
helps keep data access code short and readable. In this demonstration, you will see how to perform
CRUD operations with Entity Framework.

Demonstration Steps
1. In the D:\Allfiles\Mod02\Democode\CRUD\Begin folder, open the EF_CodeFirst solution using
Visual Studio.

2. To access data in the database, in Program.cs, within the Main method, initialize a new
SchoolContext object. Add the code after the call to the InitializeDatabase method.
3. Using the context object select a course named WCF.

4. Create two new students and add them to the WCF course.
5. Give the teacher of the WCF course a $1000 salary raise.
6. Query a student named Student_1 from the WCF course and remove it from the course students.

7. Save the changes in the context and write the WCFCourse object to the console window.
8. Run the console application and observe the changes to the salary of the teacher and the list of
students: the salary is now 101000, there are two new students, and student 1 is missing from the list.
Use the IntelliTrace window in Visual Studio 2012 to view the list of SQL statements executed by
Entity Framework.
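The finished demonstration code is provided in the course files; the following sketch outlines what the completed Main method body might look like. The SchoolContext, Courses, Students, Teacher, and Salary member names are assumptions based on the step descriptions above:

```csharp
using (var context = new SchoolContext())
{
    // Step 3: query the course named WCF
    var wcfCourse = context.Courses.Single(c => c.Name == "WCF");

    // Step 4: create two new students and add them to the course
    wcfCourse.Students.Add(new Student { Name = "New Student 1" });
    wcfCourse.Students.Add(new Student { Name = "New Student 2" });

    // Step 5: give the teacher a $1000 salary raise
    wcfCourse.Teacher.Salary += 1000;

    // Step 6: remove Student_1 from the course students
    var student1 = wcfCourse.Students.Single(s => s.Name == "Student_1");
    wcfCourse.Students.Remove(student1);

    // Step 7: persist all the changes and print the course
    context.SaveChanges();
    Console.WriteLine(wcfCourse);
}
```

Note that a single SaveChanges call persists the insertions, the update, and the relationship removal together.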
Question: How do you create or modify a relationship (based on a foreign key) by using
Entity Framework?

Entity Framework Transactions


When you perform a number of operations on a
database, such as inserting an entity, updating the
properties of another entity, and deleting a third
one, handling individual failures along the way is a
daunting task. If the third of three operations fails,
you must dedicate considerable programming
resources to cleaning up the effects of the first
two operations; and even these cleanup
operations can fail, in turn. Furthermore, if every
operation becomes visible to other threads or
processes as soon as it is performed, the cleanup
process may affect the entire application or even
other applications accessing the same database.

For example, when you insert an order of a customer into the database, it may consist of multiple update
and insert operations. You might have to insert a record into the Orders table, a record into the Shipping
table, and modify the Inventory table to reflect the inventory changes as a result of fulfilling the order. If
any of these updates fail (for instance, if the Inventory table update fails because the item is no longer
available in stock), you need to carefully roll back the changes to the Orders and Shipping tables to
make sure you do not have an orphaned order that cannot be fulfilled. Similarly, if the Inventory table
update succeeds but an error occurs while inserting a record into the Shipping table, you must undo the
change in the Inventory table to make sure you do not lose inventory items. To aggravate the matter,
any updates you performed to the Inventory table may have been made visible to other applications, so
another process may have decided that an item is no longer in stock although your order has not been
successfully fulfilled.

Transactions
Transactions address the compensation and visibility issues by providing a scope of operations. A
transaction is a set of operations that runs in a sequence, and if one of the operations fails, the transaction
rolls back, and no operations are committed. You should use transactions if one operation depends on a
previous operation and cannot be committed without verifying that the previous operation was
successful. Also, you should use transactions when visibility is a concern, and you do not want to make a
change visible to other applications until the entire transaction completes.

By default, Entity Framework is transactional. When you call the SaveChanges method, it translates the
change set to SQL statements and starts with the BEGIN TRANSACTION SQL declaration. The SQL
transaction is not committed unless all the items are added, updated, or deleted successfully.
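For example, in the following sketch either both changes reach the database or neither does, because the single SaveChanges call wraps them in one SQL transaction. The StudentsContext and Student names are illustrative:

```csharp
using (var context = new StudentsContext())
{
    // Change 1: insert a new entity
    context.Students.Add(new Student { Name = "New Student" });

    // Change 2: update an existing entity
    Student existing = context.Students.Find("Dave Barnett");
    existing.Name = "David Barnett";

    // Both the INSERT and the UPDATE run inside a single SQL
    // transaction; if either statement fails, neither is committed
    context.SaveChanges();
}
```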

The TransactionScope Class


When you have more than one connection to one or more databases, you have to work with the
TransactionScope class (from the System.Transactions assembly) to wrap a set of operations in a single
transaction. When you use this class, you create a scope of operations that belong to a single transaction
that may be distributed across multiple machines. The TransactionScope class works by using an
underlying distributed transaction coordinator (DTC) to coordinate between one or more resource
managers, such as multiple relational databases.
To span multiple contexts with a single transaction in Entity Framework, declare a TransactionScope
object before you create the DbContext objects. Call the Complete method when you know all the
operations have completed successfully. The transaction commits only if you call the Complete method.
If the Complete method is not called, all the enlisted transactions roll back when the transaction scope
closes, even if each individual operation completed successfully.

The following code example shows how to use the TransactionScope class with Entity Framework.

Using the TransactionScope Class


using (var scope = new TransactionScope())
{
using (var ctx1 = new DataContext1())
{
// Update an entity
ctx1.SaveChanges();
}
using (var ctx2 = new DataContext2())
{
// Update an entity
ctx2.SaveChanges();
// Update an entity
ctx2.SaveChanges();
}
scope.Complete();
}

In the preceding code example, the changes made by the three SaveChanges method calls are
committed to the database (or databases) only when the TransactionScope block ends, and only because
the entire scope was marked as complete by calling the Complete method.

Best Practice: Use the TransactionScope class inside a using block to make sure that it is
disposed of. If the object is disposed of before you call the Complete method (for example, if an
exception occurs within the using block), the transaction is aborted automatically and its changes
are rolled back.

Question: When should you use transactions and distributed transactions?



Lab: Creating a Data Access Layer by Using Entity


Framework
Scenario
Blue Yonder Airlines is creating the new Travel Companion app for mobile devices that will help end-users
browse the travel destinations of the company and book flights online. In this lab, you will create the
server-side data layer by using Entity Framework and test your code by using integration tests.

Objectives
After completing this lab, you will be able to:

Create an Entity Framework model by using Code First.

Query an Entity Framework model by using LINQ to Entities and Entity SQL.

Manipulate data by using Entity Framework.


Create a database transaction with the TransactionScope class.

Lab Setup
Estimated Time: 60 minutes
Virtual Machine: 20487B-SEA-DEV-A, 20487B-SEA-DEV-C

User name: Administrator, Admin


Password: Pa$$w0rd, Pa$$w0rd
For this lab, you will use the available virtual machine environment. Before you begin this lab, you must
complete the following steps:

1. On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2. In Hyper-V Manager, click MSL-TMG1, and in the Action pane, click Start.

3. In Hyper-V Manager, click 20487B-SEA-DEV-A, and in the Action pane, click Start.

4. In the Action pane, click Connect. Wait until the virtual machine starts.
5. Sign in using the following credentials:

o User name: Administrator


o Password: Pa$$w0rd

In this lab, you will install NuGet packages. It is possible that some NuGet packages will have newer
versions than those used when developing this course. If your code does not compile, and you identify
the cause to be a breaking change in a NuGet package, uninstall the NuGet package and instead install
the compatible version by using Visual Studio's Package Manager Console window:

1. In Visual Studio, on the Tools menu, point to Library Package Manager, and then click Package
Manager Console.

2. In Package Manager Console, enter the following command and then press Enter.

install-package PackageName -version PackageVersion -ProjectName ProjectName

(The project name is the name of the Visual Studio project that is written in the step where you were
instructed to add the NuGet package).

3. Wait until Package Manager Console finishes downloading and adding the package.
The following table details the compatible versions of the packages used in the lab:

Package name Package version

EntityFramework 5.0.0

Exercise 1: Creating a Data Model


Scenario
Before you create the services for the Blue Yonder Airlines Travel Companion application, you need to
create the data access layer the services will use. The data access layer uses Entity Framework to load
entities from the database, and to save new and updated entities back to the database.

In this exercise, you will create data model classes to represent trips and reservations, implement a
DbContext-derived class, and create a new repository class for the Reservation entity.
The main tasks for this exercise are as follows:

1. Explore the existing Entity Framework data model project

2. Prepare the data model classes for Entity Framework


3. Add the newly created entities to the context

4. Implement the reservation repository

Task 1: Explore the existing Entity Framework data model project


1. Open the BlueYonder.Companion solution file from the
D:\AllFiles\Mod02\LabFiles\begin\BlueYonder.Companion folder.
2. In the BlueYonder.Entities project, locate the FlightSchedule class.

Locate the FlightScheduleId field and explore the use of the DatabaseGenerated attribute.
Locate the Flight property and explore the ForeignKey attribute.
3. Locate the FlightRepository class under the Repositories folder, in the BlueYonder.DataAccess
project, and explore its contents.

Note: The FlightRepository class implements the Repository pattern. The Repository
pattern is designed to decouple the data access strategy from the business logic layer that
handles the data. The repository exposes the data access functionality and implements it
internally by using a specific data access strategy, which in this case is Entity Framework. By using
repositories, you can easily create a mock, replacing the repository, and improve the testability of
the business logic.
For more information about the Repository pattern and its related patterns, see
http://go.microsoft.com/fwlink/?LinkID=298756&clcid=0x409.

In Lab 4, "Extending Travel Companions ASP.NET Web API Services", Module 4, "Extending and
Securing ASP.NET Web API Services", you will see how to increase testability by using mocked
repositories.

Task 2: Prepare the data model classes for Entity Framework


1. Open the Trip class from the BlueYonder.Entities project, set the FlightInfo property as virtual, and
decorate it with the [ForeignKey] attribute.

Declare the FlightScheduleID property to be virtual.

Add a [ForeignKey] attribute to the FlightInfo property.



Set the attribute to use the FlightScheduleID property as the foreign key property.

Note: In addition, Entity Framework will detect the virtual property in the Trip class and
will create a new derived proxy class that implements lazy loading for the FlightInfo property.
When you load trip entities from the database, the entity object will be of the derived trip proxy
type, and not of the Trip type.

2. Open the Reservation class from the BlueYonder.Entities project, and make the following changes:

o Declare the DepartureFlight property to be virtual, and add a [ForeignKey] attribute to it. The
attribute should use the DepartFlightScheduleID property for the foreign key.

o Declare the ReturnFlight property to be virtual, and add a [ForeignKey] attribute to it. The
attribute should use the ReturnFlightScheduleID property for the foreign key.

o Set the ReturnFlightScheduleID property as nullable to make it optional.

Note: Setting the ReturnFlightScheduleID foreign key property to a nullable int indicates
that this relation is not mandatory (0-N relation, meaning a reservation does not require a return
flight). The DepartFlightScheduleID foreign key property is not nullable and therefore indicates
the relation is mandatory (1-N relation, meaning every reservation must have a departing flight).
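After these changes, the relevant parts of the Reservation class might look like the following sketch. The property names come from the task description; other members of the class are omitted, and the exact shape of the begin-solution class may differ:

```csharp
public class Reservation
{
    public int ReservationId { get; set; }

    // Mandatory departing flight (1-N): the foreign key is not nullable
    public int DepartFlightScheduleID { get; set; }

    [ForeignKey("DepartFlightScheduleID")]
    public virtual FlightSchedule DepartureFlight { get; set; }

    // Optional return flight (0-N): the nullable foreign key
    // makes the relationship optional
    public int? ReturnFlightScheduleID { get; set; }

    [ForeignKey("ReturnFlightScheduleID")]
    public virtual FlightSchedule ReturnFlight { get; set; }
}
```

The virtual modifiers also enable lazy loading of both flight schedules, as described in Lesson 3.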

Task 3: Add the newly created entities to the context


1. Open the TravelCompanionContext class from the BlueYonder.DataAccess project and in it, add a
new DbSet<T> property for the Reservation entity. Name the property Reservations.
2. Add another DbSet<T> property for the Trip entities, and name the property Trips.

Task 4: Implement the reservation repository


1. Open the ReservationRepository class under the Repositories folder, in the
BlueYonder.DataAccess project, and implement the GetSingle method. The method retrieves a
Reservation entity by its primary key.

o Create a LINQ query to retrieve the reservation from the TravelCompanionContext.Reservations
DbSet.

o If the entity is not found, return null.

Note: The ReservationRepository class already contains a TravelCompanionContext
instance by the name context.

2. Implement the Edit method. The method should make sure that the reservation is loaded in the
context by calling the Find method, and then apply the new values to the loaded entity.

Note: You can refer to Lesson 4, "Manipulating Data", Topic 4, "Updating Entities" in
Course 20487, for an example of how to update a detached entity by using the Find method.

3. Implement the Dispose method to dispose the private context member.

o Make sure you only dispose the context member if the object was initiated (not null).

o Set the context member to null after calling its Dispose method.

Note: If you want to see examples of the implementation of the Dispose method, refer to
its implementation in other repositories.

4. To complete the implementation of the ReservationRepository class, uncomment the comments


from the following methods:

o GetAll

o Add

o Delete

Note: Review the implementation of the Delete method to understand how cascade delete
was implemented, so that when a Reservation is deleted, its DepartureFlight and ReturnFlight
objects are deleted as well.
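Putting the pieces of this task together, the repository methods you implement might look like the following sketch. The method signatures and the ReservationId key property are assumptions based on the task description; the begin solution defines the actual repository interface:

```csharp
public class ReservationRepository : IDisposable
{
    private TravelCompanionContext context = new TravelCompanionContext();

    public Reservation GetSingle(int reservationId)
    {
        // FirstOrDefault returns null when no matching entity exists
        return context.Reservations
            .FirstOrDefault(r => r.ReservationId == reservationId);
    }

    public void Edit(Reservation reservation)
    {
        // Find returns the tracked instance, loading it from the
        // database if necessary, and SetValues copies the new values
        var original = context.Reservations.Find(reservation.ReservationId);
        context.Entry(original).CurrentValues.SetValues(reservation);
    }

    public void Dispose()
    {
        // Dispose the context only if it was initiated (not null)
        if (context != null)
        {
            context.Dispose();
            context = null;
        }
    }
}
```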

Results: After you complete this exercise, the Entity Framework Code First model is ready for testing.

Exercise 2: Querying and Manipulating Data


Scenario
In the previous exercise, you created the entities, context, and repository classes to handle the data in the
database. Before you create the services that use the new repositories, you need to test the repositories, to
verify they work properly. In this exercise, you will create tests for the repositories that will issue queries
and manipulate data. In addition, you will create tests for using multiple repositories with a single
transaction.
The main tasks for this exercise are as follows:
1. Explore the existing integration test project

2. Create queries by using LINQ to Entities


3. Create queries by using Entity SQL
4. Create a test for manipulating data

5. Use cross-repositories integration tests with System.Transactions

6. Run the tests, and explore the database created by Entity Framework

Task 1: Explore the existing integration test project


1. Explore the FlightQueries class in the BlueYonder.IntegrationTests project.

The TestInitialize static method is responsible for initializing the database and the test data, and all
the other methods are intended to test various queries with lazy load and eager load.
2. Explore the insert, update, and delete tests in the FlightActions class.

Observe the use of the Assert static class to verify the results of the test.

Task 2: Create queries by using LINQ to Entities


1. In the BlueYonder.IntegrationTests project, open the ReservationQueries test class, and
implement the GetReservationWithFlightsEagerLoad test method.

Create a ReservationRepository object in a using block.



Write a LINQ to Entities query that retrieves a Reservation entity having confirmation code 1234,
and performs an eager loading of its departure and return flights.

Use the repository's GetAll method to get a data source for the query.
For the eager load, use the Include method.

Use the Assert static class to verify the reservation entity was loaded, as well as its departing and
returning flights.

To prevent any lazy load operations, use the Assert static class outside the using block scope.
2. In the GetReservationWithFlightsLazyLoad test method, add two Assert tests to verify that lazy
load works.

Use the Assert static class to verify that the departing and returning flights of the reservation are not
null.

Place the call to the Assert static class in the using block, after the comment.

Note: By examining the value of the navigation properties, you are invoking the lazy load
mechanism.

3. In the GetReservationsWithSingleFlightWithoutLazyLoad test method, turn lazy loading off.

Add the code to turn off lazy loading before the repository is created.

Note: You can refer to Lesson 3, "Querying Data", Topic 4, "Load Entities by Using Lazy and
Eager Loading" in Course 20487, for an example of how to turn off lazy load.

Task 3: Create queries by using Entity SQL


1. In the GetOrderedReservations test method, create and run an Entity SQL query.
The query retrieves all the Reservation entities in descending order of their confirmation code.

To create an Entity SQL query, use the ObjectContext.CreateQuery<T> generic method.


The context variable already contains an inner ObjectContext object. To access it, cast the context
variable to the IObjectContextAdapter interface, and then access its ObjectContext property.

Note: Refer to Lesson 3, "Querying Data", Topic 2, "Query the Database By Using Entity SQL" in
Course 20487, for an example of how to write Entity SQL queries and execute them with the
ObjectContext object.
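One possible shape for this query is sketched below. The entity set name Reservations and the ConfirmationCode property are assumptions based on the lab description:

```csharp
// 'context' is the TravelCompanionContext used by the repository
var objectContext = ((IObjectContextAdapter)context).ObjectContext;

// Entity SQL queries are written against entity sets, not database tables
var query = objectContext.CreateQuery<Reservation>(
    "SELECT VALUE r FROM Reservations AS r ORDER BY r.ConfirmationCode DESC");

List<Reservation> orderedReservations = query.ToList();
```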

Task 4: Create a test for manipulating data


1. In the BlueYonder.IntegrationTests project, open the FlightActions test class, and in UpdateFlight
test method implement the rest of the test.
Create a FlightRepository object within a using block.

Call the repository's Edit method and then the Save method to update the Flight entity in the
database.

In a new repository, after the using block, search for the updated flight by its new flight number.

Verify that the flight was found.



Note: Most of the boilerplate code for creating a repository, saving the entity, and then
locating the entity in a new repository can be found in the DeleteFlight test method in the
FlightActions class.

Task 5: Use cross-repositories integration tests with System.Transactions


1. In the FlightActions class, locate the UpdateUsingTwoRepositories method and inspect its code.
Locate the code that creates the Location and Flight repositories.

Each repository is created with a separate context, meaning each repository will use a separate
transaction when saving changes.

Locate the code for loading and updating the flight and location objects.

Each entity is updated and saved in a separate transaction, but because both transactions are located
in the same transaction scope, both transactions are not yet committed.

2. In the UpdateUsingTwoRepositories method, locate the query below the comment //TODO: Lab
02, Exercise 2 Task 5.2 : Review the query for the updated flight that is inside the transaction scope.

Note: When querying from inside a transaction scope, you will get the updated values of
entities, while other users, not participating in the transaction, will see the old values, until the
transaction commits.

3. Locate the commented call to the Complete method.

Note: Without setting the transaction as complete, both transactions will roll back after the
transaction scope closes.

4. Locate the query below the comment //TODO: Lab 02, Exercise 2 Task 5.4 : Review the query for the
updated flight that is outside the transaction scope.

Note: After the transaction is rolled back, attempts to locate the updated entity will fail.
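The rollback behavior described in these notes can be demonstrated in isolation with a minimal console sketch. This is independent of the lab's repositories; only the System.Transactions ambient transaction is used, and no database is involved.

```csharp
// Minimal sketch: a TransactionScope that is disposed without a call to
// Complete causes the ambient transaction to roll back (status Aborted).
using System;
using System.Transactions;

class TransactionScopeDemo
{
    static void Main()
    {
        TransactionStatus finalStatus = TransactionStatus.Active;

        using (var scope = new TransactionScope())
        {
            // Record how the ambient transaction ended.
            Transaction.Current.TransactionCompleted +=
                (sender, e) => finalStatus = e.Transaction.TransactionInformation.Status;

            // scope.Complete() is intentionally NOT called,
            // mirroring the commented call reviewed in this task.
        }

        Console.WriteLine(finalStatus);
    }
}
```

Calling scope.Complete() before the end of the using block would instead let the transaction commit.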

Task 6: Run the tests, and explore the database created by Entity Framework
1. In the TravelCompanionDatabaseInitializer class, complete the implementation of the Seed
method by adding the two reservations to the context and saving the changes.
Add the reservation1 and reservation2 variables to the Reservations collection of the context.

Call the SaveChanges method of the context.

2. Run all the tests in the solution and verify they pass.
To run all tests, open the Test Explorer window from the Test menu, and then click Run All.

Verify that all 16 test methods pass.

3. Open SQL Server Management Studio, connect to the .\SQLEXPRESS database server, then locate the
BlueYonder.Companion.Lab02 database in Object Explorer, and browse the tables that were
created by Entity Framework.

Results: The Entity Framework data model works as designed and is verified by tests.

Question: What is the advantage of using LINQ to Entities as opposed to Entity SQL or raw
SQL statements?

Module Review and Takeaways


In this module, you learned how to use Entity Framework to implement a data access layer for your
application. First, you learned about fundamental ADO.NET concepts, such as connection, command, and
data reader. Next, you created database models by using the Entity Framework Code First approach, and
mapped classes to database tables and columns with data annotations, such as the [Table] and [Column]
attributes, and the Entity Framework Fluent API. Then, you learned how to query the database with LINQ
to Entities, Entity SQL, and raw SQL statements when necessary, and how to use lazy loading to improve
application performance and memory utilization. Finally, you learned how to use change tracking with the
DbContext class when inserting, deleting, and updating entities, and how to group a number of
database operations into an atomic unit by using transactions.

Best Practices: Always use transactions when performing multiple operations that depend on each
other, and may require compensation when they fail in isolation.

Prefer using LINQ to Entities and not Entity SQL or raw SQL to query the database. This makes your
code less fragile and easier to refactor.

Beware of lazy loading behavior when you return an entity to a higher layer in your application. If the
DbContext object is disposed and the entity has not been fully loaded, accessing its nested
properties may cause an exception.
Use the Entity Framework Fluent API (instead of data annotations) when you map an existing object
model to a database, and when the object model should not change as a result of the mapping.
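As a sketch of this last practice, the following shows a mapping expressed with the Fluent API rather than with attributes. The Flight class and the table and column names here are illustrative examples, not the lab's actual model.

```csharp
// Hedged sketch: mapping an existing class with the Entity Framework Fluent
// API, keeping the class itself free of mapping attributes. The Flight type
// and names below are illustrative assumptions.
using System.Data.Entity;

public class Flight
{
    public int Id { get; set; }
    public string FlightNumber { get; set; }
}

public class FlightContext : DbContext
{
    public DbSet<Flight> Flights { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Equivalent in effect to [Table] and [Column] data annotations,
        // but the mapping lives in the context, not on the model class.
        modelBuilder.Entity<Flight>().ToTable("Flights");
        modelBuilder.Entity<Flight>()
                    .Property(f => f.FlightNumber)
                    .HasColumnName("FlightNumber");
    }
}
```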

Review Question(s)
Question: Why should you use Entity Framework and not direct database manipulation with
SQL statements in ADO.NET?

Tools
Visual Studio 2012
SQL Server 2012

SQL Server Management Studio



Module 3
Creating and Consuming ASP.NET Web API Services
Contents:
Module Overview 3-1

Lesson 1: HTTP Services 3-2

Lesson 2: Creating an ASP.NET Web API Service 3-13


Lesson 3: Handling HTTP Requests and Responses 3-20

Lesson 4: Hosting and Consuming ASP.NET Web API Services 3-24


Lab: Creating the Travel Reservation ASP.NET Web API Service 3-31
Module Review and Takeaways 3-36

Module Overview
ASP.NET Web API provides a robust and modern framework for creating HTTP-based services. In this
module, you will be introduced to HTTP-based services. You will learn how HTTP works and become
familiar with HTTP messages, HTTP methods, status codes, and headers. You will also be introduced to the
REST architectural style and Hypermedia.

You will learn how to create HTTP-based services by using ASP.NET Web API. You will also learn how to
host the services in IIS and how to consume them from various clients. At the end of this module, in the
lab "Creating the Travel Reservation ASP.NET Web API Service", you will build an ASP.NET Web API service
application and host it in Internet Information Services (IIS).

Objectives
After you complete this module, you will be able to:

Design services by using the HTTP protocol.


Create services by using ASP.NET Web API.

Use the HttpRequestMessage/HttpResponseMessage classes to control HTTP messages.

Host and consume ASP.NET Web API services.



Lesson 1
HTTP Services
Hypertext Transfer Protocol (HTTP) is a communication protocol that was created by Tim Berners-Lee and
his team while working on the WorldWideWeb (later renamed World Wide Web) project. Originally
designed to transfer hypertext-based resources across computer networks, HTTP is an application layer
protocol that acts as the primary protocol for many applications including the World Wide Web.

Because of its vast adoption and also the common use of web technologies, HTTP is now one of the most
popular protocols for building applications and services. In this lesson, you will be introduced to the basic
structure of HTTP messages and understand the basic principles of the REST architectural approach.

Lesson Objectives
After you complete this lesson, you will be able to:

Explain the basic structure of HTTP.

Explain the structure of HTTP messages.

Describe resources by using URIs.


Explain the semantics of HTTP verbs.

Explain how status codes are used.

Explain the basic concepts of REST.


Use media types.

Introduction to HTTP
HTTP is a first-class application protocol that was
built to power the World Wide Web. To support
such a challenge, HTTP was built to allow
applications to scale, taking into consideration
concepts such as caching and stateless
architecture. Today, HTTP is supported by many
different devices and platforms, reaching most
computer systems available today.
HTTP also offers simplicity, by using text messages
and following the request-response messaging
pattern. HTTP differs from most application layer
protocols because it was not designed as a
Remote Procedure Calls (RPC) mechanism or a Remote Method Invocation (RMI) mechanism. Instead,
HTTP provides semantics for retrieving and changing resources that can be accessed directly by using an
address.

HTTP Messages
HTTP is a simple request-response protocol. All
HTTP messages contain the following elements:

Start-line

Headers

An empty line
Body (optional)

Although requests and responses share the same basic structure, there are some differences
between them that you should be aware of.

Request Messages
Request messages are sent by the client to the server. Request messages have a specific structure based
on the general structure of HTTP messages.

This example shows a simple HTTP request message.

An HTTP Request
GET http://localhost:4392/travelers/1 HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Accept-Language: en-US,en;q=0.7,he;q=0.3
User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; WOW64; Trident/6.0)
Accept-Encoding: gzip, deflate
Host: localhost:4392
DNT: 1
Connection: Keep-Alive

The first and most distinct difference between request and response messages is the structure of the
start-line, which in a request is called the request-line.

Request-line
This HTTP request message's start-line is a typical request-line with the following space-delimited parts:

HTTP method. This HTTP request message uses the GET method, which indicates that the client is
trying to retrieve a resource. Verbs are covered in-depth in the topic Using Verbs later in this
lesson.

Request URI. This part represents the URI to which the message is being sent.

HTTP version. This part indicates that the message uses HTTP version 1.1.

Headers
This request message also has several headers that provide metadata for the request. Although headers
exist in both response and request messages, some headers are used exclusively by one of them. For
example, the Accept header is used in requests to communicate the kinds of responses the clients would
prefer to receive. This header is a part of a process known as content negotiation that will be discussed
later in this module.

Body
The request message has no body. This is typical of requests that use the GET method.

Response Messages
Response messages also have a specific structure based on the general structure of HTTP messages.

This example shows a simple HTTP response message.

The HTTP Response returned by the Server for the above Request
HTTP/1.1 200 OK
Server: ASP.NET Development Server/11.0.0.0
Date: Tue, 13 Nov 2012 18:05:11 GMT
X-AspNet-Version: 4.0.30319
Cache-Control: no-cache
Pragma: no-cache
Expires: -1
Content-Type: application/json; charset=utf-8
Content-Length: 188
Connection: Close

{"TravelerId":1,"TravelerUserIdentity":"aaabbbccc","FirstName":"FirstName1","LastName":"L
astName1","MobilePhone":"555-555-5555","HomeAddress":"One microsoft
road","Passport":"AB123456789"}

Status-Line
HTTP response start-lines are called status-lines. This HTTP response message has a typical status-line with
the following space-delimited parts:

HTTP version. This part indicates that the message uses HTTP version 1.1.

Status-Code. Status-codes help define the result of the request. This message returns a status-code
of 200, which indicates a successful operation. Status codes are covered in-depth later in this
lesson.

Reason-Phrase. A reason-phrase is a short text that describes the status code, providing a human-readable
version of the status-code.

Headers
Similar to the request message, the response message also has headers. Some headers are unique for
HTTP responses. For example, the Server header provides technical information about the server software
being used. The Cache-Control and Pragma headers describe how caching mechanisms should treat the
message.
Other headers, such as the Content-Type and Content-Length, provide metadata for the message body
and are used in both requests and responses that have a body.

Body
A response message returns a representation of a resource in JavaScript Object Notation (JSON). The
JSON, in this case, contains information about a specific traveler in a traveling management system. The
format of the representation is communicated by using the Content-Type header describing what is
known as media type. Media types are covered in-depth later in this lesson.

Identifying Resources By Using URI


Uniform Resource Identifier (URI) is an addressing
standard that is used by many protocols. HTTP
uses URI as part of its resource-based approach to
identify resources over the network.

HTTP URIs follow this structure:

"http://" host [ ":" port ] [ absolute-path [ "?" query ]]

http://. This prefix is standard to HTTP requests and defines the HTTP URI scheme to be used.

Host. The host component of the URI identifies a computer by an IP address or a registered name.

Port (optional). The port defines a specific port to be addressed. If not present, a default port will be
used. Different schemes can define different default ports. The default port for HTTP is 80.

Absolute path (optional). The path provides additional data that, together with the query, describes
a resource. The path can have a hierarchical structure similar to a directory structure, separated by the
slash sign (/).

Query (optional). The query provides additional nonhierarchical data that, together with the path,
describes a resource.
Different URIs can be used to describe different resources. For example, the following URIs describe
different destinations in an airline booking system:

http://localhost/destinations/seattle

http://localhost/destinations/london
When accessing each URI, a different set of data, also known as a representation, will be retrieved.

The URI Request For Comments (RFC 3986)


http://go.microsoft.com/fwlink/?LinkID=298757&clcid=0x409
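In .NET clients and services, the System.Uri class decomposes an HTTP URI into the components described above. The following is a minimal sketch; the URI itself is an illustrative example.

```csharp
// Decomposing an HTTP URI into scheme, host, port, path, and query
// by using the System.Uri class. The URI below is illustrative.
using System;

class UriDemo
{
    static void Main()
    {
        var uri = new Uri("http://localhost:4392/destinations/seattle?lang=en");

        Console.WriteLine(uri.Scheme);       // http
        Console.WriteLine(uri.Host);         // localhost
        Console.WriteLine(uri.Port);         // 4392
        Console.WriteLine(uri.AbsolutePath); // /destinations/seattle
        Console.WriteLine(uri.Query);        // ?lang=en
    }
}
```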

Using Verbs
HTTP defines a set of methods, or verbs, that add
action-like semantics to requests. HTTP 1.1
defines an extensible set of eight methods, each
with different behavior. For example, the
following request uses the GET method to retrieve
information about a specific traveler in an airline
traveler system.

This example shows an HTTP GET request message.

An HTTP GET request retrieving data about a specific traveler
GET http://localhost:4392/travelers/1 HTTP/1.1
Accept: text/html, application/xhtml+xml, */*

Accept-Language: en-US,en;q=0.7,he;q=0.3
User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; WOW64; Trident/6.0)
Accept-Encoding: gzip, deflate
Host: localhost:4392
DNT: 1
Connection: Keep-Alive

In the above example, a method is defined in the first segment of the request-line and communicates
what the request is intended to perform. For example, the GET method used in the request above
communicates that the request is intending to retrieve data about an entity and not trying to modify it.
This behavior makes GET compatible with both of the properties an HTTP method might have: it is both
safe and idempotent.

Safe verbs. These are verbs that are not intended to have any side effects on the resource state on the
server, other than retrieving data.
Idempotent verbs. These are verbs that are intended to have the same effect on the resource state
when the same request is sent to the server multiple times. For example, sending a single DELETE
request to delete a resource should have the same effect as sending the same DELETE request
multiple times.
Verbs are a central mechanism in HTTP and one of the mechanisms that make HTTP the powerful
protocol it is. Understanding what each verb does is very important for developing HTTP-based services.
The following verbs are defined in HTTP 1.1:

Method: GET
Description: Requests intended to retrieve data based on the request URI.
Properties: Safe, Idempotent
Usage: Used to retrieve a representation of a resource.

Method: HEAD
Description: Requests intended to have the identical result of GET requests but without returning a
message body.
Properties: Safe, Idempotent
Usage: Used to check request validity and to retrieve header information without the message body.

Method: OPTIONS
Description: Requests intended to return information about the communication options and capabilities
of the server.
Properties: Safe, Idempotent
Usage: Used to retrieve a comma-delimited list of the HTTP verbs supported by a resource or a server in
the Allow header.

Method: POST
Description: Requests intended to send an entity to the server. The actual operation that is performed by
the request is determined by the server. The server should return information about the outcome of the
operation in the result.
Properties: (none)
Usage: Used to create, update, and by some protocols, retrieve entities from the server. POST is the least
structured HTTP method.

Method: PUT
Description: Requests intended to store the entity sent in the request URI, completely overriding any
existing entity in that URI.
Properties: Idempotent
Usage: Used to create and update resources.

Method: DELETE
Description: Requests intended to delete the entity identified by the request URI.
Properties: Idempotent
Usage: Used to delete resources.

Method: TRACE
Description: Requests intended to indicate to clients what is received at the server end.
Properties: Safe, Idempotent
Usage: Rarely implemented; used to identify proxies the message passes on the way to the server.

Method: CONNECT
Description: Requests intended to dynamically change the communication protocol.
Properties: Safe, Idempotent
Usage: Used to start SSL tunneling.

For more information about HTTP methods, see the HTTP 1.1 Request For Comments (RFC 2616).

Methods definition in the HTTP 1.1 Request For Comments (RFC 2616)
http://go.microsoft.com/fwlink/?LinkID=298758&clcid=0x409
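For comparison with the GET example above, the following hypothetical PUT request (the URI and body are illustrative, not part of the lab application) stores a complete traveler representation at the request URI. Because PUT is idempotent, sending this request several times has the same effect on the resource as sending it once.

```
PUT http://localhost:4392/travelers/1 HTTP/1.1
Content-Type: application/json; charset=utf-8
Host: localhost:4392
Content-Length: 41

{"TravelerId":1,"FirstName":"FirstName1"}
```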

Status-Codes and Reason-Phrases


Status-codes are 3-digit integers returned as part of response messages' status-lines. Status-codes
describe the result of the server's effort to satisfy the request. The next section of the status-line after
the status-code is the reason-phrase, a human-readable textual description of the status-code.

Status codes are divided into five classes, or categories. The first digit of the status code indicates the
class of the status:

Class: 1xx Informational
Usage: Codes that return an informational response about the state of the connection.
Examples: 101 Switching Protocols

Class: 2xx Successful
Usage: Codes that indicate the request was successfully received and accepted by the server.
Examples: 200 OK, 201 Created

Class: 3xx Redirection
Usage: Codes that indicate that additional action should be taken by the client (usually with respect to a
different network address) in order to achieve the result that you want.
Examples: 301 Moved Permanently, 302 Found, 303 See Other

Class: 4xx Client Error
Usage: Codes that indicate an error that is caused by the client's request. This might be caused by a
wrong address, bad message format, or any kind of invalid data passed in the client's request.
Examples: 400 Bad Request, 401 Unauthorized, 404 Not Found

Class: 5xx Server Error
Usage: Codes that indicate an error that was caused by the server while it tried to process a seemingly
valid request.
Examples: 500 Internal Server Error, 505 HTTP Version Not Supported

For more information about HTTP status codes, see the HTTP 1.1 Request For Comments (RFC 2616).

HTTP Status-Codes definition in the HTTP 1.1 Request For Comments (RFC 2616)
http://go.microsoft.com/fwlink/?LinkID=298759&clcid=0x409
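The .NET Framework mirrors these classes in the System.Net.HttpStatusCode enumeration, whose numeric values match the codes above. A quick sketch:

```csharp
// HttpStatusCode enum values map directly to the numeric status codes
// described in each class above.
using System;
using System.Net;

class StatusCodeDemo
{
    static void Main()
    {
        Console.WriteLine((int)HttpStatusCode.OK);                  // 200 (2xx: Successful)
        Console.WriteLine((int)HttpStatusCode.MovedPermanently);    // 301 (3xx: Redirection)
        Console.WriteLine((int)HttpStatusCode.NotFound);            // 404 (4xx: Client Error)
        Console.WriteLine((int)HttpStatusCode.InternalServerError); // 500 (5xx: Server Error)
    }
}
```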

Introduction to REST
Until now in this module, you have learned how
HTTP acts as an application layer protocol. HTTP is
used to develop both websites and services.
Services developed by using HTTP are generally
known as HTTP-based services.

The term Representational State Transfer (REST)


describes an architectural style that takes
advantage of the resource-based nature of HTTP.
It was first used in 2000 by Roy Fielding, one of
the authors of the HTTP, URI, and HTML
specifications. Fielding described in his doctoral
dissertation an architectural style that uses some
elements of HTTP and the World Wide Web for creating scalable and extendable applications.
Today, REST is used to add important capabilities to services. These capabilities include the following:
Service discoverability

State management

In this lesson, you will learn about these capabilities. For more information about REST, see Roy Fielding's
dissertation, Architectural Styles and the Design of Network-based Software Architectures.

Architectural Styles and the Design of Network-based Software Architectures by Roy Fielding
http://go.microsoft.com/fwlink/?LinkID=298760&clcid=0x409

Services that use the REST architectural style are also known as RESTful services. A simple way to
understand what makes a service RESTful is to use a taxonomy called the Richardson Maturity Model,
first suggested by Leonard Richardson in his talk at the QCon San Francisco conference in 2008.

The Richardson Maturity Model


The Richardson Maturity Model describes four levels of maturity for services, starting with the least
RESTful level advancing toward fully RESTful services:

Level zero services. Use HTTP as a transport protocol, ignoring the capabilities of HTTP as an
application layer protocol. Level zero services use a single address, also known as an endpoint, and a
single HTTP method, which is usually POST. SOAP services and other RPC-based protocols are
examples of level zero services.

Level one services. Identify resources by using URIs. Each resource in the system has its own URI by
which the resource can be accessed.

Level two services. Use the different HTTP verbs to allow the user to manipulate the resources and
create a full API based on resources.

Level three services. Although the first two levels only emphasize the suitable use of HTTP
semantics, level three services introduce Hypermedia, an extension of the term Hypertext, as a means
for resources to describe their own state in addition to their relation to other resources.

For more information about the Richardson Maturity Model, see Leonard Richardson's presentation and
notes.

Leonard Richardson's QCon 2008 presentation and notes


http://go.microsoft.com/fwlink/?LinkID=298761&clcid=0x409

Hypermedia
When the World Wide Web started, it strongly affected the way humans consume data. Alongside
capabilities such as remote access to data and the ability to search a global knowledge base, the World Wide
Web also introduced Hypertext. Hypertext is a nonlinear format that enables readers to access data
related to a specific part of the text by using Hyperlinks. The term Hypermedia describes a logical
extension to the same concept. Hypermedia-based systems use Hypermedia elements, known as
hypermedia controls, such as links and HTML forms, to enable resources to describe their current state
and other resources that are related to them.

Hypermedia and Discoverability


A simple example for resource discoverability can be found in the Atom Syndication Format. At first, the
Atom Syndication Format was developed as an alternative to RSS for publishing web feeds. Atom feeds
are resources with their own URIs that contain items. Feed items are resources themselves with their own
URIs published as links in the feed representation, which makes them discoverable for clients.
This feed describes different instances of a flight in the BlueYonder Companion app. The Hypermedia
control entry is used here to refer clients to different instances of a specific flight.

A simple Atom Feed


HTTP/1.1 200 OK
Cache-Control: no-cache
Content-Type: application/atom+xml
Content-Length: 746
Connection: Close

<?xml version="1.0" encoding="utf-8"?>


<feed xmlns="http://www.w3.org/2005/Atom">
<title type="text">Blue Yonder flights</title>
<id>uuid:460f9be6-3503-43c5-8168-5cb86127b572;id=1</id>
<updated>2012-11-16T21:50:17Z</updated>
<entry xml:base="http://localhost:4392/Flights/BY002/1117">
<id>BY002</id>
<title type="text">Flight BY002 November 17, 2012</title>
<updated>2012-11-16T21:50:17Z</updated>
</entry>
<entry xml:base="http://localhost:4392/Flights/BY002/1201">
<id>BY002</id>
<title type="text">Flight BY002 December 01, 2012</title>
<updated>2012-11-16T21:50:17Z</updated>
</entry>
<entry xml:base="http://localhost:4392/Flights/BY002/1202">
<id>BY002</id>
<title type="text">Flight BY002 December 02, 2012</title>
<updated>2012-11-16T21:50:17Z</updated>
</entry>
</feed>

Hypermedia and State Transfer


Another pattern supported by hypermedia is state transfer. To manage the state of resources, RESTful
services use hypermedia to describe what can be done with the resource when it returns its
representation. For example, if a resource representing a flight enables the user to book tickets, a
Hypermedia control describing how to do this should be present. As soon as the flight no longer allows
booking, for any number of reasons (it is fully booked, canceled, and so on), the Hypermedia control
should not be returned in the resource's representation.

This response represents a flight that enables booking in its current state.

A Response with Hypermedia Control for booking flights


HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Type: application/json; charset=utf-8
Expires: -1
Server: Microsoft-IIS/8.0
X-AspNet-Version: 4.0.30319
X-SourceFiles: =?UTF-
8?B?QzpcU2VsYVxNT0NcMjA0ODdBXFNvdXJjZVxBbGxmaWxlc1xNb2QwM1xMYWJmaWxlc1xCbHVlWW9uZGVyLkNvb
XBhbmlvblxCbHVlWW9uZGVyLkNvbXBhbmlvbi5Ib3N0XGZsaWdodHM=?=
X-Powered-By: ASP.NET
Date: Wed, 05 Dec 2012 11:12:19 GMT
Content-Length: 312

{
"Source":{"Country":"Italy","City":"Rome"},
"Destination":{"Country":"France","City":"Paris"},
"Departure":"2014-02-01T08:30:00",
"Duration":"02:30:00",
"Price":387.0,
"FlightNumber":"BY001",
"links":[
{
"rel": "booking",
"Link": "http://localhost/flights/by001/booking"
}
]
}

Hypermedia is what differentiates RESTful services from other HTTP-based services. It is a simple but
powerful concept that enables a range of capabilities and patterns, including service versioning and
aspect management, which are beyond the scope of this course. Today, more and more formats and APIs
are created by using Hypermedia.

One of the media types supporting Hypermedia is the Hypertext Application Language (HAL). The HAL
media type offers link-based Hypermedia. For more information about HAL, see the HAL format
specifications.

Hypertext Application Language (HAL)


http://go.microsoft.com/fwlink/?LinkID=298762&clcid=0x409

Media Types
HTTP was originally designed to transfer
Hypertext. Hypertext is a nonlinear format that
contains references to other resources, some of
which are other Hypertext resources. However,
some resources contain other formats such as
Image files and videos, which required HTTP to
support the transfer of different types of message
formats. To support different formats, HTTP uses
Multipurpose Internet Mail Extensions (MIME)
types, also known as media types. MIME types
were originally designed for use in defining the
content of email messages sent over SMTP.

Media types are made up of two parts, a type and a subtype, optionally followed by type-specific
parameters. For example, the type text indicates human-readable text and can be followed by subtypes
such as html, which indicates HTML content, or plain, which indicates a plain text payload.

Common Text Media Types


text/html
text/plain

In addition, the text type supports a charset parameter, so that the following declaration is also valid.

The Charset Parameter used in Text Media Types


text/html; charset=UTF-8
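A media type declaration like the one above can be split into its type, subtype, and parameters with plain string handling. The following is a minimal sketch, not a full media type parser:

```csharp
// Splitting a media type declaration into the parts described above:
// the type, the subtype, and an optional parameter.
using System;

class MediaTypeDemo
{
    static void Main()
    {
        const string contentType = "text/html; charset=UTF-8";

        string[] parts = contentType.Split(';');
        string[] typeAndSubtype = parts[0].Trim().Split('/');

        Console.WriteLine(typeAndSubtype[0]); // text (the type)
        Console.WriteLine(typeAndSubtype[1]); // html (the subtype)
        Console.WriteLine(parts[1].Trim());   // charset=UTF-8 (a parameter)
    }
}
```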

In HTTP, media types are declared by using headers as part of a process that is known as content
negotiation. Content negotiation is not restricted to media types; it also includes support for language
negotiation, encoding, and more. The following section shows how content negotiation is used for
handling media types.

The Accept Header


When a client sends a request, it can send a list of media types that it can accept in the response, in
order of preference.

This request message uses the Accept header in order to communicate to the server what media types it
can accept.

An HTTP request message starting content negotiation


GET http://localhost:4392/travelers/1 HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Accept-Language: en-US,en;q=0.7,he;q=0.3
User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; WOW64; Trident/6.0)
Accept-Encoding: gzip, deflate
Host: localhost:4392
DNT: 1
Connection: Keep-Alive

Although the server should try to fulfill the request for content, this is not always possible. Be aware that
in the previous request, the type */* indicates that if text/html and application/xhtml+xml are not
available, the server should return whatever type it can.

The Content-Type Header


In HTTP, any message that contains an entity-body should declare the media type of the body by using
the Content-Type header.

This request message uses the Content-Type header in order to declare what media types it uses for the
entity-body.

An HTTP response message returning application/json representation of a traveler entity

HTTP/1.1 200 OK
Server: ASP.NET Development Server/11.0.0.0
Date: Sat, 17 Nov 2012 13:27:20 GMT
X-AspNet-Version: 4.0.30319
Cache-Control: no-cache
Pragma: no-cache
Expires: -1
Content-Type: application/json; charset=utf-8
Content-Length: 188
Connection: Close

{"TravelerId":1,"TravelerUserIdentity":"aaabbbccc","FirstName":"FirstName1","LastName":"L
astName1","MobilePhone":"555-555-5555","HomeAddress":"One microsoft
road","Passport":"AB123456789"}

Media types define the structure of HTTP message bodies. Content negotiation enables servers and clients
to set expectations for the content they exchange during their HTTP transaction. Content
negotiation is not limited to media types. For example, content negotiation is used to negotiate content
compression by using the Accept-Encoding header, localization by using the Accept-Language header,
and more.

Content negotiation in the HTTP 1.1 Request For Comments (RFC 2616)
http://go.microsoft.com/fwlink/?LinkID=298763&clcid=0x409

Question: Why do you need different HTTP verbs?



Lesson 2
Creating an ASP.NET Web API Service
ASP.NET Web API is the first full-featured framework for developing HTTP-based services in the .NET
Framework. Using ASP.NET Web API gives developers reliable methods for creating, testing, and
deploying HTTP-based services. In this lesson, you will learn how to create ASP.NET Web API services and
how they are mapped to the different parts of HTTP. You will also learn how to interact directly with HTTP
messages and how to host ASP.NET Web API services.

Lesson Objectives
After you complete this lesson, you will be able to:

Describe ASP.NET Web API and how it is used for creating HTTP-based services.
Create routing rules.

Create ASP.NET Web API controllers.

Define action methods.


Create and run an HTTP-based service by using ASP.NET Web API.

Introduction to ASP.NET Web API


HTTP has been around ever since the World Wide
Web was created in the early 1990s, but adoption
of it as an application protocol for developing
services took time. In the early years of the web,
SOAP was considered the application protocol of
choice by most developers. SOAP provided a
robust platform for developing RPC-style services.
With the appearance of Internet-scale applications
and the growing popularity of Web 2.0, it became
clear that SOAP was not fit for such challenges, and
HTTP received more and more attention.

HTTP in the .NET Framework


For the better part of the first decade of its existence, the .NET Framework did not have a first-class
framework for building HTTP services. At first, ASP.NET provided a platform for creating HTML-based
web pages and ASP.NET web services, and later on, Windows Communication Foundation (WCF) provided
SOAP-based platforms. For these reasons, HTTP never received the attention it deserved.
When WCF first came out in .NET 3.0, it was a SOAP-only framework. As the world started to use HTTP as
an application-layer protocol for developing services, Microsoft started to make investments in
extending WCF to support simple HTTP scenarios. By the next release of the .NET Framework (.NET
3.5), WCF had new capabilities. These included a new kind of binding called WebHttpBinding and
new attributes for mapping methods to HTTP requests.

In 2009, Microsoft released the WCF REST Starter Kit. This added the new WebServiceHost class for
hosting HTTP-based services, and also new capabilities like help pages and Atom support. When the .NET
Framework version 4 was released, most of the capabilities of the WCF REST Starter Kit were already rolled
into WCF. This includes support for IIS hosting, in addition to new Visual Studio templates available
through the Visual Studio Extensions Manager. But even then, WCF was still missing support for a lot of
HTTP scenarios.

The need for a comprehensive solution for developing HTTP services in the .NET Framework justified
creating a new framework. Therefore, in October 2010, Microsoft announced the WCF Web API, which
introduced a new model and additional capabilities for developing HTTP-based services. These capabilities
included:
Better support for content negotiation and media types.

APIs to control every aspect of the HTTP messages.

Testability.
Integration with other relevant frameworks like Entity Framework and Unity.

The WCF Web API team released six preview versions until, in February 2012, the team was united with the
ASP.NET team, forming the ASP.NET Web API.

Creating Routing Rules


One of the first challenges when developing HTTP-based services is mapping HTTP requests to the code
executed by the server, based on the request-URI and the HTTP method. The process of mapping the
request URI to a class or a method is called routing.

Routing Tables
ASP.NET uses the System.Web.Routing.RouteTable class to hold a data structure that contains the
different routes configured before the initialization of the host. A route contains a URI template and
default values for the template. ASP.NET uses routes to map HTTP requests, based on their request-URI
and HTTP method, to the corresponding code in the server.

Defining Routes
ASP.NET Web API routes are defined by using the MapHttpRoute extension method, as shown in the
following code. This example shows the configuration of a simple route based on the name of the controller.

Configuring a route by using the System.Web.Http.HttpConfiguration class


GlobalConfiguration.Configuration.Routes.MapHttpRoute(
name: "DefaultApi",
routeTemplate: "api/{controller}/{id}",
defaults: new { id = RouteParameter.Optional }
);
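To build intuition for how a route template is evaluated, the following standalone sketch (an illustration only, not the actual ASP.NET routing engine) matches a relative request path against the "api/{controller}/{id}" template above and extracts the {controller} placeholder value:

```csharp
using System;

static class RouteTemplateSketch
{
    // Illustration only: mimics how "api/{controller}/{id}" (with an
    // optional {id}, as configured above) is matched against a relative
    // request path. Returns the {controller} value, or null on no match.
    public static string MatchController(string relativePath)
    {
        string[] segments = relativePath.Trim('/').Split('/');

        // The template has a literal "api" segment, a required {controller}
        // segment, and an optional {id} segment.
        if (segments.Length < 2 || segments.Length > 3 ||
            !string.Equals(segments[0], "api", StringComparison.OrdinalIgnoreCase))
        {
            return null;
        }

        return segments[1];
    }
}
```

Given "api/flights/by001", the sketch returns flights, which ASP.NET Web API would then resolve to a controller class named FlightsController, as described in the next headings.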

Route Definitions in Detail


To understand routes, you have to understand how ASP.NET Web API services are implemented. ASP.NET
Web API services are implemented in classes called controllers, each controller exposing one or more
public methods called actions. The hosting environment uses routing to deliver HTTP requests to the
actions designed to handle those requests.
Developing Windows Azure and Web Services 3-15

The following headings discuss controllers and actions in depth, because understanding what controllers
and actions are is important for understanding routes.

How Controllers Are Mapped


ASP.NET Web API services are implemented in classes called controllers. Controllers must satisfy two
constraints:

They must derive from the System.Web.Http.ApiController class.

By convention, they must be named with the Controller suffix.

When ASP.NET Web API receives a request that matches the template in the route, it looks, by name, for a
controller that matches the value passed in the {controller} placeholder of the URI template. For
example, a URI with the relative path "api/flights/by001" will be evaluated against the
template defined in the earlier example ("api/{controller}/{id}"), and ASP.NET Web API will look for a
controller named FlightsController.

This controller maps when the flights value is passed as the value for the {controller} placeholder.

The flights controller


public class FlightsController : ApiController
{
}

How Actions Are Mapped


Conventions play a big role in the ASP.NET ecosystem, and ASP.NET Web API is no different. When
looking at the route template, you can see that there is no placeholder for actions, even though action
methods must be executed in order to handle incoming requests. This is possible because of a convention
that maps methods to HTTP verbs based on their prefix.
This action method is chosen when sending a GET request by using the flights/by001 path.

An Action definition
public class FlightsController : ApiController
{
public HttpResponseMessage Get(string id)
{
// Place code here to return an HttpResponseMessage object
}
}

Note: This convention only supports the GET, HEAD, PUT, POST, OPTIONS, PATCH, and
DELETE methods. However, actions also support attribute-based routing, described later in this
lesson.

How Parameters Are Mapped


In this example, a parameter called id was also mapped as part of the absolute path of the URI. In
ASP.NET, parameters are matched in a process known as parameter binding. The default parameter
binding behavior is to bind simple types from the URI and complex types from the entity-body of
the request.

For parameter binding, simple types include all .NET primitive types with the addition of DateTime,
Decimal, TimeSpan, String, and Guid.

The ApiController Class


ASP.NET Web API services are implemented in classes called controllers, which derive from the
System.Web.Http.ApiController class. As soon as a request is routed to a controller based on a
URI, the ApiController takes control of finding and executing the appropriate action.

The ApiController also provides APIs for handling HTTP requests, accessing the HttpConfiguration,
validating input parameters, and interacting with the context of the operation. In fact, a big part of the
capabilities of ASP.NET Web API that are described in this module and also in Module 4,
"Extending and Securing ASP.NET Web API Services", are exposed and managed by the ApiController.

Defining Controllers
To create a controller, you have to do the following:

Create a class that derives from the System.Web.Http.ApiController class.


Name the class with the Controller suffix.

This code example shows how to define a controller.

The flights controller


public class FlightsController : ApiController
{
}

The Responsibilities of the ApiController


ASP.NET Web API controllers have to derive from the ApiController class. The reason is that, in addition
to defining a logical unit of the service, the ApiController class does a lot of work. The responsibilities of
the ApiController class include:

Action selection. The ApiController class is responsible for calling the action selector, which in turn
executes the action method. Action selection is described in depth in the next topic,
Action Methods and HTTP Verbs.

Applying filters. ASP.NET Web API filters let developers extend the request/response pipeline.
The ApiController class is in charge of applying and executing the filters in the correct order
before and after the execution of the action methods.

Additional APIs in the ApiController


The ApiController class also exposes additional APIs. Most of them are based on the
System.Web.Http.Controllers.HttpControllerContext class, which represents the context of the current
HTTP request. The context contains information such as the current route that is being used and the
request message.

The ApiController exposes the HttpControllerContext through the ControllerContext property. In
addition, the ApiController provides some properties that expose specific data that is part of the
ControllerContext property, including:

The Request property. This API provides access to the HttpRequestMessage representing the HTTP
request for the operation. The HttpRequestMessage class is discussed in depth in Lesson 3, Handling
HTTP Requests and Responses, of this module.

The Configuration property. The Configuration property exposes the configuration that is being
used by the host.

Additional Reading: Filters are discussed in-depth throughout Module 4. Action filters are
discussed in Module 4, Lesson 1, The ASP.NET Web API Request Pipeline; Exception filters are
discussed in Module 4, lesson 2, The ASP.NET Web API Response Pipeline; and Authorization
filters are discussed in Module 4, lesson 4, Implementing security in ASP.NET Web API
Services.

Action Methods and HTTP Verbs


As soon as ASP.NET Web API chooses a controller, it can start the next step: choosing the method
that will handle the request. The selection of the action is performed by the ApiController class. When
choosing an action method, the ApiController class gets an instance of a class implementing the
System.Web.Http.Controllers.IHttpActionSelector interface from the ControllerContext. The default
implementation is the System.Web.Http.Controllers.ApiControllerActionSelector class.

Method selection is based on the HTTP method of the request and on the request-URI.
There are several techniques for mapping actions:
Mapping to HTTP methods based on convention.

Mapping to request-URIs based on the {action} placeholder in route templates.

In addition to matching the HTTP method or request-URI to the method name or attribute, ASP.NET Web
API also takes into consideration the parameters that are passed to the method and makes sure that they
match.

Mapping by Action Name


In addition to the {controller} placeholder in routes, ASP.NET Web API also has a special placeholder for
actions: {action}. When the route identifies an action, ASP.NET Web API maps it to action methods that
match the name of the action. Matching is performed for methods that fit one of the following
criteria:

An action matching the name Flights is selected by one of the following criteria:

- The name of the method is the same as the action name.

  public HttpResponseMessage Flights()
  {
  }

- The name of the method matches the action name with the prefix of a valid HTTP method name.

  public HttpResponseMessage GetFlights()
  {
  }

- The method has an action name that is defined by using the [ActionName] attribute.

  [ActionName("Flights")]
  public HttpResponseMessage AirTrips()
  {
  }

Mapping by HTTP Method


The ApiControllerActionSelector also maps actions by HTTP methods. This can be done by using one of
the following techniques:

- Matching by using a prefix or method name:

  public HttpResponseMessage GetFlights()

  public HttpResponseMessage Get()

- Matching by using the [AcceptVerbs] attribute, which accepts one or more HTTP method names:

  [AcceptVerbs("GET")]
  public HttpResponseMessage AirTrips()

  [AcceptVerbs("GET", "HEAD")]
  public HttpResponseMessage AirTrips()

- Matching by using a specific implementation of ActionMethodSelectorAttribute, such as [HttpGet] or [HttpDelete]:

  [HttpGet]
  public HttpResponseMessage Flights(int id)

  [HttpDelete]
  public HttpResponseMessage Flights(int id)

Note: This convention supports only the GET, HEAD, PUT, POST,
OPTIONS, PATCH, and DELETE methods.

Demonstration: Creating Your First ASP.NET Web API Service


In this demonstration, you will create a new ASP.NET Web application project using the Web API
template, view the code generated by Visual Studio 2012, and apply changes to the actions and routing
templates.

Demonstration Steps
1. Open Visual Studio 2012 and create a new ASP.NET MVC 4 Web Application project by using the
Web API template. Name the new project MyApp.

2. Review the content of the WebApiConfig.cs file that is under the App_Start folder.

3. Review the content of the ValuesController.cs file that is under the Controllers folder. The
parameterless Get action method can be invoked by using HTTP (for example, using the /api/values
relative URI).

4. Run the project without debugging, and access the parameterless Get action method from the
browser.

In the browser, append api/values to the end of the address, and press Enter.

Open the values.json file in Notepad and observe its content.

5. In the ValuesController class, decorate the parameterless Get action with the [ActionName]
attribute, and set the action name to List.
6. In the WebApiConfig class, add a new route to support MVC-style invocation.

Add the route by using the config.Routes.MapHttpRoute method.

Set the parameters of the method to the following values.

name: ActionApi

routeTemplate: api/{controller}/{action}/{id}

defaults: a new anonymous object with an id property set to RouteParameter.Optional

Place the call to the method before the existing routing code.
7. Run the project and access the parameterless Get action method from the browser, using the MVC-
style routing.

In the browser, append api/values/list to the end of the address, and press Enter.
Open the list.json file in Notepad and observe its content.

Question: How does ASP.NET Web API know which method to invoke when it receives a request
from the client?

Lesson 3
Handling HTTP Requests and Responses
Creating an instance of a class and finding the method to execute is not always enough. To
provide a real solution for HTTP-based services, ASP.NET Web API has to provide additional functionality
for interacting with HTTP messages. This functionality includes mapping parts of the HTTP request to
method parameters, in addition to a comprehensive API for processing and controlling HTTP messages.
Using that API, you can easily interact with headers in the request and response messages, control
status codes, and more.

Lesson Objectives
After completing this lesson, you will be able to:

Describe how parameter binding works in ASP.NET Web API.


Use the HttpRequestMessage class to handle incoming requests.

Use the HttpResponseMessage class to control the response of an action.

Throw HttpResponseException exceptions to control HTTP errors.

Binding Parameters to Request Message


After locating the controller and action method, there is still one last task that ASP.NET Web API
must handle: mapping data from the HTTP request to method parameters. In ASP.NET Web API, this
process is known as parameter binding.

Data in HTTP messages can be passed in the following ways:

The message-URI. In HTTP, the absolute path and query are used to pass simple values that help
identify the resource and influence the representation.

The entity-body. In some HTTP messages, the message body passes data.

Note: Headers are used to pass metadata, not business data. Header data is not bound to
method parameters by default; it is accessed by using the HttpRequestMessage class
described later in this lesson.

By default, ASP.NET Web API differentiates between simple and complex types. Simple types are mapped
from the URI, and complex types are mapped from the entity-body of the request. For parameter binding,
simple types include all .NET primitive types (int, char, bool, and so on) with the addition of DateTime,
Decimal, TimeSpan, String, and Guid.
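The simple-type rule above can be captured in a small helper. The sketch below is an illustration of the rule as stated, not the actual ASP.NET Web API implementation (the real binder handles additional cases, such as types with type converters):

```csharp
using System;

static class ParameterBindingSketch
{
    // Illustration of the default rule described above: simple types bind
    // from the URI; any other (complex) type binds from the entity-body.
    public static bool BindsFromUri(Type parameterType)
    {
        return parameterType.IsPrimitive            // int, char, bool, and so on
            || parameterType == typeof(string)
            || parameterType == typeof(decimal)     // not a CLR primitive, listed explicitly
            || parameterType == typeof(DateTime)
            || parameterType == typeof(TimeSpan)
            || parameterType == typeof(Guid);
    }
}
```

For an action such as Post(int id, Reservation reservation), this rule binds id from the URI and reservation from the request body.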

The HttpRequestMessage Class


Invoking a method is an important aspect of HTTP-based services. But HTTP provides vast
functionality that requires analyzing the request message in its entirety. For example, request
headers can provide important information, including the version of the entity passed, user
credentials, cookie data, and the requested response format. ASP.NET Web API uses the
System.Net.Http.HttpRequestMessage class to represent incoming HTTP request messages.

The HttpRequestMessage class can be accessed by most of the run-time components that
compose the request pipeline, including message handlers, formatters, and filters. HttpRequestMessage
can also be accessed inside ASP.NET Web API controllers by using the Request property.

This code example uses the AcceptLanguage property of the HttpRequestMessage class to retrieve the
value of the Accept-Language header and return a localized greeting message.

Retrieve the value of the Accept-Language header by using the Request property
public string Get(int id)
{
    var lang = Request.Headers.AcceptLanguage;

    var bestLang = (from l in lang
                    orderby l.Quality descending
                    select l.Value).FirstOrDefault();

    switch (bestLang)
    {
        case "en":
            return "Hello";
        case "da":
            return "Hej";
    }

    return string.Empty;
}

The HttpResponseMessage Class


Action methods can return both simple and
complex types that are serialized to a format
based on the Accept header. Although ASP.NET
Web API can handle the content negotiation and
serialization, it is sometimes required to handle
other aspects of the HTTP response message (for
example, returning a status code other than 200
or adding headers).

The System.Net.Http.HttpResponseMessage
class enables programmers to define every aspect
of the HTTP response message the action returns.

To control the HTTP response, you must create an action with HttpResponseMessage as its
return type. Inside the action, use the Request.CreateResponse or
Request.CreateResponse<T> methods to create a new HttpResponseMessage.

This code example creates a new flight reservation and returns an HTTP message that has two important
characteristics: a 201 (Created) status code and a Location header with the URI of the newly created resource.

Using the HttpResponseMessage class to control the HTTP response


public HttpResponseMessage Post(Reservation reservation)
{
    Reservations.Add(reservation);
    Reservations.Save();

    var response = Request.CreateResponse(HttpStatusCode.Created, reservation);
    response.Headers.Location = new Uri(Request.RequestUri,
        reservation.ConfirmationNumber.ToString());
    return response;
}

Throwing Exceptions with the HttpResponseException Class


In HTTP, errors are communicated by using two mechanisms:

- HTTP status codes. Provide a numeric, application-readable representation of the result of the
request to the server.

- Entity-body. For most status codes, HTTP accepts an entity body that provides clients with
details about the error that occurred.

Although both aspects of HTTP errors can be controlled by using the HttpResponseMessage class, when
you deal with more complex scenarios, returning different results can create a complex code base.
Modern programming languages use exceptions to provide simple control flow when an error occurs.
ASP.NET Web API provides the System.Web.Http.HttpResponseException class, which enables
programmers to control the HttpResponseMessage by throwing an exception.

To throw an HttpResponseException, you first create a new HttpResponseMessage and set the status
code, headers, and content that you want the response to have. Then, you throw a new
HttpResponseException that accepts the HttpResponseMessage as a constructor parameter.

This code example throws an HttpResponseException to return a 404 (Not Found) response.

Throwing an HttpResponseException
if (flight == null)
{
    throw new HttpResponseException(
        new HttpResponseMessage(HttpStatusCode.NotFound));
}

Demonstration: Throwing Exceptions


In this demonstration, you will learn how to handle exceptions in ASP.NET Web API. You will learn how to
use the HttpResponseMessage class to control the status-code of the HTTP response message, and how
to use the HttpResponseException class to provide better control flow if there is an error.

Demonstration Steps
1. In Visual Studio 2012, open the
D:\Allfiles\Mod03\Democode\ThrowHttpResponseException\start\start.sln solution.

2. Open the DestinationsController.cs from the Controllers folder, and review the contents of the Get
method.
3. Change the Get method so that it returns a Destination object and not an HttpResponseMessage
object.

Change the return type of the method from HttpResponseMessage to Destination.

Remove the if-else statement and return the destination variable.

4. Add handling for non-existing destinations by throwing an HttpResponseException.

If the destination was not found, throw a new HttpResponseException object.

Initialize the exception with a new HttpResponseMessage, and set the status code of the message to
HttpStatusCode.NotFound.
5. Run the project without debugging, and verify the Get method returns an HTTP 404 for unknown
destinations.

In the browser, append api/destinations/1 to the end of the address, and press Enter. Open the
file in Notepad and verify that you see information for Seattle.
In the browser, append api/destinations/6 to the end of the address, and press Enter. Verify that you
get an HTTP 404 response.
Question: In which case should you use HttpResponseException?

Lesson 4
Hosting and Consuming ASP.NET Web API Services
As with any other application, ASP.NET Web API services need a process that provides them with a run-time
environment. This run time must accommodate code that potentially serves a large number of clients.
When developing services, hosting environments provide the majority of the capabilities needed to
service client requests and maintain quality of service. After hosting the service, you will learn how to
consume it from various client environments, including HTML, JavaScript, and the .NET
Framework.

Lesson Objectives
After you complete this lesson, you will be able to:

Describe the main capabilities of IIS.


Host ASP.NET Web API in IIS.

Self-host ASP.NET Web API in .NET applications.


Consume ASP.NET Web API by using browser-based applications.

Consume ASP.NET Web API from .NET Framework applications.

Introduction to IIS
Internet Information Services (IIS) is a web server
service that is a part of Microsoft Windows. IIS was
first released in 1995 as an update to Windows NT
3.51 and has been included in every version of
Microsoft server operating systems since. IIS
provides a hosting environment for applications and services, and also a set of utilities and
extensions for managing different aspects of the application and service life cycle.

Core capabilities of IIS


IIS has the following capabilities:

Extensibility. Provides a processing pipeline for messages that is built out of an extensible set of
components called modules. Each module is in charge of performing a specific action based on the
messages passing through the pipeline.

Security. Provides built-in modules for handling security. This includes capabilities for managing
secured conversation with Secure Socket Layer (SSL), handling different kinds of HTTP authentication
(Basic, Digest, and so on), and IP restriction.

Reliability. Provides a set of worker processes for application and services. These worker processes
are called application pools and provide process management for one or more applications.
Application pools are monitored and managed by IIS and provide benefits like isolation between
services and applications, resources, and fault management.
Manageability. Provides management tools, including an MMC-based UI and a set of Microsoft
PowerShell commands, for managing IIS core functionality and extensions. IIS also provides
built-in logging and diagnostics capabilities that simplify managing production environments.

Performance. Provides built-in caching and compression modules to improve the performance of HTTP
applications. IIS also uses the HTTP.SYS kernel-mode driver to listen to HTTP traffic. HTTP.SYS also
provides caching mechanisms that enable cached requests to be handled completely in
kernel mode, providing high-performance caching.

Scalability. Provides a set of tools to manage multiple servers. These include centralized
configuration, sharing application files between servers, and a remote administration module that
enables the users to manage servers in a centralized manner. IIS supports load balancing by using
extensions, such as Microsoft Application Request Routing (ARR), which provides HTTP-based message
routing, and integrates with Network Load Balancing (NLB), which provides load balancing at the
network layer.

Hosting ASP.NET Web API Services by Using IIS


IIS has been hosting the ASP.NET line of products
ever since .NET 1.0. This makes IIS a very
appealing host for ASP.NET Web API services. By
hosting your ASP.NET Web API projects in IIS, you
can gain access to many of the features of IIS. This
includes the following:

Easy deployment to remote computers and web farms with Web Deploy.

Performance features, such as response compression and caching.

Reliability support by running your ASP.NET Web API service in a monitored application pool.

Ability to run multiple ASP.NET Web API projects on the same computer by using IIS virtual
directories and isolated web applications.

Note: Deploying web applications by using Web Deploy is covered in Module 8,
Deploying Services.

By default, when you create an ASP.NET Web API project, Visual Studio 2012 uses IIS Express to host your
project, and not the regular IIS. IIS Express is a lightweight, self-contained version of IIS optimized for the
development environment. IIS Express can be installed on computers that do not have IIS installed, or on
computers that cannot run the latest version of IIS. For example, if you are developing on a computer that
is running Windows Server 2008, you have IIS 7 and you cannot upgrade it to IIS 8. However, you can
install IIS 8.0 Express on that computer.

The following link describes in detail the differences between IIS and IIS Express.

IIS Express Overview


http://go.microsoft.com/fwlink/?LinkID=298764&clcid=0x409

The main difference between IIS and IIS Express is the security context. IIS Express uses the security
context of the logged-on user to start the hosting process, whereas IIS uses the identity defined in the
application pool, which is usually a non-privileged built-in account.

Note: Using a different security context can lead to differences in the behavior of the
application. For example, when you host your application in IIS Express, it might be able to access
the database because it uses your logged on identity which has administrative permissions in the
database. However, when the application is hosted in IIS, it might fail trying to access the
database because the identity used by the application pool does not have the required
permissions to log on to the database.

Ideally, after you have verified that your application runs correctly, use IIS to host your application
instead of IIS Express. To instruct Visual Studio 2012 to use IIS, right-click the ASP.NET Web API project in
the Solution Explorer window, and then click Properties. On the Web tab, scroll to the Servers group,
clear the Use IIS Express check box, and then click Create Virtual Directory to create a
directory for your web application in IIS.

This image is a snapshot of the ASP.NET web project properties in Visual Studio 2012, where you set the
kind of hosting server (IIS or IIS Express).

FIGURE 3.1: THE WEB PROPERTIES TAB IN VISUAL STUDIO 2012

Consuming Services from Browsers


When HTTP was built in the early 1990s, it was made for a very specific kind of client: web
browsers rendering HTML. Before the creation of JavaScript in 1995, HTML used two of the
three HTTP methods in HTTP 1.0: GET and POST. GET requests are usually invoked by entering a
URI in the address bar or through hypertext references such as img and script tags.

For example, entering the http://localhost:7086/Locations URI generates
the following GET request.

A GET request invoked by a web browser


GET http://localhost:7086/Locations HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Accept-Language: en-US,en;q=0.7,he;q=0.3
User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; WOW64; Trident/6.0)
Accept-Encoding: gzip, deflate
Host: localhost:7086

DNT: 1
Connection: Keep-Alive

Another way to start HTTP requests from a browser is by using HTML forms. HTML forms are HTML elements
that create a form-like UI in the HTML document that lets the user insert and submit data to the server.
HTML forms contain sub-elements, called input elements, each of which represents a piece of data both in the
UI and in the resulting HTTP message.

This HTML form lets users submit a new location to the server from a web browser, generating a POST
request.

An HTML form for submitting a new location


<form name="newLocation" action="/locations/" method="post">
<input type="text" name="LocationId" /><br />
<input type="text" name="Country" /><br />
<input type="text" name="State" /><br />
<input type="text" name="City" /><br />
<input type="submit">
</form>

This HTTP message was generated by submitting the newLocation HTML form.

An HTML form generated POST request


POST http://localhost:7086/locations/ HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Referer: http://localhost:7086/default.html
Accept-Language: en-US,en;q=0.7,he;q=0.3
User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; WOW64; Trident/6.0)
Content-Type: application/x-www-form-urlencoded
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
Content-Length: 49
DNT: 1
Host: localhost:7086
Pragma: no-cache

LocationId=7&Country=Belgium&State=&City=Brussels

The most flexible mechanism for starting HTTP requests from a browser environment is JavaScript. Using
JavaScript provides two main capabilities that are lacking in other browser-based techniques:

Complete control over the HTTP request (including HTTP method, headers, and body).

Asynchronous JavaScript and XML (AJAX). Using AJAX, you can send requests from the client after the
browser has completed loading the HTML. Based on the result of the calls, you can use JavaScript to
update parts of the HTML page.

Demonstration: Consuming Services Using JQuery


In this demonstration, you will call an ASP.NET Web API service using jQuery AJAX calls.

Demonstration Steps
1. In Visual Studio 2012, open the
D:\Allfiles\Mod03\Democode\ConsumingFromJQuery\Begin\JQueryClient\JQueryClient.sln
solution.

2. In the JQueryClient project, expand the Views folder, then expand the Home folder. Review the
content of the Index.cshtml file. Locate the script section and observe how the code uses jQuery to
retrieve the data from the server.

3. Add the following JavaScript code to the <script> element, to override the default behavior for
submitting a DELETE request to the server.

$("#deleteLocation").submit(function (event) {
event.preventDefault();
var desId = $(this).find('input[name="LocationId"]').val();
$.ajax({
type: 'DELETE',
url: 'destinations/' + desId
});
});

4. Run the project and use the form to delete a destination.

In the JQueryClient project, expand the Controllers folder, open the DestinationsController.cs file,
and place a breakpoint in the Delete method.

Press F5 to debug the application. In the browser, type 1 in the Location id box, and then click
Delete.

Verify the breakpoint in the Delete method is reached.


Press Shift+F5 to stop the debugger.
Question: Why do you need to use jQuery to consume Web API services?

Consuming Services from .NET Clients with HttpClient


ASP.NET Web API also provides a new client-side API for consuming HTTP services in .NET Framework
applications. The main class of this API is System.Net.Http.HttpClient, which provides basic
functionality for sending requests and receiving responses.

HttpClient keeps a consistent API with ASP.NET Web API by using HttpRequestMessage and
HttpResponseMessage for handling HTTP messages. The HttpClient API is a task-based
asynchronous API, providing a simple model for consuming HTTP asynchronously.

This code example uses HttpClient to send a GET request, receive an HttpResponseMessage from
the server, and then read its content as a string.

Using the HttpClient GetAsync method


var client = new HttpClient
{
    BaseAddress = new Uri("http://localhost:12534/")
};

HttpResponseMessage message = await client.GetAsync("destinations");

var res = await message.Content.ReadAsStringAsync();
Console.WriteLine(res);

Although this code provides a simple asynchronous API, it is not common for the client to require a string
representation of the data. A more useful approach is to obtain a deserialized object based on the entity
body.

To support serializing and deserializing objects, HttpClient uses a set of extensions defined in
System.Net.Http.Formatting.dll, which is part of the Microsoft ASP.NET Web API Client Libraries
NuGet package. System.Net.Http.Formatting.dll adds the extension methods to the
System.Net.Http namespace, so no additional using directive is needed.

This code example uses the ReadAsAsync<T> extension method to deserialize the content of the HTTP
message into a list of destinations.

Using the ReadAsAsync<T> extension method


var client = new HttpClient
{
    BaseAddress = new Uri("http://localhost:12534/")
};

HttpResponseMessage message = await client.GetAsync("destinations");

var destinations = await message.Content.ReadAsAsync<List<Destination>>();
Console.WriteLine(destinations.Count);
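The same client libraries also provide extension methods for sending objects, such as PostAsJsonAsync, which serializes an object to JSON and sends it in the body of a POST request. The following sketch assumes the same destinations service as the preceding examples; the Destination property name used here is illustrative and may differ in your model.

```csharp
var client = new HttpClient
{
    BaseAddress = new Uri("http://localhost:12534/")
};

// PostAsJsonAsync serializes the object to JSON and sets the
// Content-Type header to application/json
var newDestination = new Destination { City = "Rome" }; // property name is illustrative
HttpResponseMessage response = await client.PostAsJsonAsync("destinations", newDestination);

// EnsureSuccessStatusCode throws an HttpRequestException for non-success status codes
response.EnsureSuccessStatusCode();
```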

Demonstration: Consuming Services Using HttpClient


In this demonstration, you will learn how to consume HTTP services from .NET Framework applications by
using the HttpClient class. You will also learn how the HttpClient class can serialize and deserialize the
body of HTTP messages into objects by using extensions defined in the System.Net.Http.Formatting.dll.

Demonstration Steps
1. Open Visual Studio 2012 as an administrator and open the
D:\Allfiles\Mod03\Democode\ConsumingFromHttpClient\begin\HttpClientApplication\HttpClientApp
lication.sln solution.
2. Add the Microsoft ASP.NET Web API Client Libraries NuGet package to the
HttpClientApplication.Client project.

3. Add code to perform a GET request for the destinations resource inside the CallServer method and
print the response's content as a string to the console window.

Create a new HttpClient object and store it in a variable named client.

Set the client.BaseAddress property to the URI http://localhost:12534/

Call the client.GetAsync method with the relative URI api/Destinations, and use the await keyword
to call the method asynchronously.

Store the return value of the GetAsync method in a variable of type HttpResponseMessage. Name
the new variable message.

Read the content of the response message's body by calling the


message.Content.ReadAsStringAsync method. Use the await keyword to call the method
asynchronously.

Write the returned response string to the console output.

4. Add code to deserialize the response into a List<Destination>.



After the code you added in the previous step, call the message.Content.ReadAsAsync method with
the generic type List<Destination> to deserialize the response message to a list of Destination
objects. Use the await keyword to call the method asynchronously.

Write the size of the returned destinations list to the console.


5. Run the server application, and then the client application. Show how the client code retrieves data
from the server.

Question: What are the benefits of HttpClient that make it more useful than
HttpWebRequest and WebClient?

Lab: Creating the Travel Reservation ASP.NET Web API Service
Scenario
Now that the data layer has been created, services that provide travel destination information, flight
schedules, and booking capabilities can be developed. Blue Yonder Airlines intends to support many
device types, so the back end must expose an HTTP-based service. Therefore, the service is
implemented by using ASP.NET Web API. In this lab, you will create a Web API service that supports basic
CRUD actions over the Blue Yonder Airlines database. In addition, you will update the Travel Companion
Windows Store app to consume the newly created service.

Objectives
After you complete this lab, you will be able to:

Create an ASP.NET Web API service.

Implement CRUD actions in the service.

Consume an ASP.NET Web API service with the System.Net.Http.HttpClient class.

Lab Setup
Estimated Time: 30 Minutes
Virtual Machine: 20487B-SEA-DEV-A, 20487B-SEA-DEV-C

User name: Administrator, Admin

Password: Pa$$w0rd, Pa$$w0rd


For this lab, you will use the available virtual machine environment. Before you begin this lab, you must
complete the following steps:

1. On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.

2. In Hyper-V Manager, click MSL-TMG1, and in the Action pane, click Start.
3. In Hyper-V Manager, click 20487B-SEA-DEV-A, and in the Action pane, click Start.

4. In the Action pane, click Connect. Wait until the virtual machine starts.

5. Sign in using the following credentials:


User name: Administrator

Password: Pa$$w0rd

6. Return to Hyper-V Manager, click 20487B-SEA-DEV-C, and in the Action pane, click Start.
7. In the Action pane, click Connect. Wait until the virtual machine starts.

8. Sign in using the following credentials:


User name: Admin

Password: Pa$$w0rd

9. Verify that you received credentials to sign in to the Azure portal from your training provider. These
credentials and the Azure account will be used throughout the labs in this course.

Exercise 1: Creating an ASP.NET Web API Service


Scenario
Implement the travelers service by using ASP.NET Web API. Start by creating a new ASP.NET Web API
controller, and implement CRUD functionality using the POST, GET, PUT and DELETE HTTP methods.

The main tasks for this exercise are as follows:


1. Create a new API Controller for the Traveler Entity

Task 1: Create a new API Controller for the Traveler Entity


1. In the 20487B-SEA-DEV-A virtual machine, open the
D:\AllFiles\Mod03\LabFiles\begin\BlueYonder.Companion\BlueYonder.Companion.sln solution file,
and add a new class called TravelersController to the BlueYonder.Companion.Controllers project.

2. Change the access modifier of the class to public, and derive it from the ApiController class.

3. Create a private property named Travelers of type ITravelerRepository and initialize it in the
constructor.
Create a new property named Travelers of type ITravelerRepository.

Create a default constructor for the TravelersController class.

Initialize the Travelers property with a new instance of the TravelerRepository class.
4. Create an action method named Get to handle GET requests.

The method receives a string parameter named id and returns an HttpResponseMessage object.
Call the FindBy method of the ITravelerRepository interface to search for a traveler using the id
parameter. The ID of the traveler is stored in the traveler's TravelerUserIdentity property.

If the traveler was found, use the Request.CreateResponse method to return an HTTP response
message with the traveler. Set the status code of the response to OK.
If a traveler was not found, use the Request.CreateResponse method to return an empty message. Set
the status code to NotFound (HTTP 404).
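A minimal sketch of the Get action described above might look like the following. The ITravelerRepository members and the TravelerUserIdentity property come from the lab's data layer, but the exact FindBy signature (a predicate here) is an assumption and may differ in your solution.

```csharp
public HttpResponseMessage Get(string id)
{
    // FindBy searches the repository; TravelerUserIdentity holds the traveler's ID
    // (the predicate-based signature is an assumption)
    Traveler traveler = Travelers.FindBy(t => t.TravelerUserIdentity == id).FirstOrDefault();

    if (traveler != null)
    {
        // Return the traveler with a 200 (OK) status code
        return Request.CreateResponse(HttpStatusCode.OK, traveler);
    }

    // No matching traveler: return an empty 404 (Not Found) response
    return Request.CreateResponse(HttpStatusCode.NotFound);
}
```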

5. Insert a breakpoint at the beginning of the Get method.

6. Create an action method to handle POST requests.

The method receives a Traveler parameter called traveler and returns an HttpResponseMessage
object.
Implement the method by calling the Add and then the Save methods of the Travelers repository.

Create an HttpResponseMessage returning the HttpStatusCode.Created status and the newly


created traveler object.

Set the Location header of the response to the URI where you can access the newly created traveler.
The new URI should be a concatenation of the request URI and the new traveler's ID.

Note: You can refer to the implementation of the Post method in the
ReservationsController class for an example of how to set the Location header.

Return the HTTP response message.
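The Post action outlined in these steps can be sketched as follows. The repository method names come from the lab instructions, while the way the new traveler's ID is exposed (the TravelerId property here) is an assumption about the entity model.

```csharp
public HttpResponseMessage Post(Traveler traveler)
{
    // Add the new traveler and persist the change
    Travelers.Add(traveler);
    Travelers.Save();

    // Return 201 (Created) with the new traveler in the response body
    HttpResponseMessage response = Request.CreateResponse(HttpStatusCode.Created, traveler);

    // Point the Location header to the URI of the newly created resource
    // (TravelerId is a hypothetical key property)
    response.Headers.Location = new Uri(Request.RequestUri + "/" + traveler.TravelerId);

    return response;
}
```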

7. Insert a breakpoint at the beginning of the Post method.

8. Create an action method to handle PUT requests.



The method receives a string parameter called id and a Traveler parameter called traveler. The
method returns an HttpResponseMessage object.

If the traveler does not exist in the database, use the Request.CreateResponse method to return an
HTTP response message with the HttpStatusCode.NotFound status.

Note: To check if the traveler exists in the database, use the FindBy method as you did in
the Get method.

If the traveler exists, call the Edit and then the Save methods of the Travelers repository to update
the traveler, and then use the Request.CreateResponse method to return an HTTP response
message with the HttpStatusCode.OK status.

Note: The HTTP PUT method can also be used to create resources. Checking if the
resources exist is performed here for simplicity.

9. Insert a breakpoint at the beginning of the Put method.

10. Create an action method to handle DELETE requests.

The method receives a string parameter called id.


If the traveler does not exist in the database, use the Request.CreateResponse method to return an
HTTP response message with the HttpStatusCode.NotFound status.

Note: To check if the traveler exists in the database, use the FindBy method as you did in
the Get method.

If the traveler exists, call the Delete and then the Save methods of the Travelers repository, and then
use the Request.CreateResponse method to return an HTTP response message with the
HttpStatusCode.OK status.

Results: After you complete this exercise, you will be able to run the project from Visual Studio 2012 and
access the travelers service.

Exercise 2: Consuming an ASP.NET Web API Service


Scenario
Consume the travelers service from the client application. Start by implementing the GetTravelerAsync
method by invoking a GET request to retrieve a specific traveler from the server. Continue by
implementing the CreateTravelerAsync method by invoking a POST request to create a new traveler.
And complete the exercise by implementing the UpdateTravelerAsync method by invoking a PUT
request to update an existing traveler.

The main tasks for this exercise are as follows:

1. Consume the API Controller from a Client Application


2. Debug the Client App

Task 1: Consume the API Controller from a Client Application


1. In the 20487B-SEA-DEV-C virtual machine, open the
D:\AllFiles\Mod03\LabFiles\begin\BlueYonder.Companion.Client\BlueYonder.Companion.Client.sln
solution.

2. In the BlueYonder.Companion.Client project, open the DataManager class from the Helpers folder
and implement the GetTravelerAsync method.

Remove the return null line of code.

Build the relative URI using the string format "{0}travelers/{1}". Replace the {0} placeholder with the
BaseUri property and the {1} placeholder with the hardwareId variable.

Call the client.GetAsync method with the relative address you constructed. Use the await keyword
to call the method asynchronously. Store the response in a variable called response.

Check the value of the response.IsSuccessStatusCode property. If the value is false, return null.

If the value of the response.IsSuccessStatusCode property is true, read the response into a string by
using the response.Content.ReadAsStringAsync method. Use the await keyword to call the
method asynchronously.

Use the JsonConvert.DeserializeObjectAsync static method to convert the JSON string to a
Traveler object. Call the method by using the await keyword and return the deserialized Traveler object.
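Putting these steps together, the GetTravelerAsync method can be sketched as shown below. The BaseUri property, the hardwareId value, and JsonConvert.DeserializeObjectAsync come from the lab; the client field and the exact member shapes are assumptions about the DataManager class.

```csharp
public async Task<Traveler> GetTravelerAsync(string hardwareId)
{
    // Build the relative URI for the traveler resource
    string uri = string.Format("{0}travelers/{1}", BaseUri, hardwareId);

    HttpResponseMessage response = await client.GetAsync(uri);

    // A non-success status code means the traveler could not be retrieved
    if (!response.IsSuccessStatusCode)
        return null;

    string json = await response.Content.ReadAsStringAsync();

    // Deserialize the JSON string into a Traveler object
    return await JsonConvert.DeserializeObjectAsync<Traveler>(json);
}
```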
3. Insert a breakpoint at the beginning of the GetTravelerAsync method.

4. Review the CreateTravelerAsync method. The method sets the ContentType header to request a
JSON response. The method then uses the PostAsync method to send a POST request to the server.
5. Insert a breakpoint at the beginning of the CreateTravelerAsync method.

6. Review the UpdateTravelerAsync method. The method uses the client.PutAsync method to send a
PUT request to the server.
7. Insert a breakpoint at the beginning of the UpdateTravelerAsync method.

Task 2: Debug the Client App


1. Go back to the virtual machine 20487B-SEA-DEV-A and start debugging the
BlueYonder.Companion.Host project.
2. Go back to the virtual machine 20487B-SEA-DEV-C and start debugging the
BlueYonder.Companion.Client project.

3. Debug the client app and verify that you break before sending a GET request to the server. Press F5
to continue running the code.

4. Go back to the virtual machine 20487B-SEA-DEV-A and debug the service code.

The breakpoint you have set in the Get method of the TravelersController class should be
highlighted.

Inspect the value of the id parameter.

Press F5 to continue running the code.


5. Go back to the virtual machine 20487B-SEA-DEV-C and debug the client app.

The code execution breaks inside the CreateTravelerAsync method.

Press F5 to continue running the code.


6. Go back to the virtual machine 20487B-SEA-DEV-A and debug the service code.

The breakpoint you have set in the Post method should be highlighted.

Inspect the contents of the traveler parameter.

Press F5 to continue running the code.

7. Go back to the virtual machine 20487B-SEA-DEV-C and use the client app to purchase a flight from
Seattle to New York.

Display the app bar, and search for the word New. Purchase the trip from Seattle to New York.
Fill in the traveler information according to the following table, and then click Purchase.

Field Value

First Name Your first name

Last Name Your last name

Passport Aa1234567

Mobile Phone 555-5555555

Home Address 423 Main St.

Email Address Your email address

The code execution breaks inside the UpdateTravelerAsync method.


Press F5 to continue running the code.

8. Go back to the virtual machine 20487B-SEA-DEV-A and debug the service code.
The breakpoint you have set in the Put method should be highlighted.
Inspect the contents of the traveler parameter.

Press F5 to continue running the code.


9. Go back to the virtual machine 20487B-SEA-DEV-C, close the confirmation message, and then close
the client app.

10. Go back to the virtual machine 20487B-SEA-DEV-A and stop the debugging in Visual Studio 2012.

Results: After you complete this exercise, you will be able to run the BlueYonder Companion client
application and create a traveler when purchasing a trip. You will also be able to retrieve an existing
traveler and update its details.

Question: Why did you need to return an HttpResponseMessage from the Post action
method?

Module Review and Takeaways


In this module, you learned how HTTP can be used for creating services and how to use ASP.NET Web API
to create HTTP-based services. You also learned how to host ASP.NET Web API services in IIS and
consume them from Windows Store apps by using the HttpClient class, and how to apply
best practices when you develop HTTP services by using ASP.NET Web API.
Best Practices:

Model your services to describe resources and not functions.

Use the HttpResponseMessage to return a valid HTTP response message.

When handling errors, throw an HttpResponseException exception to avoid complex code.

Review Question(s)
Question: What are ASP.NET Web API controllers used for?

Tools
IIS

Module 4
Extending and Securing ASP.NET Web API Services
Contents:
Module Overview 4-1

Lesson 1: The ASP.NET Web API Pipeline 4-2

Lesson 2: Creating OData Services 4-13


Lesson 3: Implementing Security in ASP.NET Web API Services 4-18

Lesson 4: Injecting Dependencies into Controllers 4-29


Lab: Extending Travel Companions ASP.NET Web API Services 4-31
Module Review and Takeaways 4-38

Module Overview
ASP.NET Web API provides a complete solution for building HTTP services, but services often have various
needs and dependencies. In many cases, you will need to extend or customize the way ASP.NET Web API
executes your service: to handle needs such as error handling and logging, to integrate with other
components of your application, and to support other standards that are available in the HTTP world.

Understanding the way ASP.NET Web API works is important when you extend ASP.NET Web API. The
division of responsibilities between components and the order of execution are important when
intervening with the way ASP.NET Web API executes.

ASP.NET Web API also includes built-in extensions you can use. In this module, you will learn how to
extend your services to support OData.

Finally, with ASP.NET Web API, you can also extend the way you interact with other parts of your system.
With the dependency resolver mechanism, you can control how instances of your service are created,
giving you complete control on managing dependencies of the services.

Objectives
After completing this module, students will be able to:

Extend the ASP.NET Web API request and response pipeline.


Create OData services by using ASP.NET Web API.

Secure ASP.NET Web API.

Inject dependencies into ASP.NET Web API controllers.



Lesson 1
The ASP.NET Web API Pipeline
Based on your organization's needs and requirements, you may need to customize and extend the ASP.NET
Web API pipeline. This lesson describes the ASP.NET Web API architecture. It also covers
various tools and capabilities, such as filters, asynchronous actions, and media type formatters, that you
can use to customize and extend the ASP.NET Web API architecture.

Lesson Objectives
After completing this lesson, students will be able to:

Describe the ASP.NET Web API processing architecture.

Describe the functionality of the DelegatingHandler class.

Explain how filters work.


Explain the flow of requests and responses through the pipeline.

Describe asynchronous actions.

Create asynchronous actions.


Describe the media type formatters.

Describe how to return images by using media type formatters.

ASP.NET Web API Processing Architecture


To build HTTP-based services, you need to handle two main workflows:

Receiving HTTP request messages from clients and creating method invocations based on those messages.

Returning HTTP response messages to clients based on the result of the methods invoked.
To handle these two tasks, ASP.NET Web API uses
a processing architecture that spans from the
underlying communication infrastructure to the
action method, handling every aspect of both
HTTP messages and method invocation. Understanding this architecture can help you in extending
ASP.NET Web API and developing better services.

Architecture Overview
The ASP.NET Web API processing architecture is made of three layers:

Hosting

Message handlers
Controllers

Hosting
The hosting layer is in charge of interacting with the underlying communication infrastructure, creating an
HttpRequestMessage object from the request and sending the object down through the message
handling pipeline to the message handlers layer. The hosting layer is also in charge of converting
HttpResponseMessage objects received from the message handlers to HTTP messages sent through the
underlying communication infrastructure.

ASP.NET Web API has two implementations for the hosting layer:
Web-hosting, implemented in System.Web.Http.WebHost.dll, uses the HttpControllerHandler
class, an asynchronous IIS handler. This provides a hosting layer for hosting in Internet
Information Services (IIS).

Self-hosting, implemented in System.Web.Http.SelfHost.dll, uses the HttpSelfHostServer class,
which uses the WCF channel stack to host ASP.NET Web API services in any .NET Framework process.

Note: WCF will be introduced in Module 5, Creating WCF Services in Course 20487.

Message Handlers
Message handlers are objects that are chained to each other to form a pipeline. Every handler receives an
HttpRequestMessage object, returns an HttpResponseMessage object and performs some processing
on the message before passing it to the next handler in the pipeline. This allows ASP.NET Web API to
separate the concerns for different processing that should be applied on every message, and provides an
extensibility point for developers. Message handlers are covered later in this lesson.

After the hosting layer has completed creating the HttpRequestMessage, it creates a new instance of the
System.Web.Http.HttpServer class, which is a message handler. When an instance of the HttpServer
class is initialized, it creates a chain of message handlers in the following order:

Custom Message Handlers. With ASP.NET Web API, you can create your own message handlers and
configure the host to execute them. When HttpServer starts processing a message, custom message
handlers are processed first. Custom message handlers are covered in depth in the next topic,
The DelegatingHandler Class.

HttpRoutingDispatcher. After the custom message handlers, ASP.NET Web API adds a message
handler of type HttpRoutingDispatcher. The HttpRoutingDispatcher class is in charge of finding
the route that matches the HttpRequestMessage.

HttpControllerDispatcher. The next message handler in the chain is the HttpControllerDispatcher


class. The HttpControllerDispatcher class is in charge of selecting and creating the controller. After a
controller is created, the HttpControllerDispatcher calls the controller's ExecuteAsync method,
which hands over the responsibility to the third and final layer of the ASP.NET Web API processing
pipeline.

Controllers
The final layer in ASP.NET Web API is executed by the controllers themselves. When the
ExecuteAsync method of a controller is called, it starts a process that should result in an
action method processing the request and returning a response. The process consists of the following
steps:

Action Selection. The first step for executing an action method is identifying which action should be
executed. Action selection is covered in Module 3, Creating and Consuming ASP.NET Web API
Services, Lesson 2, Creating an ASP.NET Web API Service, in Course 20487.

Creating the Filters Pipeline. Each action can have a set of components called filters associated with
it. Similar to message handlers, filters also provide a way to create a pipeline of processing units but
only for an action and not for the entire host. ASP.NET Web API has three types of filters executed in
the following order:

o Authorization Filters. Authorization filters are covered in Lesson 3, Implementing Security in


ASP.NET Web API Services in this module.

o Action Filters. Action filters are covered later in this lesson.


o Exception Filters.

The filters pipeline also contains two other components:

HttpActionBinding. The HttpActionBinding class performs the process of parameter binding and is
executed after the authorization filters. Parameter binding is covered in Module 3, Creating and
Consuming ASP.NET Web API Services, Lesson 2, Creating an ASP.NET Web API Service, in
Course 20487.

ApiControllerActionInvoker. The System.Web.Http.Controllers.ApiControllerActionInvoker class is in


charge of invoking the action method and converts the result to an HttpResponseMessage (if
needed).

The DelegatingHandler Class


A pipeline of message processing components is a
common pattern in many frameworks that deal
with messages. WCF channels, ASP.NET modules,
Connect middleware (in Node.js) and many other
frameworks all provide components that receive a
request, return a response, and provide
extensibility to a message processing pipeline.
ASP.NET Web API message handlers are classes
derived from the
System.Net.Http.HttpMessageHandler class.
The main method for message handlers is the
SendAsync method, which receives a parameter
of type HttpRequestMessage and returns a Task<HttpResponseMessage>. The common behavior for
most message handlers is to call another message handler creating a pipeline, and perform some
processing either on the HttpRequestMessage before passing it to the inner message handler or on the
HttpResponseMessage returned from the SendAsync method of the inner message handler.
ASP.NET Web API also provides the System.Net.Http.DelegatingHandler class as a base class for
message handlers that include a property called InnerHandler and an implementation of the SendAsync
that invokes the inner handler to simplify creating message handlers for a pipeline.

The following code shows a simple handler created by deriving from the DelegatingHandler class.

A simple DelegatingHandler Implementation


public class TimerHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request,
                                                           CancellationToken cancellationToken)
    {
        var watch = new Stopwatch();
        watch.Start();

        // calling the inner handler
        return base.SendAsync(request, cancellationToken)
            .ContinueWith(t =>
            {
                // this will be executed after the inner handler executes asynchronously
                watch.Stop();
                Trace.WriteLine("The execution of the request:");

                // this is the HttpRequestMessage
                Trace.WriteLine(request.ToString());
                Trace.WriteLine(string.Format("completed in {0} milliseconds",
                                              watch.ElapsedMilliseconds));

                // this is the HttpResponseMessage
                return t.Result;
            });
    }
}

You can add custom message handlers to the ASP.NET Web API pipeline by using configuration. ASP.NET
Web API hosts (both HttpSelfHostServer and HttpControllerHandler) can be configured by using the
System.Web.Http.HttpConfiguration class that is passed to their constructor.

Configuring the Host


Before creating an instance of a host, you need to create and configure the HttpConfiguration object it
will receive. This process is slightly different between self-hosting and web-hosting.

Self-hosting
When using self-hosting, you need to create a new instance of the
System.Web.Http.SelfHost.HttpSelfHostConfiguration class. The HttpSelfHostConfiguration class
derives from the HttpConfiguration class and adds configuration capabilities for things that, in web-
hosting, are managed by Internet Information Services (IIS), such as certificates and timeouts. After
you have created a new instance of the HttpSelfHostConfiguration class, create a new instance of
the handler and call the Add method of the MessageHandlers property.

The following code configures a self-host to use the TimerHandler.

Configuring Handlers in Self-hosting


var config = new HttpSelfHostConfiguration("http://localhost:2301");
config.MessageHandlers.Add(new TimerHandler());

using (HttpSelfHostServer server = new HttpSelfHostServer(config))
{
    server.OpenAsync().Wait();

    Console.WriteLine("Press Enter to quit.");
    Console.ReadLine();
}

Web-hosting
In web-hosting, the HttpServer is created by the HttpControllerHandler when receiving the first
request. To configure the HttpServer, you need to set the GlobalConfiguration.Configuration static
property in the same way you configured the HttpSelfHostConfiguration object in the preceding
example. The initialization should be performed in the Application_Start method of the global.asax,
before the first request is handled.
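In a web-hosted project, the registration shown in the preceding self-hosting example translates to a sketch like the following in the global.asax code-behind, where TimerHandler is the handler from the earlier example:

```csharp
public class Global : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Configure the web-hosted HttpServer before the first request is handled
        GlobalConfiguration.Configuration.MessageHandlers.Add(new TimerHandler());
    }
}
```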
4-6 Extending and Securing ASP.NET Web API Services

Filters
Delegating handlers are applied early in the ASP.NET Web API pipeline, before the request
reaches the controller. This means that any delegating handler that is configured for a host
will be executed for every request and response the host handles.

Sometimes, a more selective approach is needed. Filters provide a mechanism to extend the
pipeline for specific actions or controllers. ASP.NET Web API has three different types of
filters, each designed for a different purpose and executed in a different stage:

Authorization Filters. These are classes that implement the IAuthorizationFilter interface.
Authorization filters are the first type of filters to be executed in the filters pipeline and are in charge
of validating if the request is authorized. A common use is to return a 401 (Unauthorized) response if
the request is not authenticated, or 403 (Forbidden) if the request is authenticated but users have no
permission to execute the action.
The following code is an example of an authorization filter that searches for an ASP.NET session variable
called user for authorizing a user.

Session based Authorization Filter


public class SessionAuthorizationFilter : IAuthorizationFilter
{
    public async Task<HttpResponseMessage> ExecuteAuthorizationFilterAsync(
        HttpActionContext actionContext,
        CancellationToken cancellationToken,
        Func<Task<HttpResponseMessage>> continuation)
    {
        if (HttpContext.Current.Session["user"] == null)
            throw new HttpResponseException(
                new HttpResponseMessage(HttpStatusCode.Unauthorized));

        return await continuation();
    }

    public bool AllowMultiple
    {
        get { return false; }
    }
}

Note: The usage of the Authorization filter is explained in Lesson 3, Implementing Security
in ASP.NET Web API.

Action Filters. These are classes that implement the IActionFilter interface. Action filters are
executed later in the filter pipeline, after the authorization filters are executed and
after parameter binding takes place. You can use action filters to extend the ASP.NET Web API
pipeline in a similar way to delegating handlers. There are two main differences between action filters
and delegating handlers. The first is that action filters can be applied to specific actions or controllers.
The second is that action filters do not receive an
HttpRequestMessage as a parameter. Instead, action filters receive a parameter of type
HttpActionContext. The HttpActionContext class provides a more complete object model, which
includes access to the action's arguments, the model state, the request, the response, and more.

The following code sample shows an action filter that uses the System.Diagnostics.Trace class to emit
traces.

A simple Action Filter.


public class TraceFilterAttribute : Attribute, IActionFilter
{
    public async Task<HttpResponseMessage> ExecuteActionFilterAsync(
        HttpActionContext actionContext,
        CancellationToken cancellationToken,
        Func<Task<HttpResponseMessage>> continuation)
    {
        Trace.WriteLine("Trace filter start");

        foreach (var item in actionContext.ActionArguments.Keys)
            Trace.WriteLine(string.Format("{0}: {1}", item,
                actionContext.ActionArguments[item]));

        var response = await continuation();

        Trace.WriteLine(string.Format("Trace filter response: {0}", response));

        return response;
    }

    public bool AllowMultiple
    {
        get { return true; }
    }
}
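Because TraceFilterAttribute derives from Attribute, it can be applied to a specific action or controller, or registered globally through the configuration object so that it runs for every action in the host. The controller name below is illustrative:

```csharp
// Applied to every action of a single controller (hypothetical controller)
[TraceFilter]
public class DestinationsController : ApiController
{
    public IEnumerable<string> Get()
    {
        return new[] { "Seattle", "New York" };
    }
}

// Alternatively, registered globally on the host's configuration object:
// config.Filters.Add(new TraceFilterAttribute());
```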

Exception Filters. These are classes that implement the IExceptionFilter interface and are used to
handle exceptions. Exception filters are executed after the completion of other filters, and only if the
Task<HttpResponseMessage> returned by the filters pipeline is in a faulted state.

The following code sample demonstrates how to create an exception filter.

Simple Tracing Exceptions Filter


public class TraceExceptionFilter : IExceptionFilter
{
    public Task ExecuteExceptionFilterAsync(HttpActionExecutedContext actionExecutedContext,
                                            CancellationToken cancellationToken)
    {
        return Task.Run(() => Trace.WriteLine(actionExecutedContext.Exception));
    }

    public bool AllowMultiple
    {
        get { return false; }
    }
}

Demonstration: The Flow of Requests and Responses Through the Pipeline


In this demonstration, you will create a delegating handler that writes the content of the request and
response message to the trace log. In addition, you will create an action filter that writes the name of the
controller and action for each incoming request to the trace log.

Demonstration Steps
1. Open the RequestResponseFlow.sln solution from the
D:\Allfiles\Mod04\DemoFiles\RequestResponseFlow\end\RequestResponseFlow folder.

2. Add a trace handler to the RequestResponseFlow.Web project by creating a new class called
TraceHandler.
3. Ensure that the TraceHandler class is a Delegating Handler by deriving from the DelegatingHandler
class.

4. Implement the SendAsync method by writing a start message and the request to the trace log,
calling the base method, writing an end message and finally returning the response from the base
method.

5. Add the trace handler class to the message handlers pipeline by adding a new instance of the
TraceHandler class to the MessageHandlers collection of the configuration object.

6. Add a new filter to the RequestResponseFlow.Web project by creating a new class called
TraceFilterAttribute.

7. Ensure that the TraceFilterAttribute class is an action filter by deriving it from the ActionFilter class
and requiring the IActionFilter interface.
8. Implement the ExecuteActionFilterAsync method by writing a start message to the trace log,
followed by each of the individual elements in the ActionArguments collection, calling the
continuation and waiting for it to finish, and finally writing an end message to the trace log.
9. Implement the AllowMultiple property by returning true. This property indicates whether the
attribute can be applied multiple times to the same action or controller.

10. Place a breakpoint at the beginning of the ExecuteActionFilterAsync method.


11. Add the filter to the controller class by adding the TraceFilter attribute to the ValueController class.
12. Start the application in Debug mode and view its call stack. Make sure that the code in the
TraceFilterAttribute is called.
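The handler described in steps 2 through 5 might look roughly like the following sketch; the exact trace messages are assumptions:

```csharp
public class TraceHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Write a start message and the request to the trace log
        Trace.WriteLine("Begin request: " + request);

        // Call the base method to continue down the pipeline
        HttpResponseMessage response = await base.SendAsync(request, cancellationToken);

        // Write an end message, and return the response from the base method
        Trace.WriteLine("End request: " + response);
        return response;
    }
}

// Step 5, in the Web API configuration code:
// config.MessageHandlers.Add(new TraceHandler());
```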

Asynchronous Actions
One of the most powerful capabilities of ASP.NET Web API is the support for building asynchronous
actions. Asynchronous actions provide a simple-to-use mechanism that you can use to improve
service scalability when performing I/O-bound operations.

I/O bound operations

I/O-bound operations are common in services. These include operations such as database access,
file access, or remote service calls. Most I/O-bound APIs, from the low-level System.IO.Stream to
higher-level APIs such as ADO.NET and HttpClient, provide both synchronous and asynchronous
operations.

Synchronous I/O bound operations

Synchronous operations provide a simple model for accessing I/O devices, for example, when
accessing the network card by using the WebRequest API.

The following code shows a synchronous call using the WebRequest API.

Synchronous Call Using the WebRequest API


var client = WebRequest.Create("http://server-2/");
var response = client.GetResponse();
var stream = response.GetResponseStream();
var reader = new StreamReader(stream);
var result = reader.ReadToEnd();

The preceding code is relatively easy to follow. However, there is one line that you should pay close
attention to. When the client.GetResponse method is called, the executing thread is blocked waiting for
the response. This blocking behavior is unnecessary, considering that most of the GetResponse
method's execution is carried out by the network card and the remote server.

Asynchronous I/O Bound Operations


Asynchronous I/O bound APIs are designed to avoid the redundant behavior of their synchronous
equivalents. For example, the HttpClient class provides an asynchronous API for calling HTTP-based
services.
The following code shows an asynchronous service call using the HttpClient API.

Asynchronous Call Using the HttpClient API


var client = new HttpClient();
var response = await client.GetAsync("http://server-2/");
var result = await response.Content.ReadAsStringAsync();

The preceding code uses the await keyword to simplify the call to the asynchronous
HttpClient.GetAsync method. While this code seems sequential during the execution, it is actually
divided into the following steps:

1. All the code up to the await keyword is executed sequentially.

2. When the HttpClient.GetAsync method is called, it immediately returns a Task representing
its asynchronous execution, and the current thread returns.

3. The HttpClient.GetAsync method executes asynchronously.

4. When you use the await keyword, the C# compiler generates a continuation method that includes
all the code following the await statement. This code is used as the continuation of the task
returned by the HttpClient.GetAsync method, and is invoked by the Input/Output Completion
Port (IOCP).
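To illustrate what the compiler generates, the await-based call above can be approximated with explicit continuations. This is a simplified sketch; the real compiler-generated state machine also captures the synchronization context and handles exceptions:

```csharp
var client = new HttpClient();
Task<HttpResponseMessage> getTask = client.GetAsync("http://server-2/");

// The code after the first await becomes a continuation of getTask
Task<Task<string>> readTask = getTask.ContinueWith(
    t => t.Result.Content.ReadAsStringAsync());

// The code after the second await becomes a continuation of the inner task
readTask.Unwrap().ContinueWith(t =>
{
    string result = t.Result;
    // ...the code following the last await runs here
});
```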

Creating an Asynchronous Action


The await keyword must be used in an asynchronous method. Asynchronous methods are methods that
are declared using the async keyword and return one of the following types: Task, Task<T>, or void.
When an ASP.NET Web API action is created as an asynchronous method, ASP.NET Web API executes the
method asynchronously. This means that when the executing thread encounters the first await statement,
the thread returns to the pool, and ASP.NET Web API can use it for other calls. After the asynchronous
operation completes, the remainder of the asynchronous method executes on a thread-pool thread.

The following code sample shows an asynchronous service call executed from inside an asynchronous
action.

An Asynchronous Action Method


public async Task<string> Get()
{
    var client = new HttpClient();
    var response = await client.GetAsync("http://server-2/");
    return await response.Content.ReadAsStringAsync();
}

Demonstration: Creating Asynchronous Actions


In this demonstration, you will convert an existing method that uses synchronous I/O calls to an
asynchronous method that uses asynchronous I/O calls. As part of the conversion, you will use the async
and await keywords.

Demonstration Steps
1. Open the AsynchronousActions.sln solution from the
D:\Allfiles\Mod04\DemoFiles\AsynchronousActions\begin\AsynchronousActions folder.

2. Observe the code in the CountriesController class. The Get method calls the GetCountries method,
which uses a synchronous web request call to retrieve the list of countries from an external web
service. To better utilize the thread pool, the Get method and the GetCountries method should both
run asynchronously.

3. Change the GetCountries method to asynchronous by adding the async keyword to the method
declaration and returning a Task<XDocument> instead of XDocument.

4. Replace the code that creates an HttpWebRequest with a code that creates a new HttpClient
object. Store the object in the client variable.
5. Replace the code that sets the client.Accept property with the matching HttpClient code. Use the
DefaultRequestHeaders.Accept property to access the Accept HTTP header, and add the
application/xml media type.

6. Replace the client.GetResponse method call with the matching HttpClient code. Use the GetAsync
method to call the service asynchronously, and add the await keyword before calling the method to
ensure the response variable is set after the list of countries is retrieved.
7. Replace the response.GetResponseStream method call with the matching HttpResponseMessage
code. Use the Content property to get the HTTP message, and then use the ReadAsStreamAsync
method to get the body of the response asynchronously. Add the await keyword before calling the
method to ensure the Load method is called after the body is read asynchronously.

8. In the Get method, add the await keyword before calling the GetCountries method to ensure the
result variable is set after the list of countries is retrieved and loaded to the XDocument object.

9. Change the Get method to asynchronous by adding the async keyword to the method declaration
and returning a Task<IEnumerable<string>> instead of IEnumerable<string>.

10. Start the DataServices web application, and then start the AsynchronousActions.Web web
application. Verify you see the list of countries in the browser.
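After the conversion, the two methods might look like the following sketch. The service URL, the XML element names, and the LINQ projection are assumptions based on the demonstration description:

```csharp
public async Task<IEnumerable<string>> Get()
{
    // Await the asynchronous helper; the thread is released while waiting
    XDocument countries = await GetCountries();
    return countries.Descendants("Name").Select(e => e.Value); // element name is hypothetical
}

private async Task<XDocument> GetCountries()
{
    var client = new HttpClient();

    // Request XML from the external service
    client.DefaultRequestHeaders.Accept.Add(
        new MediaTypeWithQualityHeaderValue("application/xml"));

    // Call the service asynchronously (the URL is hypothetical)
    HttpResponseMessage response = await client.GetAsync("http://localhost/DataServices/countries");

    // Read the response body asynchronously, then load it into an XDocument
    Stream body = await response.Content.ReadAsStreamAsync();
    return XDocument.Load(body);
}
```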

Media Type Formatters


Serialization and deserialization are common tasks when creating and consuming services. Over the
years, the .NET Framework has offered a variety of serialization mechanisms supporting different
formats, such as XML, binary, and JSON. However, when creating HTTP-based services, serialization
must be aware of HTTP's content negotiation. Content negotiation is discussed in depth in
Module 3, "Creating and Consuming ASP.NET Web API Services", Lesson 1, "HTTP Services",
in Course 20487.

ASP.NET Web API has built-in support for content negotiation using media type formatters. Media type
formatters are classes derived from the MediaTypeFormatter base class. Each media type formatter has
a property called SupportedMediaTypes containing all the media types it supports. When you implement
a new media type formatter, you first need to populate this property.

The following code demonstrates populating the SupportedMediaTypes property inside a media type
formatter's constructor.

Adding Support for Media-Types in a Media Type Formatter


public class CsvFormatter : MediaTypeFormatter
{
    public CsvFormatter()
    {
        this.SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/csv"));
    }
}

Sometimes the same media type can be supported only for specific types. For example, images might be a
valid media type when requesting a resource for an employee in a company, but not for a department.
The MediaTypeFormatter class has the CanReadType and CanWriteType abstract methods, which you
can use to define which types can be read or written by the specific media type formatter.
Finally, you implement the actual process of reading or writing the data in the
ReadFromStreamAsync and WriteToStreamAsync methods.
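For example, the CsvFormatter shown earlier could restrict itself to writing employee collections by overriding these methods. The following is a sketch; the decision to reject reads and accept only IEnumerable<Employee> is an assumption for illustration:

```csharp
public override bool CanWriteType(Type type)
{
    // Only write CSV for collections of Employee objects
    return typeof(IEnumerable<Employee>).IsAssignableFrom(type);
}

public override bool CanReadType(Type type)
{
    // This formatter does not support reading CSV input
    return false;
}
```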

The following code demonstrates the use of the WriteToStreamAsync method to provide a list of
employees using the CSV file format.

Implementing the WriteToStreamAsync Method


public override Task WriteToStreamAsync(Type type, object value, Stream writeStream,
    System.Net.Http.HttpContent content, TransportContext transportContext)
{
    var employees = value as IEnumerable<Employee>;

    // Do not dispose the writer here; that would close the underlying
    // response stream, which is owned by the host
    var writer = new StreamWriter(writeStream);

    // Write CSV header
    writer.WriteLine("Full Name, Employee ID");
    if (employees != null)
    {
        foreach (var employee in employees)
        {
            writer.WriteLine("{0},{1}", employee.FullName, employee.ID);
        }
    }
    writer.Flush();
    return Task.FromResult<object>(null);
}
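Like message handlers and filters, a custom media type formatter takes part in content negotiation only after it is registered. A typical registration, assuming the application's HttpConfiguration object is available as config (an assumption; the variable name depends on your setup code):

```csharp
// Register the custom CSV formatter with the configuration (a sketch;
// "config" is assumed to be the application's HttpConfiguration object)
config.Formatters.Add(new CsvFormatter());
```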

Demonstration: Returning Images by Using Media Type Formatters


In this demonstration, you will use a custom media type formatter that handles the image/png content
type by returning a thumbnail representing the returned object.

Demonstration Steps
1. Open the ImagesWithMediaTypeFormatter.sln solution from the
D:\Allfiles\Mod04\DemoFiles\ImagesWithMediaTypeFormatter folder.

2. Explore the content of the ValuesController class. The controller handles Value objects.
3. Explore the content of the Value class in the ValuesController.cs file. The [IgnoreDataMember]
attribute prevents the serialization of the Thumbnail property.

4. Open the ImageFormatter.cs file from the Formatters folder and examine its content. The
constructor of the ImageFormatter class uses the SupportedMediaTypes collection to specify the
MIME types supported by the media type formatter.
5. Observe the code in the CanWriteType method. The formatter is used when the content is a Value
object.

6. Observe the code in the WriteToStream method. The method uses the Thumbnail property of the
Value object to locate the image file and return it instead of the object returned by the controller's
action.

7. Open the UriFormatHandler.cs file from the Formatters folder and examine its content. The
method checks the request's URL. If the extension in the URL matches one of the image types, the
extension is removed from the URL, a matching MIME type is added to the request, and the request is
sent to the next component in the pipeline.
8. Run the web application without debugging, open the developer tools window by pressing F12, and
in the Network tab, click Start capturing.

9. Enter a value from 0 to 9, and then click Get default. Use the Network tab to view the request's
Accept header and the response's Content-Type header. When the Accept header is set to */*, the
default content type of ASP.NET Web API is JSON. Click Clear to clear the list of requests.

10. Click Get JSON and use the Network tab to view the request's Accept header, the response's
Content-Type header, and the response's body. When the Accept header is set to application/json,
the result is a JSON string. Observe that the Thumbnail property is not present because it was
omitted from the serialization.

11. Click Get XML and use the Network tab to view the request's Accept header, the response's
Content-Type header, and the response's body. When the Accept header is set to application/xml,
the result is an XML string.

12. Click Get Image and use the Network tab to view the request's Accept header, the response's
Content-Type header, and the response's body. When the Accept header is set to an image type,
the result is an image instead of the Value object's content.

Note: Close the developer tools window before closing the browser.

Lesson 2
Creating OData Services
This lesson describes the purpose and functionality of OData services. The topics show you how to
create queryable actions with OData. You will also see how to create OData models and consume OData
services.

Lesson Objectives
After completing this lesson, students will be able to:

Create actions with OData querying support.

Describe how to create OData models.

Describe how to consume OData services.

Create and consume OData services.

OData Queryable Actions


The Open Data Protocol (OData) is an HTTP-
based data access protocol created by Microsoft.
OData is designed for querying and updating data
by utilizing common web technologies such as
HTTP and the Atom Publishing Protocol
(AtomPub).
OData provides a RESTful implementation for
exposing data models, which is based on Feeds.
Feeds are collections of entities known as Entries,
which represent data entities.

OData Entries are data structures containing their own properties. An entry's properties can be
represented as data (either primitive or complex types) or as relations to other Entries or Feeds, using links.

OData Query String Options


One of the core concepts of OData is query string options. OData utilizes the query string part of the URI
to perform querying operations using predefined query string parameters prefixed with the $ character.
An OData service can support any of these parameters as well as custom ones. The following query string
options are among the common OData System query options:

$orderby. You can use the $orderby query option to specify an expression that enables you to
control the order of the entries in the result feed returned by the OData service.

The following OData request uses the $orderby query option to order flights by their date.

Using the $orderby Query Option


http://localhost/OData/Flights?$orderby=Date asc

$top. You can use the $top query option to specify the maximum number of entries to be returned
by your query.

The following OData request uses the $top query option to return the first 10 flights from the service.

Using the $top Query Option


http://localhost/OData/Flights?$top=10

$skip. You can use the $skip query option to specify a number of entries to be omitted from
the beginning of the feed.

The following OData request uses the $skip query option to omit the first 10 flights from the service.

Using the $skip Query Option


http://localhost/OData/Flights?$skip=10

$filter. You can use the $filter query option to specify an expression containing logical operators. The
expression defines which subset of entries will be retrieved from the sum total of the service's entries.

The following request uses the $filter query option to select flights to New-York.

Using the $filter Query Option


http://localhost/OData/Flights?$filter=Destination/Name eq 'New-York'

For more information about OData Query String Options, see the OData documentation, URI Conventions
section.

OData URI Conventions


http://go.microsoft.com/fwlink/?LinkID=298765&clcid=0x409
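Query options can also be combined in a single request. For example, the following hypothetical URI (the Price property is an assumption) filters, orders, and pages the flights feed:

```
http://localhost/OData/Flights?$filter=Price lt 300&$orderby=Date desc&$skip=10&$top=10
```

The service applies the options in the order defined by the OData specification ($filter before $orderby, $skip, and $top), not in the order they appear in the URI.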

Defining OData Actions


ASP.NET Web API supports OData query string options using queryable actions. Queryable actions are
action methods that return an IQueryable<T> and are decorated with the [Queryable] attribute.

The following action provides support for OData query string options by returning an IQueryable<Flights>.

A Queryable Action
[Queryable]
public IQueryable<Flights> Get()
{
    return FlightsRepository.Flights;
}

OData Models
OData query string options are powerful, but OData is built to expose complete data models.
This includes hierarchies and the relations between them (using links), and exposing metadata
regarding the data model's structure. ASP.NET Web API has built-in mechanisms for creating and
exposing OData models. Most of these mechanisms come from the
Microsoft.AspNet.WebApi.OData NuGet package.

OData Controllers

To deal with OData formatting, ASP.NET Web API introduces the ODataController base class. After
deriving from the ODataController class, you can implement OData actions.

The following ODataController exposes a single queryable action.

A simple ODataController Controller


public class FlightsController : ODataController
{
    FlightContext context = new FlightContext();

    [Queryable]
    public IQueryable<Flight> Get()
    {
        return context.Flights;
    }
}

A more convenient option for implementing OData controllers is deriving from the
EntitySetController<TEntity, TKey> class, which provides a set of virtual methods you can override to
expose an entity set using OData.
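A sketch of such a controller, reusing the Flight entity from the previous example (the use of an Entity Framework context and its Find method are assumptions):

```csharp
public class FlightsController : EntitySetController<Flight, int>
{
    FlightContext context = new FlightContext();

    // Override the virtual Get method to expose the entity set
    [Queryable]
    public override IQueryable<Flight> Get()
    {
        return context.Flights;
    }

    // Override GetEntityByKey to support addressing a single entity by its key
    protected override Flight GetEntityByKey(int key)
    {
        return context.Flights.Find(key);
    }
}
```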

Entity Data Model (EDM)


OData exposes the structure of its data model using a service metadata document. The service metadata
document is an XML-based document describing the conceptual data model in EDM terms.
To generate the service metadata document, ASP.NET Web API needs an instance of a class
implementing the IEdmModel interface. You can create one by using the ODataConventionModelBuilder
class from the Microsoft.AspNet.WebApi.OData NuGet package.

The following code uses the ODataConventionModelBuilder class to create a data model.

Using the ODataConventionModelBuilder Class to Create an Entity Data Model


ODataConventionModelBuilder modelBuilder = new ODataConventionModelBuilder();
modelBuilder.EntitySet<Flights>("Flights");
IEdmModel model = modelBuilder.GetEdmModel();

OData Routes
After you have your OData controllers in place and a corresponding EDM, you need to provide a route to
your OData service, which will expose the different feeds as well as the service metadata document for
your model. To do so, you can use the MapODataRoute extension method.

The following code uses the MapODataRoute extension method to expose an OData service.

Using the MapODataRoute Extension Method

config.Routes.MapODataRoute(routeName: "OData", routePrefix: "api/odata", model: model);

Consuming OData Services


OData is an HTTP-based protocol, which can be consumed from any HTTP client. Visual Studio
2012 has built-in capabilities that simplify the way you consume OData services, based on tools
such as the OData service metadata document and LINQ.

Generate Client Classes by Using the Add Service Reference Dialog Box

Visual Studio has the built-in capability to generate local classes for the data structures exposed
by an OData service, as well as a class called Container that exposes the service's data model locally
and is used to consume the OData service. This capability was first introduced to support Web Services
Description Language (WSDL)-based metadata, but it can also be used to consume OData services by
using the service metadata document. To generate the local classes, you can use the Add Service
Reference dialog box of Visual Studio 2012.
To use the Add Service Reference dialog box:
1. Right-click your project in Solution Explorer, and then click Add Service Reference.

2. In the Add Service Reference dialog box, enter the address of the service, and then click Go. Visual
Studio 2012 will try to connect to the service and request the service's metadata document.
3. Optionally, you can replace the default namespace with your own namespace for the service's classes.

4. Click OK to generate the local classes.

Using the Container Class and LINQ to Consume the OData Service
After Visual Studio 2012 generates the local classes, you can use the Container class to consume the
service. The Container class exposes properties representing the different feeds in the OData service. Each
property is of type DataServiceQuery<T>, where T is the class generated locally for the entity exposed
by the OData service. The DataServiceQuery<T> class provides a LINQ-based API for querying OData
feeds. You can use LINQ to query the property, and the DataServiceQuery<T> class will translate the
query to an HTTP request using OData query options.

The following code uses a LINQ query to search for the WCF course in an OData Courses feed.

Consuming an OData service from Visual Studio 2012


var container = new OData.Container(new Uri("http://localhost:57371/odata"));

var course = (from c in container.Courses
              where c.Name == "WCF"
              select c).FirstOrDefault();
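For illustration, a query such as the one above is translated by the DataServiceQuery<T> provider into an HTTP request that uses OData query options, along the lines of the following (the exact URL depends on the client library version):

```
GET http://localhost:57371/odata/Courses?$filter=Name eq 'WCF'
```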

Demonstration: Creating and Consuming an OData Service


In this demonstration, you will create a queryable action, convert the existing API controller to an OData
service controller, and then consume the OData service from a client application written in C#.

Demonstration Steps
1. Open the ConsumingODataService.sln solution from the
D:\Allfiles\Mod04\DemoFiles\ODataService\begin\ConsumingODataService folder.

2. Review the content of the CoursesController.cs file located in the Controllers folder in the
ConsumingODataService.Host project. Observe the Get action, which returns an
IQueryable<Course> and is also decorated with the [Queryable] attribute. This is done to enable
OData queries. Observe the CoursesController class declaration. Note that CoursesController
derives from the ODataController base class, which handles the formatting.

3. Open the Global.asax file, and review the content of the SetupOData method. Observe the use of the
ODataConventionModelBuilder class to create an entity data model, which is used to create the
OData metadata, and the use of the MapODataRoute method to create routes for the OData
metadata as well as the various controllers in the model.

4. Run the ConsumingODataService.Host project without debugging.

5. In the ODataService.Client project, use the Add Service Reference dialog box to add a service
reference to the OData model.
6. Create a new instance of the OData.Container class, and use a LINQ query to select the WCF course
from the container's Courses property. Print the name and ID of the course.

7. Start the application in Debug mode.



Lesson 3
Implementing Security in ASP.NET Web API Services
Many aspects of Web service security in ASP.NET Web API rely on the inherent security features of
HTTP, such as the HTTP Authorization header and HTTPS (HTTP Secure), and on how they are implemented
by Windows and IIS.

Note: Appendix B, Lesson 1, "Introduction to Web Services Security", in Course 20487
discusses additional aspects of Web service security, not necessarily related to HTTP.

The three common aspects of security that you need to handle when creating HTTP-based services with
ASP.NET Web API are:

Securing the communication channel: Encrypting the data transferred from the client to the service
and the data from the service to the client is crucial to protect it from theft and alteration by
attackers. You can encrypt the HTTP communication channel by using HTTPS instead of HTTP.

Authenticating clients: If you do not want your service to be publicly accessible, and you only want
certain people or applications to access it, you will need to secure your service by authenticating your
clients. HTTP supports passing client credentials in the message headers, which can then be
authenticated by IIS and the Windows operating system.

Authorizing clients: If some of your authenticated clients have permissions to invoke actions that
other clients cannot invoke, you will also need to authorize your clients before allowing them to
invoke the actions in the controllers. HTTP does not support authorization of clients, but the ASP.NET
user identities infrastructure can assist you in accessing the users' information so you can authorize
them.
This lesson describes how HTTPS works, how you can use HTTP and IIS to authenticate and authorize your
clients, and how to implement custom user authentication and authorization in ASP.NET Web API.

Lesson Objectives
After completing this lesson, students will be able to:

Secure service communication with HTTPS.

Authenticate clients.

Authorize clients.

Securing HTTP with HTTPS


SSL helps protect your transport by encrypting
every request message that is sent by the client,
and every response message sent by the server
(service). HTTPS uses SSL, and its successor, TLS, to
secure HTTP communication channels. When
using IIS to host your Web applications, the
Windows operating system and IIS are responsible
for starting the SSL session and performing the
security handshake, so you are not required to
change your code to support HTTPS.

To configure IIS for HTTPS, follow these steps:



1. Obtain and install a certificate for server authentication. You can create a self-signed certificate,
purchase a certificate from a known certificate authority (CA), or if you have a local certificate
authority, request it from your domain controller.

2. Configure your IIS Web site's bindings to HTTPS, and assign the server certificate to the SSL port (the
default port for HTTPS in IIS is 443).

3. Optionally, configure your Web application to require the client to authenticate with a client
certificate.

The following article describes how to apply the preceding steps for setting up SSL with IIS.

How to Set Up SSL on IIS 7


http://go.microsoft.com/fwlink/?LinkID=298766&clcid=0x409

How SSL Works


Before starting to exchange encrypted messages, the client and the server agree on the encryption
method that will be used, and which encryption key to use.

To start a secured session, the client sends a special request to the server, asking it to start a secured
session (step 1 in the diagram). In return, the server responds by sending its X.509 certificate, which was
issued by a CA (step 2 in the diagram).

The client receives the certificate, and uses it to validate the server (step 3 in the diagram). X.509
certificates hold information about the server's address, which the client can check to verify that the server
is authentic and not an impersonator. In addition, the certificate also holds information about the issuer of
the certificate, which the client can use to verify that the certificate was issued by a trusted CA, and is not
a fake.

After validating the certificate, the client generates a random symmetric encryption key, places it in a
message, encrypts the message using the server's public key that was supplied with the server's certificate,
and sends it to the server (step 4 in the diagram). Public key encryption can only be decrypted by using
the server's private key, which only the server has. After the server decrypts the message (step 5 in the
diagram), both sides have the symmetric key. They can now start exchanging encrypted messages using
the symmetric key to encrypt the message on one side and decrypt it on the receiving side (step 6 in the
diagram).

SSL uses symmetric encryption for message exchange instead of public key encryption, because public key
encryption is slower than symmetric encryption, and the message it creates is larger than one created
with symmetric key encryption.

Although not required, the SSL handshake can also require the client to authenticate with a certificate. If
you configure IIS to also require client authentication, the client will send the generated symmetric
encryption key along with its certificate to the server. After receiving the client's certificate, the server will
validate it and will continue with the handshake only if the certificate is valid.

Note: Validation of the client certificate is part of the SSL handshake since Windows Server
2003 Service Pack 1. SSL does not perform authorization, and cannot be used to authenticate
other client credential types.

Authenticating Clients
When you create a service, you need to decide whether you want clients to identify themselves
when they call the service, or whether the service is accessible to any client, without requiring an
identity. Requiring a client to pass its identity to the service has several uses:

Prevent anonymous access. By requiring clients to authenticate, you can prevent unknown
clients from accessing your services, and therefore restrict the exposure of data.

Auditing. By knowing which client performs an action, you can add an auditing layer that logs
who invoked which action.

Authorization. Different clients may have different permissions in the system. Authorization is the
process of identifying who your client is, and then deciding what they can do. Authentication is a
prerequisite to adding an authorization layer. Authorization and its implementation in ASP.NET Web
API will be discussed in the next topic.

Credential Types
Users can have multiple identities they can use to identify themselves. For example, a user might be
logged in to the local network and have a Windows identity, or they might have a smart card they use to
access restricted content (smart card chips have client certificates installed in them). If your service
requires authentication, you will need to find which types of identities your clients have, and configure
the service, or the hosting environment, accordingly.

The following list describes some of the known identity types, how to authenticate them, and which
component authorizes them:

Windows. Authorized by IIS. An identity used when working within a domain. The client sends a
user token to the service, and the token is authenticated against the domain controller.

Basic (Username + Password). Authorized by IIS. With this type of identity, the user's username and
password are sent as plain text to the service, and the server authenticates the identity against the
local domain controller. This type of identity is used when the client has a domain account but is not
connected to the domain controller. When using this type of identity, it is advisable to use HTTPS to
encrypt the username and password.

Certificate. Authorized by IIS. Certificates hold information about the client and the certificate's
issuer, making it simple to verify the certificate's authenticity without connecting to another server
for authentication. With IIS, you can also map certificates to Windows identities to provide the service
additional information about the user. Certificates are commonly used when your client is a service.

Forms. Authorized by ASP.NET. Forms authentication, also referred to as ASP.NET Membership, is part
of the ASP.NET infrastructure. After the user logs on to the web application through a login page, a
session token is created and stored in a cookie which is saved in the client's browser. When the client
sends requests to a service in the same web application, the cookie is sent with the request, identifying
the logged-in user. ASP.NET has built-in support for authentication against SQL Server or Active
Directory. You can also create a custom Membership provider if you store the usernames and
passwords somewhere else.

Issued token. Authorized by the identity provider and your code. Issued tokens, such as OpenID, are
mostly used on the Internet. With issued tokens, the service informs the client which public identity
providers it trusts, such as Twitter, Facebook, and Windows Live ID. The client then authenticates
against the identity provider, usually by using a username and a password. After authenticating
against the identity provider, the client sends the authentication token to the service, which then
verifies that the token is authentic. To use issued tokens, you need to register your service with the
providers you wish to trust, and write the code to verify the provider's token. Issued token
implementation in ASP.NET Web API is covered in Module 11, "Identity Management and Access
Control".

Custom. Authorized by your code. If you have your own authentication mechanism, you can use it
instead of the standard identities. To use your custom authentication, turn on anonymous
authentication in IIS and disable all other authentication types, to prevent IIS from authenticating the
client. Then add your own authentication code, as explained later in this topic.

Note: When using Windows credentials, the user sends a special token to the server, which it receives from the domain controller. IIS then sends this token to the domain controller to authenticate the client. Unlike basic authentication, the token does not include the user's password.

If you want to use authentication types that are managed by IIS or ASP.NET, such as Windows
authentication or Forms authentication, you need to first configure IIS for those authentication types. For
a description of the steps for configuring the various authentication types, see:

Configuring Authentication in IIS


http://go.microsoft.com/fwlink/?LinkID=298767&clcid=0x409

If you want to use Forms authentication, you will also need to create a login page in your web application, by using either ASP.NET Web Forms or ASP.NET MVC. For information on how to use Forms authentication with ASP.NET Web API and ASP.NET MVC, see:

Forms Authentication
http://go.microsoft.com/fwlink/?LinkID=298768&clcid=0x409

Accessing User Information

If you use IIS or ASP.NET to authenticate your client, ASP.NET will automatically create a security object for the user. This object, which implements the System.Security.Principal.IPrincipal interface, contains information about the user, such as the authentication type that was used, and the user's name (or the user's certificate hash, if a certificate was used). The IPrincipal object is stored in the executing thread for you to use during the execution of your code.
The following code demonstrates how to retrieve user information from the current principal.

Accessing the User's Identity Information

System.Security.Principal.IIdentity identity =
System.Threading.Thread.CurrentPrincipal.Identity;
string authenticationType = identity.AuthenticationType;
string name = identity.Name;
Logger.Log("Action invoked by {0}. Authentication type: {1}", name, authenticationType);

Custom Authentication
If you have your own credential type that you wish to use in your service, you will need to write a custom
authenticator.

Note: Before you test your custom authentication code, make sure that IIS is configured for
anonymous authentication so it will pass through the request to your service, without
authenticating it.

To create your own custom authenticator in ASP.NET Web API, you need to create a new message handler by deriving from the DelegatingHandler class (delegating handlers were discussed in Lesson 1, "The ASP.NET Web API Pipeline"). In the SendAsync method, you will need to answer the following questions:

• Should all requests be authenticated? If not, you can ignore requests that don't have authentication information and send them directly to the controller's action. If all messages should be authenticated, you will need to return an HTTP 401 (Unauthorized) response if the request does not carry authentication headers.
• Which authentication scheme are you using? If you are using a specific authentication scheme, such as Basic, make sure you inform the client which scheme you expect by adding the WWW-Authenticate header with the supported schemes when you send unauthorized responses.
• How do you authenticate your clients? Add the code to retrieve the client's identity from the request, and the custom code for the authentication process.

The following code demonstrates how to create a delegating handler for username/password
authentication.

Creating a Custom Authentication Delegating Handler


public class AuthenticationMessageHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        HttpResponseMessage response;

        if (request.Headers.Authorization != null &&
            request.Headers.Authorization.Scheme == "Basic")
        {
            // Extract the username and password from the HTTP Authorization header
            var encodedUserPass = request.Headers.Authorization.Parameter.Trim();
            var userPass =
                Encoding.Default.GetString(Convert.FromBase64String(encodedUserPass));
            var parts = userPass.Split(":".ToCharArray());
            var username = parts[0];
            var password = parts[1];

            if (!AuthenticateUser(username, password))
            {
                // Authentication failed
                response =
                    request.CreateResponse(System.Net.HttpStatusCode.Unauthorized);
                response.Headers.Add("WWW-Authenticate", "Basic");
                return response;
            }
        }

        response = await base.SendAsync(request, cancellationToken);

        // Add the required authentication type to the response
        if (response.StatusCode == System.Net.HttpStatusCode.Unauthorized)
        {
            response.Headers.Add("WWW-Authenticate", "Basic");
        }

        return response;
    }

    private bool AuthenticateUser(string username, string password)
    {
        // Use a simplified authentication check where username must be equal to password
        if (username == password)
        {
            // User is valid. Create a principal for the user
            IIdentity identity = new GenericIdentity(username);
            IPrincipal principal = new GenericPrincipal(identity, new[] { "Users", "Admins" });
            Thread.CurrentPrincipal = principal;
            return true;
        }
        return false;
    }
}

The preceding code answers the following questions:


• Should all requests be authenticated? Not all requests must be authenticated. If the Authorization HTTP header is missing, there will not be an authorization check and the code will continue. If the service itself requires authentication, the service will probably fail and return an HTTP 401 (Unauthorized) response.

• Which authentication scheme are you using? This code sample uses the Basic authentication scheme. Basic authentication means that the username and password are passed in the HTTP Authorization header as plain text, delimited by a colon, and encoded into a Base64 string. The code decodes the string and breaks it down into two strings: a username and a password.

• How do you authenticate your clients? The authentication shown in the AuthenticateUser method is a simple check which only compares the username and the password. If the authentication fails, the method returns false, causing the SendAsync method to return an unauthorized response, which will require the user to enter their credentials again.
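To illustrate the scheme itself, the following self-contained sketch builds and then decodes a Basic Authorization header value the same way the handler above does. The user name and password are made up for the example.

```csharp
using System;
using System.Text;

class BasicAuthDemo
{
    // Builds the value of the HTTP Authorization header for the Basic scheme:
    // "username:password", Base64-encoded and prefixed with the scheme name
    public static string BuildBasicHeader(string username, string password)
    {
        byte[] raw = Encoding.UTF8.GetBytes(username + ":" + password);
        return "Basic " + Convert.ToBase64String(raw);
    }

    // Decodes the Base64 parameter back into the username and password
    public static string[] DecodeBasicParameter(string parameter)
    {
        string userPass = Encoding.UTF8.GetString(Convert.FromBase64String(parameter));
        return userPass.Split(':');
    }

    static void Main()
    {
        string header = BuildBasicHeader("alice", "secret");
        Console.WriteLine(header);                      // Basic YWxpY2U6c2VjcmV0

        string[] parts = DecodeBasicParameter("YWxpY2U6c2VjcmV0");
        Console.WriteLine(parts[0] + " / " + parts[1]); // alice / secret
    }
}
```

Note that because the credentials are only encoded, not encrypted, Basic authentication should always be combined with HTTPS.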

If the user is authenticated, the AuthenticateUser method sets the executing thread's principal to a GenericPrincipal object and assigns the user to two roles, Users and Admins. The usage of roles for the purpose of authorization will be discussed in the next topic.

After you create the delegating handler, you need to register it with the ASP.NET Web API configuration. You can register the delegating handler for all the controllers by adding it to the configuration's MessageHandlers collection, or you can register it for a specific route by setting the handlers of the route.
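For example, a global registration of the handler from the preceding sample, assuming it is placed inside the WebApiConfig.Register method, could look like this:

```csharp
// Global registration: the handler runs for every request, on all routes.
// AuthenticationMessageHandler is the class from the preceding sample.
config.MessageHandlers.Add(new AuthenticationMessageHandler());
```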

The following code demonstrates how to attach the delegating handler to a specific route.

Attaching the Authentication Delegating Handler to a Specific Route


config.Routes.MapHttpRoute(
    name: "EmployeeSalaries",
    routeTemplate: "api/salaries/{id}",
    defaults: new { controller = "salaries", id = RouteParameter.Optional },
    constraints: null,
    handler: new AuthenticationMessageHandler()
);

Authorizing Clients
If you do not use any authentication, any client will be able to call the service's actions. On the other hand, if you authenticate every client, only authenticated clients can access your service. But what if you want to have both? If some of your controller's actions are public and others require authentication, you will need to allow anonymous users to access the public actions, but if those users try to call private actions, they will be required to authenticate.
To support both anonymous and authenticated users, you need to do the following:

• If you are using IIS or ASP.NET for authentication, turn on both anonymous authentication and the required authentication type, such as Windows or Forms authentication.
• If you are using a custom delegating handler for authentication, make sure you do not respond with an HTTP 401 response for anonymous requests.
• Apply the [Authorize] attribute to the component requiring authentication: an action, a controller, or the entire application.

Note: Refer to the previous topic in this lesson for instructions on how to enable authentication in IIS and ASP.NET.

The following code demonstrates how to apply the [Authorize] attribute to controllers and actions.

Using the [Authorize] Attribute


[Authorize]
public class ProductController : ApiController
{
public HttpResponseMessage Get() { ... }
public HttpResponseMessage GetSpecific(int id) { ... }
public HttpResponseMessage Delete() { ... }
}

When you apply the [Authorize] attribute to a controller, every user activating the controller's actions must be authenticated. If an anonymous user tries to invoke one of the actions, an HTTP 401 (Unauthorized) response will be returned automatically, and the action will not be invoked. You can also decorate action methods with the [Authorize] attribute instead of decorating the controller, to specify that only the marked actions require authentication.

You can also use the [Authorize] attribute to specify which authenticated users can invoke the action, whereas all other users will not be permitted to invoke it. To specify the authorized users, you can use the [Authorize] attribute's Roles and Users parameters. Each of these parameters accepts a comma-separated string that contains the list of authorized roles or users, respectively. Roles are a way to group users that have the same set of permissions; this way you only need to specify the role name, instead of hard-coding the names of all the authorized users.

Best Practice: Role-based authorization is more manageable than user-based authorization, because roles are usually defined in the early stages of the application design and are rarely changed. Users, on the other hand, come and go, which requires more frequent changes to hard-coded user names.
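For completeness, the Users parameter can be used in the same way as Roles. In the following sketch, the user names and the action are illustrative only:

```csharp
// Only these two users may invoke the action, regardless of their roles.
[Authorize(Users = "contoso\\alice,contoso\\bob")]
public HttpResponseMessage Delete(int id) { ... }
```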

Users' roles are retrieved after the user is authenticated. For example, when you use Windows authentication in IIS, ASP.NET populates the user's roles from the user's Active Directory groups. If you use custom authentication, you will need to populate the user's list of roles yourself, as shown in the sample code for the AuthenticationMessageHandler class in the previous topic.

Note: If you use Forms authentication, you can control where roles are stored by configuring the ASP.NET Role Manager. For example, you can authenticate your users with ASP.NET Membership against Active Directory Domain Services (AD DS), and load the users' roles from SQL Server. Role Manager configuration is outside the scope of this course.

The following code demonstrates how to authorize specific roles with the [Authorize] attribute.

Authorizing Users With Roles


[Authorize]
public class ProductController : ApiController
{
public HttpResponseMessage Get() { ... }
public HttpResponseMessage GetSpecific(int id) { ... }
[Authorize(Roles="Admins")]
public HttpResponseMessage Delete() { ... }
}

In the preceding example, the [Authorize] attribute is used on the Delete method to specify that only users belonging to the Admins role can invoke the method.
If your controller requires authorization and you want to exclude an action from being authorized, you can do so by decorating the method with the [AllowAnonymous] attribute. You can use this attribute to override the use of the [Authorize] attribute, either when the [Authorize] attribute is used to decorate the containing controller, or when it is added to the ASP.NET Web API filters list.

The following code demonstrates how to use the [AllowAnonymous] attribute.

Using the [AllowAnonymous] Attribute


public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.Filters.Add(new AuthorizeAttribute());

        // Configuration code continues
        //
    }
}

public class ProductController : ApiController
{
    public HttpResponseMessage Get() { ... }

    [AllowAnonymous]
    public HttpResponseMessage GetSpecific(int id) { ... }

    [Authorize(Roles="Admins")]
    public HttpResponseMessage Delete() { ... }
}

In the preceding code, the ProductController class does not have the [Authorize] attribute. Instead, the AuthorizeAttribute class is added to the ASP.NET Web API filters collection, which applies it to all the controllers in the application. The only method that permits anonymous access is the GetSpecific method, which is decorated with the [AllowAnonymous] attribute. When adding the AuthorizeAttribute class at the configuration level, you can also use the [AllowAnonymous] attribute on a controller, to specify that all the controller's actions are accessible to anonymous users.

If you require a level of authorization that is not handled by the [Authorize] attribute, you can create your own authorization filter. For example, you might want to authorize according to roles, but also be able to specify users that are excluded from the role. To create your own authorization filter, derive from the AuthorizationFilterAttribute class and override the OnAuthorization method with your custom authorization code.

The following code demonstrates how to create a custom authorization filter.

Creating a Custom Authorization Filter


public class ExtendedAuthorize : AuthorizationFilterAttribute
{
    string _role;
    string[] _excludedUsers;

    public ExtendedAuthorize(string role, string excludedUsers)
    {
        _role = role;
        if (excludedUsers != null)
        {
            _excludedUsers = excludedUsers.Split(',');
        }
    }

    public override void OnAuthorization(
        System.Web.Http.Controllers.HttpActionContext actionContext)
    {
        IPrincipal principal = System.Threading.Thread.CurrentPrincipal;

        // User cannot be anonymous
        if (!principal.Identity.IsAuthenticated)
        {
            throw new HttpResponseException(
                new HttpResponseMessage(System.Net.HttpStatusCode.Unauthorized));
        }
        else if (!principal.IsInRole(_role) ||
            (_excludedUsers != null && _excludedUsers.Contains(principal.Identity.Name)))
        {
            // User is either not in the correct role, or is one of the excluded users
            throw new HttpResponseException(
                new HttpResponseMessage(System.Net.HttpStatusCode.Forbidden));
        }
    }
}

The ExtendedAuthorize class implements authorization logic similar to that described before; if the user belongs to the required role, and is not blocked, they will be authorized. The preceding code returns one of two security responses: either an HTTP 401 (Unauthorized) response, if the user is anonymous, or an HTTP 403 (Forbidden) response, if the user has been authenticated but is not permitted to execute the required code.
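Applying the custom filter is similar to applying [Authorize]. In the following sketch, the role, the excluded user name, and the action are illustrative only:

```csharp
// Members of the "Admins" role may invoke Delete, except the excluded user.
[ExtendedAuthorize("Admins", "contoso\\eve")]
public HttpResponseMessage Delete(int id) { ... }
```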

Note: As shown in Lesson 1, "The ASP.NET Web API Pipeline", in the "Filters" topic, you can also implement custom authorization logic by implementing the IAuthorizationFilter interface. The difference between the IAuthorizationFilter and AuthorizationFilterAttribute types is that the attribute provides a synchronous authorization check, whereas the interface provides an asynchronous authorization check. Asynchronous authorization checks are useful when the authorization logic is more I/O-bound than CPU-bound, for example, if the code performs a network call to retrieve user information required by the authorization process.
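As a rough sketch of the asynchronous alternative, an IAuthorizationFilter implementation could look like the following. The class name and the CheckUserAsync helper are made up for the example; only the interface and its member signatures come from ASP.NET Web API.

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

// A minimal asynchronous authorization filter. The continuation delegate
// invokes the rest of the pipeline when authorization succeeds.
public class AsyncAuthorizationFilter : IAuthorizationFilter
{
    public bool AllowMultiple { get { return false; } }

    public async Task<HttpResponseMessage> ExecuteAuthorizationFilterAsync(
        HttpActionContext actionContext,
        CancellationToken cancellationToken,
        Func<Task<HttpResponseMessage>> continuation)
    {
        // Hypothetical I/O-bound check, such as a call to an external user store
        bool authorized = await CheckUserAsync(actionContext);
        if (!authorized)
        {
            return actionContext.Request.CreateResponse(HttpStatusCode.Unauthorized);
        }
        return await continuation();
    }

    private Task<bool> CheckUserAsync(HttpActionContext actionContext)
    {
        // Placeholder for real asynchronous authorization logic
        return Task.FromResult(true);
    }
}
```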

For additional information on authentication and authorization in ASP.NET Web API, see:

Authentication and Authorization in ASP.NET Web API


http://go.microsoft.com/fwlink/?LinkID=298769&clcid=0x409

Demonstration: Creating Secured ASP.NET Web API Services


In this demonstration, you will add a delegating handler to the ASP.NET Web API message handling
pipeline. The delegating handler will validate the user's credentials, supplied according to the basic
authentication scheme, and create the appropriate IPrincipal and IIdentity objects. You will also use the
[Authorize] and [AllowAnonymous] attributes to specify which actions require authentication and
which can be called by anonymous users.

Demonstration Steps
1. Open the WebAPISecurity.sln solution from the D:\Allfiles\Mod04\DemoFiles\WebAPISecurity folder.
2. Open the AuthenticationMessageHandler.cs file from the WebAPISecurity project, and examine
the code in the SendAsync method. The code first checks if the request contains Basic authentication
information by checking the HttpRequestMessage.Headers.Authorization.Scheme property. If the
request does not contain the Authorization header, it is sent to the next handler without checking.

Note: If some of the actions are accessible to anonymous users, the authentication handler should not require that every request contain authentication information. This is the case in this demo.

3. Examine the code in the first if statement. If the request contains Basic authentication information, the code retrieves the identity from the HTTP Authorization header, parses it into the username and password, and then sends the identity to be verified in the AuthenticateUser method. If the authentication fails, an Unauthorized response is sent back to the client.

Note: In Basic authentication, the username and password are encoded into a single Base64 string.

4. Examine the code in the last if statement in the SendAsync method. In ASP.NET Web API, an action
can return an unauthorized response if it requires authentication and the user did not supply it, or if it
requires the user to have a specific role which the user does not have. If an unauthorized response is
returned from the action, the code will add the Basic authentication type to notify the client of the
expected authentication type.

5. Locate the AuthenticateUser method, and examine its code. After the identity is authenticated, the code creates GenericIdentity and GenericPrincipal objects to identify the user and its roles. The principal is then attached to the Thread.CurrentPrincipal property to make it available for the authorization process.

6. Open the ValuesControllers.cs file from the project's Controllers folder, and examine the use of the [Authorize] and [AllowAnonymous] attributes. The [Authorize] attribute verifies that the user is authenticated before invoking any action in the controller. The [AllowAnonymous] attribute decorating the second Get method skips the authentication check, allowing anonymous users to invoke the decorated action.
7. Open the WebApiConfig.cs file from the project's App_Start folder, and examine the call to the MessageHandlers.Add method. This is how the authentication message handler is attached to the message handling pipeline.

8. Run the web application, and in the browser, append the suffix api/values/1 to the address. Verify that you see an XML response from the action.
9. Browse to api/values/, enter a non-matching username and password, and verify that you are asked again for credentials. Enter a matching username and password, and then verify that you see an XML response from the action.
Lesson 4
Injecting Dependencies into Controllers
Most applications consist of several components that depend on each other. It is important to be able to replace the implementation of a dependent module without having to change the code that uses the dependency. To do this, you first need to decouple software components from the other components they depend on. This lesson describes how to decouple dependent components from their dependencies. The lesson also explains how you can use the IDependencyResolver interface in ASP.NET Web API to implement dependency injection.

Lesson Objectives
After completing this lesson, students will be able to:

Describe how dependency injection works.


Describe how to use the ASP.NET Web API dependency resolver.

Create a dependency resolver.

Dependency Injection
Modern software systems are built out of different software components. For example, many distributed applications use a layered architecture that separates different responsibilities into different components (logical layers of distributed applications are discussed in Module 1, "Overview of Service and Cloud Technologies", Lesson 1, "Key Components of Distributed Applications").
Dependency injection is a common software design pattern that is used to decouple software components from other components they depend on. This is done so that dependencies can be easily replaced if needed. For example, it is common to replace the dependencies during tests with mock objects in order to control the results they return.

At the core of the dependency injection design pattern, there are three types of components:

• The dependent component. A software component that depends on other components to execute.
• Dependencies. Software components that the dependent component depends upon.
• Injector. A component that obtains or creates instances of the dependencies and passes them to the dependent component.

In order for the dependent component to be decoupled from its dependencies, it should only refer to them as interfaces. The dependencies should be passed into the dependent component as method or constructor parameters by the injector, allowing the injector to replace the dependencies' concrete implementations at runtime.
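The pattern described above can be sketched with a minimal, self-contained example; the interface and class names are made up for illustration:

```csharp
using System;

// The dependency is defined as an interface, so concrete
// implementations can be swapped without changing the dependent component.
public interface IGreetingSource
{
    string GetGreeting();
}

public class EnglishGreetingSource : IGreetingSource
{
    public string GetGreeting() { return "Hello"; }
}

// The dependent component: it receives its dependency
// through the constructor and never creates it itself.
public class Greeter
{
    private readonly IGreetingSource _source;

    public Greeter(IGreetingSource source)
    {
        _source = source;
    }

    public string Greet(string name)
    {
        return _source.GetGreeting() + ", " + name;
    }
}

public class Program
{
    public static void Main()
    {
        // The injector: creates the dependency and passes it in.
        // During tests, a mock IGreetingSource could be passed instead.
        Greeter greeter = new Greeter(new EnglishGreetingSource());
        Console.WriteLine(greeter.Greet("world")); // Hello, world
    }
}
```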
Using the ASP.NET Web API Dependency Resolver


ASP.NET Web API supports dependency injection with the IDependencyResolver interface. You can implement your own custom resolver, or use it to wrap a known IoC container such as Castle Windsor, Unity, MEF, or Ninject. You can also register the dependency resolver through the ASP.NET Web API global configuration.
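A minimal custom resolver might look like the following sketch. The ProductsController and ProductRepository types are hypothetical; any type the resolver does not know is delegated back to the default activator by returning null:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Http.Dependencies;

public class SimpleResolver : IDependencyResolver
{
    public object GetService(Type serviceType)
    {
        // Hypothetical controller with an injected repository
        if (serviceType == typeof(ProductsController))
        {
            return new ProductsController(new ProductRepository());
        }
        return null; // let ASP.NET Web API fall back to its default behavior
    }

    public IEnumerable<object> GetServices(Type serviceType)
    {
        return Enumerable.Empty<object>();
    }

    public IDependencyScope BeginScope()
    {
        return this; // a real resolver would usually create a child scope here
    }

    public void Dispose() { }
}

// Registration, typically inside WebApiConfig.Register:
// config.DependencyResolver = new SimpleResolver();
```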

Demonstration: Creating a
Dependency Resolver
In this demonstration, you will use a dependency resolver that provides instances of controllers.

Demonstration Steps
1. Open the DependencyResolver.sln solution from D:\Allfiles\Mod04\DemoFiles\DependencyResolver.
2. Explore the code in the CoursesController class. Note that the ISchoolContext constructor
argument is supplied to the class by its caller and can therefore have different concrete
implementations.

3. Explore the code for the ManualDependencyResolver class. Note that the serviceType parameter
determines which concrete implementation is returned.
4. Explore the code for the WebApiConfig class. Note that the config.DependencyResolver property specifies which dependency resolver ASP.NET Web API will use for matching concrete types with interfaces.
5. Run the project without debugging, append api/courses to the address bar, and verify you can see
the list of courses.
Lab: Extending Travel Companion's ASP.NET Web API Services
Scenario
Blue Yonder Airlines wishes to add more features to their client application: a searchable flight schedule, an RSS feed for flight schedules, and a change to the way the client connects to the service so that it uses a secured HTTPS channel. In this lab, you will implement the necessary changes to the ASP.NET Web API services you previously created. In addition, you will apply validations and dependency injection to the services, to make them more secure, manageable, and testable.

Objectives
After completing this lab, students will be able to:

Secure the services communication with HTTPS


Use System.ComponentModel.DataAnnotations and an ActionFilter to validate a model

Create a MediaTypeFormatter to support returning image content


Create an OData queryable service action

Use dependency resolver to inject repository types

Lab Setup
Estimated Time: 75 Minutes.
Virtual Machine: 20487B-SEA-DEV-A, 20487B-SEA-DEV-C

User name: Administrator, Admin


Password: Pa$$w0rd, Pa$$w0rd
For this lab, you will use the available virtual machine environment. Before you begin this lab, you must
complete the following steps:

1. On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2. In Hyper-V Manager, click MSL-TMG1, and in the Action pane, click Start.

3. In Hyper-V Manager, click 20487B-SEA-DEV-A, and in the Action pane, click Start.

4. In the Action pane, click Connect. Wait until the virtual machine starts.
5. Sign in using the following credentials:

User name: Administrator


Password: Pa$$w0rd

6. Return to Hyper-V Manager, click 20487B-SEA-DEV-C, and in the Action pane, click Start.

7. In the Action pane, click Connect. Wait until the virtual machine starts.

8. Sign in using the following credentials:

User name: Admin

Password: Pa$$w0rd

In this lab, you will install NuGet packages. It is possible that some NuGet packages will have newer
versions than those used when developing this course. If your code does not compile, and you identify
the cause to be a breaking change in a NuGet package, you should uninstall the NuGet package and
instead, install the old version by using Visual Studio's Package Manager Console window:
1. In Visual Studio, on the Tools menu, point to Library Package Manager, and then click Package
Manager Console.

2. In Package Manager Console, type the following command and then press Enter.
install-package PackageName -version PackageVersion -ProjectName ProjectName

(The project name is the name of the Visual Studio project that is written in the step where you were
instructed to add the NuGet package).

3. Wait until Package Manager Console finishes downloading and adding the package.
The following table details the compatible versions of the packages used in the lab:

Package name Package version

Microsoft.AspNet.WebApi.OData 4.0.30506

Exercise 1: Creating a Dependency Resolver for Repositories


Scenario
Controllers use repositories as data providers. In the current implementation, the controller is responsible for creating the repository in its constructor. This is not a good practice, because the controller knows the concrete type of the repository, which results in coupling.

In the following exercise, you will decouple the controller and the repository by using the dependency injection technique to inject the repository interface as a parameter of the controller constructor.

You will start by creating a dependency resolver class that is responsible for creating the repositories. You will then register the dependency resolver class in the HttpConfiguration to automatically create a repository when a controller is used. Finally, you will use Microsoft Fakes to create a stub for a repository, and then use it in a unit test project to test the location controller.

The main tasks for this exercise are as follows:


1. Change FlightController constructor to support Injection

2. Create a dependency resolver class

3. Register the dependency resolver class with HttpConfiguration

Task 1: Change FlightController constructor to support Injection


1. Open the BlueYonder.Companion.sln solution from the
D:\AllFiles\Mod04\LabFiles\begin\BlueYonder.Companion folder.
2. In the BlueYonder.Companion.Controllers project, change the LocationsController class
constructor to receive the locations repository:

Change the LocationsController constructor method, so that it receives an ILocationRepository


object.
Initialize the Locations member with the constructor parameter.
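The resulting constructor could look similar to the following sketch; the ILocationRepository interface and the Locations member come from the lab's starter code:

```csharp
public class LocationsController : ApiController
{
    private readonly ILocationRepository Locations;

    // The repository is injected by the dependency resolver,
    // instead of being created inside the controller
    public LocationsController(ILocationRepository locations)
    {
        Locations = locations;
    }
}
```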

Note: The same pattern was already applied in the begin solution for the rest of the
controller classes (TravelersController, FlightsController, ReservationsController and
TripsController).
Open those classes to review the constructor definition.
Task 2: Create a dependency resolver class


1. Open the BlueYonderResolver class from the BlueYonder.Companion.Controllers project, and in
the GetService method, add the missing code for resolving the LocationsController class.

If the serviceType parameter is of the type LocationsController, create an instance of the


LocationsController class with the required repository.

Note: The BlueYonderResolver class implements the IDependencyResolver interface.

Task 3: Register the dependency resolver class with HttpConfiguration


1. Open the WebApiConfig.cs file, located under the App_Start folder in the
BlueYonder.Companion.Host project, and set the BlueYonderResolver as the new dependency
resolver.

Use the DependencyResolver property of the config object to set the dependency resolver.
2. Test the application and the DependencyResolver injection:

Open the LocationsController class from the BlueYonder.Companion.Controllers project, and


place a breakpoint in the constructor.
Run the BlueYonder.Companion.Host project in debug.

Navigate to the address http://localhost:9239/Locations

Return to Visual Studio 2012 and verify the code breaks on the breakpoint and that the constructor
parameter is initialized (not null).

3. Open the LocationControllerTest test class from the BlueYonder.Companion.Controllers.Tests


project, and examine the code in the Initialize method. The test initialization process uses the
StubLocationRepository type which was auto-generated with the Fakes framework. This stub
repository mimics the real location repository. You use the fake repository to test the code, instead of
using the real repository, which requires using a database for the test. When running unit tests, you
should use fake objects to replace external components, in order to reduce the complexity of creating
and executing the test.

Note: For additional information about Fakes, see:


http://go.microsoft.com/fwlink/?LinkID=298770&clcid=0x409

4. Test the application using the Fakes mock framework, by running the test project.
On the Test menu, point to Run, and the click All Tests.

Results: You will be able to inject data repositories to the controllers instead of creating them explicitly
inside the controllers. This will decouple the controllers from the implementation of the repositories.

Exercise 2: Adding OData Capabilities to the Flight Schedule Service


Scenario
OData is a data access protocol that provides standard CRUD access to a data source via a website.

To add support for the OData protocol, you will install the Microsoft.AspNet.WebApi.OData NuGet package, and then decorate the methods that you want to support OData with the [Queryable] attribute.
The main tasks for this exercise are as follows:

1. Add a Queryable action to the flight schedule service

2. Handle the search event in the client application and query the flight schedule service by Using OData
filters

Task 1: Add a Queryable action to the flight schedule service


1. Add the Microsoft ASP.NET Web API OData NuGet package to the
BlueYonder.Companion.Controllers project.

2. In the BlueYonder.Companion.Controllers project, open the LocationsController class, and


decorate the Get method overload which has three parameters with the [Queryable] attribute.

3. Change the method implementation to use the repository's GetAll method and return an IQueryable
instead of IEnumerable.

Remove the parameters from the method declaration. You will not need them anymore because the
OData infrastructure will take care of the query filtering.
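After the change, the action could look similar to the following sketch, assuming the repository's GetAll method returns an IQueryable<Location>:

```csharp
[Queryable]
public IQueryable<Location> Get()
{
    // The OData infrastructure applies query options such as $filter,
    // $orderby, $top, and $skip to the returned IQueryable, so no
    // explicit filtering parameters are needed
    return Locations.GetAll();
}
```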

Task 2: Handle the search event in the client application and query the flight schedule
service by using OData filters
1. Log on to the virtual machine 20487B-SEA-DEV-C as Admin with the password Pa$$w0rd.
2. Open the BlueYonder.Companion.Client solution from the
D:\AllFiles\Mod04\LabFiles\begin\BlueYonder.Companion.Client folder.

3. Open the Addresses.cs file from the BlueYonder.Companion.Shared project and change the
GetLocationsWithQueryUri property to use OData querying instead of a standard query string.

Replace the current query string with the $filter option using the expression
substringof(tolower('{0}'),tolower(City)).
The resulting code should resemble the following code.

GetLocationsUri + "?$filter=substringof(tolower('{0}'),tolower(City))";
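For example, if the user searches for "sea", the client would issue a request similar to the following. The host name and application path are illustrative and depend on where the service is hosted.

```
GET /BlueYonder.Companion.Host/locations?$filter=substringof(tolower('sea'),tolower(City)) HTTP/1.1
Host: localhost
```

The substringof and tolower functions are standard OData filter functions, so the case-insensitive search runs on the server rather than in the client.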

Results: Your web application exposes the OData protocol, which supports GET requests for the locations data.

Exercise 3: Applying Validation Rules in the Booking Service


Scenario
To make your web application more robust, you need to add some validations to the incoming data.

You will apply a validation rule on the server to verify that all required fields of a model are sent from the
client before handling the request.

You will decorate the Traveler model with attributes that define the required fields. You will then derive from the
ActionFilterAttribute class and implement the validation of the model. Finally, you will add the validation to the
Put and Post actions.

The main tasks for this exercise are as follows:

1. Add Data Annotations to the Traveler Class

2. Implement the Action Filter to Validate the Model

3. Apply the custom attribute to the PUT and POST actions in the booking service

Task 1: Add Data Annotations to the Traveler Class


1. Return to the BlueYonder.Companion solution in the 20487B-SEA-DEV-A virtual machine. In the
BlueYonder.Entities project, use data annotation attributes to validate the Traveler class properties
using the following rules:

FirstName, LastName and HomeAddress should use the [Required] validation attribute.
MobilePhone should use the [Phone] validation attribute.

Email should use the [EmailAddress] validation attribute.
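After this task, the Traveler class might resemble the following sketch. The validation attributes are defined in the System.ComponentModel.DataAnnotations namespace; any properties other than the ones named in the rules above (such as TravelerId) are illustrative.

```csharp
using System.ComponentModel.DataAnnotations;

public class Traveler
{
    public int TravelerId { get; set; }   // illustrative key property

    [Required]
    public string FirstName { get; set; }

    [Required]
    public string LastName { get; set; }

    [Required]
    public string HomeAddress { get; set; }

    [Phone]
    public string MobilePhone { get; set; }

    [EmailAddress]
    public string Email { get; set; }
}
```

The attributes only describe the rules; they take effect when ASP.NET Web API binds the request body to the model and populates the model state, which the action filter in the next task inspects.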

Task 2: Implement the Action Filter to Validate the Model


1. In the BlueYonder.Companion.Controllers project, create a public class named
ModelValidationAttribute that derives from ActionFilterAttribute. Add the new class to the project's
ActionFilters folder.

2. In the new class, override the OnActionExecuting method and implement it as follows:
Check if the model state is valid by using the actionContext.ModelState.IsValid property.

If the model state is not valid, call the actionContext.Request.CreateErrorResponse to create an
error response message.

Note: The CreateErrorResponse is an extension method. To use it, add a using directive
for the System.Net.Http namespace.

For the error response, use the overload that expects an HttpStatusCode enum and an HttpError
object.
Use the HttpStatusCode.BadRequest, and initialize the HttpError object with the
actionContext.ModelState and the true Boolean value for the second constructor parameter.
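Putting the steps above together, a minimal sketch of the filter could look like this (the class and folder names follow the lab instructions):

```csharp
using System.Net;
using System.Net.Http;
using System.Web.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

public class ModelValidationAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        // Reject the request if data annotation validation failed during model binding
        if (!actionContext.ModelState.IsValid)
        {
            // Return 400 (Bad Request) with the model state errors in the response body
            actionContext.Response = actionContext.Request.CreateErrorResponse(
                HttpStatusCode.BadRequest,
                new HttpError(actionContext.ModelState, includeErrorDetail: true));
        }
    }
}
```

Setting actionContext.Response short-circuits the pipeline, so the controller action never runs for an invalid request.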

Task 3: Apply the custom attribute to the PUT and POST actions in the booking
service
1. In the BlueYonder.Companion.Controllers project, open the TravelersController class, and
decorate the Put and Post methods with the [ModelValidation] attribute.

2. Build the solution.

Results: Your web application will verify that the minimum necessary information is sent by the client
before trying to handle it.

Exercise 4: Secure the communication between client and server


Scenario
To support secured communication between the service and the client, you will need to define a
certificate.

You need to add support to the service for an HTTPS binding that uses a certificate, and configure the client to
work with the secured connection.

The main tasks for this exercise are as follows:


1. Add an HTTPS Binding in IIS

2. Host ASP.NET Web API web application in IIS



3. Test the client application against the secure connection.

Task 1: Add an HTTPS Binding in IIS


1. Run the Setup.cmd file from D:\AllFiles\Mod04\LabFiles\Setup.

Note: The setup script creates a server certificate to be used for HTTPS communication.

2. Open IIS Manager. Open the Server Certificates feature from the SEA-DEV12-A (SEA-DEV12-
A\Administrator) features list.

Verify you see a certificate issued to SEA-DEV12-A. This certificate was created by the script you ran
in the previous task.

3. Select the Default Web Site from the Connections pane, open the Bindings list, and add an HTTPS
binding. Select the SEA-DEV12-A certificate that was created by the setup script.

Note: When you add an HTTPS binding to the Web site bindings, all web applications in
the Web site will support HTTPS.

Task 2: Host ASP.NET Web API web application in IIS


1. Publish the service to IIS from Visual Studio 2012 by changing the BlueYonder.Companion.Host
project properties.

In the project's Properties window, click the Web tab, and change the server from IIS Express to the
local IIS Web server.

Create the virtual directory and save the changes to the project.

2. Browse the web application using HTTP to get the location data.
Return to IIS Manager, refresh the Default Web Site, and then select the
BlueYonder.Companion.Host application.

Click the Browse *:80 link to browse the Web application.


Append the string locations to the address bar to get the list of locations.

Explore the contents of the file. It contains the Location database content in the JSON format.

3. Change the scheme of the URL from HTTP to HTTPS to get the location data again, this time through
a secured channel.

Make sure you also change the computer name in the URL from localhost to SEA-DEV12-A.

Explore the contents of the file. It should contain the same locations as before.

Note: If you use localhost instead of the computer's name, the browser will display a
certificate warning. This is because the certificate was issued to the SEA-DEV12-A domain, not
the localhost domain.

Task 3: Test the client application against the secure connection.


1. Return to the BlueYonder.Companion.Client solution in the 20487B-SEA-DEV-C virtual machine.
Open the Addresses.cs file from the BlueYonder.Companion.Shared project and in the BaseUri
property change the URL from using HTTP to HTTPS.

2. Run the client app, search for New, and purchase a flight from Seattle to New York. Provide an
incorrect value in the email field and verify you get a validation error message originating from the
service.

Use any non-valid email address, such as ABC or ABC@DEF.


3. Correct the email address and purchase the trip. Verify the trip was purchased successfully, and then
close the client app.

Results: The communication with your web application will be secured using a certificate.

Question: Why does using dependency injection make it easier to change your code?

Module Review and Takeaways


In this module, you learned how the ASP.NET Web API request and response pipeline is structured and
how to extend it. You learned how to create OData services with ASP.NET Web API and how to consume
them from client applications. You also learned how to secure ASP.NET Web API with IIS, ASP.NET, and
custom message handlers. Lastly, you learned how to use dependency injection with ASP.NET Web API
controllers.

Module 5
Creating WCF Services
Contents:
Module Overview 5-1

Lesson 1: Advantages of Creating Services with WCF 5-3

Lesson 2: Creating and Implementing a Contract 5-6


Lesson 3: Configuring and Hosting WCF Services 5-14

Lesson 4: Consuming WCF Services 5-29


Lab: Creating and Consuming the WCF Booking Service 5-35
Module Review and Takeaways 5-44

Module Overview
The previous two modules are about ASP.NET Web Application Programming Interface (API), which is the
.NET technology to create Hypertext Transfer Protocol (HTTP)-based services. In the first module, we also
described another type of service: Simple Object Access Protocol (SOAP)-based services. In the first
lesson of this module, you will learn about the differences between HTTP-based and SOAP-based services.
The rest of this module will be about implementing SOAP-based services with the Windows
Communication Foundation (WCF) framework.

When you develop an application that has a client/server architecture, one of the technologies that you are likely
to use is the Windows Communication Foundation (WCF) framework. WCF is the most up-to-date
communication infrastructure made by Microsoft, and is designed for building distributed applications
that use service-oriented architecture (SOA).

WCF is a very flexible and extensible framework. You can customize and configure WCF to match different
application scenarios. You can control almost every aspect of client/server communication, either through
configuration or by implementing various extensions.

You can control the following aspects of WCF:


Communication protocol, such as Transmission Control Protocol (TCP), HTTP, and User Datagram
Protocol (UDP).

Security aspects, such as authentication and authorization.

Performance tuning, such as throttling, concurrency, and load balancing.

Hosting environment, such as Internet Information Services (IIS) and Windows Services.

This module describes how to create a simple WCF service, host it using self-hosting, and consume the
service from a client application. After you get familiar with WCF and its various configurations and
extensions, you will be able to build robust, flexible, and scalable services.

Objectives
After you complete this module, you will be able to:

Describe why and when to use WCF to create services.

Define a service contract and implement it.

Host and configure a WCF service.

Consume a WCF service from a client application.



Lesson 1
Advantages of Creating Services with WCF
SOAP is a protocol specification for exchange of structured information between peers in a decentralized,
distributed environment. SOAP uses Extensible Markup Language (XML) for its message formatting, and
usually relies on HTTP for message negotiation and transmission, although there are implementations of
SOAP over other transports, such as SOAP-over-UDP, which is used for Universal Plug and Play (UPnP).
SOAP-based services technology has existed for more than a decade. It was created by Microsoft and was
later adopted by W3C as a standard. SOAP is used as the underlying layer for ASP.NET Web Services
(ASMX) and for WCF. However, unlike Web Services, which are limited to SOAP over HTTP, WCF can use
SOAP over TCP and even named pipes. WCF is also not limited to SOAP: it can be configured to use
plain XML and JavaScript Object Notation (JSON) as well.

Note: SOAP over TCP is a WCF proprietary implementation and is not part of the World
Wide Web Consortium (W3C) standards. Therefore, it can be used only between a WCF client and
a WCF service.

This lesson briefly reviews the history of SOAP-based services, starting with Web Services and ending with
WCF. It discusses the advantages WCF offers as an infrastructure service, and reviews some of these
features of WCF and its characteristics, including:

Multiple transport support, such as TCP and HTTP.

Various messaging patterns, such as request-response and one-way.

Complex application scenarios, such as transactions, reliable messaging, and service discovery.
Finally, this lesson explains the features of WCF that are not supported by the ASP.NET Web API.

Lesson Objectives
After you complete this lesson, you will be able to:
Explain the benefits of using SOAP-based services.

List the features of WCF that are not supported by ASP.NET Web API.

The Benefits of Using SOAP-Based Services


SOAP, a standard remote procedure call (RPC) protocol, is now maintained by the World Wide
Web Consortium (W3C). It was designed as a protocol specification for invoking methods on
servers, services, and objects. SOAP was developed as a multi-environment and
language-independent way of transferring structured data between services.

SOAP is an XML-based protocol that is mainly (but not exclusively) transmitted over HTTP. The main
characteristics of SOAP are as follows:

Lightweight protocol

Used for communication between applications



Designed to be transmitted over HTTP

Not designed for any specific programming language

XML-based

Simple and extensible

SOAP is not the only RPC protocol. There are many others, such as the Common Object Request Broker
Architecture (CORBA) and the Distributed Component Object Model (DCOM) RPC protocol. There are
several benefits of SOAP to consider over these other protocols:
Versatility. DCOM and CORBA have not been adopted by many platforms. DCOM is supported only
by Windows and CORBA is mainly used in Java.

Security. CORBA, DCOM, and similar protocols are usually not firewall/proxy friendly and might be
blocked.

For more information on SOAP, see:


Simply SOAP
http://go.microsoft.com/fwlink/?LinkID=298771&clcid=0x409

The following code presents a sample SOAP request message sent to a service, followed by a sample
SOAP response message returned by the service. The service multiplies any number sent in the request
message by two.

Request and Response SOAP Messages


Request

<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
<s:Body>
<MultiplyByTwo xmlns="http://tempuri.org/">
<value>123</value>
</MultiplyByTwo>
</s:Body>
</s:Envelope>

Response

<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
<s:Body>
<MultiplyByTwoResponse xmlns="http://tempuri.org/">
<MultiplyByTwoResult>246</MultiplyByTwoResult>
</MultiplyByTwoResponse>
</s:Body>
</s:Envelope>

SOAP is used as an underlying protocol in ASP.NET Web Services and in WCF services. Both technologies
are used to build RPC services.

WCF Features That Are Not Supported by ASP.NET Web API


In the previous topic, we discussed SOAP, an RPC protocol. Nowadays, HTTP-based services
have become popular and are replacing RPC services.

ASP.NET Web API is a simple yet powerful infrastructure for building HTTP-based services. It
may seem that WCF is obsolete and you can replace it with ASP.NET Web API. However, that is
not always the case. There are several scenarios in which ASP.NET Web API does not provide a
solution, or in which WCF is still a preferable infrastructure:

One way messaging


Multicast messaging with UDP

In-order delivery of messages

Duplex services (for example, publisher-subscriber)

Using Message Queues (for example, MSMQ) as a transport


Run time discovery of services

Content-based message routing

Distributed transactions support


WCF handles these scenarios by using SOAP and Web Service specifications (commonly referred to as WS-
*). The WS-* standards complement SOAP by controlling the needs of distributed applications, such as
message security, message reliability, and transaction support. HTTP-based services rely mostly on HTTP
headers, which are not designed for these scenarios.

But this is not to say that ASP.NET Web API is less useful than WCF. Services that you create with ASP.NET
Web API can take advantage of HTTP and create services that have cacheable responses in clients, built-in
concurrency mechanisms, and other features that are part of the HTTP application protocol. In addition,
ASP.NET Web API is supported by browsers, which do not support SOAP.

Question: When would you prefer using ASP.NET Web API over WCF?

Lesson 2
Creating and Implementing a Contract
When you create a service, you must answer the following questions:

What can the service do?

Where can I find the service?

How do I send messages to the service?

This lesson explains the service contract, which provides the answer to the first question, What can the
service do?
The service contract is one of the fundamentals of WCF. It is a definition of the operations supported by
the service. Additionally, the service contract defines other aspects of the service and its operations, such
as error handling.

This lesson describes how to create and implement a service contract. It also explains how to control the
service behavior by using the [ServiceContract] attribute, the [OperationContract] attribute, and the
[FaultContract] attribute.

Lesson Objectives
After you complete this lesson, you will be able to:
Define the WCF service contract.

Create and implement a service contract.

Add exception handling to the service.

Creating Service and Data Contracts


A WCF service contract is a standard interface, but
a [ServiceContract] attribute is added to indicate
that the interface defines a WCF service contract.
The interface methods that are exposed as service
operations will each include an
[OperationContract] attribute, indicating that
the method defines an operation that is part of a
service contract.

Best Practice: You can apply the [ServiceContract] attribute to an interface or to a
class. It is better to use an interface because then you can replace the implementation. It also
creates a clear separation between the service contract and its implementation.

These attributes do more than merely mark the interface methods as service operations. They also control
various aspects of the service behavior and characteristics, such as session support, exception handling,
and callback contracts on duplex services.

Another important aspect of the service contract is the data contract. Data contracts define the data that
is exchanged between the service and the client. Data contracts define data that is either returned by a
service operation or received by a service operation as a parameter.

Similar to the service contract, the data contract is defined by using special attributes:
The [DataContract] attribute. Applied to the class to mark it as a data contract.

The [DataMember] attribute. Applied to those class properties that will be included in the data
contract.

Note: The ServiceContract and OperationContract attributes are part of the


System.ServiceModel namespace in the System.ServiceModel assembly. The DataContract
and DataMember attributes are part of the System.Runtime.Serialization namespace of the
System.Runtime.Serialization assembly.

The following code example depicts a service contract declaration alongside a data contract
(Reservation). The contract exposes basic hotel reservation operations.

Service Contract and Data Contract


[ServiceContract]
public interface IHotelBookingService
{
    [OperationContract]
    BookingResponse BookHotel(Reservation request);

    [OperationContract]
    Booking GetExistingReservation(string bookingReference);

    [OperationContract]
    string CancelReservation(string bookingReference);
}

[DataContract]
public class Reservation
{
    [DataMember]
    public string HotelName { get; set; }

    [DataMember]
    public DateTime CheckinDate { get; set; }

    [DataMember]
    public int NumberOfDays { get; set; }

    [DataMember]
    public string GuestFirstName { get; set; }

    [DataMember]
    public string GuestLastName { get; set; }
}

The [DataContract] and [DataMember] attributes are optional. If a class is not decorated with the
[DataContract] attribute, WCF will automatically serialize every public property and field that it
encounters in your class.
You can therefore choose whether to use an inclusive approach or an exclusive approach to mark
properties and fields to be serialized:

Inclusive approach. Use [DataContract], and apply [DataMember] to each member that needs to
be serialized.

Exclusive approach. Do not use [DataContract], and apply the [IgnoreDataMember] attribute to
each public property and field that you do not want to be serialized.
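The following sketch contrasts the two approaches for similar classes. The class and property names are illustrative, not taken from the lab solution.

```csharp
using System.Runtime.Serialization;

// Inclusive approach: only members marked [DataMember] are serialized
[DataContract]
public class Guest
{
    [DataMember]
    public string Name { get; set; }

    public string InternalNotes { get; set; }   // not serialized (no [DataMember])
}

// Exclusive approach: all public members are serialized unless excluded
public class GuestRecord
{
    public string Name { get; set; }            // serialized automatically

    [IgnoreDataMember]
    public string InternalNotes { get; set; }   // explicitly excluded
}
```

Both classes serialize only the Name value; the difference is whether inclusion or exclusion must be stated explicitly.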

Choosing the approach to use depends on the characteristics of your class: how large it is, how many
serialized and non-serialized members it contains, and whether you can change how it is declared. It is
preferable to choose one technique and apply it to all of your classes to prevent confusion.

For more information on inclusive and exclusive data contracts, see:


Using Data Contracts
http://go.microsoft.com/fwlink/?LinkID=298772&clcid=0x409

Implementing a Service Contract


After you define a service contract, you must
provide the actual implementation. This involves
creating a class that implements the service
contract interface.
The previous topic gives an example of a simple
service contract, IHotelBookingService. The
following code example demonstrates an
implementation of this contract.

Implementing the IHotelBookingService Service Contract

public class HotelBookingService : IHotelBookingService
{
    public BookingResponse BookHotel(Reservation request)
    {
        BookingResponse response = BusinessLogic.ProcessReservation(request);
        return response;
    }

    public Booking GetExistingReservation(string bookingReference)
    {
        Booking booking = BusinessLogic.GetBookingByReference(bookingReference);
        return booking;
    }

    public string CancelReservation(string bookingReference)
    {
        string cancelationReference =
            BusinessLogic.CancelBookingByReference(bookingReference);
        return cancelationReference;
    }
}

As you can see, implementation of a service requires you to implement the service contract interface and
provide your business logic. Apart from providing concrete implementation of the service contract, you
can control other aspects of the service behavior, such as the instantiation and concurrency model of the
service:

Instantiation. When you send a request to a service, the request is executed in an instance of the
service class. The service instantiation controls when new instances of your service class are created.

Concurrency. Each request in WCF runs in its own thread. But when several requests running in
different threads are executing in parallel, they might attempt to use the same service instance

depending on the instantiation mode. The concurrency setting controls how many requests can use
the same service instance concurrently.

The following code example demonstrates how to set the instantiation and concurrency of a service.

Setting the Instantiation and Concurrency of a Service


[ServiceBehavior(InstanceContextMode=InstanceContextMode.PerCall,
ConcurrencyMode=ConcurrencyMode.Single)]
public class HotelBookingService : IHotelBookingService
{

}

The instancing and concurrency modes are controlled by the [ServiceBehavior] attribute. You can
control the instancing mode by adding the InstanceContextMode parameter, and the concurrency
mode by adding the ConcurrencyMode parameter.

The instancing options supported in WCF are:


Per Call. A new instance of your service class is created upon each request and destroyed after the
request is complete.
Single. A single instance is created for all requests and is destroyed when the service closes.

Per Session. A new instance is created per client connection (session) and is destroyed when the
client disconnects or is idle for too long (the default idle time is 10 minutes). This is the default
setting.
When you use the Per Session or Single instancing modes, you can use one of the following concurrency
modes to control the number of requests the same service instance can use at the same time:
Single. Only a single request can execute in an instance at a time. This is the default setting.

Multiple. Multiple requests can execute in an instance at a time.

Reentrant. As with Single, only a single request can execute in an instance at a time. However, if your
instance method calls another service, then instead of the instance being blocked while waiting for
the call to return, the instance is released, and a waiting request can start using it.

Note: When you use the Per Call instancing mode, each request executes in its own service
class instance, eliminating the need to manage concurrency issues.

WCF Sessions are not covered in depth in this course. For more information about instances, concurrency,
and sessions in WCF, see:

Sessions, Instancing, and Concurrency


http://go.microsoft.com/fwlink/?LinkID=298773&clcid=0x409

Handling Exceptions
Applications must handle errors. These can
include, for example, failed validation of user
input or exceptions that may be thrown when the
application tries to access a resource such as a file
or a database.

Usually, when you develop a stand-alone application that does not involve a server, you can
catch exceptions by using try/catch blocks. You can then have the application display a friendly
error message that explains what happened and what the user can or should do next. If the
application does not handle the exception, the .NET Framework will handle it instead, and display a default error dialog box.
However, handling errors in WCF is more complicated. A WCF service cannot throw the exception back to the client
for several reasons:

The client may not understand the exception. The client may be running on a different platform, in
which .NET Framework exceptions have no meaning.

A .NET Framework exception exposes implementation details (stack trace, error codes). This is a bad
practice because the client should not be familiar with the service implementation. For example,
hackers can take advantage of information included in exceptions, such as database name, and file
paths on your server.

If an exception cannot be thrown back to the client, what then is the solution? The answer is SOAP
fault messages. Fault messages are special messages, in XML format, that contain data about an error that
occurred on the service and where it originated. By default, an exception that is thrown in a WCF service,
whether unhandled or thrown deliberately by the service developer, will result in the service
returning a SOAP fault message containing a general error message.
Fault messages are part of the SOAP standard. They can be processed and handled by all compliant
platforms, making them interoperable.

The following message shows the default fault message that is returned for unknown errors.

The Default SOAP Fault Error Message of WCF


<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
<s:Header />
<s:Body>
<s:Fault>
<faultcode
xmlns:a="http://schemas.microsoft.com/net/2005/12/windowscommunicationfoundation/dispatch
er">a:InternalServiceFault</faultcode>
<faultstring>The server was unable to process the request due to an internal error.
For more information about the error, either turn on IncludeExceptionDetailInFaults
(either from ServiceBehaviorAttribute or from the &lt;serviceDebug&gt; configuration
behavior) on the server in order to send the exception information back to the client, or
turn on tracing as per the Microsoft .NET Framework SDK documentation and inspect the
server trace logs.</faultstring>
</s:Fault>
</s:Body>
</s:Envelope>

If you read the message in the faultstring element, you will notice that WCF provides an option to
include the actual exception information in the message, if you set the IncludeExceptionDetailInFaults

parameter to true in the [ServiceBehavior] attribute. You also have the option of configuring this
behavior in the service configuration file. You will learn about service configurations in the next lesson,
and see how to use the serviceDebug configuration behavior in Lesson 3, "Configuring and Hosting WCF
Services" in Course 20487.

Note: Including the content of an exception in a SOAP fault message is recommended only for
debugging purposes. As mentioned, exception messages can expose sensitive information about your
service implementation, and non-.NET platforms will not know how to handle this type of
content.

Instead of turning on the IncludeExceptionDetailInFaults flag, you can throw a FaultException that
describes the reason for the problem.

The following code shows how to throw a FaultException.

Throwing a Fault Exception


public BookingResponse BookHotel(Reservation request)
{
if (!IsHotelNameValid(request))
{
throw new FaultException("Unable to find a match for the requested hotel");
}

BookingResponse response = BusinessLogic.ProcessBookingRequest(request);

return response;
}

When you throw a FaultException instead of a standard exception, the service includes the exception
text in the fault section of the SOAP envelope without disclosing sensitive exception information such as
the stack trace of the exception. This will make the exception text available to your client, no matter which
platform it is written in.
However, sometimes just writing a fault string is not enough, and you will want the fault to contain more
information, such as the problematic data that caused the exception or a unique error ID number that
your customers can use to contact the service administrator for further investigation. In such a case, you
can use the FaultException<T> generic exception class. The generic argument T specifies the data
contract that holds extra information about the error.

The following code returns a fault with extra details contained in a ReservationFault object.

Throwing a Detailed Fault


public BookingResponse BookHotel(Reservation request)
{
if (!IsHotelNameValid(request))
{
throw new FaultException<ReservationFault>(
new ReservationFault
{
HotelName = request.HotelName,
ErrorCode = "InvalidHotelName"
}, "Unable to find a match for the requested hotel");
}

BookingResponse response = BusinessLogic.ProcessBookingRequest(request);

return response;
}

The ReservationFault class used in the above example is a simple data contract, and does not derive
from any Exception class.

The following code depicts the declaration of the ReservationFault class.

The ReservationFault Class


[DataContract]
public class ReservationFault
{
[DataMember]
public string HotelName { get; set; }

[DataMember]
public string ErrorCode { get; set; }
}

Because clients only recognize data contracts whose classes are used in the method signatures of
operations, they will not be aware of a data contract that is used only in a throw statement within the code. Therefore,
you must add the data contract of the fault to the service contract manually. You can include information
about fault contracts (the data contracts used in faults), by adding the [FaultContract] attribute to the
operations that can throw such faults.

The following code shows how to include a fault contract in a service contract.

Service Contract with a Fault Contract


[ServiceContract]
public interface IHotelBookingService
{
[OperationContract]
[FaultContract(typeof(ReservationFault))]
BookingResponse BookHotel(Reservation request);

[OperationContract]
Booking GetExistingReservation(string bookingReference);

[OperationContract]
string CancelReservation(string bookingReference);
}

For an example of how to call the BookHotel service operation from a client application, and how to
handle the ReservationFault fault exception, refer to Lesson 4 in this module, "Consuming WCF Services,"
and look at the code example shown in the first topic, "Generating Service Proxies with Visual Studio
2012."

Best Practice: It is advised that all service operations use the [FaultContract] attribute to
inform the client of every kind of fault it can expect to receive.

For more information about handling faults in WCF, see:

Specifying and Handling Faults in Contracts and Services


http://go.microsoft.com/fwlink/?LinkID=298774&clcid=0x409

Question: Why should you avoid returning the entire Exception object from a service?

Demonstration: Creating a WCF Service


This demonstration shows how to create a service contract and a data contract, how to implement the
service, and how to test the service with the WCF Test Client tool.

Demonstration Steps
1. In Visual Studio 2012, open
D:\Allfiles\Mod05\DemoFiles\CreatingWCFService\begin\CreatingWCFService.sln. Explore the
code in the files of the Service project, and notice the use of service contract and data contract
attributes.

2. Add a reference to the System.ServiceModel and System.Runtime.Serialization assemblies.

3. In the IHotelBookingService interface, decorate the interface with the [ServiceContract] attribute,
and decorate the BookHotel method with the [OperationContract] attribute.
4. Decorate the BookingResponse and Reservations classes with the [DataContract] attribute.
Decorate each of their properties with the [DataMember] attribute.

5. Run the service in debug mode and test it by using the built-in WCF Test Client. Invoke the BookHotel
operation with HotelName set to HotelA, and verify that the response shows the booking
reference AR3254.

Question: What are the steps for creating a service contract?


5-14 Creating WCF Services

Lesson 3
Configuring and Hosting WCF Services
In the previous lesson, we discussed the logical aspects of WCF services: SOAP messaging, service
contracts, data contracts, and service implementation. However, these only explain what the service can
do. There are many other questions to be answered:

What is the address of the service?


What communication protocols does it support?

How are the service implementation instances created?

What are the security characteristics of the service (such as authentication and encryption)?
These are some of the questions answered in this lesson. The features of WCF that control these aspects
are the service host, service configuration, and endpoint configuration.

This lesson explains how to host a WCF service, how to configure and control the behavior of a service
host, and how to define and configure the service endpoints.

Lesson Objectives
After you complete this lesson, you will be able to:

Host a WCF service.


Explain the basics of WCF service endpoints.

Set the address of a service endpoint.

Set and configure the binding of a service endpoint.


Define the contract used by the service endpoint.

Expose the service metadata with metadata exchange endpoints.

Hosting WCF Services


In the previous lesson, we discussed how to use
service contracts and data contracts to describe
what a service can do, and how to provide the
actual implementation of the service.
Implementing a service is not enough to make it run; you also need a host to listen for incoming
requests and direct them to your service. The service host prepares the service implementation
class that will be addressed by clients, allocating the resources the service requires and managing
the execution state of the service.

The service host is responsible for opening ports and listening to requests according to the
configuration. The host manages the incoming requests of the service, allocates resources such as
memory and threads, creates the service instance context, and then passes the instance context
through the WCF runtime.
There are two ways to host a WCF service:

Self-hosting. Create your own application, such as a Windows Presentation Foundation (WPF)
application or a Windows Service. The host will start after the application starts, and shut down when
the application shuts down.

Web hosting. Hosts the service in IIS. The host will start after IIS receives the first request to the
service, and shut down when the web application shuts down.

This lesson focuses on the basics of self-hosting with a simple Console application. Module 6, "Hosting
Services," will explain in depth how you can self-host WCF services in Windows Services, and how you can
host WCF services with IIS.

The base class that manages WCF hosts is the ServiceHostBase type, but this class is an abstract class. The
concrete class that you will use to host your services is the ServiceHost type, which is declared in the
System.ServiceModel assembly.

Note: There are other technology-specific service host classes that derive from the
ServiceHostBase class, such as WorkflowServiceHost, which is used to host services that
execute Windows Workflow (WF) activities.

The following code demonstrates how to host a WCF service with the ServiceHost class.

Hosting a WCF Service with ServiceHost


ServiceHost hotelServiceHost = new ServiceHost(typeof(Services.HotelBookingService));

hotelServiceHost.Open();

Console.WriteLine("Service has been hosted. Press Enter to stop");
Console.ReadLine();

hotelServiceHost.Close();

A ServiceHost instance can manage a single service type, but it can open many listeners for that service
type, each with a different configuration. For example, a single ServiceHost can listen to both HTTP and
UDP communication, and invoke a service method when a request is received on either of these
transports.

Note: If you have more than one service, you will need a different instance of ServiceHost
for each service.
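
For example, hosting two services side by side requires two separate host instances. In the following sketch, FlightBookingService is a hypothetical second service class, shown only for illustration:

```csharp
ServiceHost bookingHost = new ServiceHost(typeof(Services.HotelBookingService));
ServiceHost flightHost = new ServiceHost(typeof(Services.FlightBookingService)); // hypothetical second service

// Each host opens and listens independently
bookingHost.Open();
flightHost.Open();

Console.WriteLine("Both services are hosted. Press Enter to stop");
Console.ReadLine();

flightHost.Close();
bookingHost.Close();
```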

Before the service is opened by using the Open method, the host requires configuration that instructs it
which communication transports it needs to listen to and at which addresses. You can set this
configuration either in the code itself, before calling the Open method, or in the application configuration
file (app.config). You will learn to create this configuration, both in code and in the configuration file, later
in this lesson.

After the service host has opened, you can close it at any time by calling the Close method. The Close
method will stop the host from listening to any communication, making the service unavailable for clients.

Best Practice: The Open and Close methods can throw an exception if the service host has
difficulties listening to ports, such as if a port is already opened by another application. Although
the code sample shown above does not demonstrate this, it is advised to wrap the Open method
and Close method calls with a try/catch block.
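
A minimal sketch of this practice follows, wrapping the Open and Close calls so that addressing problems (such as a port that is already in use) surface as a handled error rather than an unhandled exception:

```csharp
ServiceHost hotelServiceHost = new ServiceHost(typeof(Services.HotelBookingService));

try
{
    hotelServiceHost.Open();
    Console.WriteLine("Service has been hosted. Press Enter to stop");
    Console.ReadLine();
    hotelServiceHost.Close();
}
catch (CommunicationException ex)
{
    // Thrown, for example, when the port is already opened by another application
    Console.WriteLine("Hosting failed: {0}", ex.Message);
    hotelServiceHost.Abort(); // moves the host to the Closed state without throwing
}
```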

Service Endpoints Overview


As mentioned in previous topics, the service host
must listen to one or more communication
channels so that it can receive requests from
clients. In WCF, these listeners are called
endpoints, and the configuration of each listener
is referred to as an endpoint configuration.

An endpoint is the end of the client-host communication channel, and therefore is the
entry point to the service. An endpoint receives
messages from a communications channel and
transfers that message to the service
implementation for processing. A service can have
numerous endpoints, each one listening to a different type of communication, such as HTTP, UDP, or TCP.
Each endpoint has a different address to distinguish it from other endpoints of that service and from
those of other services running on the same machine.

A service endpoint defines how the service is exposed to the clients. A service endpoint answers the
following three questions:
Address. Specifies where the service resides. The address is a Uniform Resource Locator (URL) that is
used by the client applications to locate the service.
Binding. Specifies how clients should communicate with the service. The binding specifies the
message encoding, transport type, security modes, session support, and other protocols.

Contract. Specifies the operations supported by the endpoint. The contract needs to match one of the
contract interfaces implemented by your service class.
These endpoint settings are called the ABCs of an endpoint.

The following code demonstrates how to add an endpoint through code.

Configuring an Endpoint in Code


ServiceHost hotelServiceHost = new ServiceHost(typeof(Services.HotelBookingService));

hotelServiceHost.AddServiceEndpoint(
typeof(Contracts.IHotelBookingService),
new BasicHttpBinding(),
"http://localhost:8080/booking/");

hotelServiceHost.Open();

The above code adds a single endpoint to the service with the following configuration:
Contract. The IHotelBookingService service contract. If a service has more than one contract, you
must create several endpoints, one for each contract.

Binding. The endpoint is configured to use BasicHttpBinding. This binding listens to HTTP
communication, and expects XML messages with SOAP envelopes. You can have multiple endpoints
with different bindings. Bindings will be explained in detail in the next topic.

Address. The address http://localhost:8080/booking is the listening address of the endpoint. When
a client sends a message to the service, it will send the message to
http://ServerName:8080/booking/, where ServerName is the DNS or IP address of the server
hosting the service.

The following XML configuration demonstrates how to add an endpoint in the application configuration
file.

Configuring an Endpoint in a Configuration File


<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<system.serviceModel>
<services>
<service name="Services.HotelBookingService">
<endpoint
address="http://localhost:8080/booking/"
binding="basicHttpBinding"
contract="Contracts.IHotelBookingService">
</endpoint>
</service>
</services>
</system.serviceModel>
</configuration>

The WCF configuration shown in the above example is contained in the <system.serviceModel>
configuration section group. The <services> section contains the list of services, each in its own
<service> element. The name attribute in the <service> element is set to the fully qualified name of the
service implementation class. Each <service> element can contain <endpoint> elements, and each such
element contains its ABC settings (address, binding, and contract).

Defining a Service Endpoint Address

The endpoint address identifies an endpoint uniquely and informs potential clients of its location.
The endpoint address has the following parts:
Scheme (HTTP, TCP)

Machine name (or IP address)

Port (optional)

Path (for example, /reservations/groupBooking/)

There are two ways to specify an address for an endpoint in WCF:

Specify an absolute address for each endpoint.


Specify a base address for the service host and then specify a relative address for each endpoint.

In each case, you can specify the address both in code and in configuration.

The following example demonstrates how to use relative endpoint addresses in the configuration file.

Using a Relative Endpoint Address in Configuration Files


<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<system.serviceModel>
<services>
<service name="Services.HotelBookingService">
<host>
<baseAddresses>
<add baseAddress="http://localhost:8080/reservations/" />
</baseAddresses>
</host>
<endpoint
address=""
binding="basicHttpBinding"
contract="Contracts.IHotelBookingService">
</endpoint>
<endpoint
address="secured"
binding="wsHttpBinding"
contract="Contracts.IHotelBookingService">
</endpoint>
<endpoint
address="net.tcp://localhost:8081/reservations/"
binding="netTcpBinding"
contract="Contracts.IHotelBookingService">
</endpoint>
</service>
</services>
</system.serviceModel>
</configuration>

In the above example, the service configuration has three endpoints for the IHotelBookingService
contract. The first two endpoints use the http://localhost:8080/reservations/ base address specified in
the <host> element. Of these, the first endpoint uses an empty relative address, so the actual address of
the endpoint is the same as the base address. The second uses the secured relative address, so its actual
address is http://localhost:8080/reservations/secured. The third endpoint uses the
net.tcp://localhost:8081/reservations/ address. The third address cannot use the HTTP base address,
since it uses a different binding (TCP rather than HTTP). You will learn about bindings in the next topic.

Note: The service host matches the address and the base address according to the binding
of the endpoint and the scheme of the base addresses. For example, both basicHttpBinding and
wsHttpBinding use HTTP communication, and therefore the service host will search for a base
address with the HTTP scheme. Therefore, you can have only one base address per URI scheme.
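
As a sketch of this rule, the <host> element from the previous example could declare one base address for each scheme, which would also let the TCP endpoint use a relative address:

```xml
<host>
  <baseAddresses>
    <!-- At most one base address per URI scheme -->
    <add baseAddress="http://localhost:8080/reservations/" />
    <add baseAddress="net.tcp://localhost:8081/reservations/" />
  </baseAddresses>
</host>
```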

The http:// scheme used in endpoint addresses follows the standard HTTP URI structure, but most other
address schemes are not standard URI schemes and are WCF-proprietary. This is why some address schemes have the "net."
prefix, such as net.tcp:// and net.pipe://. The soap.udp:// and ws:// schemes are different, because
these schemes do have defined URI structures: soap.udp is used in SOAP-over-UDP URIs, and ws is used in WebSocket URIs.

You can also use relative endpoint addresses when you declare endpoints in code. You can specify the
base address of the service host in the ServiceHost constructor method.

The following code demonstrates how to use relative endpoint addresses in code.

Using a Relative Endpoint Address in Code


ServiceHost hotelServiceHost = new ServiceHost(
typeof(HotelBookingService), new Uri("http://localhost:8080/reservations/"));

hotelServiceHost.AddServiceEndpoint(
typeof(IHotelBookingService), new BasicHttpBinding(), "");

hotelServiceHost.AddServiceEndpoint(
typeof(IHotelBookingService), new WSHttpBinding(), "secured");

hotelServiceHost.Open();

Defining Service Endpoint Bindings


When a client sends a message to a service, many technology-related decisions are involved, such as
which transfer protocol to use, whether the message should be streamed or buffered, whether the
XML should be sent as text or as binary, and whether the message needs to contain a security header
of some sort.

To make those choices easily, WCF offers the binding mechanism. A binding encapsulates all the
technology decisions required to pass a message from point A to point B:

The binding defines which transport to use to send the messages.

The binding defines how to encode the messages onto the wire.

The binding defines which protocols, such as security and sessions, are required.

A binding in WCF is a combination of these three elements: transport, encoding, and protocols. In WCF,
you can create all the definitions and configurations of the binding in a single place - either in code or in
configuration files - which reduces the amount of work required.

Note: The binding also defines the properties of the communications channel and
messages, such as timeouts and maximum message size.

Predefined Bindings
Instead of setting the three elements of the binding each time you define an endpoint, WCF provides a
collection of predefined bindings for the most common combinations of binding elements. You can use
these bindings with their default values or fine-tune the bindings to your needs. The following are some
typically used predefined bindings:

BasicHttpBinding (transport: HTTP, encoding: text)
Interoperability with older service technologies. Uses SOAP 1.1 with no WS-* protocols.

WSHttpBinding (transport: HTTP, encoding: text)
Interoperability with new service technologies. Uses SOAP 1.2, and supports all WS-* protocols.

NetTcpBinding (transport: TCP, encoding: binary)
Non-interoperable binding. Optimized for intranet communication scenarios. Supports all WS-* protocols.

NetNamedPipeBinding (transport: named pipes, encoding: binary)
Non-interoperable binding. Optimized for inter-process communication on the same machine. Lacks some of the WS-* options, such as message security.

UdpBinding (transport: UDP, encoding: text)
Interoperable with service technologies that implement SOAP-over-UDP. Lacks some of the WS-* options, such as reliable messaging.

NetHttpBinding (transport: HTTP/WebSockets, encoding: binary)
Non-interoperable binding. Optimized for Internet communication and duplex communication with WebSockets. Supports all WS-* protocols.

For more information on predefined bindings available in WCF, see:

Configuring System-Provided Bindings


http://go.microsoft.com/fwlink/?LinkID=298775&clcid=0x409

Although most bindings work even in scenarios for which they are not designed, it is a good practice to
choose the correct binding for a given endpoint. There are many considerations to take into account
when deciding which binding to use; covering all of them is beyond the scope of this topic. Here,
however, are some examples:

Bindings that support end-to-end security:

o NetTcpBinding

o WSHttpBinding

Interoperable bindings you can use with other service platforms:

o BasicHttpBinding
o WSHttpBinding

o UdpBinding (most service platforms do not currently support SOAP-over-UDP)


Bindings that cross firewalls easily:

o BasicHttpBinding

o WSHttpBinding

Bindings that support duplex (two-way) communication:

o NetTcpBinding

o WSDualHttpBinding

o NetHttpBinding

Configuring Bindings
Each binding has a set of configurable properties that you can change to modify the binding to your
needs. You can change the settings either through code or by using the configuration file.

The following example shows how you can configure the basic HTTP binding in the configuration file.

Configuring a Binding in a Configuration File


<system.serviceModel>
<bindings>
<basicHttpBinding>
<binding name="increasedSettings"
sendTimeout="00:10:00"
receiveTimeout="00:30:00"
maxReceivedMessageSize="5000000"/>
</basicHttpBinding>
</bindings>
</system.serviceModel>

The <bindings> element needs to be placed inside the <system.serviceModel> element in the
configuration file.

Inside the <bindings> element, place an element by the name of the binding type that you wish to
change. Note that the name of the binding is in camel case (the first letter of the first word is in
lowercase, and each subsequent word is capitalized).

Inside this element, place a <binding> element and give it a name. Endpoints then refer to this name in
their bindingConfiguration attribute to apply the customized binding.

Inside the <binding> tag, add the attributes and elements that you wish to set.

In the previous example, three attributes were changed. The sendTimeout attribute, which sets the
maximum time a service waits for a message to be sent, was changed to 10 minutes. The receiveTimeout
attribute, which sets the maximum time a service waits until a message is fully received, was changed to
30 minutes. The maxReceivedMessageSize attribute, which sets the maximum allowable size of a
message sent to the service, was set to 5000000 bytes.
To apply this binding to an endpoint, you will need to set the binding configuration of the endpoint
accordingly.

The following example shows how to configure an endpoint with the new binding configuration.

Configuring an Endpoint with a Modified Binding Configuration


<services>
<service name="Services.HotelBookingService">
<endpoint
address="http://localhost:8080/booking/"
binding="basicHttpBinding"
bindingConfiguration="increasedSettings"
contract="Contracts.IHotelBookingService">
</endpoint>
</service>
</services>

In addition to modifying the binding in configuration, you can also modify the binding in code. All the
predefined bindings are exposed as .NET classes. You can configure a binding by setting the properties of
the binding instance object.

The following code sets the same binding configuration as the previous code example, this time in code.

Configuring a Binding in Code


ServiceHost hotelServiceHost = new ServiceHost(typeof(Services.HotelBookingService));

BasicHttpBinding basicHttpWithIncreasedSettings = new BasicHttpBinding
{
ReceiveTimeout = TimeSpan.FromMinutes(30),
SendTimeout = TimeSpan.FromMinutes(10),
MaxReceivedMessageSize = 5000000
};

hotelServiceHost.AddServiceEndpoint(
typeof(Contracts.IHotelBookingService),
basicHttpWithIncreasedSettings,
"http://localhost:8080/booking/");

hotelServiceHost.Open();

Custom Binding
In addition to predefined bindings, you can create your own custom binding. When using a custom
binding, you can select the binding elements that compose the binding. You can define the transport
element, the message encoding, and other elements such as security and transaction support. You can
define custom bindings by adding a <customBinding> element to the <bindings> section in the
configuration file, or by creating a new instance from the CustomBinding class in code.

Note: To use the CustomBinding class in your code, add a using directive for the
System.ServiceModel.Channels namespace.

The following example demonstrates how to create a custom binding that uses HTTP and binary XML
encoding.

Creating Custom Bindings


<system.serviceModel>
<bindings>
<customBinding>
<binding name="compressedBinaryHttpBinding">
<reliableSession/>
<binaryMessageEncoding compressionFormat="GZip"/>
<httpTransport/>
</binding>
</customBinding>
</bindings>
<services>
<service name="HotelBooking.HotelBookingService">
<endpoint
address="http://localhost:8080/booking/"
binding="customBinding"
bindingConfiguration="compressedBinaryHttpBinding"
contract="Contracts.IHotelBookingService">
</endpoint>
</service>
</services>
</system.serviceModel>

The custom binding created in the above example uses HTTP transport. Instead of encoding the message
as plain text, it encodes the message by using a WCF-proprietary binary encoding and compresses the
content with GZip so that it can be sent faster on slow networks. In addition, the binding uses the
WS-ReliableMessaging protocol, which helps to cope with network failures while sending messages from end
to end.
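
The same custom binding can also be composed in code by passing binding elements to the CustomBinding constructor. This sketch assumes .NET Framework 4.5 or later, where BinaryMessageEncodingBindingElement exposes the CompressionFormat property:

```csharp
using System.ServiceModel.Channels;

CustomBinding compressedBinaryHttpBinding = new CustomBinding(
    new ReliableSessionBindingElement(),           // WS-ReliableMessaging support
    new BinaryMessageEncodingBindingElement
    {
        CompressionFormat = CompressionFormat.GZip // binary encoding compressed with GZip
    },
    new HttpTransportBindingElement());            // HTTP transport element

hotelServiceHost.AddServiceEndpoint(
    typeof(Contracts.IHotelBookingService),
    compressedBinaryHttpBinding,
    "http://localhost:8080/booking/");
```

Note the ordering of the binding elements: protocol elements first, then the message encoder, and the transport element last.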

This course does not cover the reliable messaging support of WCF. For more information on reliable
messaging in WCF, see:
Introduction to Reliable Messaging with the Windows Communication Foundation
http://go.microsoft.com/fwlink/?LinkID=298776&clcid=0x409

Question: What are the advantages of using the built-in bindings rather than creating
custom bindings?

Defining Service Endpoint Contracts


The final setting of the endpoint is the service
contract. The endpoint must indicate the exposed
service contract and, in so doing, indicates the
operations supported by the endpoint.

As you saw before, a service can have multiple endpoints if it needs to expose a contract through
different bindings. Another case in which you will create multiple endpoints for a service is when the
service has multiple service contracts (that is, implements multiple interfaces).

If your service class implements more than one contract, you will need to create multiple
endpoints to expose these contracts to clients. It is common to create these multiple endpoints such that
they use the same binding and binding configuration, but have different addresses.
The following example shows two endpoints, each for a different contract.

Endpoint Per Service Contract


<system.serviceModel>
<services>
<service name="Services.HotelBookingService">
<endpoint
address="http://localhost:8080/reservations/GroupBooking"
binding="basicHttpBinding"
contract="Contracts.IHotelGroupBookingService">
</endpoint>
<endpoint
address="http://localhost:8080/reservations/IndividualBooking"
binding="basicHttpBinding"
contract="Contracts.IHotelIndividualBookingService">
</endpoint>
</service>
</services>
</system.serviceModel>

If you create different endpoints for different contracts but use the same binding and binding
configuration for all the endpoints, you can use the same address for all the endpoints. In the above
example, both addresses could be replaced with the same address, http://localhost:8080/reservations.
In such a case, WCF will automatically identify which endpoint was addressed by checking the message
headers. Every message sent to a service has an Action header that holds the name of the contract and
the name of the requested operation.
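
As a sketch of this option, the two endpoints from the previous example could declare the same address; WCF then selects the endpoint by inspecting the Action header of each incoming message:

```xml
<service name="Services.HotelBookingService">
  <!-- Same address and binding; WCF dispatches by the Action message header -->
  <endpoint
    address="http://localhost:8080/reservations"
    binding="basicHttpBinding"
    contract="Contracts.IHotelGroupBookingService" />
  <endpoint
    address="http://localhost:8080/reservations"
    binding="basicHttpBinding"
    contract="Contracts.IHotelIndividualBookingService" />
</service>
```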

Exposing Service Metadata


The last step before clients can start consuming
your service is to expose the service contract and
endpoint information. This information is known
as the service metadata. The service metadata
includes all the information that is required to
interact correctly with the service.

WCF exposes service metadata in the form of Web Services Description Language (WSDL) documents.
WSDL is an XML-based format that is used to
describe the functionality offered by a service. A
WSDL description of a service provides a list of
service operations, the parameters and data
structures each operation expects, and any security or messaging policies the client has to obey.
In WCF, the WSDL of the service describes the following:
Service contracts

Data contracts

Fault contracts
Service endpoint addresses

Binding-related policies, such as security and transactions


WCF services expose metadata by using two techniques:
HTTP GET. You can get the WSDL document by sending a simple GET request over HTTP to the
service. This technique is useful if you want other developers to download your WSDL through a
browser. This technique is also useful to test if your service has loaded correctly (by browsing to the
WSDL address and verifying that you do not get any exceptions).

Metadata Exchange (MEX) endpoints. You can get the WSDL document by calling a special service
endpoint that uses SOAP messages with the WS-MetadataExchange protocol. If you expose a service
metadata as an endpoint rather than over HTTP, you have more control over the type of transport
you use, and you can control other binding-related configurations. For example, you can decide if
you want to expose the service metadata over TCP and prevent unauthorized clients from accessing
the metadata.

By default, for security reasons, WCF services do not expose their metadata. If you want to expose your
service metadata, you will need to change the behavior of your service. You will recall from earlier in this
module that you can control service behavior through the [ServiceBehavior] attribute. However,
exposing metadata is one of several service behaviors that cannot be controlled through the service code
with the [ServiceBehavior] attribute. Instead, you must configure it either through the ServiceHost class
or in the service configuration file.

Note: Some behaviors, such as concurrency and instancing, are more development-
oriented. Others, such as service metadata, are more deployment-and-hosting-oriented.
Development-related behaviors are set in the service implementation by using attributes, while
hosting-related behaviors are set in the host project (either in the configuration file or in the
service host code, in the ServiceHost instance).

To add a service behavior configuration to your configuration file, open your application configuration file
and perform the following steps:

1. Add a <behaviors> section to the <system.serviceModel> section group, and in it create a
<serviceBehaviors> element.

Note: In addition to service behaviors, you can also use the <behaviors> section to
configure endpoint behaviors. Endpoint behaviors are beyond the scope of this course.

2. Add a new <behavior> element to the <serviceBehaviors> element, and set its name attribute to a
name describing the use of the behavior.

3. Add service behavior elements to the <behavior> element to configure various behaviors of your
service and your hosting environment.

4. After you create the service behavior configuration, set your service to use that configuration by
adding the behaviorConfiguration attribute to your <service> element and setting the value of the
attribute to the name of the service behavior. As long as you created the service behavior in the
configuration file first, Visual Studio 2012 will open a drop-down list showing the names of the
existing service behaviors when you add the behaviorConfiguration attribute.

You can change the behavior of your service so that it exposes metadata by creating a service behavior
element and adding the <serviceMetadata> element to it.
The following code demonstrates how to configure a service to expose metadata.

Exposing Metadata of a Service


<system.serviceModel>
<behaviors>
<serviceBehaviors>
<behavior name="metadata">
<serviceMetadata httpGetEnabled="True"/>
</behavior>
</serviceBehaviors>
</behaviors>
<services>
<service name="Services.HotelBookingService" behaviorConfiguration="metadata">
<endpoint
address="http://localhost:8080/reservations/GroupBooking"
binding="basicHttpBinding"
contract="Contracts.IHotelGroupBookingService">
</endpoint>
</service>
</services>
</system.serviceModel>

The above example adds the <serviceMetadata> element, which changes the default behavior of the
service so that it exposes its metadata. You can also control how the service exposes metadata by adding
the httpGetEnabled attribute and setting it to true. This attribute configures the service to expose the
metadata with simple HTTP GET requests.

Note: You can set other attributes of the <serviceMetadata> element to control how the
metadata is exposed. For example, you can set the httpsGetEnabled attribute to true to expose
the metadata over HTTPS.

In addition to the <serviceMetadata> behavior, many more behaviors are available, such as the
<serviceDebug> behavior that was mentioned in Lesson 2, "Creating and Implementing a Contract." You
can also define the service behaviors in code before opening the service host.

Note: If you are going to host multiple services in the same hosting project, and you want
several services to use the same behavior configuration, you can omit the
behaviorConfiguration attribute from the <service> element and the name attribute from the
<behavior> element. Behaviors without a name will automatically apply to every service that
does not have a specific service behavior configuration.
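
The following sketch illustrates such a default behavior configuration; because the <behavior> element has no name, it applies to both services automatically (FlightBookingService is a hypothetical second service, and the endpoint lists are omitted for brevity):

```xml
<behaviors>
  <serviceBehaviors>
    <!-- No name attribute: applies to every service without its own behaviorConfiguration -->
    <behavior>
      <serviceMetadata httpGetEnabled="true" />
    </behavior>
  </serviceBehaviors>
</behaviors>
<services>
  <service name="Services.HotelBookingService"><!-- endpoints omitted --></service>
  <service name="Services.FlightBookingService"><!-- endpoints omitted --></service>
</services>
```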


The following code demonstrates how to add service behaviors to the service host.

Using Service Behaviors in Code


ServiceHost hotelServiceHost = new ServiceHost(typeof(Services.HotelBookingService));

hotelServiceHost.Description.Behaviors.Add(
    new ServiceMetadataBehavior { HttpGetEnabled = true });
hotelServiceHost.Description.Behaviors.Add(
    new ServiceDebugBehavior { IncludeExceptionDetailInFaults = true });

hotelServiceHost.Open();

The above example adds two behaviors, ServiceMetadataBehavior and ServiceDebugBehavior. You
can find these two behaviors in the System.ServiceModel.Description namespace. You can mix adding
behaviors in code and in configuration, but make sure you do not add the same behavior twice.

If you prefer exposing your service metadata with a Metadata Exchange (MEX) endpoint, you can do so by
adding such an endpoint to your service endpoints list.

The following configuration demonstrates how to add a MEX endpoint.

Adding a MEX Endpoint


<system.serviceModel>
<behaviors>
<serviceBehaviors>
<behavior>
<serviceMetadata/>
</behavior>
</serviceBehaviors>
</behaviors>
<services>
<service name="Services.HotelBookingService">
<endpoint
address="http://localhost:8080/reservations/GroupBooking"
binding="basicHttpBinding"
contract="Contracts.IHotelGroupBookingService"/>
<endpoint
address="http://localhost:8080/reservations/MEX"
kind="mexEndpoint"/>
</service>
</services>
</system.serviceModel>

As you can see, the new MEX endpoint has no binding and no contract, but instead has the kind attribute.
The kind attribute is used when creating standard endpoints, such as MEX endpoints and service
discovery endpoints. When you use the mexEndpoint kind, the endpoint is configured to use HTTP
binding and the IMetadataExchange contract automatically. The IMetadataExchange contract is a
special built-in contract that handles WS-MetadataExchange messages.

Note: Instead of using the kind attribute, you can set the endpoint to use the
mexHttpBinding binding and the IMetadataExchange contract. This has the same result as
using the kind attribute.
You can change the binding attribute if you want to use other bindings for the MEX
endpoint, such as mexHttpsBinding or mexTcpBinding, and you can provide additional
binding configuration if required.
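You can also add a MEX endpoint in hosting code instead of configuration. The following sketch reuses the HotelBookingService host from the earlier example; the metadata address is an assumption for illustration.

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Description;

class Program
{
    static void Main()
    {
        ServiceHost host = new ServiceHost(typeof(Services.HotelBookingService));

        // The metadata behavior must be present before a MEX endpoint can work.
        host.Description.Behaviors.Add(new ServiceMetadataBehavior());

        // Equivalent of the mexEndpoint standard endpoint shown in the
        // configuration: the built-in IMetadataExchange contract over an
        // HTTP metadata binding.
        host.AddServiceEndpoint(
            typeof(IMetadataExchange),
            MetadataExchangeBindings.CreateMexHttpBinding(),
            "http://localhost:8080/reservations/MEX");

        host.Open();
        Console.ReadLine();
        host.Close();
    }
}
```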

In a single project, if you find yourself hosting more and more services, with more and more contracts,
you will have a very big configuration file to manage. Instead of writing the configuration file yourself,
you can use the Microsoft Service Configuration Editor tool (SvcConfigEditor.exe).
SvcConfigEditor.exe is a graphical utility that you can use to add new services and service endpoints to
your configuration file, and to edit WCF settings such as the binding configuration and service behaviors.
You can open the Service Configuration Editor in Visual Studio 2012 from the Tools menu (on the
Tools menu, click WCF Service Configuration Editor), or in Solution Explorer, by right-clicking
App.config and clicking Edit WCF Configuration.

The following screenshot shows the Service Configuration Editor tool.

FIGURE 5.1: THE SERVICE CONFIGURATION EDITOR TOOL


For more information on how to use the WCF Service Configuration tool, see:

Configuration Editor Tool (SvcConfigEditor.exe)


http://go.microsoft.com/fwlink/?LinkID=298777&clcid=0x409

Demonstration: Configuring Endpoints in Code and in Configuration


This demonstration shows how to add endpoints to a service in configuration and in code.

Demonstration Steps
1. Open D:\Allfiles\Mod05\DemoFiles\DefineServiceEndpoints\begin\DefineServiceEndpoints.sln.

2. In the ServiceHost project, open the App.config file, locate the <serviceBehaviors> section, and
then add a <behavior> element with the serviceMetadata behavior.

Note: Refer to Topic 6, "Exposing Service Metadata" in this lesson for a code sample of how
to add the serviceMetadata behavior.

3. In the App.config, add a base address to the <service> element by using the address
http://localhost:8733/.

Note: Refer to Topic 3, "Defining a Service Endpoint Address" in this lesson for a code
sample of how to add a base address in configuration.

4. Save the changes you made to the App.config file, open the file with the WCF Configuration Editor,
and then add a new endpoint to the service.

To open the App.config file with the WCF Configuration Editor tool, right-click the App.config file
in Solution Explorer, and then click Edit WCF Configuration.

Add a new service endpoint with the following settings:

Property Value

Address booking

Binding basicHttpBinding

Contract HotelBooking.IHotelBookingService

Save the changes you have made to the configuration, and then close the Service Configuration Editor
window.

5. In the ServiceHost project, open the Program.cs, and add an endpoint that uses NetTcpBinding.

o Add the endpoint before calling the host.Open method.


o Call the host.AddServiceEndpoint method by using the following code.

host.AddServiceEndpoint(typeof(IHotelBookingService), new NetTcpBinding(),


"booking");

6. Run the service host in debug and test it by using the built-in WCF Test Client. Run the BookHotel
operation by using the TCP endpoint. Set the HotelName to HotelA, and then verify that the
response shows the booking reference AR3254.

7. Browse to the base address of the service, and then view the WSDL file with the service metadata.

8. Close the browser and the WCF Test Client. Return to Visual Studio, stop the debugger, and then
close Visual Studio 2012.
Question: Give several examples of scenarios in which you would create service configuration
in code instead of using configuration files.

Lesson 4
Consuming WCF Services
After developing and hosting a service, the last step is consuming it from a client application. There are
several ways to consume a WCF service. The most productive way is to use WCF on
the client side. However, you can also consume a service from non-.NET clients.

WCF and Visual Studio 2012 provide different tools that can help you consume services easily. In this
lesson, you will learn how to use Visual Studio 2012 to generate a service proxy class in design-time, and
how to create a service proxy at run time with the ChannelFactory<T> generic class. You will also learn
how to use the proxy created with these two techniques to call your service.

Lesson Objectives
After you complete this lesson, you will be able to:

Generate a client proxy with the Add Service Reference dialog box of Visual Studio 2012.

Create a client proxy with the ChannelFactory<T> generic class.

Generating Service Proxies with Visual Studio 2012


For your client application to consume a service, it
needs to communicate with the service, send it
the correct messages, and translate the returned
message. To enable a client to communicate with
a service without having to manage all the
implementation that is required to transfer the
message, WCF uses the proxy pattern.
A proxy is an adapter that reflects the capabilities
of a service over the networking and messaging
technological boundaries that are used by the
service. A proxy on one side reflects the service
contract by implementing the same contract
(interface) that is exposed by the service. On the other side, each operation the proxy implements
internally performs all the necessary operations that are required to communicate with the service. The
following steps, implemented by the proxy, are required when communicating with a service:

1. Serialize the request to a message object.

2. Open a channel to the service and send the message to the service, according to the required
transport and other binding settings.

3. Wait for the service to respond to the request and send its response.

4. Deserialize the message and return the value to the caller.
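The steps above are exactly what a proxy class encapsulates. As a rough sketch, a hand-written proxy based on ClientBase&lt;T&gt; could look like the following; the contract name is borrowed from the examples later in this lesson, and the Reservation data contract and the BookHotel operation signature are assumptions.

```csharp
using System.ServiceModel;
using System.ServiceModel.Channels;

// Assumed service contract, shared between the client and the service.
[ServiceContract]
public interface IHotelBookingService
{
    [OperationContract]
    void BookHotel(Reservation reservation);
}

public class HotelBookingServiceProxy : ClientBase<IHotelBookingService>,
                                        IHotelBookingService
{
    public HotelBookingServiceProxy(Binding binding, EndpointAddress address)
        : base(binding, address) { }

    public void BookHotel(Reservation reservation)
    {
        // The inner channel performs steps 1-4 above: it serializes the call
        // to a message, sends it over the configured transport, waits for the
        // response, and deserializes it before returning to the caller.
        Channel.BookHotel(reservation);
    }
}
```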

If you want to use the proxy pattern to consume a service, you need to build a class that implements the
service contract and that is responsible for all the transformations and communication with the service.
WCF can build those proxy classes by using the Add Service Reference dialog box of Visual Studio 2012.
This tool reads WSDL documents, extracts the service contract, and creates proxy classes that match the
service contracts. In addition, this tool also creates data classes according to the data contracts specified in
the WSDL file. To use the Add Service Reference dialog box:
1. In Solution Explorer, right-click your project, and then click Add Service Reference.

2. In the Add Service Reference dialog box, enter the WSDL file address of the service, and then click
Go. WCF will try to connect to the service and request the service's WSDL file.

Note: If you are trying to consume a WCF service that you have developed, make sure the
service has exposed its WSDL document. To expose the document, the service must use the
ServiceMetadata service behavior.

3. After Visual Studio finds the WSDL, it displays the list of service contracts. Enter the name of the
namespace in which you want to create the proxies, and then click OK. WCF will create the proxy
classes that are required for every service contract and data contract exposed in the WSDL.
Visual Studio will also place all service endpoint configurations in the configuration file of the client.
This allows you to use the proxy easily with any of the service's endpoints.

The following screenshot shows the Add Service Reference dialog box in Visual Studio 2012.

FIGURE 5.2: THE ADD SERVICE REFERENCE DIALOG BOX IN VISUAL


STUDIO 2012
The Advanced button in the Add Service Reference dialog box opens a window in which you can
configure code generation settings. You can configure settings such as the accessibility level of proxy
classes (public or internal), whether to generate asynchronous client calls in addition to synchronous calls,
and which type to use when creating collection properties (for example, generating them as an array or as
a generic List<T>).
For more information about the Add Service Reference dialog box, see:

Add Service Reference dialog box


http://go.microsoft.com/fwlink/?LinkID=313733

Each of the generated proxies is named according to the name of the contract, without the I prefix and
with the suffix Client appended. For example, if the name of the service contract is
IHotelBookingService, the generated proxy class will be named HotelBookingServiceClient.
To use the generated proxy, create an instance of it in your code, and use it to call the service.

The following example demonstrates how to use a generated proxy to call a service.

Using a Generated Proxy to Call a Service


var proxy = new HotelBooking.HotelBookingServiceClient();

HotelBooking.Reservation reservation = new HotelBooking.Reservation()


{
FirstName = "James",
LastName = "Alvord",
HotelName = "Contoso Motel",
NumberOfNights = 3
};

try
{
proxy.BookHotel(reservation);
}
catch (FaultException<ReservationFault> reservationEx)
{
Console.WriteLine(reservationEx.Message + Environment.NewLine +
reservationEx.Detail.ErrorCode);
}
catch (FaultException faultEx)
{
Console.WriteLine(faultEx.Message);
}
catch (Exception ex)
{
Console.WriteLine("An unknown error has occurred: " + ex.Message);
}

The above example creates a proxy object from the generated HotelBookingServiceClient class, a
reservation object from the generated Reservation class, and then calls the BookHotel method of the
proxy. Calling this method will invoke the BookHotel operation in the service.

Note: One of the risks in working with proxies is that developers can forget that they are
using an object that crosses the boundary of the application, and possibly even the device. Be
aware that, though it might seem that you are working with a local object, there is an underlying
mechanism that adds overhead to each method call. Serialization, network communication, and
many other factors add a latency penalty to every call.

The above code example also has a try/catch block to handle possible exceptions and service faults. The
code handles the fault exception that returns a ReservationFault object, a more general fault exception
for any other unknown service faults, and general exception handling in case of a more catastrophic
exception, such as a communication exception. You can see the server-side implementation of the
ReservationFault class in Lesson 2, "Creating and Implementing a Contract," in the "Handling Exceptions"
topic.

If the service contract changes over time and you want your proxies to reflect the new state of the
contract and/or data contract, you do not have to delete the service reference and start all over again. The
Add Service Reference dialog box also supports an update option. In Solution Explorer, expand the
Service References folder under your project, right-click the service reference that you want to update,
and then click Update Service Reference.

If you want to create a cleaner configuration file, you can use the Svcutil.exe tool, which is similar to the
Add Service Reference dialog box. However, you can use the Svcutil.exe tool from a command prompt,
and it offers more options than the Add Service Reference dialog box, including metadata export and
service validation.
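For example, a Svcutil.exe invocation that generates a proxy file and a client configuration file might look like the following; the metadata address and output file names are assumptions based on the earlier examples.

```
svcutil.exe http://localhost:8733/HotelBooking?wsdl /out:HotelBookingProxy.cs /config:App.config /namespace:*,HotelBooking
```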

Creating a Service Proxy with ChannelFactory<T>


Another way to create a service proxy is to use the
System.ServiceModel.ChannelFactory<T>
generic class. There are several differences
between creating a proxy through the Add
Service Reference dialog box and through
ChannelFactory<T>. Here are some of the major
differences:

1. Adding a service reference creates the proxy at design time, while ChannelFactory<T> creates a
proxy class at run time (by using the .NET Remoting transparent proxy).

2. Adding a service reference creates a proxy class together with all the data contracts and required
configuration, while ChannelFactory<T> only generates a proxy. You, as the client developer, need to
provide the service contract interface and the data contract classes.

3. Adding a service reference requires the service metadata to generate a proxy, while
ChannelFactory<T> requires the service contract interface as a generic type parameter.

To use the ChannelFactory<T> generic class, your client needs to have access to the service contract
interface and the data contracts. You can achieve this through either a shared assembly or a shared, linked
C# file that contains the service interface.
There are two ways to use the ChannelFactory<T> generic class:

Use the static CreateChannel method without creating an instance of ChannelFactory<T>.


Create an instance of ChannelFactory<T> by using the service contract interface as a generic type
parameter, and then use the CreateChannel method of the instance.

The following code example demonstrates the different ways to use ChannelFactory<T>.

Creating a Service Proxy with ChannelFactory<T>


// Create a service proxy with the static CreateChannel method
IHotelBookingService proxyA = ChannelFactory<IHotelBookingService>.CreateChannel(
new BasicHttpBinding(),
new EndpointAddress(@"http://localhost:8733/booking"));

// Create a service proxy with an instance of ChannelFactory<T>


ChannelFactory<IHotelBookingService> factory = new ChannelFactory<IHotelBookingService>(
"HotelBooking_http");

IHotelBookingService proxyB = factory.CreateChannel();

(proxyA as ICommunicationObject).Open();
(proxyB as ICommunicationObject).Open();

var reservation = new Reservation()


{
FirstName = "James",
LastName = "Alvord",
HotelName = "Contoso Motel",
NumberOfNights = 3
};

proxyA.BookHotel(reservation);
proxyB.BookHotel(reservation);

(proxyA as ICommunicationObject).Close();
(proxyB as ICommunicationObject).Close();

Note: When you use the Add Service Reference dialog box, the generated proxy class
derives from the ClientBase<T> abstract class. Under the hood, this abstract class uses
ChannelFactory<T> to generate the inner proxy that is called by the generated proxy class.

The first CreateChannel method call receives two parameters: a binding object and the service endpoint
address. The third part of the service endpoint's ABC, the contract, is passed as the generic type
parameter of the ChannelFactory<T> class. If you choose the second technique, creating an instance of
ChannelFactory<T>, you can either call the constructor with a binding instance and a service address, as
you do with the static method, or pass a string representing the name of an already configured endpoint,
as shown in the example.
After creating the proxy objects, their channels are opened by calling the ICommunicationObject.Open
method. When you call the CreateChannel method, it dynamically creates a proxy object that
implements both the IHotelBookingService and the ICommunicationObject interfaces. You can use the
ICommunicationObject interface to open and close the communication channel manually, as well as to
register for channel-related events, such as Opening, Closing, and Faulted. If you do not open the
channel manually by calling the Open method, it will be opened when you send the first request to the
service.

Note: Opening the channel can take several seconds if there is a lengthy negotiation
process between the client and the service, for example when you call a secured service that
requires the client to authenticate. In such cases, opening the communication channel ahead of
time, when the application starts or the form is loaded, can save some time when calling the
service for the first time.

For the above example to work, you need to have a <client> endpoint configuration element in the
application configuration file of your client, and you need to set the name attribute of the element to the
name you used in the constructor.

The following configuration demonstrates how to create client endpoints in configuration.

Creating Client Endpoints in Configuration


<system.serviceModel>
<client>
<endpoint address="http://localhost:8733/booking"
binding="basicHttpBinding"
contract="HotelBooking.IHotelBookingService"
name="HotelBooking_http" />
<endpoint address="net.tcp://localhost:8734/booking"
binding="netTcpBinding"
contract="HotelBooking.IHotelBookingService"
name="HotelBooking_tcp">
</endpoint>
</client>
</system.serviceModel>

The advantage of using channel factories is in how they handle breaking changes in contracts. Breaking
changes are changes that force you to fix your proxy code, such as when another parameter is added to
an operation, or when the name of a data contract is changed. If you use channel factories, a breaking
change will stop your code from compiling, but if you use a generated proxy, you might end up having
exceptions at run time.

Demonstration: Adding a Service Reference


This demonstration shows how to add a service reference in Visual Studio 2012, and how to use the
generated proxy to call the service.

Demonstration Steps
1. Open D:\Allfiles\Mod05\DemoFiles\AddingServiceReference\begin\AddServiceReference.sln.
In the ServiceHost project, open the App.config file, and then view the service configuration,
including the endpoint configuration and service behaviors.

2. Run the ServiceHost project without debugging.


3. In the ServiceClient project, use the Add Service Reference dialog box to add a service reference to
the HotelBookingService service.

o Use the address http://localhost:8733/HotelBooking to locate the service metadata

o Use the HotelBooking namespace for the generated proxy.


4. Open the Program.cs file in the ServiceClient project, and then in the Main method, uncomment
the code. Observe the use of the generated proxy and data contracts.
5. Run the ServiceClient project and verify that the client is able to connect to the service.

Question: What are the advantages and disadvantages of using the Add Service Reference
dialog box of Visual Studio 2012?

Demonstration: Using Channel Factories


This demonstration shows how to use the ChannelFactory<T> generic class to create a proxy and call a
service.

Demonstration Steps
1. Open D:\Allfiles\Mod05\DemoFiles\UsingChannelFactory\begin\UsingChannelFactory.sln, and
then view the service configuration, including the endpoint configuration and service behaviors.
2. In the ServiceClient project, add a reference to the Common project and the System.ServiceModel
assembly.

3. In the ServiceClient project, open the Program.cs file, and then add the following code before the
commented code.

ChannelFactory<IHotelBookingService> serviceFactory =
new ChannelFactory<IHotelBookingService>
(new BasicHttpBinding(),
"http://localhost:8733/HotelBooking/HotelBookingHttp");
IHotelBookingService proxy = serviceFactory.CreateChannel();

4. In the Main method, uncomment the code, run the service and the client, and then test whether the
client can connect to the service.

The console application should display the message: Booking response: Approved, booking reference:
AR3254.

Question: What are the requirements for using the ChannelFactory<T> generic class?

Lab: Creating and Consuming the WCF Booking Service


Scenario
Until now, most of Blue Yonder Airlines' booking systems were internal systems, connected to the
company's booking database. Because there are plans to move the newly created ASP.NET Web API
service to a location outside the company's internal network (probably to the Windows Azure cloud),
there is a need to create a new service that can receive booking requests from both internal and external
applications. Because the requirements for the new service include features such as support for TCP and
MSMQ communication, it was decided that the new service will be a WCF service. In this lab, you will
create a WCF service for the booking subsystem. In addition, you will update the ASP.NET Web API
booking service to use the new WCF booking service.

Objectives
After completing this lab, you will be able to:

Create service and data contracts, and implement the service contract.

Configure a WCF service for TCP and host it in a console application.

Consume a WCF service from a client application.

Lab Setup
Estimated Time: 40 minutes
Virtual Machine: 20487B-SEA-DEV-A, 20487B-SEA-DEV-C

User name: Administrator, Admin

Password: Pa$$w0rd, Pa$$w0rd


For this lab, you will use the available virtual machine environment. Before you begin this lab, you must
complete the following steps:

1. On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.

2. In Hyper-V Manager, click MSL-TMG1, and in the Action pane, click Start.
3. If you executed a later lab before this one, follow these instructions:

In Hyper-V Manager, click the 20487B-SEA-DEV-A virtual machine.

In the Snapshots pane, right-click the StartingImage snapshot and then click Apply.
In the Apply Snapshot dialog box, click Apply.

4. In Hyper-V Manager, click 20487B-SEA-DEV-A, and in the Action pane, click Start.

5. In the Action pane, click Connect. Wait until the virtual machine starts.
6. Sign in using the following credentials:

User name: Administrator


Password: Pa$$w0rd

7. Return to Hyper-V Manager, click 20487B-SEA-DEV-C, and in the Action pane, click Start.

8. In the Action pane, click Connect. Wait until the virtual machine starts.

9. Sign in using the following credentials:

User name: Admin

Password: Pa$$w0rd

10. Verify that you received credentials to log in to the Azure portal from your training provider. These
credentials and the Azure account will be used throughout the labs of this course.

Exercise 1: Creating the WCF Booking Service


Scenario
The first step in creating a WCF service is to define the service contract and data contracts. Only
afterwards can you begin implementing the service contract. In this exercise, you will define a service
contract interface for the booking service along with the required data contracts, and then you will
implement the service contract.

The main tasks for this exercise are as follows:

1. Create a data contract for the booking request

2. Create a service contract for the booking service

3. Implement the service contract

Task 1: Create a data contract for the booking request


1. In the 20487B-SEA-DEV-A virtual machine, open the BlueYonder.Server.sln solution file from the
D:\AllFiles\Mod05\LabFiles\begin\BlueYonder.Server folder.
2. In the BlueYonder.BookingService.Contracts project, add the TripDto data contract class.

Set the access modifier of the class to public and decorate it with the [DataContract] attribute.

Add the following properties to the class.

Name Type

FlightScheduleId int

Status FlightStatus

Class SeatClass

Decorate each of the new properties with the [DataMember] attribute.

Note: In order to use data contract objects, the System.ServiceModel and
System.Runtime.Serialization assemblies need to be added to the project references. The
begin solution already contains those assemblies.

3. Add the ReservationDto data contract class.


Set the access modifier of the class to public and decorate it with the [DataContract] attribute.

Add the following properties to the class.

Name Type

TravelerId int

ReservationDate DateTime

DepartureFlight TripDto

ReturnFlight TripDto

Decorate each of the new properties with the [DataMember] attribute.

Note: Review the ReservationCreationFault class in the Faults folder. This class will be
used later as a fault data contract that describes a failed reservation.
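Following the property tables above, the finished data contracts could look like this sketch. It assumes the FlightStatus and SeatClass types already exist in the begin solution.

```csharp
using System;
using System.Runtime.Serialization;

[DataContract]
public class TripDto
{
    [DataMember]
    public int FlightScheduleId { get; set; }

    [DataMember]
    public FlightStatus Status { get; set; }

    [DataMember]
    public SeatClass Class { get; set; }
}

[DataContract]
public class ReservationDto
{
    [DataMember]
    public int TravelerId { get; set; }

    [DataMember]
    public DateTime ReservationDate { get; set; }

    [DataMember]
    public TripDto DepartureFlight { get; set; }

    [DataMember]
    public TripDto ReturnFlight { get; set; }
}
```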

Task 2: Create a service contract for the booking service


1. In the BlueYonder.BookingService.Contracts project, add a new interface named IBookingService.
Set the access modifier of the interface to public.

2. Decorate the interface with the [ServiceContract] attribute and set the Namespace parameter of the
attribute to http://blueyonder.server.interfaces/.
3. Add the CreateReservation method to the interface, and define it as an operation contract.
The method should receive a parameter named request of type ReservationDto, and return a string.

Decorate the method with the [OperationContract] attribute.


Decorate the method with the [FaultContract] attribute, and set the attribute's parameter to the
type of the ReservationCreationFault class.
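Following the steps above, the resulting service contract interface could look like this sketch:

```csharp
using System.ServiceModel;

[ServiceContract(Namespace = "http://blueyonder.server.interfaces/")]
public interface IBookingService
{
    [OperationContract]
    [FaultContract(typeof(ReservationCreationFault))]
    string CreateReservation(ReservationDto request);
}
```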

Task 3: Implement the service contract


1. In the BlueYonder.BookingService.Implementation project, implement the IBookingService in the
BookingService class.
Change the class declaration, so it will implement the IBookingService interface.

Decorate the class with the [ServiceBehavior] attribute and set the attribute's
InstanceContextMode parameter to InstanceContextMode.PerCall.
2. In the service implementation class, implement the service contract.

Create the CreateReservation method, but do not fill it with code yet.

Note: At this point, the class will not compile because no value is returned from the
method. Ignore this for now, as you will soon write the missing code.

3. Start implementing the CreateReservation method by verifying whether the request contains
information for the departure flight. If the departure flight information is missing, throw a fault
exception.
If the request.DepartureFlight property is null, throw a FaultException of type
ReservationCreationFault.

In the FaultException constructor, create a new instance of ReservationCreationFault with the


following property values.

Property Value

Description Reservation must include a departure flight

ReservationDate request.ReservationDate

In the FaultException constructor, set the second constructor parameter to the reason string
"Invalid flight info".

4. Continue implementing the CreateReservation method by creating a Reservation object.


Initialize the Reservation object according to the following table.

Property Value

TravelerId request.TravelerId

ReservationDate request.ReservationDate

DepartureFlight A new Trip object with the following values.

Property Value

Class request.DepartureFlight.Class

Status request.DepartureFlight.Status

FlightScheduleID request.DepartureFlight.FlightScheduleID

5. Continue implementing the CreateReservation method by checking whether the return flight is not
null. If the request.ReturnFlight is not null, add a trip to the reservation object you created.
Initialize the reservation.ReturnFlight property with a new Trip object, and set its properties
according to the following table.

Property Value

Class request.ReturnFlight.Class

Status request.ReturnFlight. Status

FlightScheduleID request.ReturnFlight.FlightScheduleID

6. Continue implementing the CreateReservation method by adding the new reservation object to the
database:

Create a new ReservationRepository object and initialize it with the ConnectionName field.
Use the ReservationUtils.GenerateConfirmationCode static method to generate a confirmation
code and assign it to the reservation.ConfirmationCode property before saving the new
reservation.

Call the Add and then the Save methods of the repository to save the newly created reservation.

Return the generated confirmation code to the client.



Note: To make sure the context and the database connection are disposed properly at the
end of the service operation, you should place the repository-related code in a using block.

7. Insert a breakpoint at the beginning of the CreateReservation method.
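Putting steps 2 through 6 together, the CreateReservation implementation could look like the following sketch. The repository and helper signatures follow the lab instructions and may differ slightly from the actual begin solution.

```csharp
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class BookingService : IBookingService
{
    public string CreateReservation(ReservationDto request)
    {
        // Step 3: reject requests without a departure flight.
        if (request.DepartureFlight == null)
        {
            throw new FaultException<ReservationCreationFault>(
                new ReservationCreationFault
                {
                    Description = "Reservation must include a departure flight",
                    ReservationDate = request.ReservationDate
                },
                "Invalid flight info");
        }

        // Step 4: create the reservation with its departure trip.
        var reservation = new Reservation
        {
            TravelerId = request.TravelerId,
            ReservationDate = request.ReservationDate,
            DepartureFlight = new Trip
            {
                Class = request.DepartureFlight.Class,
                Status = request.DepartureFlight.Status,
                FlightScheduleID = request.DepartureFlight.FlightScheduleId
            }
        };

        // Step 5: add the return trip only if one was requested.
        if (request.ReturnFlight != null)
        {
            reservation.ReturnFlight = new Trip
            {
                Class = request.ReturnFlight.Class,
                Status = request.ReturnFlight.Status,
                FlightScheduleID = request.ReturnFlight.FlightScheduleId
            };
        }

        // Step 6: the using block makes sure the context and database
        // connection are disposed at the end of the service operation.
        using (var repository = new ReservationRepository(ConnectionName))
        {
            reservation.ConfirmationCode =
                ReservationUtils.GenerateConfirmationCode();
            repository.Add(reservation);
            repository.Save();
            return reservation.ConfirmationCode;
        }
    }
}
```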

Results: You will be able to test your results only at the end of the second exercise.

Exercise 2: Configuring and Hosting the WCF Service


Scenario
The second step in creating the WCF service is to create a project for hosting the service. In this exercise,
you will create a service host, configure it with a TCP endpoint and use it to make the service available for
clients.
The main tasks for this exercise are as follows:

1. Configure the console project to host the WCF service with TCP endpoint

2. Create the service hosting code

Task 1: Configure the console project to host the WCF service with TCP endpoint
1. In the BlueYonder.BookingService.Host project, add a reference to the System.ServiceModel
assembly.

Note: The begin solution already contains all the project references that are needed for the
project. This includes the BlueYonder.BookingService.Contracts,
BlueYonder.BookingService.Implementation, BlueYonder.DataAccess, and
BlueYonder.Entities projects, as well as the Entity Framework 5.0 package assembly.

2. Review the FlightScheduleDatabaseInitializer.cs file in the BlueYonder.BookingService.Host
project. Observe how the Seed method initializes the database with predefined locations and flights.
3. In the BlueYonder.BookingService.Host project, open the App.config, and add a service
configuration section for the Booking WCF service.

Add the <system.serviceModel> element to the configuration, and in it add the <services>
element.

In the <services> element, add a <service> element, and then set its name attribute to
BlueYonder.BookingService.Implementation.BookingService.
4. Add an endpoint configuration to the service configuration you added in the previous step.

In the <service> element, add an <endpoint> element with the following attributes.

Attribute Value

name BookingTcp

address net.tcp://localhost:900/BlueYonder/Booking/

binding netTcpBinding

contract BlueYonder.BookingService.Contracts.IBookingService

5. In the App.config, add a connection string to the local SQL Express.

<connectionStrings>
<add name="BlueYonderServer" connectionString="Data
Source=.\SQLEXPRESS;Database=BlueYonder.Server.Lab5;Integrated Security=SSPI"
providerName="System.Data.SqlClient" />
</connectionStrings>

Note: You can copy the connection string from the ASP.NET Web API services
configuration file in
D:\Allfiles\Mod05\Labfiles\begin\BlueYonder.Server\BlueYonder.Companion.Host\Web.config.
Make sure you change the database parameter to BlueYonder.Server.Lab5.
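The configuration produced by steps 3 and 4 could look like the following sketch, with the values taken from the tables above:

```xml
<system.serviceModel>
  <services>
    <service name="BlueYonder.BookingService.Implementation.BookingService">
      <endpoint name="BookingTcp"
                address="net.tcp://localhost:900/BlueYonder/Booking/"
                binding="netTcpBinding"
                contract="BlueYonder.BookingService.Contracts.IBookingService" />
    </service>
  </services>
</system.serviceModel>
```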

Task 2: Create the service hosting code


1. In the BlueYonder.BookingService.Host project, open the Program.cs file, and then add two static
event handler methods to handle the ServiceOpening and ServiceOpened events of the service
host.
Each of the methods receives two parameters: sender, of type object, and args, of type EventArgs.

In each method, write a short status message to the console window.

2. In the Main method, add the following code to initialize the database.

var dbInitializer = new FlightScheduleDatabaseInitializer();
dbInitializer.InitializeDatabase(new
TravelCompanionContext(Implementation.BookingService.ConnectionName));

3. Remaining in the Main method, add code to host the BookingService service class.
Create a new instance of the ServiceHost class for the BookingService service class.

Register to the service host's Opening and Opened events with the ServiceOpening and
ServiceOpened methods, respectively.

Open the service host, wait for user input, and then close the service host.

Note: Refer to Lesson 3, "Configuring and Hosting WCF Services", Topic 1, "Hosting WCF
Services", for an example of how to open the service host, wait for user input, and then close the
service host.

4. Run the BlueYonder.BookingService.Host project in debug mode and verify it opens without
throwing exceptions. Keep the console window open, as you will need to use it later in the lab.
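The hosting code described in steps 1 through 3 could look like the following sketch; the console messages are assumptions, and everything else follows the lab instructions.

```csharp
using System;
using System.ServiceModel;

class Program
{
    static void ServiceOpening(object sender, EventArgs args)
    {
        Console.WriteLine("The booking service is opening...");
    }

    static void ServiceOpened(object sender, EventArgs args)
    {
        Console.WriteLine("The booking service is open.");
    }

    static void Main(string[] args)
    {
        // Step 2: initialize the database before opening the host.
        var dbInitializer = new FlightScheduleDatabaseInitializer();
        dbInitializer.InitializeDatabase(new
            TravelCompanionContext(Implementation.BookingService.ConnectionName));

        // Step 3: host the BookingService service class.
        var host = new ServiceHost(typeof(Implementation.BookingService));
        host.Opening += ServiceOpening;
        host.Opened += ServiceOpened;

        host.Open();
        Console.WriteLine("Press Enter to close the service host.");
        Console.ReadLine();
        host.Close();
    }
}
```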

Results: You will be able to start the console application and open the service host.

Exercise 3: Consuming the WCF Service from the ASP.NET Web API
Booking Service
Scenario
After you create the WCF service, you can consume it from the ASP.NET Web API web application. In this
exercise, you will configure the client endpoint in the ASP.NET Web API web application, and use the
ChannelFactory<T> generic class to create a client proxy. You will then use the new proxy to call the
WCF service, create the reservation on the backend system, and get the reservation confirmation code in
return.

The main tasks for this exercise are as follows:

1. Add a reference to the service contract project in the ASP.NET Web API projects

2. Add client configuration to Web.Config

3. Call the Booking service by using ChannelFactory<T>

4. Debug the WCF service with the client app

Task 1: Add a reference to the service contract project in the ASP.NET Web API
projects
1. Open the D:\AllFiles\Mod05\LabFiles\begin\BlueYonder.Server\BlueYonder.Companion.sln
solution file in a new Visual Studio 2012 instance, and add the
BlueYonder.BookingService.Contracts project from the
D:\Allfiles\Mod05\Labfiles\begin\BlueYonder.Server\BlueYonder.BookingService.Contracts
folder to the solution.

2. In the BlueYonder.Companion.Controllers project, add a reference to the
BlueYonder.BookingService.Contracts project.

3. In the BlueYonder.Companion.Host project, add a reference to the
BlueYonder.BookingService.Contracts project.

Task 2: Add client configuration to Web.Config


1. In the BlueYonder.Companion.Host project, open the Web.config file, and add a client endpoint
configuration to call the Booking WCF service.

Add the <system.serviceModel> element to the configuration, and in it add the <client> element.

In the <client> element, add an <endpoint> element with the following attributes:

address: net.tcp://localhost:900/BlueYonder/Booking
binding: netTcpBinding
contract: BlueYonder.BookingService.Contracts.IBookingService
name: BookingTcp

Note: Make sure you set the name attribute of the endpoint to BookingTcp, as you will
use this endpoint name in code to locate the endpoint configuration.
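The resulting configuration might look like the following sketch. This shows only the elements described in this task; the placement within your existing Web.config, and any other elements already present, will vary.

```xml
<system.serviceModel>
  <client>
    <!-- The BookingTcp name is used in code to locate this endpoint -->
    <endpoint name="BookingTcp"
              address="net.tcp://localhost:900/BlueYonder/Booking"
              binding="netTcpBinding"
              contract="BlueYonder.BookingService.Contracts.IBookingService" />
  </client>
</system.serviceModel>
```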

Task 3: Call the Booking service by using ChannelFactory<T>


1. In the BlueYonder.Companion.Controllers project, open the ReservationsController.cs file. In the
ReservationsController class, create a channel factory object for the IBookingService service
contract, and store it in a field named factory.

In the channel factory constructor, use the BookingTcp endpoint configuration name.
2. In the CreateReservationOnBackendSystem method, uncomment the code that creates the
TripDto and ReservationDto objects.

3. In the CreateReservationOnBackendSystem method, create a new channel by using the channel
factory you have created.

Before the try block, create the channel by calling the factory.CreateChannel method.
Store the newly created channel in a variable of type IBookingService.

4. In the try block, call the CreateReservation operation of the Booking service.

The CreateReservation operation returns the confirmation code string. Store the returned string in a
local variable.

After calling the service, close the channel by casting the channel object to the
ICommunicationObject interface, and then call its Close method.
Return the confirmation code string and remove the return statement that is currently at the end of
the method.

Note: Refer to Lesson 4, "Consuming WCF Services", Topic 2, "Creating a Service Proxy with
ChannelFactory<T>", for an example on how to close a channel.

5. Change the first catch block from

catch (HttpException fault)

to

catch(FaultException<ReservationCreationFault> fault)

Inside the catch block, throw an HttpResponseException with an HttpResponseMessage object.

Create the HttpResponseMessage by using the Request.CreateResponse method. Set the status
code to BadRequest (HTTP 400), and the content of the message to the description of the fault.

Abort the connection in case of Exception, by calling the Abort method on the proxy object.

6. In the second catch block, abort the connection before calling the throw statement.
Abort the connection by casting the channel object to the ICommunicationObject interface, and
then calling its Abort method.
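Taken together, steps 1 through 6 can be sketched as shown below. This is an illustrative outline, not the lab's reference solution: the reservationDto variable stands in for the objects you uncomment in step 2, and the fault's Description property name is an assumption about the ReservationCreationFault contract.

```csharp
// Channel factory field, constructed with the BookingTcp endpoint name (step 1)
private readonly ChannelFactory<IBookingService> factory =
    new ChannelFactory<IBookingService>("BookingTcp");

private string CreateReservationOnBackendSystem(Reservation newReservation)
{
    // reservationDto is built from the code you uncomment in step 2

    // Create the channel before the try block (step 3)
    IBookingService channel = factory.CreateChannel();
    try
    {
        // Call the service, then close the channel (step 4)
        string confirmationCode = channel.CreateReservation(reservationDto);
        ((ICommunicationObject)channel).Close();
        return confirmationCode;
    }
    catch (FaultException<ReservationCreationFault> fault)
    {
        // Abort the connection and return HTTP 400 with the fault description (step 5)
        ((ICommunicationObject)channel).Abort();
        throw new HttpResponseException(
            Request.CreateResponse(HttpStatusCode.BadRequest, fault.Detail.Description));
    }
    catch (Exception)
    {
        // Abort the connection before rethrowing (step 6)
        ((ICommunicationObject)channel).Abort();
        throw;
    }
}
```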

7. In the Post method, before adding the new reservation to the repository, call the Booking WCF
service and set the reservation's confirmation code.

Call the CreateReservationOnBackendSystem method with the newReservation object.

Store the returned confirmation code of the reservation in the newReservation.ConfirmationCode
property.

Note: The reservation should be saved to the database with the confirmation code you got
from the WCF service, so make sure you set the confirmation code property before adding the
reservation to the repository.

8. Build the solution.

Note: The BlueYonder.Companion.Host project is already configured for Web hosting
with IIS. Building the solution will make the Web application ready for use.

Task 4: Debug the WCF service with the client app


1. Place a breakpoint on the line of code that calls the CreateReservationOnBackendSystem method,
and start debugging the Web application.

2. In the 20487B-SEA-DEV-C virtual machine, open the BlueYonder.Companion.Client solution from
the D:\AllFiles\Mod05\LabFiles\begin folder, and run it without debugging.

3. Search for New and purchase a new trip from Seattle to New York.

4. Go back to the 20487B-SEA-DEV-A virtual machine, and debug the BlueYonder.Companion and
BlueYonder.Server solutions. Verify the ASP.NET Web API service is able to call the WCF service.
Continue running both solutions and verify the client is showing the new reservation.

Results: After you complete this exercise, you will be able to run the Blue Yonder Companion client
application and purchase a trip.

Question: Why should you use the NetTcpBinding in your endpoints instead of
BasicHttpBinding or WSHttpBinding?

Question: Why did you create the service contract and service implementation in separate
C# projects?

Module Review and Takeaways


At the beginning of this module you learned about SOAP-based services and their benefits compared to
other technologies. You also learned about the differences between ASP.NET Web API and WCF and the
factors that should be considered when choosing between the two.

You then learned the basics of WCF services: defining a service contract interface and data contracts,
implementing a service, and the proper way to handle exceptions. Next, you learned about hosting WCF
services; the responsibilities of the host, the options that are available in hosting, and the details of how to
self-host. You also learned how to configure service endpoints in code and in the configuration file of
your host. You then learned how to expose your service metadata, including information on the various
contracts and endpoints, by using service behaviors and MEX endpoints.

Finally, you learned how to create a proxy that can call a service by using the Add Service Reference
dialog box of Visual Studio 2012, or by using the generic ChannelFactory<T> class.

This module covered the fundamentals of creating WCF services. If you wish to go in depth into other
features of WCF, such as messaging patterns, WCF extensibility, and how to secure WCF services, refer to
Appendix 1, Designing and Extending WCF Services, and Appendix 2, Implementing Security in WCF
Services in Course 20487.

Common Issues and Troubleshooting Tips


Common Issue: When running the service host from Visual Studio, an exception of type
AddressAccessDeniedException is thrown, saying HTTP could not register the URL.

Common Issue: When running the service host from Visual Studio, an exception of type
AddressAlreadyInUseException is thrown, saying HTTP could not register the URL.

Review Question(s)
Question: When should you favor WCF over ASP.NET Web API?

Question: When should you use the Add Service Reference dialog box, and when should
you use the ChannelFactory<T> generic class?

Tools
WCF Test Client, Microsoft Service Configuration Editor, Svcutil.exe

Module 6
Hosting Services
Contents:
Module Overview 6-1

Lesson 1: Hosting Services On-Premises 6-3

Lesson 2: Hosting Services in Windows Azure 6-13


Lab: Hosting Services 6-22

Module Review and Takeaways 6-31

Module Overview
The most important aspect of implementing a service is hosting it so that clients can access it. For both
Windows Communication Foundation (WCF) and ASP.NET Web API, the host is responsible for allocating
all the resources required for the service. The host opens listening ports, creates an instance of a service
when a request arrives, and allocates memory and threads as required. If the host fails, the service fails.
There is a one-to-one dependency between the host and the service. The reliability and performance of
the host directly affects the quality of the service.

You can host WCF services in a self-hosted Windows application. WCF also supports hosting in IIS, and
you can self-host your ASP.NET Web API services as well. In this module, you will explore the various ways of
hosting your services on-premises, and the benefits each type of host provides, in relation to issues such
as reliability, performance, and durability.

Apart from deciding on the type of hosting service to use, Web-host or self-host, you also need to think
about the hosting environment for your services - whether it will be on-premises or in the cloud platform.
Considerations for deciding which environment to use include:
Specific hardware requirements. When hosting on-premises, you have more control over the
hardware of your server than in the cloud platform. In the latter case, you only know how many
Central Processing Units (CPUs) your virtual machines have, and how much memory and disk space
they provide.
Scaling requirements. Hosting services on-premises requires predicting usage and provisioning
servers accordingly. In addition to the costs involved with over-provisioning, on-premises hosting can
also be impacted by under-provisioning caused by rapid growth and an unpredictable increase in
demand. Hosting your services in the cloud environment makes your servers available by using the
elasticity of the cloud platform to scale out when more resources are required.

Legal requirements. In some countries, certain types of data, such as personal data, can only be
stored inside the boundaries of the country. For on-premises hosting, this is achieved easily, but when
you host your services and data in the cloud platform, your data might be copied between data
centers in different locations on the globe, for reasons such as availability and backup.

Your decisions related to hosting type and hosting environment, although seemingly independent of each
other, can affect each other. For example, if you choose to host your services in the Windows Azure cloud
environment, you need to choose between hosting your services in a web role, a worker role, or a Web

Site. If you choose to self-host your service, you only have the worker role option, but if you want to use
Web-hosting for your service, you need to choose between a web role and a Web Site. In this module,
you will learn more about hosting services in Windows Azure and supported hosting types.

Note: The Management Portal UI and Windows Azure dialog boxes in Visual Studio 2012
are updated frequently when new Windows Azure components and SDKs for .NET are released.
Therefore, it is possible that some differences will exist between screen shots and steps shown in
this module and the actual UI you encounter in the Management Portal and Visual Studio 2012.

Objectives
After you complete this module, you will be able to:

Host services on-premises by using Windows services and IIS.

Host services in the Windows Azure cloud environment by using Windows Azure Cloud Services and
Web Sites.

Lesson 1
Hosting Services On-Premises
When you want to host a web service on-premises, you can host it by using a Windows service or Internet
Information Services (IIS). A Windows service is a long-running application that runs in the background.
Windows services have no user interfaces, and do not produce any visual output. Services run in the
background while a user performs or executes any other task in the foreground, but they also run when a
user is not logged on. This makes Windows services a good candidate for classic server applications, such
as an email server or a File Transfer Protocol (FTP) server.

Running without a user interface poses a debugging and operations challenge, because the user is not
notified about warnings or errors. To overcome this, Windows services use the Windows Event Log and
other logging frameworks to record tracing information and notify the system administrator about error
conditions.
IIS is a Windows web server that hosts web applications.

This lesson explains how to host WCF services and ASP.NET Web API services in a Windows service, and
how to host WCF services in IIS.

Note: IIS can also host ASP.NET Web API services. This is covered in Module 3 "Creating
and Consuming ASP.NET Web API Services" in the Lesson "Hosting and Consuming ASP.NET Web
API Services" in Course 20487.

Lesson Objectives
After you complete this lesson, you will be able to:
Host WCF services in a Windows service.

Host ASP.NET Web API services in a Windows service.


Host WCF services in IIS.
Compare service hosting in Windows services and IIS.

Self-Hosting WCF Services in Windows Services


You can use self-hosted console applications to
host your WCF services. Hosting your service in a
self-hosted console application is useful for quick
proof-of-concept projects and testing, but in real-
world scenarios, you should host your WCF
services in a background process that cannot be
shut down by the user easily. Windows services
allow you to run your WCF service in a
background process.
Windows services are processes that the Windows
operating system deploys and manages. A
Windows service process runs in the background
without any UI, making it transparent to the user who may be unaware of the existence of the service
altogether. Windows services are a suitable approach for implementing long-running processes that do
not require user interaction, and therefore are very useful for hosting your WCF services.

The Windows operating system manages the loading and execution of Windows services. In addition, the
Microsoft Services Management Console is a UI tool that you can use for managing Windows Services and
their configuration settings. To open the Microsoft Services Management Console, open Control Panel
from the Start screen, open Administrative Tools, and then open Services.

The following image is a screenshot of Microsoft Services Management Console.


You can configure a Windows service to start automatically when the system finishes booting, and to
restart in case a failure occurs. Both are useful features for WCF service hosts. Another difference between
a Windows service and a foreground application started by the user is that a Windows service runs within
a security context that is different from the security context of the user. By default, a specialized local
identity, such as Network Service or Local System, is used to run Windows Services, but you can change it
to suit your needs.
For more information about Service User Accounts, see

Service User Accounts
http://go.microsoft.com/fwlink/?LinkID=298808&clcid=0x409

Hosting a WCF Service in a Windows Service


To host a WCF service in a Windows service, you need to perform the following steps:
1. Create a Windows Service project, and add the WCF hosting code to it.

2. Create an Installer class for the Windows service.

3. Install the Windows service.

Creating a Windows Service Project


To host a WCF service in a Windows service, you create a Windows Service project in Visual Studio. The
project created by Visual Studio 2012 contains a class that derives from the
System.ServiceProcess.ServiceBase class. The generated class is the entry point to your Windows
service. The generated class already contains the OnStart and OnStop methods. The OnStart method is
called when the Windows service is about to start. In that method, you use the ServiceHost class to host
your WCF service. The OnStop method is called when the Windows service is closing. In that method, you
close the ServiceHost object and release any unmanaged resources.

The following code example shows a Windows Service that hosts the BookingService WCF service.

It overrides the OnStart method to open the service host and the OnStop method to close it.

Hosting a WCF Service in a Windows Service


public partial class BookingWindowsService : ServiceBase
{
private ServiceHost _serviceHost;

public BookingWindowsService()
{
InitializeComponent();
}

protected override void OnStart(string[] args)


{
_serviceHost = new ServiceHost(typeof(BookingService));

try
{
// Open the host and start receiving incoming messages

_serviceHost.Open();
}
catch (Exception e)
{
// If an error occurred while trying to open the service, set the field to null
// to prevent it from being closed when the Windows Service stops.
_serviceHost = null;
Debug.WriteLine("Unable to open service.\n{0}", e.Message);
}
}

protected override void OnStop()


{
// If the service host was opened successfully, close it when the Windows Service is stopped
if (_serviceHost == null) return;

_serviceHost.Close();
_serviceHost = null;
}
}

Windows services also support the pause and resume actions. The WCF service host does not provide a
pause and resume feature. If you close the WCF service host, you cannot open it again, and you will need
to create a new ServiceHost object. Therefore, pausing and resuming services is not common when
hosting WCF services. However, you can create your own logic for pausing services, for example, by
checking a global Boolean flag before executing your service operation's code, and then overriding the
OnPause and OnContinue methods of the ServiceBase class.
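The pause technique described above can be sketched as follows, using a global Boolean flag that service operations check before executing. The class name matches the earlier example; the flag name is illustrative.

```csharp
public partial class BookingWindowsService : ServiceBase
{
    // Checked by service operations before executing their code;
    // volatile so that all threads see the latest value
    public static volatile bool IsPaused;

    protected override void OnPause()
    {
        IsPaused = true;
    }

    protected override void OnContinue()
    {
        IsPaused = false;
    }
}
```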

Note: Instead of creating a global Boolean flag, you can use the WCF Extensible objects
that were discussed in Lesson 3 "Extending the WCF Pipeline", in Appendix A, "Designing and
Extending WCF Services" in Course 20487.

For more information on hosting WCF services in a Windows Service, see

Hosting in a Windows Service Application


http://go.microsoft.com/fwlink/?LinkID=298809&clcid=0x409

Creating an Installer
After you create the Windows Service code, you need to create an installer class. The installer component
provides information about the Windows Service, such as its name, a short description, its start mode
(automatic/manual), and the Windows identity under which it runs.

To create an installer for your Windows Service, open the Windows Service class with the designer and in
the designer view, right-click, and click Add Installer. This will create a new class that derives from the
System.Configuration.Install.Installer class. In the new installer class, Visual Studio will create an
instance from each of the following classes:

System.ServiceProcess.ServiceProcessInstaller. Controls the account under which the Windows
Service will execute.

System.ServiceProcess.ServiceInstaller. Controls settings, such as the startup mode of the service,
its description, and the list of Windows services your service depends on, for example, a database
Windows service. Setting the dependency list ensures that the operating system will start the
dependent services before starting your service.
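The installer class that Add Installer generates can also be written by hand, as in the following sketch. All names (the service name, description, and dependency) are illustrative placeholders, not values prescribed by the lab.

```csharp
// The RunInstaller attribute allows InstallUtil.exe to discover this class
[RunInstaller(true)]
public class BookingServiceInstaller : Installer
{
    public BookingServiceInstaller()
    {
        var processInstaller = new ServiceProcessInstaller
        {
            // The identity under which the Windows service runs
            Account = ServiceAccount.NetworkService
        };

        var serviceInstaller = new ServiceInstaller
        {
            ServiceName = "BlueYonderBookingService",
            Description = "Hosts the Blue Yonder Booking WCF service",
            StartType = ServiceStartMode.Automatic,
            // Services that must be started before this one
            ServicesDependedOn = new[] { "MSSQLSERVER" }
        };

        Installers.Add(processInstaller);
        Installers.Add(serviceInstaller);
    }
}
```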

You can configure each of these settings by opening the service installer in the designer, clicking either
the service process installer or the service installer, and setting their properties in the Properties window.

The following image is a screenshot of the service installer in Visual Studio 2012.

FIGURE 6.1: THE SERVICE INSTALLER PROPERTIES

Installing the Windows Service


After you finish creating and compiling the Windows service project, you need to install the Windows
service in the operating system. Installing a Windows service requires some special actions, such as
updating the registry. To install a Windows service, you need to run a special command-line installer
utility called InstallUtil.exe. This utility is included with the Microsoft .NET Framework. To run this utility,
open the Visual Studio 2012 Command Prompt, and in the command window, type InstallUtil
YourAssemblyPath. Replace YourAssemblyPath.exe with the path to your compiled Windows service
assembly or executable file.
After you run the utility and verify that the installation succeeded, you can open the Microsoft Services
Management Console window from the Control Panel to verify that the service is configured properly. You
can then start the service manually if you have not set its start mode to automatic.

Note: To uninstall a Windows service, type the following command: InstallUtil
YourServiceExePath.exe /u.

You should test your code carefully if you plan to execute it within a Windows service. The fundamental
difference between development and production environments is that a production Windows service runs
within a security context that is different from the development security context of the user. Issues such as
missing or incorrect permissions are common when switching between the security context of the user,
which is often an administrative account, and the security context of the Windows Service, which might be
a more restricted account.

For more information on creating Windows Service applications with Visual Studio 2012, see

Introduction to Windows Service Applications


http://go.microsoft.com/fwlink/?LinkID=298810&clcid=0x409

Hosting WCF Services in IIS


By hosting WCF services in IIS, you can benefit
from reliability features offered by the application
pools and worker processes in IIS, such as process
recycling, health monitoring, message-based
activation, and idle shutdown.

You can use IIS to host multiple services and
isolate them from one another by using the IIS
application pool configuration and the worker
process mechanism. IIS provides better hosting by
managing the health of your service through the
application pool configuration. IIS performs
various actions on your service, such as the
following:
Shuts down your service if it is idle for a long time, to conserve resources.
Starts your service after shutdown, when a message arrives.

Recycles your service if it uses too much CPU or memory over time.

Protects your service with rapid fail protection if your service fails or is unresponsive for a long time.

Note: Module 3, "Creating and Consuming ASP.NET Web API Services", Lesson "Hosting
and Consuming ASP.NET Web API Services" in Course 20487 provides more explanation of IIS
core capabilities.

IIS running on Windows Server 2008 and later versions supports most of the built-in transport types of
WCF, including HTTP, HTTPS, TCP, Message Queuing and Named pipes. The new User Datagram Protocol
(UDP) transport released with WCF 4.5 is not yet supported by IIS. For earlier versions of IIS (versions 6.0
and earlier), IIS only supports bindings that use the HTTP and HTTPS transports.

By default, WCF services that are hosted in IIS only support HTTP and HTTPS transports. To support other
transports, such as TCP and Named Pipes, you need to configure your Web application.
For more information on Configuring the Windows Process Activation Service for Use with Windows
Communication Foundation, see:

Configuring the Windows Process Activation Service for Use with Windows Communication
Foundation
http://go.microsoft.com/fwlink/?LinkID=313734

IIS uses a hierarchical-style directory management, where each virtual directory maps to a folder in the file
system. This virtual directory contains static files such as images and webpages, in addition to web
applications such as ASP.NET Web API services and WCF services. Because IIS can host multiple web
applications on a single server, you can deploy several WCF services to IIS, each of them running
independently of the others.

Note: If different web applications share the same application pool, these applications also
share the same worker process. If one of the services causes its worker process to fail, for example
due to a critical exception, all the hosted applications in the worker process will also fail. To
prevent such a scenario, consider separating web applications to different application pools.

One of the differences between a self-hosted WCF service and an IIS-hosted WCF service is how you
construct the endpoint address in the service configuration, and how you configure it on the client-side.
On the client side, the address of a WCF service is constructed of two parts:

1. The virtual directory path in IIS. IIS has a set of hierarchical virtual directories. The path where you
place your web application will be the path used by clients to access your service.

2. The .svc file name of the service. When a client requests a resource from IIS, it uses a URL that
points to a file, such as a .html, a .aspx, or a .asmx file. IIS uses the file extension part of the URL to
determine which handler should handle the request. For IIS to identify a request as being sent to a
WCF service, you need to create a file with the .svc extension. The .svc file has specific content, which
identifies the service type that needs to be hosted and called.

For example, if the virtual directory path to the web application is /MyApps/Booking, and the .svc file
name of the service is bookingService.svc, the URL that the client uses will be
http://serverName/MyApps/Booking/bookingService.svc. The approach is similar to the one used
when your service uses other transport types, such as TCP or Named pipes.

For more information about IIS architecture, see


Introduction to IIS Architecture
http://go.microsoft.com/fwlink/?LinkID=298811&clcid=0x409

When IIS hosts WCF services in a web application, it does not automatically load a service host for every
service. The first time IIS receives a request for a specific service, the service host is created and opened.
This is also referred to as message-based service activation. The .svc file contains the information required
by IIS to start the service host.
On the service side, IIS automatically sets the base address of your service to the virtual directory path, to
avoid using absolute addresses in your configuration. In addition, IIS does not require you to specify the
.svc file name in the address settings of the service, because IIS identifies the service it needs to call from
the content of the .svc file. Therefore, it only uses the configuration to identify which of the endpoints of
the service is being accessed. Because of this behavior of IIS, the address you set in your service endpoints
is considered as a suffix to the URL that points to the .svc file.
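A relative endpoint address in the web.config file might therefore look like the following sketch. The service and contract names match the earlier examples; the "secure" relative address is an illustrative assumption. With the web application at /MyApps/Booking and the file bookingService.svc, clients would reach this endpoint at http://serverName/MyApps/Booking/bookingService.svc/secure.

```xml
<services>
  <service name="BlueYonder.BookingService">
    <!-- The address is a suffix appended to the URL of the .svc file -->
    <endpoint address="secure"
              binding="basicHttpBinding"
              contract="BlueYonder.IBookingService" />
  </service>
</services>
```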

The following example demonstrates the content of a WCF service description file.

WCF Service Description File


<%@ ServiceHost
Service="BlueYonder.BookingService" %>

The ServiceHost directive contains all the information IIS requires to create a service host for your service,
and includes the following attributes:
Service. This attribute points to the Common Language Runtime (CLR) type of the service to host.

Factory. Because IIS instantiates the service host on your behalf, this attribute provides a way to inject
a service host instance to use instead of the default service host created by IIS. To do this, create a
new class that derives from the ServiceHostFactory class, and specify its type name in the Factory
attribute.

For more information about creating service host factories, see

Extending Hosting Using ServiceHostFactory


http://go.microsoft.com/fwlink/?LinkID=298812&clcid=0x409
Note: A web application can hold many services, with each service having a matching .svc
file.

The web.config file contains the same service configuration as an app.config file. The difference is that you
do not have to use a base address; IIS always uses the URL of the .svc file as the base address for the
service.

Note: Because IIS does not require you to define base address for your service, and
because WCF supports automatic endpoint configuration, you might find that some web
applications do not have the <services> section in their WCF configuration.

Instead of creating a .svc file for each service, you can supply the same configuration in the web.config file
by adding the <serviceHostingEnvironment> section to the <system.serviceModel> section group.

The following configuration demonstrates how to describe a service without using a .svc file.

Configuring Service Activations


<serviceHostingEnvironment>
<serviceActivations>
<add relativeAddress="Booking.svc" service="BlueYonder.BookingService"/>
</serviceActivations>
</serviceHostingEnvironment>

If you choose to use the web.config file to configure the service activation, you do not need to create
matching .svc files for your services.
Like the .svc file, you can configure the <add> element of the <serviceActivations> element to use a
service host factory, by adding the factory attribute to the element and setting the attribute to the type
name of the service host factory derived class.

For more information about hosting a WCF service under IIS, see

How to: Host a WCF Service in IIS


http://go.microsoft.com/fwlink/?LinkID=298813&clcid=0x409

Question: Why is using IIS preferable to using a Windows Service?

Self-Hosting ASP.NET Web API Services


IIS provides an environment with several features
for hosting ASP.NET Web API services. However,
sometimes IIS might not be the most suitable
hosting solution. For example, you create an
instant messaging desktop application that
communicates with other computers running the
same application by using HTTP. You will want to
host the ASP.NET Web API service in your desktop
application, instead of hosting the service in IIS
and making IIS communicate with your
application in some way. Similar to WCF, ASP.NET
Web API supports self-hosting. ASP.NET Web API
self-hosting is included in a NuGet package named Microsoft ASP.NET Web API Self Host and uses the
System.Web.Http.SelfHost.HttpSelfHostServer class to host services.

The following code example uses the HttpSelfHostServer class to host an ASP.NET Web API service
inside a console application.

Self-Hosting an ASP.NET Web API Service


static void Main(string[] args)
{
var config = new HttpSelfHostConfiguration("http://localhost:2300");

WebApiConfig.Configure(config);

using (HttpSelfHostServer server = new HttpSelfHostServer(config))


{
server.OpenAsync().Wait();
Console.WriteLine("Press Enter to quit.");
Console.ReadLine();
}
}

Before you start the self-hosted server, you need to configure the ASP.NET Web API environment. To do
this, you need to create an instance of the HttpSelfHostConfiguration class, and apply the required
configuration, such as adding routing rules, and media type formatters.
After you create the configuration, you create an instance of the HttpSelfHostServer class, with the
configuration as a constructor parameter, and then call the OpenAsync method of the class to load the
service.
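A configuration helper such as the WebApiConfig.Configure method used in the previous example might look like the following sketch. The class and method names follow that example; any method that accepts the HttpSelfHostConfiguration instance and applies routing rules would work.

```csharp
public static class WebApiConfig
{
    public static void Configure(HttpSelfHostConfiguration config)
    {
        // Add the default routing rule so requests reach the controllers
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });
    }
}
```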

Note: The underlying implementation of the HttpSelfHostServer uses WCF for its
communication channel. However, the message does not flow through the entire WCF pipeline,
but is rather decoded in the channel stack, and then sent to the ASP.NET Web API routing
mechanism.

One of the scenarios for using self-hosting is running integration tests. To test an ASP.NET Web API
service, you start by testing the controller and its actions. If your message-handling pipeline includes
additional components that need to run as part of the test, such as message handlers and action filters,
you cannot test the controller end-to-end by simply instantiating it and calling one of its action methods.
Instead, you need to send an HTTP request to the service, so that the message flows through the pipeline
and each part of the pipeline is tested. Because test cases should be independent of each other, this
requires running multiple service hosts, which you can accomplish by using the self-hosting feature. In
addition, to reduce the latency caused by sending HTTP messages over the network, your test client can
send messages to the self-hosted service in-memory. This makes it possible to test code that interacts
with the request and response pipeline without actually sending an HTTP message over the network. For
such tests, create an instance of the HttpSelfHostServer class and pass it to an HttpClient instance,
which will then deliver messages in-memory without creating a network connection.

The following code example creates a self-host and passes it as a constructor parameter to an HttpClient
object for testing.

In-Memory Testing with HttpSelfHostServer


using (HttpSelfHostServer server = new HttpSelfHostServer(config))
{
    // If you only use the self-hosted server for in-process requests,
    // you do not need to call the OpenAsync method
    // server.OpenAsync().Wait();
    var client = new HttpClient(server);
    var result = client.GetAsync("http://localhost:2300/").Result;
}

Demonstration: How to Host WCF Services in IIS


This demonstration shows how to host a WCF service in IIS. In this demonstration, you create a WCF
service application project, use Visual Studio to build the service and deploy it in IIS, and finally, connect
to the hosted WCF service and retrieve its Web Service Description Language (WSDL).

Demonstration Steps
1. Open Visual Studio 2012 and create a new WCF Service Application project named MyIISService.

2. Review the content of the web.config file. Observe that there is no <services> section in the
<system.serviceModel>. This is because IIS automatically defines the base address according to the
virtual directory where the services are hosted, and the default endpoint configuration of WCF
therefore does not require any specific configuration for each endpoint.

3. Review the content of the Service1.svc file by right-clicking it in Solution Explorer and then clicking
View Markup. When IIS receives a request addressed to the .svc file, it uses the value of the Service
attribute from the file to know which service it needs to call.

4. In Solution Explorer, select the MyIISService project, and then press F5 to debug the service. After
the browser opens, append the path Service1.svc?wsdl to the address and observe the WSDL file.
The WSDL is used for creating the client-side proxy code.

5. Publish the service to the file system, under C:\inetpub\wwwroot\MyService.


6. Open IIS Manager and convert your WCF service to an application, by right-clicking it, and then
clicking Convert to Application.

7. Browse to the IIS-hosted service and view its WSDL. Use the address
http://localhost/MyService/Service1.svc?wsdl.
Question: What is the difference in the way you create and configure your WCF service
when hosting in IIS?

Compare IIS and Self-hosting Features


To decide whether to host your WCF application
in a Windows service or in IIS, you should be
aware of the advantages and disadvantages of
each method.

                    Windows Service                       IIS

Lifetime            Service process lifetime is           IIS shuts down idle services to improve
                    controlled by the operating           resource management. The service is
                    system, and is not                    reactivated when a message is received.
                    message-activated.

Health              No health management.                 Services are monitored and recycled
management                                                when an error occurs.

Endpoint            Configured in the                     Bound to the IIS virtual directory path,
address             app.config file.                      which contains the .svc file.

Deployment          Requires installation by              Can be published from Visual Studio into
                    using installutil.exe.                IIS or into a package for future
                                                          deployment.

For more information about hosting options, see

Hosting Services
http://go.microsoft.com/fwlink/?LinkID=298814&clcid=0x409

Lesson 2
Hosting Services in Windows Azure
You can create highly scalable services that are easy to deploy and manage by using Windows Azure. When
hosting a service in Windows Azure, you provide the service application and its configuration. Windows Azure
handles hosting, lifecycle management, and the availability of your service.

Windows Azure provides multiple kinds of service hosting:

Worker role. Hosts a background process for long-running tasks, similar to a Windows service.

Web role. Hosts services that are used as a front-end to the Internet.

Web Site. Similar to web role with faster deployment but fewer features.

Virtual machines. Create virtual machines running Windows or Linux from existing templates or by
using an existing virtual hard disk (VHD) file. Virtual machines are not covered in this course.

This module explains the different options for hosting services in Windows Azure, the differences between
them, and how to use them to host your services.

Lesson Objectives
After you complete this lesson, you will be able to:
Run Windows Azure roles in a local compute emulator.

Describe the features available in Windows Azure Cloud Services.

Host a service in a Windows Azure web role.


Host a service in a Windows Azure worker role.

Host a service in a Windows Azure Web Site.

Explain the differences between Windows Azure Web Sites and Windows Azure web roles.

Cloud Projects and the Windows Azure Emulator


To develop applications for Windows Azure, you
have to download and install the Windows Azure
Software Development Kit (SDK), which is not part
of the core .NET Framework. To download the
Windows Azure SDK, see

The Windows Azure SDK


http://go.microsoft.com/fwlink/?LinkID=298815&clcid=0x409

After you install the SDK, Visual Studio 2012 offers


new project templates for cloud projects. When
you select a cloud project, Visual Studio 2012 opens a dialog box where you can select the roles for your
project.

The following image is a snapshot of the Windows Azure Cloud Service dialog box that Visual Studio 2012
displays when you create a cloud project.

FIGURE 6.2: WINDOWS AZURE CLOUD SERVICE DIALOG BOX


The SDK also installs the Windows Azure Compute Emulator, with which you can run, debug, and test
cloud projects on your computer without even connecting to the Internet. When you run your project
locally, you can monitor your roles by using the Windows Azure Compute Emulator UI. To open the UI,
right-click the Windows Azure tray icon, and then click Show Compute Emulator UI.
The UI displays all running role instances. Selecting a role instance displays an output window with the
logging information about the role. From the UI you can start, stop, and restart the deployment of your
role.
The following image is a snapshot of the Windows Azure Compute Emulator UI that hosts a worker role
and web role. Each has a single instance.

FIGURE 6.3: WINDOWS AZURE COMPUTE EMULATOR


There are differences between services that run in the emulator and services that run in the cloud
environment, such as:

The emulator limits the number of roles and the number of compute instances you can emulate. The
maximum number of roles per deployment is 25 and the maximum core count is 20.

Services that run in the emulator have default administrator privileges, whereas services in Windows
Azure do not have default administrator privileges.

The emulator does not use Windows Azure load balancing capability.
By default, the emulator uses IIS Express. You can change this setting in the Visual Studio 2012
project's properties, on the Web tab. Windows Azure uses IIS.

All roles in the emulator run on the same local computer, whereas in Windows Azure roles run on
separate virtual machines.

Web applications hosted with the Windows Azure Emulator can only be accessed from the local
computer running the emulator. Web applications deployed in Windows Azure can be accessed from
any computer on the Internet.

Windows Azure Cloud Services


When you deploy an application to Windows
Azure Cloud Services, you deploy one or more
roles. A role can either be a web role, which is a
Web application running under IIS, or a worker
role, which is a background process. You can
configure how many compute instances
(computers) each role will use. The minimum
number of instances is one per role, but to
achieve better reliability, it is advisable to deploy
at least two instances. All the compute
instances running the same role run the
same application, only on different machines.

Note: In addition to the application you develop for the role, you can include built-in
modules that will deploy to each role instance. Modules can perform instance configuration, such
as setting remote desktop, or install background services that run in each instance, such as
diagnostics collection. You can select the modules to include in the role by using the role
properties window in Visual Studio 2012, or by manually editing the service definition file of the
cloud project. The Diagnostics module is discussed in depth in Module 10, "Monitoring and
Diagnostics" in Course 20487.

When deploying a cloud project to Windows Azure, all the deployed roles are packaged to a compressed
.cspkg file and the file is uploaded to Windows Azure Storage. After the package uploads, Windows Azure
locates a physical machine that can host roles. It then mounts a virtual machine, running Windows Server
2012 by default, and deploys each role to its assigned virtual machine by downloading the package from
the storage and deploying the required files to each instance. This procedure takes several minutes.

There are several benefits to using Cloud Services, which include:

Easy to create and deploy. Cloud Services provide a Platform as a Service (PaaS) environment, where
you supply the application package, and the Windows Azure platform supplies the compute
instances.

Automatic Operating System updates. When you use Cloud Services, the platform is responsible
for keeping your instances up-to-date with the latest Windows updates.

Hardware monitoring. Cloud Services can detect faulty hardware and move your deployment to a
virtual machine on a different physical server.

All the compute instances of the role are accessible from the Internet through a single IP address. This is
achieved by using a load balancer. The load balancer is an important component of Windows Azure,
because it maintains the health of the role and the application deployed to it. Windows Azure uses a
software load balancer to distribute network traffic among role instances. Requests sent to a public
endpoint are distributed among live instances of the role. The load balancer probes each role for its
health at specified intervals and recycles unresponsive roles.

Note: Storing the deployment package in Windows Azure Storage is crucial for the role's
continuity. If any of the role instances breaks down and stops working, another instance is
automatically allocated to replace the failed one and is assigned to the role. After the new virtual
machine is turned on, the deployment package is downloaded from storage and deployed to the
new instance. Windows Azure Storage is explained in depth in Module 9, "Windows Azure
Storage" in Course 20487.

Windows Azure offers two deployment environments for roles:

The staging environment is used for testing your deployment.


The production environment is accessed by your clients over the Internet.

Note: The staging and production deployment environments have the same hardware
capabilities.

These two environments have the same functionality and support all role features. The difference between
them is in the virtual IP addresses by which the cloud service is accessed. The staging environment URL
contains a globally unique identifier (GUID), such as
http://3D535850-3C00-4C8C-B89A-795B9EF93038.cloudapp.net. In the production environment, the URL
is based on a friendlier Domain
Name System (DNS) prefix assigned to the cloud service, such as: http://calculatorservice.cloudapp.net.

Windows Azure Web Role


A web role is an IIS-based hosting environment.
Its primary use is to make it easier to create web
applications. In a web role, you can host any type
of application that is supported in IIS, such as
ASP.NET applications, WCF services, and Node.js
applications. There is no restriction on having web
roles of different kinds. For example, you can have
one web role that hosts a WCF service and
another web role that hosts an ASP.NET
application in the same deployment.

Web roles interact with Windows Azure


components. For example, a Web application can
connect to a Windows Azure Storage service, which provides storage for binary and text data, messages,
and structured data. For more information about Windows Azure Storage services, see

Blobs, Queues, and Tables



http://go.microsoft.com/fwlink/?LinkID=298816&clcid=0x409
Note: Windows Azure Storage will be discussed in detail in Module 9, "Windows Azure
Storage" in Course 20487.

By default, every Web application you add to your cloud project creates a new Web role, and each such
Web role is deployed to a different set of compute instances. Windows Azure supports deploying several
Web applications to the same Web role by using IIS Web sites. However, this feature is not accessible
through the Visual Studio 2012 user interface; instead, you need to manually edit the cloud project's
service definition file, ServiceDefinition.csdef. Due to the complexity of this process, it is not covered in
this course.

Windows Azure Worker Role


A worker role is designed to run long-running
tasks. Such tasks can vary from background
processing of stored data to self-hosting services.
Worker roles do not have IIS enabled by default,
but can be accessed from the Internet.

You can use worker roles for any type of


background processing, including:
Hosting public services. Hosts a WCF or a
self-hosted ASP.NET Web API service, and
opens the worker role for incoming
communication from the Internet or from
other web and worker roles.

Back-end processing. Runs a background process that receives processing instructions from a front-
end application, such as a Web application hosted in a Web role. Background processes hosted in
worker roles can receive messages by listening to queues, such as Windows Azure Queues or
Windows Azure Service Bus.
Windows Azure Queue storage. Used for reliable, persistent messaging between roles. Queues store
messages that may be read by any role that has access to the Windows Azure Storage account.
Windows Azure Queue storage is discussed in Module 9, "Windows Azure Storage" in Course 20487.

Windows Azure Service Bus. Offers simple and guaranteed message delivery. Service Bus Topics
deliver messages to multiple subscriptions and easily fan out message delivery at scale to downstream
systems. Windows Azure Service Bus is discussed in Module 7, "Windows Azure Service Bus" in Course
20487.

A typical operation scenario involves uploading a file to a web site for processing. The file is loaded by the
Web role, which saves it to a Windows Azure Storage Blob and sends a message to the worker role to
process it. The worker role retrieves the file from the Windows Azure Storage Blob and processes it.
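The back-end processing pattern described above can be sketched as a worker role entry point. The following is a minimal sketch that assumes the RoleEntryPoint base class from the Windows Azure SDK; the ProcessNextFileFromQueue method is a hypothetical placeholder for the actual queue-polling and file-processing logic:

```csharp
using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // A worker role typically runs an infinite loop that polls for work,
        // for example by reading messages from a Windows Azure Queue
        while (true)
        {
            ProcessNextFileFromQueue(); // hypothetical processing method
            Thread.Sleep(TimeSpan.FromSeconds(10)); // avoid busy-waiting
        }
    }

    private void ProcessNextFileFromQueue()
    {
        // Retrieve a message from the queue, download the blob it references,
        // and process the file (implementation omitted)
    }
}
```

If the Run method returns, Windows Azure treats the role instance as stopped and restarts it, which is why the method usually loops indefinitely.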

Note: In the previous lesson, you compared hosting services in IIS and in self-hosted
environments. The choice you make on how to host your services will affect the type of role you
choose: use web roles to host your services in IIS, or worker roles if your services should be self-
hosted.

For more information about web roles, worker roles, and Cloud Services in general, see

Windows Azure Execution Models Cloud Services


http://go.microsoft.com/fwlink/?LinkID=313735

Question: When should you prefer using a worker role instead of a Web Role?

Windows Azure Web Sites


Windows Azure Web Sites is an environment for
hosting web sites on Windows Azure in a very
simple and intuitive way. To deploy to a Web Site,
you just need to copy your files to an already
running virtual machine that has IIS, so it is very
easy and fast, and requires only a few clicks to get
running. Unlike web roles, Web Sites do not have
multiple deployment environments such as
staging and production. Additionally, Web Sites
support application deployment directly from a
source code repository by using Team Foundation
Service (TFS), Git, or Mercurial.
Web Sites are the optimal environment when your application consists of client-side markup and does not
require multitier capabilities, such as asynchronous background processing. A Windows Azure Web Site is
both the web front-end tier and the back-end tier that accesses the database. Web Sites do not support
advanced capabilities, such as remote desktop access or running with elevated privileges.

Note: Windows Azure Web Sites can access the Internet, so you can hand off any
asynchronous background processing to a Windows Azure worker role, either by communicating
with the worker role directly, or by placing the required information in a queue.
Windows Azure Web Sites can also access other Windows Azure components, such as SQL
Database and Windows Azure Storage.

Web Sites have different hosting modes that affect the cost of hosting your web site and resource
allocation:

Free Mode. Web Sites run alongside web sites of other users and share web server resources. The free
mode has hard quotas on CPU utilization and memory usage, and is limited to 165 megabytes (MB) of
outgoing data transfer per day.
Shared Mode. A Web Site running in shared mode is deployed in a shared/multitenant hosting
environment. The shared instance model improves the free mode and provides support for custom
domain names, 1 gigabyte (GB) of storage, and unlimited outgoing data transfer that is charged
according to your Windows Azure subscription.

Reserved Mode. When running in reserved instance mode, your Web Sites are guaranteed to run in
isolation on dedicated virtual machines, meaning no other customers share your machines.

For more information about Web Site instancing modes, see

Windows Azure Web Sites Instance Modes


http://go.microsoft.com/fwlink/?LinkID=298817&clcid=0x409

Note: The quota restrictions specified for the Free, Shared, and Reserved modes are the
known restrictions and might be subject to change. Before choosing the mode for your Web
application, consult the Windows Azure pricing page.

Demonstration: Hosting in Windows Azure


This demonstration shows you how to host a Web application in a Windows Azure Web Site and a
Windows Azure cloud service. In it, you create an ASP.NET MVC 4 application, create a Windows Azure Web
Site by using the Windows Azure Management Portal, and finally deploy the application to the Web Site
and then to a Windows Azure web role.

Note: ASP.NET MVC 4 is covered in Module 3, "Creating and Consuming ASP.NET Web API
Services" in Course 20487.

Demonstration Steps
1. Open Visual Studio 2012 and create a new ASP.NET MVC 4 Web Application project named
MyWebSite by using the Web API template.

2. Create a Web Site named wawsdemoYourInitials (YourInitials contains your initials).

3. In Visual Studio, open Server Explorer, right-click Windows Azure Web Sites, and then click Import
Subscriptions. Download the subscription file and import it into Visual Studio. You only need to
import your Windows Azure credentials once. After you do that, you can see the list of your Web Sites
and cloud service when you deploy your applications.
4. Right-click the project in Solution Explorer, and then click Publish. Import the deployment settings
for the newly created Windows Azure Web Site, and complete the publish process. Note that the
deployment process is quick, because the deployment process only copies the content of the
application to an existing virtual machine, and does not need to wait for a new virtual machine to be
created.
5. Return to Visual Studio 2012 and add a Windows Azure Cloud Service project for the Web application
by right-clicking the project, and then clicking Add Windows Azure Cloud Service Project.

6. To deploy the Web application to a Web role, you need to create or select a cloud service, and
choose whether to deploy the application to production or staging. Create a new cloud service for
your role and name it WebRoleDemoYourInitials (YourInitials contains your initials). Set the location
of the role to the region closest to you. Leave the other settings with their current values.

7. To deploy your Web application to a Web role, you also need to create a Windows Azure Storage
Account where the deployment package will be stored. Click the Advanced Settings tab, and then
create a new storage account named webroledemostorageyourinitials (yourinitials contains your
initials) in the same location you chose for your cloud service. Make sure all the letters in the storage
account name are in lowercase.

Note: If you get a message saying the service creation failed because you reached your
storage account limit, delete one of your existing storage accounts and retry the step. If you do
not know how to delete a storage account, consult the instructor.

8. Publish the application and wait for the Windows Azure Activity Log window to appear. Visual Studio
will start the deployment process, which includes packaging the application, uploading the package

to storage, creating the VM, and then deploying the package to the VM. This process will take several
minutes to complete.

Note: If time permits, wait for the deployment to complete, and then click the Website
URL link to open the Web application in a browser. If you receive an error message saying the
web browser cannot be started, open a browser manually and type the address.

Question: What is the difference in the deployment process between deploying to a Web
Site and to a web role?

Comparing Windows Azure Web Roles and Windows Azure Web Sites
To decide whether to use a Windows Azure web
role or a Windows Azure Web Site, you first have
to know the differences between the features
offered by these two environments, and then
compare your application requirements against
those features. The following table describes the
features that are unique to each environment and
are not supported by the other.

                  Web Role                               Web Site

Hosting method    Virtual machine for each instance.     Dependent on the instance mode: Free and
                                                         Shared modes share a virtual machine with
                                                         other web sites; Reserved mode runs on
                                                         dedicated virtual machines that are shared
                                                         only with other web sites in your account.

Scaling           Long (minutes), requires creation      Short (seconds), uses already running
                  of a virtual machine.                  virtual machines.

Deployment        Staging and production                 Single environment.
environments      environments.

Provisioning      Supports continuous deployment by      Supports continuous deployment by using
                  using TFS. Also supports quick swap    Git, Mercurial, or TFS. Also supports
                  between staging and production.        deployment rollback.

Configuration     Every change to the Web.config file    Supports editing parts of the Web.config
                  requires redeploying the application.  file without redeploying the entire
                  To change configuration without        application.
                  redeploying, move the configuration
                  settings to the service configuration
                  file.

Remote Desktop    Supported.                             Not supported.
access

Windows Azure     Supports Windows Azure Virtual         You cannot join a Virtual Network.
Virtual Network   Network to connect with other web
                  roles and VMs.

For a more detailed comparison, see

Windows Azure Web Sites, Web Roles, and Virtual Machines: when to use which?

http://go.microsoft.com/fwlink/?LinkID=298818&clcid=0x409

Lab: Hosting Services


Scenario
Blue Yonder Airlines' research estimates that in the next six months, more than 100,000 people will start
to use its new Travel Companion app. Because the company does not yet have a web data center, it
decided not to buy additional servers to support the new user load. Instead, it is moving its external
service layer to Windows Azure. By using Windows Azure, the company can scale out as required as more
people start to use the new app. In this lab, you move the WCF booking service from its simple console
host to IIS, and move the ASP.NET Web API services from their on-premises IIS hosting environment to a
web role in Windows Azure.

In addition, Blue Yonder Airlines wants to separate the deployment of the flights management web
application from the Travel Companion back-end service. Because the web application is a small
application and does not require many resources, it was decided that the application should be deployed
to a Windows Azure Web Site. In this lab, you deploy the booking management web application to a
Windows Azure Web Site.

Objectives
After you complete this lab, you will be able to:

Host the WCF booking service in IIS.


Host the destinations, flight schedules, and booking ASP.NET Web API services in a Windows Azure
web role.

Host the booking management web application in a Windows Azure Web Site.

Lab Setup
Estimated Time: 45 minutes

Virtual Machine: 20487B-SEA-DEV-A and 20487B-SEA-DEV-C


User name: Administrator and Admin

Password: Pa$$w0rd and Pa$$w0rd

For this lab, you will use the available virtual machine environment. Before you begin this lab, you must
complete the following steps:
1. On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.

2. In Hyper-V Manager, click MSL-TMG1, and in the Action pane, click Start.

3. In Hyper-V Manager, click 20487B-SEA-DEV-A, and in the Action pane, click Start.
4. In the Action pane, click Connect. Wait until the virtual machine starts.

5. Sign in using the following credentials:

User name: Administrator

Password: Pa$$w0rd

6. Verify that you received credentials to log into the Azure portal from your training provider; these
credentials and the Azure account will be used throughout the labs of this course.
In this lab, you will install NuGet packages. It is possible that some NuGet packages will have newer
versions than those used when developing this course. If your code does not compile, and you identify
the cause to be a breaking change in a NuGet package, you should uninstall the NuGet package and
instead install the older version by using Visual Studio's Package Manager Console window:

1. In Visual Studio, on the Tools menu, point to Library Package Manager, and then click Package
Manager Console.

2. In Package Manager Console, enter the following command and then press Enter.
install-package PackageName -version PackageVersion -ProjectName ProjectName

(The project name is the name of the Visual Studio project that is written in the step where you were
instructed to add the NuGet package).

3. Wait until Package Manager Console finishes downloading and adding the package.
The following table details the compatible versions of the packages used in the lab:

Package name Package version

EntityFramework 5.0.0

Exercise 1: Hosting the WCF Services in IIS


Scenario
To host the WCF services in IIS, you will start by creating a web application. After you create the web
application and add the proper references, you will configure the web application to have the same
service configuration as the self-hosted services. After you finish preparing the web application, you will
host it in IIS and configure IIS to support both HTTP and NET.TCP endpoints.
The main tasks for this exercise are as follows:

1. Create a web application project


2. Configure the web application project to use IIS
3. Configure the web applications to support NET.TCP

Task 1: Create a web application project


1. Run the setup.cmd file from D:\AllFiles\Mod06\LabFiles\Setup
2. Open the BlueYonder.Server solution file from the
D:\AllFiles\Mod06\LabFiles\begin\BlueYonder.Server folder.

3. To the BlueYonder.Server solution, add a new ASP.NET Empty Web Application project named
BlueYonder.Server.Booking.WebHost.

4. Install the Entity Framework NuGet package in the BlueYonder.Server.Booking.WebHost project.

5. In the BlueYonder.Server.Booking.WebHost project, add references to the System.ServiceModel


assembly and to the following projects:

BlueYonder.BookingService.Contracts

BlueYonder.BookingService.Implementation

BlueYonder.DataAccess
BlueYonder.Entities

6. Copy the FlightScheduleDatabaseInitializer.cs file from the booking service self-host project to the
new web application project.

7. Add a Global.asax file to the new web application, and in the Application_Start method, add the
required database initialization code.
Initialize the database by using the database initializer you copied from the self-host project.

Refer to the database initialization code in the Program.cs file that is in the
BlueYonder.Server.Booking.Host project.
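The initialization code in the Global.asax code-behind might resemble the following sketch, which mirrors the database initialization performed in the self-host's Program.cs. This is only a sketch: register the initializer class you copied in the previous step, matching whatever the Program.cs code actually does in your lab files:

```csharp
using System;
using System.Data.Entity;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Register the database initializer copied from the self-host project,
        // so the database is created/seeded the first time the context is used
        Database.SetInitializer(new FlightScheduleDatabaseInitializer());
    }
}
```

Application_Start runs once, when IIS starts the web application, which makes it the natural place for this one-time setup.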

Task 2: Configure the web application project to use IIS


1. Copy the entire content of the App.config file from the BlueYonder.Server.Booking.Host project
and paste it over the content of the Web.config file in the BlueYonder.Server.Booking.WebHost
project.

2. Within the <system.serviceModel> section group, add a <serviceHostingEnvironment> element


with service activation configuration for the Booking service, and use the relative address
Booking.svc.

Declare a <serviceActivations> section inside the <serviceHostingEnvironment> section.

Inside the <serviceActivations> section, add an <add> tag with the following parameters.

Attribute Value

service BlueYonder.BookingService.Implementation.BookingService

relativeAddress Booking.svc

Note: You can also refer to Lesson 1, "Hosting Service On-Premises", Topic 2, "Hosting WCF
Service in IIS" in this module, for a code example.
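Putting the steps above together, the resulting configuration should resemble the following sketch (the surrounding elements of your Web.config, such as bindings and behaviors, are omitted here):

```xml
<system.serviceModel>
  <serviceHostingEnvironment>
    <serviceActivations>
      <add service="BlueYonder.BookingService.Implementation.BookingService"
           relativeAddress="Booking.svc" />
    </serviceActivations>
  </serviceHostingEnvironment>
</system.serviceModel>
```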

3. Add a <system.web> element to the <configuration> section, and in it add a <compilation>


element and a <httpRuntime> element.
In the <compilation> element, set the debug attribute to true and the targetFramework attribute
to 4.5.

In the <httpRuntime> element, set the targetFramework attribute to 4.5.

Note: .NET 4 and 4.5 use the same .NET Framework version for the IIS application pool. If
you do not specify the target framework of your Web application in the Web.config file, the
default version will be .NET 4.
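Following these steps, the added elements should resemble the following sketch:

```xml
<configuration>
  <system.web>
    <compilation debug="true" targetFramework="4.5" />
    <httpRuntime targetFramework="4.5" />
  </system.web>
</configuration>
```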

4. Remove the addresses used in the service metadata behavior and the service endpoint.

Remove the httpGetUrl attribute from the <serviceMetadata> element.

Remove the address attribute from the <endpoint> element.

Note: IIS uses the address of the web application to create the service metadata address
and the service endpoint address.
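After you remove the two attributes, the relevant elements should resemble the following sketch. The binding and contract values shown here are only illustrative; keep the values that are already present in your configuration:

```xml
<!-- In the service behavior: no httpGetUrl attribute -->
<serviceMetadata httpGetEnabled="true" />

<!-- In the service definition: no address attribute -->
<endpoint binding="basicHttpBinding"
          contract="BlueYonder.BookingService.Contracts.IBookingService" />
```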

5. In the BlueYonder.Server.Booking.WebHost project's properties, on the Web tab, change the


server from IIS Express to the local IIS Web server, and then build the solution.

Task 3: Configure the web applications to support NET.TCP


1. Open IIS Manager from the Start screen and select the Default Web Site from the Connections
pane. Verify that the net.tcp binding is listed in the Bindings list.

Note: The site bindings configure which protocols are supported by the IIS Web Site and
which port, host name, and IP address are used with each protocol.

2. Open the Advanced Settings dialog box of the BlueYonder.Server.Booking.WebHost web


application, and set the Enabled Protocols value to enable both the http and net.tcp protocols. Use a
comma to separate the values.

Note: In addition to adding net.tcp to the site bindings list, you also need to enable net.tcp
for each Web application you host in IIS. By enabling net.tcp, WCF will automatically create an
endpoint with NetTcpBinding.

3. Connect to the WCF service through the WCF Test Client application, and verify that the application is able to retrieve metadata from both services.
Open the WCF Test Client application from D:\AllFiles.

Connect to the address http://localhost/BlueYonder.Server.Booking.WebHost/Booking.svc


Wait until you see the service and endpoints tree in the pane to the left.

Results: You will be able to run the WCF Test Client application and verify that the services are running properly in IIS.

Exercise 2: Hosting the ASP.NET Web API Services in a Windows Azure Web Role
Scenario
Before you deploy the ASP.NET Web API services to Windows Azure, you need to create a cloud service
where you will deploy the web application and an SQL Database server for the application to use. After
you create the cloud service, you will upload several certificates to it, so the web application can use these
certificates when calling the on-premises WCF service.

After you create the components in Windows Azure, you will create the cloud project in Visual Studio
2012, and configure it to deploy the ASP.NET Web API web application to a Windows Azure web role.
Before deploying the application to Windows Azure, you will test it locally with the Windows Azure
compute emulator, and after you verify it is running properly, you will deploy it to Windows Azure.

The main tasks for this exercise are as follows:


1. Create a new SQL database server and a new cloud service

2. Add a cloud project to the solution

3. Deploy the cloud project to Windows Azure


4. Test the cloud service against the client application

Task 1: Create a new SQL database server and a new cloud service
1. Open the Windows Azure Management Portal (http://manage.windowsazure.com)

2. Create a new SQL Database Server


On the SQL DATABASES page, in the SERVERS tab, click ADD.

Use the login name BlueYonderAdmin and the password Pa$$w0rd.

Select a Region that is closest to your location and create the SQL Database Server.

Write down the name of the new SQL Database Server.

3. Configure the SQL Database server to allow access from any IP address by creating a rule with the
following settings:
RULE NAME: OpenAllIPs

START IP ADDRESS: 0.0.0.0

END IP ADDRESS: 255.255.255.255

Note: As a best practice, you should allow only your IP address, or your organization's IP address range, to access the database server. However, in this course you will use this database server in future labs, and your IP address might change in the meantime; therefore, you are required to allow access from all IP addresses.

4. Create a new Windows Azure Cloud Service named BlueYonderCompanionYourInitials (YourInitials contains your initials). Choose a region closest to your location for the new cloud service.
5. Upload the CloudApp certificate from the D:\AllFiles\certs folder to the new cloud service.

Open the new cloud service configuration and then open the CERTIFICATES tab.
Use the password 1 to open the certificate file.

Note: In this lab, the ASP.NET Web API services are accessible through HTTP and HTTPS. To
use HTTPS, you need to upload a certificate to the Windows Azure cloud service.

6. Open the BlueYonder.Companion.sln solution file from the D:\AllFiles\Mod06\LabFiles\begin\BlueYonder.Server folder in a new Visual Studio 2012 instance.
7. In the BlueYonder.Companion.Host project, edit the connection string within the Web.config file,
and update the SQL Database server name.

Locate the two {ServerName} placeholders in the connectionString attribute


Replace the placeholders with the SQL database server name from the portal.
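For illustration, a connection string of this kind typically has the following shape. The two {ServerName} placeholders below are the ones the lab asks you to replace; the connection string name, database name, and other option values are assumptions for the sketch, not the lab's actual values:

```xml
<connectionStrings>
  <!-- Replace both {ServerName} placeholders with the
       SQL Database server name from the portal -->
  <add name="TravelCompanion"
       connectionString="Server=tcp:{ServerName}.database.windows.net,1433;Database=BlueYonder;User ID=BlueYonderAdmin@{ServerName};Password=Pa$$w0rd;Encrypt=True;"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```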

8. Locate the client endpoint configuration for the Booking service, and change its address to point to
the web-hosted service. Use the address
net.tcp://localhost/BlueYonder.Server.Booking.WebHost/Booking.svc.
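A sketch of the resulting client endpoint configuration follows; the endpoint name, binding configuration, and contract name are illustrative assumptions, while the address is the one given in step 8:

```xml
<client>
  <!-- The address now points to the web-hosted Booking service over net.tcp -->
  <endpoint name="BookingService"
            address="net.tcp://localhost/BlueYonder.Server.Booking.WebHost/Booking.svc"
            binding="netTcpBinding"
            contract="BlueYonder.BookingService.Contracts.IBookingService" />
</client>
```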

Task 2: Add a cloud project to the solution


1. Add a new cloud service project with a Windows Azure web role for the
BlueYonder.Companion.Host project:

Right-click the BlueYonder.Companion.Host project and select Add Windows Azure Cloud Service Project.

Verify that the cloud project contains a web role named BlueYonder.Companion.Host.Azure.

Note: You can achieve the same result by adding a new Windows Azure Cloud Service project to the solution, and then manually adding a Web Role Project from an existing project.

2. Add the HTTPS certificate to the Web Role.



Use the Certificates tab in the role's Properties window to add a certificate according to the following settings.

Name Value

Name BlueYonderCompanionSSL

Store Location LocalMachine

Store Name My

Thumbprint Click the ellipsis and then select the BlueYonderSSLCloud certificate

3. On the Certificates tab, change the Service Configuration to Local, and then change the
BlueYonderCompanionSSL certificate from BlueYonderSSLCloud to BlueYonderSSLDev. Change
the service configuration back to All Configurations when you are done.

Note: SSL certificates contain the name of the server so that clients can validate the
authenticity of the server. Therefore, there are different certificates for the local deployment, and
for the cloud deployment.
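Behind the Certificates tab, Visual Studio stores these settings in the service definition (.csdef) and service configuration (.cscfg) files. The following is a sketch of how those entries typically look (the thumbprint value is elided, and only the Certificates elements are shown):

```xml
<!-- ServiceDefinition.csdef: declares the certificate and the
     certificate store where the role installs it -->
<Certificates>
  <Certificate name="BlueYonderCompanionSSL"
               storeLocation="LocalMachine" storeName="My" />
</Certificates>

<!-- ServiceConfiguration (.cscfg): binds the certificate name to an
     actual thumbprint; the Local and Cloud configurations hold the
     BlueYonderSSLDev and BlueYonderSSLCloud thumbprints respectively -->
<Certificates>
  <Certificate name="BlueYonderCompanionSSL"
               thumbprint="..." thumbprintAlgorithm="sha1" />
</Certificates>
```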

4. Add an HTTPS endpoint to the web role.

Use the Endpoints tab in the role's Properties window to add an endpoint according to the following settings:

Name Value

Name Endpoint2

Type Input

Protocol https

Public Port 443

SSL Certificate Name BlueYonderCompanionSSL
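In the service definition file, the Endpoints tab settings above translate to an InputEndpoint element similar to the following sketch (shown in isolation from the rest of the WebRole element):

```xml
<Endpoints>
  <!-- HTTPS input endpoint on public port 443, secured with the
       certificate added on the Certificates tab -->
  <InputEndpoint name="Endpoint2" protocol="https" port="443"
                 certificate="BlueYonderCompanionSSL" />
</Endpoints>
```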

5. Run the ASP.NET Web API project with the Windows Azure compute emulator.

After the two web browsers open, verify they use the addresses http://127.0.0.1:81 and
https://127.0.0.1:444.

Note: The endpoint configuration of the role uses ports 80 and 443 for the HTTP and HTTPS endpoints. However, the local IIS Web server already uses those ports, so the emulator needs to use different ports.

6. Log on to the virtual machine 20487B-SEA-DEV-C as Admin with the password Pa$$w0rd.
7. Open the BlueYonder.Companion.Client solution file from the
D:\AllFiles\Mod06\LabFiles\begin\BlueYonder.Companion.Client folder.

8. The client app is already configured to use the Windows Azure compute emulator. Run the client and
verify it can connect to the emulator by searching for a flight to New York and verifying you see a list
of flights.

Note: Normally, the Windows Azure Emulator is not accessible from other computers on
the network. For purposes of testing this lab from a Windows 8 client, a routing module was
installed on the server's IIS, routing the incoming traffic to the emulator.

9. Switch back to the 20487B-SEA-DEV-A virtual machine.

Task 3: Deploy the cloud project to Windows Azure


1. On the View menu, open the Task List window.

2. Switch to Comments and search for TO-DO items. Double-click each comment and look at the disabled WCF calls:
UpdateReservationOnBackendSystem

CreateReservationOnBackendSystem

Note: Prior to the deployment of the cloud project to Azure, all the on-premises WCF calls
were disabled.
These include calls from the Reservation Controller class and the Trips Controller class.
After you deploy the ASP.NET Web API project to Windows Azure, it cannot call the on-premises
WCF service, so for now, the WCF Service calls are disabled. In Module 7, "Windows Azure Service
Bus" in Course 20487, you will learn how a cloud application can connect to an on-premises
service.

3. Use Visual Studio 2012 to publish the BlueYonder.Companion.Host.Azure project to the cloud
service you created.

Open the Publish Windows Azure Application dialog box for the cloud project.

Click the Sign in to download credentials link.

Save the publish settings file, and return to Visual Studio 2012.
Click Import, select the publish settings file you saved, and move to the next step of the publish
wizard.

4. Select the BlueYonderCompanionYourInitials (YourInitials contains your name's initials) Windows Azure Cloud Service.

If required, create a new Windows Azure Storage account named byclyourinitials (yourinitials contains your name's initials, in lower-case) in a region closest to your location.

On the Advanced Settings tab, set the name of the new deployment to Lab6, and clear the Append
current date and time check box.

Note: The abbreviation bycl stands for Blue Yonder Companion Labs. An abbreviation is
used because storage account names are limited to 24 characters. The abbreviation is in lower-
case because storage account names are in lower-case. Windows Azure Storage is covered in
depth in Module 9, "Windows Azure Storage" in Course 20487.

5. Start the deployment process by clicking Publish. This might take several minutes to complete.

Task 4: Test the cloud service against the client application


1. Go back to the 20487B-SEA-DEV-C virtual machine.

2. Replace the Address used to communicate with the server:

In the BlueYonder.Companion.Shared project, open the Addresses class and in the BaseUri
property, replace the address of the emulator with the cloud service address you created earlier.

Use the form https://blueyondercompanionYourInitials.cloudapp.net/ (replace YourInitials with your initials).

3. Run the client app and search for flights to New York. Verify the client application is able to connect
to the ASP.NET Web API Web application hosted in Windows Azure and retrieve the list of flights.

Results: You will verify the application works locally in the Windows Azure compute emulator, and then
deploy it to Windows Azure and verify it works there too.

Exercise 3: Hosting the Flights Management Web Application in a Windows Azure Web Site
Scenario
In this exercise, you will deploy the flights manager web application to a Windows Azure Web Site. The
first step will be to create a Windows Azure Web Site. After you create the Web Site, you will download
the publish settings file, which you will then use in Visual Studio 2012 to populate all the publish settings
required by the publish process (such as destination server, username, and password).
The main tasks for this exercise are as follows:

1. Create new Web Site in Azure

2. Upload the Flights Management web application to the new Web Site by using the Windows Azure
Management Portal

Task 1: Create new Web Site in Azure


1. Go back to the 20487B-SEA-DEV-A virtual machine, and open the Windows Azure Management
Portal (http://manage.windowsazure.com).
2. Create a new Web Site named BlueYonderCompanionYourInitials (YourInitials contains your initials).

Click NEW, and then click QUICK CREATE.

Select a Region that is closest to your location.

Click CREATE WEB SITE to create the web site.

3. Open the new Web Site's Dashboard page, and download the publish profile file.

Note: The publish profile file includes the information required to publish a Web application to the Web Site. This is an alternative publish method to downloading the subscription file, as shown in Lesson 2, "Hosting Services in Windows Azure", Demo 1, "Hosting in Windows Azure" in Course 20487. The difference is that by importing the subscription file, you can publish to any of the Web Sites managed by your Windows Azure subscription, whereas importing the publish profile file of a Web Site only allows you to publish to that specific Web Site.

Task 2: Upload the Flights Management web application to the new Web Site by
using the Windows Azure Management Portal
1. Open the BlueYonder.Companion.FlightsManager solution file from the
D:\AllFiles\Mod06\LabFiles\begin\BlueYonder.Server folder in a new Visual Studio 2012 instance.

2. In the BlueYonder.FlightsManager project, open the web.config file, and edit the <appSettings> section to substitute the {YourInitials} placeholder with the initials you used when you created the Windows Azure cloud service earlier.

3. Publish the BlueYonder.FlightsManager project.

Use the publish profile file you downloaded in the previous task.

After the deployment completes, a browser will automatically open, showing the deployed site.

4. Verify that you can see the flight schedule from Paris to Rome, indicating that the application was able to retrieve information from the web role.

Results: After you publish the flights manager web application, you will open the web application in a browser and verify that it is working properly and is able to communicate with the web role you deployed in the previous exercise.

Question: In this lab, you used a Web Site and a web role, while the worker role was
mentioned earlier in the lesson. What criteria would you use to determine what roles your
application requires?

Module Review and Takeaways


In this module, you learned about the different ways in which to host on-premises WCF and ASP.NET Web
API services, namely self-hosting and IIS hosting. You also learned about the Windows Azure web and
worker roles and Windows Azure Web Sites as the options to use for cloud-based hosting.

Review Question(s)
Question: What would you use to host a personal blog site in Windows Azure, and why?

Module 7
Windows Azure Service Bus
Contents:
Module Overview 7-1

Lesson 1: Windows Azure Service Bus Relays 7-2

Lesson 2: Windows Azure Service Bus Queues 7-12


Lesson 3: Windows Azure Service Bus Topics 7-23

Lab: Windows Azure Service Bus 7-32


Module Review and Takeaways 7-41

Module Overview
Integration and collaboration are key requirements in many distributed applications. Windows Azure Service Bus provides a variety of cloud-based infrastructure services that on-premises and cloud-based applications can use to interconnect and collaborate in a distributed, scalable, and hybrid environment.
This module describes messaging patterns for distributed applications in hybrid environments, and the
infrastructures provided by Windows Azure Service Bus to support these patterns.

Note: The Management Portal UI and Windows Azure dialog boxes in Visual Studio 2012
are updated frequently when new Windows Azure components and SDKs for .NET are released.
Therefore, it is possible that some differences will exist between screen shots and steps shown in
this module, and the actual UI you encounter in the Management Portal and Visual Studio 2012.

Objectives
After completing this module, you will be able to:
Describe the purpose and functionality of relayed and buffered messaging.

Provision, configure, and use Service Bus queues.

Enhance the effectiveness of queue-based communications by using topics, subscriptions, and filters.

Lesson 1
Windows Azure Service Bus Relays
The Service Bus Relay is a middle-tier service, running in Windows Azure, for connecting remote clients
and cloud-based applications with on-premises services. You can connect the clients and applications with
the on-premises services across firewalls and network address translation (NAT) over the Internet, without
using virtual private networks (VPNs). This lesson describes the Service Bus Relay capabilities and
programming model.

Lesson Objectives
After completing this lesson, you will be able to:

Describe the Service Bus Relay.


Create new Service Bus namespaces.

Use WCF relay bindings.

What Is Windows Azure Service Bus Relay?


Connectivity between an on-premises application and remote clients or cloud-based applications is a key requirement in a large variety of applications. Establishing a connection from a local server to services running in the cloud is a simple task. However, establishing a connection in the reverse direction might be challenging. Firewalls and Network Address Translation (NAT) protect services that reside within a corporate enterprise network, and it is unacceptable to open up a firewall and expose the local services to the Internet.

Note: NATs are components, implemented in hardware or software, that are responsible for translating IP addresses for both outgoing and incoming messages. For example, a private network of 50 computers can connect to the Internet through one computer, called a gateway, which has a public Internet IP address and a NAT installed. When one of the computers sends a message to the Internet, the message passes through the gateway, and the NAT replaces the origin IP address with the IP address of the gateway. When a message is sent from the Internet to one of the computers, the message arrives at the gateway computer, where the NAT replaces the public IP address of the gateway with the internal network address of the target computer, and then sends it to the private network.

Windows Azure Service Bus Relay acts as a cloud-based mediator between networks, for example between
a corporate network and the cloud, or between two corporate networks. You can host a WCF service on
premises and delegate its endpoint to the Service Bus Relay by establishing a bidirectional communication
channel between the two. Remote applications that want to use your service, first, establish a connection
to the Service Bus Relay, which then routes the communication all the way to your local service. By using
Windows Azure Service Bus, you can expose your local services securely to customers outside of the
enterprise network without opening the corporate firewall.

Windows Azure Service Bus Relay uses the WCF programming model and supports various communication patterns and protocols. You can use one-way messaging and route incoming messages to one or more listeners. You can also use the relay to establish a live socket from the sender, through the relay, to the receiver, and then upgrade the socket to a direct channel, if supported.

The Access Control Service (ACS) manages the authentication and authorization of Windows Azure Service
Bus. You can control who can access your services at a very granular level.

Note: One-way messaging and other messaging patterns you can use in WCF services are
discussed in Appendix 1, "Designing and Extending WCF Services". ACS is discussed in Module 11,
"Identity Management and Access Control" in Course 20487.

Windows Azure Service Bus Relay forwards messages and sessions between senders and receivers, bypassing NATs and firewalls, yet it does not persist, filter, or process messages in any way.

Windows Azure Service Bus Relay provides new communication options to local services that were
traditionally exposed only to customers within the corporate network.
The Service Bus Relay was designed for scalability and availability. The Service Bus has a large number of
load-balanced front-end nodes through which the sender and receiver establish communication and a
back-end fabric, which executes the core relay functionality.
The relay is capable of establishing a firewall- and NAT-traversing connection because both the sender and the receiver create outbound channels to the relay, which means that NAT translation is not an obstacle and the firewall allows the communication.

One-Way Messaging
With one-way relay bindings, a client sends messages to a service without expecting any response from it. Because the client cannot communicate directly with the service, it uses the Service Bus as the mediator.

The following diagram depicts how one-way message forwarding is used with Service Bus Relays.

FIGURE 7.2: REQUEST-RESPONSE MESSAGING RELAY PATTERN


Request-response messages are made possible with Service Bus by using socket-to-socket forwarding and
rendezvous connections:

1. The receiver creates a bidirectional channel to the relay.

2. The sender creates an outbound channel to the relay.


3. A control message is sent to the receiver by using its subscription in the services fabric and the
previously created channel as described in the message forwarding section. The control message
contains information about the socket just created by the sender and its location in the relay front-
end nodes.

4. The receiver creates another outbound connection to the front-end nodes, also referred to as a
rendezvous connection, which now acts as a socket-to-socket forwarder. This outbound connection
serves as a live socket that the sender and the receiver can use for communication. From the point of
view of the sender and the receiver, the socket is a simple point-to-point socket even though it is
implemented by two different sockets combined by a socket forwarder in the relay front-end node.

Socket forwarding looks and feels like a normal socket, but it is slow. Upgrading to a real point-to-point socket between the sender and receiver improves performance, but it requires knowledge of the correct IP address of the receiver behind the NAT. Service Bus Relay uses NAT traversal algorithms to try to determine the IP address to use by sending control messages to the receiver. By looking at the source and destination addresses, the relay can determine the behavior of the NAT translator and estimate the correct IP address that will provide direct communication with the receiver through the NAT translator. If the relay succeeds in determining the correct IP address, the channel can be upgraded, which leads to major performance improvements.

To instruct the Service Bus Relay to try establishing a direct connection between sender and the receiver,
you need to change the connection mode of the binding from a relayed connection to a hybrid
connection.
For more information about configuring a hybrid connection, see How to: Change the
Connection Mode
http://go.microsoft.com/fwlink/?LinkID=313736

Creating Windows Azure Service Bus Namespaces


To create a relay endpoint, you must first create a Service Bus namespace. A Service Bus namespace is a unique name used for addressing and resource configuration. Service Bus Uniform Resource Identifiers (URIs) use the form sb://namespace.servicebus.windows.net. For example, Blue Yonder Airlines can have the namespace blueyonder, and use the URI blueyonder.servicebus.windows.net.

Note: The sb:// scheme is used when accessing Service Bus resources by using TCP. If your resource is accessible through HTTP, you can use the http:// scheme instead.

To create a new Service Bus namespace, log on to the Windows Azure Management Portal, click SERVICE
BUS in the navigation pane, and then click Create. In the Create a Namespace dialog box, enter a name
for the namespace, a region, and then click the check mark to begin the provisioning process.

Note: The process for creating a new Service Bus namespace is demonstrated in Demo 1,
"Creating Service Bus Relays" of this lesson.

Service Bus resources, such as service endpoints and queues, are organized hierarchically in tree form,
with the namespace as the root of the tree. Each individual resource has a URI that is relative to the URI of
the Service Bus namespaces. For example, Blue Yonder Airlines can create a relay endpoint for the
Booking service under sb://blueyonder.servicebus.windows.net/booking.

The following diagram depicts the hierarchy of Service Bus resources.

FIGURE 7.3: SERVICE BUS RESOURCE HIERARCHY


Every Service Bus namespace has a matching management namespace in the ACS, which you can use to
configure permissions for each resource in the namespace tree. For example, you can create a set of users
with read/write permissions for the Service Bus namespace path
sb://blueyonder.servicebus.windows.net/services, and then create a set of service endpoints:
sb://blueyonder.servicebus.windows.net/services/booking and
sb://blueyonder.servicebus.windows.net/services/frequentflyer. The two endpoints will inherit the

permissions you assigned to their parent node. Configuring Service Bus with ACS will be discussed in
detail in Module 11, "Identity Management and Access Control".

Using WCF Relay Bindings


Service Bus Relay uses WCF as its underlying programming model. You build receivers by implementing a simple WCF service and exposing endpoints that use relay bindings. In addition to configuring the endpoint to use a relay-based binding, you also need to authenticate the service against the relay. This is done by attaching a TransportClientEndpointBehavior endpoint behavior to the relay endpoint.

Most traditional WCF bindings have corresponding relay bindings, as described in the following table.

WCF Binding Relay Binding

BasicHttpBinding BasicHttpRelayBinding

WebHttpBinding WebHttpRelayBinding

WSHttpBinding WSHttpRelayBinding

WS2007HttpBinding WS2007HttpRelayBinding

WSHttpContextBinding WSHttpRelayContextBinding

WS2007FederationHttpBinding WS2007FederationHttpRelayBinding

NetTcpBinding NetTcpRelayBinding

NetTcpContextBinding NetTcpRelayContextBinding

n/a NetOnewayRelayBinding

n/a NetEventRelayBinding

Note: NetOnewayRelayBinding and NetEventRelayBinding provide one-way message forwarding, and have no corresponding traditional WCF binding.

To configure your app with all of the necessary Service Bus dependencies, such as referenced assemblies
and WCF extension configuration, install the Windows Azure Service Bus NuGet package.
The WCF service itself contains no code that is specific to the relaying operation. This means that you can use the same WCF service both to provide functionality inside your corporate network and to act as a relay receiver.

The following code shows how to build a WCF service that can act as a receiver in the relay.

WCF Service as a Relay Receiver


public class ChatService : IChatContract
{
    void IChatContract.Hello(string nickname)
    {
        Console.WriteLine(string.Format("{0} has joined the chat room", nickname));
    }

    void IChatContract.Chat(string nickname, string text)
    {
        Console.WriteLine(string.Format("{0} said: {1}", nickname, text));
    }

    void IChatContract.Bye(string nickname)
    {
        Console.WriteLine(string.Format("{0} has left the chat room", nickname));
    }
}

The following code shows how to host a WCF service that acts as a relay receiver.

Hosting a Relay Receiver


// Create authentication token
var relayCredentials = new TransportClientEndpointBehavior()
{
    TokenProvider = TokenProvider.CreateSharedSecretTokenProvider(issuerName, issuerSecret)
};

// Create the Service Bus resource URI
Uri serviceAddress = ServiceBusEnvironment.CreateServiceUri(
    "sb", serviceBusNamespace, "MyChatService");

ServiceHost host = new ServiceHost(typeof(ChatService));

ServiceEndpoint endpoint = host.AddServiceEndpoint(
    typeof(IChatContract), new NetEventRelayBinding(), serviceAddress);

endpoint.EndpointBehaviors.Add(relayCredentials);

host.Open();

In the preceding example, you create an endpoint with the NetEventRelayBinding binding, and attach a security token to it to authenticate the service as a listener (receiver). The binding uses TCP communication; therefore, the URI is created with the sb scheme.

The preceding example creates the endpoint and endpoint behavior in code. However, you can also
create this configuration in the configuration file.
The following code shows how to declare a service with a relay endpoint by using XML configuration.

Creating Service Configuration by using XML


<behaviors>
  <endpointBehaviors>
    <behavior name="sbAuthentication">
      <transportClientEndpointBehavior>
        <tokenProvider>
          <sharedSecret issuerName="owner" issuerSecret="PlaceBase64StringHere" />
        </tokenProvider>
      </transportClientEndpointBehavior>
    </behavior>
  </endpointBehaviors>
</behaviors>
<services>
  <service name="Service.ChatService">
    <endpoint contract="Service.IChatContract"
              binding="netEventRelayBinding"
              address="sb://myServiceBusNamespace.servicebus.windows.net/MyChatService"
              behaviorConfiguration="sbAuthentication" />
  </service>
</services>

The owner user, used in the preceding example, is created automatically with the Service Bus namespace and has full control over the namespace, including permissions to create new users and to grant or deny access to resources. To learn how to create users with restricted access (for example, users that can only send messages to the relay, to be used in client applications), refer to Module 11, "Identity Management and Access Control", Lesson 3, "Configuring Service to Use Federated Identities".

Note: When you add the <transportClientEndpointBehavior> element to your configuration file, Visual Studio 2012 will display a warning, because this element is not part of the known endpoint behaviors. If you installed the Windows Azure Service Bus NuGet package, you will notice that this endpoint behavior is listed in the <extensions> section as a new behavior element. Visual Studio 2012 does not add the behaviors listed in the <extensions> section to the valid list of behaviors, which is why the warning is shown. You can ignore this warning.
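For reference, the NuGet package registers the behavior extension in a section similar to the following sketch. The fully qualified type name, including the assembly version and public key token, is written by the package and is elided here; do not type it by hand:

```xml
<system.serviceModel>
  <extensions>
    <behaviorExtensions>
      <!-- Registered automatically by the Windows Azure Service Bus
           NuGet package; version and token values are elided -->
      <add name="transportClientEndpointBehavior"
           type="Microsoft.ServiceBus.Configuration.TransportClientEndpointBehaviorElement, Microsoft.ServiceBus, Version=..., Culture=neutral, PublicKeyToken=..." />
    </behaviorExtensions>
  </extensions>
</system.serviceModel>
```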

To send a message to the relay, you create a WCF proxy that uses a relay binding and a relay endpoint address. Similar to the receiver, the sender also has to authenticate with the relay before sending messages. To implement authentication, attach a TransportClientEndpointBehavior endpoint behavior to the proxy.

The following code shows how to send a message to the relay.

Implement Relay Sender


var relayCredentials = new TransportClientEndpointBehavior()
{
    TokenProvider = TokenProvider.CreateSharedSecretTokenProvider(issuerName, issuerSecret)
};

Uri serviceAddress = ServiceBusEnvironment.CreateServiceUri(
    "sb", serviceBusNamespace, "MyChatService");

ChannelFactory<IChatChannel> channelFactory = new ChannelFactory<IChatChannel>(
    new NetEventRelayBinding(), new EndpointAddress(serviceAddress));
channelFactory.Endpoint.EndpointBehaviors.Add(relayCredentials);
IChatChannel proxy = channelFactory.CreateChannel();
proxy.Open();

// Send a "Hello" message to the relay
proxy.Hello(chatNickname);

For additional information about the different relay bindings, see Service Bus Bindings
http://go.microsoft.com/fwlink/?LinkID=313737

Question: What are the three steps for adding relay endpoints in a WCF service?
7-10 Windows Azure Service Bus

Demonstration: Creating Service Bus Relays


In this demonstration, you will see how to create a Service Bus namespace, add a relay endpoint to a WCF
service running on-premises, and then consume the WCF service from a cloud-based Web application,
deployed in a Windows Azure Web Site, by using Service Bus Relay.

Demonstration Steps
Open Visual Studio 2012, and then open the ServiceBusRelay.sln solution from the
D:\Allfiles\Mod07\DemoFiles\ServiceBusRelay\begin\ServiceBusRelay folder.

1. Observe the code in the Program.cs file of the ServiceBusRelay.Server project. The service endpoint
is configured to receive messages directly by using TCP on port 747.
2. Note the code in the HomeController.cs file of the ServiceBusRelay.WebClient project. The Web
client consumes the service by using the address and port of the server.

3. Configure the ServiceBusRelay solution to have multiple startup projects, starting both the
ServiceBusRelay.Server and ServiceBusRelay.WebClient projects.

4. Run the solution, and use the console writer web application to write text to the console window.
Close both windows after you finish testing the application.
5. Open the Windows Azure Management Portal (http://manage.windowsazure.com) and create a
new Windows Azure Service Bus namespace named ServiceBusDemo07YourInitials (Replace
YourInitials with your initials).

6. Select the newly created Service Bus namespace, click ACCESS KEY, and then copy the default key to
the clipboard.
7. Install the WindowsAzure.ServiceBus NuGet package in the ServiceBusRelay.Server and
ServiceBusRelay.WebClient projects.

8. Change the Program class in the ServiceBusRelay.Server project to expose a Service Bus endpoint
instead of a TCP endpoint. Change the base address of the host to
sb://ServiceBusDemo07YourInitials.servicebus.windows.net/console (Replace YourInitials with
your initials), and change the binding of the endpoint to NetTcpRelayBinding.

9. Add a new TransportClientEndpointBehavior to the endpoint's behaviors list. Set the


TokenProvider property of the behavior by using the
TokenProvider.CreateSharedSecretTokenProvider method. Use the owner issuer name and the
access key you copied from the management portal as the issuer secret.

10. Change the HomeController class in the ServiceBusRelay.WebClient project to consume the
Service Bus endpoint. In the Write method, change the endpoint address to
sb://ServiceBusDemo07YourInitials.servicebus.windows.net/console (Replace YourInitials with
your initials), and change the binding to NetTcpRelayBinding.

11. Add a new TransportClientEndpointBehavior to the endpoint of the factory. Set the
TokenProvider property of the behavior by using the
TokenProvider.CreateSharedSecretTokenProvider method. Use the owner issuer name and the
access key you copied from the management portal as the issuer secret.

12. Create a Windows Azure Web Site named ServiceBusDemo07YourInitials (Replace YourInitials with
your initials) in the Windows Azure Management Portal. Create the new web site in the same region
as your Service Bus namespace.

13. Open the web site's Dashboard page and download the web site's publish profile file.

14. Publish the ServiceBusRelay.WebClient project to Windows Azure by using the publish settings file
you downloaded.

15. Set the ServiceBusRelay.Server project as the startup project, and start it without debugging. Use
the console writer web application to write text to the console window. The web application running
in a web site will send the message to the console window hosted on your computer.

Question: If you need to implement the opposite scenario, where you have a single sender
with multiple receivers, which relay binding would you use?

Lesson 2
Windows Azure Service Bus Queues
Brokered messaging is the use of a central entity, referred to as a broker, which passes messages between
senders and receivers. Brokered messaging simplifies integration and improves scalability by providing
durable, enterprise-level messaging capabilities within the Windows Azure Service Bus. In contrast to
relays, brokered messaging is capable of decoupling senders and receivers, which means that a receiver
does not have to be online when a message is sent to it, and the sender does not have to be online when
the receiver picks up the message.

This lesson describes the brokered messaging pattern and the queuing infrastructure provided by the
Windows Azure Service Bus.

Lesson Objectives
After completing this lesson, you will be able to:
Describe the concept of brokered messaging.

Describe how to create a Service Bus Queue by using the Management Portal, Visual Studio 2012, and
the .NET Framework.
Describe the functionality of the BrokeredMessage class.
Send messages to and receive messages from a Service Bus Queue.

What Is Brokered Messaging?


The most common messaging pattern is request-
response where the client sends a request to the
server, which immediately processes the request
and replies with a response message. The
effectiveness of the request-response pattern lies
in its simplicity. The server does not have to
recognize or have any other information about
the client. The server just needs to send the
response message through the same channel on
which the request was received, and the response
message will arrive at the right destination. This
pattern is firewall-friendly as the client initiates the
call to the server and the firewall is assumed to allow outgoing calls.

To implement the request-response pattern, the following assumptions should be valid:


The server is online and available when the client sends the request.

There are enough resources on the server side to process the request and create a response message
in a reasonable amount of time.

There are many scenarios where these assumptions are not valid. For example:

Occasionally connected systems. The communication channel is not always available.

Inactive listener. The server is not always online and available.



Burst of traffic. A large number of requests are sent simultaneously, and there might not be enough
resources to handle all of them.

In such scenarios, brokered messaging might be the solution. In brokered messaging, producers (senders)
and consumers (receivers) do not have to be online at the same time. A producer sends messages over a
communication infrastructure that reliably stores these messages in a queue until the consumer is ready
to receive and process them. If the consumer is busy or unavailable, messages will just accumulate in the
queue and no error will be raised. When the consumer is ready to process these requests, it will extract the
messages from the queue and a response message might be sent back to the client.

In contrast to the request-response messaging pattern, brokered messaging is non-blocking. This means
that the client does not wait for an immediate response from the server. Messages are sent one-way. This
might introduce some complexity if responses are required, because the server must be aware of the
address of the client, and the client must be reachable.

Windows Azure Service Bus brokered messaging


The Windows Azure Service Bus brokered messaging infrastructure introduces enterprise-level queuing
capabilities that you can use in cloud-based, on-premises, and hybrid applications. You can consume the
Windows Azure Service Bus brokered messaging services by using the Representational State Transfer
(REST)-based service endpoints, or by using the APIs and WCF bindings provided by the .NET Framework.

The core components of the Windows Azure Service Bus brokered messaging infrastructure are:
Service Bus Queues

Service Bus Topics

Service Bus Queues are basic persistent communication channels. Queues offer First In, First Out (FIFO)
message delivery to one or more consumers. Messages are expected to be received and processed by the
receivers in the temporal order in which they were sent, and each message is received and processed by
exactly one message consumer.
Service Bus Topics are a special kind of queue, in which, depending on subscriptions and filters, multiple
consumers (multicast) can receive a single message. You can use Service Bus Topics to implement a
publish/subscribe pattern in which one or more consumers subscribe to a specific type of message.
Publishers send messages to the topic and the messages will be delivered to one or more associated
subscriptions, depending on the filter associated with each subscription. Messages are still expected to be
received and processed in the temporal order in which they were sent, but an individual message may be
received and processed by multiple consumers. Service Bus Topics will be discussed in Lesson 3, "Windows
Azure Service Bus Topics".

For additional information about brokered messaging, relayed messaging, and the Service
Bus infrastructure, see Relayed and Brokered Messaging
http://go.microsoft.com/fwlink/?LinkID=313738

Creating Windows Azure Service Bus Queues


There are various methods for creating Service Bus
Queues. You can use the Management Portal, the
Service Bus REST management API, the Windows
Azure Service Bus libraries, or use tools such as
Visual Studio 2012 or Service Bus Explorer. The
Management Portal and Visual Studio 2012 are
more suitable for creating Service Bus Queues at
design time, while the REST API and the Service
Bus libraries are usually used during run time.

Note: Service Bus Explorer is a tool developed by the Windows Azure Product Team
at Microsoft that users can use to administer Service Bus namespaces. You can download the tool
from the Windows Azure code samples website at http://go.microsoft.com/fwlink/?LinkID=313739.

Before creating a Service Bus Queue, you must first create a Service Bus namespace. A queue is a resource
in the namespace tree. Similar to all other resources, a queue has a URI that is relative to the URI of the
namespace, which you can secure by using ACS.
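For example, the URI of a queue can be built from the namespace URI by using the ServiceBusEnvironment.CreateServiceUri helper method shown earlier in this module; the namespace and queue names below are illustrative values.

```csharp
// Builds sb://mynamespace.servicebus.windows.net/myqueue
// ("mynamespace" and "myqueue" are illustrative names)
Uri queueUri = ServiceBusEnvironment.CreateServiceUri(
    "sb", "mynamespace", "myqueue");
```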

Create a Service Bus Queue with the Management Portal


The simplest method to create a queue is by using the Management Portal. After you log on to the
Management Portal, click NEW, click APP SERVICES, click SERVICE BUS, click QUEUE, and then click
QUICK CREATE or CUSTOM CREATE. Follow the instructions to specify the name of the queue, the
region in which to create it, and the Service Bus namespace to create it in, and then click the check mark
to begin the provisioning process.

The following figure presents the NEW dialog for Service Bus Queue creation.

FIGURE 7.4: CREATE A NEW SERVICE BUS QUEUE

Create a Service Bus Queue with Visual Studio 2012


Instead of using the Management Portal, you can use Visual Studio 2012 to create a new queue from the
Server Explorer window. Before you can create new queues from Visual Studio 2012, you need to add a
connection to the Service Bus namespace in which you plan to create the queue. To create a connection,
right-click the Windows Azure Service Bus node in Server Explorer, and then click Add Connection.

You can create a connection to your Service Bus namespace in one of the following ways:

Import your Windows Azure subscription information. In the Add Connection dialog box, select the
Your subscription option, click the Download Publish Settings link, download your subscription's
publish settings file, and then click the Import button to import the file. After importing your
subscription settings file, select any of the Service Bus namespaces that exist in your subscription.

Manually enter namespace credentials. Type the namespace name, and provide an access key for the
namespace, which you can retrieve from the Management Portal. To retrieve the access key from the
Management Portal, log on to the Management Portal, click SERVICE BUS in the navigation pane,
click the namespace you wish to connect to, click ACCESS KEY, and then copy the Default Key to the
clipboard and paste it back in Visual Studio 2012, in the Issuer Key box.
This figure presents how to create a new connection to your Service Bus namespace in Visual Studio 2012.

FIGURE 7.5: CREATE A NEW CONNECTION TO YOUR SERVICE BUS NAMESPACE IN
VISUAL STUDIO.
After a connection is established to your namespace, expand the namespace node in Server Explorer,
right-click the Queues node, and then click Create New Queue. In the New Queue dialog box, enter the
information for the new queue, and click Save.

This figure presents the dialog box for creating a new Service Bus Queue in Visual Studio 2012.

FIGURE 7.6: CREATE A NEW SERVICE BUS QUEUE IN VISUAL STUDIO

Note: When you create queues with Visual Studio 2012, you have more control over the
queue settings than when creating the queue through the Management Portal. For example, you
can control the default lock duration of messages before they are released, or the maximum size
of the queue. If you created the queue in the Management Portal, you can change these settings
in the queue's Configure page.

Create a Service Bus Queue with the Windows Azure Service Bus .NET Libraries
It is also possible to create a new queue by using the Windows Azure Service Bus libraries for .NET. To
configure your application with all of the necessary Service Bus dependencies, first you need to install the
Windows Azure Service Bus NuGet package. After you install the NuGet package, you will have
references to the Service Bus .NET libraries and will be able to create new queues in code.

The following code shows how to create a new queue by using the Service Bus .NET libraries. If the queue
already exists, the code will delete the existing queue before creating a new one.

Create a New Queue in C#


string connectionString =
    "Endpoint=sb://[your namespace].servicebus.windows.net/;" +
    "SharedSecretIssuer=owner;SharedSecretValue=[your secret]";

NamespaceManager namespaceManager =
    NamespaceManager.CreateFromConnectionString(connectionString);

// Delete the queue if it already exists
if (namespaceManager.QueueExists(queueName))
{
    namespaceManager.DeleteQueue(queueName);
}
namespaceManager.CreateQueue(queueName);

You use the NamespaceManager.CreateFromConnectionString method to connect to the Service Bus
namespace. The connection string you pass to the method contains the URI of the namespace, and the
credentials of the user that has management permissions over the namespace. In the preceding code, the
user is identified by an issuer name and the issuer's secret key. Instead of typing the connection string,
you can copy it from the Management Portal: In the Management Portal, click SERVICE BUS in the
navigation pane, click the namespace you wish to use, click ACCESS KEY, and then copy the
CONNECTION STRING to the clipboard.

Instead of placing the Service Bus connection string inside your code, making it hard to update, you
should place it in the configuration file. If you create a cloud-based application, you should place it in the
web or worker role service configuration file, and access it by using the
CloudConfigurationManager.GetSetting method.

Note: The CloudConfigurationManager class belongs to the
Microsoft.WindowsAzure.Configuration assembly that is part of the Windows Azure
Configuration Manager NuGet package. This NuGet package is installed automatically when
you install the Windows Azure Service Bus NuGet package. The GetSetting method will search
for application settings either in the service configuration file (ServiceConfiguration.cscfg) if you
are running in a Cloud Service or with the Windows Azure Emulator, or in the app.config or
Web.config file if you are running on-premises or in a Windows Azure Web Site.
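The configuration-based approach described above can be sketched as follows. The app setting name matches the one used in the demonstration later in this lesson; the rest is a minimal example, not a complete application.

```csharp
// Read the Service Bus connection string from configuration instead of
// hard-coding it. GetSetting looks in the service configuration file or
// in app.config/web.config, depending on where the code runs.
string connectionString =
    CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");

NamespaceManager namespaceManager =
    NamespaceManager.CreateFromConnectionString(connectionString);
```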

After you have access to your Service Bus namespace, you can access, create, and delete queues. When
creating a queue you can configure properties such as message Time to Live (TTL), message lock duration,
duplicate detection and session support, in addition to the queue name shown in the preceding example.

Duplicate Detection

You can use duplicate detection to ensure that a message is delivered exactly once.
If a client sent a message to a queue and a timeout exception was thrown, there is no way for the client to
know whether the message was sent successfully or whether it failed. The client can choose to resend the
message to ensure at-least-once delivery, but the result can be the same message being sent twice. The
client can choose not to resend the message to ensure at-most-once delivery, but the result can be the
message not being sent at all. By configuring the queue to detect duplicate messages, the client can retry
sending the message and the queue will ignore any duplicates. This ensures that the message is
delivered exactly once. To turn on duplicate detection, check the Requires Duplicate Detection option
when you create the queue with Visual Studio 2012. If you are creating the queue in code, create a
QueueDescription object for the new queue, set its RequiresDuplicateDetection property to true, and
pass the object to the CreateQueue method.
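The code-based approach can be sketched as follows; this is a minimal example in which the queue name and the duplicate detection time window are illustrative values.

```csharp
// Describe the queue and turn on duplicate detection.
QueueDescription description = new QueueDescription("ordersqueue")
{
    RequiresDuplicateDetection = true,
    // How long the queue keeps message IDs in order to detect duplicates
    DuplicateDetectionHistoryTimeWindow = TimeSpan.FromMinutes(10)
};

NamespaceManager namespaceManager =
    NamespaceManager.CreateFromConnectionString(connectionString);
if (!namespaceManager.QueueExists(description.Path))
{
    namespaceManager.CreateQueue(description);
}
```

Duplicate detection is based on the MessageId property of the brokered message, so a client that retries sending must use the same MessageId for the retried message.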

The BrokeredMessage Class


The Windows Azure Service Bus libraries for .NET
introduce the BrokeredMessage class to
represent a message that is transferred through
queues and topics.

The BrokeredMessage class provides the following methods for handling the message in the context of
queued transport:
GetBody. Deserializes the brokered message body into an object of the specified type by
using the DataContractSerializer.
DeadLetter. Moves the message to the dead-letter queue. The dead-letter queue is a special sub-queue
created for each queue to store messages that have waited in the queue for too long without being
pulled. The dead-letter queue is also used for storing messages that failed processing by the consumer
and were placed aside to prevent them from being pulled again.

Abandon. Releases the lock on a peek-locked message.

Defer. Indicates that the receiver wants to defer the processing for this message.
RenewLock. Renews the lock on a message.

Complete. Completes the receive operation of a message and indicates that the message should be
marked as processed and deleted or archived.
A brokered message has three major parts:

Message Body. The message body contains the message payload, which is usually the serialized
representation of a business entity. The message body is transparent to the messaging infrastructure.
For queues and topics, the message body is plain data that should be transmitted from producer to
consumer.
Brokered message properties. Brokered message properties is a user-defined key-value collection
of data that is visible to the infrastructure and is used to route or handle messages while being
transmitted. By being able to access the message properties, the infrastructure can participate in the
message processing. Topic subscriptions and filters use the information in the brokered message
properties to route the message to the proper destination. Service Bus Topics will be discussed in the
next lesson, Windows Azure Service Bus Topics.

Other metadata. The message metadata contains information such as content-type, ID, sequence
number, size, expiry time, delivery count, and so on.

To learn more about the BrokeredMessage class, see The BrokeredMessage Class.
http://go.microsoft.com/fwlink/?LinkID=298819&clcid=0x409

Question: What is the difference between the body and properties parts of a brokered
message?

Sending and Receiving Messages


After the queue is ready, you can send messages
to the queue and receive messages from the
queue by using the QueueClient class.

It is possible to create an instance of QueueClient by using the connection string or by explicitly
providing the URI of the Service Bus Queue and the client's credentials.

The following code shows how to create a QueueClient by using a connection string and by providing
the URI of the queue and credentials explicitly.

Create a QueueClient
// Create a queue client using a connection string
QueueClient client1 = QueueClient.CreateFromConnectionString(connectionString, queueName);

// Create a queue client explicitly
MessagingFactory factory = MessagingFactory.Create(serviceUri, credentials);
QueueClient client2 = factory.CreateQueueClient(queueName);
QueueClient client3 = new QueueClient(queueName);

The last line of code in the preceding example is a bit tricky. At first glance, the instantiation of a
QueueClient without specifying the connection string or using a MessagingFactory should not work;
however, this code will execute successfully. When you instantiate a new MessagingFactory object, it is
automatically cached in memory. When you create new instances of the QueueClient class
without using a MessagingFactory or a connection string, the constructor uses the cached
MessagingFactory to initialize the QueueClient object. You can also use this constructor if you called the
NamespaceManager.CreateFromConnectionString method, because this method also creates a
MessagingFactory object internally.

Sending Messages
To create a message, you create an instance of BrokeredMessage and send it by using the queue client.

The following code shows how to create and send a brokered message by using a queue client.

Create and Send BrokeredMessage by using a QueueClient


QueueClient client = QueueClient.CreateFromConnectionString(connectionString, QueueName);

var message = new BrokeredMessage(updatedCustomer);


message.Properties["customerID"] = 1;
message.ContentType = "SalaryUpdate";
client.Send(message);

The preceding code creates a BrokeredMessage object with three parts:


Message body. An object that will be serialized. The message body can be set to any object that can
be serialized by the Data Contract serializer, or to a stream object.

Brokered message properties. The customerID is a property that can be used by the consumer of
the message, and the queue infrastructure.

Metadata properties. The ContentType property contains a string intended to help the consumer
decide which processing logic to perform. For example, you can set the ContentType to the type of
the message body to inform the consumer how to deserialize the content.

Sessions
You can use sessions to group messages that belong to a certain logical group and receive them all on a
dedicated receiver. Senders can attach a SessionId to outgoing messages and the queue ensures that all
messages with the same SessionId are delivered to the same receiver. To group several messages in the
same session, set the SessionId property of all the BrokeredMessage objects to a unique string.

One common use case for sessions is to work around the 256 kilobytes (KB) message size limit of
Service Bus Queues. You can split a large message into a collection of small messages and send them all
as a group.
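As a minimal sketch of the description above, the following code sends several related messages in the same session by setting the SessionId property; the session identifier and payloads are illustrative values.

```csharp
QueueClient client = QueueClient.CreateFromConnectionString(connectionString, queueName);

// All messages that share the same SessionId are delivered to the same receiver.
string sessionId = Guid.NewGuid().ToString();
foreach (string chunk in new[] { "part 1", "part 2", "part 3" })
{
    var message = new BrokeredMessage(chunk)
    {
        SessionId = sessionId
    };
    client.Send(message);
}
```

To receive these messages as a group, use a MessageSession receiver on a queue that was created with sessions required, as described later in this topic.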

Receiving Messages
To receive a message, call the Receive method of the QueueClient class. Calling this method will
internally create an instance of a MessageReceiver object for receiving messages.

The following code shows how to receive a message by using a QueueClient object.

Receive a Message by using QueueClient


QueueClient client = QueueClient.CreateFromConnectionString(connectionString, QueueName);
BrokeredMessage message = client.Receive(TimeSpan.FromSeconds(5));
if (message != null)
{
ProcessMessage(message.MessageId, message.GetBody<Customer>());
message.Complete();
}

When you call the Receive method, the message is retrieved from the queue, but does not get removed
from the queue; instead, the message is marked as locked in the queue, to prevent other consumers from
retrieving it. Because the message is only locked and not deleted, if the consumer fails during message
processing, the message is not lost. This retrieval technique is also referred to as Peek-Lock. After you
finish processing the message, you call the Complete method to instruct the queue to release the lock
and delete the message from the queue.

Note: If the consumer fails to call the Complete method during the allotted lock time, the
message will unlock. You can set the lock duration, which is by default 1 minute, when you create
the queue.

Instead of using the Peek-Lock technique, you can use the Receive-and-Delete technique, in which a
received message is deleted immediately from the queue. This technique can improve the performance of
the queue when many messages are locked, but can be problematic if the receiver crashes during
processing of the message. To change the queue from Peek-Lock to Receive-and-Delete, use the
QueueClient.CreateFromConnectionString method overload that accepts the ReceiveMode parameter.
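The Receive-and-Delete technique described above can be sketched as follows, assuming the queue already exists; the queue name is an illustrative value, and ProcessMessage stands for the same placeholder processing method used in the preceding examples.

```csharp
// Create a queue client that deletes messages as soon as they are received,
// instead of using the default Peek-Lock behavior.
QueueClient client = QueueClient.CreateFromConnectionString(
    connectionString, "myqueue", ReceiveMode.ReceiveAndDelete);

BrokeredMessage message = client.Receive(TimeSpan.FromSeconds(5));
if (message != null)
{
    // No call to Complete is needed; the message was already removed
    // from the queue. If processing fails here, the message is lost.
    ProcessMessage(message.MessageId, message.GetBody<Customer>());
}
```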

After you retrieve the message from the queue, you can deserialize the message body to an object by
calling the GetBody<T> generic method. You will need to set the generic type to the actual type of the
object placed in the queue. It is also possible to receive a message by creating a MessageReceiver
explicitly.

The following code shows how to receive a message by using a MessageReceiver object.

Receive Messages by using MessageReceiver


MessagingFactory factory = MessagingFactory.Create(serviceUri, credentials);
MessageReceiver myMessageReceiver = factory.CreateMessageReceiver(queueName);

BrokeredMessage message;
TimeSpan interval = new TimeSpan(hours: 0, minutes: 0, seconds: 5);
while ((message = myMessageReceiver.Receive(interval)) != null)
{
    ProcessMessage(message.MessageId, message.GetBody<Customer>());
    message.Complete();
}

To receive messages grouped in a session, you use a MessageSession object instead of a
MessageReceiver. The MessageSession class is a dedicated receiver for session messages. If you created
your queue with sessions required, you must use the MessageSession receiver.

The following code shows how to receive messages grouped in a session.

Receive Messages in a Session


QueueClient client = QueueClient.CreateFromConnectionString(connectionString, QueueName);
MessageSession sessionReceiver =
    client.AcceptMessageSession(acceptSessionReceiverTimeout);

BrokeredMessage receivedMessage;
while ((receivedMessage = sessionReceiver.Receive(receiveSessionMessageTimeout)) != null)
{
    ProcessMessage(
        receivedMessage.SessionId, receivedMessage.MessageId,
        receivedMessage.GetBody<Customer>());
    receivedMessage.Complete();
}

Demonstration: Sending Messages by Using Windows Azure Service Bus Queues

In this demonstration, you will see how to create a Service Bus Queue, create an application that sends
messages to the queue (a producer), and then run another application that pulls messages from the
queue (a consumer).

Demonstration Steps
1. Open Visual Studio 2012, and then open the QueuesDemo.sln solution from the
D:\Allfiles\Mod07\DemoFiles\QueuesDemo\Begin folder.

2. Add a new Console Application project named ServiceBusMessageSender to the QueuesDemo


solution.

3. Install the Windows Azure Service Bus NuGet package in the ServiceBusMessageSender project.

4. Open the Windows Azure Management Portal (http://manage.windowsazure.com)

5. If you did not perform the demo in the previous lesson, create a new Windows Azure Service Bus
namespace named ServiceBusDemo07YourInitials (Replace YourInitials with your initials). Locate
the connection string of the namespace, and then copy it to clipboard.

6. In the ServiceBusMessageSender project, open the App.config file, and replace the value of the
Microsoft.ServiceBus.ConnectionString app setting with the connection string copied in the
previous step.
7. In the ServiceBusMessageReceiver project, open the App.config file, and replace the value of the
Microsoft.ServiceBus.ConnectionString app setting with the connection string copied in the
previous step.

8. Open Program.cs from the ServiceBusMessageSender project, and use the


CloudConfigurationManager static class to get the connection string from the application settings,
and create a NamespaceManager object by using the connection string.

9. Use the NamespaceManager object to check if a queue named servicebusqueue already exists, and
if not, create it.

10. Create a QueueClient object for the servicebusqueue queue.

11. Add code to continuously get text input from the user and send it to the queue. Create a
BrokeredMessage object to hold the string payload, and send it to the queue by using the
QueueClient.Send method.

12. Run the ServiceBusMessageSender project. Enter some text in the console window to send it to the
queue and then close the console window.

13. Observe the code in the Program.cs file, in the ServiceBusMessageReceiver project. This
application pulls messages from the queue and prints them to the console. Run the
ServiceBusMessageReceiver project and verify the text you entered before it is printed in the
console window. Close the console window after the text appears.

Question: Why do you need to call the Complete method when you finish processing the
message?

Lesson 3
Windows Azure Service Bus Topics
Topics are the infrastructure provided by Windows Azure Service Bus to implement Publish-Subscribe
persistent messaging. Publish-Subscribe persistent messaging is a messaging pattern where a single
producer can send notifications to an unknown number of consumers, possibly even zero, without having
to be aware of the specific consumers.

This lesson describes Windows Azure Service Bus Topics, subscriptions, and filters.

Lesson Objectives
After completing this lesson, you will be able to:

Describe subscription-based messaging.


Create a topic by using various options such as the Management Portal, Visual Studio, or the C# API.

Create a subscription and filters by using various options such as the Management Portal, Visual
Studio, or the C# API.
Create subscription filters.

Send a message to and receive a message from a topic.

Subscription-Based Messaging
With brokered messaging, a single queue
can have multiple receivers, but all receivers are
expected to handle messages in the same way.
The purpose of multiple receivers is to provide
scalability when the queue receives many
messages. However, there are scenarios where
multiple receivers are required not for scalability
reasons, but because of the type of actions that
the receivers need to perform. Each receiver
performs different actions when it receives a
message. For example, you can have one receiver
that only logs the content of each message, and
another receiver that processes the message. If both receivers listen to the same queue, some of the
messages would reach one of the receivers, but not the other. One possible solution for this scenario would
be to create two queues, and send the same message to both queues, but that would couple the sender
of the message to the number of receivers that currently exist in the system.

To handle such scenarios, you can implement subscription-based messaging, in which multiple receivers
share the same queue but receive only the messages targeted for them. Receivers use subscriptions to
inform the queue infrastructure of the messages they expect to receive, and the queue uses subscriptions
to forward incoming messages to the correct listeners. With subscription-based messaging, a single
message sent to a queue can reach multiple receivers, if the message matches their subscription, or none
of the receivers, if the message does not match any subscription.

A common pattern for implementing subscription-based messaging is Publish-Subscribe, in which
publishers send messages to the Publish-Subscribe infrastructure. The infrastructure then acts as an
intermediary and forwards messages to a collection of recipients who have subscribed to a particular
information topic. Publishers and receivers are decoupled, which means that publishers do not know the
identity or the location of receivers.

You can also use subscription-based messaging to implement multicast, in which one message is
forwarded to multiple receivers if there are multiple subscriptions to the same topic. There are also
scenarios where unicast messaging is required, and only one subscriber receives the message.

Windows Azure Service Bus implements subscription-based messaging by using Service Bus Topics. You
can use a Service Bus Topic as a shared channel to communicate with multiple receivers. Unlike Service
Bus Queues, where a single receiver processes a message, topics and subscriptions provide a one-to-many
form of communication in which each message will be forwarded to all subscriptions. You can register a
filter on each subscription to filter messages. This way the receiver receives only the messages it expects
from the subscription.

Just as messages in Service Bus Queues are persisted and first-in, first-out (FIFO) order is maintained,
subscriptions behave as virtual queues that receive copies of all messages that were sent to the topic.

For additional comparison of Service Bus Queues and Topics, see Service Bus Queues, Topics,
and Subscriptions.
http://go.microsoft.com/fwlink/?LinkID=313740

Creating Windows Azure Service Bus Topics


To create Service Bus Topics, you can use the
same methods that are available for creating
queues. You can create new topics in the
Management Portal, by using the Service Bus
REST management API, from Visual Studio 2012,
and by using the Windows Azure Service Bus
libraries. For example, to create a topic from the
Management Portal, use the same instructions
specified in Lesson 2, "Windows Azure Service Bus
Queues", Topic 2, "Creating Windows Azure
Service Bus Queues", but after you click NEW,
click TOPIC instead of QUEUE.
The following screenshot presents the New dialog box for the Service Bus Topic creation:

FIGURE 7.7: CREATE A SERVICE BUS TOPIC BY USING THE WINDOWS AZURE MANAGEMENT PORTAL
To create topics from code, first you need to install the Windows Azure Service Bus NuGet package. You
can then use the NamespaceManager class to check if a topic exists, create new topics, and delete
existing topics.

The following code shows how to create a new topic by using the NamespaceManager class. If the topic
already exists, the code will delete the existing topic before creating a new one.

Create a New Service Bus Topic with the NamespaceManager Class


string connectionString = "Endpoint=sb://[your namespace].servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=[your secret]";
string topicName = "ProductUpdates";

NamespaceManager namespaceManager =
    NamespaceManager.CreateFromConnectionString(connectionString);

// Delete the topic if it already exists
if (namespaceManager.TopicExists(topicName))
{
    namespaceManager.DeleteTopic(topicName);
}

namespaceManager.CreateTopic(topicName);

After you create a topic, you continue by creating subscriptions for your different consumers.

Creating Topic Subscriptions


Subscriptions act as connectors that consumers
use for receiving messages. Subscriptions provide
support for message expiration and message
sessions. As topics automatically implement
duplicate message detection, you do not need to
implement this on subscriptions. A topic can have
multiple subscriptions, but a subscription can only
belong to one topic.
Based on filters, the incoming messages of a topic are replicated and forwarded to the matching
subscriptions according to routing logic. From the perspective of the consumer, a subscription
acts as a virtual queue: messages are persisted and FIFO ordering is maintained. The techniques for
receiving messages from queues can also be used to receive messages from subscriptions.

Create a Subscription with the Management Portal


The simplest method for creating a subscription is by using the Windows Azure Management Portal.

Log on to the Windows Azure Management Portal, click SERVICE BUS in the navigation pane, and then
click the namespace in which the topic is defined. Click the TOPICS tab, click the topic name for which
you want to create a subscription, and then click CREATE SUBSCRIPTION. Specify the subscription name,
set its properties, and finally create the subscription by clicking the check mark.

The following illustration presents the dialog box for creating a new subscription in the Windows Azure
Management Portal.

FIGURE 7.8: CREATE A NEW SUBSCRIPTION IN THE MANAGEMENT PORTAL


When you create a subscription, you can set queue-related configuration, such as whether to enable
sessions, and whether expired messages are sent to a dead-letter queue. These settings resemble the ones
that you have when creating queues, because subscriptions are virtual queues.
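These queue-like settings can also be configured from code when you create the subscription. The following sketch uses the SubscriptionDescription class from the Windows Azure Service Bus library; the topic and subscription names, and the specific property values, are illustrative assumptions rather than part of this lesson's examples:

```csharp
using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

// Assumption: connectionString holds a valid Service Bus connection string
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

var description = new SubscriptionDescription("ProductUpdates", "Accounting")
{
    // Queue-like settings, available because subscriptions are virtual queues
    RequiresSession = true,
    EnableDeadLetteringOnMessageExpiration = true,
    DefaultMessageTimeToLive = TimeSpan.FromHours(1)
};

namespaceManager.CreateSubscription(description);
```

Passing a SubscriptionDescription instead of just the topic and subscription names gives you the same configuration options that the Management Portal exposes.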

Create a Subscription with Visual Studio 2012


You can use Visual Studio to create a new subscription from the Server Explorer window. In Server
Explorer, expand Windows Azure Service Bus, expand the namespace where you created the topic, expand
Topics, expand the topic you want to add the subscription to, right-click the Subscriptions node, and
then click Create New Subscription.

Note: To view the list of topics, first you need to import your Service Bus namespace
configuration to Visual Studio 2012. The instructions for adding the namespace to the Server
Explorer window are detailed in Lesson 2, "Windows Azure Service Bus Queues", Topic 2,
"Creating Windows Azure Service Bus Queues".

The following illustration shows the dialog box for creating a new subscription in Visual Studio 2012.

FIGURE 7.9: CREATE A NEW SUBSCRIPTION IN VISUAL STUDIO 2012

Create a Subscription with the Windows Azure Service Bus .NET Libraries
It is possible to create a new subscription with the Windows Azure Service Bus libraries by using the
NamespaceManager class.
The following code shows how to create a subscription by using the NamespaceManager class.

Create a New Subscription with the NamespaceManager Class


var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
// Create the topic
namespaceManager.CreateTopic("ProductUpdates");
// Create three subscriptions in the topic
namespaceManager.CreateSubscription("ProductUpdates", "Inventory");
namespaceManager.CreateSubscription("ProductUpdates", "Accounting");
namespaceManager.CreateSubscription("ProductUpdates", "Sales");

After you call the CreateSubscription method, the subscription is created without any filters, which
means that it will receive every message that is sent to the topic. In the preceding example, every message
will be forwarded to all three subscriptions. If you want a subscription to receive only some of the
messages, you need to create a filter for it. For example, you would create a filter if you want the
Accounting subscription to receive only messages for products that cost over $5,000.

Creating Filters
A subscription without a filter forwards all incoming messages in a topic to the subscription's
consumer. However, most consumers will probably want to receive only the messages that are
relevant to them. The Publish/Subscribe pattern clearly defines that subscribers can choose
to subscribe to a specific subset of messages that they wish to receive. You receive only a
subset of messages by creating filters for your subscriptions. With filters, only the required
messages will be forwarded to each subscription.

When you create a subscription, you can add one of the following filter types to implement the required filtering:

SqlFilter. Forwards messages based on an SQL-like expression that is evaluated by using values in the
message property dictionary.

CorrelationFilter. Forwards messages based on the value of the CorrelationId property of the
brokered message. You can use correlation filters to match a set of messages that relate to each
other.

TrueFilter. Messages are always forwarded.

FalseFilter. Messages are never forwarded.


To create a filter for your subscription, you call the CreateSubscription method and pass a third
parameter with the filter you want to create.
The following code shows how to create subscription filters.

Create Subscription Filters


// Create a namespace manager by using the connection string
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

// Create a subscription that receives all messages sent to the topic
namespaceManager.CreateSubscription(
    "ProductUpdates", "Auditing", new TrueFilter());

// Create a subscription that receives only messages over a specific price
namespaceManager.CreateSubscription(
    "ProductUpdates", "Accounting", new SqlFilter("ProductPrice > 5000"));

The TrueFilter class will make the subscription receive any message sent to the topic. If you call the
CreateSubscription method without passing a filter, as shown in previous examples, the subscription is
created with a TrueFilter automatically.

The SqlFilter constructor takes a string parameter with an SQL-like expression. The properties you use for
comparison, such as the ProductPrice property in the preceding example, are expected to exist in the
message properties dictionary, which you set when sending a message.

For a complete syntax of filter expressions, see SQL Expressions.


http://go.microsoft.com/fwlink/?LinkID=313741

The third filter type, the correlation filter, does not use expressions, but rather checks the value of the
message's CorrelationId string property. You can use correlation filters to match and correlate between
sets of messages. Consider the following scenario:

1. A client sends a CreateOrder message to the topic.


2. A service with a subscription for orders over $5000 picks up the message and processes it.

3. The service completes the order processing and wants to inform the client that the order was
approved.

The service cannot return a message directly to the client, because topics, like queues, provide one-way
messaging, not request-response messaging. However, if the client also listens to the topic through a
subscription of its own, the service can send a message to the client through the topic.

If the client wants to receive a message from the topic, it too needs to create a subscription, preferably
before the service sends the response (to prevent the response being sent before the subscription is
capable of receiving it). However, the client does not know the ID of the order before sending the
CreateOrder message, because the order was not created yet.

The solution for this scenario is:


1. The client generates a unique ID for the order, for example, a GUID.
2. The client creates a subscription with a correlation filter set to the unique ID.

3. The client creates a new CreateOrder message, sets the CorrelationId property of the message to the
unique ID, and then sends the message.

4. The service receives the message, processes it, and then sends an approval message to the topic with
the same CorrelationId property value.
5. The subscription on the client side receives the message, and the client is informed that the order was
approved.

The following code shows how to create a subscription with a correlation filter.

Creating a Correlation Filter for a Subscription


// Create a subscription that expects a specific message.
// This subscription is created by a producer (a client).
namespaceManager.CreateSubscription(
    "Orders", "Confirmed" + orderGUID, new CorrelationFilter(orderGUID));

If the subscription was created only for handling the response message, you can delete the subscription
after you receive the response. If you create a subscription with a SqlFilter whose SQL expression consists
solely of comparing a property by using the equality operator, consider using a correlation filter instead.
Correlation filters do not use string expressions and therefore provide better performance than SQL
filters.

Sending and Receiving Messages


Sending messages to a topic is very similar to
sending messages to a queue. Similarly, receiving
messages from a subscription is very similar to
receiving messages from a queue. You use the
MessageReceiver class to receive messages from
either a queue or subscription and use the
MessageSender class to send messages to either
a queue or topic. To send messages to a topic, use
the TopicClient class.

The following code shows how to send a message to a topic by using the TopicClient class.

Sending a Message to a Topic


TopicClient client = TopicClient.CreateFromConnectionString(connectionString, topicName);

var message = new BrokeredMessage(updatedProduct);
message.Properties["productId"] = updatedProduct.Id;
message.Properties["productPrice"] = updatedProduct.Price;

client.Send(message);

If you want to send messages with a correlation ID, set the BrokeredMessage.CorrelationId property to
the correlation ID.
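For example, a client that uses the correlation pattern described earlier might send its order message as in the following sketch. The "Orders" topic name, the Order object, and the orderGuid variable are assumptions for illustration:

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

// Assumption: connectionString is valid and an Order object named order exists
var client = TopicClient.CreateFromConnectionString(connectionString, "Orders");

string orderGuid = Guid.NewGuid().ToString();
var message = new BrokeredMessage(order);

// The correlation filter on the client's subscription will match this value
message.CorrelationId = orderGuid;

client.Send(message);
```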
To receive messages from a subscription, use the SubscriptionClient class.

The following code shows how to receive a message from a subscription by using the SubscriptionClient class.

Receive a Message by Using the SubscriptionClient Class


var subscriptionClient = SubscriptionClient.CreateFromConnectionString(
    connectionString, topicName, subscriptionName);

while (true)
{
    BrokeredMessage message = subscriptionClient.Receive();
    if (message != null)
    {
        try
        {
            ProcessMessage(message.GetBody<Product>());
            message.Complete();
        }
        catch
        {
            message.Abandon();
        }
    }
}

After you process the message from the subscription, you call the Complete method to have it removed
from the subscription. This is similar to how you would receive messages from queues. If there is a
problem during processing, you can release the lock placed on the message by calling the Abandon
method of the BrokeredMessage class. Calling the Abandon method is preferable to letting the lock
time out, because by releasing the lock, other consumers can immediately receive the message and
process it again, instead of waiting for the lock to expire.

Question: When should you call the BrokeredMessage.Abandon method?



Demonstration: Subscription-Based Messaging with Windows Azure Service Bus Topics
In this demonstration, you will see how to create a topic and subscriptions, run an application that sends
messages to the topic, and then run another application that receives messages on the various
subscriptions you created.

Demonstration Steps
1. Open Visual Studio 2012, and then open the TopicsDemo.sln solution from the
D:\Allfiles\Mod07\DemoFiles\TopicsDemo folder.

2. Examine the content of the ServiceBusTopicPublisher, ExpensivePurchasesSubscriber,
CheapPurchasesSubscriber, and AuditSubscriber projects. The ServiceBusTopicPublisher project
creates the topic, defines the available subscriptions, and sends four sales messages. The three remaining
projects act as subscribers and output to the console the messages sent to their subscriptions.

ExpensivePurchasesSubscriber. Receives sales that are over $4000.

CheapPurchasesSubscriber. Receives sales that are under $4000.


AuditSubscriber. Receives all sales.

3. Explore the ServiceBusTopicPublisher project. The code in the Main method creates a topic by
using the NamespaceManager.CreateTopic method, then creates three subscriptions by using the
NamespaceManager.CreateSubscription method and the SqlFilter class, and then sends four
messages to the topic by using the BrokeredMessage and TopicClient classes.

4. Explore the ExpensivePurchasesSubscriber project. The code uses the SubscriptionClient class to
connect to the productsalestopic Service Bus topic with the ExpensivePurchases subscription. The
CheapPurchasesSubscriber and AuditSubscriber projects are used similarly with the two other
subscriptions.
5. Open the Windows Azure Management Portal (http://manage.windowsazure.com).

6. If you did not perform the demo in the previous lesson, create a new Windows Azure Service Bus
namespace named ServiceBusDemo07YourInitials (Replace YourInitials with your initials). Locate
the connection string of the namespace and copy it to the clipboard.

7. With the connection string you copied from the Management Portal, replace the Service Bus
connection string defined in App.Config files in all projects in the solution.

8. Run the ServiceBusTopicPublisher project and wait until the application sends the four messages to
the topic.

9. Open the properties of the solution, and change the startup mode to multiple projects. Set the three
subscriber projects to start.

10. Run the projects and verify that the subscribers received the appropriate messages according to their
subscription filter (under $4000, over $4000, and all messages).

Question: Why is it better to use SQL filters instead of creating a single subscription and
checking the value of the message's ContentType property?

Lab: Windows Azure Service Bus


Scenario
The IT team at Blue Yonder Airlines complained that since the company began to open its booking
services to other companies, they had to make many changes to the network's firewall in order to open
inbound ports. This, of course, limits their ability to secure the company's internal network properly. To
resolve this issue, the company has decided to change the way external applications, including the Travel
Companion back-end services, connect to the WCF booking service. Now all the communication with the
on-premises service will be directed through a Windows Azure Service Bus Relay. In this lab, you will
create a Service Bus Relay in Windows Azure, configure the WCF booking service to register with the
Service Bus, and update the Travel Companion back-end service to send messages to the WCF service
through the relay.

In addition, Blue Yonder Airlines wishes to improve the service offered to Travel Companion users by
sending users updated information about changes made to their booked flights directly to their client
app. To provide immediate feedback to the end user who updates the flight schedules, it was decided that
the notifications will not be sent during the update process but rather be sent by a background process.
To interact with the background process, the ASP.NET Web API service will use Service Bus Queues, and
the background process itself will run in a Windows Azure Worker Role. For the notifications, the worker
role will use Windows Push Notification Services (WNS).
In this lab, you will update the ASP.NET Web API services to use Windows Azure Service Bus Queues, and
create a new Windows Azure Worker Role to perform background processing.

Objectives
After completing this lab, you will be able to:

Use Windows Azure Service Bus Relays.


Use Windows Azure Service Bus Queues.

Lab Setup
Estimated Time: 60 Minutes.
Virtual Machine: 20487B-SEA-DEV-A, 20487B-SEA-DEV-C

User name: Administrator, Admin

Password: Pa$$w0rd, Pa$$w0rd

For this lab, you will use the available virtual machine environment. Before you begin this lab, you must
complete the following steps:

1. On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.

2. In Hyper-V Manager, click MSL-TMG1, and in the Action pane, click Start.
3. In Hyper-V Manager, click 20487B-SEA-DEV-A, and in the Action pane, click Start.

4. In the Action pane, click Connect. Wait until the virtual machine starts.

5. Sign in using the following credentials:


User name: Administrator

Password: Pa$$w0rd

6. Return to Hyper-V Manager, click 20487B-SEA-DEV-C, and in the Action pane, click Start.
7. In the Action pane, click Connect. Wait until the virtual machine starts.

8. Sign in using the following credentials:



User name: Admin

Password: Pa$$w0rd

9. Verify that you received credentials to log in to the Azure portal from your training provider. These
credentials and the Azure account will be used throughout the labs of this course.

In this lab, you will install NuGet packages. It is possible that some NuGet packages will have newer
versions than those used when developing this course. If your code does not compile, and you identify
the cause to be a breaking change in a NuGet package, you should uninstall the NuGet package and
instead, install the old version by using Visual Studio's Package Manager Console window:

1. In Visual Studio 2012, on the Tools menu, point to Library Package Manager, and then click
Package Manager Console.

2. In Package Manager Console, enter the following command and then press Enter.

install-package PackageName -version PackageVersion -ProjectName ProjectName

(The project name is the name of the Visual Studio project that is written in the step where you were
instructed to add the NuGet package).

3. Wait until Package Manager Console finishes downloading and adding the package.
The following table details the compatible versions of the packages used in the lab:

Package name Package version

WindowsAzure.ServiceBus 2.1.0.0

Exercise 1: Using a Service Bus Relay for the WCF Booking Service
Scenario
In this exercise, you will create a Service Bus namespace, configure the on-premises WCF Booking service
to use a Service Bus Relay, and then configure the ASP.NET Web API services running in Windows Azure
to communicate with the on-premises services by using the newly created relay.

The main tasks for this exercise are as follows:

1. Create the Service Bus namespace by using the Windows Azure Management Portal
2. Add a new WCF Endpoint with a relay binding

3. Configure the ASP.NET Web API back-end service to use the new relay endpoint
4. Test the WCF service

Task 1: Create the Service Bus namespace by using the Windows Azure Management
Portal
1. In the 20487B-SEA-DEV-A virtual machine, run the Setup.cmd file from
D:\AllFiles\Mod07\LabFiles\Setup, and write down the name of the cloud service created by the
script.

Note: You may see a warning saying the client model does not match the server model.
This warning may appear if there is a newer version of the Windows Azure PowerShell cmdlets. If
this message is followed by an error message, please inform the instructor, otherwise you can
ignore the warning.

2. Open the Windows Azure Management Portal (http://manage.windowsazure.com)



3. Create a new Windows Azure Service Bus namespace named BlueYonderServerLab07YourInitials
(Replace YourInitials with your initials).

Select a region closest to your location.


Use the CONNECTION INFORMATION button to open the ACCESS CONNECTION INFORMATION
dialog box.

Copy the value from the DEFAULT KEY box to the clipboard.

Task 2: Add a new WCF Endpoint with a relay binding


1. Open the BlueYonder.Server solution file from the
D:\AllFiles\Mod07\LabFiles\begin\BlueYonder.Server folder.

2. Install the Windows Azure Service Bus NuGet package in the BlueYonder.Server.Booking.WebHost project.
3. In the BlueYonder.Server.Booking.WebHost project, change the endpoint configuration of the
booking service:
Open the Web.config file

Locate the endpoint of the service named BookingTcp, and change its binding attribute to
netTcpRelayBinding.
Add an address attribute with the following value:
sb://BlueYonderServerLab07YourInitials.servicebus.windows.net/booking (Replace YourInitials
with your initials).
4. Add a new endpoint behavior named sbTokenProvider to the endpoint behaviors configuration.

Add a new <endpointBehaviors> element to the <behaviors> element that is under the
<system.serviceModel> section.
In the new <endpointBehaviors> element, add a new <behavior> element, and set its name
attribute to sbTokenProvider.

In the new <behavior> element, add a <transportClientEndpointBehavior> behavior element to


the configuration.
In the behavior element add a <tokenProvider> element, and in it, add a <sharedSecret> element
with the issuerName attribute set to owner and the issuerSecret attribute set to the access key of
the new Service Bus you created.

Note: Visual Studio Intellisense uses built-in schemas to perform validations. Therefore, it
will not recognize the transportClientEndpointBehavior behavior extension, and will display a
warning. Disregard this warning.

5. Locate the endpoint of the service, and set it to use the new endpoint behavior.

Use the behaviorConfiguration attribute to connect the endpoint to the endpoint behavior named
sbTokenProvider.
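Putting steps 3 through 5 together, the relevant parts of the Web.config file might look like the following sketch. The service and contract names are placeholders; keep the names that already exist in your configuration, and replace YourInitials and the access key with your own values:

```xml
<system.serviceModel>
  <services>
    <service name="[existing service name]">
      <endpoint name="BookingTcp"
                binding="netTcpRelayBinding"
                address="sb://BlueYonderServerLab07YourInitials.servicebus.windows.net/booking"
                behaviorConfiguration="sbTokenProvider"
                contract="[existing contract name]" />
    </service>
  </services>
  <behaviors>
    <endpointBehaviors>
      <behavior name="sbTokenProvider">
        <transportClientEndpointBehavior>
          <tokenProvider>
            <sharedSecret issuerName="owner" issuerSecret="[your access key]" />
          </tokenProvider>
        </transportClientEndpointBehavior>
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>
```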

6. Add an <applicationInitialization> element to the <system.webServer> section group, and then set
the initialization page to /Booking.svc.

Note: Application initialization automatically sends requests to specified addresses after the
Web application loads. Sending the request to the service will make the service host load and
initiate the Service Bus connection.
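The application initialization configuration from step 6 might look like this sketch (the /Booking.svc path is taken from the lab instructions):

```xml
<system.webServer>
  <applicationInitialization>
    <add initializationPage="/Booking.svc" />
  </applicationInitialization>
</system.webServer>
```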

7. Open IIS Manager, and then set the start mode of the DefaultAppPool:

Open the Connections pane and click the Application Pools node.

Open the Advanced Settings of the DefaultAppPool.

Set the Start Mode to AlwaysRunning.

Note: Setting the start mode to AlwaysRunning will load the application pool
automatically after IIS loads. To use application initialization the application pool must be
running.

8. Enable the preload feature on the BlueYonder.Server.Booking.WebHost application:

Open the Advanced Settings of the BlueYonder.Server.Booking.WebHost node.

Set the Preload Enabled option to True.

Note: When preload is enabled, IIS will simulate requests after the application pool starts.
The list of requests is specified in the application initialization configuration that you already
created.

9. Return to Visual Studio 2012 and build the BlueYonder.Server.Booking.WebHost project.

10. Return to IIS and recycle the default application pool.

Click the Application Pools node in the Connections pane.


Select the DefaultAppPool and click Recycle.

Task 3: Configure the ASP.NET Web API back-end service to use the new relay
endpoint
1. Open BlueYonder.Companion.sln from D:\AllFiles\Mod07\LabFiles\begin\BlueYonder.Server in a new
Visual Studio 2012 instance.
2. Install the Windows Azure Service Bus NuGet package in the BlueYonder.Companion.Host project.

3. Configure the BlueYonder.Companion.Host project to consume the new relay endpoint:


In the Web.config file, locate the <client> section within the <system.serviceModel> section
group.

Change the client endpoint configuration to use the netTcpRelayBinding

Set the value of the address attribute to
sb://BlueYonderServerLab07YourInitials.servicebus.windows.net/booking (Replace YourInitials
with your initials).

4. Add a new endpoint behavior to the endpoint behaviors configuration.


Add a new <behaviors> element to the <system.serviceModel> section.

Add a new <endpointBehaviors> element to the <behaviors> element.

In the new <endpointBehaviors> element, add a new <behavior> element, and in it, add a
<transportClientEndpointBehavior> behavior element.

In the behavior element add a <tokenProvider> element, and in it, add a <sharedSecret> element
with the issuerName attribute set to owner and the issuerSecret attribute set to the access key of
the new Service Bus you created.

Note: Visual Studio Intellisense uses built-in schemas to perform validations. Therefore, it
will not recognize transportClientEndpointBehavior behavior extension, and will display a
warning. Disregard this warning.
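The resulting client-side configuration might look like the following sketch. The endpoint name and contract are placeholders for the values that already exist in the Web.config file:

```xml
<system.serviceModel>
  <client>
    <endpoint name="[existing endpoint name]"
              binding="netTcpRelayBinding"
              address="sb://BlueYonderServerLab07YourInitials.servicebus.windows.net/booking"
              behaviorConfiguration="sbTokenProvider"
              contract="[existing contract name]" />
  </client>
  <behaviors>
    <endpointBehaviors>
      <behavior name="sbTokenProvider">
        <transportClientEndpointBehavior>
          <tokenProvider>
            <sharedSecret issuerName="owner" issuerSecret="[your access key]" />
          </tokenProvider>
        </transportClientEndpointBehavior>
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>
```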

Task 4: Test the WCF service


1. Open the Windows Azure Management Portal (http://manage.windowsazure.com)

2. Locate the BlueYonderServerLab07YourInitials (Replace YourInitials with your initials) Service Bus
namespace, and verify that it contains the booking relay.
3. In the BlueYonder.Companion solution, bring back the call to the WCF service from the reservation
controller.

In the BlueYonder.Companion.Controllers project, open ReservationController.cs and locate the
following comment:

// TODO: Lab 07, Exercise 1: Task 4.3: Bring back the call to the backend WCF service.
You can use the Task List window to locate the TODO comment.
Uncomment the call to the CreateReservationOnBackendSystem method.

4. Publish the BlueYonder.Companion.Host.Azure project. If you did not import your Windows Azure
subscription information yet, download your Windows Azure credentials, and import the downloaded
publish settings file in the Publish Windows Azure Application dialog box.
5. Select the cloud service that matches the cloud service name you wrote down at the beginning of the
lab, after running the setup script.

6. To finish the deployment process, click Publish.


7. In the BlueYonder.Server solution, open the BookingService.cs file from the
BlueYonder.BookingService.Implementation project. Place a breakpoint in the beginning of the
CreateReservation method, and start debugging the web application.

8. Log on to the virtual machine 20487B-SEA-DEV-C as Admin with the password Pa$$w0rd.

9. Open the
D:\AllFiles\Mod07\LabFiles\begin\BlueYonder.Companion.Client\BlueYonder.Companion.Client.sln
solution file.

10. In the BlueYonder.Companion.Shared project, open the Addresses class, and set the BaseUri
property to the Windows Azure Cloud Service name you wrote down at the beginning of this lab.
11. Start the client application without debugging, and purchase a new trip to New York.

Use the Search button and start typing New.

Wait for the app to show the list of flights from Seattle to New York.
Fill in the reservation form and click Purchase.

12. Go back to the 20487B-SEA-DEV-A virtual machine to debug the WCF Web application.
Verify that you break on the WCF service code.

Continue running and verify the client is showing the new reservation.

Stop the WCF application debugging.

Results: After you complete this exercise, you can run the client app and book a flight, and have the
ASP.NET Web API services running in the Windows Azure Web Role communicate with the on-premises
WCF services by using Windows Azure Service Bus Relays.

Exercise 2: Publishing Flight Updates to Clients by Using Windows Azure Service Bus Queues
Scenario
The Flight Manager Web application supports updating flight schedules with new departure times. In this
exercise, you will add the push notifications feature to send flight updates directly to the clients who
booked the flight being updated. Sending push notifications to multiple clients can take some time, so
the notification part will be decoupled from the ASP.NET Web API service by using Service Bus Queues. In
this exercise, you will create a Service Bus Queue and send two types of messages to it from the ASP.NET
Web API service: client notification subscription, and flight update messages. In addition, you will create a
background process running in a Windows Azure Worker Role, which will receive messages from the
queue and act according to each message type by either registering the client with the push notification
server, or by sending flight update push notifications to registered clients.

The main tasks for this exercise are as follows:


1. Send flight update messages to the Service Bus Queue

2. Create a Windows Azure Worker role that receives messages from a Service Bus Queue

3. Handle the subscription and update messages


4. Test the Service Bus Queue with flight update messages

Task 1: Send flight update messages to the Service Bus Queue


1. Open the Windows Azure Management Portal (http://manage.windowsazure.com).

Open the Service Bus namespace you created in the previous exercise and click CONNECTION INFORMATION
to open the ACCESS CONNECTION INFORMATION dialog box.

Copy the value of the CONNECTION STRING field.

2. Return to the BlueYonder.Companion solution in Visual Studio 2012, and then add a string setting
to the web role to store the Service Bus connection string.
Name the new setting Microsoft.ServiceBus.ConnectionString, and then set its value to the
connection string you found in the previous step.
3. Open the ServiceBusQueueHelper class located in the BlueYonder.Companion.Controllers project.

Note: The BlueYonder.Companion.Controllers project already contains the Windows Azure
Service Bus and Windows Azure Configuration Manager NuGet packages.
When you install the Windows Azure Service Bus NuGet package, the Windows Azure
Configuration Manager package is installed automatically.

4. Implement the ConnectToQueue method:

Create a Service Bus namespace manager object by using the connection string of the Service Bus.

Use the CloudConfigurationManager.GetSetting method to retrieve the connection string.

To create the namespace manager object, use the CreateFromConnectionString method of the
NamespaceManager class.

Check if the Queue exists and create it by using the CreateQueue API if necessary.

Return a new QueueClient object for the queue by using the CreateFromConnectionString method
of the QueueClient class.

Note: The Queue name is stored in a static variable called QueueName, and has the value
of FlightUpdatesQueue
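The steps above can be sketched in code as follows. This is a minimal illustration rather than the lab's exact implementation; it assumes the QueueName field and setting name described in the surrounding steps, and that the Windows Azure Service Bus and Configuration Manager assemblies are referenced:

```csharp
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;
using Microsoft.WindowsAzure;

public static class ServiceBusQueueHelper
{
    // The queue name used throughout this exercise
    private static readonly string QueueName = "FlightUpdatesQueue";

    public static QueueClient ConnectToQueue()
    {
        // Read the Service Bus connection string from the role configuration
        string connectionString =
            CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");

        // Create a namespace manager and make sure the queue exists
        NamespaceManager namespaceManager =
            NamespaceManager.CreateFromConnectionString(connectionString);
        if (!namespaceManager.QueueExists(QueueName))
        {
            namespaceManager.CreateQueue(QueueName);
        }

        // Return a client that can send and receive messages on the queue
        return QueueClient.CreateFromConnectionString(connectionString, QueueName);
    }
}
```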

5. In the FlightsController class, add a static field for the QueueClient object.

Call the method you previously created in a static constructor to set the object.

6. In the Put method, after saving the changes made to the flight schedule, set the FlightId property of
the updatedSchedule variable to the id parameter containing the updated flight id.

Create a new BrokeredMessage object with the updated schedule as the message body, set the
ContentType property of the message to UpdatedSchedule, and send the message to the queue.
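Steps 5 and 6 can be outlined as follows. This is a hedged sketch, not the lab's exact code; the Put method signature and the persistence call are placeholders for what already exists in the FlightsController class:

```csharp
using System.Web.Http;
using Microsoft.ServiceBus.Messaging;

public class FlightsController : ApiController
{
    // Step 5: a single QueueClient shared by all requests,
    // initialized once in a static constructor
    private static readonly QueueClient Client;

    static FlightsController()
    {
        Client = ServiceBusQueueHelper.ConnectToQueue();
    }

    public void Put(int id, FlightSchedule updatedSchedule)
    {
        // ... existing code that saves the changes to the flight schedule ...

        // Step 6: stamp the schedule with the flight id and queue the update
        updatedSchedule.FlightId = id;
        var message = new BrokeredMessage(updatedSchedule)
        {
            ContentType = "UpdatedSchedule"
        };
        Client.Send(message);
    }
}
```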

7. Review the Register method of the NotificationsController class. The same pattern of creating a
QueueClient object in the static constructor and then sending the update messages by using
BrokeredMessage objects is applied to this controller.

Note: The Register method subscribes clients to flight update notifications. When a flight
update message is sent to the queue, every subscribed client waiting for that flight will be
notified by using the Windows Push Notification Services (WNS).

Task 2: Create a Windows Azure Worker role that receives messages from a Service
Bus Queue
1. Create a new Worker Role named BlueYonder.Companion.WNS.WorkerRole in the
BlueYonder.Companion.Host.Azure project.
Use the Worker Role with Service Bus Queue template when creating the role.

2. Copy the Microsoft.ServiceBus.ConnectionString setting from the BlueYonder.Companion.Host web
role to the BlueYonder.Companion.WNS.WorkerRole worker role settings.

Note: The BlueYonder.Companion.WNS.WorkerRole role already contains the
Microsoft.ServiceBus.ConnectionString setting. You only need to copy the connection string
value from the BlueYonder.Companion.Host web role.

Task 3: Handle the subscription and update messages


1. Add the BlueYonder.Companion.WNS project from the
D:\Allfiles\Mod07\LabFiles\begin\BlueYonder.Server\BlueYonder.Companion.WNS folder to
the solution.

Note: The BlueYonder.Companion.WNS project includes code that handles WNS
subscriptions and notifications. WNS is out of the scope of this course; however, you can open the
project's code and observe how WNS is used.

2. Copy the database connection string from the BlueYonder.Companion.Host project to the
BlueYonder.Companion.WNS.WorkerRole project:

Open the Web.config file of the BlueYonder.Companion.Host project and copy the entire
<connectionStrings> element.

Place the copied text in the App.config file of the BlueYonder.Companion.WNS.WorkerRole
project, under the <configuration> section.

3. In the BlueYonder.Companion.WNS.WorkerRole project, add the following application settings
elements to the App.config file:

<add key="ClientSecret" value="1r7Bt7zllZLfDM4W4Q7BxAZEze2qnvuN" />
<add key="PackageSID" value="ms-app://s-1-15-2-1252400722-2342768715-2725817281-1266214681-2802664595-2493784738-901281077" />

You can find the above configuration in the WnsConfiguration.xml file, under the lab's Assets
folder.

Note: The ClientSecret and PackageSID settings were retrieved by the Windows 8 client
team during the upload process of the client app to the Windows Store.

4. In the BlueYonder.Companion.WNS.WorkerRole project, add references to the following projects:

BlueYonder.Companion.WNS
BlueYonder.Companion.Entities
BlueYonder.DataAccess.Interfaces
BlueYonder.DataAccess
BlueYonder.Entities

5. Add the MessageHandler.cs file from the lab's Assets folder to the
BlueYonder.Companion.WNS.WorkerRole project.

Note: The MessageHandler class contains the code to subscribe clients to WNS and send
notifications to clients when their flights are rescheduled.

6. In the BlueYonder.Companion.WNS.WorkerRole project, locate the OnStart method of the
WorkerRole class, and add a call at the beginning of the method to the
WNSManager.Authenticate method.

7. In the WorkerRole class, change the value of the QueueName constant from ProcessingQueue to
FlightUpdatesQueue.

8. In the Run method, add code after the // Process the message comment to handle received
messages according to the value of the received message's ContentType property:

Subscription. Use the receivedMessage.GetBody<T> generic method to retrieve the content of the
message as a RegisterNotificationsRequest object, and then call the MessageHandler.CreateSubscription method.

UpdatedSchedule. Use the receivedMessage.GetBody<T> generic method to retrieve the content
of the message as a FlightSchedule object, and then call the MessageHandler.Publish method.
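The dispatch logic described in step 8 can be sketched like this — an illustrative fragment that assumes the receive loop generated by the Worker Role with Service Bus Queue template, and the MessageHandler method signatures implied by the step:

```csharp
// Inside the Run method's receive callback, after the
// "// Process the message" comment:
switch (receivedMessage.ContentType)
{
    case "Subscription":
        // A client is registering for flight update notifications
        var request = receivedMessage.GetBody<RegisterNotificationsRequest>();
        MessageHandler.CreateSubscription(request);
        break;

    case "UpdatedSchedule":
        // A flight was rescheduled; notify the subscribed clients through WNS
        var schedule = receivedMessage.GetBody<FlightSchedule>();
        MessageHandler.Publish(schedule);
        break;
}
```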
9. Publish the BlueYonder.Companion.Host.Azure project to Windows Azure.

Task 4: Test the Service Bus Queue with flight update messages
1. Arrange the two virtual machine windows so that you can work in the 20487B-SEA-DEV-A virtual
machine and still see the right side of the 20487B-SEA-DEV-C virtual machine.

2. Run the client app in the 20487B-SEA-DEV-C virtual machine. The trip you purchased in the previous
exercise appears in the Current Trip list. Write down the date of the trip.

3. Leave the client app running and return to the 20487B-SEA-DEV-A virtual machine. Open the
BlueYonder.Companion.FlightsManager solution file from the
D:\AllFiles\Mod07\LabFiles\begin\BlueYonder.Server folder in a new Visual Studio 2012 instance.

4. Open the web.config file from the BlueYonder.FlightsManager project, and in the <appSettings>
section, locate the webapi:BlueYonderCompanionService key and replace the {CloudService}
placeholder with the Windows Azure Cloud Service name you wrote down at the beginning of the lab.
5. Run the BlueYonder.FlightsManager web application, find the flights from Seattle to New York, and
change the departure time of your purchased trip to 9:00 AM.

Verify you see a toast notification in the client app in the 20487B-SEA-DEV-C virtual machine (it
might take the notification a few seconds to appear).

Results: After you complete this exercise, you will be able to run the Flight Manager web application,
update the departure time of a flight that you previously booked in the client app, and receive
Windows push notifications directly on your computer.

Question: In the lab, you used a worker role to send updates to customers. This worker role
runs as a separate component that is detached from the main application. What are some
benefits of using this architecture?

Module Review and Takeaways


In this module, you learned about the Windows Azure Service Bus and its various capabilities. You saw
how to use relays to enable communication between on-premises apps and cloud-based applications.
You also learned about brokered messaging by using queues. Finally, you saw how the use of topics,
subscriptions, and filters enables you to build publish-subscribe architectures.

Review Question(s)

Question: Describe the criteria for choosing whether to use asynchronous, queue-based
communication or synchronous, request-response communication.

Module 8
Deploying Services

Contents:
Module Overview 8-1
Lesson 1: Web Deployment with Visual Studio 2012 8-3
Lesson 2: Creating and Deploying Web Application Packages 8-7
Lesson 3: Command-Line Tools for Web Deploy 8-11
Lesson 4: Deploying Web and Service Applications to Windows Azure 8-16
Lesson 5: Continuous Delivery with TFS and Git 8-21
Lesson 6: Best Practices for Production Deployment 8-25
Lab: Deploying Services 8-33
Module Review and Takeaways 8-38

Module Overview
The deployment of a web application to a remote server can sometimes be complex and involve several
steps, such as copying files, changing permissions, and configuring databases. Instead of performing these
steps manually, you can use tools that automate the process and execute the deployment more
safely, so that the availability of your application is not affected. For example, some deployment tools can
automatically back up the web application before deploying the new version, and restore the backup if
any errors occur during the deployment.
In this module, you will learn how to use tools such as Web Deploy to improve the deployment process,
how to perform continuous delivery to automate the build and deployment process, and some best
practices for deployment in a production environment.

Note: The Management Portal UI and Windows Azure dialog boxes in Visual Studio 2012
are updated frequently when new Windows Azure components and SDKs for .NET are released.
Therefore, it is possible that some differences will exist between screen shots and steps shown in
this module, and the actual UI you encounter in the Management Portal and Visual Studio 2012.

Objectives
After you complete this module, you will be able to:
Deploy web applications with Visual Studio.

Create and deploy web application packages by using IIS Manager.

Deploy web applications by using the command line.

Deploy web applications to Windows Azure environments.

Use continuous delivery with TFS and Git.

Apply best practices for deploying web applications on-premises and to Windows Azure.

Lesson 1
Web Deployment with Visual Studio 2012
One of the quickest ways to deploy a web application to a remote server is to deploy it with the Web
Deployment Framework, or Web Deploy. With Web Deploy, you can perform several tasks at one time,
such as copying files to remote servers, configuring IIS application pools, and applying permissions to the
file system. There are many ways to use Web Deploy, but one of the easier ways is by using the publishing
feature of Visual Studio 2012.

In this lesson, you will learn about Web Deploy and how to deploy web applications by using Web Deploy
in Visual Studio.

Lesson Objectives
After you complete this lesson, you will be able to:

Describe the capabilities of the Web Deployment Framework.

Create a Web Deploy package and perform a live deployment with Visual Studio 2012.

Introduction to Web Deploy


The deployment of web applications is complex
because it not only requires compiling the web
application and copying the compilation output
to the target server, but also other actions, such as
configuring IIS, modifying database connection
strings, installing certificates, updating database
schemas, and more.
In the early days of web development, the tools supplied by the development platform, such as
xcopy deployment and FTP deployment, were not enough to handle the complex task of
deployment. In most cases, the result was either a lengthy document describing how to deploy the
application manually, or a set of complex scripts that performed an automatic deployment of the
application and had to be activated with a large set of parameters.

This is where Web Deploy, which was released in 2009, is most useful. Web Deploy was created to simplify
the deployment of web applications to servers. Web Deploy can perform more than just file copy between
source and destination. It can perform additional tasks such as copying configuration from one IIS to
another, writing to the registry, setting file system permissions, performing transformation of
configuration files, and deploying databases.

Web Deploy is installed with Visual Studio 2012. If you have a computer that does not have Visual Studio
2012 installed on it, and you want to use Web Deploy, you have to install it manually.

To download Web Deploy, see:

Download Web Deploy


http://go.microsoft.com/fwlink/?LinkID=298820&clcid=0x409

You can use Web Deploy to publish and synchronize an existing web application on a remote server. You
can also use Web Deploy to create a deployment package from an existing web application, and publish
that package to a server later on. A deployment package, which is a standard compressed file, contains
both the content that you want to copy to the server, and an instruction file that contains the list of
actions to execute on the target server. The instructions, or Providers, as they are referred to in the Web
Deploy terminology, control the various resources that can be created or manipulated in the server, such
as files, IIS applications, databases, and registry. You can also create your own custom Web Deploy
provider if you have to perform a task that is not implemented by any of the existing providers, such as
attaching a .VHD file as a local hard-drive.

For a list of available Web Deploy Providers, see:


Web Deploy Providers
http://go.microsoft.com/fwlink/?LinkID=298821&clcid=0x409

You can use Web Deploy in various ways. For example, when you use Visual Studio 2012 to publish a web
application, you are actually using the Web Deployment Framework for the task. The same is true when
you export an application from IIS Manager, or when you use the MSDeploy command-line tool.

Note: If you are familiar with Windows PowerShell, there is also a Web Deploy snap-in for
PowerShell, which is discussed in Lesson 3, Command-Line Tools for Web Deploy in this
module.

For more information about Web Deploy, see:

Introduction to Web Deploy


http://go.microsoft.com/fwlink/?LinkID=298822&clcid=0x409

Configuring Web Deployment By Using Visual Studio


If you are developing .NET-based web
applications, for example ASP.NET Web API
applications, or IIS-hosted WCF services, then your
first choice will probably be to use Web Deploy
from Visual Studio 2012.
Visual Studio 2012 provides several techniques for
publishing web applications:

File system, FrontPage 2002 Server Extensions from Microsoft, and FTP. These options do
not use Web Deploy and provide a basic process of compiling the web application and
copying the files to the destination.

Web Deploy and Web Deploy Package. These two options use the Web Deployment Framework to
perform complex deployments. For example, if you decide to create a Web Deploy package, you can
create a package that includes the web application files and database scripts, which will be executed
after the application is deployed.

Whichever deployment technique you choose, you can control some basic settings through the properties
of the web application project. If you do not plan to use Web Deploy, you can only control a few settings,
such as whether to deploy files that are in the project folder but were not included in the project. If you
plan to use Web Deploy (either live or by creating a package), you can configure more settings, such as
copying local IIS application pool settings to the deployed server, and listing the SQL script files that will
execute as part of the deployment.

To browse to those settings

1. Right-click your web application project in the Solution Explorer window in Visual Studio 2012, and
then click Properties.

2. In the Properties window, there are two tabs for configuring publish settings, Package/Publish Web
and Package/Publish SQL.

The following illustration shows the location of the Package/Publish Web and Package/Publish SQL
tabs in the web application Properties window:

FIGURE 8.1: THE PROPERTIES WINDOW OF A WEB APPLICATION PROJECT
For more information about the Package/Publish Web tab, see:

Package/Publish Web Tab, Project Properties


http://go.microsoft.com/fwlink/?LinkID=298823&clcid=0x409

For more information about the Package/Publish SQL tab, see:

Package/Publish SQL Tab, Project Properties


http://go.microsoft.com/fwlink/?LinkID=298824&clcid=0x409

After you have configured the publish settings, you can publish the web application by right-clicking the
project in the Solution Explorer window, and then clicking Publish. This displays the Publish Web dialog
box, in which you can perform the following tasks:

Select the publish technique.

Provide information on the destination.

Select which solution configuration you want to publish, such as debug or release.

Begin the publishing process.

If you select any of the Web Deploy techniques, you can also provide additional settings, such as a new
connection string that will replace the current connection string in the web.config file.

Visual Studio 2012 stores all the publish settings in the project so that the next time you have to publish
the application, you can do a one-click publish instead of supplying all the information again.

Visual Studio 2012 supports storing more than one publishing profile so that you can create profiles for
different scenarios. For example, you can create different profiles for testing and production
environments, each with its own database connection strings.

For more information on how to use the Web Deploy dialog box, see:

How to: Deploy a Web Project by Using One-Click Publish in Visual Studio
http://go.microsoft.com/fwlink/?LinkID=298825&clcid=0x409
Note: If you create a Web Deploy package, you will find that in addition to the packaged
compressed file, a .cmd file is created, together with a readme.txt file that describes how to run
the .cmd file to deploy the package.

Demonstration: Deploying a Web Application By Using Visual Studio


This demonstration shows how to deploy a web application by using Visual Studio 2012.

Demonstration Steps
1. Open Visual Studio 2012, and create a new ASP.NET MVC 4 Web Application project named MyApp
in the D:\Allfiles\Mod08\DemoFiles\DeployWebApp\begin folder.

2. Clear the Create directory for solution check box.

3. Use the Web API template to create the project.

4. Publish the application by using the following settings, and verify the JSON response contains two
items.
Profile name: RemoteServer

Publish method: Web Deploy

Server: http://10.10.0.11/msdeployagentservice
Site Name: Default Web Site/MyApp

User name: Administrator

Password: Pa$$w0rd

Save password: Checked

Destination URL: http://10.10.0.11/MyApp/api/values

5. In the MyApp project, under the Controllers folder, open the ValuesController.cs file, and change
the implementation of the parameterless Get method to return a collection of three items instead of
two.

6. Publish the application again, wait until the publishing is complete and the browser is opened, and
verify the JSON response contains three values.
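The change in step 5 amounts to a single line. As a sketch, assuming the default Web API template's ValuesController:

```csharp
using System.Collections.Generic;
using System.Web.Http;

public class ValuesController : ApiController
{
    // GET api/values - now returns three items instead of the template's two
    public IEnumerable<string> Get()
    {
        return new string[] { "value1", "value2", "value3" };
    }
}
```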

Lesson 2
Creating and Deploying Web Application Packages
Now that you have seen what Web Deploy can do, you can explore other ways of using Web Deploy, in
addition to Visual Studio 2012. In this lesson, you will learn how to create Web Deploy packages by using
IIS Manager to synchronize content such as web application files, IIS registry settings, and SSL certificates,
and to deploy them to other servers.

Lesson Objectives
After you complete this lesson, you will be able to:

Create a Web Deployment Package by using IIS Manager.

Deploy a Web Deployment Package by using IIS Manager.

Creating IIS Web Deployment Packages


In the previous lesson, you learned how to use
Web Deploy with Visual Studio for performing a
live deployment and for creating a deployment
package. One of the disadvantages of using Web
Deploy through Visual Studio 2012 is that you
must have the web application project with its
source files. If you have an already compiled web
application that you want to deploy to another
server, you cannot use Visual Studio 2012 to
deploy it.

There are additional ways to use Web Deploy, and one of these ways is by using IIS Manager. When
you install Web Deploy on a computer that is running IIS, you will see a new set of options in the Actions
pane in IIS Manager:
Export/Import Application Package: You can use this option when you select a web site or a web
application from the Connections pane. This option creates a deployment package for Web Deploy
that includes the selected web application or multiple applications, if you selected to create a
package for a whole site. You can use this deployment package later to deploy the web application to
other servers that run IIS.

Export/Import Server/Site Package: You can use this option when you select the root node (the
computer node) from the Connections pane. This option creates a deployment package for Web
Deploy that contains the configuration of the web server. This includes configuration from the
applicationHost.config configuration file, IIS registry settings, SSL certificates, and the content of all
the web applications hosted in the server.

Note: Whether you decide to export an application or the whole server, you can still
control which parts to export. For example, if you decide to export the server, you can exclude
some web sites from the package, or remove some configuration that you do not want to export.

Unlike Visual Studio 2012, where you can only use some Web Deploy Providers, such as IIS and database
providers, when you export a Web Deploy package from IIS, you can use any of the supported providers.
For example, when you use IIS Manager to export a WCF service web application that uses message

security, you can use the cert Web Deploy Provider to include the certificate that is used for service
authentication in the exported package.

In addition, when creating packages by using IIS, you can also use parameters. Parameters are used for
varying the way packages are deployed to different environments. For example, you probably want to use
the same database schema both in staging and in production. However, you probably want to update
different database servers in each environment when you deploy the package. By adding parameters, you
can do more than control the settings used by providers.
You can also change the content of your application's configuration files by using substitution tokens:
Web Deploy finds sub-strings in text files and replaces them with the value set in the parameter. Each
parameter that you create can have a default value, a name, and a description that is shown when the
package is imported, to help the person importing the package understand what value to enter.
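For example, a parameter that uses a substitution token might be declared as follows. This is a hypothetical fragment — the parameter name, token, and file name are invented for illustration:

```xml
<parameters>
  <!-- Prompts for a database server name during import and replaces the
       __DBSERVER__ token in Web.config with the supplied value -->
  <parameter name="DatabaseServer"
             defaultValue="localhost"
             description="Name of the database server to use">
    <parameterEntry kind="TextFile"
                    scope="\\Web\.config$"
                    match="__DBSERVER__" />
  </parameter>
</parameters>
```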

Note: If you are familiar with web.config transformations, do not confuse them with Web
Deploy parameters. Web Deploy parameterization is not limited to web.config files, and you can
use it on any XML or text file in the deployment package. Web.config transformations will be
explained in Lesson 6, Best Practices for Production Deployment.

For more information about how to export Web Deploy packages by using IIS Manager, see:

Export a Package through IIS Manager


http://go.microsoft.com/fwlink/?LinkID=298826&clcid=0x409

For more information about how to use parameters with Web Deploy, see:
How to: Use Web Deploy Parameters in a Web Deployment Package
http://go.microsoft.com/fwlink/?LinkID=298827&clcid=0x409

Deploying IIS Web Deployment Packages


After you create a Web Deploy package, whether
exported by IIS Manager or published by using
Visual Studio, you can use IIS Manager to import
the package, and deploy it locally on a server.

Note: Up to this point, you have only learned how to create Web Deploy packages with
the help of Visual Studio 2012 and IIS Manager. As mentioned at the beginning of this module,
you can create Web Deploy packages by using the MSDeploy.exe command-line tool and the
Web Deploy PowerShell Snap-In. Any Web Deploy package created with those tools can also be
imported by using IIS Manager.

IIS Manager can import two kinds of Web Deploy packages:

1. Packages that contain a web application: If you exported a web application with IIS Manager, or
published a web application through Visual Studio 2012, you can import it by selecting the target
web site in the Connections pane, and clicking Import Application in the Actions pane.

Note: You can also import a package and deploy it under an existing web application
instead of under a web site. However, this scenario is less common.

2. Packages that contain a web server: If you exported a web server by using IIS Manager, you can
open IIS Manager on a different server, select the root node (machine node) from the Connections
pane, and then click the Import Server or Site Package in the Actions pane.

Note: Exporting a web server with IIS Manager uses the Web Deploy webServer Provider.
You can use the same provider in the command-line with MSDeploy.exe or by using the
PowerShell Snap-in, and import the created package with IIS Manager to achieve the same result.

To start the import process, select the deployment compressed file that you created. Next, select the parts
of the application to deploy (by default, the entire package will be deployed, but you can select which
providers will run and which will not). Then provide values for the package parameters that you created
when exporting the package.

The following figure shows the list of parameters that are requested to deploy a sample package:

FIGURE 8.2: APPLICATION PACKAGE INFORMATION

In the previous illustration, you can see that there are two parameters that must be supplied: the
Application Path and the Database Server name. Both already have default values that were assigned
when the package was created.

If you only want to see the list of actions that will be performed during deployment, you can use the
WhatIf flag. Performing an import with the WhatIf flag will not execute the deployment. It will only print
the actions that each provider will perform. To turn on the WhatIf option, click the Advanced Settings in
the Import Application Package dialog box, and then change the WhatIf setting from False to True.
When the import is complete, you can check the Details tab for a list of actions that would have been
performed if the WhatIf flag was set to False.

Demonstration: Exporting and Importing Web Deploy Packages Through IIS Manager

This demonstration shows how to export and import Web Deploy packages through IIS Manager.

Demonstration Steps
1. In the 20487B-SEA-DEV-B virtual machine, open IIS Manager, select the MyApp web application,
and then click Export Application.

2. In the Export Application Package dialog box, click Next until you reach the Save Package step,
enter the package path c:\MyApp.zip, and then complete the export process.
3. Copy the MyApp.zip file from the local C:\ root folder to the remote server at the UNC
\\10.10.0.10\c$\.

4. In the 20487B-SEA-DEV-A virtual machine, open IIS Manager, select the Default Web Site, and then
click Import Application.

5. In the Import Application Package dialog box, type the package path C:\MyApp.zip, and then
continue with the import.
6. Open a browser, and browse to the address http://localhost/MyApp/api/values. Verify that you see
the output of the service.

Lesson 3
Command-Line Tools for Web Deploy
The previous two lessons showed how to use Web Deploy with applications such as Visual Studio 2012
and IIS Manager, but Web Deploy can also be invoked by using scripts that do not require human
interaction. Running such scripts is useful if you plan to automate the deployment process, for
example, if you want to deploy the web application to an integration environment each night.

In this lesson, you will learn how to use Web Deploy from the command line and from PowerShell to
automate the packaging of web applications and their deployment to remote servers.

Lesson Objectives
After you complete this lesson, you will be able to:

Deploy web applications through the command line with MSDeploy.

Create deployment packages and deploy them by using PowerShell.

Deploying with MSDeploy


MSDeploy is the command-line version of Web Deploy. With MSDeploy, you can perform
live server-to-server synchronization of web applications, create deployment packages, and
deploy packages to servers.

MSDeploy supports offline deployments, the deployment of a packaged compressed file to the
current server that you are running on, and remote deployment to other servers that have
Web Deploy installed on them.

For detailed instructions on how to configure the server for remote deployment, see:

Configuring Server Environments for Web Deployment


http://go.microsoft.com/fwlink/?LinkID=298828&clcid=0x409

To start using the MSDeploy command line tool, you have to open a Command Prompt window, and run
the MSDeploy executable from the Microsoft Web Deploy folder in the %ProgramFiles%\IIS folder.

Note: After you install IIS8 and Visual Studio 2012, you might have several Microsoft Web
Deploy folders for the different versions of Web Deploy installed on your computer. You should
select the most recent version of MSDeploy.

The following command line executes MSDeploy to create a package file for the MyApp web application.

Executing the MSDeploy tool from command line to package the MyApp web application.
msdeploy.exe -verb:sync -source:iisApp="Default Web Site/MyApp" -dest:package=c:\MyApp.zip

The -verb parameter specifies which operation is required. The sync operation instructs the tool to
synchronize source and destination. If you use the dump operation instead, the tool will only list the
information that it received from the source.

For example, the following command line executes MSDeploy, but only prints the list of files that will be
copied without actually copying them.

Executing the MSDeploy tool to dump the list of files to be copied from the MyApp web
application

msdeploy.exe -verb:dump -source:iisApp="Default Web Site/MyApp"

The -source operation parameter indicates the source of the data. In the previous example, the source is
the iisApp provider. The iisApp provider can synchronize the content of a website or a web application,
either by its physical or virtual path. The iisApp provider is actually constructed from four other providers:
contentPath, createApp, dirPath, and filePath.

You can also use MSDeploy for live server-to-server synchronization. The following command line
executes MSDeploy to synchronize the MyApp web application from the current server to a remote server
named Server2.

Executing live server-to-server synchronization with the MSDeploy tool


msdeploy -verb:sync -source:iisApp="Default Web Site/MyApp" -dest:iisApp="Default Web Site/MyApp",computerName=Server2

MSDeploy can also print which actions will be taken, without actually performing them. To do this, add the -whatif parameter to the end of the command line.
The following command line will print the MyApp web application files that should be deployed to the
remote server. However, it will not actually copy the files to the remote server.

Executing live server-to-server synchronization with the MSDeploy tool by using the -whatif
operation setting
msdeploy -verb:sync -source:iisApp="Default Web Site/MyApp" -dest:iisApp="Default Web Site/MyApp",computerName=Server2 -whatif

Note: When synchronizing web applications between servers, Web Deploy checks which files already exist, unchanged, on the target server. These files are not copied, thereby improving the performance of the deployment process. If all the files on the target server are the same as those on the source server, running the MSDeploy tool with the -whatif operation setting will result in a message stating that 0 (zero) changes were made.

For a complete reference for the MSDeploy command-line tool, see:

Web Deploy Command Line Reference


http://go.microsoft.com/fwlink/?LinkID=298829&clcid=0x409

Packaging and Deploying By Using PowerShell


Today, many IT professionals prefer PowerShell over batch files, because PowerShell offers a more extensive scripting language, object-oriented data handling, and better pipelining of commands.

For an overview of PowerShell scripting, see:

Scripting with Windows PowerShell


http://go.microsoft.com/fwlink/?LinkID=298
830&clcid=0x409

Because more and more IT professionals use PowerShell rather than batch files for scripting, it is only natural for Web Deploy to provide a set of PowerShell cmdlets. The Web Deploy PowerShell Snap-in provides a set of cmdlets that you can use to perform various operations on web applications, sites, servers, databases, the registry, and other resources.

To use the Web Deploy PowerShell Snap-in, open a PowerShell window, and then type the following
command: Add-PSSnapin WDeploySnapin3.0.
The Web Deploy PowerShell Snap-in has several cmdlets that are resource and action specific, such as the
following:

Backup-* / Restore-*: This set of cmdlets backs up resources to a packaged compressed file and restores them to a selected server. You can back up and restore a web application, a web site, a web server, an SQL database, or a MySQL database. For example, the Restore-WDApp cmdlet restores a package that contains a backed-up web application.

Note: The package files that are created by the Backup-* cmdlets, and the ones used by the Restore-* cmdlets, are standard Web Deploy packages. Therefore, you can use those packages with other tools that support packaged files, such as IIS Manager or MSDeploy. For example, you can use the Backup-WDServer cmdlet to create a Web Deploy package for a web server, and then import the package through IIS Manager.

Sync-*: This set of cmdlets synchronizes a resource between servers. Similar to the Backup-* and
Restore-* cmdlets, you can use the Sync-* cmdlets to synchronize web applications, web sites, web
servers, or databases such as MySQL and SQL Server databases.

Get/New-WDPublishSettings: By default, the Web Deploy cmdlets run on the local server. For
example, when you back up a web application, it is backed up from the local IIS. When you restore a
web application, it is restored to the local IIS. If you want to perform a backup/restore from/to a
different server, you have to first create a publish settings file for that server. A publish settings file
contains the address of the server and the credential information for that server. The New-
WDPublishSettings cmdlet creates a publish settings file for a server, and the Get-
WDPublishSettings cmdlet loads a publish settings file into an object that you can use with other
cmdlets, such as Backup-*, Restore-*, and Sync-*.

Note: When you use the Sync-* cmdlets, you can choose between synchronizing resources on the same server (for example, to duplicate a web site), from the local server to a remote server, or between two remote servers. If you synchronize between two servers, you need two publish settings files, one for the source and one for the destination.

The following script shows how to synchronize a web application named MyApp between the local server
and the remote server named Server2.

Synchronizing the MyApp web application from the local server to Server2
$cred = Get-Credential
New-WDPublishSettings -ComputerName Server2 -Credentials $cred -AgentType MSDepSvc -FileName:"C:\Server2.publishsettings"
Sync-WDApp "Default Web Site/MyApp" "Default Web Site/MyApp" -DestinationPublishSettings "C:\Server2.publishsettings"

The Get-Credential cmdlet shows a credentials request dialog box where the user enters credentials for accessing Server2. The New-WDPublishSettings cmdlet creates a publish settings file that contains information about how to publish resources to Server2. The Sync-WDApp cmdlet synchronizes the MyApp web application from the local IIS to Server2, to the same web site and with the same web application name.

The Sync-WDManifest cmdlet is a bit different from the other cmdlets, because it can synchronize multiple providers in a single command. To use this cmdlet, you must create two manifest files, one for the source and another for the destination. A manifest file is an XML file that lists providers and their parameters.
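For illustration, a source manifest file might look like the following sketch. The site path and folder are placeholders, and the root element name is chosen freely here; the child elements name the providers to synchronize.

```xml
<!-- Illustrative source manifest: each child element names a Web Deploy
     provider and its path. Paths shown here are placeholders. -->
<sitemanifest>
  <iisApp path="Default Web Site/MyApp" />
  <contentPath path="C:\SharedContent" />
</sitemanifest>
```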
For a detailed list of the Web Deploy PowerShell cmdlets, see:

Web Deploy PowerShell Cmdlets


http://go.microsoft.com/fwlink/?LinkID=298831&clcid=0x409

Demonstration: Using PowerShell Cmdlets


This demonstration shows how to export and import Web Deploy packages by using the Web Deploy
PowerShell cmdlets.

Demonstration Steps
1. In the 20487B-SEA-DEV-A virtual machine, open a PowerShell window, and then add the
WDeploySnapin3.0 PowerShell Snap-in.
Type the following command and press Enter.

Add-PSSnapin WDeploySnapin3.0

2. Request the credentials for the server that you will deploy to, and store them in a variable named $cred.

Type the following command and press Enter.

$cred = Get-Credential

For the credentials, use the username Administrator and the password Pa$$w0rd.

3. Create a publish settings file for the remote server by typing the following command.

New-WDPublishSettings -ComputerName 10.10.0.11 -Credentials $cred -AgentType MSDepSvc -FileName:"C:\Server.publishsettings"

4. Synchronize the web application to the remote server by typing the following command.

Sync-WDApp "Default Web Site/MyApp" "Default Web Site/MyAppDeployedWithPowerShell" -DestinationPublishSettings "C:\Server.publishsettings"
5. Open a browser, and browse to http://10.10.0.11/MyAppDeployedWithPowerShell/api/values. Verify that you see three values.

Lesson 4
Deploying Web and Service Applications to Windows Azure
In previous lessons, you learned how to deploy web applications by using Web Deploy. However, in Windows Azure you can host both web applications and service applications (background processes that run tasks), so Windows Azure offers additional techniques for installing software.

In this lesson you will learn about the different techniques that you can use to deploy web applications
and service applications to Windows Azure Cloud Services, and to Windows Azure Web Sites.

Lesson Objectives
After you complete this lesson, you will be able to:

Deploy services to Windows Azure Cloud Services.

Deploy services to Windows Azure Web Sites.

Deploying Services to Windows Azure Cloud Services


Deploying a service to a Windows Azure Worker Role or Web Role is a process that involves several steps:

1. Create a new Cloud Service.

2. Create a package with the service files.

3. Upload the package and configuration to Windows Azure.

4. Create a new deployment and server instances.

5. Deploy the uploaded package to the instances.

When you use Visual Studio 2012 to perform the deployment, the environment handles the process.
When you publish a Cloud project, you provide Visual Studio 2012 with your Windows Azure account
information. With that information, Visual Studio creates a Cloud Service, uploads the package and
configuration, creates the deployment and instances, and deploys the package to the new instances.

If your Cloud project includes web roles, you can enable Web Deploy to improve the performance of
deploying web roles. With Web Deploy, Visual Studio will connect directly to the hosted instance and
update the files locally on the server. Because this requires connecting directly to the instance, using Web
Deploy requires enabling Remote Desktop, and is only applicable if you deploy the web role to a single
instance.

For more information about how to use Web Deploy with Windows Azure Roles, see:

Publishing a Cloud Service by using the Windows Azure Tools


http://go.microsoft.com/fwlink/?LinkID=298832&clcid=0x409

Because Visual Studio 2012 can identify existing Cloud Services, you can decide whether to deploy the service to a new Cloud Service or to an existing one. You can also decide whether to deploy to the production or staging environment.

For more information about how to publish services to Windows Azure from Visual Studio, see:
Publishing Cloud Services to Windows Azure from Visual Studio
http://go.microsoft.com/fwlink/?LinkID=298833&clcid=0x409

In addition to publishing, Visual Studio 2012 provides the packaging option. When you package a Cloud
project, a Windows Azure package is created locally together with a configuration file. However, it will not
be deployed to Windows Azure. You can then take the files and deploy them manually to Windows Azure
either through the Windows Azure Management Portal or by using PowerShell cmdlets.

Note: The package file (with a .cspkg extension) and the configuration file (with a .cscfg extension) are created under the Cloud project's folder, in the bin\configuration\app.publish folder, where configuration is the build configuration that you choose when packaging, such as debug or release. After the packaging is complete, Visual Studio 2012 opens the folder in File Explorer.

After you create the deployment package, you can deploy it manually through the Windows Azure
Management Portal at http://manage.windowsazure.com. After you sign in to the portal, create a Cloud
Service or select an existing one, and then click Upload. In the dialog box, you can name the deployment,
and select the package and the configuration files that you want to deploy. The portal uploads the
package and configuration files, and processes the rest of the deployment steps.

Another option is to deploy the package by using PowerShell. Whereas with other tools, such as Visual Studio 2012 or the Windows Azure Management Portal, deployment is performed manually, with PowerShell you can automate the deployment process and include it in a script.

Note: The Windows PowerShell cmdlets for Windows Azure are not installed with the
Windows Azure SDK. You can download the cmdlets from:
http://go.microsoft.com/fwlink/?LinkID=298834&clcid=0x409

The following PowerShell script creates a Cloud service, and then deploys a Windows Azure package to its
production environment.

Deploying a Windows Azure package to a new Cloud service through PowerShell


$thumbprint = 'YOUR_MANAGEMENT_CERTIFICATE_THUMBPRINT'
$subscriptionId = 'YOUR_SUBSCRIPTION_ID'
$managementCert = Get-Item Cert:\CurrentUser\My\$thumbprint
$cloudServiceName = 'CLOUD_SERVICE_NAME'

Set-AzureSubscription -SubscriptionName "My default subscription" -SubscriptionId $subscriptionId -Certificate $managementCert -SubscriptionDataFile "C:\MyDefaultSubscription.xml"

New-AzureService -ServiceName $cloudServiceName -Location "North Central US"

New-AzureDeployment -ServiceName $cloudServiceName -Package "PATH_TO_CSPKG" -Configuration "PATH_TO_CSCFG" -Slot Production

The Set-AzureSubscription cmdlet configures the default subscription that is used by the rest of the script. The subscription information is also stored in a file so that you can use it with other scripts. The New-AzureService cmdlet creates a Cloud service in the North Central US region, and the New-AzureDeployment cmdlet deploys the package and the configuration files to the production environment in the new Cloud service.

Note: If you want to try this script, you must set the thumbprint of your Windows Azure
management certificate, your Windows Azure subscription ID, the Cloud Service name, which
must be unique, and the path and names of the package and configuration files.

For more information about the Windows Azure cmdlets, see:

Get Started with Windows Azure Cmdlets


http://go.microsoft.com/fwlink/?LinkID=298835&clcid=0x409

Deploying Services to Windows Azure Web Sites


Windows Azure Web Sites (WAWS) use Web Deploy to publish web applications. The deployment to WAWS is done in either of the following ways:

Downloading the publish profile file of the web site from the Management Portal and importing it in Visual Studio 2012.

Adding your Windows Azure subscription information to Visual Studio and selecting the web sites that you want to deploy to.

To import a publish profile file, proceed with the following steps:
1. In the Management Portal, click your Web Site, click the DASHBOARD tab, and then click Download
the publish profile. Save the publish profile file.

2. In Visual Studio 2012, publish a web application project, and in the Profile tab of the Publish Web
dialog box, click Import.

3. In the Import Publish Profile dialog box, select the Import from a publish profile file option, click
Import, and then import the downloaded publish profile file.

4. Click OK, and then continue with Web Deploy publishing as you usually would.

To select a web site from your Windows Azure subscription, proceed with the following steps:

1. In Visual Studio 2012, publish a web application project, and in the Profile tab of the Publish Web
dialog box, click Import.

2. In the Import Publish Profile dialog box, if you have not already added your Windows Azure
subscription to Visual Studio 2012, click Add Windows Azure subscription, and then follow the
instructions to add your subscription to Visual Studio 2012.
3. Select the Import from a Windows Azure web site option, and then select the web site to which
you want to deploy.

4. Click OK, and then continue with Web Deploy publishing as you usually would.

The following figure shows the location of the Download publish profile link in the Windows Azure
Management Portal.

FIGURE 8.3: DOWNLOADING A PUBLISH PROFILE FILE OF A WINDOWS AZURE WEB SITE
The publish profile file that you download from the management portal contains all the information
Visual Studio needs to connect to WAWS and deploy the web application to it. After you import the file
through the Publish Web dialog box, Visual Studio 2012 sets the required deployment settings, such as
the deployment service URL and the credentials required to connect to the service.

The following figure shows the result of importing a publish profile file in Visual Studio 2012.

FIGURE 8.4: PUBLISHING TO A WEB SITE IN VISUAL STUDIO 2012

Note: If you examine the content of the publish profile file, you might notice that it resembles the publish settings file that is created by the New-WDPublishSettings cmdlet. Although the files do look similar in structure, you cannot use a WAWS publish profile file with the Web Deploy PowerShell cmdlets, because the cmdlets expect a different format for the destination address.

Lesson 5
Continuous Delivery with TFS and Git
In previous lessons, you saw how to use web deployment techniques to deploy your application both on-premises and to Windows Azure. However, there are some questions you will probably want to answer before you start using these deployment techniques: When are you going to deploy your applications? Will you deploy from your source control after each check-in, or on demand? Will you deploy only after the code passes unit tests? Will you deploy every couple of days, or deploy nightly to have an up-to-date testing environment the following day? And will you manually build, test, and deploy the application every time, or use automated, scheduled tasks? Some of these questions, if not all, are answered by a process called continuous delivery. If used correctly, it can help you increase the quality of your application.

In this lesson, you will learn the benefits of using continuous delivery, and how to use continuous delivery with Windows Azure and with source control management (SCM) systems such as Git and Team Foundation Service (TFS).

Lesson Objectives
After you complete this lesson, you will be able to:
Describe the benefits of continuous delivery.

Detail the principles of continuous delivery.

Use continuous delivery with TFS and Git.

Benefits of Continuous Delivery


Continuous delivery is a software development
approach where you release every good version of
your product to the production environment. That
is, as soon as you are confident that your product
is of sufficient quality, you can put it in front of
real-world users. You must determine how
frequently this happens: once a month, twice a
week or even multiple times during a single day.

By delivering continuously, you gain the following benefits:
Reduce the time that is required for users and
customers to see improvements in your
application.

Increase the confidence of your development teams by requiring them to constantly maintain a high-quality product.

Reduce the overall risk in developing a complex software product by using automated tools.

Continuous Delivery Principles


When you apply continuous delivery, you set up a
pipeline that applies to all code changes that you
make to the product. This pipeline usually
includes the following:

Building the product.

Running unit and integration tests.

Deploying the product to a staging environment and running functional tests.

Because it would be impractical to have a human perform all the previous steps for every code change, continuous delivery implies the use of automation:

Triggering automated builds on every code change to a source-control repository.


Requiring that a successful compilation of the product be followed by a 100% pass rate of all unit and
integration tests.

Setting up virtual machines for use as staging environments.

Running installation or deployment packages for setting up the product.
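The automation described above can be sketched as a toy pipeline script. This is only an illustration of the flow, not a real build definition; the stage commands are placeholders (the commented examples name tools a .NET build would typically use).

```shell
#!/bin/sh
# Toy continuous delivery pipeline: run each stage in order and stop
# the whole pipeline as soon as any stage fails.
set -e

run_stage() {
  echo "stage: $1"
  shift
  # Execute the stage command; abort the pipeline on failure.
  "$@" || { echo "stage failed; stopping pipeline" >&2; exit 1; }
}

run_stage build  true   # placeholder, e.g. msbuild MySolution.sln
run_stage test   true   # placeholder, e.g. running unit tests (must be 100% green)
run_stage deploy true   # placeholder, e.g. msdeploy -verb:sync ...
echo "pipeline completed"
```

Because each stage runs only if the previous one succeeded, a failing test stage prevents the deploy stage from ever executing, which is the core guarantee of the pipeline.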

Continuous Delivery with Team Foundation Service and Git


The last step in every continuous delivery
automation process is the deployment of the
compiled and tested application to the server.
With Windows Azure, this means deploying the
web application to its destination, whether it is a
Cloud service or a Windows Azure Web Site.
Earlier in this module, you saw there are several
ways to deploy a web application. Some involve
human interaction, such as deployment through
Visual Studio 2012 or IIS Manager, whereas other
techniques involve command line tools, which you
can include in an automated build process. For
example, if you have an automated build process that has to deploy to a Windows Azure Cloud service,
you can add a call at the end of the build process that executes a PowerShell script to deploy the
compiled web application to the Cloud service.
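As a hypothetical sketch, such a call could be wired into an MSBuild-based build as a post-build target that runs the deployment script. The target name, the RunDeployment property, and the Deploy.ps1 path are all illustrative, not part of any standard build template.

```xml
<!-- Illustrative MSBuild target: runs a PowerShell deployment script after
     the build completes. Target name, property, and script path are placeholders. -->
<Target Name="DeployToCloudService" AfterTargets="Build" Condition="'$(RunDeployment)' == 'true'">
  <Exec Command="powershell.exe -NonInteractive -ExecutionPolicy Bypass -File &quot;$(MSBuildProjectDirectory)\Deploy.ps1&quot;" />
</Target>
```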

Note: You can see an example of a deployment script in PowerShell in Lesson 4, Deploying
to Windows Azure.

If you are using automated builds in TFS, see the following resource for more information about deploying your web application to Windows Azure Cloud Services:

Continuous Delivery for Cloud Services in Windows Azure


http://go.microsoft.com/fwlink/?LinkID=298836&clcid=0x409

Instead of configuring the deployment step in the automated build process yourself, Windows Azure
provides continuous delivery for two well-known source control management systems:

Git: Windows Azure lets you create a Git repository for each Windows Azure Web Site. When you
finish creating and testing your web application, you can push it to the Git repository, which
automatically deploys it to your Windows Azure Web Site.

Team Foundation Service: With Windows Azure, you can create an automated build in your Team
Foundation Service project that automatically deploys your web application to a Windows Azure Web
Site or a Windows Azure Cloud Service when you check in your changes.

Note: When you use continuous delivery with either Git or Team Foundation Service,
Windows Azure only provides the deployment step of the automated build process. You must
create the tests that will execute during the automated build, and configure the build to execute
those tests.
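The Git workflow above can be sketched as follows. The remote URL is a placeholder; the real deployment endpoint and credentials appear on the web site's dashboard in the management portal.

```shell
# Sketch: wiring a local Git repository to a (hypothetical) Windows Azure
# Web Site Git endpoint. The remote URL below is a placeholder.
set -e
repo=$(mktemp -d)            # stand-in for your web application folder
cd "$repo"
git init -q .
git remote add azure "https://user@mysite.scm.azurewebsites.net/mysite.git"
git remote -v                # shows the deployment remote
# Pushing triggers deployment on the Azure side (requires real credentials):
# git push azure master
```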

For more information on publishing a web application to a Windows Azure Web Site with Git, see:

Publishing a Web Site with Git


http://go.microsoft.com/fwlink/?LinkID=298837&clcid=0x409

For more information on publishing a web application to a Windows Azure Cloud Service by using Team
Foundation Service, see:
Continuous delivery to Windows Azure by using Team Foundation Service
http://go.microsoft.com/fwlink/?LinkID=298838&clcid=0x409

Demonstration: Continuous Delivery with TFS


This demonstration shows how to set up a TFS project with continuous integration to a Windows Azure
Web Site.

Demonstration Steps
1. Sign in to http://go.microsoft.com/fwlink/?LinkID=313752, or sign up to create an account.

Note: If you previously created a TFS account by using your Windows Live ID, you cannot
create another account. If you have already created an account, use this step to show how to
create an account, and then browse to https://AccountName.visualstudio.com.

2. Create a new Team project named 20487B with the default process template, and locate the new
project.

3. Open a new instance of Visual Studio from the TFS project Web site, and then create a new ASP.NET
MVC 4 Web Application project named MyTFSWebApp with the Web API project template. Select
Add to source control when you create the project, and after the project is created, add the project
to the suggested location in the TFS.

4. Check in the solution and all its files.



5. Leave Visual Studio 2012 open, return to the browser, open the Windows Azure Management Portal
(https://manage.windowsazure.com), and then create a new Windows Azure Web Site by using
the Quick Create option.

Note: You can skip this step if you already have a Windows Azure Web Site that you want
to use.

6. Open the configuration of the newly created Windows Azure Web Site, and then click Set up TFS publishing. Enter the name of your TFS account, and then click Authorize Now. Accept the requested permissions, select the 20487B project, and then complete the setup process.
7. Leave the browser open, return to Visual Studio 2012, check out the ValuesController.cs file, and in the parameterless Get method, change the code to return an array of three values instead of two. Save the file, and check the file back in to TFS.

8. In Team Explorer, open the Builds view, click the build under All Build Definitions, show the students the list of builds, and refresh the list every couple of seconds until the build process is completed.
9. Return to the browser to view the deployment history on the DEPLOYMENTS tab, click the DASHBOARD tab to open the web site's dashboard, and then click the link under SITE URL. Append the /api/values string to the URL in the address bar of the browser. Verify that the returned list contains three values.

10. Clear the Add to source control check box on the New Project dialog box.

Note: If you do not clear the Add to source control check box, you will be prompted to
register new projects with a source control each time you add a new project.

Lesson 6
Best Practices for Production Deployment
By now, you have learned how to use Web Deploy and continuous delivery to automate the deployment process of your application, but there is more to deployment than just making sure the target server has the latest version of the application. For example, when you deploy more than one web application to a web server, there are steps you can take to improve the way these applications run side-by-side. In addition, when you deploy a new application to an existing environment, especially a production environment, you have to consider how the deployment process itself will affect users who are currently trying to use your application. Will the application still be able to respond to requests while being updated? Will its throughput be affected when servers are down for deployment?

In this lesson, you will learn about additional tools and techniques that can assist you in deploying applications to production environments.

Lesson Objectives
After you complete this lesson, you will be able to:

Transform web.config files when publishing web applications.

Share assemblies between web applications hosted on the same server.

Explain the use of upgrade domains in Windows Azure.

Deploy your applications to staging and production environments in Windows Azure.

Web.config Transformations
One of the common tasks when you deploy web
applications to a different environment is
changing the content of the web.config file,
because some configurations differ between
environments. For example, different
environments usually use different database
connection strings; production environments
usually change the compilation mode from Debug
to Release; and in the development environment,
you would probably want to see the original
errors, whereas in other environments, you would
probably choose to hide them and show a custom
error page.

One of the options to change the content of the web.config file is to use Web Deploy parameters, as
explained in Lesson 1, Web Deployment with Visual Studio 2012, and Lesson 2, Creating and
Deploying Web Application Packages. For example, when publishing a web application through Visual
Studio 2012, you can set the database connection string that you want to use in the deployed
environment. This change is applied with the help of Web Deploy parameters. When you deploy through
IIS Manager, MSDeploy, and the Web Deploy snap-in for PowerShell, you have even more control over
file content with parameters that use XPath expressions to find and replace file content in an XML file.

However, creating XPath expressions to locate specific XML content can sometimes be complex. For example, the following XPath string represents a search pattern that matches a connection string named MyAppDb: //*[local-name()='connectionStrings']/*[local-name()='add'][@name='MyAppDb']/@connectionString.
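For reference, such an XPath typically appears inside a Web Deploy parameters file. The following is a hypothetical sketch; the parameter name, description, and default value are illustrative.

```xml
<!-- Illustrative Web Deploy parameters file. The parameterEntry match
     attribute holds the XPath that locates the value to replace. -->
<parameters>
  <parameter name="MyAppDb-ConnectionString"
             description="Connection string for the MyApp database"
             defaultValue="Data Source=.;Initial Catalog=MyAppDb;Integrated Security=True">
    <parameterEntry kind="XmlFile"
                    scope="\\web.config$"
                    match="//*[local-name()='connectionStrings']/*[local-name()='add'][@name='MyAppDb']/@connectionString" />
  </parameter>
</parameters>
```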

This is where web.config transformations are useful. Web.config transformations are designed specifically for locating configuration sections and XML elements inside the web.config file, changing them, adding new elements or attributes to them, and even removing elements and attributes from the configuration file. When you publish a web application by using Visual Studio 2012, the transformation is applied to the web.config file automatically during the build step. This results in a new web.config file, which is then used as a replacement for the original web.config file for the rest of the publishing process (either copying the new file to the destination server or packaging it).

Web.config transformation uses transformation files. These files use the naming convention of
web.configuration.config, where configuration is the solution configuration that you want to use the
transformation for, such as Debug or Release. For example, the file Web.Debug.config is the
transformation file that is used when publishing a web application that uses the Debug solution
configuration.

If you create a new web application project with Visual Studio 2012, two transformation files are created
automatically, one for Debug, and another for Release. If you delete these files, or add new solution
configurations, you can generate new transformation files by right-clicking the web.config file under
your web application project in the Solution Explorer window, and clicking Add Config Transform.
Clicking that option makes Visual Studio 2012 generate the transformation files for the missing solution
configurations automatically.
The content of a transformation file is XML-based, and is a mix of XML configuration structure and
transformation expressions.

For example, this is a sample of a Web.Release.config transformation file content.

Content of a Web.Release.config transformation file


<?xml version="1.0" encoding="utf-8"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="dbConnection"
         connectionString="Data Source=SqlServer01;Initial Catalog=MyAppDB;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/>
  </connectionStrings>
  <system.web>
    <compilation xdt:Transform="RemoveAttributes(debug)" />
  </system.web>
</configuration>

The overall structure of the XML file resembles that of a web.config file. It has the configuration root
element, and a section structure with connectionStrings and system.web, similar to that of a typical
web.config file. As the original web.config is transformed, the transformation mechanism searches for
elements in the original web.config as they appear in the transformation file. For example, the
transformation looks for a compilation element under the system.web section in the original web.config,
because that is the XML structure in the transformation file.

After the transformation process locates the element, it uses the attributes set for the element in the
transformation file to process the original element. For example, in the compilation element, the
xdt:Transform="RemoveAttributes(debug)" attribute changes the original compilation element by
removing the debug attribute from it. In the add element that is under the connectionStrings element,
the xdt:Transform="SetAttributes" sets the values of the connectionString and name attributes of the
original web.config to the attributes supplied in the transformation file, only if the original name attribute
and the name attribute of the transformation file are a match (both have the value dbConnection).

If this transformation file is applied to the following web.config file.



Example of a web.config file


<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <connectionStrings>
    <add name="dbConnection"
         connectionString="Data Source=SqlServer03;Initial Catalog=MyApp1DB;Integrated Security=True" />
    <add name="historyDbConnection"
         connectionString="Data Source=SqlServer03;Initial Catalog=HistoryDB;Integrated Security=True" />
  </connectionStrings>
  <system.web>
    <compilation debug="true" targetFramework="4.5" />
    <httpRuntime targetFramework="4.5" />
  </system.web>
</configuration>

The resulting web.config will be as follows.

The transformed web.config


<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <connectionStrings>
    <add name="dbConnection"
         connectionString="Data Source=SqlServer01;Initial Catalog=MyAppDB;Integrated Security=True" />
    <add name="historyDbConnection"
         connectionString="Data Source=SqlServer03;Initial Catalog=HistoryDB;Integrated Security=True" />
  </connectionStrings>
  <system.web>
    <compilation targetFramework="4.5" />
    <httpRuntime targetFramework="4.5" />
  </system.web>
</configuration>

The transformed web.config has a different dbConnection connection string, and the debug attribute is
removed from the compilation element.
After you create a transformation file and add the relevant transformations, you can see a preview of the
resulting web.config by right-clicking the transformation file in Solution Explorer, and then clicking
Preview Transform. This opens a web.config preview page in the document area with the original file
and the transformed file side by side, highlighting the content that was added or removed.
The following figure shows a preview of a web.config transformation:

FIGURE 8.5: WEB.CONFIG TRANSFORMATION


For more information on the syntax of web.config transformation, see:

web.config Transformation Syntax for Web Project Deployment By Using Visual Studio
http://go.microsoft.com/fwlink/?LinkID=298839&clcid=0x409
8-28 Deploying Services

Demonstration: Transforming Web.config Files


This demonstration shows how to create Web.config transformations and use them when publishing Web
applications.

Demonstration Steps
1. Open Visual Studio 2012, and create a new ASP.NET MVC 4 Web Application that uses the Web API
template.

2. Open the Web.config file, and show the current DefaultConnection connection string and
<compilation> section in the <system.web> configuration group.
3. Expand the Web.config file in Solution Explorer, and open the Web.Release.config file.

4. View the auto-generated transformation for the <compilation> section, and add a new
transformation for the DefaultConnection connection string, by using the following configuration.

<connectionStrings>
<add name="DefaultConnection"
connectionString="Data Source=ProductionSQLServer;Initial
Catalog=MyAppProductionDB;Integrated Security=True"
xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/>
</connectionStrings>

5. Publish the web application by using the following publish settings.


Server: localhost.

Site Name: Default Web Site/MyProductionApp.

Destination URL: http://localhost/MyProductionApp.

Configuration: Release.

6. In Visual Studio 2012, open the MyProductionApp web site from the Local IIS.

To open a web site from Visual Studio 2012, on the File menu, point to Open, and then click Web
Site.

7. Open the Web.config file, and show the students the altered DefaultConnection connection string,
and the missing debug attribute in the <compilation> section.

Sharing Assemblies in Shared Hosting Scenarios


Multiple web applications are often deployed to
the same server. This is common when you have
several low-traffic web applications, because you
can then deploy them all to a single server without
worrying too much about CPU and memory
consumption.
The problem with this scenario is that low-traffic
web applications are shut down more frequently,
which leads to frequent application startups.
Starting a web application can take several
seconds because its assemblies must be loaded,
and during this time, requests take longer than usual to handle.

Note: The default setting for IIS is to shut down the worker process automatically, if the
application is idle for more than 20 minutes. You can change this application pool setting
through IIS Manager.
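As an alternative to IIS Manager, the same application pool setting can be changed from an elevated command prompt by using the appcmd tool. A possible invocation (the application pool name and the new timeout value here are only examples) is:

```
%windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /processModel.idleTimeout:"02:00:00"
```

Setting the value to "00:00:00" disables the idle timeout entirely.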

To reduce some of this assembly loading time, a shared assembly feature was introduced with Visual Studio
2012. Consider a scenario where you have multiple web applications on the same server, and these
applications deploy the same set of assemblies, such as Entity Framework assemblies, JSON.NET assemblies,
and ASP.NET Web API core assemblies, to their Bin folder. By using the shared assembly feature, identical
copies of an assembly are consolidated into a single assembly file, and each application's copy is replaced
with a symbolic link. Assembly files can then be loaded more quickly, reducing the startup time of web
applications.

To use this feature, you must run the aspnet_intern command-line tool. This tool scans the ASP.NET
temporary files folder in search of reused assemblies, makes a single copy of each such assembly in a special
folder that you specify on the command line, and replaces the original assemblies with symbolic links.

Note: By default, the command-line tool considers an assembly to be reused if it appears in
three or more web applications. You can control this behavior by using the -minrefcount
parameter.

The following command shows how to use the aspnet_intern tool.

Using the aspnet_intern tool


aspnet_intern -mode exec -sourcedir
"C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files" -interndir
C:\CommonAssemblies

This command, when it is executed from the Developer Command Prompt for Visual Studio 2012, scans
the Temporary ASP.NET Files folder for 64-bit ASP.NET web applications, copies a single file of each
reused assembly to the C:\CommonAssemblies folder, and replaces the original assembly files with a
symbolic link.
The -sourcedir parameter must point to the location of the temporary ASP.NET files folder, because
ASP.NET loads assemblies from a temporary folder and not from the Bin folder of the web application.

For more information about loading assemblies from temporary folders in ASP.NET, see:

Shadow Copying Assemblies


http://go.microsoft.com/fwlink/?LinkID=298840&clcid=0x409

If you change the -mode parameter from exec to analyze, you can view the list of reusable assemblies
without replacing them with a symbolic link.

Note: Because this command is a one-time operation, you will probably want to run the
aspnet_intern command routinely, by using a scheduled task.

Windows Azure Upgrade Domains


When you deploy a web application to an on-premises
scale-out deployment, a typical scenario
is cycling through the server farm:
taking one server offline at a time, updating that
server, bringing it back online as soon as it is updated,
and then continuing to the next server in the farm.
A similar scenario is also used in Windows Azure
when you deploy a new version of a role on top of
an existing deployment; this
process is known as an in-place update. When you
perform an in-place update, only some of the
instances are stopped, updated, and brought
online at a time. The decision on which instances are stopped is based on the upgrade domain each
instance belongs to. When Windows Azure creates instances in a Cloud Service deployment, they are
assigned to an upgrade domain (also known as an update domain). By default, a deployment can have up to
five upgrade domains. For example, if a deployment has 15 instances in it, there will be five upgrade
domains, each upgrade domain having three instances.

Note: You can change the maximum number of upgrade domains by editing the service
definition file (ServiceDefinition.csdef), and setting the upgradeDomainCount attribute in the
ServiceDefinition root element.
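For example, to allow up to 10 upgrade domains, the attribute is set on the root element of the service definition file (the service and role names below are placeholders):

```xml
<ServiceDefinition name="MyCloudService"
                   upgradeDomainCount="10"
                   xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="MyWebRole">
    <!-- Sites, endpoints, and other role settings -->
  </WebRole>
</ServiceDefinition>
```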

The following figure shows how a deployment with 10 instances is divided among five upgrade domains.

FIGURE 8.6: INSTANCES OF A CLOUD SERVICE DEPLOYMENT


DIVIDED AMONG UPGRADE DOMAINS

Note: In the previous figure, there is also a column that displays the fault domain of each
instance. Fault domains are discussed in Lesson 2, Load Balancing, of Module 12, Scaling
Services.

When an in-place update occurs, upgrade domains take turns stopping, updating their instances, and
bringing them back online. For example, if a deployment has 10 instances, then at first, instances 0 and 5
(which belong to upgrade domain 0) are stopped, updated, and brought online. Then, instances 1 and 6
(from upgrade domain 1) go through the same process, and so on, until instances 4 and 9 (upgrade domain 4)
are updated by using the same steps.

The use of upgrade domains leaves most of your instances running while only some of them are being
updated. This keeps your web application available. However, if you have more than two upgrade domains,
you will end up running two versions of your application at the same time: the new version on the first
upgrade domain, and the old version from the third upgrade domain onward, while the second upgrade
domain is being updated. Having multiple versions of your web application under the same load balancer
could cause clients to receive different responses from services, or to experience faulty connections if the
service contract has changed. Using in-place updates therefore frequently carries the burden of making your
services backward compatible. To overcome this problem, you might consider a different approach for
updates, VIP Swap, which is discussed next.

For more information on updating Windows Azure deployments, see:


Overview of Updating a Windows Azure Service
http://go.microsoft.com/fwlink/?LinkID=298841&clcid=0x409

Deploying to Staging and Production Environments


Just as you should never move web applications
to production without first testing them in a
testing environment, you should not deploy web
applications to a Windows Azure deployment
without first testing them in Windows Azure. That
is why each service deployment in Windows Azure
has both a production and a staging environment.
From the data center perspective, the same
hardware and software specification is used for
both environments. However, you can have
different hardware and software configurations
for each environment, depending on the service
configuration that you use. For example, you can deploy a service to the production environment by using
four extra-large instances, but deploy the same service to the staging environment with only two
medium instances. The only difference between the two environments is the service URL and the virtual IP
(VIP) that are used to access the external endpoints of each environment. Staging environments use a
GUID prefix for the service URL and have a different VIP than that of the production environment.
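For example, the two environments of the same cloud service could be reached at addresses such as the following (both the service name and the staging deployment ID GUID shown here are illustrative):

```
Production: http://myservice.cloudapp.net/
Staging:    http://7f3c2e1a9b8d4c6fa0e1d2c3b4a59687.cloudapp.net/
```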

You can also use the staging environment for VIP Swap. With VIP Swap, the virtual IP and DNS address of
your staging and production environments are swapped. As a result, your production environment receives
the address and VIP of the staging environment, and vice versa.

By creating a staging environment that has the same hardware and software configuration as your
production environment, you can use VIP Swap to upgrade your production environment quickly without
experiencing the downtime of upgrade domains.

Note: If you have a single instance in your production environment in Windows Azure,
performing an in-place upgrade disables the instance during the upgrade. Using multiple
instances, which is the recommendation for production environment to achieve 99.95%
availability, provides the required availability of your service, but reduces the throughput of the
service because of the downtime of instances in the upgrade domain.

To perform VIP Swap, follow these steps:



1. Deploy the upgraded web application to the staging environment. Use the same VM machine size
and number of instances as you use for your production environment.

2. Verify that your application works correctly in the staging environment. You might have to change
the service URL you are using in the client application to point to the staging environment instead of
the production environment.

Note: VIP Swap requires having both production and staging environments deployed. If
you only have the staging environment deployed, you will not be able to use VIP Swap.

3. In the Windows Azure Management Portal, click Cloud Services, select the service deployment, click
Staging from the dashboard, and then click Swap. In the Swap dialog box, verify that the number of
roles and instances is identical, and then click Yes.

Note: After you complete the VIP Swap, and no longer require the staging environment
instances, it is advised that you delete the staging deployment to conserve CPU hours.

For more information about staging and production environments, see:


Overview of Managing Deployments in Windows Azure

http://go.microsoft.com/fwlink/?LinkID=298842&clcid=0x409

To compare in-place updates and VIP swapping, see:

Windows Azure Deployment Domains

http://go.microsoft.com/fwlink/?LinkID=298843&clcid=0x409

Lab: Deploying Services


Scenario
After having their services up and running on Windows Azure, Blue Yonder Airlines wants to add a new
weather forecast service. Before you deploy the new service to production, the service must be deployed
to a staging environment for testing. In this lab, you will deploy a new version of the services to Windows
Azure, first to staging, and then swap between staging and production environments.

In addition, Blue Yonder Airlines wants to scale its WCF booking service and frequent flyer service to
another server to increase its durability. In this lab, you will create an IIS deployment package and deploy
it to another server.

Objectives
After you complete this lab, you will be able to:

Deploy a web application to Windows Azure staging environment, and perform VIP swap.
Create an IIS deployment package, and install it on a different server.

Lab Setup
Estimated Time: 45 minutes
Virtual Machine: 20487B-SEA-DEV-A, 20487B-SEA-DEV-B, 20487B-SEA-DEV-C

User name: Administrator, Administrator, Admin


Password: Pa$$w0rd, Pa$$w0rd, Pa$$w0rd
For this lab, you will use the available virtual machine environment. Before you begin this lab, you must
complete the following steps:

1. On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2. In Hyper-V Manager, click MSL-TMG1, and in the Action pane, click Start.

3. In Hyper-V Manager, click 20487B-SEA-DEV-A, and in the Action pane, click Start.

4. In the Action pane, click Connect. Wait until the virtual machine starts.
5. Sign in using the following credentials:

User name: Administrator


Password: Pa$$w0rd

6. Return to Hyper-V Manager, click 20487B-SEA-DEV-B, and in the Action pane, click Start.

7. In the Action pane, click Connect. Wait until the virtual machine starts.

8. Sign in using the following credentials:

User name: Administrator

Password: Pa$$w0rd

9. Return to Hyper-V Manager, click 20487B-SEA-DEV-C, and in the Action pane, click Start.

10. Verify that you received credentials to sign in to the Azure portal from your training provider. These
credentials and the Azure account will be used throughout the labs of this course.

Exercise 1: Deploying an Updated Service to Windows Azure


Scenario
Start by adding the code to implement the weather forecast service. Then create a Windows Azure
Application package, and deploy it to a staging deployment. After the staging and production
deployments are online, test both with the client app to verify that the staging deployment is updated
with the new service. The last step is to perform a VIP Swap of the two deployments, and verify that the
weather forecast service is working in the production deployment.

The main tasks for this exercise are as follows:


1. Add the New Weather Updates Service to the ASP.NET Web API project

2. Deploy the updated project to staging by using the Windows Azure Management Portal

3. Test the client app with the production and staging deployments
4. Perform a VIP Swap by using the Windows Azure Management Portal and retest the client app

Task 1: Add the New Weather Updates Service to the ASP.NET Web API project
1. In the 20487B-SEA-DEV-A virtual machine, run the setup.cmd file from
D:\AllFiles\Mod08\LabFiles\Setup.

Write down the names of the Windows Azure Service Bus namespace and Windows Azure Cloud
Service.

2. Open the solution D:\AllFiles\Mod08\LabFiles\begin\BlueYonder.Server\BlueYonder.Companion.sln.
3. Create a cloud deployment package for the cloud project.

Right-click the BlueYonder.Companion.Host.Azure project, and then click Package.


4. Create a package for the Cloud service configuration with the Debug build configuration.
5. Deploy the package to the production environment of your cloud service.

Open the Windows Azure Management Portal and locate the cloud service that was created
during the setup process.
Select the production environment for the cloud service and then click Update or Upload at the
bottom of the page (only one of the buttons should be visible).

Use Lab08 as the deployment name, and select the path to the configured package file, which should
be located in the bin\debug folder of the begin solution. The file name is
BlueYonder.Companion.Host.Azure.cspkg.
Select the path for the Configuration section from the ServiceConfiguration.Cloud.cscfg file from
the same location.

Select the Deploy even if one or more roles contain a single instance check box and approve.

From the INSTANCES tab verify that the instance is running.


6. In the BlueYonder.Companion.Controllers project, implement the GetWeather method of the
LocationController class.

Create a new instance of the WeatherService class that is part of the
BlueYonder.Companion.Controllers project.

Use the Locations.GetSingle method to get the Location object according to the locationId
parameter.

Call the GetWeather method of the WeatherService class to get the WeatherForecast object.

Note: The Begin Solution already contains the WeatherService class. The class uses the
WeatherForecast class, and the WeatherCondition enum.
Expand the DataTransferObjects folder to review the files.
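The bullets in step 6 translate into a method body along the following lines. This is only a sketch; the exact signatures of Locations.GetSingle and WeatherService.GetWeather in the lab solution may differ:

```csharp
// In the BlueYonder.Companion.Controllers project, LocationController class
public WeatherForecast GetWeather(int locationId)
{
    // Create the weather service that ships with the begin solution
    var weatherService = new WeatherService();

    // Retrieve the location that matches the locationId parameter
    // (assuming GetSingle accepts a predicate)
    var location = Locations.GetSingle(l => l.LocationId == locationId);

    // Ask the weather service for the forecast at that location
    return weatherService.GetWeather(location);
}
```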

7. In the BlueYonder.Companion.Host project, under the App_Start folder, open the
WebApiConfig.cs file, and add an additional HTTP route before the other routes. Use the following
settings:

Key Value

name LocationWeatherApi

routeTemplate locations/{locationId}/weather

defaults Create a new anonymous type using the following code.

new
{
    controller = "locations",
    action = "GetWeather"
}

constraints Create a new anonymous type using the following code.

new
{
httpMethod = new HttpMethodConstraint(HttpMethod.Get)
}
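Combined, the settings in the table above correspond to a route registration such as the following (assuming the conventional config parameter name used by the Web API project template):

```csharp
config.Routes.MapHttpRoute(
    name: "LocationWeatherApi",
    routeTemplate: "locations/{locationId}/weather",
    defaults: new
    {
        controller = "locations",
        action = "GetWeather"
    },
    constraints: new
    {
        httpMethod = new HttpMethodConstraint(HttpMethod.Get)
    });
```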

Task 2: Deploy the updated project to staging by using the Windows Azure
Management Portal
1. Create a new package for the BlueYonder.Companion.Host.Azure project. Use the same procedure
as in the previous task.

2. Deploy the package to the Staging environment of your cloud service.

Note: You are performing the exact same procedure as you did in Task 1 of this exercise,
with one difference: you are deploying to the Staging environment and not to the Production
environment.

Task 3: Test the client app with the production and staging deployments
1. In the 20487B-SEA-DEV-C virtual machine, open the client solution from
D:\AllFiles\Mod08\LabFiles\begin\BlueYonder.Companion.Client.
2. In the Addresses class of the BlueYonder.Companion.Shared project, set the BaseUri property to
the Windows Azure Cloud Service name you wrote down at the beginning of this lab.

3. Run the client app without debugging, purchase a trip from Seattle to New York, and verify that the
weather forecast for the current trip is missing the temperature.

The temperature text should only show the degrees Fahrenheit sign.
Close the client app after you verify the temperature is not shown.

4. In the Addresses class, change the BaseUri property to the staging deployment URL.

In the Management Portal, open the configuration of your cloud service, and then copy the staging
deployment URL to the BaseUri property.

Note: You will use the production environment address again shortly, so it is best that you
copy it aside, either to Notepad, or place it in comments, in the Addresses class.

5. Run the client app again, verify that the weather forecast is shown for the current trip, and then close
the client app.

Note: The staging and the production deployments share the database, which is why the
current trip, which you created with the production deployment, is shown when connecting to
the staging deployment.

Task 4: Perform a VIP Swap by using the Windows Azure Management Portal and
retest the client app
1. Return to the Windows Azure Management Portal and perform a VIP Swap between the staging and
production deployments. Return to Visual Studio 2012 when the swap is complete.
2. Set the service URL in the BaseUri property back to the production deployment URL, run the client
app without debugging and verify that the weather forecast is shown.

3. Return to the Windows Azure Management Portal and delete the staging deployment.

Note: After the production deployment is running and has been tested, it is recommended
that you delete the staging deployment to reduce compute hour charges.

Results: After you complete this exercise, the client app will retrieve weather forecast information from
the production deployment in Windows Azure.

Exercise 2: Exporting and Importing an IIS Deployment Package


Scenario
To scale out the WCF services from the first server to the additional server, you must create a Web Deploy
package that synchronizes everything the services will need to work correctly on the new server. This
includes their files, certificates they use, and IIS-related configuration.

After you create the deployment package, you will copy it to the remote server, log on to that server, and
deploy the package locally.

The main tasks for this exercise are as follows:

1. Export the web applications containing the WCF booking and frequent flyer services

2. Import the deployment package to a second server



Task 1: Export the web applications containing the WCF booking and frequent flyer
services
1. In the 20487B-SEA-DEV-A virtual machine, open IIS Manager, select the Default Web Site, and
then open the Export Application Package dialog box.

2. In the Export Application Package dialog box, open the Management Components, and then
clear the list of components.
3. Add two appHostConfig providers to synchronize the Default Web
Site/BlueYonder.Server.Booking.WebHost and the Default Web
Site/BlueYonder.Server.FrequentFlyer.WebHost web applications.

4. Add an appPoolConfig provider to synchronize the DefaultAppPool application pool.

5. Store the package in C:\backup.zip. Complete the package creation and close IIS Manager.
6. Copy the backup.zip file from C:\ to \\10.10.0.11\c$.

Task 2: Import the deployment package to a second server


1. In the 20487B-SEA-DEV-B virtual machine, open IIS Manager, select the Default Web Site, and
open the Import Application Package dialog box.
2. Import the package from C:\backup.zip with the following settings.

Physical path of the Booking service: C:\Services\BlueYonder.Server.Booking.WebHost

Physical path of the Frequent Flyer service: C:\Services\BlueYonder.Server.FrequentFlyer.WebHost


3. Close IIS Manager, open the Windows Azure Management Portal website
(http://manage.windowsazure.com), and verify there are two listeners for the booking relay.

In the Management Portal, locate the Service Bus namespace you wrote down at the beginning of
this lab.

Open the Service Bus configuration, click the RELAYS tab, and verify that there are two listeners for
the booking relay.

Results: As soon as both servers are online, they will listen to the same Service Bus relay, and will be load
balanced. You will verify that both servers are listening by checking the Service Bus relay listeners
information supplied by Service Bus in the Windows Azure Management Portal.

Question: Why did you synchronize the certificates between the two servers instead of
creating new certificates in the additional server?
Question: What is the benefit of using VIP Swap instead of deploying directly to a
production deployment?

Module Review and Takeaways


In this module, you learned how to use Web Deploy with various tools, such as Visual Studio 2012, IIS
Manager, and PowerShell to deploy your web applications to on-premises servers and to Windows Azure.
You also learned how to apply best practices when you deploy to production environments and how
automated builds and continuous delivery can improve the overall quality of your application.
Best Practices: If you are developing a service that is hosted under IIS, incorporate Web Deploy
into your deployment process.

Use MSDeploy or the Web Deploy PowerShell snap-in when you deploy web applications through
scripts, instead of using tools such as XCopy.
Check whether your SCM supports automated builds, and use them. If it does not provide automated
builds, investigate third-party automated build tools or consider switching to an SCM system
that does provide them.

Deploy to staging deployments in Windows Azure before you deploy an updated version to your
production deployment.

Review Question(s)
Question: What are the tools that use the Web Deployment Framework?

Tools
Visual Studio 2012
IIS

Web Deploy
Windows PowerShell
Windows Azure

Team Foundation Service



Module 9
Windows Azure Storage
Contents:
Module Overview 9-1

Lesson 1: Introduction to Windows Azure Storage 9-3

Lesson 2: Windows Azure Blob Storage 9-8


Lesson 3: Windows Azure Table Storage 9-19

Lesson 4: Windows Azure Queue Storage 9-25


Lesson 5: Restricting Access to Windows Azure Storage 9-32
Lab: Windows Azure Storage 9-37

Module Review and Takeaways 9-45

Module Overview
Storage services are an important concept in cloud computing. Due to the volatile nature of cloud
computing, a single source of truth is needed to maintain consistency of application data and static resources.
For this reason, most (if not all) cloud platforms have a storage solution that provides a persistence store in the
cloud.
Windows Azure provides three different storage services for various purposes:

Windows Azure Blob Storage. This provides a file based persistence store, ideal for saving files and
static content.
Windows Azure Table Storage. This provides a simple key/value store.

Windows Azure Queue Storage. This provides a cloud-based persisted queuing mechanism.

You can access all storage services through the various client SDKs or directly by using their HTTP-based
APIs. These storage services provide an out-of-the-box solution for common data storage challenges such
as securing and transferring a large amount of data.

Note: The Management Portal UI and Windows Azure dialog boxes in Visual Studio 2012
are updated frequently when new Windows Azure components and SDKs for .NET are released.
Therefore, it is possible that some differences will exist between screen shots and steps shown in
this module, and the actual UI you encounter in the Management Portal and Visual Studio 2012.

Objectives
After completing this module, you will be able to:

Describe the architecture of Windows Azure Storage.

Use Blob storage in your applications.



Use Table storage in your applications.

Use Windows Azure Queues as a communication mechanism between different parts of your
application.

Control access to your storage items.

Lesson 1
Introduction to Windows Azure Storage
Windows Azure Storage is a core pillar of the Windows Azure Platform. Built on top of the platform fabric,
Windows Azure Storage represents the power and flexibility of the cloud environment by offering three
types of storage solutions.

In this lesson, you will explore the different types of storage solutions and their major advantages and
strengths.

Lesson Objectives
After completing this lesson, you will be able to:

Describe the structure of Windows Azure Storage hosted environment.


Describe the different types of storage options that Windows Azure Storage offers.

Describe the meaning and the capabilities of a Windows Azure Storage Account.

Create a new Storage Account.

Windows Azure Hosted Environments Transiency


The main difference between the cloud
environment and other hosting approaches is
elasticity, the ability to scale up and down on
demand. This means that cloud-based
applications can use dynamically created virtual
machines on the one hand, while on the other hand,
virtual machines can be released at any given
time. Due to the unpredictable nature of cloud
services, data cannot be persisted reliably on the
virtual machines. To deal with the transient nature
of cloud computing, storage services have
become a necessity in any cloud environment.

Using a persistent store in the cloud is not only necessary for application data; it is also used by the
Windows Azure fabric for persisting data. For example, deployment packages that are used to create
instances of roles are stored in Windows Azure Storage and are used when deploying new instances, such
as during the initial deployment, during scale-up, and when recovering from a failure. Performance metrics
and diagnostics data are also stored in Windows Azure Storage.

Windows Azure Storage is an elaborate storage system. While the operating system provides a file system
mechanism, Windows Azure Storage provides additional features. In particular, Windows Azure Storage
provides you the choice of storing your data according to its essential nature. You can use a powerful API
to handle the data efficiently and have more flexibility than Relational Database Management System
(RDBMS) storage.
Windows Azure Storage ensures that all your stored data is replicated on multiple machines. Data can
further be geo-replicated to another data center in the same region (USA, Europe, Asia, and so on) for
disaster recovery scenarios.

Storage Approaches Offered by Windows Azure


Windows Azure provides three different storage
services intended for different scenarios:

Blob storage. This type of storage is a non-structured collection of objects that can be accessed by
using a resource identifier, and can be used for storing files, such as images, videos, large texts, and
other non-structured data.

Table storage. This type of storage is a semi-structured collection of objects that can have fields, but
cannot have relations between objects. The fields are not bound to a schema structure, and different
objects can have different fields within the same collection. Table storage also provides a queryable
API access to find objects.

Queue storage. This type of storage provides a persistent messaging queue.

Note: Despite its name, Table storage is not a relational table store. Windows Azure
provides a relational database-as-a-service solution called SQL Database, which is not covered in
this course.

Comparison of Windows Azure Storage Services


All three Windows Azure storage services have different capabilities. The following table illustrates
some of their attributes:

Storage Type    Items Stored      Access Mechanism                                Transactional         Size

Blob storage    Files             HTTP-based APIs with Windows Azure Storage      No                    200 GB per block blob,
                                  Client abstraction and Windows file I/O API                           1 terabyte per page blob

Table storage   Key-entity pairs  OData with Windows Azure Storage Client         At the table level    100 terabytes per table
                                  abstraction

Queue storage   Messages          HTTP-based API with Windows Azure Storage       At the message level  64 kilobytes (KB) per
                                  Client abstraction                                                    message

By comparing all storage options, you can see that Windows Azure Storage offers a great deal of flexibility
with regard to sizes and access mechanisms.
Despite the differences, all storage options have built-in synchronous replication to other machines within
the same Windows Azure data center.

Blob storage and Table storage further offer a geo-replication feature, which copies data to a second
datacenter in the same region (North America, Europe, or Asia). This option is enabled by default and
offers better protection in case an entire datacenter goes offline.

Choosing the right solution depends on the type of application and how the application works with the
data in the cloud. When choosing a solution, take the following into consideration:
Developing Windows Azure and Web Services 9-5

1. Size of data.

2. The nature of the data (static or dynamic)

3. Potential cost

4. Location of the application. Cloud or on-premises

5. Regulation. The type of data that might influence

It might not be possible to arrive at a single solution for these questions. Applications use different types
of data and work in different ways. Because of this diversity, Windows Azure Storage offers a variety of
options for data management. Using each option efficiently requires a good understanding of the
scenarios in which it makes the most sense.

Note: This module concentrates on storage solutions. However, Windows Azure has a
variety of database solutions, which are not covered in this course.

For more information about SQL databases on Windows Azure, see:

http://go.microsoft.com/fwlink/?LinkID=298844&clcid=0x409

Windows Azure Storage Accounts

The highest level of hierarchy for storage services is the storage account. Storage accounts define
the entry point for using storage services by providing the DNS prefix for the root addresses and the
authentication keys for accessing the storage services.

Each storage account has a set of features, some of which can be turned on and off:

1. Up to 100 terabytes of storage per account.

2. CDN. Used for caching blobs locally on one of the global CDN nodes.

3. Affinity groups that you can use to collocate storage accounts and compute resources in the same
cluster in the data center.

4. Two independent 512-bit shared-secret keys for authenticating clients. These keys
can be regenerated.

Accessing a storage account requires credentials to be supplied. Credentials are built from the account
name, which is unique across all data centers, and the 512-bit shared key. A model similar to database
connection strings is used: you write the credentials into a connection string, which is then used to create
the connection to the storage.
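As an illustration, a storage connection string combines the endpoint protocol, the account name, and one of the account's access keys. The account name and key placeholders below are hypothetical:

```
DefaultEndpointsProtocol=https;AccountName=demostorageaccount;AccountKey=<base64-encoded-account-key>
```

The same string format is used whether the connection string is stored in a .NET configuration file or in the Windows Azure service configuration.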

Note: You should change the access keys to your storage account periodically to help keep
your storage connections more secure. Two access keys are assigned to enable you to maintain
connections to the storage account using one access key while you regenerate the other access
key.

Note: Remember that having access to the 512-bit shared key allows unrestricted access to
your storage account. Ensure you keep it safe.

As part of its installation, the Windows Azure SDK installs the local Windows Azure Storage Emulator. The
storage emulator is ideal for testing applications locally. The Storage Emulator provides local blob, table,
and queue storage, and does not require you to have an active Windows Azure account. There are several
limitations that apply when you use the Storage Emulator. For a complete list of limitations, refer to the
following MSDN article.

Differences Between the Storage Emulator and Windows Azure Storage Services
http://go.microsoft.com/fwlink/?LinkID=298845&clcid=0x409

Microsoft.WindowsAzure.Storage.dll
Windows Azure Storage exposes its functionality via HTTP-based APIs, but working with it directly can be
time consuming. Instead, you can use the Windows Azure Storage Client Library for .NET in your
applications for an easier programming model.
You can obtain the assembly in multiple ways; the easiest one is with NuGet. Simply add the
WindowsAzure.Storage package to your project from the NuGet Official Packages Source.
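For example, assuming you are working in the Package Manager Console in Visual Studio, the package can be added with a single command:

```powershell
Install-Package WindowsAzure.Storage
```

Alternatively, you can use the Manage NuGet Packages dialog box in Visual Studio to search for and install the package.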
The Windows Azure Storage account is represented by the
Microsoft.WindowsAzure.Storage.CloudStorageAccount class. You can create an instance by using a
connection string that represents the storage account name and private access key. Like all connection
strings, you can put your storage connection string in a configuration file, rather than hard-coding it.

Best Practice: Make sure that you store the connection string in the most effective
configuration file. For example, when using the connection string from Windows Azure web or
worker roles, it is recommended you store your connection string using the Windows Azure
service configuration system (*.csdef and *.cscfg files). For other .NET applications, it is
recommended you store your connection string using the .NET configuration system, such as the
web.config or app.config file.

Configuring Windows Azure Cloud Services


When working with Windows Azure Cloud Services, you can change configuration settings from the
Windows Azure Management Portal or by using Windows PowerShell scripts. This type of dynamic change
does not require redeployment of the service.

If you wish to use the Storage Emulator instead of a Windows Azure Storage account, set the value of the
connection string to UseDevelopmentStorage=true. In this case, there is no need to specify the
protocol, account name, and account key.
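For instance, a service configuration (.cscfg) entry that points a connection string setting at the Storage Emulator might look like the following sketch. The setting name DataConnectionString is an assumption; use whatever name your role defines:

```xml
<ConfigurationSettings>
  <!-- Points the role at the local Storage Emulator instead of a real account -->
  <Setting name="DataConnectionString" value="UseDevelopmentStorage=true" />
</ConfigurationSettings>
```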

The recommended way to set the connection string is by using the Settings tab in the role's properties
window in Visual Studio.

Note: The default connection string points to the Storage Emulator. Keep in mind that the
Storage Emulator cannot be used by a deployed application; therefore, you must set the actual
storage account before deploying your application.

After a connection string is configured, you can use it to create an instance of the
Microsoft.WindowsAzure.Storage.CloudStorageAccount class. You can create a CloudStorageAccount
instance by using the CloudStorageAccount.Parse static method, which receives a connection string. To
get the storage connection string from the deployment package's .cscfg file, use the
CloudConfigurationManager.GetSetting method.

The following code illustrates how to create an instance of the CloudStorageAccount class.

Creating a CloudStorageAccount
var account = CloudStorageAccount.Parse(
CloudConfigurationManager.GetSetting("DataConnectionString"));

Demonstration: Creating a Windows Azure Storage Account


In this demonstration, you will use the Management Portal to create a new Windows Azure Storage
account.

Demonstration Steps
1. Open the Windows Azure Management Portal website (http://manage.windowsazure.com)

2. Start creating a new storage account. In the URL text box, enter demostorageaccountyourinitials
(where yourinitials is your name's initials, in lowercase). The URL you set will be used to access blob,
queue, and table resources for the account.

Note: Storage account URLs are always written in lowercase.

3. In the REGION box, select the region closest to your location. To reduce the communication latency,
it is better to create the storage account in the same region as your application deployment. Click
CREATE STORAGE ACCOUNT at the lower right corner of the portal and wait until the storage
account is created.

Note: If you get a message saying the storage account creation failed because you reached
your storage account limit, delete one of your existing storage accounts and retry the step. If you
do not know how to delete a storage account, consult the instructor.

4. Wait for the storage account to be created, and click the storage account name to review the storage
DASHBOARD and CONFIGURE tabs.

5. Open the storage account keys, and notice the account has two keys, primary and secondary. The
secondary key is intended to be used when renewing the primary key, for example, if the primary key
is compromised.

Lesson 2
Windows Azure Blob Storage
Windows Azure Storage introduces the Blob storage service for storing files in a scalable and durable
manner.

In this lesson, you will explore the Windows Azure Blob Storage features and learn how to use them.

Lesson Objectives
After completing this lesson, you will be able to:

Determine when to use a blob.

Choose between block blobs and page blobs.

Create and delete containers.

Perform uploads, downloads, deletes, and enumerations on blobs.

Use the local Storage Emulator.

Define retry policies.

Introduction to Blob Storage

Windows Azure Storage contains a number of abstractions for representing data without structure; one
of them is the BLOB (Binary Large Object). The Windows Azure Blob storage service is used for storing
any type of data that contains no inherent structure.

Blobs are held in a storage account; each account can hold any number of blobs, where each single blob
may contain hundreds of gigabytes (based on its type). The cumulative size of all blobs in a single storage
account may be up to 100 terabytes.

Data in blobs can be exposed publicly to anyone with Internet access, or privately for your own application.
You can find Blob storage useful for the following scenarios:

1. Storing images or documents to be served directly to the browser

2. Storing data for centralized distribution

3. Storing audio and/or video for streaming

4. Storing data for background analysis, either by Windows Azure hosted services or by on-premises
applications

5. Replacing existing applications' use of file systems

6. Providing secure locations for backups and disaster recovery

This is not a closed list, and there are many more scenarios that can benefit from the use of blobs.
However, having such a large number of objects requires some type of organization.

Blob storage is also used extensively throughout the Windows Azure platform. For example, the
Windows Azure deployment mechanism saves the deployment packages to Blob storage. These packages
are also used by the auto-scaling mechanism. Diagnostics logs are also saved to cloud storage, and
Windows Azure virtual machine disks are persisted to Blob storage as well.

Blob Service Components


Blobs are stored in containers, which belong to a Blob storage account. The hierarchy of the Windows
Azure Blob is fixed in the following manner:

Storage account. Storage accounts are the root entities of the blob service. Every access to Windows
Azure Storage must be done through a storage account.
Container. Containers are the sub-entities of the storage accounts. Each container can contain blobs.
An account can contain an unlimited number of containers. A container can store an unlimited
number of blobs.
Blob. Blobs are the leaves of the hierarchy and represent a file of any type. There are two types of
blobs: block blobs and page blobs. The differences between block blobs and page blobs are covered
later in this lesson.

Note: The Windows Azure Client SDK contains a class called CloudBlobDirectory; however,
directories are not part of the hierarchy and simply represent substrings of the blob's name
separated by /.

Using this schema, each blob can be addressed by using the following URL format:

http://<storage account>.blob.core.windows.net/<container>/<blob>
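For example, given a hypothetical account named myaccount, a blob named pic1.jpg stored in a container named photos would be addressable at:

```
http://myaccount.blob.core.windows.net/photos/pic1.jpg
```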

Block Blobs vs. Page Blobs


There are two types of blobs targeted for different
workloads: block blobs and page blobs.

Block Blobs
Block blobs are designed for streaming workloads
where the entire blob is uploaded or downloaded
as a stream of blocks.
The maximum size for a block blob is 200 GB, and
it can include up to 50,000 blocks.

Splitting the blob into a collection of blocks allows


you to upload a large blob efficiently by using a
number of threads that execute the upload tasks
in parallel.
Each block is identified by a BlockID and can vary in size, up to a maximum of 4 MB.

To upload a block blob you must first upload a collection of blocks and then commit them by their
BlockID.

Block blobs simplify large file upload over the network by introducing the following features:

Parallel upload of multiple blocks to reduce communication time

An MD5 Hash can be attached to each block to ensure reliable transfer



Simple replacement of uncommitted blocks

Automatic cleanup of uncommitted blocks

It is possible to create a new version of an existing blob by uploading new blocks or deleting existing ones
and committing all BlockIDs of the blob in a single commit operation.

The following code shows how to split a file into blocks and upload them to a block blob.

Upload and commit blocks into a block blob


//Get a reference to a block blob.
CloudBlockBlob blob = blobClient.GetBlockBlobReference("mycontainer/myblockblob");
var blockList = new List<string>();

using (var fs = File.OpenRead("MyFile.txt"))
{
    byte[] data = new byte[100];
    int id = 0;
    int bytesRead;
    while ((bytesRead = fs.Read(data, 0, data.Length)) != 0)
    {
        using (var stream = new System.IO.MemoryStream(data, 0, bytesRead))
        {
            string blockID =
                Convert.ToBase64String(Encoding.UTF8.GetBytes((id++).ToString()));
            // Upload a block
            blob.PutBlock(blockID, stream, null);
            blockList.Add(blockID);
        }
    }
}

//Commit the block list
blob.PutBlockList(blockList);

You can configure the ParallelOperationThreadCount and


SingleBlobUploadThresholdInBytes properties of the CloudBlobClient object to simplify the
concurrent upload of a large file into a block blob.

When a block blob upload is larger than the value specified in the SingleBlobUploadThresholdInBytes
property, the storage client breaks the file into blocks.
You can set the number of threads used to upload the blocks in parallel using the
ParallelOperationThreadCount property.
The following code shows how to upload a large file to a block blob using multiple threads.

Parallel upload to block blob


CloudBlobContainer container = blobStorageClient.GetContainerReference(containerName);
blobStorageClient.ParallelOperationThreadCount = 10;
blobStorageClient.SingleBlobUploadThresholdInBytes = 64000;
CloudBlockBlob blob = container.GetBlockBlobReference(fileName);
blob.UploadFile(Path.Combine(path, fileName));

Page Blobs
Page blobs are designed for random-access workloads in which clients execute random read and write
operations in different parts of the blob.

Page blobs can be treated much like an array of bytes structured as a collection of 512-byte pages.
Handling a page blob is similar to handling a byte array:

When creating a page blob, you specify a maximum size.

Read and write operations are executed by specifying an offset and a range (that align to 512-byte
page boundaries).

Unlike block blobs, page blobs do not introduce a separate commit phase, meaning that writes to page
blobs happen in-place and are immediately committed to the blob.

The maximum size for a page blob is 1 terabyte.

The following code shows how to upload data to a page blob.

Upload data to a page blob

CloudBlobContainer myContainer = blobClient.GetContainerReference("mycontainer");
CloudPageBlob myPageBlob = myContainer.GetPageBlobReference("myPageBlob");

// Get some data
byte[] data = GetData();

// Create a 10 MB page blob.
myPageBlob.Create(10 * 1024 * 1024);

myPageBlob.WritePages(new MemoryStream(data), 0);

int offset = 4096;
myPageBlob.WritePages(new MemoryStream(data), offset);

Reading data from page blobs can be done by using the OpenRead method, which lets you stream the full
blob or a range of pages from any offset in the blob, or by using the GetPageRanges method to get
an enumeration over PageRange objects.

The following code shows how to read from a page blob by using OpenRead.

Using OpenRead to read data from a page blob


CloudBlobContainer myContainer = blobClient.GetContainerReference("mycontainer");
CloudPageBlob myPageBlob = myContainer.GetPageBlobReference("myPageBlob");

BlobStream blobStream = myPageBlob.OpenRead();

byte[] buffer = new byte[4096];
blobStream.Seek(1024, SeekOrigin.Begin);
int numBytesRead = blobStream.Read(buffer, 0, 4096);

Unlike block blobs, page blobs are not contiguous, so when reading over pages without any data stored in
them, the blob service will return zeros for those pages. You can use the GetPageRanges method to get a
list of the ranges in the blob that contain valid data. You can then enumerate the list and download the
data from each page range.

The following code shows how to read from a page blob by using GetPageRanges.

Using GetPageRanges
CloudBlobContainer myContainer = blobClient.GetContainerReference("mycontainer");
CloudPageBlob myPageBlob = myContainer.GetPageBlobReference("myPageBlob");

IEnumerable<PageRange> pageRanges = myPageBlob.GetPageRanges();


BlobStream blobStream = myPageBlob.OpenRead();

foreach (PageRange range in pageRanges)


{
int rangeSize = (int)(range.EndOffset + 1 - range.StartOffset);
blobStream.Seek(range.StartOffset, SeekOrigin.Begin);
byte[] buffer = new byte[rangeSize];
blobStream.Read(buffer, 0, rangeSize);
}

Creating and Deleting Containers


Blobs are grouped in containers. Containers are a mandatory part of the blob service hierarchy, and
every blob in Windows Azure is placed in a container. If you do not provide a container for a blob, it will
be created in the default container, named $root.

Containers contain a flat list of blobs and enforce common management properties such as access
control and user-defined metadata.

You can create a container by getting a reference to its URI and calling the Create or
CreateIfNotExist methods of the CloudBlobContainer class.
The following code shows how to create a new container.

Creating a new container


var storageClient = CloudStorageAccount.Parse(connectionString);
var blobClient = storageClient.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("mycontainer");
container.CreateIfNotExist();

Like any other storage task, it is possible to create a container using the Windows Azure Storage HTTP
API.
To create a container you can send an HTTP PUT request according to the following pattern:

http://myaccount.blob.core.windows.net/mycontainer?restype=container
The request must be authorized by setting the Authorization header. The Date and x-ms-version
headers must also be provided.

The following HTTP request shows how to create a new container by using the Windows Azure Storage
HTTP API.

Create a new container using the Windows Azure Storage HTTP API
PUT http://myaccount.blob.core.windows.net/mycontainer?restype=container HTTP/1.1

Request Headers:
x-ms-version: 2011-08-18
x-ms-date: Sun, 25 Sep 2012 10:12:32 GMT
x-ms-meta-Name: CreateContainerSample
Authorization: SharedKey myaccount:Z5HJWFKK978NFRsKNh0PNtksNc9nbXSSqGHueE00JdjidOQ=

It is possible to create a container using storage management tools such as Visual Studio.
The following figure shows how to create a new container by using Visual Studio.
FIGURE 9.1: CREATE A NEW CONTAINER BY USING VISUAL STUDIO
Deleting a container is as simple as calling the Delete method. When a container is deleted, all the blobs it
contains are deleted with it.

Note: After deleting the container, a container with the same name cannot be created for
at least 35 seconds.

The following code shows how to create and delete a container by using C# storage API.

Create and Delete a Container in C#


var storageClient = CloudStorageAccount.Parse(connectionString);
var blobClient = storageClient.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("mycontainer");
container.CreateIfNotExist();

// work with the container ...

container.Delete();

Like any other storage task, it is possible to delete a container using the Windows Azure Storage REST API.

To delete a container you can send an HTTP DELETE request according to the following pattern:

http://myaccount.blob.core.windows.net/mycontainer?restype=container
The request must be authorized by setting the authorization header.

The following HTTP request shows how to delete a container by using the Windows Azure Storage REST API.

Delete a container by using the Windows Azure Storage REST API


Request Syntax:
DELETE http://myaccount.blob.core.windows.net/mycontainer?restype=container HTTP/1.1

Request Headers:
x-ms-version: 2011-08-18
x-ms-date: Sun, 25 Sep 2012 10:15:32 GMT

x-ms-meta-Name: DeleteContainerSample
Authorization: SharedKey myaccount:Z5HJWFKK978NFRsKNh0PNtksNc9nbXSSqGHueE00JdjidOQ=

Standard Blob Operations


Both CloudBlockBlob and CloudPageBlob implement the ICloudBlob interface that defines
the basic blob operations such as Upload, Download, List, and Delete.

The methods in ICloudBlob simplify the implementation of basic data operations for all blob types.

Upload. ICloudBlob contains four synchronous methods for uploading data to blobs:
UploadByteArray, UploadFile, UploadFromStream, and UploadText, each specifically designed
for a particular type of input. BeginUploadFromStream can be used to upload a blob asynchronously.

Download. ICloudBlob contains four synchronous methods for downloading data from a blob:
DownloadByteArray, DownloadText, DownloadToFile, and DownloadToStream, each specifically
designed for a particular type of output. BeginDownloadToStream can be used to download data
from a blob asynchronously.

Delete. ICloudBlob contains two methods for deleting a blob: Delete and
DeleteIfExists. BeginDelete and BeginDeleteIfExists can be used to delete a blob asynchronously.

Note: In newer versions of the SDK, Task-based async methods were added to the ICloudBlob interface.

For more information about ICloudBlob methods, consult MSDN documentation:


http://go.microsoft.com/fwlink/?LinkID=313940

The following code shows how to execute basic blob operations by using ICloudBlob methods.

Basic Blob operations


var blobClient = storageClient.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("MyContainer");
CloudBlob myBlob = container.GetBlobReference("MyBlob");

myBlob.UploadFile("MyPicture.jpg");
byte[] buffer = myBlob.DownloadByteArray();
myBlob.Delete();

CloudBlobContainer and CloudBlobDirectory both hold a reference to a list of blobs.

CloudBlobDirectory simulates the notion of directories for Blob storage. A blob container contains a flat
list of blobs, yet it is possible to simulate file system directories by naming blobs with names that include
the \ character.

CloudBlobDirectory is a client-side artifact that you can use to list blobs that have a name that starts
with a directory name followed by either a \ or / delimiter. For example both myDir\MyBlob1 and
myDir/MyBlob2 can be associated with the CloudBlobDirectory myDir.

You can use the ListBlobs method of CloudBlobContainer and CloudBlobDirectory to list the blobs they
reference.

The following code shows how to list blobs in a CloudBlobDirectory.

List Blobs in a CloudBlobDirectory


var storageClient = CloudStorageAccount.Parse(connectionString);
var blobClient = storageClient.CreateCloudBlobClient();
CloudBlobDirectory blobDir =
blobClient.GetBlobDirectoryReference("MyContainer/dir1/dir2/");
CloudBlob myBlob1 = blobClient.GetBlobReference("MyContainer/dir1/dir2/file1");
CloudBlob myBlob2 = blobClient.GetBlobReference("MyContainer/dir1/dir2/file2");
myBlob1.UploadFile("picture1.jpg");
myBlob2.UploadFile("picture2.jpg");

//Hierarchical list blobs and directories in this blob directory.


foreach (var blobItem in blobDir.ListBlobs())
Console.WriteLine(blobItem.Uri);

//Flat list blobs in this blob directory


BlobRequestOptions options = new BlobRequestOptions();
options.UseFlatBlobListing = true;
foreach (var blobItem in blobDir.ListBlobs(options))
Console.WriteLine(blobItem.Uri);

The following code shows how to list blobs in a CloudBlobContainer.

List Blobs in a CloudBlobContainer


var storageClient = CloudStorageAccount.Parse(connectionString);
var blobClient = storageClient.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("MyContainer");
var myBlob1 = container.GetBlobReference("file1");
var myBlob2 =container.GetBlobReference("file2");
myBlob1.UploadFile("picture1.jpg");
myBlob2.UploadFile("picture2.jpg");
foreach (var blobItem in container.ListBlobs())
Console.WriteLine(blobItem.Uri);

Blobs and containers can contain metadata information stored in a collection of key-value pairs.

You can use metadata to store user-defined information about the blob such as the creation time of the
blob or the identity of the user who created it.
The following code shows how to store metadata on a blob.

Store Metadata on a Blob


var storageClient = CloudStorageAccount.Parse(connectionString);
var blobClient = storageClient.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("MyContainer");
var myBlob = container.GetBlobReference("file1");

myBlob.Metadata["creationTime"] = DateTime.UtcNow.ToShortDateString();
myBlob.Metadata["owner"] = Thread.CurrentPrincipal.Identity.Name;
myBlob.SetMetadata();

Some information is provided automatically in a collection of built-in properties. A container has only
read-only properties, while a blob has both read-only and read-write properties. You can use properties
to read information about blobs and containers such as ETags and length.

The following code shows how to access blob and container properties.

Access Blob and Container Properties


var storageClient = CloudStorageAccount.Parse(connectionString);
var blobClient = storageClient.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("MyContainer");
var myBlob = container.GetBlobReference("file1");

// Fetch container attributes.


container.FetchAttributes();
Console.WriteLine("ETag: " + container.Properties.ETag);
Console.WriteLine("LastModifiedUTC: " + container.Properties.LastModifiedUtc);

// Set the CacheControl property.


myBlob.Properties.CacheControl = "public, max-age=25036000";
myBlob.SetProperties();

// Fetch blob attributes.


myBlob.FetchAttributes();
Console.WriteLine("Length: " + myBlob.Properties.Length);
Console.WriteLine("ETag: " + myBlob.Properties.ETag);
Console.WriteLine("LastModifiedUtc: " + myBlob.Properties.LastModifiedUtc);
Console.WriteLine("BlobType: " + myBlob.Properties.BlobType);

Demonstration: Uploading and Downloading Blobs from the Storage


Emulator
In this demonstration, you will use the Windows Azure Storage APIs to create a blob container, upload
and download files from the container, and get a list of blobs stored in a container. In addition, you will
download blobs by using HTTP GET requests from a browser. Lastly, you will use Visual Studio 2012 to
connect to the local storage emulator and view the contents of a blob container.

Demonstration Steps
1. Open the BlobsStorageEmulator.sln solution located in
D:\Allfiles\Mod09\DemoFiles\BlobsStorageEmulator.

2. Open the properties of the BlobStorage.Web Web Role located in the BlobStorageEmulator
project, and review the settings on the Settings tab. The PhotosStorage connection string points to
the storage emulator, which runs on the local computer.

3. In the BlobStorage.Web project, locate the ContainerHelper class and review the code in the
GetContainer method. The method uses the connection string from the Web Role settings to
connect to the storage account, verifies that the blob container named files exists, and creates the
container if it does not exist. The method also verifies that the container's permission level is set to
public.

Note: The name of the container that is passed into the GetContainerReference method
must be in lowercase. The default permissions for a container are private, which means the
container is not publicly accessible from the Internet, and you can only access it by using the
storage account access key.

4. In the BlobStorage.Web project, open the HomeController.cs file and review the contents of the
Index method. The method uses the ListBlobs method to get a list of blob items from the container,
similar to a flat list of files.

5. Still in the HomeController class, review the contents of the UploadFile method. The method
uploads a file to the blob container by using the GetBlockBlobReference method to get a reference
to the new block blob and the UploadFromStream method to upload the file to the new blob.

6. In the BlobStorage.Web project, open the BlobsController.cs file and view the code in the Get
method. The method uses the GetBlockBlobReference method to get a reference to an existing block
blob, and then uses the OpenRead method to download the content of the blob.
7. Run the BlobStorageEmulator project and use the file upload area to upload the
EmpireStateBuilding.jpg and StatueOfLiberty.jpg files from the D:\Allfiles\Mod09\LabFiles\Assets
folder. After uploading the files, you will see the list of the blobs in the container.

8. Use the Direct Download link to download one of the photos directly from the storage account by
using an HTTP GET request, and use the Download link to download the other photo by using the
Windows Azure Storage API.
9. Return to Visual Studio 2012 and use the Server Explorer window to view the list of blobs in the files
blob container of the local (development) storage account.

Creating Retry Policies


Windows Azure Storage is a service that
applications can access over the network.
Network transactions might fail due to temporary
conditions so retrying might be the right thing to
do when a data access operation fails.
The Windows Azure Storage client library has a
built-in retry mechanism that you can use to
instruct your storage client to retry when a data
access operation fails.

To determine how to execute retries, Windows


Azure Storage Client uses the RetryPolicies class.

There are three retry policies built into the Storage Client Library:

RetryPolicies.NoRetry. No retry is executed.

RetryPolicies.Retry. Retries N times with the same backoff interval between each attempt.

RetryPolicies.RetryExponential (default). Retries N times with an exponentially
increasing backoff interval between each attempt.

The following code shows how to use a linear retry policy.

Use Retry Policy


var storageClient = CloudStorageAccount.Parse(connectionString);
var blobClient = storageClient.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("MyContainer");
var myBlob = container.GetBlobReference("file1");
BlobRequestOptions blobRequestOptions = new BlobRequestOptions();
blobRequestOptions.Timeout = TimeSpan.FromSeconds(10.0);
blobRequestOptions.RetryPolicy = RetryPolicies.Retry(10, TimeSpan.FromSeconds(10));
myBlob.UploadFile("file.txt", blobRequestOptions);

Not all exceptions will cause the storage client to initiate a retry. Exceptions are classified as retryable or
non-retryable. For example, all HTTP status codes >= 400 and < 500 are non-retryable statuses,
which imply that the service could not process the client's request because of the request itself.

All other exceptions are retryable. For example, if a client-side timeout was triggered, it makes sense
to initiate a retry.

After retryable exceptions are caught, the Storage Client Library evaluates the RetryPolicy and decides
whether to initiate a retry. The exception is presented to the client only if the RetryPolicy determines that
there is no need to retry the operation. For example, if the RetryPolicy was configured to execute 3 retry
attempts, the exception is rethrown to the client only when the third attempt fails.

For more information about RetryPolicy properties, consult MSDN documentation:


http://go.microsoft.com/fwlink/?LinkID=298847&clcid=0x409

It is possible to construct custom retry policies and customize the retry algorithm to fit your specific
scenario. For example, you can set a retry algorithm per exception type.

A RetryPolicy is actually a delegate that, when evaluated, returns a
Microsoft.WindowsAzure.StorageClient.ShouldRetry delegate that contains your custom
implementation.

The following code shows how to create and use a custom retry policy.

A Custom Retry Policy


public static RetryPolicy OnlyForTimeoutLinearRetry(int retryCount, TimeSpan intervalBetweenRetries)
{
    return () =>
    {
        return (int currentRetryCount, Exception lastException, out TimeSpan retryInterval) =>
        {
            retryInterval = intervalBetweenRetries;
            if (lastException is TimeoutException)
            {
                // Decide if we should retry
                return currentRetryCount < retryCount;
            }
            // Non-timeout exceptions are never retried
            return false;
        };
    };
}

//Use the custom retry policy
var storageClient = CloudStorageAccount.Parse(connectionString);
var blobClient = storageClient.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("MyContainer");
var myBlob = container.GetBlobReference("file1");
BlobRequestOptions blobRequestOptions = new BlobRequestOptions();
blobRequestOptions.Timeout = TimeSpan.FromSeconds(10.0);
blobRequestOptions.RetryPolicy = OnlyForTimeoutLinearRetry(10, TimeSpan.FromSeconds(10));
myBlob.UploadFile("file.txt", blobRequestOptions);
Developing Windows Azure and Web Services 9-19

Lesson 3
Windows Azure Table Storage
Windows Azure Table storage introduces a scalable key-value store designed for storing entities. Table
storage is useful for storing entities in simple yet scalable scenarios, when advanced scenarios, such as
joining and advanced filtering are not needed.

In this lesson, you will explore the Windows Azure Table storage features and learn how to use them.

Lesson Objectives
After completing this lesson, you will be able to:

Describe the basic Table Storage features and compare Table storage with a relational database.

Create and delete a table.

Define table entities in code.


Execute basic operations such as Create, Retrieve, Update and Delete (CRUD) on table entities.

Table Storage vs. Relational Databases


Traditional relational data is stored in powerful yet expensive RDBMS based on database engines such as
SQL Server. Database engines provide powerful relational data management capabilities through SQL
queries, ACID transactions, and stored procedures. Although RDBMS are powerful, there is one area in
which they inherently fall short: scalability. This limitation, which affects large-scale applications, is one
of the driving forces behind the NoSQL movement. One of the simplest types of NoSQL data stores is the
key-value store.

Windows Azure Table Storage is a key-value store. Key-value stores are designed to store simple data in a
scalable manner. You can use Table storage to store a large set of structured entities at a low cost and
issue simple queries to retrieve entities when required.
Similar to other key-value stores, Windows Azure Table storage was designed for linear scale and enforces
no schema on the entities stored in a table. This means you can store different types of entities in the
same table.
Windows Azure Table storage does not provide any way to represent relationships between entities, and
thus does not support join operations.
Tables, like all other storage offerings, are accessed through a URI. Table URIs are formed according to the
following format: http://<storage account name>.table.core.windows.net/<table name>

Windows Azure Table storage can store simple entities that can be easily mapped to .NET objects with
properties of the following types: byte[], bool, DateTime, double, Guid, Int32/int, Int64/long, and String.
An entity can contain a maximum of 255 properties and is limited to 1 MB in size, yet the only limitation
on table size is the 100 terabytes allowed per storage account.

Every entity must contain three basic properties: partition key, row key, and a timestamp.

Entities with the same partition key can be queried more efficiently, and inserted/updated in atomic
operations. An entity's row key is its unique identifier within a partition.
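For example, because the row key is unique within a partition, a query that filters on both the partition key and the row key resolves to a single entity and is the most efficient kind of lookup. The following sketch assumes the v1 .NET SDK shown in this lesson, a connectionString variable holding your storage connection string, and the hypothetical customers table and Person entity used elsewhere in this module:

```csharp
// Sketch: a point query that combines PartitionKey and RowKey.
// Filtering on both keys lets the table service locate the entity
// directly, instead of scanning partitions.
var storageClient = CloudStorageAccount.Parse(connectionString);
var tableClient = storageClient.CreateCloudTableClient();
var context = tableClient.GetDataServiceContext();

var person = context.CreateQuery<Person>("customers")
    .Where(p => p.PartitionKey == "USA" && p.RowKey == "12345")
    .FirstOrDefault();
```

Queries that supply only the partition key scan one partition; queries that supply neither key may scan the entire table, so design your keys around the lookups you expect to perform.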

Working with Tables: Creating and Deleting


Azure storage has APIs for a large number of platforms and a basic REST API that can be used
by any platform that can issue HTTP requests.

To create a table by using the Windows Azure .NET SDK, you use the CloudTableClient class, which
allows you to get object references to tables and entities and perform basic operations such as
creating and deleting tables.

The following code shows how to create a Windows Azure Storage table by using the Windows
Azure .NET SDK.

Create a Windows Azure Storage Table


var storageClient = CloudStorageAccount.Parse(connectionString);
var tableClient = storageClient.CreateCloudTableClient();
tableClient.CreateTableIfNotExist("customers");

The following code shows how to delete a Windows Azure Storage table by using the Windows Azure
.NET SDK.

Delete a Windows Azure Storage Table


var storageClient = CloudStorageAccount.Parse(connectionString);
var tableClient = storageClient.CreateCloudTableClient();
tableClient.DeleteTableIfExist("customers");

Like any other storage task, it is possible to create a table using the Windows Azure Storage HTTP-based
API.

To create a table, you can send an HTTP POST request according to the following pattern:

http://myaccount.table.core.windows.net/Tables
The request must be authorized by setting the Authorization header. Additionally, the Content-Type,
Content-Length and Date headers must be provided.

The following HTTP request body shows how to create a Windows Azure Storage table by using the
HTTP-based API.

A request body for creating a Windows Azure Storage table using HTTP-based API
<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<entry xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices"
xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"
xmlns="http://www.w3.org/2005/Atom">
<title />
<updated>2013-01-10T12:48:31.1230639+02:00</updated>
<author>
<name/>
</author>
<id/>
<content type="application/xml">
<m:properties>
<d:TableName>customers</d:TableName>
</m:properties>
</content>
</entry>

To delete a table, you can send an HTTP DELETE request according to the following pattern:

http://myaccount.table.core.windows.net/Tables('mytable')
The request must be authorized by setting the Authorization header. Additionally, the Content-Type,
Content-Length, and Date headers must be provided.

Creating and deleting tables is possible using a variety of cloud storage management tools.

Creating Entity Structures in Code


Entities are a set of properties, represented in .NET as a simple object. Every
entity must contain three basic properties:
PartitionKey, RowKey, and Timestamp. You can
derive from the TableEntity class or manually define
these three properties. All other properties must
be public properties of one of the supported
types: byte[], bool, DateTime, double, Guid,
Int32/int, Int64/long, and String.
The following code shows how to create a table
entity by deriving from TableEntity.

Create a Table Entity by Deriving from TableEntity


public class Person : TableEntity
{
public Person() { }
public Person(string partitionKey, string rowKey)
{
PartitionKey = partitionKey;
RowKey = rowKey;
}

public string Name { get; set; }


public int Age { get; set; }
public string Address { get; set; }
}

Deriving from TableEntity might be problematic when you use other class hierarchies or DataContract
serialization, because TableEntity is not DataContract-serializable. In such scenarios, it is possible to
create an entity as a simple object that contains the PartitionKey, RowKey, and Timestamp properties,
and decorate the type with the DataServiceKey attribute.

The following code shows how to create a table entity as a simple object.

Create a Table Entity as a Simple Object


[DataServiceKey("PartitionKey","RowKey")]
public class Employee
{
public Employee() { }

public Employee(string id, string firstName, string lastName, string department, int salary)
{
PartitionKey = department;
RowKey = id;
FirstName = firstName;
LastName = lastName;
Salary = salary;
}

public string PartitionKey { get; set; }


public string RowKey { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
public int Salary { get; set; }
}

The above code sample contains the PartitionKey and RowKey properties, among other entity-related
properties. However, the code does not include the Timestamp property. The Timestamp property exists
for every entity, and its value is automatically updated in the table to the last modification date of the
entity. If you include the property in the class, it is populated with the value from the table. You can
also update the Timestamp property manually to any DateTime value.

Working with Entities: Query, Add, Update, and Delete


The Windows Azure SDK exposes the TableServiceContext class for performing table operations.
The TableServiceContext class provides an API based on the unit-of-work pattern to perform table
operations.

You can get an instance of the TableServiceContext class from the CloudTableClient object and use
it to insert new entities, delete or update existing entities, and create a query object.

The following code shows how to insert a new entity.

Insert a New Entity


//Insert a new entity to the customers table
var myPerson = new Person("USA", Guid.NewGuid().ToString());
var storageClient = CloudStorageAccount.Parse(connectionString);
var tableClient = storageClient.CreateCloudTableClient();
tableClient.CreateTableIfNotExist("customers");
var context = tableClient.GetDataServiceContext();
context.AddObject("customers", myPerson);
context.SaveChanges();

To retrieve entities, use the CreateQuery method, which accepts the table name. The CreateQuery
method returns an IQueryable implementation, which can be queried using LINQ.

The following code shows how to create a query and retrieve an entity.

Create a Query and Retrieve an Entity


var storageClient = CloudStorageAccount.Parse(connectionString);
var tableClient = storageClient.CreateCloudTableClient();
tableClient.CreateTableIfNotExist("customers");
var context = tableClient.GetDataServiceContext();
var kids = context.CreateQuery<Person>("customers").Where(p => p.Age < 18);
var firstKid = kids.FirstOrDefault();

To update an entity, use the UpdateObject method, followed by SaveChanges.

The following code shows how to update an entity.

Update an Entity
var context = tableClient.GetDataServiceContext();
var kids = context.CreateQuery<Person>("customers").Where(p => p.Age < 18);
var firstKid = kids.FirstOrDefault();
if (firstKid != null)
{
    firstKid.Address = "London";
    context.UpdateObject(firstKid);
    context.SaveChanges();
}

To delete an entity, use the DeleteObject method, followed by SaveChanges.

The following code shows how to delete an entity.

Delete an Entity
var context = tableClient.GetDataServiceContext();
var kids = context.CreateQuery<Person>("customers").Where(p => p.Age < 18);
var firstKid = kids.FirstOrDefault();
if (firstKid != null)
{
    context.DeleteObject(firstKid);
    context.SaveChanges();
}

Demonstration: Working with Tables and Reshaping Entities


In this demonstration, you will learn how to create a storage table and query it by using LINQ
queries. You will also learn how to add entities to a table.

Demonstration Steps
1. Open the Windows Azure Management Portal (http://manage.windowsazure.com)
2. If you did not perform the demo in the first lesson, create a new Windows Azure Storage account
named demostorageaccountyourinitials (where yourinitials is your initials, in lowercase).
Locate the storage account's primary key and copy it to the clipboard.

3. Open the TableStorage.sln solution located at D:\Allfiles\Mod09\DemoFiles\TableStorage in Visual
Studio 2012.

4. Open the Web.config file, locate the StorageAccount application setting, and replace the
placeholders in the text with the storage account name and the account access key you copied to the
clipboard.

5. Open the Country.cs file from the project's Models folder. The Country class derives from
TableServiceEntity, so it can be added to Table storage. The RowKey property contains the
name of the country as the unique identifier of the entity, and the PartitionKey property, which is
used for partitioning and scalability, contains the continent name of the country.

6. Open the CountriesController.cs file from the project's Controllers folder, and examine the content
of the GetTableContext method. The method calls the CreateIfNotExists method to verify that the
table exists, create it if it does not exist, and finally, return a TableServiceContext object which is
used for querying and adding entities to the table.

7. Examine the content of the Index method. The method uses the CreateQuery<T> generic method
to create Table storage queries. The first query retrieves all the countries in the Table, and the second
query filters the list of entities according to the PartitionKey property, by using a LINQ statement.
8. Examine the content of the Add method. The method uses the AddObject method to add a new
country to the local context, and then uses the SaveChanges method to persist the changes to the
Table storage.

9. Run the web application and add a country with the following information:

Language: Italian

Continent: Europe
Name: Italy

Verify you see the new country in the list of countries shown at the top of the page.
10. Add another country with the following information:

Language: Chinese

Continent: Asia
Name: China

Verify you see both added countries in the list of countries shown at the top of the page.

11. Append countries?continent=Europe to the browser's address bar and verify that only European
countries are shown.

12. In Visual Studio 2012, add your Windows Azure Storage account to the list of storage accounts in the
Server Explorer window. Use the account name and key you used in the Web.config file.
13. Open the Countries table from the storage account you added and verify you see the PartitionKey,
RowKey, TimeStamp, and Language columns.

14. Open the Country.cs file from the project's Models folder, and add a new int property named
Population to the Country class.

15. Open the CountriesController.cs file from the project's Controllers folder, locate the Add method,
and set the Population property of the country object by getting the value of the Population key
from the collection object. You will need to parse the value to an int value.

16. Run the web application and add a country with the following information:

Population: 65350000

Language: French

Continent: Europe

Name: France

Verify you see the new country in the list of countries shown at the top of the page.

17. Return to Visual Studio 2012, refresh the contents of the Countries table, and verify that the
Population column was added to the table.

Lesson 4
Windows Azure Queue Storage
Queues are important for implementing high-scale web applications. Windows Azure Storage provides
a simple yet scalable queuing system that you can leverage for communication between your
application's roles.

In this lesson, you will explore the Windows Azure Queue Storage features and learn how to use them.

Lesson Objectives
After completing this lesson, you will be able to:

Describe the difference between Windows Azure Storage queues and Service Bus queues.

Describe how to create a Windows Azure Storage queue.

Describe how to send a message to a Windows Azure Storage queue.


Describe how to pull and delete a message from a Windows Azure Storage queue.

Work with Windows Azure Storage queues.

Windows Azure Queues vs. Windows Azure Service Bus Queues


Windows Azure applications are distributed in nature. Multiple roles collaborate by exchanging
information. Web roles receive requests from clients and delegate complex computations to back-end
worker roles. High-scale web applications use one-way communications and producer-consumer
patterns to achieve high scalability. Together with the roles infrastructure, Windows Azure provides a
queuing system through which messages are sent between roles.

Windows Azure queues are part of the Windows Azure storage offering and are accessed by using a
CloudStorageAccount object. As such, they are independent of other Windows Azure offerings and can
be used by anyone who can issue HTTP requests. Nonetheless, Windows Azure queues are optimally
designed to be used by roles.

Windows Azure queues are a simple queuing infrastructure. They are not transactional, but the queue
guarantees that messages are delivered at least once. Message size is limited to 64 kilobytes (KB); there
are no dead-letter queues, but poison messages are supported. Windows Azure queues can contain
millions of messages - up to the 100 terabytes total capacity limit of a storage account - and retain them
for up to seven days.

The security model of Azure Queues is designed for Windows Azure roles to use. To access a queue, you
must supply the Windows Azure storage credentials. Such credentials cannot be distributed to clients for
obvious reasons.
Service Bus Queues

It is important to know that Windows Azure queues are not the only queue offering in Windows Azure.
The Service Bus infrastructure provided by Windows Azure was designed for collaboration and integration
of applications at large scale. As such, its security model was designed for customers to use. Like all
Service Bus resources, queues can be authenticated with the Access Control Service (ACS) and leverage
federated authentication. Service Bus queue messages can be larger than Azure queue messages, up to a
limit of 256 KB. The Service Bus queue is transactional and can deliver messages at most once. The
Service Bus queue supports batching and locking at the queue level and provides built-in WCF
integration.

The following table contains a feature comparison between Azure Queues and the Service Bus Queues.

Comparison Criteria           Windows Azure Queues                      Service Bus Queues
Ordering guarantee            No                                        FIFO
Delivery guarantee            At-Least-Once                             At-Least-Once, At-Most-Once
Transaction support           No                                        Yes
Receive behavior              Non-blocking                              Blocking with/without timeout, long polling, and non-blocking (.NET API)
Receive mode                  Peek & Lease                              Peek & Lock, Receive & Delete
Exclusive access mode         Lease-based                               Lock-based
Lease/Lock duration           30 seconds (default), 7 days (maximum)    60 seconds (default), 5 minutes (maximum)
Lease/Lock granularity        Message level                             Queue level
Batched receive               Yes                                       Yes
Batched send                  No                                        Yes
Scheduled delivery            Yes                                       Yes
Automatic dead lettering      No                                        Yes
Message deferral              Yes                                       Yes
Poison message support        Yes                                       Yes
In-place update               Yes                                       No
Server-side transaction log   Yes                                       No
Storage metrics               Yes                                       No
Purge queue function          Yes                                       No
Message groups                No                                        Yes
Duplicate detection           No                                        Yes
WCF integration               No                                        Yes

For more information about the comparison between Azure Queues and Service Bus Queues, consult
the MSDN documentation:
http://go.microsoft.com/fwlink/?LinkID=298848&clcid=0x409

Creating and Deleting Queues


It is possible to create and delete a queue by using all storage APIs and by using cloud storage
management tools.
The Windows Azure .NET SDK introduces the CloudQueueClient class, which lets you get object
references for queues and perform basic operations.

The following code shows how to create a new queue by using the Windows Azure .NET SDK API.

Create a New Queue Using the Windows Azure .NET SDK
var storageClient = CloudStorageAccount.Parse(connectionString);
var queueClient = storageClient.CreateCloudQueueClient();
var myQueue = queueClient.GetQueueReference("myqueue");
myQueue.CreateIfNotExist();

The following code shows how to delete an existing queue using the Windows Azure .NET SDK.

Delete an Existing Queue Using The Windows Azure .NET SDK


var storageClient = CloudStorageAccount.Parse(connectionString);
var queueClient = storageClient.CreateCloudQueueClient();
var myQueue = queueClient.GetQueueReference("myqueue");
myQueue.Delete();

Like any other storage task, it is possible to create a queue by using the Windows Azure Storage
HTTP-based API.

To create a queue, you can send an HTTP PUT request according to the following pattern:

http://myaccount.queue.core.windows.net/myqueue

The request must be authorized by setting the Authorization and the Date headers. The x-ms-version
header is optional.

To delete a queue, you send an HTTP DELETE request according to the same pattern:

http://myaccount.queue.core.windows.net/myqueue

The request must be authorized by setting the Authorization and the Date headers. The x-ms-version
header is optional.

It is possible to create and delete queues with all Azure Storage management tools and with Visual Studio.

The following figure shows how to create a new queue using Visual Studio Server Explorer.

FIGURE 9.2: CREATE A QUEUE WITH VISUAL STUDIO SERVER EXPLORER

Sending Messages to a Queue


To add a message to an existing queue, first create a new instance of the CloudQueueMessage class
and then call the AddMessage method of a CloudQueue object.
A CloudQueueMessage represents a message in a queue and contains information such as the
message ID, ExpirationTime, InsertionTime, NextVisibleTime, and DequeueCount, which is
the number of times the message has been dequeued.

The following code shows how to add a message to an existing queue by using the Windows Azure
.NET SDK.

Add a Message Into an Existing Queue


var storageClient = CloudStorageAccount.Parse(connectionString);
var queueClient = storageClient.CreateCloudQueueClient();
var myQueue = queueClient.GetQueueReference("myqueue");
CloudQueueMessage message = new CloudQueueMessage("Hello, World");
myQueue.AddMessage(message);

You can also add a message to a queue by using the HTTP-based API.

The message content must be in a format that can be encoded with UTF-8. The request must be
authorized by setting the Authorization and the Date headers. The x-ms-version header is optional.

The following snippet shows how to add a new message to an existing queue by using HTTP.

Add a New Message Into an Existing Queue Using HTTP


POST http://myaccount.queue.core.windows.net/myqueue/messages?visibilitytimeout=30&timeout=30
HTTP/1.1

Headers:
x-ms-version: 2011-08-18
x-ms-date: Tue, 30 Aug 2012 04:03:21 GMT
Authorization: SharedKey myaccount:sr8rIheJmCd6npMSx7DfAY3L//V3uWvSXOzUBCV9wnk=
Content-Length: 100

Body:
<QueueMessage>
<MessageText>Hello World</MessageText>
</QueueMessage>

Pulling Messages from Queues: Peek and DeQueue


There are three patterns for receiving messages from a queue:

1. Two-phase dequeue

2. Peek messages
3. Batch dequeue / batch peek

In the two-phase dequeue pattern, you call the GetMessage method of the CloudQueue object.
The method is non-blocking, meaning that it returns even if there is no message in the queue.
A message returned from GetMessage becomes invisible to any other client that tries to read messages
from the queue. You can set the invisibility time by supplying an invisibility duration as a parameter.
After the invisibility period elapses, the message becomes visible to other code unless you delete it first,
so after handling the message you must delete it by calling DeleteMessage.

The two-phase dequeue pattern ensures that if your code fails to process a message, another instance
can get the same message and try again. When a message becomes visible again and is read by another
instance, FIFO ordering is disrupted. You have to design your code and messages in such a way that they
support multiple processing of the same message.
The following code shows how to dequeue a message from a queue.

Dequeue a Message From a Queue


var storageClient = CloudStorageAccount.Parse(connectionString);
var queueClient = storageClient.CreateCloudQueueClient();
var myQueue = queueClient.GetQueueReference("myqueue");
var message = myQueue.GetMessage(TimeSpan.FromSeconds(60));
//Process message
myQueue.DeleteMessage(message);

Using the peek pattern, you call the PeekMessage method of the CloudQueue object to look at the
message at the front of a queue without removing it from the queue. The method is non-blocking,
meaning that it returns even if there is no message in the queue. A message returned from
PeekMessage remains visible to any other client that tries to read messages from the queue.

The following code shows how to peek at a message in the front of a queue.

Peek at a Message in The Front of a Queue


var storageClient = CloudStorageAccount.Parse(connectionString);
var queueClient = storageClient.CreateCloudQueueClient();
var myQueue = queueClient.GetQueueReference("myqueue");
var message = myQueue.PeekMessage();
//Process message

It is possible to dequeue or peek at messages from the queue in a batch operation to reduce the number
of network calls and improve performance.

The following code shows how to dequeue and peek messages in a batch.

Batch Dequeue and Peek


var storageClient = CloudStorageAccount.Parse(connectionString);
var queueClient = storageClient.CreateCloudQueueClient();
var myQueue = queueClient.GetQueueReference("myqueue");
// A single batch can return at most 32 messages
var messages = myQueue.GetMessages(32, TimeSpan.FromSeconds(60));
foreach (var cloudQueueMessage in messages)
{
//Process message
myQueue.DeleteMessage(cloudQueueMessage);
}

messages = myQueue.PeekMessages(32); // a single batch can peek at most 32 messages
foreach (var cloudQueueMessage in messages)
{
//Process message
}

Demonstration: Working with queues


In this demonstration, you will create a Windows Azure queue, add messages to it, and then dequeue
messages from it. To dequeue messages, you will use the GetMessage and DeleteMessage methods.

Demonstration Steps
1. Open the Windows Azure Management Portal (http://manage.windowsazure.com)
2. If you did not perform the demo in the first lesson, create a new Windows Azure Storage account
named demostorageaccountyourinitials (where yourinitials is your initials, in lowercase).
Locate the storage account's primary key and copy it to the clipboard.

3. Open the WorkingWithAzureQueues.sln solution located at
D:\Allfiles\Mod09\DemoFiles\WorkingWithAzureQueues in Visual Studio 2012.

4. Open the App.config file located in the WorkingWithAzureQueues.Sender project. Locate the
StorageConnectionString connection string and replace the placeholders in the text with the
storage account name and the account access key you copied to the clipboard.

5. Open the App.config file located in the WorkingWithAzureQueues.Receiver project. Locate the
StorageConnectionString connection string and replace the placeholders in the text with the
storage account name and the account access key you copied to the clipboard.

6. Open the Program.cs file from the WorkingWithAzureQueues.Sender project, and review the
Main method. The code in the method creates a new Windows Azure Queue named messagequeue
by calling the GetQueueReference and CreateIfNotExists methods, and then sends messages to the
queue by creating new CloudQueueMessage objects and sending them to the queue by calling the
AddMessage method.

7. Open the Program.cs file from the WorkingWithAzureQueues.Receiver project, and review the
Main method. Focus on the use of the GetMessage and DeleteMessage methods of the
CloudQueue class.

8. Configure the WorkingWithAzureQueues.sln to have multiple startup projects, starting both the
WorkingWithAzureQueues.Sender and WorkingWithAzureQueues.Receiver projects.

9. Run the projects without debugging and view how each message sent to the queue in the Sender
console window is retrieved from queue in the Receiver console window.

10. Close the Sender console window. Wait for the Receiver application to finish handling the queued
messages, and then close the Receiver console window.

Lesson 5
Restricting Access to Windows Azure Storage
Information in Azure Storage can be private or sensitive. You can define elaborate access control policies
that keep data private while granting information owners the access they need to perform actions.

In this lesson, you will explore Windows Azure Storage data access capabilities and learn how to define
access policies.

Lesson Objectives
After completing this lesson, you will be able to:

Configure access levels for a blob container.

Describe Shared Access Signatures.

Configure Shared Access Signatures for a storage resource.


Reference a container access policy when creating a Shared Access Signature for a blob.

Configuring Access Level for Blob Containers and their Content


Windows Azure blob containers store information that might need to be publicly accessible or, on the
contrary, kept private. To set the proper permissions, blob containers can be configured with various
access policies.
By default, containers are private, meaning that only blob owners who have the storage account
credentials can access the containers. If public access to a container and its blobs is required, you
can set the container permissions to allow public access. This means that anybody can read the
contents of the blob without the need to authenticate the request.

There are three possible container policies you can use:

1. Full public read access. Container and blob data can be accessed for reads via anonymous requests,
but enumeration of containers in the storage account is blocked. Enumeration of blobs inside a
container, however, is permitted.

2. Public read access for blobs only. Blob data can be accessed for reads via anonymous requests, but
enumeration of blobs in a container is blocked.

3. Private only. All anonymous requests are blocked.


To set a blob container policy, you have to create a BlobContainerPermissions object and set its
PublicAccess property to one of the BlobContainerPublicAccessType values. Finally, call the
SetPermissions method on the CloudBlobContainer object and pass the permissions object.
The following code shows how to set a container access policy to public read access for blobs only.

Set Container Access Policy


var storageClient = CloudStorageAccount.Parse(connectionString);
var blobClient = storageClient.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("MyContainer");
var permissions = new BlobContainerPermissions() { PublicAccess = BlobContainerPublicAccessType.Blob };
container.SetPermissions(permissions);

Shared Access Signatures


One of the strengths of the Windows Azure Storage services is that clients can access them directly,
relieving the load from the application roles. However, client data should be kept private. It is
unacceptable to provide clients with the storage credentials for obvious reasons, so there should be a
way to grant specific clients the permissions they need for a short period of time.
A Shared Access Signature is a short-lived URL that grants specific access rights to storage resources
such as containers, blobs, tables, and queues for a certain period of time.

Your client can call your web service, which returns a Shared Access Signature for a specific resource.
The client then has a short window of time in which it can perform the operations you allow on that
specific resource.

The access rights granted in a Shared Access Signature define which operations can be performed on the
resource.

For blobs:

Reading and writing blob content, block lists, properties, and metadata

Deleting, leasing, and creating a snapshot of a blob

Listing the blobs within a container

For queues:
Adding a queue message

Removing a queue message

Updating a queue message


Deleting a queue message
Getting queue metadata, including the message count

For tables:

Inserting table entities


Updating table entities

Querying table entities


Deleting table entities

All the information about the granted access levels, the specific resource and the allotted time frame is
incorporated within the Shared Access Signature URL as query parameters. In addition, the Shared Access
Signature URL contains a signature that the storage services use to validate the request.

It is possible to specify all access control information in the URL or to embed a reference to an access
policy. With access policies, you can modify or revoke access to the resource if necessary.
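
For example, revocation can be as simple as removing the policy from the container. The following sketch assumes a container object and a policy name such as those used in the examples in this lesson; the method and helper names are illustrative, not taken from the lab projects.

```csharp
// Hypothetical sketch: revoking a previously granted container access policy.
// Any Shared Access Signature URL that references this policy stops working
// as soon as the policy is removed from the container.
static void RevokePolicy(CloudBlobContainer container, string policyName)
{
    var permissions = container.GetPermissions();        // fetch the current permissions
    permissions.SharedAccessPolicies.Remove(policyName); // remove the named policy
    container.SetPermissions(permissions);               // persist the change
}
```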

For more information about the structure of the Shared Access Signature URL, consult MSDN
documentation:
http://go.microsoft.com/fwlink/?LinkID=298849&clcid=0x409

Configuring Shared Access Signatures


To create a Shared Access Signature for a blob,
call the GetSharedAccessSignature method of a
CloudBlob object, and specify the permissions in
the SharedAccessPolicy parameter.

The following code shows how to create a Shared Access Signature for a blob.

Create a Shared Access Signature for a Blob


static string CreateSharedAccessSignature(string blobName, string path)
{
    var storageClient = CloudStorageAccount.Parse(connectionString);
    var blobClient = storageClient.CreateCloudBlobClient();
    // Container names must be lowercase
    var container = blobClient.GetContainerReference("mycontainer");
    var blob = container.GetBlobReference(blobName);
    blob.UploadFile(path);

    var sas = blob.GetSharedAccessSignature(new SharedAccessPolicy()
    {
        Permissions = SharedAccessPermissions.Read | SharedAccessPermissions.Write,
        SharedAccessExpiryTime = DateTime.UtcNow + TimeSpan.FromMinutes(5)
    });
    return blob.Uri.AbsoluteUri + sas;
}
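
The string this method returns already embeds the signature, so a client can use it with any HTTP stack, without storage credentials. The following is a hypothetical client-side usage sketch; the file names are placeholders.

```csharp
// Hypothetical usage: download a blob directly by using the SAS URL.
// No storage account credentials are needed on the client side.
string sasUrl = CreateSharedAccessSignature("photo.jpg", @"C:\Photos\photo.jpg");

using (var client = new System.Net.WebClient())
{
    byte[] content = client.DownloadData(sasUrl); // plain HTTP GET
}
```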

Configuring Shared Access Signatures Using Policies


Instead of defining all access rights explicitly
in the Shared Access Signature of the blob, you
can reference a container access policy.

Set the container to be private by creating a
BlobContainerPermissions object and setting its
PublicAccess property to
BlobContainerPublicAccessType.Off.

Create a SharedAccessPolicy object, set the
appropriate access rights for the container, and
then add it to the container's
SharedAccessPolicies collection.

Create a SharedAccessPolicy object and set the appropriate access rights for the blob.

Finally, create a Shared Access Signature URL by calling GetSharedAccessSignature on the CloudBlob
object, passing the blob SharedAccessPolicy object and the key of the container policy in the container's
SharedAccessPolicies collection as parameters.

The following code shows how to create a Shared Access Signature for a blob by using a reference to a
container policy.

Create a Shared Access Signature for a Blob Using a Reference to a Container Policy
static string CreateReferencedSharedAccessSignature(string blobName, string path)
{
    var storageClient = CloudStorageAccount.Parse(connectionString);
    var blobClient = storageClient.CreateCloudBlobClient();
    // Container names must be lowercase
    var container = blobClient.GetContainerReference("mycontainer");
    var blob = container.GetBlobReference(blobName);
    blob.UploadFile(path);

    BlobContainerPermissions permissions = new BlobContainerPermissions();
    permissions.PublicAccess = BlobContainerPublicAccessType.Off;

    SharedAccessPolicy containerPolicy = new SharedAccessPolicy()
    {
        Permissions = SharedAccessPermissions.Read
    };
    permissions.SharedAccessPolicies.Clear();
    permissions.SharedAccessPolicies.Add("MyReadOnlyPolicy", containerPolicy);
    container.SetPermissions(permissions);

    SharedAccessPolicy blobPolicy = new SharedAccessPolicy()
    {
        Permissions = SharedAccessPermissions.Read,
        SharedAccessExpiryTime = DateTime.UtcNow.AddDays(1d)
    };
    var sas = blob.GetSharedAccessSignature(blobPolicy, "MyReadOnlyPolicy");
    return blob.Uri.AbsoluteUri + sas;
}

Demonstration: Configuring Shared Access Signature for a Blob Container


In this demonstration, you will create a shared access policy for a blob container, and then use the policy
to create a shared access token for viewing blobs. In addition, you will see how to modify an existing
access policy.

Demonstration Steps
1. Open the solution D:\Allfiles\Mod09\DemoFiles\SharedAccessSignature\SharedAccessSignature.sln in
Visual Studio 2012.

2. Open the AzureHelper.cs file from the SASDemo project, under the Model folder, and review the
code in the GetBlobContainer method. After the method creates the blob container, it sets its
permissions to prevent public access by setting the BlobContainerPermissions.PublicAccess
property to BlobContainerPublicAccessType.Off. In addition, the method creates a
SharedAccessBlobPolicy object and sets the shared access policy to be valid for one minute from
the time the container is created. Finally, the method applies the permissions to the blob container by
calling the SetPermissions method.

3. Locate the GetPicturesReferences method and view its content. The method iterates the list of blobs
and returns information about each blob. The information contains the public URL, which is not
accessible, and an accessible URL which is created with a shared access key. To create the shared
access key, the method calls the CloudBlobContainer.GetSharedAccessSignature method with the
name of the access policy which was created for the blob container.

4. Locate the ExtendPolicy method and view its content. The method updates the expiration time of
the access policy by updating the SharedAccessExpiryTime property.
5. Run the SharedAccessSignatureDemo project and use the file upload area to upload the
EmpireStateBuilding.jpg file from D:\Allfiles\Mod09\LabFiles\Assets folder. After the file is
uploaded to the blob, click the Will not work link and verify the public access to the blob is not
available.

6. Return to the home page. If the expiration time has passed, click Extend Policy. Click the Will Work
until link, and verify you see the uploaded photo.

7. Observe the address in the address bar. The query string contains several parameters that make up
the shared access signature:

• sv: Signed version. The version of the Windows Azure storage service.

• sr: Signed resource. Specifies whether the signature is for a single blob (b) or the entire container (c).

• si: Signed identifier. The name of the shared access policy used for this signature.

• sig: Signature. The hashed authentication signature.
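
For illustration only, a Shared Access Signature URL that references a container policy might look like the following. All values here are hypothetical; the actual account name, policy name, and signature will differ in your environment.

```
https://myaccount.blob.core.windows.net/photos/EmpireStateBuilding.jpg
    ?sv=2012-02-12&sr=b&si=MyReadOnlyPolicy&sig=a39%2BYozJhGp6miujGymjRpN8tsrQfLo9Z3i8IRyIpnQ%3D
```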

8. Return to the home page, wait for the expiration time to pass, and then click the Will work until link.
Verify you receive an error message saying the authentication has failed.
9. Return to the home page, click Extend Policy, wait for the page to refresh the expiration time, and
then click the Will Work until link, and verify you see the uploaded photo.

Lab: Windows Azure Storage


Scenario
Blue Yonder Airlines is planning to add a new feature to its Travel Companion app: uploading photos
from trips so other people can view them when searching for destinations to visit. In this lab, you will add
the upload service, as well as the service for viewing uploaded photos.

In addition, the app should support two types of upload: public upload, which makes the photo available
for everyone to see, and private upload, which permits only the owner of the photo to view it. In this lab,
you will create shared access signatures for the end user's blob container to allow users to view their
private content.

Objectives
After completing this lab, you will be able to:

• Upload and download files to Windows Azure blobs.

• Insert and query photo metadata with Windows Azure tables.

• Generate shared access keys for blob resources.

Lab Setup
Estimated Time: 60 minutes

Virtual Machine: 20487B-SEA-DEV-A, 20487B-SEA-DEV-C


User name: Administrator, Admin

Password: Pa$$w0rd, Pa$$w0rd

For this lab, you will use the available virtual machine environment. Before you begin this lab, you must
complete the following steps:

1. On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.

2. In Hyper-V Manager, click MSL-TMG1, and in the Action pane, click Start.

3. In Hyper-V Manager, click 20487B-SEA-DEV-A, and in the Action pane, click Start.
4. In the Action pane, click Connect. Wait until the virtual machine starts.

5. Sign in using the following credentials:

User name: Administrator

Password: Pa$$w0rd

6. Return to Hyper-V Manager, click 20487B-SEA-DEV-C, and in the Action pane, click Start.

7. In the Action pane, click Connect. Wait until the virtual machine starts.

8. Sign in using the following credentials:

User name: Admin

Password: Pa$$w0rd

9. Verify that you received credentials to log in to the Azure portal from your training provider. These
credentials and the Azure account will be used throughout the labs of this course.

Exercise 1: Storing Content in Windows Azure Storage


Scenario
When a user uploads a photo from a specific trip to the ASP.NET Web API service, the photo is uploaded
by the service to a special blob container for that trip. To support that feature, you will first create a
Windows Azure Storage account, where these blobs will be stored, and then add the required code to the
ASP.NET Web API service to connect to the storage account, create the required blob container, and
upload the file to the container.

Note: Windows 8 Store Apps can write directly to Windows Azure Storage. However, due
to the business logic of how data is stored in Blob storage, and in Table storage, which will be
used in the next exercise, it was decided that these features will be implemented on the server
side.

The main tasks for this exercise are as follows:

1. Create a storage account

2. Add a storage connection string to the Cloud project


3. Create blob containers and upload files to them

4. Explore the asynchronous file upload action

Task 1: Create a storage account


1. Log on to the virtual machine 20487B-SEA-DEV-A as Administrator with the password Pa$$w0rd
and run the Setup.cmd script from the D:\AllFiles\Mod09\LabFiles\Setup folder. When prompted
for information, provide it according to the instructions. Write down the name of the cloud service
that is shown in the script. You will use it later during the lab.

Note: You may see a warning saying the client model does not match the server model.
This warning may appear if there is a newer version of the Windows Azure PowerShell Cmdlets. If
this message is followed by an error message, please inform the instructor, otherwise you can
ignore the warning.

2. Open the Windows Azure Management Portal website (http://manage.windowsazure.com)

3. Click STORAGE in the left navigation pane and create a new Windows Azure Storage account named
blueyonderlab09yourinitials (yourinitials is your name's initials, in lower-case).
Use the NEW and then the QUICK CREATE button.

Select the region closest to your location.

Wait until the storage account is created.

Note: If you get a message saying the storage account creation failed because you reached
your storage account limit, delete one of your existing storage accounts and retry the step. If you
do not know how to delete a storage account, consult the instructor.

4. Open the configuration for the newly created account, click MANAGE ACCESS KEYS button, and
copy the PRIMARY ACCESS KEY.

Task 2: Add a storage connection string to the Cloud project


1. Open the BlueYonder.Companion solution file from the
D:\AllFiles\Mod09\LabFiles\begin\BlueYonder.Server folder.

2. Add a storage account connection string to the BlueYonder.Companion.Host role in the


BlueYonder.Companion.Host.Azure project. Use the following settings:

Key Value

Name BlueYonderStore

Type Connection String

Value->connect using Manually entered credentials

Value->Account name blueyonderlab09yourinitials

(yourinitials is your name's initials, in lower-case)

Value->Account keys Press Ctrl+V to paste the Primary access key

Note: To create the connection string for the storage account, click the ellipsis in the Value
box.

Task 3: Create blob containers and upload files to them


1. In the BlueYonder.Companion.Storage project, open the AsyncStorageManager class and add
code to the constructor to retrieve the connection string from the role settings.
Use the CloudConfigurationManager.GetSetting static method to retrieve the storage connection
string.

Use the CloudStorageAccount.Parse static method to create the CloudStorageAccount object.


Store the result in the _account field.

2. In the GetContainer method, add code to get the container reference:

Use the CreateCloudBlobClient method of the _account member, and then use the
GetContainerReference method to get the container reference.

Create the container if it does not exist by using the CreateIfNotExists method of the container, and
then return the container.
3. In the GetBlob method, add code to get the container and return a block blob reference.

Get the container using the GetContainer method you implemented before.

Check against the isPublic field and if it is true, use the container's SetPermissions method to set the
access type to Blob.

Return a reference to the blob by using the container's GetBlockBlobReference. Use the fileName
parameter as the blob's name.

4. Explore the implementation of the UploadStreamAsync method. The method uses the previous
methods to retrieve a reference to the new blob, and then uploads the stream to it.
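
A minimal sketch of how the members described in this task might fit together is shown below. The class, field, and method names follow the lab text; everything else (the setting name, access modifiers, and parameter handling) is an assumption and may differ from the actual lab project.

```csharp
// Hypothetical sketch of the AsyncStorageManager members described in Task 3.
public class AsyncStorageManager
{
    private readonly CloudStorageAccount _account;

    public AsyncStorageManager()
    {
        // Retrieve the connection string from the role settings
        string connectionString = CloudConfigurationManager.GetSetting("BlueYonderStore");
        _account = CloudStorageAccount.Parse(connectionString);
    }

    private CloudBlobContainer GetContainer(string containerName)
    {
        var client = _account.CreateCloudBlobClient();
        var container = client.GetContainerReference(containerName);
        container.CreateIfNotExists(); // create the container on first use
        return container;
    }

    private CloudBlockBlob GetBlob(string containerName, string fileName, bool isPublic)
    {
        var container = GetContainer(containerName);
        if (isPublic)
        {
            // Public containers allow anonymous read access to their blobs
            container.SetPermissions(new BlobContainerPermissions
            {
                PublicAccess = BlobContainerPublicAccessType.Blob
            });
        }
        return container.GetBlockBlobReference(fileName);
    }
}
```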

Task 4: Explore the asynchronous file upload action


1. Open the FilesController class from the BlueYonder.Companion.Controllers project and locate the
UploadFile method.

The method calls the UploadStreamAsync method asynchronously, and after the upload completes,
returns the response.
2. Explore the Public and Private methods of the FilesController class.

The methods call the UploadFile method with a Boolean value indicating whether the blob container is
public or private.

Note: The client app calls these service actions to upload files as either public or private.
Public files can be viewed by any user, whereas private files can only be viewed by the user who
uploaded them.

Results: You can test your changes at the end of the lab.

Exercise 2: Storing Content in Windows Azure Table Storage


Scenario
In the previous exercise, each photo was uploaded to a blob container of a specific trip. By creating
containers for each trip, the user can easily find the photos of a specific trip. However, publicly uploaded
photos taken in the same location can also be viewed by other users, when they are planning a trip to that
location. To support this feature, for every uploaded photo a record will be stored in Table storage,
specifying the location where the photo was taken. In this exercise you will create the file entity and its
containing table, and the code for storing entities in the table and querying the table for photos matching
a specific location.

The main tasks for this exercise are as follows:

1. Write the files metadata to Table storage


2. Query the Table storage

Task 1: Write the files metadata to Table storage


1. In the BlueYonder.Companion.Storage project, open the FileEntity class and derive the class from
the TableServiceEntity abstract class.
2. In the BlueYonder.Companion.Storage project, open the AsyncStorageManager class, and implement
the GetTableContext method.

Use the CreateCloudTableClient method of the _account member to create a CloudTableClient


object.

Use the table client object to retrieve the table by calling the GetTableReference method. Use the
table name stored in the MetadataTable static field of the class.
Create the table if it does not exist using the CreateIfNotExists method of the CloudTable class.

Use the CloudTableClient.GetTableServiceContext method to return a new table service context.

Note: You should make sure the table exists before you return a context for it, otherwise
the code will fail when running queries on the table. If you already created the table, you can skip
calling the GetTableReference and CreateIfNotExists methods.
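
Following these steps, a GetTableContext implementation might look like the sketch below. The _account field and MetadataTable field names are taken from the lab text; the rest is an assumption about how the lab project is structured.

```csharp
// Hypothetical sketch of the GetTableContext method described above.
private TableServiceContext GetTableContext()
{
    var tableClient = _account.CreateCloudTableClient();

    // Make sure the table exists before returning a context for it;
    // otherwise, queries against the table will fail
    var table = tableClient.GetTableReference(MetadataTable);
    table.CreateIfNotExists();

    return tableClient.GetTableServiceContext();
}
```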

3. Add the file data to the table's context in the SaveMetadataAsync method.

Use the TableServiceContext.AddObject method and supply it with the target table name and
object to add. Use the table name stored in the MetadataTable static field of the class.
Add the code before calling the asynchronous save changes method.

4. In the BlueYonder.Companion.Controllers project, open the FilesController class, locate the


CreateFileEntity method, and set the RowKey and PartitionKey properties of the new FileEntity
object:

Set the two properties after the entity variable is initialized.

Set the entity's partition key to the locationID parameter. Convert the locationID from int to string.

Set the entity's row key to a URI encoded value of the entity's Uri property. Use the
HttpUtility.UrlEncode static method to encode the URI.

Note: The RowKey property is set to the file's URL, because it has a unique value. The URL is
encoded because the forward slash (/) character is not valid in row keys. The PartitionKey property is
set to the locationID property, because the partition key groups all the files from a single location in
the same partition. By using the location's ID as the partition key, you can query the table and get all
the files uploaded for a specific location.
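
The two key assignments described above might look like the following sketch. The locationID parameter and the FileEntity type follow the lab text, but the method signature and the assumption that the Uri property is a string are hypothetical.

```csharp
// Hypothetical sketch of the key assignments in CreateFileEntity.
private static FileEntity CreateFileEntity(int locationID, string fileUri)
{
    var entity = new FileEntity { Uri = fileUri };

    // The partition key groups all files from a single location together
    entity.PartitionKey = locationID.ToString();

    // Row keys must be unique and cannot contain '/', so the URL is encoded
    entity.RowKey = HttpUtility.UrlEncode(entity.Uri);

    return entity;
}
```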

5. Explore the code in the Metadata method. The method creates the FileEntity object and saves it to
the table.

Note: The client app calls this service action after it uploads the new file to Blob storage. By
storing the list of files in Table storage, the client app can use queries to find specific images,
either by trip or location.

Task 2: Query the Table storage


1. In the BlueYonder.Companion.Storage project, open the AsyncStorageManager class, and implement
the GetLocationMetadata method.

Get the table service context by using the GetTableContext method you implemented in the
previous task.

Use the context's CreateQuery<T> generic method to create a data source for a LINQ query. Use the
table name stored in the MetadataTable static field of the class.
Create a LINQ query that searches for files with a partition key identical to the locationId parameter.

Note: Recall that the location ID was used as the entity's partition key.
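
A sketch of the query described in these steps, assuming the GetTableContext method and field names from the earlier task:

```csharp
// Hypothetical sketch of the GetLocationMetadata method.
public IEnumerable<FileEntity> GetLocationMetadata(int locationId)
{
    TableServiceContext context = GetTableContext();

    // Files for one location share a partition key, so filtering on the
    // partition key returns all files uploaded for that location
    string partitionKey = locationId.ToString();
    return from file in context.CreateQuery<FileEntity>(MetadataTable)
           where file.PartitionKey == partitionKey
           select file;
}
```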

2. In the GetFilesMetadata method, uncomment the where clause.

Note: The method queries the table for each row key and returns the matching FileEntity
objects by using the yield return statement.

3. In the BlueYonder.Companion.Controllers project, open the FilesController class, and review the
implementation of the LocationMetadata method.

Note: The method calls the GetLocationMetadata method from the


AsyncStorageManager class, and converts the FileEntity objects that are marked as public to
FileDto objects. The client app calls this service action to get a list of all public files related to a
specific location.

4. In the ToFileDto method, uncomment the initialization of the LocationId property.

5. In the FilesController class, explore the code in the TripMetadata method.

Note: The method retrieves the list of files in the trip's public blob container, and then uses
the GetFilesMetadata method of the AsyncStorageManager class to get the FileEntity object
for each of the files. The client app calls this service action to get a list of all files related to a
specific trip. Currently, the code retrieves only the public files. In the next exercise, you will add the
code to retrieve both public and private files.

Results: You can test your changes at the end of the lab.

Exercise 3: Creating Shared Access Signatures for Blobs


Scenario
In the first exercise, all the uploaded photos were stored in a publicly accessible blob container. To
support privately stored photos, which can only be viewed by the user who uploaded them, you will
implement a second upload action which uploads photos to a private container which can only be
accessed by using a shared access signature.
After completing the code, you will deploy the newly created ASP.NET Web API service to Windows
Azure, so it can be tested.

The main tasks for this exercise are as follows:

1. Change the public photos query to return private photos


2. Upload public and private files to Windows Azure Storage

3. View the Content of the Blobs and Table

Task 1: Change the public photos query to return private photos


1. In the BlueYonder.Companion.Storage project, open the AsyncStorageManager class, and start
implementing the CreateSharedAccessSignature method by creating a new
SharedAccessBlobPolicy object.

Set the policy's permissions to Read and the expiration time to one hour from the current time. To
calculate the future time, use the DateTime.UtcNow.AddHours method.

2. In the CreateSharedAccessSignature method, add code to create a new


BlobContainerPermissions object, add the policy to it, and apply the permissions to the blob
container.

Add the policy to the SharedAccessPolicies collection of the BlobContainerPermissions object.

Name the policy blueyonder.

Get the container by calling the GetContainer method, and apply the policy to the container by
calling the container's SetPermissions method.

3. Use the container's GetSharedAccessSignature method to return a shared access signature string
for the new policy. Pass the policy you created a few steps back to the method as a parameter.

Note: The shared access key signature is a URL query string that you append to blob URLs.
Without the query string, you cannot access private blobs.
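
Putting steps 1 through 3 together, the method might look like the following sketch. The GetContainer helper and the containerName parameter are assumptions based on the earlier tasks; the actual lab project may differ.

```csharp
// Hypothetical sketch of CreateSharedAccessSignature, following steps 1-3 above.
public string CreateSharedAccessSignature(string containerName)
{
    // Step 1: read-only access, expiring one hour from now
    var policy = new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Read,
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1)
    };

    // Step 2: register the policy on the container under the name "blueyonder"
    var permissions = new BlobContainerPermissions();
    permissions.SharedAccessPolicies.Add("blueyonder", policy);

    var container = GetContainer(containerName);
    container.SetPermissions(permissions);

    // Step 3: return the SAS query string for the new policy
    return container.GetSharedAccessSignature(policy);
}
```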

4. In the BlueYonder.Companion.Controllers project, open the FilesController class, and update the
TripMetadata method to retrieve a list of private trip files in addition to the public trip files.

To get private files, duplicate the call to the GetFileUris method and set the Boolean parameter to
false. Store the result in a variable named privateUris.

Use the Union extension method to combine the private and public collections to a single collection.
Store the collection in a variable named allUris.
Change the code so the allKeys collection will use the allUris collection instead of just the publicUris
collection.

5. Locate the ToFileDto method and explore its code. If the requested file is private, you create a shared
access key for the blob's container, and then set the Uri property of the file to a URL containing the
shared access key.

6. Use Visual Studio 2012 to publish the BlueYonder.Companion.Host.Azure project. If you did not
import your Windows Azure subscription information yet, download your Windows Azure credentials,
and import the downloaded publish settings file in the Publish Windows Azure Application dialog
box.
7. Select the cloud service that matches the cloud service name you wrote down in the beginning of the
lab, while running the setup script.
8. Finish the deployment process by clicking Publish.

Task 2: Upload public and private files to Windows Azure Storage


1. Log on to the virtual machine 20487B-SEA-DEV-C as Admin with the password Pa$$w0rd.

2. Open the BlueYonder.Companion.Client.sln solution from


D:\AllFiles\Mod09\LabFiles\begin\BlueYonder.Companion.Client\ in a new Visual Studio 2012
instance.

3. In the Addresses class of the BlueYonder.Companion.Shared project, set the BaseUri property to
the Windows Azure Cloud Service name you wrote down in the beginning of this lab.

4. Run the client app, search for New, and purchase a flight from Seattle to New York.

5. Select the current trip from Seattle to New York, and then select Media from the app bar.

6. In the Media page, use the app bar to add the StatueOfLiberty.jpg file from the
D:\Allfiles\Mod09\LabFiles\Assets folder. Use the app bar to upload the file to the public storage.

7. In the Media page, use the app bar to add the EmpireStateBuilding.jpg file from the
D:\Allfiles\Mod09\LabFiles\Assets folder. Use the app bar to upload the file to the private storage.
8. Return to the Current Trip page and then enter the Media page again. Wait for a few seconds until
the photos are downloaded from storage, and verify you see both the private and public photos.

9. Return to the Blue Yonder Companion page (the main page). Under New York at a Glance, verify
you see the photo of the Statue of Liberty you uploaded to the public container.

Task 3: View the Content of the Blobs and Table


1. Return to the 20487B-SEA-DEV-A virtual machine, to Visual Studio 2012, and connect to the
Windows Azure Storage account you created in the first exercise, by using Server Explorer.

Add a new Azure Storage Account to Server Explorer.



2. In Server Explorer, expand the Blobs node and inspect the two folders which were created, one for
private photos and one for public photos.

3. Open the public blob container, copy the photo's URL, and browse to the copied address. Verify that
you see the photo.
4. Open the private blob container, copy the photo's URL, and browse to the copied address.

Private photos cannot be accessed by a direct URL; therefore, an HTTP 404 (The webpage cannot
be found) page is shown.

Note: The client app is able to show the private photo because it uses a URL that contains a
shared access permission key.

5. In Server Explorer, open the contents of the FilesMetadata table. The table contains metadata for
both public and private photos.

Results: After you complete the exercise, you will be able to use the client App to upload photos to the
private and public blob containers. You will also be able to view the content of the Blob and Table storage
by using Visual Studio 2012.

Module Review and Takeaways


In this module, you learned about Windows Azure Storage and how to create storage accounts. You
learned what Blob storage is, and how to use it to manage files in the cloud. You also learned what Table
storage is, how it differs from a relational database, and how to use it to store structured data. You then
learned what Queue storage is, how to send and receive messages with it, and how it differs from
Windows Azure Service Bus Queues. Finally, you learned how to secure access to blobs, tables, and
queues by using shared access signatures and shared access policies.

Review Question(s)
Question: You have been approached by a local high school and asked to design an
application for tracking student grades. How would you use Windows Azure Storage for this
task?

Module 10
Monitoring and Diagnostics
Contents:
Module Overview 10-1

Lesson 1: Performing Diagnostics by Using Tracing 10-2

Lesson 2: Configuring Service Diagnostics 10-7


Lesson 3: Monitoring Services Using Windows Azure Diagnostics 10-18

Lesson 4: Collecting Windows Azure Metrics 10-29


Lab: Monitoring and Diagnostics 10-35
Module Review and Takeaways 10-40

Module Overview
In the real world, application failures often occur only in production environments and not on the
developer's machine. Understanding why applications fail, and obtaining as much information as possible
from the runtime environment, is of paramount importance to operations engineers and developers
looking to resolve bugs or understand application performance. Additionally, security concerns frequently
require collecting audit information from production machines for accountability and analysis purposes.
This module discusses tracing, with a focus on web service tracing and on auditing technologies provided
by Windows Azure. The module begins with tracing in the .NET Framework by using System.Diagnostics,
and then describes tracing in web service infrastructures such as Windows Communication Foundation
(WCF) and ASP.NET Web API. Finally, it explains the information you can get from the host with Microsoft
Internet Information Services (IIS), as well as Windows Azure monitoring and diagnostics.

Note: The Management Portal UI and Windows Azure dialog boxes in Visual Studio 2012
are updated frequently when new Windows Azure components and SDKs for .NET are released.
Therefore, it is possible that some differences will exist between screen shots and steps shown in
this module, and the actual UI you encounter in the Management Portal and Visual Studio 2012.

Objectives
After completing this module, you will be able to:

• Perform tracing in the .NET Framework with the System.Diagnostics namespace.

• Configure and explore web service and IIS tracing.

• Monitor services by using Windows Azure Diagnostics.

• View and collect Windows Azure metrics in the management portal.


Lesson 1
Performing Diagnostics by Using Tracing
This lesson provides an overview of the .NET Framework and the application programming interface (API)
provided by the System.Diagnostics namespace. You use tracing to instrument applications and produce
informative messages that can help diagnose problems or analyze performance. If your application emits
trace messages, you can record logs to a variety of destinations, including files. With the proper
instrumentation in place, IT professionals can look at the trace message logs and point out problems,
identify their sources, and suggest solutions.

Lesson Objectives
After completing this lesson, you will be able to:

• Describe the role of tracing and the tracing infrastructure provided by the .NET Framework.

• Write trace messages by using the Trace class.

• Write trace messages by using the TraceSource class.

• Configure trace listeners and trace switches, and associate filters with traces.

Overview of .NET Diagnostics Tracing


The .NET Framework has an extensible tracing
mechanism. An application sends information to
be traced in a way that is independent of the
tracing destination. You can attach multiple trace
listeners using a configuration file and store the
tracing information in log files and databases.
The tracing API is provided by the
System.Diagnostics namespace, which is part of
the core .NET Framework. Any managed
application can use this API. Among the many
diagnostic classes in this namespace, you will find
the Debug and Trace classes. The Debug class is
intended for use in debug builds, and the Trace class is used for release builds. Both expose the same set
of methods, such as Write, Assert, and Fail.

You can associate each trace message with a trace level, which describes the importance and urgency of
the trace information. For example, the Error trace level is reserved for error messages, whereas the Info
level describes a message that is not of critical importance. When you configure tracing, you can define
which trace levels should be recorded to persistent storage for later analysis. In a typical production
environment, only a minimal set of information is stored, but when something goes wrong, the trace level
can be adjusted, so more information is collected.

Trace messages that the code emits are received by trace listeners. The role of a listener is to collect, store,
and route tracing messages to the appropriate target. The information can be presented in the user
interface or in Microsoft Visual Studio and at the same time saved in a text file or database. You can
configure trace listeners to add additional formatting and attach more information such as the time,
module, and executing method that emitted the trace. Trace listeners are one of the extensibility points of
the diagnostic infrastructure in the .NET Framework. There are a number of built-in listeners available, and
it is also easy to build a custom listener and attach it to the diagnostic pipeline.

Application-level frameworks, such as ASP.NET and WCF, use tracing extensively. By attaching a trace
listener, you can access internal information, which will simplify diagnostics and facilitate monitoring of
the execution path at critical points.

Writing Trace Messages with System.Diagnostics.Trace


The System.Diagnostics.Trace class is the entry
point for tracing code execution in release builds.
You can use the Trace class to emit trace
messages and to display an assertion dialog box if
a certain condition is not met. For example, if a
configuration file required for the application to
function is not available, an assertion dialog box
ensures that the error does not go unnoticed.
The following code example shows how to write a
simple trace message by using the Trace class.

Simple Tracing
catch (Exception ex)
{
Trace.WriteLine("There is a problem: " + ex.ToString(), "Error");
}

You use the Write, WriteIf, WriteLine, and WriteLineIf methods to write simple trace messages. The
Trace.WriteLine method, similar to the Console.WriteLine method, adds a new line at the end of the
output. The Trace.Write method does not add a new line at the end of the output. The WriteIf and
WriteLineIf methods provide conditional tracing capabilities, and will output the trace message only if
the condition evaluates to true.
The Trace class provides some additional methods:
• The Fail method produces error messages to simplify error handling.

• The Flush method immediately forces buffered data to be sent to the Listeners collection. This
method is described in topic 4, "Configuring Trace Listeners".

• The Close method implicitly calls the Flush method and then closes the listeners.

• The Assert method evaluates a condition, and displays a diagnostic message that shows the call stack
if the condition is false.
You can provide the various Write methods with a category string that will be used later to filter
messages by trace listeners. You can also use built-in categories and call the TraceInformation,
TraceWarning, or TraceError methods.

The Trace class defines tracing methods. It does not provide any mechanism for persisting or handling
trace messages. Persisting and handling of messages is the task of trace listeners, which are specified in
the application configuration file. Trace listeners are explained in topic 4, "Configuring Trace Listeners".

Writing Trace Messages with System.Diagnostics.TraceSource


Another option for writing trace messages is to
use the TraceSource class, which was introduced
in the .NET Framework 2.0 as an enhancement to
the tracing infrastructure. The methods of the
TraceSource class are similar to those of the
Trace class.

You can use the TraceSource class to attach a separate configuration to each trace source. You
can use this feature to implement different tracing
policies in different parts of your application. Each
instance of the TraceSource class is identified by
a name, which is used to write messages and
apply configuration. You implement tracing policies by attaching a trace switch and trace listeners to a
trace source. The switch determines which trace messages to collect and the listeners are responsible for
storing, displaying, or routing the messages.

Using the SourceSwitch class in an application configuration file, you can dynamically turn off tracing or
change the level at which tracing occurs. Trace listeners can introduce an additional layer of filtering
through trace filters, as will be described in the next topic.

The following code example shows how to configure a tracing policy and attach it to a trace source.

Trace Source Configuration


<system.diagnostics>
<sources>
<source name="MyTraceSource" switchName="SourceSwitch"
switchType="System.Diagnostics.SourceSwitch" >
<listeners>
<add name="console" />
<remove name ="Default" />
</listeners>
</source>
</sources>
<switches>
<add name="SourceSwitch" value="Warning" />
</switches>
<sharedListeners>
<add name="console"
type="System.Diagnostics.ConsoleTraceListener"
initializeData="false"/>
</sharedListeners>
<trace autoflush="true" indentsize="3">
<listeners>
<add name="console" />
</listeners>
</trace>
</system.diagnostics>

In the preceding configuration file, the <sources> element contains a <source> element that defines a
source named MyTraceSource, associated with a switch named SourceSwitch. Additionally, the trace
source is associated with a trace listener named console. The <switches> element declares the
SourceSwitch switch, and configures its trace level to Warning. As a result, any Warning or Error
messages emitted through the MyTraceSource trace source will be passed through the trace listeners.

The following code example presents some simple operations with a TraceSource class.

Writing a Trace to a TraceSource


TraceSource source = new TraceSource("MyTraceSource");
source.TraceData(TraceEventType.Warning, id, someDataObject);
source.TraceEvent(TraceEventType.Error, id, "someEvent");
source.TraceInformation("{0} --- {1}", value1, value2);
source.Flush();
source.Close();

The difference between the TraceData and TraceEvent methods is that the TraceData method is
intended for logging the contents of an object, whereas the TraceEvent logs an event description,
supplied as a string. The TraceInformation method is the same as calling the TraceEvent method with
the first parameter being TraceEventType.Information.

Configuring Trace Listeners


Trace listeners are responsible for handling trace
messages. They can store or route messages to
the appropriate handler. You can use one of the
built-in listeners, such as the
DefaultTraceListener, TextWriterTraceListener,
or EventLogTraceListener classes, or you can
build your own trace listener and declare it in the
application configuration file.

The following configuration declares a trace listener.

Simple Trace Listener Configuration


<system.diagnostics>
<trace autoflush="false" indentsize="4">
<listeners>
<add name="myListener" type="System.Diagnostics.TextWriterTraceListener"
initializeData="TextWriterOutput.log" />
<remove name="Default" />
</listeners>
</trace>
</system.diagnostics>

The preceding configuration declares a trace listener named myListener of the type
TextWriterTraceListener, which writes to a file named TextWriterOutput.log. A trace source can now
reference this listener, as explained in the previous topic, "Writing Trace Messages with
System.Diagnostics.TraceSource".

Creating a Custom Trace Listener


To create a custom trace listener, create a new class that derives from the TraceListener class, and
override the methods for which you want to implement your custom behavior. For more information
about implementing a trace listener, refer to the MSDN documentation.

TraceListener Class
http://go.microsoft.com/fwlink/?LinkID=298850&clcid=0x409
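As a minimal sketch (the class name and timestamp prefix below are illustrative, not from the MSDN sample), a custom listener only has to override the two abstract Write and WriteLine methods of the TraceListener class:

```csharp
using System;
using System.Diagnostics;

// Illustrative custom listener: prefixes every trace message with a UTC timestamp.
public class TimestampTraceListener : TraceListener
{
    public override void Write(string message)
        => Console.Write($"[{DateTime.UtcNow:O}] {message}");

    public override void WriteLine(string message)
        => Console.WriteLine($"[{DateTime.UtcNow:O}] {message}");
}
```

You can attach such a listener in code with Trace.Listeners.Add(new TimestampTraceListener()), or declare it in the <listeners> configuration section by its assembly-qualified type name.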

You can add filters to listeners by using code or in the application configuration file. Filters introduce
message filtering at the listener level, in contrast to the message filtering at the TraceSource level that is
introduced by a SourceSwitch. For example, an EventTypeFilter class can be added to a trace listener to
control the event types that are output by the listener.
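The same kind of Warning-level EventTypeFilter can also be attached in code, by setting the listener's Filter property. The source and message strings in this sketch are illustrative:

```csharp
using System;
using System.Diagnostics;

class FilterInCodeDemo
{
    static void Main()
    {
        var listener = new TextWriterTraceListener(Console.Out);
        // Only Warning and more severe events pass this filter.
        listener.Filter = new EventTypeFilter(SourceLevels.Warning);

        var source = new TraceSource("DemoSource")   // illustrative source name
        {
            Switch = new SourceSwitch("demoSwitch") { Level = SourceLevels.All }
        };
        source.Listeners.Add(listener);

        source.TraceEvent(TraceEventType.Information, 1, "dropped by the filter");
        source.TraceEvent(TraceEventType.Error, 2, "passes the filter");
        source.Flush();
    }
}
```

Note that the switch passes all events to the listeners; it is the listener-level filter that then drops the Information event, which mirrors the two layers of filtering described above.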

The following configuration sets up a trace listener with filters.

Trace Listeners and Filters


<system.diagnostics>
<sources>
<source name="TraceSourceApp"
switchName="sourceSwitch"
switchType="System.Diagnostics.SourceSwitch">
<listeners>
<add name="console"
type="System.Diagnostics.ConsoleTraceListener">
<filter type="System.Diagnostics.EventTypeFilter"
initializeData="Warning"/>
</add>
<add name="myListener"/>
<remove name="Default"/>
</listeners>
</source>
</sources>
<switches>
<add name="sourceSwitch" value="Warning"/>
</switches>
<sharedListeners>
<add name="myListener"
type="System.Diagnostics.TextWriterTraceListener"
initializeData="myListener.log">
<filter type="System.Diagnostics.EventTypeFilter"
initializeData="Error"/>
</add>
</sharedListeners>
</system.diagnostics>

The configuration section for the source named TraceSourceApp declares a trace listener named console
of type ConsoleTraceListener, which outputs trace information to the console. It also associates this
listener with a filter of type EventTypeFilter, which will only log messages with a Warning or Error trace
level. The <sharedListeners> element illustrates that you can declare trace listeners in a single section of
your configuration file, and then reference them by name from your trace source configuration, instead of
specifying their type and filter for each trace source.

Question: Why should you use the TraceSource class and not the Trace class?

Lesson 2
Configuring Service Diagnostics
Web services implemented by using WCF or ASP.NET Web API pass every incoming message through a
long messaging pipeline before the actual service method executes. Diagnosing errors along this pipeline can
be difficult. You can use tracing to obtain internal information about the execution of the messaging
pipeline. This information can help solve problems and increase performance of the application.

Lesson Objectives
After completing this lesson, you will be able to:

• Describe how to implement tracing in ASP.NET Web API applications.

• Explore WCF tracing results.

• Use performance counters to monitor web services.

• Troubleshoot IIS by using logs and performance counters.

Tracing with ASP.NET Web API


To enable tracing when using ASP.NET Web API,
simply add a trace writer to the ASP.NET Web API
messaging pipeline. A trace writer is a class that
implements the ITraceWriter interface. Using a
trace writer gives you immediate access to the
traces created by the ASP.NET Web API pipeline.
With a trace writer, you can trace what the
ASP.NET Web API framework does before and
after it invokes your controller.

The ITraceWriter interface defines only one method, the Trace method. The role of the Trace
method is to create and persist a trace record,
which is an instance of the TraceRecord class. The caller of the Trace method must provide the details of
the new trace record: a category, a trace level, and a delegate. The Trace method initializes a new trace
record, and then invokes the delegate to fill it with details. The Trace method is also responsible for
storing the trace record in persistent storage for subsequent diagnostics.

The following code example illustrates a trace writer.

ITraceWriter in ASP.NET Web API


public class MyTraceImplementation : ITraceWriter
{
    public void Trace(HttpRequestMessage request, string category, TraceLevel level,
        Action<TraceRecord> traceAction)
    {
        TraceRecord rec = new TraceRecord(request, category, level);
        traceAction(rec);
        PersistTrace(rec);
    }

    protected void PersistTrace(TraceRecord record)
    {
        var message = string.Format("The message is: {0};{1};{2}",
            record.Operator, record.Operation, record.Message);
        System.Diagnostics.Trace.WriteLine(message, record.Category);
    }
}

Note that the preceding code uses the Trace.WriteLine method to emit trace messages from the
ASP.NET Web API pipeline. As a result, you can use trace listeners and filters in the application
configuration file to specify what to do with the emitted messages.

Note: The preceding example persists trace records by using System.Diagnostics.Trace.


However, you are not limited to using the Trace or TraceSource classes of System.Diagnostics.
You can use any logging mechanism or framework to persist the trace records, such as
Event Tracing for Windows (ETW), the Enterprise Library Logging Application Block, or any other
third-party logging framework.

To attach your ITraceWriter implementation to the ASP.NET Web API pipeline, you use the
HttpConfiguration object.

Attaching an Implementation of ITraceWriter to the ASP.NET Web API Pipeline


public static void EnableTracing(HttpConfiguration config)
{
config.Services.Replace(typeof(ITraceWriter), new MyTraceImplementation());
}

To write a trace message, you can access the trace writer by using the
HttpConfiguration.Services.GetTraceWriter method. However, it is more convenient to use the
extension methods provided by the ITraceWriterExtensions class. For example, the Error method creates
a trace with trace level Error.
The following code shows how to write trace messages in ASP.NET Web API.

Writing Trace Messages in ASP.NET Web API


catch (Exception ex)
{
Configuration.Services.GetTraceWriter().Error(Request, "MyCategory", ex);
}

The message you emit using the Error method in the preceding code example is passed to the
ITraceWriter interface implementation you saw earlier. In addition to the messages you write, ASP.NET
Web API also writes trace messages for specific events in the message handling pipeline, such as before
and after invoking the controller's action.
For additional information regarding the ASP.NET Web API trace writer, refer to the following article.

Tracing in ASP.NET Web API


http://go.microsoft.com/fwlink/?LinkID=313742

Recording WCF Diagnostic Information


Similar to ASP.NET Web API, WCF uses tracing
extensively, but it does not require the developer
to write any code. The WCF messaging pipeline is
very complex, and involves many opportunities for
failure. Recording traces from the messaging
pipeline is instrumental for diagnosing errors.

For example, WCF catches exceptions to prevent the server from failing, and sends an error
message to the client. However, for security
reasons, this message does not by default include
the exception details. Even if the server is
configured to include exception details in fault
messages, the exception will not necessarily provide enough information to lead you to the source of the
problem because of internal try-catch-throw chains in the WCF messaging pipeline. Therefore, it is
important that you see all of the internal WCF information that is produced while processing messages
and handling communication protocols so that you can diagnose any problems.

Note: WCF fault messages are covered in Module 5, "Creating WCF Services", Lesson 2,
"Creating and Implementing a Contract" in Course 20487.

Turning on tracing and message logging requires a simple configuration update, which you can perform
by using the Visual Studio 2012 WCF Service Configuration Editor tool. The tool adds dedicated trace
listeners to the <system.diagnostics> configuration section in the configuration file, which you can take
advantage of by using trace sources in your code. To open the WCF Service Configuration Editor tool, in
Solution Explorer, right-click the configuration file in your WCF project, and then click Edit WCF
Configuration. After the tool opens, click the Diagnostics node in the Configuration pane, and then
click the Enable Tracing link.

The following image illustrates how to turn on tracing using the WCF Service Configuration Editor.

FIGURE 10.1: TURNING ON TRACING USING THE WCF CONFIGURATION EDITOR
WCF stores its tracing information in .svclog files, which you can view using the SvcTraceViewer.exe tool
provided by Visual Studio 2012. The tool can open multiple trace files simultaneously and correlate them
to provide a single view of the tracing and activity flow. When end-to-end tracing is enabled, an
ActivityID header is added to every message on the wire. This facilitates correlation between different
trace files collected on different machines and helps the viewer present the messages in the correct order.
Additionally, every message can be correlated with tracing information related to it. If an error occurred
while handling a message, it will be highlighted in red.

The SvcTraceViewer.exe tool has two display modes for message traces:

• Activity (detail) view, which shows every message and its corresponding tracing information emitted
by the WCF messaging pipeline.

• Graph view, which shows how messages are delivered from one machine to the other, and how they
are processed by the WCF messaging pipeline.

The following image illustrates the graph view in the SvcTraceViewer.exe tool.

FIGURE 10.2: TRACING GRAPH VIEW


The following image illustrates the activity (detail) view in the SvcTraceViewer.exe tool.

FIGURE 10.3: TRACING DETAIL VIEW



In addition to tracing, you can use message logging to record every message delivered to and from your
WCF service. Monitoring communication on the wire not only helps solve problems, it will also give you a
greater understanding of WS-* protocols, such as WS-Trust and WS-ReliableMessaging. Network sniffers
are popular tools that inspect messages on the wire, but WCF message logging makes communication
monitoring much simpler. Unlike common sniffers, WCF message logging can decrypt and decode
messages, providing a clear view of their content.

Similar to tracing, you enable message logging through the application's configuration file. You can use
the interface provided by the WCF Service Configuration Editor to update the configuration. After the tool
opens, click the Diagnostics node in the Configuration pane, and then click the Enable
MessageLogging link. When you turn on message logging, WCF stores all messages in files for
subsequent inspection.

The following image illustrates how to enable message logging.

FIGURE 10.4: ENABLING MESSAGE LOGGING


After you turn on message logging, you can configure which parts of the message will be logged. For
example, you can control whether to log the entire message or only its headers. To change these settings,
expand the Diagnostics node in the Configuration pane, click the Message Logging node, and then
change the settings shown in the property grid.

Note: The default setting of message logging is to only log message headers. This is due to
the security risk involved in logging possibly sensitive content from the message body.

To view WCF message logs, you use the SvcTraceViewer.exe tool.

The following image illustrates a message logging report, opened in the SvcTraceViewer.exe tool.

FIGURE 10.5: MESSAGE LOGGING REPORT


You can use the same service trace viewer tool to open both trace and message logging files. This
correlation of tracing and messaging will enable you to diagnose exceptions and see the message that
caused the exception to be thrown.

Monitoring Services with Performance Counters


Performance counters are a useful tool for
monitoring the behavior of a running application.
Performance counters are a part of Windows, and
many components in the Windows operating
system and in application frameworks, such as the
.NET Framework, provide monitoring information
using performance counters. WCF and ASP.NET
also define and use performance counters to
provide information about their behavior.
Performance counters are grouped into topic-
based categories, such as memory, processor, and
network.
To monitor performance counters, you use tools like System Center and Performance Monitor. Typical
information that can be retrieved by monitoring the performance counters includes the number of
running instances, number of requests per second, average request execution time, the number and rate
of various types of errors, and the number of concurrent service calls.

WCF performance counters are located under the performance categories ServiceModelService,
ServiceModelOperation, and ServiceModelEndpoint. ASP.NET performance counters are located under
the performance categories ASP.NET and ASP.NET Applications.

Note: There are two sets of performance categories for WCF, according to the .NET
Framework version. For WCF 3.5, use the categories ServiceModelService 3.0.0.0,
ServiceModelOperation 3.0.0.0, and ServiceModelEndpoint 3.0.0.0. For WCF 4 and on, use
the categories ServiceModelService 4.0.0.0, ServiceModelOperation 4.0.0.0, and
ServiceModelEndpoint 4.0.0.0. The categories for WCF 4 have several counters that did not
exist in prior versions of WCF.

Counter measurements in WCF can be scoped according to service, endpoint, or operation. Similarly,
ASP.NET classifies its performance counters according to system and application. Unlike ASP.NET counters,
WCF counters are instance-based, meaning that they must be associated with a running process before
they can be presented by Performance Monitor or any other sampling tool. By default, only service
counters are enabled in WCF. You can enable other WCF counters by using the WCF Service Configuration
Editor.

The following configuration enables all the WCF performance counters.

Enabling All WCF Performance Counters


<system.serviceModel>
<diagnostics performanceCounters="All" />
</system.serviceModel>

To view performance counters using Performance Monitor, run Performance Monitor (perfmon.exe) and
then add the WCF performance counters in which you are interested. Performance Monitor can display
counter information as a graph, a histogram, or a line-by-line numeric report.
The following image illustrates WCF performance counters in Performance Monitor.

FIGURE 10.6: WCF PERFORMANCE COUNTERS IN PERFORMANCE MONITOR
For more information on WCF performance counters, see:

WCF Performance Counters


http://go.microsoft.com/fwlink/?LinkID=298851&clcid=0x409

For more information on ASP.NET performance counters, see:


ASP.NET Performance Counters
http://go.microsoft.com/fwlink/?LinkID=298852&clcid=0x409

You can define custom performance counters, which augment the information provided by WCF,
ASP.NET, and other sources. For example, in a flight booking system, you might implement a performance
counter that displays the number of active bookings, the number of cancelled flights, and the number of
airlines for which there are outstanding bookings. To define custom performance counters, you use the
PerformanceCounterCategory and PerformanceCounter classes from the System.Diagnostics
namespace.
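As a hedged sketch of such a counter (Windows only; the category and counter names below are hypothetical, and creating a category requires administrative rights), the two classes could be used as follows:

```csharp
using System.Diagnostics;

class BookingCounters
{
    const string Category = "Blue Yonder Booking";   // hypothetical category name

    static void Main()
    {
        // Creating a category requires administrative rights and takes effect
        // machine-wide; this is typically done once, at installation time.
        if (!PerformanceCounterCategory.Exists(Category))
        {
            PerformanceCounterCategory.Create(
                Category, "Counters for the flight booking system",
                PerformanceCounterCategoryType.SingleInstance,
                "Active Bookings", "Number of bookings currently in progress");
        }

        using (var active = new PerformanceCounter(Category, "Active Bookings", readOnly: false))
        {
            active.Increment();   // a booking started
            active.Decrement();   // the booking completed
        }
    }
}
```

Once the category exists, the counter appears in Performance Monitor alongside the built-in WCF and ASP.NET counters.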

For more information on implementing custom performance counters, see:

Implementing Custom Performance Counters


http://go.microsoft.com/fwlink/?LinkID=298853&clcid=0x409

Question: How can you combine WCF tracing information with performance counters to
diagnose errors?

Demonstration: Tracing WCF Services


In this demonstration, you will configure a WCF service to collect trace messages and message logs. You
will then open the trace and messaging logs in the SvcTraceViewer.exe tool.

Demonstration Steps
1. Open the D:\Allfiles\Mod10\DemoFiles\TracingWCFServices\CalcService\CalcService.sln solution in
Visual Studio 2012.

2. Open the Web.config file in the WCF Service Configuration Editor.

Note: You can open the configuration editor from the Tools menu in Visual Studio 2012,
or by right-clicking the configuration file in Solution Explorer and then clicking Edit WCF
Configuration.

3. In the Diagnostics configuration, click Enable Tracing.

4. In the Diagnostics configuration, click Enable MessageLogging, and then configure the message
logging to log the entire message.

Note: To change message logging settings, expand the Diagnostics configuration node,
and click Message Logging.

5. Run the solution, start the WCF Test Client, and call the Div operation twice, once with valid values,
and again with the b property value set to 0.
6. Open the trace and message logs in the same service trace viewer window. The logs are located in the
D:\Allfiles\Mod10\DemoFiles\TracingWCFServices\CalcService\CalcService folder.

Note: Double-clicking the log file in File Explorer will open the service trace viewer tool.
After the tool opens, you can either drag-and-drop the second log file to the application window,
or from the File menu, click Add, and then add the second file.

7. Click the faulted request on the Activity tab, and observe the details of the exception on the pane to
the right. Click the Formatted tab to see the exception information.

Note: Faulted requests on the Activity tab are marked in red.

8. On the pane to the right, click the message log trace and view the message content for the faulted
request.
Question: When should you use WCF message logging in addition to tracing?

Troubleshooting IIS by Using Logs and Performance Counters


IIS provides logging features, such as logging
request and response information and logging the
tracing output of the request-handling pipeline. You
can configure logging at various levels in IIS, from
the web server down to a single URL. After server
logging is enabled, you can enable selective
logging for any site on the server.

Logging
Logs provide information about client requests
and their matching responses, and are recorded to
text files. Each record of a request and its response
contains metadata about the structure of the
request and the status of the response. Logs provide a general picture of what the web server is doing.
The following is a sample IIS log file.

IIS Log File


#Software: Microsoft Internet Information Services 8.0
#Version: 1.0
#Date: 2012-11-09 08:07:09
#Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip
cs(User-Agent) cs(Referer) sc-status sc-substatus sc-win32-status time-taken
2012-11-09 08:07:09 78.65.196.147 GET /vtigercrm/graph.php
current_language=../../../../../../../..//etc/elastix.conf%00&module=Accounts&action 443
- 91.103.85.100
Mozilla/5.0+(Windows;+U;+Windows+NT+5.1;+de;+rv:1.9)+Gecko/2008052906+Firefox/3.0 - 404 0
2 191
#Software: Microsoft Internet Information Services 8.0
#Version: 1.0
#Date: 2012-11-09 08:37:54
#Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip
cs(User-Agent) cs(Referer) sc-status sc-substatus sc-win32-status time-taken
2012-11-09 08:37:54 78.65.196.147 GET /remote.php - 80 - 91.226.212.41 - - 404 0 2 416

By inspecting log files, you can detect issues such as failed authentications and server errors. You can also
use the log files to calculate statistics, such as number of requests per service, client global distribution
(according to the client IP address), and daily bandwidth. You can use the Log Parser tool to parse the log
files and query them for the information you seek. You can download the Log Parser tool from the
following link.

Log Parser 2.2


http://go.microsoft.com/fwlink/?LinkID=313743
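If the Log Parser tool is not available, simple questions can also be answered by parsing the log directly in code. The sketch below assumes the W3C field order shown in the sample log above, where sc-status is the 12th space-separated field (index 11); the log lines in Main are illustrative.

```csharp
using System;
using System.Linq;

static class IisLogStats
{
    // Counts records whose sc-status field matches the given status code.
    // Assumes the field order from the #Fields directive in the sample log,
    // where sc-status is the 12th space-separated field (index 11).
    public static int CountByStatus(string[] lines, string statusCode) =>
        lines.Where(line => !line.StartsWith("#"))        // skip directive lines
             .Select(line => line.Split(' '))
             .Count(fields => fields.Length > 11 && fields[11] == statusCode);

    static void Main()
    {
        string[] log =
        {
            "#Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) cs(Referer) sc-status sc-substatus sc-win32-status time-taken",
            "2012-11-09 08:37:54 78.65.196.147 GET /remote.php - 80 - 91.226.212.41 - - 404 0 2 416"
        };
        Console.WriteLine(CountByStatus(log, "404"));   // prints 1
    }
}
```

In a real scenario, you would read the lines from the log file instead of an in-memory array, and extend the same approach to group by client IP or URL.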

For specific steps on how to enable logs and configure the logging options, refer to the TechNet
documentation.

Configuring Logging in IIS


http://go.microsoft.com/fwlink/?LinkID=313744

Failed Request Tracing


Failed request tracing provides request-based tracing. With failed request tracing, you can record events
in the request handling pipeline, such as authentication events, cache hits events, and handler selection
events, and then log the event information to disk. After you turn on the failed request tracing feature,
you configure rules that determine when request-related traces are saved to disk. For example, you can configure
failed request tracing to save the traces of all the requests that got an unauthorized response (HTTP 401),
or all the traces of requests made to a specific service URL, whether the request processing succeeded or
failed.

Note: The term failed request tracing is somewhat misleading, because you can trace both
failed requests and successful requests. For example, you can trace all the requests that got an OK
response (HTTP 200).

When you configure the trace rule, you also configure which trace events you want to trace. For managed
code, you can log trace events from ASP.NET, as well as trace events that are emitted by IIS modules such
as the authentication, caching, compression, and URL rewrite modules.

Failed request tracing logs are XML-based files. However, IIS ships with an XSL (stylesheet) file that shows
the XML content in an easy-to-use UI. To use the XSL file, open the XML file with Internet Explorer.

Note: Only Internet Explorer will use the XSL file to render the output of the XML file.
Opening the XML file in other browsers will show the content of the XML file "as-is".

The following image shows the failed request tracing of a response with a 401 Unauthorized HTTP status
code. The client does not receive any additional data about the causes of this response, but failed request
tracing indicates that the user is disabled.

FIGURE 10.7: FAILED REQUEST TRACING


At the site level, you define only the path of the trace log files. At the application level, you can specify the
failure conditions for capturing trace events and also configure which trace events should be captured in
the log file entries.

For more information about failed request tracing, see:

Failed Request Tracing in IIS


http://go.microsoft.com/fwlink/?LinkID=298855&clcid=0x409

Using Performance Counters


Additionally, IIS provides performance counters that describe the IIS runtime behavior at the worker
process and application levels. Performance counters are useful for checking the overall health of specific
processes and applications on well-loaded servers. Counters are divided into two categories: WAS_W3WP
and W3SVC_W3WP. The counters in the WAS_W3WP category expose service-related information
regarding worker processes. The counters in the W3SVC_W3WP category relate to HTTP request
processing, describing how the web server handles requests.

Performance counters that were previously widely used with IIS 6.0, such as Web Service Measures and
Web Service Cache Measures, are still available, and you can access them through the Internet
Information Services Global category.

Lesson 3
Monitoring Services Using Windows Azure Diagnostics
In the cloud, services run in a multi-server environment located in a remote data center with no physical
access. Servers in such environments are based on volatile virtual machines that can be replaced in
runtime. This scenario introduces a complicated management challenge. Often, you cannot configure
tracing and logs in the same way you do on an on-premises server machine. Extracting diagnostic
information may be difficult as well, especially considering the typical sizes of log files on a heavily loaded
production system.

In this lesson, you will explore the infrastructure provided by Windows Azure to simplify the process of
collecting diagnostic information when hosting services in the cloud.

Lesson Objectives
After completing this lesson, you will be able to:

• Describe the features of Windows Azure Diagnostics.

• Identify the types of data and logs that can be collected by Windows Azure Diagnostics.

• Describe how to configure services to use Windows Azure Diagnostics.

• Describe how to view the information collected by Windows Azure Diagnostics.

• Log and view Azure Storage requests.

• Describe how to use IntelliTrace in Windows Azure.

Overview of Windows Azure Diagnostics


Almost every service in the cloud runs on multiple
computers. Traditionally, on-premises services run
on dedicated servers, persisting their
logs and traces on the local disk. The elastic
nature of the cloud introduces a challenge when it
comes to storing diagnostics data. Because virtual
machines can be provisioned or released during
runtime, data stored on the virtual machines can
be lost. Therefore, other means of persisting
diagnostics data are required for cloud-based
service hosting. In addition, in a remote, multi-
server environment, diagnostics are more
complicated to collect, because multiple computers produce large logs and traces at the same time,
making it difficult to collect all of them and get an overall view of the system.

By using Windows Azure Diagnostics, you can collect diagnostic information from a range of data sources
located on different servers, store that information in a single location in Windows Azure Storage, and
produce an aggregated view that depicts the behavior and state of the whole environment.

With Windows Azure Diagnostics in place, you can continue to use the same logging and auditing code
and infrastructure that you use in your on-premises deployments. The logging and auditing infrastructure
will continue to write information locally for each running instance, and Windows Azure Diagnostics will
collect and aggregate that information, storing it in Windows Azure Storage.

You can then use a variety of third-party tools to generate reports based on the diagnostic information
that is stored in Windows Azure Storage. You can use these reports for debugging, troubleshooting,
measuring performance, monitoring resource usage, analyzing traffic, and planning capacity.

Diagnostic information is also useful when implementing cloud elasticity. You can use a dedicated process
to automatically scale the number of role instances in your application up or down according to the
application's overall performance. The Microsoft Patterns and Practices (P&P) team released the Windows
Azure Auto Scaling Application Block (WASABi) for this purpose. For more information about the
Windows Azure Auto Scaling Application Block, see the MSDN documentation at:

Auto Scaling Application Block


http://go.microsoft.com/fwlink/?LinkID=298856&clcid=0x409

If you enable Windows Azure Diagnostics, every role instance will have a dedicated process running the
diagnostic monitor. The diagnostic monitor collects diagnostic information from a collection of local data
sources and captures it in local storage buffers. When you deploy the service, you configure the type of
information that is captured. Windows Azure Diagnostics also supports changing these settings after the
service has been deployed, without needing to redeploy it. Windows Azure Diagnostics can store
performance counter values, Windows event log entries, internal Windows Azure infrastructure logs, IIS
logs, failed request tracing logs, and any other log file your application creates. After the diagnostic
information is captured, it can be transferred to Windows Azure Storage, either on a schedule or on
demand.

For more information regarding Windows Azure Diagnostics, see the MSDN documentation at:
Windows Azure Diagnostics
http://go.microsoft.com/fwlink/?LinkID=313745

Types of Collectable Diagnostic Data


Windows Azure Diagnostics supports many types
of diagnostics such as Windows Azure Diagnostics
trace listeners that collect traces and save them to
table storage, performance counters, IIS logs,
crash dumps, Windows Event Logs, and custom
file-based logs.

Except for file-based data buffers, which are stored in Blob Storage, all other information is
stored in Table Storage. Therefore, when you
configure diagnostics collection, you need to
specify the name of the storage account to use,
and the name of the blob container and table to
use. Windows Azure Diagnostics creates tables for each type of data, such as the
WADPerformanceCountersTable for performance counters and the WADWindowsEventLogsTable for
Windows Event Logs. Windows Azure Diagnostics also creates blob containers for storing the IIS log files
for each Web role instance.

Windows Azure also provides the
Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener class, which is a trace listener
designed for storing data in table storage, in a table named WADLogsTable. To use the
DiagnosticMonitorTraceListener class, you need to configure your application's trace listeners.

The following code shows how to configure the .NET tracing infrastructure to use
DiagnosticMonitorTraceListener as a trace listener.

Configuring .NET Tracing to Use DiagnosticMonitorTraceListener


<system.diagnostics>
<trace>
<listeners>
<add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener,
Microsoft.WindowsAzure.Diagnostics, Version=2.0.0.0, Culture=neutral,
PublicKeyToken=31bf3856ad364e35"
name="AzureDiagnostics">
<filter type="" />
</add>
</listeners>
</trace>
</system.diagnostics>

In addition to adding the DiagnosticMonitorTraceListener to the list of trace listeners, you also need to
configure Windows Azure Diagnostics to transfer the collected data to storage. This will be covered in the
next topic.
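With the listener in place, any standard System.Diagnostics tracing call made by role code is captured by the diagnostic monitor and, after the next scheduled transfer, appears in the WADLogsTable table. The following is a minimal sketch of such tracing calls; the class name and message content are hypothetical:

```csharp
using System;
using System.Diagnostics;

public class OrderProcessor
{
    public void Process(int orderId)
    {
        // Captured by DiagnosticMonitorTraceListener and transferred
        // to WADLogsTable on the next scheduled transfer
        Trace.TraceInformation("Started processing order {0}", orderId);

        try
        {
            // Processing logic goes here
        }
        catch (Exception ex)
        {
            // Error-level traces are collected even under the default
            // "Errors only" diagnostics level
            Trace.TraceError("Order {0} failed: {1}", orderId, ex);
            throw;
        }
    }
}
```

Note that under the default diagnostics level only the error-level trace is transferred to storage; raising the log level to Verbose also transfers the informational traces.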

Configuring Diagnostics
When you create a new cloud project in Visual
Studio 2012, each role is configured to use
diagnostics. Diagnostics is also automatically
enabled for every role you add to an existing
cloud project.
By default, the diagnostics level for a role is set to
collect only errors and critical events from .NET
tracing and Windows event logs. To open the
diagnostics configuration, right-click the role in
Solution Explorer, click Properties, and then click
the Configuration tab. On the Configuration tab
you can set the level of collected traces, create a
custom collection plan, or disable diagnostics entirely for the specific role.

Note: Because the default diagnostics level is set to collect errors only, other informative
data, such as performance counters and IIS logs, will not be collected. By default, only the
Application events are collected from the Windows event logs.

The following screenshot shows the diagnostics section on the Configuration tab of a role.

FIGURE 10.8: DIAGNOSTICS CONFIGURATION FOR A ROLE


You can change the diagnostics collection level to All information. This will collect all trace events and
Windows event logs. If you want to collect other types of information, such as performance counters or
IIS log files, change the diagnostics mode to Custom plan. After you select the custom plan, click
Edit to configure which sources to collect, and for each source configure the following properties:

Buffer size. The size of the locally allocated buffer that stores the source output before it is uploaded to
storage. The total buffer size of all collected sources cannot exceed the overall quota, which is 4 GB by
default. If a collected source exceeds its buffer size before being uploaded, older items are removed
from the buffer.
Transfer period. The interval for uploading the collected information to storage.
For each of the sources, you can specify additional settings, depending on the source type.

The following screenshot shows the Performance counters tab in the Diagnostics configuration dialog
box.

To configure performance counters collection, select the counters you want to collect and specify the
sampling rate for each counter. Then, configure the buffer size and transfer period, as explained before.
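When you save a custom plan, the performance counter selections are written to the diagnostics.wadcfg file as a sketch like the following. The specific counters, sample rates, and quotas shown here are illustrative, not required values:

```xml
<PerformanceCounters bufferQuotaInMB="512" scheduledTransferPeriod="PT1M">
  <!-- Sample total CPU usage every five seconds -->
  <PerformanceCounterConfiguration
      counterSpecifier="\Processor(_Total)\% Processor Time"
      sampleRate="PT5S" />
  <!-- Sample available memory every thirty seconds -->
  <PerformanceCounterConfiguration
      counterSpecifier="\Memory\Available MBytes"
      sampleRate="PT30S" />
</PerformanceCounters>
```

The <PerformanceCounters> element sits under the <DiagnosticMonitorConfiguration> root element, and the sampled values are transferred to the WADPerformanceCountersTable table.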

Whether you choose to use the Errors, All information, or Custom plan option, the diagnostics
configuration is written to an XML configuration file named diagnostics.wadcfg. This file is located in
Solution Explorer under the role node, and is included as part of your deployment.

Note: The Windows Azure Storage account connection string where the diagnostic data is
collected is not located in the diagnostics.wadcfg file. The connection string is located in the
ServiceConfiguration.*.cscfg file, and may have different values, depending on whether you
deploy to a cloud environment or to the local Windows Azure Emulator.

The Diagnostics.wadcfg File


The diagnostics.wadcfg file holds the configuration you set through the role's diagnostics configuration.
Some settings, such as the maximum size of all buffers, can only be set by manually editing the file.

The following code is an example of the Errors only diagnostics configuration in a diagnostics.wadcfg file.

Windows Azure Diagnostics Configuration


<?xml version="1.0" encoding="utf-8"?>
<DiagnosticMonitorConfiguration configurationChangePollInterval="PT1M"
overallQuotaInMB="4096"
xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration">
<DiagnosticInfrastructureLogs />
<Directories>
<IISLogs container="wad-iis-logfiles" />
<CrashDumps container="wad-crash-dumps" />
</Directories>
<Logs bufferQuotaInMB="1024" scheduledTransferPeriod="PT1M"
scheduledTransferLogLevelFilter="Error" />
<WindowsEventLog bufferQuotaInMB="1024" scheduledTransferPeriod="PT1M"
scheduledTransferLogLevelFilter="Error">
<DataSource name="Application!*" />
</WindowsEventLog>
</DiagnosticMonitorConfiguration>

The preceding example contains configuration for infrastructure logs, IIS logs, crash dumps, trace logs,
and Windows event logs. However, only the trace logs and Windows event logs have the
scheduledTransferPeriod attribute, meaning only these two sources will be collected.

The <DiagnosticMonitorConfiguration> root element has two important attributes that cannot be
edited from the configuration UI:
overallQuotaInMB. Defines the maximum size of all buffers.

configurationChangePollInterval. Defines the interval at which the diagnostic monitor polls for
configuration changes. The diagnostics.wadcfg file is also uploaded to a blob container named
wad-control-container. If you want to change the diagnostics configuration of a running role, you only
need to replace the configuration file in the container and wait for the diagnostics monitor process to
reload the configuration.

You will also need to manually edit the configuration file if you want to collect IIS failed request tracing
files or custom log files that you create on the server, because the UI has no option to control these settings.

The following is an example of configuring collection of log files.

Configuring Collection of Custom Log Files

<DiagnosticMonitorConfiguration
xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
configurationChangePollInterval="PT1M" overallQuotaInMB="4096">

<Directories bufferQuotaInMB="1024" scheduledTransferPeriod="PT1M">
<FailedRequestLogs container="wad-frq" directoryQuotaInMB="256" />
<DataSources>
<DirectoryConfiguration container="wad-my-logs" directoryQuotaInMB="256">
<Absolute path="c:\logs" />
</DirectoryConfiguration>
</DataSources>
</Directories>
</DiagnosticMonitorConfiguration>

The <Absolute> element configures the path from which log files are collected. Every log file you create
in that path is uploaded to storage.
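To produce files that such a configuration picks up, the role code simply writes to the configured directory. The following is a minimal sketch, assuming the c:\logs path from the example above exists on the role's virtual machine and is writable; the file name and message format are hypothetical:

```csharp
using System;
using System.IO;

public static class CustomFileLogger
{
    private const string LogDirectory = @"c:\logs";

    public static void Log(string message)
    {
        // Ensure the directory configured in the <Absolute> element exists
        Directory.CreateDirectory(LogDirectory);

        string line = string.Format("{0:u} {1}{2}",
            DateTime.UtcNow, message, Environment.NewLine);

        // Files written here are uploaded to the wad-my-logs blob
        // container on the next scheduled transfer
        File.AppendAllText(Path.Combine(LogDirectory, "service.log"), line);
    }
}
```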

Note: You can also collect local storage resources, which are files you create from code in a
dedicated local folder. Local storage resources are useful when you want to store data locally, but
you do not know the disks and folder structure in the hosting VM, or you are not sure if you have
permissions to write to a specific folder. Creating local storage resources and collecting them
with the Windows Azure Diagnostics is outside the scope of this course.

For more information about the Windows Azure Diagnostics configuration file schema, consult the MSDN
documentation:
Windows Azure Diagnostics Configuration File Schema
http://go.microsoft.com/fwlink/?LinkID=298858&clcid=0x409

There are two other ways to configure diagnostics, without using the diagnostics.wadcfg file:

Local configuration with the Windows Azure Diagnostics API. You can initialize and configure
Windows Azure Diagnostics in code by using the Windows Azure Diagnostics API in the role startup
execution. For more information about the Windows Azure Diagnostics API, consult the MSDN
documentation.
Windows Azure Diagnostics API
http://go.microsoft.com/fwlink/?LinkID=298859&clcid=0x409

Remote configuration by using the DeploymentDiagnosticManager class. The
DeploymentDiagnosticManager class updates the configuration information that is stored in
storage. Any application connected to the web can use the DeploymentDiagnosticManager class
to apply a configuration update. For more information about remotely changing the
diagnostics configuration, consult the MSDN documentation.

Remotely Change the Diagnostic Monitor Configuration


http://go.microsoft.com/fwlink/?LinkID=313746
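The two programmatic approaches can be sketched as follows. This is illustrative code based on the Windows Azure Diagnostics API of this SDK generation; the role name, connection string setting name, and log levels are assumptions for the example:

```csharp
using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.Diagnostics.Management;

public static class DiagnosticsConfigurator
{
    // Local configuration: typically called from the role's OnStart method
    public static void ConfigureLocally()
    {
        DiagnosticMonitorConfiguration config =
            DiagnosticMonitor.GetDefaultInitialConfiguration();
        config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
        config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

        // The setting name refers to the diagnostics connection string
        // defined in the service configuration
        DiagnosticMonitor.Start(
            "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
    }

    // Remote configuration: can run in any application that has the
    // storage connection string and the deployment ID
    public static void ConfigureRemotely(string storageConnectionString,
                                         string deploymentId)
    {
        var deploymentManager = new DeploymentDiagnosticManager(
            storageConnectionString, deploymentId);

        foreach (RoleInstanceDiagnosticManager instanceManager in
            deploymentManager.GetRoleInstanceDiagnosticManagersForRole("MyWebRole"))
        {
            DiagnosticMonitorConfiguration config =
                instanceManager.GetCurrentConfiguration();
            config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;

            // Writes the updated configuration to wad-control-container;
            // the diagnostic monitor picks it up on its next poll
            instanceManager.SetCurrentConfiguration(config);
        }
    }
}
```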

Viewing Collected Diagnostics Data


After you have enabled Windows Azure Diagnostics and transferred diagnostic data to Windows Azure
Storage, you can view the data by using one of several tools. You can use general-purpose Windows
Azure Storage explorers, such as the Visual Studio Server Explorer, or use dedicated tools that help you
view, download, and manage the data.

IT professionals familiar with Microsoft System Center Operations Manager can use the Windows
Azure Management Pack to make Windows Azure Diagnostics information available within System
Center Operations Manager. With System Center Operations Manager, IT professionals can use
their existing skills to monitor applications running in Windows Azure.

Viewing Collected Diagnostics with Visual Studio 2012


You can view the collected diagnostics data with Visual Studio 2012 by viewing the contents of blob
containers and tables where the collected diagnostics sources are stored. For example, you can use the
Server Explorer window of Visual Studio 2012 to connect to your Windows Azure Storage account, open
the wad-iis-logfiles container, and download each of the IIS log files collected from your compute
instances.

Note: The default container name for IIS logs is wad-iis-logfiles. If you used a different
container name to store IIS log files, you will need to look for that blob container in the list of
containers shown in Server Explorer. Using Visual Studio 2012 to browse Windows Azure Storage
resources, such as tables and blobs, was demonstrated in Module 9, "Windows Azure Storage", in
Course 20487.

You can also view other sources, such as Windows event logs and IIS failed request trace files, by opening
the matching tables and blob containers. However, you cannot easily view performance counters, because
this type of data source contains a lot of data collected over time, and Visual Studio 2012 can only show
this data in its raw form, rather than in a bar graph, as you would usually choose to view this type of
collected data. You can create bar graphs based on the raw data by copying the tabular content to
Microsoft Office Excel and then creating a graph from the data.

Instead of using Visual Studio 2012, you can use third-party tools that were designed specifically for
displaying all types of collected sources, including performance counters. For a partial list of such tools,
refer to the MSDN documentation.

View Diagnostic Data Stored in Windows Azure Storage


http://go.microsoft.com/fwlink/?LinkID=313747

Using the Management Portal to View Performance Counters in Runtime


You can also use the Management Portal to view collected performance counters. By using the
Management Portal, you can view the current values of collected counters, including historical values of
up to 7 days in the past. Unlike Visual Studio 2012, the Management Portal is capable of showing
performance counters using bar graphs.

In addition to your collected counters, the Management Portal is also capable of monitoring your cloud
service instances for CPU, network, and disk throughput. Cloud service monitoring is described later in this
module, in Lesson 4, "Collecting Windows Azure Metrics".

The following image illustrates the diagnostic view in the Windows Azure Management Portal.

FIGURE 10.9: DIAGNOSTIC VIEW IN AZURE MANAGEMENT PORTAL


To view your collected performance counters in the Management Portal, you first need to change the
monitoring level in the cloud service from minimal to verbose. To change the monitoring level, open
your cloud service settings in the Management Portal, click the CONFIGURE tab, and then click the
VERBOSE button in the monitoring section. Click SAVE to save the changes.

Note: Changing the monitoring level to verbose increases the amount of data collected by
the cloud service monitoring infrastructure, and therefore may increase the billing for your
storage account.

Demonstration: Configuring Windows Azure Diagnostics


In this demonstration, you will see how to configure Windows Azure Diagnostics to collect data such as
performance counters, event logs, and IIS logs files.

Demonstration Steps
1. Open the
D:\Allfiles\Mod10\DemoFiles\AzureDiagnostics\Begin\AzureDiagnostics\AzureDiagnostics.sln solution
using Visual Studio 2012.

2. Open the DiagnosticsWebRole role's Properties window. On the Configuration tab, select Custom
plan, and then click Edit.

3. On the Application logs tab, change the Log level from Error to Verbose.

4. On the Log directories tab, set the Transfer period to 1 minute, the Buffer size to 1024MB, and the IIS
logs to a 1024MB quota.

5. Click OK to close the dialog box, and save the changes you made to the role configuration.

6. Run the cloud project without debugging and use the HTML form to write data to the logs.

7. In Storage Explorer, expand the Windows Azure Storage node, and then expand the Development
node. Explore the logs in the WADLogsTable table. It might take a minute or two for the logs to
upload.

Note: Log data will be transferred to the Windows Azure storage emulator once a minute.
If you cannot see the Log data, please wait one minute before checking again.

8. In Server Explorer, under the Development storage account, expand Blobs, and then open the wad-
iis-logfiles blob container. Explore the list of log files and open one of them using Notepad.

Question: Why does Windows Azure require a special diagnostic solution?

Logging Windows Azure Storage Requests


Although Windows Azure Storage is an independent service, its performance has a direct effect on the
performance of your application. It is important to audit the behavior of the Windows Azure Storage
service to diagnose storage-related problems. By using Windows Azure Storage Analytics, you can log all
the requests to your storage account.

By enabling Windows Azure Storage Analytics, you can monitor storage requests to all resources such as
blobs, tables, and queues, diagnose the performance of individual requests, analyze the usage of specific
resources, and verify that your storage account works properly under load.

You can enable Windows Azure Storage Analytics by using the CONFIGURE page of your storage
account, in the Windows Azure Management Portal.
The following image illustrates how to enable storage logging.

FIGURE 10.10: ENABLE STORAGE LOGGING



You configure logging separately for each service you want to monitor, which can include blobs, tables,
and queues. If there is no activity in the services for which you enabled logging, no logs will be created.
You can find the collected logs under a special blob container named $logs.

For more information about Windows Azure Storage Analytics, consult the MSDN documentation:
Windows Azure Storage Analytics
http://go.microsoft.com/fwlink/?LinkID=298860&clcid=0x409
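Besides the Management Portal, Storage Analytics logging can also be enabled programmatically through the storage client library's service properties. The following is a sketch, assuming the Windows Azure Storage client library of this era and a valid connection string; the seven-day retention period is an illustrative choice:

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.Shared.Protocol;

public static class StorageAnalyticsSetup
{
    public static void EnableBlobLogging(string connectionString)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudBlobClient blobClient = account.CreateCloudBlobClient();

        // Read the current service properties, turn on logging for
        // read, write, and delete requests, and write them back
        ServiceProperties properties = blobClient.GetServiceProperties();
        properties.Logging.LoggingOperations = LoggingOperations.All;
        properties.Logging.RetentionDays = 7;
        blobClient.SetServiceProperties(properties);
    }
}
```

Logging is configured per service, so similar calls on the table and queue clients are required to log those services as well. The resulting logs appear in the $logs blob container.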

Using IntelliTrace in Windows Azure


Diagnostics provides general information about a
running application and its environment.
Sometimes this general information is not
enough. If you want detailed information
about the application's execution flow, the
values of all variables for each method in the call
stack, or execution events such as first-chance
exceptions, task starts, or database command
executions, you might want to consider using
IntelliTrace.
IntelliTrace is a debugging feature that was
introduced in Visual Studio 2010 Ultimate. This
feature simplifies debugging by recording execution events and tracing all method calls. By using
IntelliTrace, you can run the application once and browse through its execution history. All stack frames
are recorded, so values of local variables for all methods are always available. Traditional debuggers show
you the state of your application when you break into it, but with IntelliTrace you can see events that
occurred in the past and obtain information about the context in which they occurred.

IntelliTrace records information in a special log file that you can open in Visual Studio. With IntelliTrace
logs, you can perform post-failure debugging or diagnose problems at production sites that are difficult
to reproduce in the development environment.

You can enable IntelliTrace in Windows Azure and collect valuable execution information while the
application is running in the cloud. After you download the IntelliTrace logs, you can step through your
code in Visual Studio as if it were running in Windows Azure. IntelliTrace affects the application's
performance, and you should refrain from using it in a production environment.

To enable IntelliTrace, you have to create a dedicated deployment package, make sure it is built in Debug
mode, enable IntelliTrace, and publish the package by using Visual Studio. A running application that was
published without IntelliTrace enabled has to be published again from Visual Studio.

The following image shows how to enable IntelliTrace when publishing an application to Windows Azure.

FIGURE 10.11: ENABLE INTELLITRACE WHEN PUBLISHING AN


APPLICATION TO WINDOWS AZURE
In addition to enabling IntelliTrace, you can customize the IntelliTrace configuration. You can specify
which events to log, whether to collect method call information, which modules and processes to collect
logs for, and how much storage to allocate for recorded files.

IntelliTrace log files are stored on the file system of your role's virtual machine. When you request to
download the log files, a snapshot is taken at that point in time and downloaded to your local machine.
To download IntelliTrace logs for a role instance, open the Windows Azure Compute node in the Visual
Studio Server Explorer, right-click the instance of interest, and then click View IntelliTrace Logs.

For more information about IntelliTrace in Windows Azure, consult the MSDN documentation:

IntelliTrace in Windows Azure


http://go.microsoft.com/fwlink/?LinkID=298861&clcid=0x409

Question: When should you use each of the diagnostic mechanisms explained in this lesson?

Lesson 4
Collecting Windows Azure Metrics
Windows Azure presents an accessible view of performance counters in the Windows Azure Management
Portal. This graph-based view is known as Windows Azure Metrics. This lesson describes how to view
metrics for all Windows Azure deliverables, such as cloud services, web sites, and storage services.

Lesson Objectives
After completing this lesson, you will be able to:

Collect metrics for Windows Azure Cloud Services.

Collect metrics for Windows Azure Storage.

Collect metrics for Windows Azure Web Sites.

Viewing Metrics for Windows Azure Cloud Services


The Windows Azure Management Portal presents
monitoring information about all the resources
you consume, including cloud services. You can
view key performance characteristics of your
cloud service as it executes in Windows Azure by
opening the Monitor and Dashboard tabs for
your cloud service, in the Windows Azure
Management Portal.
Cloud services that are not deployed with
Windows Azure Diagnostics enabled will still
present some monitoring information. When
Windows Azure Diagnostics is enabled, you can
configure monitoring to run in verbose mode, and present a much more diverse set of information.

Note: The information presented in the Windows Azure Management Portal is based on
performance counters only. Other types of diagnostics information, such as log files and
Windows Event Log data, cannot be presented by the Windows Azure Management Portal and
require a different tool, such as Visual Studio 2012. Viewing collected diagnostics data was
discussed in the previous lesson.

The monitoring displayed in the Management Portal is configurable. You can choose the metrics you
want to monitor and these metrics will be displayed in the Monitor and Dashboard tabs. You can switch
between displaying relative and absolute values, and change the time range of the metrics charts.

Note: Using absolute values will display a y-axis in the bar graph. If you show only one set
of metrics, such as CPU usage percentage or disk throughput, the graph displays normally.
However, if you use the absolute value display with different types of metrics, such as percentage
and KB/sec, the results will be aligned to the same y-axis and may be harder to interpret. To view
different types of metrics in the same graph, you should use the relative value display.

To configure the monitoring mode, click the Configure tab and choose between the Minimal and
Verbose modes. To use verbose mode, you must enable diagnostics in your cloud service deployment.

The following image illustrates how to configure application monitoring.

FIGURE 10.12: CONFIGURE APPLICATION MONITORING


Monitoring in verbose mode will automatically log additional counters, such as ASP.NET, memory, and
Web services counters.
The following image illustrates how to choose the metrics to monitor for your cloud services.

FIGURE 10.13: CHOOSE METRICS TO MONITOR


The counters available for the minimal mode are only viewable through the Management Portal. These
metrics are not stored in any storage account. The additional metrics that become available in verbose
mode are stored in your storage account, in tables. You can access these metrics by using tools such as
Visual Studio 2012 and other third-party tools, as described in the previous lesson.

Metrics are stored in six storage tables per role, using the storage connection string you provided for the
deployment.

The six tables consist of two tables for each aggregation interval: 5 minutes, 1 hour, and 12 hours. For
each aggregation interval, one table stores role-level aggregations and the other table stores role
instance aggregations. The tables are named according to the following format:

WAD*deploymentID*PT*aggregation_interval*[R|RI]Table
In the table name format:

deploymentID is the deployment ID of the cloud service (a GUID)

aggregation_interval is either 5M, 1H, or 12H

R indicates role-level information and RI indicates role instance information

For example, a table named WAD98563c538020845642d0cc9eb9d8dd9fPT5MRTable contains role-level
information at five-minute intervals.

Question: What are the benefits of using metrics for cloud services?

Viewing Windows Azure Web Site Metrics


You can view key performance characteristics of
your Windows Azure Web Site as it executes by
browsing to the Monitor or Dashboard tabs in
the Management Portal. Web site metrics are
enabled by default.
The following image illustrates how to choose
web site monitoring metrics.

FIGURE 10.14: CHOOSE WEB SITE MONITORING METRICS


Unlike cloud services and storage monitoring, web site monitoring cannot be configured to run in the
Minimal or Verbose mode.

Viewing metrics is not the only way to diagnose Web Sites. You can also collect IIS logs, failed request
tracing logs, and trace logs for your web site. For more information on Windows Azure Web Sites
monitoring, consult the MSDN documentation.
Web Site Monitoring
http://go.microsoft.com/fwlink/?LinkID=298863&clcid=0x409

Viewing Metrics for Windows Azure Storage


Similar to cloud services, you can monitor your
Windows Azure Storage accounts in the Windows
Azure Management Portal and view a large
number of key performance characteristics and
counters concerning your storage services.

You can configure monitoring for each of your
storage account services (Blobs, Tables, and
Queues) to Minimal or Verbose and specify the
appropriate data retention policy. Storage is not
monitored automatically, which means that until
you configure monitoring for your storage
account service, no data is collected, and the
metrics charts on the Dashboard and Monitor tabs will be empty.
The following image illustrates how to configure storage monitoring.

FIGURE 10.15: CONFIGURE STORAGE MONITORING


You can select metrics for each of your storage services (Blobs, Queues, and Tables) from a large number
of available key performance indicators. In the Minimal mode, you can choose from metrics such as
ingress/egress, availability, success percentages, and latency. In the Verbose mode, the same metrics are
available, but are stored per storage operation, instead of being aggregated. For example, in minimal
mode, successful blob requests are stored as a single value, but in verbose mode, there are separate
counters for each successful blob operation, such as CopyBlob, GetBlob, and DeleteBlob.

The following image illustrates how to select storage metrics.



FIGURE 10.16: SELECT STORAGE METRICS


The information collected by storage monitoring is persisted in the monitored storage account. The
information is stored in the following tables: $MetricsTransactionsBlob, $MetricsTransactionsTable,
$MetricsTransactionsQueue, and $MetricsCapacityBlob. You can access these tables independently
and present the data as you choose. The transactions tables store the metrics collected in each time
interval. The capacity table contains daily statistics for the size of the account's blob containers, with a
separate value for the size of the special $logs blob container, which contains the storage analytics logs
discussed in the previous lesson.

For more information about Windows Azure Storage monitoring, consult the MSDN documentation:
Windows Azure Storage Monitoring

http://go.microsoft.com/fwlink/?LinkID=298862&clcid=0x409

Demonstration: Viewing Windows Azure Web Site Metrics


This demonstration shows how to create and deploy a Windows Azure Web Site and then how to view
monitoring information presented in the Windows Azure Management Portal.

Demonstration Steps
1. Open the Windows Azure Management Portal (https://manage.windowsazure.com/).
2. Create a new Windows Azure Web Site named metricsdemoYourInitials (Replace YourInitials with
your initials). Select the region closest to your location for the web site.

3. Browse to the new web site, open the DASHBOARD tab, and download the web site's publish profile
file.

4. On the MONITOR tab, configure monitoring to collect only the Http Successes metric. Make sure
the metric is shown in the graph.

5. Open the
D:\Allfiles\Mod10\DemoFiles\WebSiteMonitoring\SimpleWebApplication\SimpleWebApplication.sln
solution in Visual Studio 2012.

6. Publish the SimpleWebApplication web site to Windows Azure, using the publish profile file you
downloaded. Browse to the web site and submit HTTP requests using the ClickHere button.

7. Return to the Management Portal, refresh the graph, and verify the number of Http Successes metric
increased.

Question: What are some uses of the Windows Azure Web Sites metrics?

Lab: Monitoring and Diagnostics


Scenario
The helpdesk team at Blue Yonder Airlines has complained that ever since the ASP.NET Web API services
have been deployed to Windows Azure, they cannot gather statistical information about the services. In
addition, the helpdesk team has complained that in the past, when there were problems with the WCF
booking service, they had a difficult time understanding what caused the problem. In this lab, you will
help the helpdesk team by configuring the WCF booking service to output trace and message logging.
You will also configure the Windows Azure Web Role running the ASP.NET Web API service to output
diagnostics so it can be collected and analyzed.

Objectives
After completing this lab, you will be able to:

Configure WCF tracing and message logging.


Configure Windows Azure Diagnostics and view the collected information.

Lab Setup
Estimated Time: 45 minutes
Virtual Machine: 20487B-SEA-DEV-A, 20487B-SEA-DEV-C

User name: Administrator, Admin


Password: Pa$$w0rd, Pa$$w0rd
For this lab, you will use the available virtual machine environment. Before you begin this lab, you must
complete the following steps:

1. On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2. In Hyper-V Manager, click MSL-TMG1, and in the Action pane, click Start.

3. In Hyper-V Manager, click 20487B-SEA-DEV-A, and in the Action pane, click Start.

4. In the Action pane, click Connect. Wait until the virtual machine starts.
5. Sign in using the following credentials:

User name: Administrator


Password: Pa$$w0rd

6. Return to Hyper-V Manager, click 20487B-SEA-DEV-C, and in the Action pane, click Start.

7. In the Action pane, click Connect. Wait until the virtual machine starts.

8. Sign in using the following credentials:

User name: Admin

Password: Pa$$w0rd

9. Verify that you have received credentials to sign in to the Azure portal from your training provider. These
credentials and the Azure account will be used throughout the labs in this course.

Exercise 1: Configuring Message Logging


Scenario
WCF supports message logs and activity (trace) logs. To provide the helpdesk the information they
require, you will start by turning on both types of logs. Because the WCF messages are encrypted, you will
log messages at the service level, which will log the messages in their decrypted form, instead of at the
transport level, where messages are logged in their encrypted form.

The main tasks for this exercise are as follows:


1. Open the WCF service configuration editor

2. Configure WCF message logging

Task 1: Open the WCF service configuration editor


1. In the 20487B-SEA-DEV-A virtual machine, run the setup.cmd file from
D:\AllFiles\Mod10\LabFiles\Setup. Provide the information according to the instructions, and write
down the name of the Windows Azure Cloud Service.

Note: You may see a warning saying the client model does not match the server model.
This warning may appear if there is a newer version of the Windows Azure PowerShell Cmdlets. If
this message is followed by an error message, please inform the instructor; otherwise, you can
ignore the warning.

2. Open the BlueYonder.Server.sln solution from the
D:\AllFiles\Mod10\LabFiles\begin\BlueYonder.Server folder.

3. Open the Web.config file from the BlueYonder.Server.Booking.WebHost project with the WCF
Configuration Editor. When prompted that the Microsoft.ServiceBus assembly could not be found,
click Yes, select the Microsoft.ServiceBus.dll assembly from the web project's bin folder, and approve
the warning message that appears next.

Note: To open the WCF Configuration Editor, in Solution Explorer, right-click the
Web.config file, and then click Edit WCF Configuration.

Task 2: Configure WCF message logging


1. In the Diagnostics configuration, enable Message Logging.
2. Expand the Diagnostics configuration node, and in the Message Logging configuration, set the
LogEntireMessage and LogMessageAtServiceLevel to True, and the LogMessageAtTransportLevel to
False.

3. Save the changes to the configuration and close the Service Configuration Editor window.
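
For reference, after you save the changes, the diagnostics section of the Web.config file should resemble the following sketch. This is not the exact output of the editor; the trace listener name and the .svclog file path are assumptions, and the configuration editor may emit additional attributes:

```xml
<system.serviceModel>
  <diagnostics>
    <!-- Log complete messages at the service level (decrypted form),
         not at the transport level (encrypted form) -->
    <messageLogging logEntireMessage="true"
                    logMessagesAtServiceLevel="true"
                    logMessagesAtTransportLevel="false" />
  </diagnostics>
</system.serviceModel>
<system.diagnostics>
  <sources>
    <!-- The message logging trace source needs a listener;
         without one, no messages are written to disk -->
    <source name="System.ServiceModel.MessageLogging">
      <listeners>
        <add name="messages"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="Web_messages.svclog" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>
```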

Results: You can test your changes at the end of the lab.

Exercise 2: Configuring Windows Azure Diagnostics


Scenario
To provide the helpdesk with the required information about the Windows Azure deployment, you will
need to provide user information from IIS logs and statistical information about purchases. In this exercise,
you will add trace logging to log every purchase performed in the service and configure the Windows
Azure deployment to collect those trace files. In addition, you will collect IIS logs from the deployment,
which will assist the helpdesk in calculating user activity statistics.

The main tasks for this exercise are as follows:


1. Add trace messages to the ASP.NET Web API service

2. Configure Windows Azure Diagnostics for the Web Role



3. Deploy the ASP.NET Web API Application to Windows Azure

4. Run the client app to create logs

5. View the collected diagnostics data

Task 1: Add trace messages to the ASP.NET Web API service


1. Open the BlueYonder.Companion.sln solution from the
D:\AllFiles\Mod10\LabFiles\begin\BlueYonder.Server folder in a new instance of Visual Studio
2012.

2. Open the TraceWriter.cs file from the BlueYonder.Companion.Host project and implement the
Trace method. Use .NET Diagnostics tracing to write the trace messages.

Note: You can see an example for implementing the ITraceWriter interface in lesson 2,
"Configuring Service Diagnostics".

3. In the BlueYonder.Companion.Host project, under the App_Start folder, open the
WebApiConfig.cs file, and add the code to replace the default trace writer service of ASP.NET Web
API with a new instance of the TraceWriter class.

Note: You can see an example for registering the TraceWriter class in lesson 2,
"Configuring Service Diagnostics".

4. In the BlueYonder.Companion.Controllers project, open the ReservationsController.cs file, locate
the Post method, and after the call to the Save method, add code to trace an information message.

Add a using directive for the System.Web.Http.Tracing namespace, to get the tracing extension
methods.

Use the Configuration.Services.GetTraceWriter method to get an instance of the trace writer.

Use the Info extension method to write the trace message. Set the category to
ReservationController, and include the reservation's confirmation code in the trace message.

Note: You can see an example for tracing messages with the TraceWriter class in lesson 2,
"Configuring Service Diagnostics".

Task 2: Configure Windows Azure Diagnostics for the Web Role


1. In the BlueYonder.Companion.Host.Azure project, open the BlueYonder.Companion.Host role's
Properties window and set the role to use custom diagnostics.

On the Configuration tab, select Custom plan.


Click Edit to open the diagnostics configuration.

2. On the Application logs tab, change the Log level from Error to Verbose.

3. On the Log directories tab, set the Transfer period to 1 minute, the Buffer size to 1024MB, and set the
IIS logs to a 1024MB quota.

4. Click OK to close the dialog box, and save the changes to the role's properties.
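
Behind the Properties window, these settings are stored in the role's diagnostics.wadcfg file. The fragment below is a sketch of what that file may contain after your changes; the schema namespace and attribute names follow the Windows Azure SDK diagnostics configuration of this era and should be treated as assumptions:

```xml
<DiagnosticMonitorConfiguration
    xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
    configurationChangePollInterval="PT1M">
  <!-- Application (trace) logs: transfer every minute, Verbose level -->
  <Logs bufferQuotaInMB="1024"
        scheduledTransferPeriod="PT1M"
        scheduledTransferLogLevelFilter="Verbose" />
  <!-- Log directories, including the IIS logs with a 1024 MB quota -->
  <Directories bufferQuotaInMB="1024" scheduledTransferPeriod="PT1M">
    <IISLogs container="wad-iis-logfiles" directoryQuotaInMB="1024" />
  </Directories>
</DiagnosticMonitorConfiguration>
```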

Task 3: Deploy the ASP.NET Web API Application to Windows Azure


1. Use Visual Studio 2012 to publish the BlueYonder.Companion.Host.Azure project.

If you did not import your Windows Azure subscription information yet, download your Windows
Azure credentials, and import the downloaded publish settings file in the Publish Windows Azure
Application dialog box.

2. Select the cloud service that matches the cloud service name you wrote down in the beginning of the
lab, while running the setup script.

3. Finish the deployment process by clicking Publish. The publish process might take several minutes to
complete.

Task 4: Run the client app to create logs


1. Log on to the virtual machine 20487B-SEA-DEV-C as Admin with the password Pa$$w0rd.

2. Open the BlueYonder.Companion.Client.sln solution from the
D:\AllFiles\Mod10\LabFiles\begin\BlueYonder.Companion.Client folder.

3. In the Addresses class of the BlueYonder.Companion.Shared project, set the BaseUri property to
the Windows Azure Cloud Service name you wrote down in the beginning of this lab.

Replace the {CloudService} string with the cloud service name.


4. Run the client app, search for New, and purchase a flight from Seattle to New York. Close the app
after you purchase the trip.

Task 5: View the collected diagnostics data


1. Return to the BlueYonder.Companion solution in the 20487B-SEA-DEV-A virtual machine, open
the Publish Windows Azure Application dialog box, and write down the name of the storage
account.
2. Open the Windows Azure Storage account in Server Explorer and explore the logs in the
WADLogsTable table. Search the table for a message starting with ReservationsController and
double-click it to view the message.

Note: In addition to the trace message your code writes to the log, ASP.NET Web API
writes several other infrastructure trace messages.

3. Open the wad-iis-logfiles blob container in Server Explorer and explore the log files. Verify that you
see the requests for the Travelers, Locations, Flights, and Reservations controllers.

Note: It is possible that it will take more than a minute from the time the request is sent
until it is logged by IIS. If you do not yet see any logs, or the requests are missing from the log,
wait for another minute, refresh the blob container, and then download the log again.

4. Open the WCF message log located in the
D:\AllFiles\Mod10\LabFiles\begin\BlueYonder.Server\blueyonder.server.booking.webhost
folder using the Service Trace Viewer. Review the booking service's CreateReservation request and
response messages.

Note: You can view the messages by clicking the Message tab in the left pane, selecting the
message to view (either the
http://blueyonder.server.interfaces/IBookingService/CreateReservation or
http://blueyonder.server.interfaces/IBookingService/CreateReservationResponse message), and
then clicking the Message tab in the bottom-right pane.

Results: After you complete the exercise, you will be able to use the client app to purchase a trip, and
then view the created log files for both the Windows Azure deployment and the on-premises WCF
service.

Question: Why is it important to configure maximum buffer sizes for diagnostic information?

Module Review and Takeaways


In this module, you learned how to monitor WCF and ASP.NET Web API services and how to collect
diagnostic information from them. You also learned how to monitor IIS using performance counters and
AppFabric, and how to collect information from services running in Windows Azure. Finally, you learned
how Windows Azure collects and displays common metrics for web sites, cloud services, and storage
services, making it easy to see at a glance whether your application is experiencing performance problems
or other issues.

Best Practice: Invest considerable time in instrumenting your application with tracing and
performance counters. Make sure you can successfully monitor the application in the
development environment. This will make it easier to monitor in Windows Azure, and guarantee
that you can diagnose problems that occur only in the production environment, such as under
heavy load.

Review Question(s)
Question: How can you monitor applications running in Windows Azure?

Tools
Visual Studio 2012
WCF Service Configuration Editor

SvcTraceViewer.exe

Module 11
Identity Management and Access Control
Contents:
Module Overview 11-1

Lesson 1: Claims-based Identity Concepts 11-2

Lesson 2: Using the Windows Azure Access Control Service 11-8


Lesson 3: Configuring Services to Use Federated Identities 11-12

Lab: Identity Management and Access Control 11-18


Module Review and Takeaways 11-26

Module Overview
Managing identities in distributed systems can be challenging. Identities are often shared across
application and organization boundaries. Claims-based identity is a modern approach designed to
overcome these challenges in distributed systems. This module describes the basic principles of modern
identity handling. The module also demonstrates how to use infrastructures such as Windows Azure
Access Control Service (ACS) to implement authentication and authorization with claims-based identity in
ASP.NET Web API services and Windows Azure Service Bus brokered messaging.
By applying the concepts and technologies covered in this module, you can simplify authentication and
authorization in your distributed applications integrating with modern identity providers.

Note: The Management Portal UI and Windows Azure dialog boxes in Visual Studio 2012
are updated frequently when new Windows Azure components and SDKs for .NET are released.
Therefore, it is possible that some differences will exist between screen shots and steps shown in
this module, and the actual UI you encounter in the Management Portal and Visual Studio 2012.

Objectives
After completing this module, you will be able to:

Describe the basic principles of claims-based identity.


Create a Security Token Service (STS) by using Windows Azure ACS.

Configure services to use federated identity.



Lesson 1
Claims-based Identity Concepts
Identity is a set of information that describes an entity. Identities are used to authorize user access to
resources and services, personalize applications, and provide a better experience for the client as well as
the IT professional.

Windows Identity Foundation (WIF) is the Microsoft .NET Framework infrastructure that you can use to
create, validate, and handle claims-based identities. This lesson will provide a high-level overview of WIF
and its capabilities.

Lesson Objectives
After completing this lesson, you will be able to:

Explain the main concepts of identity management.


Describe claims-based identity.

Describe WIF.

Introduction to Identities
In the physical world, the term identity is used to
describe a set of attributes that belongs to a
specific entity. For example, people have names,
passport numbers, and musical tastes. Buildings
have addresses, a number of stories, and other
attributes. In the virtual world, different entities
might also have identities. Users often have user
names, email addresses, and roles. Computers also
have names, as well as a network address, an
operating system, and so on. In Windows (and
most other operating systems), different processes
and even threads run under a specific identity.

Identity is an important concept and is used to authorize access to resources and services, personalize
applications, and provide a better experience for the client as well as the IT professional.
Systems often need to store identity information in identity stores such as Active Directory Domain
Services (AD DS) or a Microsoft SQL Server database. Applications query these stores for identity
information, which can be used to authorize the entity, personalize applications, and provide a better
experience.

Identity information is both personal and sensitive, and therefore should be carefully managed. Some
information should be kept private and some can be shared based on a variety of security policies. An
identity store should properly secure the identity and provide identity information to trusted entities only
according to predefined security and privacy policies.

Due to the sensitive nature of identities, there is a need for a verification mechanism. Such mechanisms
rely on credentials, a set of attributes that are used to verify an identity. Some of the common credential
types in modern systems include username and password, smart cards, and certificates. The process of
verifying credentials is called authentication.

Organizations and applications often need to share identities, which can complicate common identity
related scenarios. For example, if an identity is revoked in one organization or application, such as when
an employee is fired, all partners must revoke the identity as well. Synchronizing identity across
organizations and applications can be a complex task and therefore, a single identity view for all
organizations and applications can help simplify identity management.

Question: Why would you need different types of credentials, such as smart card, or
username and password?

What Is Claims-Based Identity?


Traditionally, applications use a role-based
approach to authorize identities. In role-based
authorization, an identity has a name and a set of
roles, or groups, that it belongs to, and according
to that set of information, you declare
authorization rules. This simple approach might
be appropriate in some scenarios, but for
distributed applications, it does not provide
enough reliable information. Applications have to
submit queries to identity providers
independently to retrieve additional information,
which introduces undesired complexity.

Traditional applications often base their own identity store on a custom database. The most popular
credentials that applications use to authenticate their clients are username and password. The result is
that clients own a large set of username-password credentials for all the applications they have subscribed
to. It is difficult to remember and handle a large set of credentials, so people tend to reuse credentials.
This compromises their security, because if one application is hacked and their credentials are stolen,
these credentials can be used by unauthorized individuals to log on to other applications.

Another disadvantage of this approach occurs when identity is shared across applications and
organizations. To share identity across applications, the applications must expose their identity
store, creating tight coupling and potentially undesired overhead.
In conclusion, identity management is a complex process that should not be taken lightly. When using a
custom identity store, applications have to secure the identities and handle all the appropriate scenarios
such as registration, credentials reset, update details, identity revocations, and more. Traditional identity
management has many disadvantages for the user as well as for the application. Therefore, another
approach is required.

Claims-based identity is a modern approach in which the client provides a token containing a set of
attributes (claims) describing its identity. The token is signed by an issuer, which is a service in charge of
authenticating the user and providing the token.

Using a token presented by the client decouples the service or application from the identity
provider. The service or application relies on verifying the authenticity of the token rather than having an
actual connection to the identity provider. Services and applications can support multiple identity
providers by using standards, such as WS-Federation and OAuth2, for defining such tokens.
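
To make the idea of claims concrete, the following sketch builds a claims-based identity in code. The claim values and the custom claim type URI are invented for illustration:

```csharp
using System.Collections.Generic;
using System.Security.Claims;

// Claims a token might carry after the issuer authenticates the user.
var claims = new List<Claim>
{
    new Claim(ClaimTypes.Name, "john@example.com"),
    new Claim(ClaimTypes.Role, "Passenger"),
    // A custom claim type (hypothetical URI)
    new Claim("http://schemas.example.org/claims/frequentflyer", "Gold")
};

// The identity aggregates the claims; the authentication type records
// how the user was authenticated.
var identity = new ClaimsIdentity(claims, "Federation");
var principal = new ClaimsPrincipal(identity);
```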

The Claims-based Identity Workflow


In a claims-based identity approach, identity management is centralized. The identity provider is
responsible for handling identity management and authenticating the client. The main strength of claims-
based identity comes from its unique workflow, which decouples the identity provider from the services
and applications that consume the identity information.

The following figure describes a claims-based basic scenario.



FIGURE 11.1: CLAIMS-BASED BASIC SCENARIO


The following section explains the preceding diagram:

0. In federated identity, before a service or application uses an identity provider, it must form a trust
relation with that identity provider. To do this, the application and identity provider exchange their
public keys. The application sends its public key to the identity provider and the identity provider
sends its public key to the application. Once a trust relation is established, claims-based identity can
be used. The identity provider uses the application's public key to encrypt the token, if required. The
identity provider's public key is used to validate the digital signature placed in the token by the
identity provider.

1. The client sends a request to the service or application. The service or application checks the request
for a token. Because this is the first time the client accesses the application, the token is not present, and
the client is required to access the identity provider.

2. The client calls the identity provider for authentication. The client presents its credentials and gets
authenticated.

3. Once the client is authenticated, the identity provider returns a token containing the different claims.
The identity provider also signs the token it generates using the identity provider's private key and
then encrypts the token with the application's public key. This prevents the client and any
unauthorized parties from reading and altering the content of the token.

4. The client sends a request to the service or application for the second time, this time providing the
token. The service or application opens the token using its private key and verifies the token signature
using the identity provider's public key. After verifying the token, the service or application can
extract the claims from it.

5. Based on the claims, the service or application can now authorize the client, and decide whether to
process the request and/or return a response.

Note: A digital signature is a cryptographic action to ensure information originated from
a valid source and has not been tampered with.

For more information about digital signatures, consult the MSDN documentation:

Digital Signatures
http://go.microsoft.com/fwlink/?LinkID=298864&clcid=0x409

Advanced Scenarios for Claims-based Identity


Claims-based identity supports complex scenarios such as federation, in which a client obtains a token
from one identity provider and exchanges it for a token from another identity provider before handing it
to the application.

In a federation scenario, a client authenticates against an identity provider by using credentials and
receives a token. The token is then forwarded to another identity provider, which generates a new token.
The client then uses the new token to access the application. Both identity providers must have mutual
trust. Similar to the basic scenario, the application opens the token and uses the claims to authorize the
client before granting access.

The following figure describes identity federation between two organizations.

FIGURE 11.2: IDENTITY FEDERATION BETWEEN TWO ORGANIZATIONS

An advanced scenario for claims-based identity is delegation. Delegation is used when an application
needs to use the identity of the original user to access other organizational resources. An application that
requires delegation requests a token from an identity provider in the name of the original user, and then
uses the new token to call another application. With federation and delegation, you can use identities
across organizations and manage them in a single location.

Introduction to WIF
Claims-based identity standards are complex and
involve advanced cryptography. To simplify the
development of applications that use claims-
based identity, you need to use an infrastructure
that will not require you to implement the
standards and understand advanced
cryptography.

Windows Identity Foundation (WIF) is a .NET
Framework infrastructure designed for handling
claims-based identity. With WIF, you can request a
token from an identity provider, validate a token
at the application side, create a security context,
and implement a custom identity provider.

In WIF terminology, an identity provider is referred to as a Security Token Service (STS). The application
has to trust and rely on the STS, and therefore it is referred to as a Relying Party (RP). WIF provides the
necessary classes and tools for all claims-based identity tasks, such as creating a custom STS, configuring
an RP to require a token generated by a specific STS, and creating the appropriate client configuration.
WIF uses the WS-Federation standard to implement the communication between all the parties. To learn
more about WS-Federation, refer to the MSDN documentation:

WS-Federation
http://go.microsoft.com/fwlink/?LinkID=298865&clcid=0x409

WIF simplifies the process of creating a claims-based Web application. STS and RPs exchange metadata
documents to establish the initial trust. Metadata documents adhere to the WS-MetadataExchange
standard and contain details about the location of the STS and the RP, the required claims, and the
signing and encryption keys. WIF provides tools integrated with Visual Studio 2012 to generate metadata
documents and configuration files (either App.config or Web.config) for the RP. These tools simplify the
integration of Web applications with an existing STS.
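
The trust an RP places in an STS ends up in its configuration file. The following fragment is a sketch of what the WIF section of a Web.config may look like in .NET Framework 4.5; the audience URI, issuer name, and certificate thumbprint are placeholders, not values from this course:

```xml
<system.identityModel>
  <identityConfiguration>
    <!-- The addresses this RP accepts tokens for (placeholder URL) -->
    <audienceUris>
      <add value="http://localhost:12345/" />
    </audienceUris>
    <!-- Trusted issuers, identified by signing certificate thumbprint
         (placeholder thumbprint and name) -->
    <issuerNameRegistry
        type="System.IdentityModel.Tokens.ConfigurationBasedIssuerNameRegistry, System.IdentityModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
      <trustedIssuers>
        <add thumbprint="0000000000000000000000000000000000000000"
             name="LocalSTS" />
      </trustedIssuers>
    </issuerNameRegistry>
  </identityConfiguration>
</system.identityModel>
```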

To learn more about WS-MetadataExchange, consult the following documentation:


WS-MetadataExchange
http://go.microsoft.com/fwlink/?LinkID=298866&clcid=0x409

Typically, you will use a well-known STS infrastructure, such as Active Directory Federation Services (AD
FS) 2.0. However, sometimes you may be required to create a custom STS. WIF provides the infrastructure
to do that. WIF handles all the cryptographic heavy-lifting for creating, encrypting, and signing tokens.
The WIF API has simple abstractions that can be used as an entry point to create a custom STS that
supports the WS-Trust protocol.

WIF also handles all the cryptographic work for dispatching the tokens provided to the RP and creating
the appropriate security context. WIF decrypts the token, validates its signature, and finally dispatches all
its claims. WIF introduces the ClaimsIdentity class, which implements the IIdentity interface and also
provides access to the full list of incoming claims. The ClaimsIdentity object is placed into a
ClaimsPrincipal object and attached to the current security context. Finally, WIF provides a rich set of
APIs to help the user make authorization decisions based on the incoming claims.

The following code demonstrates how to access claims by using the ClaimsIdentity class in an ASP.NET
Web API service.

Using the ClaimsIdentity Class


var identity = HttpContext.Current.User.Identity as ClaimsIdentity;

foreach (var claim in identity.Claims)
{
    Trace.WriteLine(claim.Issuer);
    Trace.WriteLine(claim.ValueType);
    Trace.WriteLine(claim.Value);
}
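
As a small illustration of making an authorization decision with these APIs, the following sketch checks the current principal for a role claim; the claim value is a hypothetical example:

```csharp
using System;
using System.Security.Claims;

// ClaimsPrincipal.Current returns the principal WIF attached to the
// current security context (.NET Framework 4.5 and later).
var principal = ClaimsPrincipal.Current;

// Authorize only callers that carry the required role claim.
if (!principal.HasClaim(ClaimTypes.Role, "FrequentFlyer"))
{
    throw new UnauthorizedAccessException("Missing required role claim.");
}
```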

When Microsoft introduced WIF, they provided WIF as both a separate download and an SDK. Since .NET
Framework 4.5, WIF is integrated into the core of the .NET Framework and is now part of the mscorlib.dll
and System.IdentityModel assemblies. WIF has close integration with WCF and provides relevant classes
in the System.IdentityModel.Services and System.ServiceModel assemblies.

For more information about WIF API, consult MSDN documentation:

WIF API
http://go.microsoft.com/fwlink/?LinkID=298867&clcid=0x409

WIF is fully compatible with other Microsoft claims-based infrastructures such as AD FS 2.0, Windows
Azure Active Directory, and Windows Azure ACS. WIF simplifies the integration of those infrastructures
into .NET Framework-based applications and helps developers build claims-aware solutions.

Demonstration: Using claims in an ASP.NET Website


This demonstration shows how to configure an ASP.NET MVC website for claims-based identities with
WIF.

Demonstration Steps
1. Create a new ASP.NET MVC 4 Web Application project, using the Internet Application template.

2. Run the application without debugging. Observe the Log in link on the upper-right corner of the
web page.

Note: The default login functionality is implemented by using ASP.NET Membership, which
uses a local identity store. ASP.NET Membership has a default implementation using SQL Server
as its identity store.

3. Open the Identity and Access dialog box, by right-clicking the project, and then clicking Identity
and Access. Configure the project to use the Local Development STS, and review the test claims that
will be used during development. Click OK to save the changes.

4. Add code to the Index method of the HomeController to pass the user's claims to the HTML page.
Retrieve the identity object stored in the Thread.CurrentPrincipal.Identity property.
Retrieve the claims from the identity's Claims property.

Store the claims in the controller's ViewBag dynamic property, under a new property named Claims.
5. In the ClaimsApp project, open the Index view for the Home controller and output the claims you
stored in the ViewBag.

Open the Index.cshtml file from the project's Views\Home folder.

Add the following code before the <h3> HTML element.

@foreach (System.Security.Claims.Claim claim in ViewBag.Claims)
{
    <div><b>@claim.Type</b> @claim.Value<br /></div>
}

6. Run the application and verify that the STS is running. Observe the value of the name claim, which
now appears on the upper-right corner of the web page, and the list of claims that is output in the
HTML.

Note: In a browser-based environment such as ASP.NET, a message flow called passive


federation is used to pass the claims to the application. Passive federation is discussed in depth in
Lesson 3, "Configuring Services to Use Federated Identities", of this module.

Question: What are the advantages of claims-based identity and STS in comparison to
traditional methods of managing identities, such as using a custom store of usernames and
passwords?

Lesson 2
Using the Windows Azure Access Control Service
Windows Azure ACS is a cloud-based STS created to provide federated claims-based identities. ACS
simplifies identity federation because it integrates with standards-based identity providers, such as AD FS
2.0, and with web identities such as Microsoft Live ID, Google, Yahoo!, Facebook, and OpenID providers.

Note: Windows Azure also provides an STS through the Windows Azure Active Directory
service, which is not covered in this course.

For more information about Windows Azure Active Directory, please see the following documentation:

Windows Azure Active Directory


http://go.microsoft.com/fwlink/?LinkID=313748

This module describes how to use and manage Windows Azure ACS.

Lesson Objectives
After completing this lesson, you will be able to:
Describe Windows Azure ACS functionality.

Configure identity providers and claims mapping rules.

Handle identities within ACS.

Introduction to Windows Azure Access Control Service


There are different types of identity providers
available, such as AD FS 2.0 in the corporate world
of directory services, and Facebook and Twitter in
the social network world. Depending on the
nature of your application, you can choose the
appropriate provider.

From a business point of view, it is essential that
you choose the right identity provider. In a
consumer-facing application, for example, it makes
perfect sense to use a provider such as Facebook
or Windows Live ID to ease the process of
creating a user for a website while ensuring the
user's true identity. An enterprise application, on the other hand, would benefit from integrating with
other business applications such as Microsoft Exchange and Lync.

Interfacing with so many identity providers is not an easy task. Each provider can use a different protocol
and expose different claims. ACS is a cloud-based service that provides a reliable and available STS. ACS is
designed for federation. It has no identity store of its own. Instead, it shims other identity providers,
forwards authentication requests to them, and maps tokens from different providers to a unified standard,
based on the relying party's choice. ACS also supports various types of tokens to match your application's
needs, such as SAML, SWT (Simple Web Token), and JWT (JSON Web Token).

Setting up ACS for federation


While ACS can manage identities, it is designed
for a federated scenario. This means that instead
of managing identities, ACS is focused on
mapping tokens and claims from other identity
providers to a unified token that will be sent to
the RP.

Using this approach, you can configure an RP in
ACS to trust a number of identity providers, and
while each identity provider produces different
tokens with a different set of information, you can
use ACS to receive the tokens in a unified format.

ACS should create a token that the RP can handle.
For this, the token should meet the following requirements:

The token type should be the type the RP expects.

It should contain the information (claims) the RP requests.

It should be signed by ACS and encrypted with the RP's public key.

Identity Providers
The first step for mapping tokens is to select which identity providers the service will support. You can do
this on the Identity providers configuration page, in the ACS portal. You can choose from a list of well-
known web identity providers such as Windows Live ID, Google, and Yahoo!, or provide the address of the
metadata document of a custom identity provider such as AD FS 2.0. Metadata documents contain all the
information required to establish trust with an identity provider. Additionally, you can configure the ACS
namespace as a Facebook application to integrate with Facebook and use it as an identity provider.

Note: All the functionality of the ACS portal is also available in an HTTP-based API, which is
beyond the scope of this module.

Mapping Rules
Mapping rules are the ACS mechanism for claim forwarding: ACS analyzes incoming claims, transforms them (if needed), and writes them as output claims. ACS can produce multiple types of tokens. Mapping rules let ACS act as an adapter that transforms one token type to another. For example, you can use ACS to transform a SAML 2.0 token to a lightweight SWT and use it to call HTTP-based web services.

After the transformation is complete, a new token, signed by ACS, is passed to the RP.

Creating a Trust Relation


The RP configuration in ACS defines the format of the token to be produced, the type of signature, and the signing keys. This contract between ACS and the RP is the trust relation needed for claims-based identity.
The portal also contains service settings that you can use to configure the certificates and keys that ACS will use to sign and encrypt tokens. SAML tokens can be signed by using certificates, but SWT tokens can only be signed by using a symmetric key.
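The symmetric-key signature of an SWT can be sketched as follows. This is a simplified illustration, not the full validation that ACS or an RP performs (a real implementation also checks the Issuer, Audience, and ExpiresOn values, and should use a framework rather than hand-rolled code); the key and claims shown are sample values.

```csharp
using System;
using System.Net;
using System.Security.Cryptography;
using System.Text;

public static class SwtSignatureDemo
{
    // Append the HMACSHA256 parameter, computed over the unsigned portion
    // of the form-encoded token with the shared symmetric key.
    public static string Sign(string unsignedToken, byte[] key)
    {
        using (var hmac = new HMACSHA256(key))
        {
            byte[] signature = hmac.ComputeHash(Encoding.UTF8.GetBytes(unsignedToken));
            return unsignedToken + "&HMACSHA256=" +
                   WebUtility.UrlEncode(Convert.ToBase64String(signature));
        }
    }

    // The RP validates the token by recomputing the signature with the same key.
    public static bool Validate(string token, byte[] key)
    {
        int index = token.LastIndexOf("&HMACSHA256=", StringComparison.Ordinal);
        return index >= 0 && Sign(token.Substring(0, index), key) == token;
    }

    public static void Main()
    {
        byte[] key = Encoding.UTF8.GetBytes("sample-symmetric-signing-key");
        string token = Sign(
            "Issuer=https://sample.accesscontrol.windows.net/&Audience=urn:sample", key);
        Console.WriteLine(Validate(token, key));        // True
        Console.WriteLine(Validate(token + "x", key));  // False
    }
}
```

Because the same key both signs and validates, the key must be shared secretly between ACS and the RP, which is why SWT signing keys are managed per relying party in the ACS portal.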

Question: When would you use claims mapping in federation scenarios?


11-10 Identity Management and Access Control

Identity Management in ACS


ACS can manage identities independently by
providing its own identity store. You can use this
self-managed identity store if you do not want to
store those identities locally. ACS uses these
identities to manage access to Service Bus
components: relays, queues, and topics. However,
you can also use those identities to secure other
applications. For example, you can configure
services in a business-to-business (B2B) scenario
to authenticate with each other using service
identities stored in the ACS. You can add or
remove identities under the Service Identities tab
of the ACS portal.

Each identity you create has a name, a description, and a credential. ACS supports several types of
credentials that you can attach to the identity, such as symmetric key, password, and X.509 certificates.

For additional information about ACS and service identities, refer to the MSDN documentation.

Service Identities
http://go.microsoft.com/fwlink/?LinkID=313749

In comparison to other identity stores, such as Active Directory Domain Services (AD DS), service identities provide less functionality and robustness. If you want to manage a large list of user identities, consider using Windows Azure Active Directory, which is beyond the scope of this course.

Demonstration: Configuring ACS Using the Management Portal


In this demonstration, you will see how to configure ACS by using the Management Portal.

Demonstration Steps
1. Open the Windows Azure Management Portal website (http://manage.windowsazure.com).

2. Create a new Access Control Service namespace named BlueYonderServerDemo11YourInitials


(YourInitials will contain your initials). For the region, select the region closest to your location.

3. Open the ACS portal for the new namespace.

You can view the list of ACS namespaces by clicking ACTIVE DIRECTORY in the navigation pane, and then clicking the ACCESS CONTROL NAMESPACES tab.

4. Open the list of identity providers and Add Google as a trusted identity provider for your namespace.

5. Add an RP application called BlueYonderServerDemo with the following information:


Name: BlueYonderServerDemo

Realm: http://localhost/WebApplication/

Return URL: http://localhost/WebApplication/


6. In the Rule groups page, select the default group created for the BlueYonderServerDemo RP and
generate claim transformation rules for the newly created rule group.

7. Locate and copy the WS-Federation Metadata endpoint address in the Application integration
page.

8. Open the D:\AllFiles\Mod11\DemoFiles\ConfigureTheACS\begin\ConfigureTheACS.sln solution


file.
9. Open the Identity and Access dialog box by right-clicking the project and then clicking Identity and Access. Configure the project to use the business identity provider and paste the address you copied into the STS metadata document box. Click OK to save the changes.

10. Open the Web.config file and explore the configuration added to the <system.identityModel> and
<system.web> configuration sections.

11. Run the application without debugging and verify you are redirected to ACS for authentication.
Question: Why should you use the Windows Azure ACS?

Lesson 3
Configuring Services to Use Federated Identities
Services that use claims-based identity need to be properly configured for the correct address of the
identity provider, the required claims, and keys to sign and encrypt the token.

This lesson describes how to configure a service application, such as a WCF or ASP.NET Web API service,
to use federated claims-based identities.

Lesson Objectives
After completing this lesson, you will be able to:

Explain the differences between active and passive federation.

Describe how ACS is integrated with ASP.NET Web API.

Describe how to configure a Service Bus endpoint with ACS.

Active and Passive Federation


You can configure claims-aware web applications
to use active or passive federation. Passive
federation supports HTTP and uses HTTP
redirection messages and cookies to point the
client to the identity provider and receive the
token. In active federation, the client calls the
identity provider explicitly and asks for a token. To
submit the request, the client must authenticate by
providing credentials or an authentication cookie.

Note: Passive federation supports both


HTTP and HTTPS.

The passive federation process includes the following steps:

1. The client submits a simple HTTP request to the RP, which responds with an HTTP 302 redirect
message because the original request has no token. The redirect message contains all the information
the identity provider needs about the relying party, such as its realm (the unique identifier of the
application), the required claims, and the return address.
2. To accept the redirected request, the identity provider asks the user to authenticate by submitting
credentials (which is done using an HTML form, since you are in a browser-based environment) or
attaching an authentication cookie, which might be available from previous requests.
3. The identity provider generates a token, attaches it to the response as a cookie, and redirects the
client's request back to the RP.

Note: Mobile and desktop-based client applications often use browser-based controls,
such as the Web authentication broker for Windows Store apps (which is shown later in the lab)
to enable passive federation.
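The redirect in step 1 carries this information as WS-Federation query-string parameters. The following sketch shows the shape of such a sign-in URL; the STS address, realm, and return URL are hypothetical sample values.

```csharp
using System;

public static class PassiveRedirectDemo
{
    // Build the WS-Federation sign-in URL that the RP places in the
    // Location header of its HTTP 302 response.
    public static string BuildSignInUrl(string stsUrl, string realm, string replyUrl)
    {
        return stsUrl +
               "?wa=wsignin1.0" +                            // requested action: sign in
               "&wtrealm=" + Uri.EscapeDataString(realm) +   // unique identifier of the RP
               "&wreply=" + Uri.EscapeDataString(replyUrl);  // address to return the token to
    }

    public static void Main()
    {
        Console.WriteLine(BuildSignInUrl(
            "https://sample.accesscontrol.windows.net/v2/wsfederation",
            "http://localhost/WebApplication/",
            "http://localhost/WebApplication/"));
    }
}
```

The identity provider reads wtrealm to decide which RP configuration (and which rule group) to apply before redirecting the browser back to the wreply address with the token.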

To execute passive federation, the client must know how to handle HTTP redirect messages. Browsers can
handle redirect messages but smart clients and mobile applications usually do not have redirection
capabilities. This is why web applications targeted to serve clients running on a browser usually use
passive federation and applications targeted for other clients use active federation. For example, a service
that needs to authenticate against another service, using claims-based authentication, will use active
federation instead of passive federation.

The active federation process includes the following steps:


1. The client accesses the identity provider directly. In the request, the client submits all the information
required to create a token such as the name of the RP, and the requested claims.

2. The client then attaches the token to the request before sending it to the RP. For example, in HTTP-
based web services, the client attaches the token in the HTTP Authorization header. In WCF SOAP-
based services, the token is sent to the service on top of a WS-Trust conversation before establishing
a secure channel with the service.

ACS Integration with ASP.NET Web API


If a web service client is not browser-based, it
must use active federation. When calling an HTTP-
based web service with active federation, you
attach the token to the HTTP request's
Authorization header. There are several
standards for tokens, such as SAML, which is
XML-based, SWT, which is a simple form-encoded
token, and JWT, which is JSON-based. SAML
tokens are considered large, and are rarely
used with HTTP-based web services. SWT and JWT
are much more compact, and are therefore more
suitable for such scenarios.

Note: In IIS, HTTP headers are limited in size
according to the MaxFieldLength and MaxRequestBytes registry settings. You can change these values
to support larger tokens, if required.
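On the client side, attaching a previously acquired token can look like the following sketch. The OAuth scheme name and the example service URL are illustrative assumptions; use whatever scheme your service's authentication handler expects.

```csharp
using System.Net.Http;
using System.Net.Http.Headers;

public static class TokenClientDemo
{
    // Create an HttpClient that sends the token on every request in the
    // HTTP Authorization header.
    public static HttpClient CreateClient(string swtToken)
    {
        var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("OAuth", swtToken);
        return client;
    }
}
```

A call such as CreateClient(token).GetAsync("https://example.com/travelers") would then carry the header Authorization: OAuth &lt;token&gt; on the request.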

Some technologies, such as WCF and ASP.NET MVC, support automatic handling of tokens with the
help of WIF. ASP.NET MVC supports passive federation, whereas WCF uses active federation. In
contrast to WCF and ASP.NET MVC, in ASP.NET Web API, WIF is not automatically integrated into the
message handling pipeline. To validate tokens, you have to write the code manually and integrate it into
the request pipeline by implementing a delegating handler. Delegating handlers are Web API extensibility
points that process all incoming and outgoing requests. Delegating handlers are covered in Course 20487,
Module 4, "Extending and Securing ASP.NET Web API Services" Lesson 1 "The ASP.NET Web API Pipeline".

The SendAsync method of the DelegatingHandler class sends an HTTP request to the inner handler as
an asynchronous operation. This is where you implement token processing and authorization using the
following logic:

1. Inspect the incoming request. For requests that contain an Authorization header, try to authenticate
the token presented by the client:

a. If authentication is successful, create a claims principal, and call the next handler.
b. If authentication fails, return a 401 (Unauthorized) HTTP status code, and set the
WWW-Authenticate header to the required token type.

2. After the next handler has processed the request, inspect the outgoing response. If the status code is
401, set the WWW-Authenticate header to the required token type.

Note: Although the process of authenticating tokens is explained in this topic, it is not
recommended to implement your own delegating handler to do so. There are several open
source projects that implement token validation and claim parsing. Later, in the lab, you will use
the Thinktecture.IdentityModel open source project to implement token validation and claim
parsing.

The following code is an example of how to implement an authentication delegating handler with
ASP.NET Web API.

Authentication Delegating Handler in ASP.NET Web API


public class AuthenticationHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        try
        {
            // Implement the Authenticate method by validating the token.
            // Do not implement the validation process on your own. Use a third-party framework.
            var principal = Authenticate(request);
            if (principal.Identity.IsAuthenticated)
            {
                // Implement the SetPrincipal method by extracting claims from the token.
                // Do not implement the extraction process on your own. Use a third-party framework.
                SetPrincipal(principal);
            }
            else
            {
                var unauthorized = new HttpResponseMessage(HttpStatusCode.Unauthorized);
                SetAuthenticateHeader(unauthorized);
                return unauthorized;
            }
        }
        catch (Exception)
        {
            // Treat any token validation error as an authentication failure.
            var unauthorized = new HttpResponseMessage(HttpStatusCode.Unauthorized);
            SetAuthenticateHeader(unauthorized);
            return unauthorized;
        }

        // Call the next handler, and inspect the response.
        var response = await base.SendAsync(request, cancellationToken);

        // The response can be unauthorized if the client failed authorization.
        if (response.StatusCode == HttpStatusCode.Unauthorized)
        {
            SetAuthenticateHeader(response);
        }
        return response;
    }

    protected virtual void SetAuthenticateHeader(HttpResponseMessage response)
    {
        response.Headers.WwwAuthenticate.Add(new AuthenticationHeaderValue("SWT"));
    }

    // The rest of the code was removed for brevity.
}

Although WIF has no automatic support for ASP.NET Web API, you can use WIF in your implementation
to validate credentials and turn security tokens into claims.
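For example, the SetPrincipal step referenced in the handler above can be as simple as the following sketch, which makes the claims principal visible to the rest of the pipeline. PrincipalHelper is a hypothetical helper name.

```csharp
using System.Security.Claims;
using System.Threading;
using System.Web;

public static class PrincipalHelper
{
    // Expose the authenticated principal to downstream code, including
    // [Authorize] filters and controllers that read User.Identity.
    public static void SetPrincipal(ClaimsPrincipal principal)
    {
        Thread.CurrentPrincipal = principal;
        if (HttpContext.Current != null)
        {
            HttpContext.Current.User = principal;
        }
    }
}
```

Setting both the thread principal and the HTTP context user keeps the two in sync, because different parts of the ASP.NET pipeline read one or the other.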

Service Bus Endpoint Configuration with ACS


Another type of potential RP for ACS is a
Windows Azure Service Bus endpoint. In
Windows Azure Service Bus, every relay endpoint,
queue, and topic can have a distinct security
policy. In fact, every Windows Azure Service Bus
namespace is automatically configured with an
ACS namespace. When you create a Windows
Azure Service Bus namespace, ACS creates a
dedicated management namespace for it with the
following URL:
https://{SBnamespaceName}-sb.accesscontrol.windows.net

Browsing to the management namespace opens


an ACS portal designated to handle identities that should be granted access to the Windows Azure Service
Bus namespace. You can manage such identities directly in ACS as described previously in Topic 3,
"Identity Management in ACS", of Lesson 2, "Using the Windows Azure Access Control Service", or you can
use other identity providers. You can configure each node in the tree as a separate relying party with a
matching rule group.
Windows Azure Service Bus endpoints support the net.windows.servicebus.action claim type, which
can have one of the following values:

Listen. Clients with this claim value can listen on nodes, such as queues, topics, or relays.
Send. Clients with this claim value can send messages.

Manage. Clients with this claim value can create and delete nodes, as well as manage the permissions
of other users.

As you saw in the demos and lab in Module 7, "Windows Azure Service Bus" of Course 20487,
authentication against Service Bus relays, queues, and topics is performed with an identity named owner.
The owner identity is the default identity created by ACS, and it has the Manage permission. The owner
identity is therefore capable of doing much more than sending and receiving messages, and you
should refrain from using it. Instead, create two identities: one for the service, with Listen permissions,
and another for the client, with Send permissions.
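With the owner identity replaced, the client and service code can each authenticate with its own service identity, as in the following sketch. It uses the Windows Azure Service Bus SDK of this course's timeframe; the namespace, queue name, identity names, and key placeholders are hypothetical sample values.

```csharp
using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

public static class ServiceBusIdentitiesDemo
{
    public static void Main()
    {
        Uri address = ServiceBusEnvironment.CreateServiceUri(
            "sb", "blueyondersample", string.Empty);

        // Client side: a service identity that carries only the Send claim.
        MessagingFactory senderFactory = MessagingFactory.Create(address,
            TokenProvider.CreateSharedSecretTokenProvider("QueueSender", "{SenderKey}"));
        QueueClient sender = senderFactory.CreateQueueClient("blueyonderqueue");
        sender.Send(new BrokeredMessage("New booking"));

        // Service side: a service identity that carries only the Listen claim.
        MessagingFactory listenerFactory = MessagingFactory.Create(address,
            TokenProvider.CreateSharedSecretTokenProvider("QueueListener", "{ListenerKey}"));
        QueueClient listener = listenerFactory.CreateQueueClient("blueyonderqueue");
        BrokeredMessage message = listener.Receive();
    }
}
```

If the sender identity attempted to call Receive, ACS would not issue a token carrying the Listen claim, and the operation would fail with an unauthorized error, which is exactly the isolation this topic recommends.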
For more information on securing Service Bus endpoints, refer to the MSDN documentation.

Service Bus Authentication and Authorization with the Access Control Service.
http://go.microsoft.com/fwlink/?LinkID=313750

The Service Bus authentication support is one of the major differences between Windows Azure Service
Bus Queues and Azure Storage Queues. In Azure Storage Queues, you need the primary storage key to
authenticate against the storage account.

Question: Why would you use different identities for sending and receiving messages with
Service Bus Queues?

Demonstration: Configuring ACS for Service Bus Endpoints


In this demonstration, you will see how to configure a new security policy for a Service Bus queue.

Demonstration Steps
1. Open the Windows Azure Management Portal website (http://manage.windowsazure.com).

2. Create a new Service Bus Queue named BlueYonderQueue in a new Service Bus namespace named
BlueYonderServerDemo11YourInitials (YourInitials will contain your initials).
3. Open the Service Bus ACS website for the Service Bus namespace you created in step 2.
Use the address https://BlueYonderServerDemo11YourInitials-sb.accesscontrol.windows.net
(YourInitials will contain your initials).

Note: The browser should log you on automatically to the portal. If you are redirected to
the Windows Live ID Sign in page, type your email and password, and then click Sign in.

4. Create a new Relying Party with the following information:

Name: BlueYonderQueue

Realm: http://blueyonderserverdemo11YourInitials.servicebus.windows.net/blueyonderqueue
(YourInitials will contain your initials in lowercase).
Token lifetime: 1200
Identity providers: none (uncheck Windows Live ID)

Rule groups: check both Create new Rule Group and Default Rule Group for ServiceBus.
5. Create a new Service Identity. Set the service identity name to QueueClient and generate a
symmetric key for the service identity. After you generate the symmetric key, copy its value to the
clipboard.
6. Add a password credential for the new service identity, and set the password to be the same as the
symmetric key you created before.

7. Edit the default rule group for BlueYonderQueue and add a claim rule with the following settings:

Input claim issuer: Access Control Service


Input claim type: Select type

Input claim value: QueueClient


Output claim type: net.windows.servicebus.action

Output claim value: Send

8. Open the
D:\AllFiles\Mod11\DemoFiles\ACSForServiceBus\begin\ServiceBusQueue\ServiceBusQueue.sln
solution file.

9. Examine the Main method in the Program class, and observe the use of the two identities: the
QueueClient is used for sending messages to the queue, and the owner is used for listening to the
queue and receiving messages.

10. Replace the service bus namespace with the service bus namespace that you created in step 2 of this
demonstration.

11. Replace the {Password} values with the passwords of each of the service identities. You can find the
password in the ACS portal, under the Service identities link on the pane to the left.

12. Run the application and verify you are able to send messages to the queue and read from the queue.
The last step should throw an unauthorized exception, because the QueueClient identity is not
permitted to listen to the queue.

Question: What is passive and active federation, and when should you use them?

Lab: Identity Management and Access Control


Scenario
One of the features that customers have requested for the Travel Companion app is the ability to access a
list of their booked flights from multiple devices. After investigating the current behavior of the app, you
found that it currently stores the booking information for each device and that each booking holds the ID
of the device that was used when booking. Therefore, it is not possible to view bookings from other
devices.

To address this issue, you decided that users should be able to log on to the app, so that the booking is
saved for a user and not for a device. Users would then be able to log on to the app from other devices
and still see their future and past flights.

To reduce the amount of work required to manage user identities and passwords, you decided that the
authentication process will be accomplished by using known identity providers, such as Windows Live ID,
with the help of Windows Azure ACS. In this lab, you will configure ACS to support user authentication
with Windows Live ID, and configure the ASP.NET Web API service and client app to support this
authentication process.

Objectives
After completing this lab, you will be able to:
Configure Windows Azure ACS.

Configure an ASP.NET Web API service to trust the ACS.

Configure a Windows Store app to use federated authentication.

Lab Setup
Estimated Time: 60 Minutes.

Virtual Machine: 20487B-SEA-DEV-A, 20487B-SEA-DEV-C


User name: Administrator, Admin

Password: Pa$$w0rd, Pa$$w0rd

For this lab, you will use the available virtual machine environment. Before you begin the lab, you
must complete the following steps:

1. On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.

2. In Hyper-V Manager, click MSL-TMG1, and in the Action pane, click Start.

3. In Hyper-V Manager, click 20487B-SEA-DEV-A, and in the Action pane, click Start.
4. In the Action pane, click Connect. Wait until the virtual machine starts.

5. Sign in using the following credentials:

User name: Administrator

Password: Pa$$w0rd

6. Return to Hyper-V Manager, click 20487B-SEA-DEV-C, and in the Action pane, click Start.

7. In the Action pane, click Connect. Wait until the virtual machine starts.

8. Sign in using the following credentials:

User name: Admin

Password: Pa$$w0rd

9. Verify that you have received credentials to log in to the Azure portal from your training provider; these
credentials and the Azure account will be used throughout the labs of this course.

In this lab, you will install NuGet packages. It is possible that some NuGet packages will have newer
versions than those used when developing this course. If your code does not compile, and you identify
the cause to be a breaking change in a NuGet package, you should uninstall the NuGet package and
instead, install the old version by using Visual Studio's Package Manager Console window:
1. In Visual Studio, on the Tools menu, point to Library Package Manager, and then click Package
Manager Console.

2. In Package Manager Console, enter the following command and then press Enter.

install-package PackageName -version PackageVersion -ProjectName ProjectName


(The project name is the name of the Visual Studio project that is written in the step where you were
instructed to add the NuGet package).

3. Wait until Package Manager Console finishes downloading and adding the package.

The following table details the compatible versions of the packages used in the lab:

Package name Package version

Wif.Swt 0.0.1.4

Thinktecture.IdentityModel 2.2.1

Exercise 1: Configuring Windows Azure ACS


Scenario
Before you configure the ASP.NET Web API services for claims-based identities and authentication, you
need to create your STS and configure it for your Web application. In this exercise, you will create an ACS
namespace and in it, configure an RP for your application.
The main tasks for this exercise are as follows:

1. Create a new ACS namespace

2. Configure the Relying Party application


3. Create a rule group

Task 1: Create a new ACS namespace


1. Open the Windows Azure Management Portal website (http://manage.windowsazure.com).

2. Create a new ACS namespace named BlueYonderCompanionYourInitials (YourInitials will contain


your initials). When you create the namespace, select the region closest to your location.

Task 2: Configure the Relying Party application


1. Open the ACS portal for the new namespace.

You can view the list of ACS namespaces by clicking ACTIVE DIRECTORY in the navigation pane, and
then clicking the ACCESS CONTROL NAMESPACES tab.

2. Create a new Relying Party with the following information:

Name: BlueYonderCloud
Realm: urn:blueyonder.cloud

Return URL: https://CloudServiceName.cloudapp.net/federationcallback (CloudServiceName is


the name of the cloud service you wrote down in the beginning of the lab while running the setup
script)

Token format: SWT


Token signing key: Generate a new token.

Task 3: Create a rule group


1. Generate a rule group for the BlueYonderCloud RP. The rule group should propagate all the
incoming claims from the identity providers to the RP.

To generate a rule group, in the ACS portal, open the Rule Groups page, select the default rule
group for BlueYonderCloud, and then click the Generate link.
Make sure you generate the rules for the Windows Live ID identity provider.

Results: After you complete this exercise, you will have created a new ACS namespace and configured
an RP for the ASP.NET Web API services. You will test the RP configuration at the end of the lab.

Exercise 2: Integrating ACS with the ASP.NET Web API Project


Scenario
Now that you have created the ACS namespace, the next step is to configure your ASP.NET Web API
services to support authentication and authorization of claims-based identities. ASP.NET Web API does
not provide an out-of-the-box solution for claims-based identities. Therefore, you will use third-party
NuGet packages to add that support.
In this exercise, you will implement SWT token authentication with the Thinktecture.IdentityModel
NuGet package, and SWT claims extraction with the Wif.Swt NuGet package. In addition, you will
configure several of the API controllers to authorize the user.
The main tasks for this exercise are as follows:
1. Add the Thinktecture.IdentityModel NuGet package

2. Add token validation to ASP.NET Web API


3. Add a federation callback controller

4. Update the Routing with the New Authentication Configuration

5. Decorate the ASP.NET Web API controllers for authorization

Task 1: Add the Thinktecture.IdentityModel NuGet package


1. Open the D:\AllFiles\Mod11\LabFiles\begin\BlueYonder.Server\BlueYonder.Companion.sln solution
file in Visual Studio 2012.

2. Install version 2.2.1 of the ThinkTecture.IdentityModel NuGet package in the


BlueYonder.Companion.Host project.

To install a specific version of a NuGet package, first open the Package Manager Console window
(on the View menu, under Other Windows).

In Package Manager Console, type the following command and then press Enter:
install-package ThinkTecture.IdentityModel -version 2.2.1 -ProjectName BlueYonder.Companion.Host

Note: The last known version of the ThinkTecture.IdentityModel NuGet package that
supports the SWT token is 2.2.1. Therefore, you need to use the Package Manager Console to
install this NuGet package, rather than using the Manage NuGet Packages dialog box.

Task 2: Add token validation to ASP.NET Web API


1. In the BlueYonder.Companion.Host.Azure project, open the properties of the
BlueYonder.Companion.Host web role, and add a string setting to store the issuer (STS) name of
the token.

Name the new setting ACS.IssuerName, and set its value to


https://BlueYonderCompanionYourInitials.accesscontrol.windows.net/ (YourInitials will contain
your initials).
Make sure there are no spaces at the end of the string.

2. Add another string setting to the web role to store the realm of the relying party.

Name the new setting ACS.Realm, and set its value to urn:blueyonder.cloud.
3. Add another string setting to the web role to store the token signing key of the relying party you
used. Name the new setting ACS.SigningKey.

To find the token signing key, in the ACS portal, open the Certificates and Keys page, click
BlueYonderCloud, and then click Show Key.

4. Create a new folder named Authentication in the BlueYonder.Companion.Host project.

5. Create a new class named AuthenticationConfig in the new folder you created.
6. In the AuthenticationConfig class, create a static method named CreateConfiguration, which
returns an object of type AuthenticationConfiguration, and start implementing the method.

Create a local variable named config, of type AuthenticationConfiguration.


Initialize the new variable and return its value from the method.

7. Continue implementing the method by retrieving the ACS.IssuerName, ACS.Realm, and


ACS.SigningKey settings you created in the web role settings.

Use the Microsoft.WindowsAzure.CloudConfigurationManager class to access the role's settings.


Use the Trim method on each of the retrieved strings to remove any whitespace which may have
been added to them.

Make sure you add the code before the return statement.
8. Continue implementing the method by adding a new SWT token to the configuration.

Call the config's AddSimpleWebToken method to add a definition for the SWT token.
Set the method's issuer and signingKey parameters to the strings you retrieved from the role's
settings.

Set the method's audience parameter to the ACS.Realm string you retrieved from the role's settings.

Note: Realm is the unique identifier of your RP. Audience refers to the realm of the RP that
redirected the client to the STS. In most cases the realm and audience are the same, because you
are redirected back to the application you came from. There are scenarios where the RP that got
the token is not the same RP that requested the token.

Set the method's options parameter to the following code.

AuthenticationOptions.ForAuthorizationHeader("OAuth")

Note: The AuthenticationOptions.ForAuthorizationHeader method sets the value of the


HTTP Authorization header that is added to unauthorized responses. The OAuth value specifies
that OAuth authentication should be used.

Make sure you add the simple web token configuration before the return statement.

9. Set the default authentication scheme to OAuth, and enable the use of session tokens.

Set the config's DefaultAuthenticationScheme property to OAuth. This setting will define the
authentication scheme returned for requests without any HTTP Authorization header.

Set the config's EnableSessionToken property to true to support client requests for session tokens.
Clients can use a session token instead of including the SWT token in each request. Session tokens are
usually stored in cookies.

Make sure you set the two properties before the return statement.
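Putting the steps of this task together, the CreateConfiguration method can end up looking roughly like the following sketch. It assumes the AddSimpleWebToken signature and namespace names of Thinktecture.IdentityModel 2.2.1, which may differ from your installed version; compare it against your own result rather than copying it verbatim.

```csharp
using Microsoft.WindowsAzure;
using Thinktecture.IdentityModel.Tokens.Http;

public static class AuthenticationConfig
{
    public static AuthenticationConfiguration CreateConfiguration()
    {
        var config = new AuthenticationConfiguration
        {
            // Scheme returned for requests without an Authorization header.
            DefaultAuthenticationScheme = "OAuth",
            // Let clients exchange the SWT for a session token.
            EnableSessionToken = true
        };

        // Trim the role settings in case whitespace was added to them.
        string issuer = CloudConfigurationManager.GetSetting("ACS.IssuerName").Trim();
        string realm = CloudConfigurationManager.GetSetting("ACS.Realm").Trim();
        string signingKey = CloudConfigurationManager.GetSetting("ACS.SigningKey").Trim();

        config.AddSimpleWebToken(
            issuer: issuer,
            audience: realm,
            signingKey: signingKey,
            options: AuthenticationOptions.ForAuthorizationHeader("OAuth"));

        return config;
    }
}
```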

Task 3: Add a federation callback controller


1. Install the Simple Web Token Support for Windows Identity Foundation NuGet package in the
BlueYonder.Companion.Host project.

You can locate the NuGet package by searching for WIF.SWT.

2. Add a new class named FederationCallbackController to the BlueYonder.Companion.Host


project. Set the class to derive from ApiController, and set the access level of the class to public.

3. In the FederationCallbackController, add a method to handle POST requests which hold the
authentication token.
Name the method Post, and set its return type to HttpResponseMessage.

Use the Request.CreateResponse method to create a Redirect response message.


Use the response's Headers collection to add the Location HTTP header to the response.
Set the Location header to the string format FederationCallback/end?acsToken={0}.

Replace the {0} placeholder with the token passed in the request. Retrieve the token by using the
ClaimsPrincipalExtensions.BootstrapToken extension method.

The BootstrapToken method accepts an IPrincipal parameter. Use the HttpContext.Current.User


to get the current IPrincipal.

Note: The client application extracts the token from the response and uses it to
authenticate against the service in future requests.
The special redirect to FederationCallback/end indicates to the client that the authentication
process has completed successfully. This flow is part of the passive federation process.
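As a reference, the controller described in steps 2 and 3 can take roughly the following shape. This is a sketch that assumes the ClaimsPrincipalExtensions.BootstrapToken extension method provided by the WIF.SWT package returns the raw token string; verify the method's actual signature in your installed package.

```csharp
using System.Net;
using System.Net.Http;
using System.Web;
using System.Web.Http;

public class FederationCallbackController : ApiController
{
    public HttpResponseMessage Post()
    {
        HttpResponseMessage response = Request.CreateResponse(HttpStatusCode.Redirect);

        // Extract the ACS token that WIF stored for the authenticated user.
        string token = ClaimsPrincipalExtensions.BootstrapToken(HttpContext.Current.User);

        // The client extracts the token from this special redirect address.
        response.Headers.Add("Location",
            string.Format("FederationCallback/end?acsToken={0}", token));
        return response;
    }
}
```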

4. In the BlueYonder.Companion.Host project, open the Web.config file, and in the <appSettings>
section set the SwtSigningKey application setting to the relying party token signing key you
generated in the first exercise.

You can either locate the signing key in the Relying party configuration in the ACS portal, or copy it
from the role's settings, from the ACS.SigningKey setting.

5. In the Web.config file, locate the <microsoft.identityModel> section and in the <audienceUris>
element, replace the [yourrealm] placeholder with urn:blueyonder.cloud.

Note: WIF 4.5 uses the <system.identityModel> section. However, the WIF.SWT NuGet
package you installed still uses WIF 4, which uses the <microsoft.identityModel> section.

6. Locate the <trustedIssuers> element and replace the [youracsnamespace] placeholder with your
ACS namespace. Type the namespace in lowercase letters.

7. Add the following federated authentication configuration to the <service> element under the
<microsoft.identityModel> section.

<federatedAuthentication>
<wsFederation passiveRedirectEnabled="false" issuer="urn:unused" realm="urn:unused"
requireHttps="false" />
</federatedAuthentication>

8. Make sure the Microsoft.IdentityModel assembly in the BlueYonder.Companion.Host project is
copied locally when deployed.
In Solution Explorer, under the BlueYonder.Companion.Host project, expand the References node.
Click the Microsoft.IdentityModel reference, and in the Properties window, change Copy Local to
True.

Note: WIF 4 is not installed by default in Windows Azure VMs. Therefore you need to make
sure the assembly is included in the deployed package.

Task 4: Update the Routing with the New Authentication Configuration


1. In the BlueYonder.Companion.Host project, open the WebApiConfig.cs file from the App_Start
folder, and add a route for the FederationCallback controller in the Register method. Use the
following parameters when calling the MapHttpRoute method.

Parameter      Value
name           callback
routeTemplate  FederationCallback
defaults       a new anonymous type, with a Controller property set to the string FederationCallback

Make sure you add the route before any other call to the MapHttpRoute method.

Note: The order of routes is important; you must add the federation callback route before adding
the default route ({controller}/{id}), which handles all the other calls to the controllers. If you add the
default route first, it will be used even when you use a URL that ends with FederationCallback.
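Put together, the beginning of the Register method might look like the following sketch of the routing configuration. It uses the parameter values from the table and omits the project's remaining routes, so it is illustrative rather than the lab's exact code:

```csharp
public static void Register(HttpConfiguration config)
{
    // The federation callback route must be mapped before the default
    // route; otherwise the "{controller}/{id}" template would also match
    // URLs that end with FederationCallback.
    config.Routes.MapHttpRoute(
        name: "callback",
        routeTemplate: "FederationCallback",
        defaults: new { Controller = "FederationCallback" });

    // ... the existing MapHttpRoute calls follow here ...
}
```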

2. In the Register method, use the AuthenticationConfig.CreateConfiguration static method you
created before to get an AuthenticationConfiguration object.
Call the method before mapping the HTTP routes.
Store the result in a local variable.


11-24 Identity Management and Access Control

3. Add the AuthenticationHandler message handler to the TravelerReservationsApi, ReservationsApi, and
DefaultApi routes.
In each of the routes, add the handler parameter.
Set the parameter to a new AuthenticationHandler object, and in the constructor, pass the variable
you created in the previous step and the ASP.NET Web API global configuration object.
The config parameter of the Register method contains the global configuration object.
Note: The authentication handler is not used for the first two routes you just added, because the
requests to the FederationCallback controller are sent before the client is authenticated. The
authentication handler is not used for the location's weather route because the GetWeather action is
public and does not require any authentication.
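As a sketch, the handler wiring for the default route might look like this. AuthenticationHandler and AuthenticationConfig come from the lab project and its NuGet packages, and the defaults shown for the route are assumed, so this fragment compiles only inside that project:

```csharp
// Create the authentication configuration once, before mapping the routes.
AuthenticationConfiguration authConfig = AuthenticationConfig.CreateConfiguration();

// Pass the handler as the fifth parameter of MapHttpRoute; the same pattern
// applies to the TravelerReservationsApi and ReservationsApi routes.
config.Routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "{controller}/{id}",
    defaults: new { id = RouteParameter.Optional },
    constraints: null,
    handler: new AuthenticationHandler(authConfig, config));
```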

Task 5: Decorate the ASP.NET Web API controllers for authorization


1. In the BlueYonder.Companion.Controllers project, open the ReservationsController,
TravelersController, and TripsController classes, and decorate them with the [Authorize] attribute.
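For example, the decorated controller looks like the following fragment (the controller body itself is unchanged):

```csharp
[Authorize]
public class ReservationsController : ApiController
{
    // All actions now require an authenticated caller; unauthenticated
    // requests are rejected with 401 (Unauthorized).
}
```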

Results: After completing this exercise, you will have configured your ASP.NET Web API services to use
claims-based identities, authenticate users, and authorize users. You will test this configuration at the end
of the lab.

Exercise 3: Deploying the Web Application to Windows Azure and
Configuring the Client App
Scenario
Now that the ASP.NET Web API services are configured for claims-based identities, the last step is to
configure the client-side code. ASP.NET Web API services require the use of active federation, in which
the client first authenticates against an identity provider before sending the authentication token to the
ASP.NET Web API service.

In this exercise, you will deploy your ASP.NET Web API services to Windows Azure, and then configure the
client with the location of your ACS and cloud service. The client-side code for active federation is already
written in the client app. In this exercise, you will also examine the client-side code to understand the
process of active federation.

The main tasks for this exercise are as follows:


1. Deploy the Web Application to Windows Azure

2. Configure the client app for authentication

3. Examine the Client Code That Manages the Authentication Process

Task 1: Deploy the Web Application to Windows Azure


1. Publish the BlueYonder.Companion.Host.Azure project.

If you did not import your Windows Azure subscription information yet, download your Windows
Azure credentials, and import the downloaded publish settings file in the Publish Windows Azure
Application dialog box.

2. Select the cloud service that matches the cloud service name you wrote down in the beginning of the
lab, after running the setup script.
3. Finish the deployment process by clicking Publish.

Task 2: Configure the client app for authentication


1. In the 20487B-SEA-DEV-C virtual machine, open the
D:\AllFiles\Mod11\LabFiles\begin\BlueYonder.Companion.Client\BlueYonder.Companion.Client.sln
solution file in Visual Studio 2012.

2. In the BlueYonder.Companion.Shared project, open the Addresses class, and in the BaseUri
property replace the {CloudService} placeholder with the Windows Azure Cloud Service name you
wrote down at the beginning of this lab.
3. In the BlueYonder.Companion.Client project, expand the Helpers folder, and open the
DataManager class. Set the ACS namespace constant to the namespace you created in Exercise 1,
"Configuring Windows Azure ACS".

Note: The client is already configured to use the blueyonder.cloud realm.

Task 3: Examine the Client Code That Manages the Authentication Process
1. In the DataManager class, locate the GetLiveIdUri method, and examine its code. Observe how the
method retrieves the address of the identity provider logon page.
2. Locate the AuthenticateAsync method, and examine its code. Observe how the method uses the
WebAuthenticationBroker class to handle the authentication process.

3. Locate the GetSessionToken method and examine its code. Observe how the method uses the SWT
it received from the federation callback to start a secure session with the ASP.NET Web API service.

4. Locate the CreateHttpClient method, and examine its code. Observe how the method creates a new
HTTP request object with the HTTP Authorization header.
5. Run the client app and verify that the client app requests a Windows Live ID identity. Enter your Windows
Live ID credentials and verify that you see the main window of the app.
6. Display the app bar, log out from the client app, and then close the app.
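The essence of the CreateHttpClient method examined in step 4 can be sketched with standard System.Net.Http types. The "Bearer" scheme name and the way the session token is supplied are assumptions for this sketch, not the lab's exact code:

```csharp
using System.Net.Http;
using System.Net.Http.Headers;

public static class ClientAuthSketch
{
    // Attaches the session token to every outgoing request through the
    // HTTP Authorization header, as the lab's CreateHttpClient method does.
    // The "Bearer" scheme is an assumption for this sketch.
    public static HttpClient CreateHttpClient(string sessionToken)
    {
        var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", sessionToken);
        return client;
    }
}
```

Because the header is set on DefaultRequestHeaders, every request sent through this client carries the token automatically.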

Results: After completing this exercise, you will be able to run the client app, and log in using your
Windows Live ID credentials.

Question: In this lab, you used the ASP.NET Web API and authenticated with a SWT. On the
other hand, you could use a WCF service over any of the WS-Federation bindings. When
should you use each of these technologies?

Question: What types of tokens can you use when calling a REST-based web service?

Module Review and Takeaways


In this module, you learned about claims-based identity and federation scenarios. You can now use the
Windows Azure ACS as a scalable and reliable STS for your application's needs, but if necessary, you know
that you can implement your own STS by relying on WIF.

Next, you saw how to configure WCF and ASP.NET Web API services to use federated identities. You
integrated WIF into the WCF and ASP.NET Web API pipelines using configuration and code, and saw how to
configure ACS for Service Bus endpoints.

Finally, you learned about configuring federated identities on the client side, and how to call REST-based
services with the Authorization HTTP header. In the lab, you added federated identity capabilities to the
ASP.NET Web API project for the booking service, and to the companion Windows Store app.

Best Practices:

Use passive federation for websites and active federation for web services.
Use well-known infrastructures such as ADFS 2.0, ACS, WAAD, and WIF.

Review Question(s)
Question: What are the advantages of using claims-based identity?

Tools
Visual Studio 2012
Windows Azure Management Portal

Service Trace Viewer



Module 12
Scaling Services
Contents:
Module Overview 12-1
Lesson 1: Introduction to Scalability 12-2
Lesson 2: Load Balancing 12-5
Lesson 3: Scaling On-Premises Services with Distributed Cache 12-8
Lesson 4: Windows Azure Caching 12-15
Lesson 5: Scaling Globally 12-22
Lab: Scalability 12-25
Module Review and Takeaways 12-28

Module Overview
Services that are successful in providing business value are likely to experience growth in the number of
users and the amount of data that they need to handle. Developers should know how to make sure that
their services can handle the increasing workload while still maintaining a high level of performance and
good user experience. You will learn about the need for scalable services and how to handle increasing
workloads using load balancing and distributed caching.

You will learn about scaling services in both on-premises and cloud deployments, along with the
challenges that such services face while they are growing.

Note: The Management Portal UI and Windows Azure dialog boxes in Visual Studio 2012
are updated frequently when new Windows Azure components and SDKs for .NET are released.
Therefore, it is possible that some differences will exist between screen shots and steps shown in
this module, and the actual UI you encounter in the Management Portal and Visual Studio 2012.

Objectives
After completing this module, you will be able to:
Explain the need for scalability.

Describe how to use load balancing for scaling services.

Describe how to use distributed caching for on-premises as well as Windows Azure services.
Describe how to use Windows Azure caching.

Describe how to scale services globally.



Lesson 1
Introduction to Scalability
Scalability is a critical aspect of any service-oriented software. It has a direct impact on how users view the
reliability and trustworthiness of a service and therefore has a bearing on the business.

In this lesson, you will be introduced to the two approaches for scaling large applications and understand
the required components.

Lesson Objectives
After completing this lesson, students will be able to:

Describe the reasons that make scalability important.

Explain the two approaches for scaling applications.

Describe the components of a scaled-out architecture.


Define the architectural challenges of scaling applications and services.

The Reasons for Scaling


Scalability is a system's ability to respond to growing business needs in an optimal and
effective manner. It is also the ability to take advantage of increased available resources. This is
often required for several reasons:
A major surge in demand for a service over a limited period of time (minutes or hours).

A gradual rise in demand for a service over a long period of time (weeks or months).

An increase in the amount of data that must be processed by the service.

A scalable system can handle such peaks and spikes in demand without any degradation in service quality,
as experienced by customers. This is very important from a business perspective, as it has direct impact on
how customers perceive the reliability and trustworthiness of the service.

Scaling Approaches
There are two different approaches for scaling services:

Scaling out (also known as scaling horizontally)

Scaling up (also known as scaling vertically)

Both approaches can be used separately or together.

Scaling Out
To scale out, you add additional nodes to an existing system. With the increased computing
power and decreased cost of commodity hardware (hardware that is easily available to
consumers), adding more processing and storage capacity to a distributed application is a very simple
undertaking. Modern distributed applications often run on large clusters of low-cost computers that are
interconnected into a single cluster. Such applications need to be aware of the fact that they run in a
clustered environment.

Scaling Up
To scale up, you add additional resources (processing or storage) to a single node of the system. This is
often the easiest option to apply, but it has inherent limitations, such as the maximal memory capacity or
the number of network cards that can be installed in a single computer. At this point, there is no choice
but to replace the node with a better and more capable node. Scaling up might also require the
application to scale up along with the hardware; for instance, the application must be able to take
advantage of multiple cores in a single CPU.

The Components of a Scaled Out Architecture


A scaled-out, distributed application is not a
panacea. It requires careful planning and
integration of multiple components. Some of
these required elements are:

Load Balancer. The load balancer has the responsibility of routing requests to individual
nodes of the application. In particular, this routing is done without knowledge or
involvement of the clients and can be based on the number of requests that are
currently being processed by each node, the node's geographical proximity to the client,
and many other factors.

Distributed Cache. Distributed Cache is used to maintain an in-memory, low-latency, read-write store
that can be accessed by each node. By virtue of its distributed nature, this cache is shared among all
the nodes in the system; that is, changes made by one node are immediately visible to all other
nodes. The distributed cache is a good choice for enabling fast access to commonly-accessed
information (such as user sessions) without incurring the high I/O latency of accessing a relational
database.

Shared Configuration. Shared Configuration is used to store and administer configuration settings in
a single location. This location can then be used to automatically configure server software, such as
Internet Information Services (IIS), on multiple nodes.

Centralized SSL Certificate Support. Centralized SSL Certificate Support, a new feature in IIS 8, allows
the storage of SSL certificates in a single location. Multiple IIS-hosting nodes can then use this
location for gaining access to the certificates. This makes the administration and management of
secure distributed applications much easier than was previously possible.

Scale Out Architectural Challenges


Scaling out a distributed application presents
many issues that must be dealt with. Some
examples of such issues are:
Understanding which parts of the system can
be run in parallel. One of the most important
aspects of a distributed architecture is the
ability to recognize which parts of the system
can be run in parallel and which are limited to
sequential execution. Scaling out will bring
the most benefit when it is applied to the
former rather than the latter.

Identifying the need to scale out. A distributed system needs to recognize when it is experiencing
high load. It must then decide whether it needs to provision additional servers or handle the
increasing load in a different manner. Similarly, a distributed system must determine when user load
is light and resources can be given up. Finally, the system needs to report to and alert administrators
when abnormal load is detected so that they can take the appropriate business steps.

Performing an automated scale out. Once a system has identified the need to scale out, it must then
be able to provision additional machines and nodes in a fully automatic manner and add them to the
pool of resources available to the application. Failure to do so may mean that increasing load
demands cannot be met in time, which can have adverse effects on the business. This problem is
often simplified considerably when running on a cloud platform, as the platform will usually contain
APIs and services that are specifically designed for such purposes.

Dealing with failure. A distributed system, by its very nature, runs on multiple hardware elements. This
means that the probability of a hardware failure increases proportionally with the number of physical
machines that are used. Hence, the possibility of a hardware failure, and its associated effect on the
application, becomes a very real possibility that must be expected and planned for. In such cases, the
system needs to take steps to isolate the problem and restore full service as soon as possible. Here,
too, cloud platforms are useful, as hardware failures are detected and new virtual machines
provisioned automatically.

Lesson 2
Load Balancing
Load balancing is a technique that enables applications to scale and be more resilient to failure. For
large-scale, distributed applications, this is an extremely important issue.

In this lesson, you will learn about the different ways in which you can perform load balancing and how to
load-balance your Windows Azure application.

Lesson Objectives
After completing this lesson, students will be able to:

Describe the tools and infrastructure required for load-balancing.

Describe how to perform load-balancing for Windows Azure applications.

Scale out a web application in Windows Azure.

Load Balancing Tools and Frameworks


You can implement load balancing tools in either
software or hardware. Hardware load balancers
are dedicated appliances, much like routers, that
are part of a data center's core infrastructure.
These are often dedicated and expensive devices.
Software load balancing, on the other hand, can
be done by the operating system itself or by
specific applications. Often, such tools can also
serve as caches, such as for static HTML content, so
that frequently-accessed content can be served
directly to clients by the load balancer rather than
the back-end server.
The Windows Server range of operating systems supports load balancing through the use of the Network
Load Balancing (NLB) feature. This is done by combining two or more computers with the same server
software, such as IIS, into a single cluster, which can then be accessed using its own IP address, but still
maintains the IP addresses of the individual machines. The amount of traffic that each individual
computer can handle is known as load weight and is configurable. Additional machines can be added or
removed from the cluster dynamically when needed.

DNS Round Robin is an additional method of load balancing that does not require dedicated software or
hardware. When clients make calls to some domain such as www.blueyonder.com, the domain name is
resolved into a numerical IP address through the use of a Domain Name System (DNS) server. When using
DNS Round Robin, the DNS server resolves the domain name into a different IP address for each
individual request. The major disadvantage of this technique is that clients are then aware of the existence
of multiple machines.

You can also implement load balancing by using Web Farm Framework (WFF) for IIS. The WFF provides
load balancing, scaling, management, and provisioning solutions for IIS-based Web farms. The WFF also
supports application-related solutions, such as connection stickiness, and central output caching.

For additional information about the WFF, refer to the IIS documentation.

Web Farm Framework http://go.microsoft.com/fwlink/?LinkID=314117



Load Balancing with Windows Azure


Windows Azure performs load balancing through
the use of endpoints. Endpoints are assigned a
protocol, which can be either TCP or User
Datagram Protocol (UDP), and a port. If you have
two or more instances of a web or worker role,
Windows Azure will load balance any incoming
requests for you in a round-robin fashion. The
first request will go to the first instance, the
second request to the second instance, the third
request to the first instance again, and so on.
Virtual machines are load-balanced by grouping
several machines in a cloud service and adding an
endpoint to the service. This endpoint will also round-robin requests to each virtual machine in turn.
Windows Azure further has the capability to run health probes for determining whether an instance is
available and capable of performing the service it is supposed to, such as whether the service is able to
connect to its database or any other external dependencies. If these probes are unsuccessful,
Windows Azure will remove the instance from the rotation and no longer send it any requests.
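The round-robin behavior described above can be illustrated with a minimal sketch. The instance names are hypothetical, and Azure's real load balancer is, of course, platform infrastructure rather than application code:

```csharp
public class RoundRobinSketch
{
    private readonly string[] _instances;
    private int _next;

    public RoundRobinSketch(params string[] instances)
    {
        _instances = instances;
    }

    // Returns the instance that receives the next request, cycling
    // through the instances in order: 0, 1, ..., n-1, 0, 1, ...
    public string NextInstance()
    {
        string instance = _instances[_next];
        _next = (_next + 1) % _instances.Length;
        return instance;
    }
}
```

With two instances, three consecutive requests are routed to instance 0, instance 1, and instance 0 again, which is exactly the pattern the demonstration below makes visible in the browser.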

Note that in a load-balanced scenario, each request may arrive at a different instance. This means that any
common data, such as session information in an ASP.NET application, needs to be accessible to all
instances. You can use a database or a distributed cache for this purpose.
A further approach for load balancing is through the use of message queues: either Windows Azure
Service Bus Queues or Windows Azure Queue Storage. In this scenario, you bring up multiple worker roles
that read from a single queue. Because each instance reads a single message, the processing load is
distributed across those workers. For further details on queues, see Modules 7, "Windows Azure Service
Bus" and 9, "Windows Azure Storage".

Demonstration: Scaling out Web Applications in Windows Azure


In this demonstration, you will scale a Web application to multiple instances in Windows Azure.

Demonstration Steps
1. Open D:\AllFiles\Mod12\DemoFiles\ScalingWebApplications\ScalingWebApplications.sln in Visual
Studio.

2. Open the Windows Azure Management Portal (http://manage.windowsazure.com).


3. Create a new Windows Azure Cloud Service named BlueYonderDemo12YourInitials (YourInitials
contains your initials).

4. Use Visual Studio 2012 to publish the WebApplication.Azure project to the cloud service you
created.

5. Open the Web Role properties and note that the Web application is deployed to three instances.

6. Open the HomeController class and observe the Index method. The method prints the name of the
role instance, which contains the instance number.

7. After the publish process completes, browse to


http://BlueYonderDemo12YourInitials.cloudapp.net (YourInitials contains your initials). Refresh
the page several times and verify the instance number at the end of the title changes between each
refresh. You should see all three instance numbers: 0, 1, and 2.

8. Return to the management portal and delete the production deployment you created in this demo.

Lesson 3
Scaling On-Premises Services with Distributed Cache
Distributed cache is a basic component for implementing high-scale distributed applications. Application
servers can store a large set of information in a collection of servers forming a cache cluster. The
information is stored in-memory across the cluster to provide low latency and high throughput.

This lesson describes Windows Server AppFabric Cache and the API for executing data access operations.

Lesson Objectives
After completing this lesson, students will be able to:

Describe the motivation for distributed caching.

Describe the architecture of Windows Server AppFabric Cache.

Execute basic data access operation on named caches.


Execute tag-based search operations.

The Need for Distributed Cache Mechanisms


Traditional caching improves performance by
storing data close to the application. Instead of
executing a long and expensive data access
transaction to a database, a simple and fast
memory lookup can fetch the data.
Storing data in an in-memory cache is simple and
efficient, yet it assumes that the application and
the in-memory cache are located on the same
machine.
Large scale applications run on multiple servers.
Assumptions concerning the identity of the
execution machine simply break in such scenarios.
Load balancers distribute requests across execution servers, so clients do not know which servers will
handle their request. The first request of a business transaction might reach server A, but the second
request of the same transaction might be handled by server B.
Data stored in-memory on one server is unavailable to other machines, so storing data in-memory will be
useless when requests span multiple servers.

In high-scale scenarios, you have to store data in an independent data store that is accessible to all
execution machines. One option is to store the data in the database, but each data access will suffer from
long delays. Another solution is to create a dedicated server for storing data in-memory for all other
execution machines. However, a single server is limited in its memory capacity and is unreliable by design.
Highly scalable applications often store much more data in memory than a single machine can handle, and
cannot afford a single point of failure.

The solution is a distributed cache that spans multiple servers. Data is stored in-memory on multiple
machines, so the cache can grow in size and in transactional capacity. However, clients work against a
single logical cache without knowing where data is actually stored.

Cache can be useful to store temporary data. All data items in the cache are automatically removed
according to expiry periods and cleanup policy. The developer is free from handling garbage collection of
unnecessary data stored in the cache. Applications can store intermediate data in the cache, use it in their
calculations, and then forget about it. It will be automatically cleaned.

A distributed cache simplifies the execution of parallel tasks across servers in high-performance computing or
map-reduce applications. A complex job can be divided into simpler tasks, distributed across servers, and
executed in parallel. Intermediate results produced by such tasks can be stored in the cache before being
used by other tasks in the execution flow.
If data reliability is required, you can use replication and store the same data on multiple cache servers. If
one server fails, the data will still be available.

With distributed cache, you can improve the performance of high-scale applications that span multiple
servers. Distributed cache is as simple to use as traditional in-memory cache but can grow in size
according to demand and can serve multiple applications simultaneously.

Applications such as ASP.NET web sites deployed on a web farm with multiple servers can store their
session state in a distributed cache and gain fast data access across the web farm as well as automatic
cleanup.

Windows Server AppFabric Cache Features


Windows Server AppFabric Cache is a distributed
cache infrastructure. Using Windows Server
AppFabric Cache, you can build a cache cluster
containing multiple named caches for applications
to use. Windows Server AppFabric Cache supports
high availability, which means that a copy of each
cached object is maintained on a separate
secondary cache host to keep the object available
when a primary cache host fails.

Windows Server AppFabric Cache provides the following basic offerings:

Caching services

Cache client and API

Administration tools

Named Caches
A named cache is the basic caching unit that applications use to store their data.

You can create one or more named caches for each of your applications.

Named caches are independent of each other, which lets you optimize the policies of each cache for your
applications. Each named cache spans all cache hosts in the cluster, meaning that objects from the same
named cache can be stored on different servers; this way, resources are evenly distributed among the
servers.
You do not have to create named caches because a default cache is provisioned for you on startup.

Regions
Regions are an additional data container, a subgroup of cached items, that your applications can use to
store and retrieve cached objects by using descriptive strings called tags, which you can associate with each
cached object. The ability to search all cached objects in the region dictates that objects in a region are
limited to a single cache host.

Named caches are collections of key-value pairs stored in-memory across the cluster, whereas regions are
key-value pairs constrained to a single server but with tag-based search capabilities. When choosing between
storing data in a region or in a named cache, you have to choose between functionality and scalability.
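As a sketch, tag-based storage and search with the AppFabric client API looks roughly like the following. The cache name, region name, keys, and the flightObject placeholder are hypothetical, and the fragment requires the Microsoft.ApplicationServer.Caching client assemblies:

```csharp
// Create a region and put an object into it with a descriptive tag.
DataCache cache = new DataCacheFactory().GetCache("CatalogCache");
cache.CreateRegion("FlightsRegion");
cache.Put("flight42", flightObject,
    new[] { new DataCacheTag("international") }, "FlightsRegion");

// Later, retrieve every cached object in the region that carries the tag.
foreach (var pair in cache.GetObjectsByTag(
    new DataCacheTag("international"), "FlightsRegion"))
{
    // pair.Key is the cache key; pair.Value is the cached object.
}
```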

High Availability
When creating a named cache or region you can configure it to run with high availability enabled.

All cached objects will be copied to a secondary cache host on creation. The secondary copy of the cached
object is acknowledged on all changes to maintain consistency. If the primary cache host fails, on the next
request for a cached object the cache cluster re-routes the request to the cache host that maintains the
secondary copy of the object. The secondary copy of the object is elevated to become the new primary
object, and a new secondary copy is created on another healthy cache host. This is why the cache cluster
must contain at least three cache hosts for high availability to function.

Caching API
When using a distributed cache, objects have to be communicated between the application server and the
cache cluster. The Windows Server AppFabric Cache API abstracts the communication with the cluster and
simplifies the interaction with the cache. You can use a large number of methods and interfaces provided
by the Windows Server AppFabric API to configure the cache and access the data it contains.

Cache Notifications
With Windows Server AppFabric cache notifications, you can receive asynchronous notifications when a
variety of cache-related events occur on the cache cluster. You can use cache notifications for automatic
invalidation of locally cached objects. To receive asynchronous cache notifications, enable notifications at
the cache level and register a cache notification callback.
Notifications can be grouped as follows:

Region Operations:

o CreateRegion: When a region is created in the cache.

o ClearRegion: When a region is cleared in the cache.


o RemoveRegion: When a region is removed from the cache.

Item Operations:

o AddItem: When an item is added to the cache.

o ReplaceItem: When an item is replaced in the cache.

o RemoveItem: When an item is removed from the cache.

To simplify notification handling, it is possible to narrow the scope of cache notifications from the cache
level down to the region level and the item level.

Administration Tools
Windows Server AppFabric caching administration is provided on top of Windows PowerShell.
There are more than 130 standard command-line tools that you can use to manage your distributed
cache environment. Scripting on top of Windows PowerShell is simple and powerful, but if required, you
can use the Windows Server AppFabric Caching API to build your own custom administration tool.

To learn more about Windows Server AppFabric Cache, refer to the following MSDN
documentation:
http://go.microsoft.com/fwlink/?LinkID=298873&clcid=0x534

Windows Server AppFabric Cache Components


The architecture of Windows Server AppFabric
Cache contains the following components:

Cache host

Cache cluster

PowerShell cache administration tool

Cache cluster configuration

Cache client

Local cache

Cache Host
The cache host is a Windows service that runs on one or more servers and hosts a WCF endpoint that the
cache clients use to access cached objects. The cache host is the process that stores the cached objects
in-memory.

Cache Cluster
The cache cluster is a collection of cache host instances running on cache servers. The cluster is managed
by a cluster management role, which is responsible for keeping the cache cluster running, monitoring the
availability of all cache hosts in the cache cluster, and adding new cache hosts to the cluster. The cluster
management role can run on a designated lead host or on a SQL Server instance, which stores the cluster
configuration.

Cluster Configuration
You can store the cluster configuration in a SQL Server database, in an XML file on a shared network location, or in a custom store. The cluster configuration contains the following information:

Cluster settings: Configuration settings concerning the cache cluster such as the cluster size.

Cache settings: Configuration settings concerning each of the cache instances, such as name, type, and timeouts.

Host settings: Configuration settings concerning each of the cache host services such as server name,
port numbers and watermarks.

Cache Client
Any application that uses the Windows Server AppFabric Cache is considered a cache client.

To access data in the cache, applications use the Windows Server AppFabric Cache Client API provided in
the AppFabric caching assemblies and specify the appropriate configuration. All the communication and
serialization details are abstracted from the client to keep the data access code as simple as possible.

Local Cache
The distributed cache stores serialized information on a set of dedicated servers referred to as the cache cluster, meaning that each data access to the cache involves communication and serialization. To speed up the data access process, an additional read-through caching layer can be introduced in the application process. This is known as the local cache.

When the local cache is enabled, the application stores references to objects retrieved from the cache cluster in a dedicated in-memory collection. This keeps the objects active in the memory space of the client application. When the application requests an object, the cache client first checks whether the object
resides in the local cache. Only if the object does not exist does the cache client establish communication with the cache server to retrieve the object and store it in the local cache.

The lifetime of objects in the local cache is controlled by an invalidation policy based on the number of objects in the local cache, timeouts, and invalidation notifications.
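For illustration, the local cache can also be enabled programmatically through the DataCacheFactoryConfiguration class (a sketch; the object count, timeout, and cache name are arbitrary placeholder values):

```csharp
using System;
using Microsoft.ApplicationServer.Caching;

// Sketch: enable a local cache that holds up to 10,000 objects and
// invalidates entries based on cache notifications, falling back to
// a five-minute timeout.
var configuration = new DataCacheFactoryConfiguration();
configuration.LocalCacheProperties = new DataCacheLocalCacheProperties(
    10000,                     // maximum number of locally cached objects
    TimeSpan.FromMinutes(5),   // default timeout
    DataCacheLocalCacheInvalidationPolicy.NotificationBased);

var factory = new DataCacheFactory(configuration);
DataCache cache = factory.GetCache("myNamedCache");
```

The same settings can also be supplied in the client configuration file instead of in code.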

Cache API for Data Access


Windows Server AppFabric Cache provides a simple API to access data on the server and configure the cluster.

You can use the following classes to work with Windows Server AppFabric Cache programmatically:

DataCacheFactoryConfiguration. Configures a cache client programmatically by providing information about the location of the cache hosts and cache properties such as local caching.

DataCacheFactory. Creates DataCache objects that are mapped to a named cache.

DataCache. The basic class to access data on the cache.

DataCacheItem. Wrapper around the cached object, which contains information such as key, tags, version, and the name of the cache and region it is stored in.
DataCacheTag. String identifier for searching cached objects in regions.

DataCacheItemVersion. Numeric representation of the version of cached objects.

DataCacheLockHandle. Key for locking and unlocking cached objects.


DataCacheException. Basic exception for cache related errors.

To access data on the cache, you have to begin with creating a DataCache object, and then you can use it
to insert, retrieve, or delete objects in the cache. You can use the following DataCache methods to
execute basic data access operations on a named cache.

Add. Adds a new object to the cache. If the item already exists, an exception is thrown.
Put. Adds or replaces an object in the cache.

Get. Returns an object from the cache.

Remove. Removes an object from the cache.


The following code example shows how to insert, retrieve, and delete an object from the cache.

Basic Cache Client Data Access Operations


var myCacheFactory = new DataCacheFactory();
DataCache myCacheClient = myCacheFactory.GetCache("myNamedCache");

myCacheClient.Put("key", new MyObject());


MyObject obj = (MyObject)myCacheClient.Get("key");
myCacheClient.Remove("key");

You can use optimistic concurrency as well as pessimistic concurrency when updating cached objects.
In an optimistic concurrency model, updates to cached objects do not involve locking.

To ensure consistency, the cache client sends the version information together with the updated object.

The server executes the update only if the version it receives matches the current version of the object.
Finally, the server updates the cached object version number.
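The version-based flow described above can be sketched as follows (assuming the MyObject type and named cache from the earlier examples):

```csharp
using Microsoft.ApplicationServer.Caching;

DataCache cache = new DataCacheFactory().GetCache("myNamedCache");

// Read the cached object together with its version information.
DataCacheItem item = cache.GetCacheItem("myKey");
MyObject updated = (MyObject)item.Value;
// ... modify 'updated' ...

try
{
    // This Put overload succeeds only if the version on the server
    // still matches the version that was read.
    cache.Put("myKey", updated, item.Version);
}
catch (DataCacheException ex)
{
    if (ex.ErrorCode == DataCacheErrorCode.CacheItemVersionMismatch)
    {
        // Another client updated the object first; re-read and retry.
    }
}
```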

In a pessimistic concurrency model, the client explicitly locks objects before running updates.

Other operations that request locks are rejected until the locks are released.
When locking a cached object, the cache returns a lock handle that is later used to release the object.

You can use the following DataCache methods to lock and release objects.

GetAndLock. Returns and locks the cached object. The method returns a lock handle (as an output parameter) that is later used to release the object.

PutAndUnlock. Updates and releases the locked object by using a lock handle.
Unlock. Explicitly unlocks a cached object.
The following code shows how to lock and release a cached object.

Using Pessimistic Concurrency


DataCacheLockHandle handle;
DataCacheFactory myCacheFactory = new DataCacheFactory();
DataCache myCacheClient = myCacheFactory.GetCache("myNamedCache");
MyObject temp = myCacheClient.GetAndLock("myKey", TimeSpan.FromSeconds(1), out handle,
true) as MyObject;
// Do something on temp
//
myCacheClient.PutAndUnlock("myKey", temp, handle);

Cache API for Regions and Tags


Windows Server AppFabric Cache is a collection of
key-value pairs, but when using regions you can
organize and retrieve cached objects by using
tags. With tags, you can add one or more string descriptions to your cached object, and later
search for groups of objects based on matching
tags.
You can use the following DataCache methods to
access cached objects using tags:

GetObjectsByTag. Returns objects in a


region that contains the specified tag.

GetObjectsByAnyTag. Returns objects in a region that have tags matching any of the tags provided.

GetObjectsByAllTags. Returns objects in a region that have tags matching all of the tags provided.

GetObjectsInRegion. Returns all objects in a region.

GetCacheItem. Returns a DataCacheItem object. One of this method's overloads includes the associated tags.

Add. Adds an object to the cache. One of this method's overloads supports associating tags.
Put. Replaces the tags associated with a cached object.

Remove. Deletes the cached object and any associated tags.

The following code example shows how to attach tags to a cached object, and how to retrieve objects
according to tag values.

Basic tags operations


DataCacheFactory myCacheFactory = new DataCacheFactory();
var tags = new List<DataCacheTag>
{
new DataCacheTag("Europe"),
new DataCacheTag("Asia")
};

DataCache myCacheClient = myCacheFactory.GetCache("myNamedCache");

// Regions must be created explicitly before they are used.
myCacheClient.CreateRegion("myRegion");

myCacheClient.Put("countryKey", new Country("Russia"), tags, "myRegion");

var dct = new DataCacheTag("Asia");


IEnumerable<KeyValuePair<string, object>> objects = myCacheClient.GetObjectsByTag(dct,
"myRegion");
Lesson 4
Windows Azure Caching
Windows Azure applications store state information in independent data stores. You can use a distributed
cache to store and share information between role instances while maintaining low data access latencies.

Lesson Objectives
After completing this lesson, students will be able to:

Describe the in-role and shared caching options in Windows Azure.


Create a Windows Azure cache worker role.

Use Windows Azure cache from code.

Create a shared cache service.


Implement a cache client to an in-role cache as well as to a shared cache service.

Caching options in Windows Azure


Windows Azure applications are a classic scenario
for leveraging distributed caching because they
are designed for high scale, multi-instance
execution.

In-Role Caching
Distributed caching can improve the performance
and scalability of Windows Azure applications by
caching state and other information originating in
slow data stores such as Windows Azure SQL
Database and Windows Azure Storage. This
caching enables role instances to become
stateless, which simplifies execution of multiple
instances.
Provisioning a distributed cache is simple in Windows Azure. You can host caching nodes within your
Windows Azure roles or define a worker role that is dedicated to Caching.

In contrast to Windows Server AppFabric Cache, all caching administration in Windows Azure is executed automatically for you, so there is no need to use administration tools such as Windows PowerShell. The API for data access is similar to Windows Server AppFabric Cache, so you can use your existing knowledge when working with a distributed cache in the cloud.

Shared Caching
Apart from dedicated caching running on your roles, Windows Azure provides a separate service called
Windows Azure Shared Caching. With shared caching, you can register a cache through the Windows
Azure Management Portal. Similar to other Windows Azure services, shared caching is hosted on a shared
infrastructure in a multitenant environment. After provisioning your cache, you can access your cache by
using a Service URL and Authentication token.
Windows Azure Shared Caching supports the same programming model as in-role caching, which is hosted on your application or dedicated roles, but lacks some of the features that in-role caching offers, such as high availability, regions and tagging, and cache notifications.
To learn more about Caching in Windows Azure, refer to the following MSDN
documentation:
http://go.microsoft.com/fwlink/?LinkID=298875&clcid=0x536

Creating a Windows Azure Cache Worker Role


Worker and Web roles are deployed to a virtual
machine that you can use to provision a cache
cluster.

In contrast to Windows Server AppFabric Cache, in which the cache cluster runs on dedicated servers only, in Windows Azure the cluster can run on the same machines that execute application tasks.

A worker role can act as a dedicated host for caching, or apply a colocated strategy in which the role shares a percentage of its physical resources for caching and also acts as a traditional application role. Web roles can share their resources with a cache node but cannot act as a dedicated cache.
Cache provisioning in Windows Azure is done by configuration. You can configure your roles to host a distributed cache by using Visual Studio. The Windows Azure Tools for Visual Studio, installed together with the Windows Azure SDK, provide a simple user interface for configuring Windows Azure roles for caching.

To enable caching, open the Caching tab in the role properties dialog, select the Enable Caching check box, and apply the appropriate settings. If the cache is colocated in an application role, set the percentage of physical resources that should be reserved for the cache.

Finally, you have to supply a valid connection string to a storage account that will be used to persist the
cluster configuration in Windows Azure Storage.

You can create one or more named caches and set their properties. To enable High Availability on a named cache, the role instance count must be greater than 1.

The following screenshot shows a co-located cache configuration with High Availability and
Notifications enabled.
FIGURE 12.1: THE CACHING TAB ON THE ROLE PROPERTIES DIALOG


You can create a dedicated role for caching by choosing the Dedicated Role radio button. In web roles,
this option is disabled.

Using Windows Azure Cache from Code


The API for using a distributed cache in Windows
Azure is identical to the on-premises version
provided by Windows Server AppFabric. The basic
class for interacting with the cache is
Microsoft.ApplicationServer.Caching.DataCache. To create an instance of a DataCache, you can
use a DataCacheFactory which reads the settings
of the cache client from the application
configuration file.
To reference the Windows Azure Caching API
assemblies, you can download the Windows
Azure Caching NuGet package. After installing
the NuGet package, you can find the Cache Client configuration in your application configuration file in
which you have to provide the name of the hosting role and the name of your named caches.

The following configuration shows how to configure a Windows Azure Cache client.

Windows Azure Cache Client Configuration


<dataCacheClients>
<dataCacheClient name="MyNamedCache">
<autoDiscover isEnabled="true" identifier="MyWorkerRole" />
<localCache isEnabled="true" sync="TimeoutBased" objectCount="100000"
ttlValue="300" />
</dataCacheClient>
</dataCacheClients>

After the client cache configuration is in place, it is possible to create an instance of a DataCache and
interact with the cache.
The following code shows how to call basic cache operations on Windows Azure in-role Cache using the
DataCache class.

Basic Cache Operations


DataCacheFactory factory = new DataCacheFactory();
DataCache myCacheClient = factory.GetCache("MyNamedCache");
myCacheClient.Add("key", new MyObject("original"));
myCacheClient.Put("key", new MyObject("updated"));
MyObject fromCache = myCacheClient.Get("key") as MyObject;
myCacheClient.Remove("key");

Because Microsoft.ApplicationServer.Caching.DataCache is used to interact with both Windows Azure Cache and the on-premises Windows Server AppFabric Cache, the same API applies as described in Lesson 3, "Scaling On-Premises Services with Distributed Cache", Topic 4, "Cache API for Data Access" and Topic 5, "Cache API for Regions and Tags".
Demonstration: Using Windows Azure Caching


In this demonstration, you will see how to create a Windows Azure Cache Worker Role in your cloud
project, use the cache API in a Web application to store and retrieve information from the cache, and test
the Web application and the cache with the Windows Azure Emulator.

Demonstration Steps
1. Open the
D:\Allfiles\Mod12\DemoFiles\WindowsAzureCaching\begin\WindowsAzureCaching\Windows
AzureCaching.sln solution file.

2. Open the LocationsController and examine the implementation of the Get method. The method currently retrieves the Location entity from the database, but you will change this code to try to retrieve the entity from a distributed cache before attempting to call the database.
3. Add a Cache Worker Role to the MvcApplication1.Azure cloud project, and build the solution.
Name the worker role CacheWorkerRole.

4. Add the Windows Azure Caching NuGet package to the MvcApplication1 web application project.

5. Open the web application's Web.config file, and in the <dataCacheClients> section, replace the
[cache cluster role name] string with CacheWorkerRole (the name of the cache worker role you
created).
6. In the MvcApplication1 project, open the LocationsController class, and locate the Get method. In
the method locate the comment // TODO: Place cache initialization here, and after it add the code to
create a DataCache object for the default cache. Use the DataCacheFactory.GetDefaultCache
method to get the data cache object.

7. In the Get method, locate the comment // TODO: Find the location entity in the cache, and after it
add the code to check whether the cache already contains the location. Use the DataCache.Get
method to get the cached item, and store the result in the location variable. For the key, use a string
of the format location_{0}, where {0} is the value of the id parameter.

8. In the Get method, locate the comment // TODO: Add the location to the cache. Add code between it
and the end of the using statement, to store the location variable in the cache. Use the
DataCache.Put method with the cache key as you created it before for the DataCache.Get method.
Putting the item in the cache will make it available the next time the user requests the same location
entity.

9. Place a breakpoint in the beginning of the Get method, set the MvcApplication1.Azure project as
the startup project, and run the project in debug.

10. After the browser opens, browse to the locations controller to get the location with ID 1, and debug
the code to verify that in the first request for the entity, it is retrieved from the database. Use the URL
suffix api/locations/1.
11. Return to the browser and refresh the page. In Visual Studio 2012, debug the code and verify that this
time, the entity is retrieved from the cache.
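The cache-aside logic that steps 6 through 8 build up can be sketched as follows (the Location type and the database lookup helper are placeholders standing in for the demo project's actual code):

```csharp
using Microsoft.ApplicationServer.Caching;

public Location Get(int id)
{
    DataCache cache = new DataCacheFactory().GetDefaultCache();
    string cacheKey = string.Format("location_{0}", id);

    // Check the distributed cache before touching the database.
    Location location = cache.Get(cacheKey) as Location;
    if (location == null)
    {
        // Cache miss: query the database and cache the result so the
        // next request for the same entity is served from the cache.
        location = QueryLocationFromDatabase(id);
        cache.Put(cacheKey, location);
    }
    return location;
}
```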
Creating a Windows Azure Shared Cache


You can use distributed caching provided as a
service by Windows Azure from any application.

To provision a shared cache, all you have to do is register for the service in the Management Portal. The cache is provided in different sizes, starting from 128 megabytes (MB) up to 4 gigabytes (GB), and with a variety of network quotas.

Windows Azure Shared Caching supports only a subset of the caching features available through in-role caching and the on-premises distributed cache. For example, notifications, high availability, and regions are not supported by Windows Azure Shared Caching.

To learn more about the difference between in-role caching and shared caching in Windows
Azure, refer to the following MSDN documentation:
http://go.microsoft.com/fwlink/?LinkID=298877&clcid=0x538

To register for a shared cache service, click Service Bus, Access Control, & Caching in the Management Portal. In the left pane, click Cache, and create a new namespace. Verify that the Cache check box is selected under Available Services.

Note: At the time of writing, Windows Azure Shared cache can only be provisioned in the
previous management portal (Silverlight).

The following screenshot shows how to create a shared cache namespace in the Windows Azure Management Portal.

FIGURE 12.2: CREATE A SHARED CACHE NAMESPACE


Once the namespace is active, you can start using the cache.
Using Windows Azure Shared Cache from Code


The API for using a distributed cache in Windows
Azure is similar to the on-premises version
provided by Windows Server AppFabric. The basic
class for interacting with the cache is
Microsoft.ApplicationServer.Caching.DataCache. To create an instance of a DataCache, you can
use a DataCacheFactory, which reads the settings
of the cache client from the application
configuration file.

Because some functionality of the in-role and on-premises cache is not supported, an exception is thrown if you try to call unsupported functionality.

The first step for creating a cache client is to provide the necessary configuration.

You can get the configuration required to access the shared cache by selecting the namespace in the
management portal and clicking the View Client Configuration button. The client configuration
provided by the management portal contains the configuration required to create a DataCacheFactory
and a DataCache and the configuration for an ASP.NET session provider implemented on top of the
cache.
The following screenshot shows the client configuration provided by the management portal.

FIGURE 12.3: SHARED CACHE CLIENT CONFIGURATION


To reference the Windows Azure Caching API assemblies, you can download the Windows Azure Shared
Caching NuGet package. After installing the NuGet package you will find the Cache Client configuration
in your application configuration file in which you have to provide the details of your shared cache
subscription. You can copy the dataCacheClients configuration section from the configuration you
received in the management portal and paste it in the application configuration file.

The following configuration shows how to configure a client of Windows Azure Shared cache.

Windows Azure Shared Cache Client Configuration


<dataCacheClients>
<dataCacheClient name="default">
<hosts>
<host name="sharedcache.cache.windows.net" cachePort="22233" />
</hosts>
<securityProperties mode="Message">
<messageSecurity

authorizationInfo="YWNzOmh0dHBzOi8vc2hhcmVkY2FjaGUtY2FjaGUuYWNjZXNzY29udHJvbC53aW5kb3dzLm
5ldC9XUkFQdjAuOS8mb3duZXImVy9HRjV4R3RsSnRqd2tGNncxUU1RRng0SjJwUzExN2ZUbkEyVHUvaGxtbz0maHR
0cDovL3NoYXJlZGNhY2hlLmNhY2hlLndpbmRvd3MubmV0">
</messageSecurity>
</securityProperties>
</dataCacheClient>
<dataCacheClient name="SslEndpoint">
<hosts>
<host name="sharedcache.cache.windows.net" cachePort="22243" />
</hosts>
<securityProperties mode="Message" sslEnabled="true">
<messageSecurity

authorizationInfo="YWNzOmh0dHBzOi8vc2hhcmVkY2FjaGUtY2FjaGUuYWNjZXNzY29udHJvbC53aW5kb3dzLm
5ldC9XUkFQdjAuOS8mb3duZXImVy9HRjV4R3RsSnRqd2tGNncxUU1RRng0SjJwUzExN2ZUbkEyVHUvaGxtbz0maHR
0cDovL3NoYXJlZGNhY2hlLmNhY2hlLndpbmRvd3MubmV0">
</messageSecurity>
</securityProperties>
</dataCacheClient>
</dataCacheClients>

Once the client cache configuration is in place, it is possible to create an instance of a DataCache and
interact with the cache.
The following code shows how to call basic cache operations on Windows Azure Shared Cache by using
the DataCache class.

Basic Shared Cache Operations


DataCacheFactory factory = new DataCacheFactory();
DataCache myCacheClient = factory.GetCache("MyNamedCache");
myCacheClient.Add("key", new MyObject("original"));
myCacheClient.Put("key", new MyObject("updated"));
MyObject fromCache = myCacheClient.Get("key") as MyObject;
myCacheClient.Remove("key");
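The ASP.NET session provider section mentioned earlier, which is also part of the client configuration generated by the management portal, resembles the following sketch (the provider name, cache name, and client name are placeholders that the portal fills in for your namespace):

```xml
<sessionState mode="Custom" customProvider="AppFabricCacheSessionStoreProvider">
  <providers>
    <!-- Stores ASP.NET session state in the shared cache instead of
         in-process memory, so any role instance can serve the session. -->
    <add name="AppFabricCacheSessionStoreProvider"
         type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache"
         cacheName="default"
         dataCacheClientName="default" />
  </providers>
</sessionState>
```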
Lesson 5
Scaling Globally
Scaling services so that they operate at their optimal level for users in different countries or continents can be a challenge. Cloud platforms such as Windows Azure provide several features that simplify the process of scaling.

In this lesson, you will learn about the issues that apply to services that need to scale globally, and how Windows Azure can help.

Lesson Objectives
After completing this lesson, students will be able to:

Describe how to load balance resources by using Content Delivery Networks (CDNs).
Describe the Windows Azure options for load balancing applications across data centers.

Load Balancing Resources with Content Delivery Networks


By making your application available over the
Internet either with or without the use of a cloud
platform such as Windows Azure, you can enable
people all over the world to access it. Your
application may become popular in some part of
the world that is geographically distant from
where that service is hosted. The distance,
however, may lead to long round-trip times and
have a negative effect on user experience. It is
therefore useful to host the application as close to
the target users as possible.

The function of a CDN is to provide content to


users with minimum latency and maximum availability. To that end, CDNs maintain data centers in
selected locations around the world that are meant to reduce round-trip times for users. CDNs are ideally
suited to serving static content such as image, script, multimedia, and application files. Because many
applications contain a large number of such resources, the use of a CDN can be beneficial in two ways:

For users. Static content is delivered quickly and user experience is enhanced. Long round-trip times
are only required for accessing the actual dynamic portions of the application.
For developers. Traffic to the application's servers is reduced to only those requests that require dynamic content. Scalability is enhanced and costs are lowered, because it is the CDN that bears most of the traffic for the application.

The Windows Azure Content Delivery Network


The Windows Azure CDN is used to maintain copies of the contents of Windows Azure Storage blobs and
static outputs of compute instances in sites around the world. At the time of writing, the Windows Azure
CDN maintains data centers in the following locations:

United States

The Netherlands

Ireland
United Kingdom

Russia

France

Sweden

Austria

Switzerland

Hong Kong

Brazil

South Korea

Singapore

Australia

Taiwan

Japan
Qatar

The Windows Azure CDN may be enabled on a Windows Azure Storage account or hosted service. The first time a specific object is requested from the CDN, it is retrieved from its blob or service and cached at the CDN endpoint. It is subsequently served directly from the CDN. Note that only blobs that are publicly available are cached in the CDN. Similarly, a service must be hosted in the production environment, provide content on port 80 by using HTTP, and place the relevant content in the /cdn folder. Also note that query strings are ignored when caching blobs in the CDN, but are not ignored when caching service content.

Load Balancing Applications Across Data Centers


Data centers are designed to be redundant. There are multiple power supplies (including generators) and multiple network providers, and all steps are taken to ensure that when something goes wrong, there is a minimal amount of downtime before your application can return to normal operation. However, sometimes, due to unforeseen reasons, an entire data center can go offline, along with all of the applications that are hosted inside it. The solution to this problem is to host your application in multiple data centers
host your application in multiple data centers
around the world. With Windows Azure, you can
achieve this by creating multiple hosted services and deploying them in different data centers such as
North-Central US and West Europe. Because you now have multiple hosted services in operation, it is
possible to load-balance between them to increase performance of your application and enhance its user
experience.
Windows Azure Traffic Manager


The Windows Azure Traffic Manager enables you to load-balance traffic between your hosted services, regardless of whether they are deployed in one or multiple data centers. The load-balancing behavior is determined by one of the following policy types:

Performance. Traffic is routed to the service that is closest to the user's geographical location.

Round Robin. Traffic is routed to each service in turn. If your services are hosted in different data
centers, users may be routed to a service that is geographically distant from them.

Failover. Services are ordered in a list, with traffic being routed to the service at the top of the list. If
this service goes offline for any reason, traffic will be routed to the second service on the list, and so
on.

To use the Traffic Manager, you define one or more policies based on the above criteria. Multiple hosted
services are then assigned to each policy. When the Windows Azure load balancer is queried with the
appropriate policy, it replies with the address of the hosted service that matches the policy.
Lab: Scalability
Scenario
The final task that you need to perform in the Blue Yonder Airlines application is to reduce the ASP.NET
Web API back-end service database load by storing the static data that was fetched from the database in
a distributed cache. In this lab, you will add a distributed caching mechanism to the ASP.NET Web API
service.

Objectives
After completing this lab, you will be able to:

Use Windows Azure Caching with Web applications.

Lab Setup
Estimated time: 30 minutes
Virtual Machine: 20487B-SEA-DEV-A, 20487B-SEA-DEV-C

User name: Administrator, Admin

Password: Pa$$w0rd, Pa$$w0rd


For this lab, you will use the available virtual machine environment. Before you begin this lab, you must
complete the following steps:

1. On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.

2. In Hyper-V Manager, click MSL-TMG1, and in the Action pane, click Start.
3. In Hyper-V Manager, click 20487B-SEA-DEV-A, and in the Action pane, click Start.

4. In the Action pane, click Connect. Wait until the virtual machine starts.

5. Sign in using the following credentials:


User name: Administrator

Password: Pa$$w0rd

6. Return to Hyper-V Manager, click 20487B-SEA-DEV-C, and in the Action pane, click Start.
7. In the Action pane, click Connect. Wait until the virtual machine starts.

8. Sign in using the following credentials:


User name: Admin

Password: Pa$$w0rd

9. Verify that you received credentials to sign in to the Windows Azure portal from your training provider. These credentials and the Windows Azure account will be used throughout the labs of this course.
In this lab, you will install NuGet packages. It is possible that some NuGet packages have newer versions than those used when developing this course. If your code does not compile, and you identify the cause to be a breaking change in a NuGet package, you should uninstall the NuGet package and instead install the older version by using Visual Studio's Package Manager Console window:

1. In Visual Studio, on the Tools menu, point to Library Package Manager, and then click Package
Manager Console.

2. In Package Manager Console, enter the following command and then press Enter.

install-package PackageName -version PackageVersion -ProjectName ProjectName


(The project name is the name of the Visual Studio project that is written in the step where you were
instructed to add the NuGet package).

3. Wait until Package Manager Console finishes downloading and adding the package.
The following table details the compatible versions of the packages used in the lab:

Package name Package version

Microsoft.WindowsAzure.Caching 2.0.0.0

Exercise 1: Use Windows Azure Caching


Scenario
To reduce the number of requests sent to the database from the ASP.NET Web API back-end service, you
will add a distributed cache. Each time a user requests information regarding certain flights, the result of
querying the database is stored in the distributed cache. If another user requests the same information,
that information will be retrieved from the cache instead of from the database.
In this exercise, you will add a Windows Azure Cache worker role, and configure the flights API controller to first check the cache before querying the database.

The main tasks for this exercise are as follows:

1. Add a Caching Worker Role to the Cloud Project


2. Add the Windows Azure Caching NuGet Package to the ASP.NET Web API Project

3. Add Code to Cache the List of Searched Locations

4. Debug Using the Client Application

Task 1: Add a Caching Worker Role to the Cloud Project


1. In the 20487B-SEA-DEV-A virtual machine, run the setup.cmd script from
D:\AllFiles\Mod12\LabFiles\Setup. Provide the information according to the instructions.

Note: You may see a warning saying that the client model does not match the server model. This warning may appear if there is a newer version of the Windows Azure PowerShell cmdlets. If this message is followed by an error message, inform the instructor; otherwise, you can ignore the warning.

2. Open the D:\Allfiles\Mod12\LabFiles\begin\BlueYonder.Server\BlueYonder.Companion solution file.

3. Add a Cache Worker Role named BlueYonder.Companion.CacheWorkerRole to the


BlueYonder.Companion.Host.Azure project and then build the solution.

Task 2: Add the Windows Azure Caching NuGet Package to the ASP.NET Web API
Project
1. Add the Windows Azure Caching NuGet package to the BlueYonder.Companion.Controllers project.

2. Add the Windows Azure Caching NuGet package to the BlueYonder.Companion.Host project.

3. Open the Web.config file from the BlueYonder.Companion.Host project, and in the
<dataCacheClients> section, replace the [cache cluster role name] string with
BlueYonder.Companion.CacheWorkerRole.
Task 3: Add Code to Cache the List of Searched Locations


1. In the BlueYonder.Companion.Controllers project, open the FlightsController class, and locate the
Get method that receives three parameters. In the method locate the comment // TODO: Place cache
initialization here, and after it add the code to create a DataCache object for the default cache. Use
the DataCacheFactory.GetDefaultCache method to get the data cache object.

2. Still in the Get method, locate the comment // TODO: Place cache check here, and after it add the
code to check whether the cache already contains the requested list of flight schedules. Use the
DataCache.Get method to get the cached item, and store the result in the routesWithSchedules
variable. For the key, use a semicolon-separated string containing the source, destination and date
parameters of the method.

Note: The date parameter of the Get method is a nullable DateTime. You should make
sure the cache key you create will not get set to null if the date parameter is null.

3. Still in the Get method, locate the comment // TODO: Insert into cache here, and after it add the code
to store the routesWithSchedules variable in the cache by using the DataCache.Put method. Use
the cache key as you created it before for the DataCache.Get method call.
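The caching code described in steps 1-3 might look like the following sketch. This is not the official lab solution: the element type behind routesWithSchedules and the QueryRoutesFromDatabase helper are hypothetical placeholders for the controller logic that already exists in the project.

```csharp
// Sketch only - assumes the Microsoft.ApplicationServer.Caching assemblies
// added by the Windows Azure Caching NuGet package.

// TODO: Place cache initialization here
DataCache cache = new DataCacheFactory().GetDefaultCache();

// Build a semicolon-separated cache key from the method parameters.
// Guard against a null date so the key never contains a null segment
// (see the note about the nullable DateTime parameter).
string cacheKey = string.Format("{0};{1};{2}",
    source, destination, date.HasValue ? date.Value.ToString("d") : string.Empty);

// TODO: Place cache check here
var routesWithSchedules = cache.Get(cacheKey) as IEnumerable<Route>;
if (routesWithSchedules == null)
{
    // Cache miss: run the existing database query (hypothetical helper name).
    routesWithSchedules = QueryRoutesFromDatabase(source, destination, date);

    // TODO: Insert into cache here
    cache.Put(cacheKey, routesWithSchedules);
}
```

Note that the object stored with DataCache.Put must be serializable, because the cache cluster runs in a separate worker role process.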

Task 4: Debug Using the Client Application


1. Still in the Get method, place a breakpoint at the beginning of the method, set the
BlueYonder.Companion.Host.Azure project as the startup project, and run the project in debug mode.
2. Switch into the 20487B-SEA-DEV-C virtual machine.

3. Open the BlueYonder.Companion.Client solution file from the
D:\AllFiles\Mod12\LabFiles\begin\BlueYonder.Companion.Client folder.

4. Run the client without debugging, search for a destination that contains the letter N, and go back to
20487B-SEA-DEV-A virtual machine to see the code execution breaks. Debug the code in the Flights
controller to verify it retrieves data from the database.

Note: Normally, the Windows Azure Emulator is not accessible from other computers on
the network. For the purpose of testing this lab from a Windows 8 client, a routing module was
installed on the server's IIS, routing the incoming traffic to the emulator.

5. Close the client app, re-open it without debugging, search again for a destination that contains the
letter N, and verify that the Flights controller retrieves data from the cache.

Results: You will have added a caching worker role to the Cloud project, and implemented other
Windows Azure caching features.

Question: What are the options for adding a distributed cache in Windows Azure?

Module Review and Takeaways


In this module, you learned about scaling services and applications. You learned the benefits of using
load-balancers in on-premises and in Windows Azure environments, and you learned the importance of
using distributed caches instead of in-memory caches. Finally, you learned how to use distributed cache
solutions for on-premises and Windows Azure applications.

Appendix A
Designing and Extending WCF Services
Contents:
Module Overview 13-1

Lesson 1: Applying Design Principles to Service Contracts 13-2

Lesson 2: Handling Distributed Transactions 13-14


Lesson 3: Extending the WCF Pipeline 13-21

Lab: Designing and Extending WCF Services 13-37


Module Review and Takeaways 13-46

Module Overview
Windows Communication Foundation (WCF) is the framework for developing Simple Object Access
Protocol (SOAP)-based services. In Module 5, "Creating WCF Services," you learned how to create a WCF
service by creating a service contract, implementing it, and hosting it.
When you create services in WCF, there are additional design principles and techniques that you can
apply to your code to enhance the reliability and performance of your service. For example, you can
create asynchronous service operations to better utilize how WCF uses the managed Thread Pool. You can
also create services that can be a part of a distributed transaction that spans over several services and
databases. Or you can create your own custom error handlers that log any unhandled exception thrown in
your service code.
This module describes how to design service contracts with various patterns, such as one-way operations
and asynchronous operations, and then explains how to implement those contracts in your service
implementation. You will also learn about distributed transactions, what they are and how you can create
a WCF service that supports distributed transactions. The last lesson in this module is about the WCF
pipeline, where you will learn how you can extend the message handling pipeline by creating your own
custom runtime components and extensible objects, and then apply them to services and operations.

Objectives
After completing this module, you will be able to:

Design and create services and clients to use different kinds of message patterns.

Configure a service to support distributed transactions.


Extend the WCF pipeline with runtime components, custom behaviors, and extensible objects.

Lesson 1
Applying Design Principles to Service Contracts
Message patterns describe how clients and services exchange data between each other. The most
common message pattern in WCF is the Request-Response pattern, where the client sends a request and
waits until a response returns from the service. However, WCF supports several other kinds of message
patterns that you can use in your service contract. For example, you can define a service contract with
one-way operations, which clients can call without waiting for any response (even if it is a fault response).

In this lesson, you will learn about the different message patterns that you can choose from when you
design your service contract.

Lesson Objectives
After completing this lesson, you will be able to:

Create service contracts with one-way operations.

Produce services that use streams for requests and responses.

Build duplex services.


Make asynchronous operations in both client and service.

One-Way Operations
Defining an operation as one-way means that the
operation is not expected to return a value.
Therefore, clients can send a request to the service
and continue to execute without waiting for a
response. This pattern is also known as fire-and-
forget. Because one-way operations require that
no value is returned, operations that are marked
as one-way must have void as their return type.
This nonblocking behavior of the client differs
from the behavior when the client calls an
operation that uses the request-response pattern.
This is because in request-response patterns, the
client blocks until the service responds, even if the operation returns void.

You can use one-way operations for any of the following scenarios:

When the client does not require the operation to return a value, either successful or failed.
When the operation is long-running and you do not want to block your client's execution.

When the service has a different way of notifying the client of the operation's result, such as
responding with an email message.
For example, a client tracking system can send one-way messages to a service with the Global Positioning
System (GPS) location of the client every second. The service operation only logs the coordinates and
therefore does not send any response back to the client. The business logic of the application handles the
possibility that some calls fail, because if a call fails, a new location is sent shortly afterward. In this case, it
is also expected that the business logic handles the scenario where the client loses connectivity and does
not send updates for long periods of time.

Note: The client and service implementation of one-way operations differs between
transports. When using a bidirectional transport such as Hypertext Transfer Protocol (HTTP) or
Transmission Control Protocol (TCP), the client sends a request and waits for an
acknowledgement from the service. When the service receives the request, it immediately
responds with an acknowledgement and only then handles the request. For example, when using
HTTP transports with one-way messaging, the service immediately responds with an HTTP 202
(Accepted) response, and only then handles the request. If the service is unavailable to receive
the request, the client fails and throws an exception. On the other hand, when using a
unidirectional transport such as User Datagram Protocol (UDP), the client sends a request and
does not wait for any service acknowledgement. Therefore, the client cannot verify whether the
service is online and available.

To mark an operation as one-way, add the IsOneWay = true initialization to the [OperationContract]
attribute decorating the operation in the service contract.

The following code example demonstrates how to create a one-way operation in a service contract.

Service Contract With One-Way Operations


[ServiceContract]
public interface ILogging
{
[OperationContract(IsOneWay = true)]
void Log(string header, string message);

[OperationContract]
List<LogLine> GetLog();
}

Note: If the IsOneWay parameter is set to false or omitted from the attribute, the default
messaging pattern becomes request-response. If the operation does not return void, an
exception is thrown in runtime when opening the service host. You can have both one-way and
request-response operations in the same service contract.

When you use one-way messages, the client is unaware of what has occurred during the execution of the
service operation. Therefore, the client does not know whether the service operation completed execution
or failed because of an exception. If the client requires information on the operation's result, the service
has to send the information to the client. The service can send this information by using either an out-of-band
communication, such as an email or a text message (SMS), or by sending a message to the client
application itself, for example, by hosting a WCF service in the client application and sending it a message
that contains the result.

For additional information on the one-way messaging pattern, see:

One-Way Services
http://go.microsoft.com/fwlink/?LinkID=298778&clcid=0x409

Streamed Requests and Responses


The default behavior of WCF for sending or
receiving a message is to use buffered transfer. In
buffered transfers, all the data the client or service
has to send is loaded into the memory (into a
buffer), and is stored there until the whole
message is sent to the other end. This results in
several memory allocations that have to be freed.
If you send a very large message, you might even
get an OutOfMemoryException exception if
your object is too big to fit in memory. The same
buffering behavior applies when receiving a
message.

Note: To prevent services from failing due to large memory allocation attempts, the default
configuration of WCF limits the maximum size of a received message to 64 kilobytes (KB). An
additional benefit of this restriction is that it helps prevent denial of service attacks. You can
change this limit by setting the maxReceivedMessageSize and maxBufferSize attributes in the
binding configuration. This configuration also exists on the client-side, and affects the size of
responses that clients can receive from services.

In addition, buffered transfers require the receiving side to wait until the whole message arrives before it
can be read and used. When you send large messages on slow networks, it can cause your service and
client to stop responding for a long time, and even time out if the message takes too long to be received.

If you have to send large amounts of data from the client to the service or from the service to the client,
such as when uploading or downloading a file, you might consider sending the data using a streamed
transfer instead of using the standard buffered transfer.

In streamed transfers, both the receiving and sending sides work with streams. When you use streams, you
can send small amounts of data, which are known as chunks, from one end to the other without having to
allocate memory for the whole message in advance. This is possible because the allocated memory is only
as big as a single chunk. In addition, the receiving end can handle each chunk, while it is being received,
without having to wait for the whole message to arrive. WCF can use streamed transfers in most of its
bindings, including HTTP, TCP, and Named pipes.

Streaming is supported on both ends of a WCF service call: an operation can receive a stream, return a
stream, or do both. The first part in creating an operation that supports streaming is to define the
operation contract to use stream, either as an input, output, or both.

The following code example demonstrates how to define operations that work with stream content.

Contract with Operations that Receive and Return Streams


[ServiceContract]
public interface IVideoService
{
[OperationContract]
Stream DownloadVideo(string videoName, string format);

[OperationContract]
Guid UploadVideo(Stream video);

[OperationContract]
Stream DecodeVideo(Stream videoInput);
}

Note: If your operation receives a stream, it cannot receive any parameter other than
that stream. Instead of using the Stream class for the input/output, you can also use the
Message class or any other type that implements the IXmlSerializable interface. If you plan on
passing very large data structures, consider working with the Message class or implementing the
IXmlSerializable interface, because these types provide streamed access to Extensible Markup
Language (XML) content by using the XmlReader abstract classes for reading XML and
XmlWriter abstract classes for writing XML.
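To complement the IVideoService contract shown above, the following is one possible implementation of its streamed operations. This is a sketch under stated assumptions: the videos are stored as files on disk, and the videoName and format parameters map directly to a file name. WCF disposes a returned stream after the response message has been sent.

```csharp
using System;
using System.IO;

public class VideoService : IVideoService
{
    public Stream DownloadVideo(string videoName, string format)
    {
        // Build the file path from the operation parameters (assumed storage layout).
        string path = Path.Combine(@"C:\Videos", videoName + "." + format);

        // Return an open FileStream. WCF streams its content to the client in
        // chunks and closes the stream when the response completes.
        return File.OpenRead(path);
    }

    public Guid UploadVideo(Stream video)
    {
        // Copy the incoming stream to disk in chunks, without buffering the
        // whole message in memory.
        Guid id = Guid.NewGuid();
        using (var file = File.Create(Path.Combine(@"C:\Videos", id + ".tmp")))
        {
            video.CopyTo(file);
        }
        return id;
    }

    public Stream DecodeVideo(Stream videoInput)
    {
        // Decoding logic elided; a real implementation would return a stream
        // that produces the decoded content on demand.
        throw new NotImplementedException();
    }
}
```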

For a demonstration on how to use the Message class for streamed transfers, see:

Using the Message Class


http://go.microsoft.com/fwlink/?LinkID=298779&clcid=0x409

Setting an operation to accept or return a Stream class is not enough. You also have to change the
binding configuration of your endpoint to use streamed transfers. Each part of the communication can be
streamed: the request (StreamedRequest), the response (StreamedResponse), or both (Streamed).
The following code example demonstrates setting the binding configuration to use streaming on both
request and response.

Configuring a Binding for Streamed Transport


<configuration>
<system.serviceModel>
<services>
<service name="Services.VideoStreamingService">
<endpoint
address="streamService"
binding="basicHttpBinding"
bindingConfiguration="HttpStreaming"
contract="Contracts.IVideoService" />
</service>
</services>

<bindings>
<basicHttpBinding>
<binding
name="HttpStreaming"
maxReceivedMessageSize="2147483647"
transferMode="Streamed"/>
</basicHttpBinding>
</bindings>
</system.serviceModel>
</configuration>

In this example, the maxReceivedMessageSize is changed from the default size of 64 KB to
Int32.MaxValue (2147483647) to enable the service to receive large files.

When you change the transferMode of the binding to any of the streamed options, it affects all the
operations in the contract that you defined for the endpoint. This includes operations that do not use
streams for either input or output. In this case, the streamed content is buffered before
serializing/deserializing it. If your contract has a mix of operations that are more suitable for buffering and
operations that you want to stream, consider splitting the contract to two contracts. You can then expose
each contract through a different endpoint with a different binding configuration: one that uses buffering
and the other that uses streaming.

After you configure your service contract for streamed transfers, you also have to make sure that your
client-side configuration is configured for streamed transfers. For TCP and Named pipes, WCF includes
policy settings in the service metadata so clients can create matching binding configuration. If you use the
Add Service Reference dialog box of Visual Studio 2012, your client-side configuration can be
configured for streamed transfer. However, for the HTTP transport, WCF does not include policy settings
in the service metadata, and after you use the Add Service Reference dialog box, you have to manually
edit the client configuration file and change the binding configuration to set which part is streamed: the
request (StreamedRequest), the response (StreamedResponse), or both (Streamed).
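For example, after adding a reference to the video service over HTTP, the client-side configuration might be edited as follows. The binding name shown is whatever name the Add Service Reference tool generated; the transferMode attribute is the part you add manually.

```xml
<!-- Client-side app.config: for HTTP transports, transferMode must be set manually -->
<bindings>
  <basicHttpBinding>
    <binding name="BasicHttpBinding_IVideoService"
             maxReceivedMessageSize="2147483647"
             transferMode="StreamedResponse" />
  </basicHttpBinding>
</bindings>
```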

For additional information about how to stream messages in WCF, see:


Streaming Message Transfer
http://go.microsoft.com/fwlink/?LinkID=298780&clcid=0x409

Duplex Services
The WCF duplex messaging pattern implements
the callback design pattern. In this pattern, a
callback function is invoked after a certain
function finishes its work. The callback pattern is
used widely in the .NET Framework, for example:
The .NET Asynchronous Programming Model uses callbacks (BeginInvoke and EndInvoke).

The ASP.NET Cache uses callbacks to indicate when items are removed from the cache.

Windows Presentation Foundation (WPF) uses callbacks in its dependency property mechanism.

You can view the duplex channel in WCF as the implementation of the callback pattern in the distributed
world.

For a service to use a callback that resides on the client-side, the service has to open a connection to the
client. To make this possible, the client must perform two actions:

1. Create a service, and host it inside the client application. The remote service can open a channel to
the client-side service and send a message to it.

2. Send information about the client's address to the service so that the service knows how to call the
client.

When you use a duplex channel, such as a TCP channel, WCF automatically performs the above two
actions.

To call a client's callback from the service-side, the service must perform the following steps:

1. When the client calls the service, save the callback channel until it is needed.

2. When the service is ready to send the message, open the saved channel, and then call the callback
operation.

To create a duplex service, start by creating the client contract to implement in the client. Your service
uses the client contract when calling the client.

The following code example creates the IStockCallback client contract.



The Client Contract to be Implemented by the Client


[ServiceContract]
public interface IStockCallback
{
[OperationContract(IsOneWay=true)]
void StockUpdated(string stockId, float value);
}

After you create the client contract, create the service contract, which is implemented on the service-side.

The following code example creates the IStock service contract.

The Service Contract to be Used in the Service


[ServiceContract(CallbackContract = typeof(IStockCallback))]
public interface IStock
{
[OperationContract(IsOneWay=true)]
void RegisterForQuote(string stockId);

[OperationContract(IsOneWay=true)]
void UpdateStockQuote(string stockId, float newValue);
}

You can correlate the service and client contracts in the service contract by adding the CallbackContract
parameter to the [ServiceContract] attribute, and setting it to the type of the client contract interface.
There are two contracts that have to be implemented: the service contract and the client contract (also
known as the callback contract). Both contracts are designed when the service is built, but only the service
contract is implemented by the service itself.

Note: The callback and service contracts in this example use only one-way operations, but
you can also declare operations that use the request-response messaging pattern in these contracts.

When you add a reference to the service on the client-side, you have access to both the service and
callback contracts. You use the service contract when you create a proxy for calling the service and you
implement the callback contract on the client-side.

The following code example demonstrates how the service can call the client by using the callback when it
has information for it.

The Implementation of the Service Contract


public class StockService : IStock
{
private readonly Dictionary<string, IStockCallback> _callbacks =
new Dictionary<string, IStockCallback>();

public void RegisterForQuote(string stockId)


{
_callbacks[stockId] =
OperationContext.Current.GetCallbackChannel<IStockCallback>();
}

public void UpdateStockQuote(string stockId, float newValue)


{
try
{
_callbacks[stockId].StockUpdated(stockId, newValue);
}
catch (CommunicationException e)
{
// Log exception

}
}
}

When the client sends a request to the RegisterForQuote operation method, the method retrieves the
callback channel from the WCF context by calling the GetCallbackChannel<T> generic method, and
stores it in a generic dictionary. When the UpdateStockQuote method is called by a different client (or
even a different service), the callback channel is retrieved from the dictionary, and a message is sent to the
selected client.

Note: The preceding code example registers a single callback for every stock ID to keep the
code short. In real-world scenarios, you should support multiple registrations per stock, as shown
in the demonstration that follows this topic.

To handle the message sent from the service, the client must implement the callback contract.

The following code example demonstrates how to implement the IStockCallback callback contract.

Client-Side Implementation of the IStockCallback Callback Contract


public class StockCallback : IStockCallback
{
public void StockUpdated(string stockId, float value)
{
MessageBox.Show(string.Format("{0} = {1}", stockId, value), "Stock Updated");
}
}

Implementing the callback contract is not enough. You also have to host this implementation in the client
before calling the service for the first time. However, you do not have to manually create a service host
and host the implementation, because WCF can automatically handle the hosting for you if you create the
client proxy with duplex communication support.

The following code example demonstrates how to create a duplex channel for the IStock service contract
with the StockCallback client callback class.

Creating a Duplex Channel with the DuplexChannelFactory<T> Generic Class


InstanceContext context = new InstanceContext(new StockCallback());
DuplexChannelFactory<IStock> factory = new DuplexChannelFactory<IStock>(context);
IStock proxy = factory.CreateChannel();
proxy.RegisterForQuote("MSFT");

When you have to call a duplex service, use the DuplexChannelFactory<T> generic class instead of the
standard ChannelFactory<T> generic class. The DuplexChannelFactory<T> has a constructor that
expects an InstanceContext parameter. The InstanceContext class manages the local service hosting
context. You call the InstanceContext constructor with an instance of the class that you want to host,
which in this case is the StockCallback class.

If you use the Add Service Reference dialog box of Visual Studio 2012 to add reference to a duplex
service, the generated proxy is generated with a constructor method that expects an InstanceContext
object.
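For example, assuming the generated proxy class is named StockClient (the actual name depends on the namespace and contract you chose in the dialog box), the client code would look like the following sketch:

```csharp
// StockClient is a hypothetical generated proxy name. For duplex contracts,
// the generated proxy exposes a constructor that takes an InstanceContext,
// which hosts the callback implementation inside the client.
var context = new InstanceContext(new StockCallback());
var proxy = new StockClient(context);
proxy.RegisterForQuote("MSFT");
```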
The only part that remains is to verify that you are using a binding that supports duplex communication.
TCP and Named pipes support duplex communication, but HTTP and UDP transports cannot handle
duplex communication. If you want to use duplex communication with HTTP, you have to use either the
WsDualHttpBinding or the NetHttpBinding.

Note: The specification of HTTP, which is defined by the World Wide Web Consortium
(W3C), prevents the usage of HTTP as a duplex channel. Therefore, you cannot use standard
HTTP channels, such as those used with BasicHttpBinding and WsHttpBinding, for duplex
communication. You can use the WsDualHttpBinding with duplex communication, because this
binding creates two request-response channels: one for calling the service, and another for
receiving calls from the service. You can also use the NetHttpBinding for duplex
communication, because this binding can upgrade its HTTP channel to use WebSockets, which is
a new protocol that supports bidirectional duplex communication.

For additional information on duplex services, see:

Duplex Services
http://go.microsoft.com/fwlink/?LinkID=298781&clcid=0x409

For additional information on NetHttpBinding and its WebSockets support, see:


Using the NetHttpBinding
http://go.microsoft.com/fwlink/?LinkID=298782&clcid=0x409

Demonstration: Duplex Services


This demonstration shows how to create a service contract and a callback contract, and how to implement
the service contract on the service-side and the callback contract on the client-side. This demonstration
also shows how to create a proxy by using the DuplexChannelFactory<T> generic class.

Demonstration Steps
1. Open the D:\Allfiles\Apx01\DemoFiles\DuplexServices\DuplexServices.sln solution file.
2. View the callback contract and service contract. Observe how the correlation is made between the
two contracts.

The contracts are located under the Contracts project, in the IStockCallback.cs and IStock.cs files.
3. View the service implementation. The instancing mode of the service is set to Single because the
service host calls the UpdateStockQuote method directly, and the concurrency mode is set to
Multiple to support multiple concurrent registration calls.
The service implementation is located in the StockService.cs file, under the Service project.

4. Observe the use of the GetCallbackChannel<T> generic method in the RegisterForQuote method.

The GetCallbackChannel<T> generic method retrieves the callback channel to the client and stores
the channel in the dictionary.

The lock around the dictionary access is there to prevent multiple requests from writing the same key
to the dictionary.

5. View the code in the UpdateStockQuote method, and observe how the callback channel is cast to
the ICommunicationObject interface.

6. View the service host code, and observe that it uses TCP-based endpoints because TCP supports
duplex communication.

The service host code is located in the Program.cs file under the Service project.
7. View the implementation of the callback contract in the Client project, in the StockCallback.cs file.

8. View the client proxy creation. Note the use of the DuplexChannelFactory<T> generic classes
instead of the ChannelFactory<T> generic class.

9. Run both projects without debugging and wait until both console windows open.
Wait until the message "Enter stock name followed by new price. Enter Q to quit." appears in the
Service Host console window, and until the message "Press Enter when the service is ready" appears
in the Client console window.

Move to Client console window and press Enter. Wait until the following message appears in the
window: "Waiting for stock updates. Press Enter to stop".

10. Enter stock changes in the Service Host console window, and view how the updates appear in the
Client console window.

In the Service Host console window, type MSFT 1200, and then press Enter. Verify the message
"MSFT = 1200" appears in the Client console window.

In the Service Host console window, type MSFT 1400, and then press Enter. Verify the message
"MSFT = 1400" appears in the Client console window.

Close both console windows when you are finished.


Question: What will happen if the callback contract uses a request-response instead of a
one-way operation?

Asynchronous Service Operations


When the client sends requests to a WCF service,
each request is executed in the service in a
dedicated thread from the managed Thread Pool.
As long as an operation is executing, the attached
thread cannot be used for other incoming calls. If
your operation is doing CPU related tasks, such as
data manipulation or some computations, the
handling thread is used. But if your operation is
doing I/O related tasks, such as reading a file from
the hard-drive, or waiting for a database call to
return, the handling thread is blocked, and a blocked
thread does no useful work. When
you have several concurrent requests in WCF that are waiting for I/O operations to complete, you end up
having many blocked threads, and this forces the thread pool to create more threads for incoming
requests. Creating more threads when other threads are blocked and not being used is a waste of
resources because there is an overhead to creating and managing many threads, and each new thread
increases your service's memory consumption.

If you have an operation that does blocking I/O work, you can switch it to an asynchronous service
operation. Asynchronous service operations act as follows:

1. The operation starts to execute in the context of a thread from the thread pool.

2. When the operation requires some I/O-bound work, it executes an asynchronous I/O call instead of a
synchronous I/O call.

3. As soon as the I/O call starts, the thread is released back to the thread pool during the asynchronous
I/O call. As soon as the thread is back in the thread pool, the thread can be assigned to other
incoming requests.

4. When the I/O call completes and the control is returned to the operation, a thread is requested from
the thread pool, assigned to the operation, and the operation continues its execution.

When you use the asynchronous operation call pattern with I/O-bound operations, you can better use the
thread pool and decrease the memory consumption of your service.
You can change your service operation from being synchronous to asynchronous by changing the way
the operation method is declared in the service contract and by the way that you implement it in your
code.

The following code example demonstrates how to declare a service contract with an asynchronous service
operation.

Service Contract with an Asynchronous Service Operation


[ServiceContract]
public interface IFileHandler
{
[OperationContract]
Task<int> ProcessFile(string fileName);
}

In this example, the return type of the operation method is the Task<T> generic class. When the service
host sees that the operation uses a Task, it treats this operation as asynchronous.

Note: Although the operation contract's signature uses the Task<T> type, the contract
exposed by the service in the WSDL file shows this operation as synchronous. This is
because the asynchronicity of the operation is internal to the service implementation and is
transparent to clients.

The following code example demonstrates how to implement the IFileHandler service contract.

Implementing the Asynchronous Service Contract


public class FileHandlerService : IFileHandler
{
public async Task<int> ProcessFile(string fileName)
{
using (var file = File.OpenRead(fileName))
{
byte[] buffer = new byte[4096];
int numRead;
int result = 0;
while ((numRead = await file.ReadAsync(buffer, 0, buffer.Length)) > 0)
{
result += ProcessData(buffer, numRead);
}

return result;
}
}

private int ProcessData(byte[] buffer, int numRead)


{
// Do some processing and return the partial result
// (placeholder so the example compiles).
int partialResult = 0;
return partialResult;
}
}

To implement the asynchronous service operation, you use the async keyword in the signature of the
method, and in the implementation, you signify any asynchronous operation with the await keyword.

When the await keyword is encountered during execution, the thread is released and returned to the
thread pool, until the asynchronous operation is complete. In this sample, the FileStream.ReadAsync
method starts the asynchronous I/O call and returns an instance of the Task<T> class.

Note: Before WCF 4.5, asynchronous service operations were implemented by creating two
methods: BeginXXX and EndXXX, marking the BeginXXX method with the
[OperationContract] attribute, and setting the AsyncPattern parameter of the attribute to true.
This declaration technique, although not obsolete, is more difficult to implement.
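For comparison, a sketch of the older Begin/End declaration of the same ProcessFile operation is shown below. Only the Begin method carries the [OperationContract] attribute, with its AsyncPattern parameter set to true:

```csharp
[ServiceContract]
public interface IFileHandler
{
    // Older asynchronous pattern: the Begin method is marked as the operation
    // contract and starts the asynchronous work.
    [OperationContract(AsyncPattern = true)]
    IAsyncResult BeginProcessFile(string fileName, AsyncCallback callback, object state);

    // The matching End method completes the call and returns the result.
    // It is not decorated with [OperationContract].
    int EndProcessFile(IAsyncResult result);
}
```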

For additional information about the different asynchronous programming patterns that you can use in
the .NET Framework, see:

Asynchronous Programming Patterns


http://go.microsoft.com/fwlink/?LinkID=298783&clcid=0x409

All the built-in Stream types in the .NET Framework, such as FileStream and NetworkStream, expose
asynchronous methods for both reading and writing that return a Task object. ADO.NET also provides
asynchronous operations with the DbCommand class. For example, although the
DbCommand.ExecuteNonQuery method is a synchronous blocking call, the
DbCommand.ExecuteNonQueryAsync method provides a nonblocking asynchronous call that you can
use to run lengthy SQL statements in the database.
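As a minimal sketch of this pattern (the connection string, table name, and method name are illustrative and not part of the course code), an operation that runs a lengthy SQL statement asynchronously might look like this:

```csharp
using System.Data.SqlClient;
using System.Threading.Tasks;

public static class OrderArchiver
{
    // Hypothetical example: delete year-old orders without blocking the
    // calling thread while SQL Server processes the statement.
    public static async Task<int> ArchiveOldOrdersAsync(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            await connection.OpenAsync();

            var command = new SqlCommand(
                "DELETE FROM Orders WHERE OrderDate < DATEADD(year, -1, GETDATE())",
                connection);

            // ExecuteNonQueryAsync returns a Task<int>, so the thread is
            // released to the thread pool until the statement completes.
            return await command.ExecuteNonQueryAsync();
        }
    }
}
```

Because the method awaits ExecuteNonQueryAsync, the calling thread is free to serve other requests while the database runs the statement.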

If your service operation has to call another WCF service, and you do not want to block the thread while
you wait for the other service to respond, you can use asynchronous client calls. You can generate a client
proxy with asynchronous methods by using the Add Service Reference dialog box of Visual Studio 2012.
In the Add Service Reference dialog box, click Advanced, and then select the Allow generation of
asynchronous operations option.

The following code example demonstrates how to call a WCF service by using an asynchronous WCF
client call.

Invoking a Service Asynchronously


using (var proxy = new RemoteServices.ProcessingServiceClient())
{
    int result = await proxy.ProcessDataAsync(buffer, numRead);
}

In this example, the asynchronous ProcessDataAsync method was generated in addition to the
synchronous ProcessData method. You can use the same async/await pattern here as you would use it
when working with streams, because asynchronous WCF client calls also return a Task<T> instance.

For additional information about the async and await keywords, see:

Asynchronous Programming with Async and Await


http://go.microsoft.com/fwlink/?LinkID=298784&clcid=0x409

Note: Before Visual Studio 2012, the Add Service Reference feature generated client
code with asynchronous methods that used the event-based asynchronous pattern or the
IAsyncResult asynchronous pattern (also known as the Asynchronous Programming Model, or
APM). You can still generate those methods by clicking Advanced in the Add Service Reference
dialog box, and then selecting the Generate asynchronous operations option.

For additional information about how to create asynchronous service and client operations, see:

Synchronous and Asynchronous Operations


http://go.microsoft.com/fwlink/?LinkID=298785&clcid=0x409

If you choose to use channel factories instead of generating proxy classes, and your service contract
interface contains only synchronous operations, you have to create a new service contract interface
whose methods return Task<T>, replacing the T generic type with the current return type of the
method. If the method returns void, replace it with the non-generic Task return type. The IFileHandler
service contract shown at the beginning of this topic is the result of replacing a synchronous method
signature returning an int value with an asynchronous method signature returning Task<int>.
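As a hypothetical sketch of this technique (the contract name, operation, and action URIs are illustrative), a synchronous contract and its Task-based client-side counterpart might look like this:

```csharp
using System.ServiceModel;
using System.Threading.Tasks;

// Illustrative synchronous contract implemented by the service.
[ServiceContract]
public interface ILengthService
{
    [OperationContract]
    int GetLength(string text);
}

// Client-side copy of the contract rewritten with a Task-based signature.
// The explicit Action and ReplyAction values keep the two contracts
// wire-compatible, so the service does not need to change.
[ServiceContract(Name = "ILengthService")]
public interface ILengthServiceAsync
{
    [OperationContract(
        Action = "http://tempuri.org/ILengthService/GetLength",
        ReplyAction = "http://tempuri.org/ILengthService/GetLengthResponse")]
    Task<int> GetLengthAsync(string text);
}
```

A ChannelFactory<ILengthServiceAsync> can then create channels whose GetLengthAsync calls you can await, while the service continues to implement the synchronous ILengthService contract.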

Note: When you use client-side asynchronous operations, the operations on the service
side remain synchronous, unless they were also replaced with asynchronous operations
(which requires changing the implementation of the service as well).

Lesson 2
Handling Distributed Transactions
The most common type of transaction is the database transaction. When you use a database, you can start a
transaction, execute several insert/update/delete commands, and then choose whether to commit the
transaction or roll it back, canceling the changes that you made. Although you might think
of transactions as being relevant only to databases, there are other transactional resources that you can
manage, such as the file system of your computer and registry settings.

Whereas a single transaction spans only one resource (for example, a specific database), a distributed
transaction can span multiple resources, such as two databases, or a database and the file system of a machine.

In distributed transactions, the scope of the transaction is not limited to a single resource or a chain of resources.
Instead, the distributed transaction spans multiple systems, coordinating the work that is performed on
the client side and the service side, to create a transaction that can even span multiple databases in different
networks. For example, a client can start a distributed transaction and call two WCF services, each working
in an internal transaction. If the second service call fails, the distributed transaction rolls back, causing
the transaction in the first service to roll back as well.
In this lesson, you will learn how to configure your service to support distributed transactions, and how to
use transactions in your service implementation. You will also learn how to call a service from the client
while in a distributed transaction.

Lesson Objectives
After completing this lesson, you will be able to:

Describe what a distributed transaction is, and how the two-phase commit protocol works.

Configure your service contract and binding to support distributed transactions.

Use distributed transactions in your service implementation and in client-side code.

Explaining Transaction Terms: Transaction Manager, Transaction Coordinator, and Two-Phase Commit
To set up a distributed transaction, each computer
involved in the transaction manages its own local
transaction by using a local transaction manager,
and interacts with the other computer by using a
Distributed Transaction Coordinator (DTC). The
DTC is responsible for handling the completion
or abort of the distributed transaction.

Note: The System.Transactions.TransactionScope class is responsible for locally managing
the transaction and communicating with the DTC. You have already used this class to create
local transactions with Entity Framework in Module 2, Querying and Manipulating Data Using
Entity Framework, Lesson 4, Manipulating Data, in Course 20487. This class can also handle
distributed transactions.

When the computer that started the distributed transaction finishes its work, its coordinator starts the
distributed transaction commit process by using a two-phase commit protocol.

The following diagram describes the two-phase commit protocol.

FIGURE 13.1: THE TWO-PHASE COMMIT PROTOCOL


The two-phase commit protocol stages are:

1. When the client finishes its work, it asks its coordinator to commit the distributed transaction.

2. The coordinator of the client sends a commit preparation request to the coordinator of the service.

3. The coordinator of the service instructs the service to prepare itself to commit.

4. If the service can commit the transaction, an approval message is sent to the coordinator of the client.

5. After the coordinator of the client makes sure both parties are prepared to commit, it sends the client
application a request to commit the transaction.

6. In addition to sending the client a request, the coordinator also sends a request to commit the
transaction to the coordinator of the service.

7. The coordinator of the service passes the request to commit the transaction to the service
implementation.

For a distributed transaction to work, both computers must have a DTC installed. In Windows operating
systems, the Microsoft DTC (MSDTC) service processes distributed transactions. Therefore, you have to
make sure that the DTC service is installed and is in the Started state. To verify this, open the services list
by clicking the Administrative Tools tile on the Start screen, and then opening Services.
WCF supports distributed transactions in the following modes:

OLE Transactions (OleTx). The OleTx protocol is a Microsoft transaction protocol that is supported
by clients and services developed in the .NET Framework, and by MSDTC. OleTx uses Remote
Procedure Calls (RPC) to coordinate the transaction between the client and the service, and therefore
it is not interoperable. OleTx is the default transaction mode of WCF when using non-interoperable
bindings such as the NetTcpBinding and NetNamedPipeBinding.

WS-AtomicTransaction (WS-AT). WS-AT is a transaction protocol created by the OASIS
(Organization for the Advancement of Structured Information Standards) consortium, and is part of
the WS-* sets of standards. With WS-AT, the transaction coordination is handled by passing
information in the SOAP headers of requests, and therefore it is interoperable. WS-AT is also supported
by MSDTC to communicate with remote DTCs. WCF uses WS-AT with its interoperable bindings, such
as WSHttpBinding, but you can also configure non-interoperable bindings to use WS-AT.

Configuring non-interoperable bindings to use WS-AT is useful (for example, when OleTx RPC calls
are blocked because of firewall rules).

If you plan on using WS-AT with MSDTC, you have to configure MSDTC. For the MSDTC configuration
steps required for using WS-AT, see:
Configuring WS-Atomic Transaction Support
http://go.microsoft.com/fwlink/?LinkID=298786&clcid=0x409

WCF prefers to use OleTx when possible, because it provides better performance than WS-AT and requires
less configuration in MSDTC. Even if your binding is configured for WS-AT, WCF tries to upgrade the
transaction flow to OleTx, if possible.

You can change the way WCF coordinates transactions by changing the binding configuration of your
service endpoints. This setting will be discussed later in this lesson. Before you decide to use distributed
transactions in your services, remember that transactions have coupling and performance costs caused by
resource locks and additional message traffic. One way to avoid this coupling is to use mechanisms other
than transactions for undoing changes, such as compensations, as explained in the following link.
Compensation vs. Transactions
http://go.microsoft.com/fwlink/?LinkID=298787&clcid=0x409

Configuring Contracts and Bindings for Transactions

If you want your service to accept distributed transactions, you have to specify this in the service
contract by using the [TransactionFlow] attribute. By decorating your service operations with the
[TransactionFlow] attribute, you can state which operations support flowing transactions and which
operations do not.

The following code example demonstrates how to define a service contract that supports flowing
transactions.

Configuring Service Contract for Transaction Flow


[ServiceContract]
public interface IBankContract
{
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Mandatory)]
    bool Transfer1(Account from, Account to, decimal amount);

    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    bool Transfer2(Account from, Account to, decimal amount);
}

You can use the TransactionFlowOption enumeration to determine whether a transaction can flow from
the client to the service. The possible values for this enumeration are as follows:

Allowed. The transaction of the client may flow to the service.

NotAllowed. The transactions of the client may not flow to the service. If the client sends a request
with transaction information, the request is rejected with a protocol exception. This is the default
setting for the [TransactionFlow] attribute.

Mandatory. The transactions of the client must flow to the service. If a request is made from a client
without transaction information, the request is rejected with a protocol exception.

The [TransactionFlow] attribute controls only whether the service can receive flowed transactions. It does
not state anything about whether and how the service uses the transaction. That depends on the service
implementation.

To enable transaction flow through your service, you have to set the transactionFlow setting of your
binding to true. The transactionFlow setting can be applied to any binding that supports SOAP headers.
For example, you can set it with WSHttpBinding and NetTcpBinding, but you cannot set
transactionFlow in BasicHttpBinding.
The following code example demonstrates how you can configure a NetTcpBinding to enable
transaction flow.

Enabling Transaction Flow with NetTcpBinding


<netTcpBinding>
<binding name="TransactionalBinding" transactionFlow="true">
. . .
</binding>
</netTcpBinding>

In some supporting bindings, such as NetTcpBinding, you can also set the transactionProtocol attribute
to configure which protocol WCF uses to coordinate the transaction: OleTransactions,
WSAtomicTransactionOctober2004, or WSAtomicTransaction11. The first protocol uses the OleTx
transaction protocol, and the latter two protocols use different versions of the WS-AT standard.
WSAtomicTransactionOctober2004 uses version 1.0 from October 2004 and WSAtomicTransaction11
uses version 1.1 from July 2007.
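If you construct bindings in code instead of in configuration, the equivalent setting is the binding's TransactionProtocol property. The following sketch (the factory class name is illustrative) enables transaction flow on a NetTcpBinding and selects WS-AT 1.1 instead of the default OleTx protocol:

```csharp
using System.ServiceModel;

public static class BindingFactory
{
    // Sketch: code equivalent of transactionFlow="true" and
    // transactionProtocol="WSAtomicTransaction11" in configuration.
    public static NetTcpBinding CreateTransactionalBinding()
    {
        return new NetTcpBinding
        {
            TransactionFlow = true,
            TransactionProtocol = TransactionProtocol.WSAtomicTransaction11
        };
    }
}
```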

The following code example demonstrates how you can use the <transactionFlow> binding element if
you build your own custom binding.

Creating a Custom Binding that Supports Transactions


<customBinding>
  <binding name="transactionalBinding">
    <transactionFlow transactionProtocol="WSAtomicTransaction11"/>
    <reliableSession/>
    <textMessageEncoding/>
    <httpTransport/>
  </binding>
</customBinding>

The transactionFlow setting should also appear in your client-side binding configuration. If you use
the Add Service Reference dialog box of Visual Studio 2012, this setting is added automatically to the
client configuration. If you use the ChannelFactory<T> generic class and manually create the client
configuration file, you have to add this setting yourself.
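For example, a manually authored client configuration file might contain a matching binding entry such as the following sketch (the endpoint address and contract name are illustrative):

```xml
<system.serviceModel>
  <bindings>
    <netTcpBinding>
      <binding name="TransactionalBinding" transactionFlow="true" />
    </netTcpBinding>
  </bindings>
  <client>
    <endpoint address="net.tcp://localhost:8080/BankService"
              binding="netTcpBinding"
              bindingConfiguration="TransactionalBinding"
              contract="IBankContract" />
  </client>
</system.serviceModel>
```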

Implementing Transactions in Services and Clients


When you use transactions in services, you do not
have to explicitly commit or roll back the
transaction. Instead, your code executes in a
transaction scope, where all the code that
executes participates in a transaction. When you
execute a service operation in the transaction
scope, you have to toggle the Complete flag of
the transaction to state that the transaction can
be completed successfully. If the flag is not
toggled by the end of the scope, or if the code
fails while you are in the scope, the transaction
aborts. When you call several service operations
while in the transaction scope, only the last operation of each service has to toggle the Complete flag
and the previous flag settings are ignored.

Note: If you recall the DTC two-phase commit diagram, when the coordinator for the
service checks that the service can complete its transaction, it actually checks that the transaction
scope was marked as completed.

When you use transactions in your service implementation, you have to specify how scopes are created
and used, and when a scope is marked as complete. You can control these settings by using the
[ServiceBehavior] attribute and the [OperationBehavior] attribute.

The following code example demonstrates how you can set transaction-related settings by using the
[ServiceBehavior] attribute.

Setting Transaction-related Settings with the [ServiceBehavior] Attribute


[ServiceBehavior(
    TransactionAutoCompleteOnSessionClose = true,
    ReleaseServiceInstanceOnTransactionComplete = true)]
public class TransferService : IBankContract
{
    . . .
}

The TransactionAutoCompleteOnSessionClose property controls whether the transaction is automatically
marked as complete when the client's session closes without errors. Setting this property to true
requires that the service use a session. The default value of this property is false.

The ReleaseServiceInstanceOnTransactionComplete property controls whether the service instance is
destroyed as soon as the transaction is marked as complete. When you set this property to true and use
the PerSession instancing mode, every time a transaction is marked as completed, the service
instance is destroyed.

In addition to setting your service behavior, you can configure each of your operations to define how they
handle transactions.

The following code example demonstrates how you can use the [OperationBehavior] attribute to
configure transactions in service operations.

Setting Transaction-related Settings with the [OperationBehavior] Attribute


[OperationBehavior(
    TransactionScopeRequired = true,
    TransactionAutoComplete = false)]
public bool Transfer1(Account from, Account to, decimal amount)
{
    bool result = true;
    // Transactional database code goes here
    OperationContext.Current.SetTransactionComplete();
    return result;
}

Setting the TransactionScopeRequired property to true indicates that the operation requires a
transaction for its work. If a transaction does not flow from the client, the service creates its own
scope. Whether a new scope is created depends on how you have set the [TransactionFlow] attribute
in your contract, and whether the client flows a transaction into the service. The default value of the
TransactionScopeRequired property is false.

The following table describes the different scenarios of configuring the TransactionScopeRequired
property, and how it is affected by transactions that flow from the client.

TransactionScopeRequired   TransactionFlow        Client passes transaction   Result

True                       Allowed / Mandatory    Yes                         Operation uses the flowed transaction.

True                       Allowed / NotAllowed   No                          Operation uses a new transaction.

False                      Allowed / Mandatory    Yes                         Operation is executed without a transaction.

False                      Allowed / NotAllowed   No                          Operation is executed without a transaction.

Note: Incorrect settings that may throw exceptions are omitted from the table. For
example, a service throws an exception if a transaction scope is required and the transaction flow
is mandatory but the client did not pass a transaction to the service.

When an operation uses a transaction scope, you can mark it as complete in one of two ways:

Set the OperationBehavior.TransactionAutoComplete property to true. When this property is
true, the transaction is automatically marked as complete if the operation finishes without faulting. The
default value for this property is true.

Set the transaction completion in code. If the TransactionAutoComplete property is set to false, you
can set the transaction to complete by calling the SetTransactionComplete method, as shown in the
earlier example. You have to manually complete a transaction when certain execution paths require
the transaction to be aborted.
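For example, a sketch of an operation that aborts the transaction on invalid input might look like this (the service class and validation rule are illustrative; the Account type is the one used by the IBankContract examples above):

```csharp
using System.ServiceModel;

public class ValidatingTransferService
{
    [OperationBehavior(
        TransactionScopeRequired = true,
        TransactionAutoComplete = false)]
    public bool Transfer(Account from, Account to, decimal amount)
    {
        if (amount <= 0)
        {
            // Returning without calling SetTransactionComplete causes the
            // transaction to abort when the scope ends.
            return false;
        }

        // Transactional database code goes here

        OperationContext.Current.SetTransactionComplete();
        return true;
    }
}
```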
To call a WCF service within a transaction, you have to wrap your code in a transaction scope, and then
set the transaction to a completed state before ending it.

The following code example demonstrates an example of how to use a distributed transaction on the
client.

Using Transactions in the Client


using (TransactionScope scope = new TransactionScope())
{
    bank1Proxy.Transfer1(accountOne, accountTwo, amount);
    bank2Proxy.Transfer1(accountThree, accountFour, amount);
    bank2Proxy.Transfer2(accountThree, accountFour, amount);
    scope.Complete();
}

When you create a transaction scope, you can involve several services in one distributed transaction.
Because all participating parties in a distributed transaction have to mark the transaction as completed,
the client also must do so by calling the TransactionScope.Complete method.

Note: To use the TransactionScope class, add a reference to the System.Transactions
assembly, and add a using directive for the System.Transactions namespace.

Demonstration: Creating WCF transactional services

This demonstration shows how to create a service contract that requires transactions, how to configure
the binding to flow the transaction from client to service, how to implement a service that has
transactional behaviors, and how to call the service from a client that uses a distributed transaction.

Demonstration Steps
1. Open the D:\Allfiles\Apx01\DemoFiles\Transactions\Transactions.sln solution file and observe the
client-side code.
Open the Program.cs file from the Client project and examine the contents of the Main method.

Observe how the client application first tests a successful transaction, and then tests an unsuccessful
transaction.
2. View the service contract and the use of the [TransactionFlow] attribute. Observe the use of the
different flow levels (Mandatory and Allowed).
3. In the Service project, open the App.config file, and view the binding configuration where the
transactionFlow attribute is set to true.

4. Open the TransferService.cs from the Service project and inspect the service implementation.
View the [OperationBehavior] attribute decorating the Transfer method. Note how the
TransactionScopeRequired and TransactionAutoComplete parameters are set to true.

5. Observe how Entity Framework is used in the Transfer method. The transaction started by the
SaveChanges method is elevated to a distributed transaction.

6. In the Client project, open the Program.cs file. View how the channel factory is created and how the
binding is configured to flow the transaction from the client to the service. Note that in the first
transaction scope block, the Complete method is called, but in the second transaction scope block it
is not called.

7. Run both client and service projects, and view the printouts in the Service console window. The first
distributed transaction is committed, and the second distributed transaction rolls back.

Lesson 3
Extending the WCF Pipeline
When a WCF service receives a message, the message travels a long way through various parts of the WCF
infrastructure until it reaches the service operation to which it is addressed. Most of the work performed
on the message, such as inspecting it, extracting information from it, and deserializing it, is performed
by built-in mechanisms of WCF. Nevertheless, you can add custom message handling implementation to
different parts of the infrastructure. As soon as you gain access to the message, WCF offers you an easy
application programming interface (API) to examine and change the message contents, and to store relevant
message information for you to use later when processing the message in the operations of the service.

In this lesson, you will learn how WCF controls the behavior of the service and its components, and how
you can customize this behavior to suit your needs. You will also learn how you can inspect messages and
save state information by using various techniques.

Lesson Objectives
After completing this lesson, you will be able to:

Describe the architecture of the WCF pipeline.

Explain the responsibilities of the channel stack.

Explain the responsibilities of the dispatchers.

Create custom runtime components.

Apply custom runtime components to the WCF pipeline by using custom behaviors.

Attach custom behaviors to services, endpoints, contracts, and operations.

Create and use extension objects.

High-level Architecture of the WCF Pipeline


Messages that are received by a WCF service have
a long distance to travel, from the moment they
arrive until the moment they are sent to your
service implementation for execution. After a
message arrives at the transport channel, it flows
through the channel stack, through the encoding,
and through any other channel stack protocols
such as a security protocol, or a reliable
messaging protocol.

Note: The channel stack will be thoroughly explained in the next topic. For now, consider the
channel stack as the building blocks of your binding. For example, if your endpoint is configured
to use the BasicHttpBinding binding, your channel stack has an HTTP transport channel, with a
text encoder and no other channel stack protocols, because BasicHttpBinding does not use any
WS-* protocol.

After all the protocols in the channel stack have verified the message, the message has several more steps
to pass before reaching the service implementation. For example, some part of the pipeline has to
deserialize the message to the appropriate common language runtime (CLR) types, and inspect it to
determine which service method to invoke and which service instance to use.

These decisions and actions are performed by the dispatchers, which are the components responsible for
translating the content of the message to a method call. The way the dispatcher is constructed is defined
by behaviors: service behaviors, endpoint behaviors, contract behaviors, and operation behaviors.
Instancing, serialization, throttling, and authorization are only a few examples of the different behaviors
that control dispatcher operations.

The throttling behavior is not covered in this course. However, you should become familiar with it as it can
help you control your service's performance.

For more information about this service behavior and how to configure it, see:

Using ServiceThrottlingBehavior to Control WCF Service Performance


http://go.microsoft.com/fwlink/?LinkID=298788&clcid=0x409

You define and configure behaviors by using code or configuration. For example, adding the
<serviceMetadata> element under a <behavior> element in the <serviceBehaviors> section of the
service's configuration file adds the metadata publishing behavior to your service. The
[OperationBehavior] attribute decorates your service operations and configures how to create
transaction scopes. Each of these behaviors changes the operation of the dispatcher.

In addition to using the behaviors that are built-in to WCF, you can write custom behaviors. With custom
behaviors, you can add message handling logic, and change the way messages process in the dispatcher.
For example, you can change the way errors are handled and logged, or provide custom message
validation.
You can apply custom behaviors, just like standard behaviors, to your services, endpoints, contracts, and
operations, by using either code or configuration. The ability to extend dispatchers by using custom
behaviors is the most important extensibility point in WCF.
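As a preview of the technique, a custom endpoint behavior that logs every incoming message might be sketched as follows (the class names are illustrative; the inspector and behavior interfaces are the standard WCF extensibility interfaces):

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

// Sketch: a dispatch message inspector that logs each incoming message action.
public class LoggingMessageInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request,
        IClientChannel channel, InstanceContext instanceContext)
    {
        Console.WriteLine("Received: {0}", request.Headers.Action);
        return null; // correlation state, not needed here
    }

    public void BeforeSendReply(ref Message reply, object correlationState) { }
}

// Sketch: an endpoint behavior that installs the inspector into the
// dispatch runtime of the endpoint it is applied to.
public class LoggingBehavior : IEndpointBehavior
{
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint,
        EndpointDispatcher endpointDispatcher)
    {
        endpointDispatcher.DispatchRuntime.MessageInspectors.Add(
            new LoggingMessageInspector());
    }

    public void AddBindingParameters(ServiceEndpoint endpoint,
        BindingParameterCollection bindingParameters) { }
    public void ApplyClientBehavior(ServiceEndpoint endpoint,
        ClientRuntime clientRuntime) { }
    public void Validate(ServiceEndpoint endpoint) { }
}
```

The dispatch runtime and message inspectors are discussed in detail later in this lesson.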

Responsibilities of the Channel Stack


The message pipeline is implemented in WCF as a
pipeline of channels, or a channel stack. The
channel stack is built out of two channel types: the
protocol channel and the transport channel. Each
protocol channel has an inner channel property
that contains the next channel to be used.
However, the transport channel does not hold an
inner channel, because it is the last channel that is
called as the message is transmitted to the client.
There are three kinds of binding elements: the
protocol, the encoder, and the transport. Each
protocol channel holds one protocol binding
element. Therefore, there may be zero or more protocol channels in the channel stack, each one
connected to the other, exactly like a pipeline, where each pipe has another pipe attached at its end.

The transport channel, which is responsible for taking the message and transmitting it to the client,
contains both the encoder binding element and the transport binding element. For example, if the
binding elements are HTTP transport, text encoding, and security and reliability protocols, the channels
that are used are as follows: protocol channel (reliability element), protocol channel (security element),
and transport channel (HTTP and text elements). When a message is sent, it passes from the outermost
channel to the transport channel. The binding you select for your endpoint is a representation of a
composition of channels.

In WCF, you can extend both the channel stack and the dispatchers by creating new channels and
runtime components. For example, you can create new transport channels for protocols that are not
currently supported by WCF, such as the Pragmatic General Multicast (PGM) protocol.

Creating new channels is an advanced topic and will not be covered in this course. However, the other
topics in this lesson will explain in depth how to extend the dispatchers.

Responsibilities of the Dispatchers


WCF dispatchers are divided into several types of scopes as follows:

Channel scope. Responsible for handling all the messages that are received by a specific binding type
(a specific URI and binding configuration).

Endpoint scope. Responsible for handling all the messages directed to a specific endpoint.

Operation scope. Responsible for handling all the messages directed to a specific operation in a
specific endpoint.

Channel Dispatcher
The ChannelDispatcher, which is created by the service host, listens to messages on a specific channel,
and associates messages from the channel with specific endpoints by using the appropriate endpoint
dispatcher.

When a service host opens, it creates a ChannelDispatcher object for every combination of Uniform
Resource Identifier (URI) and binding elements, such as transfer, encoding, and protocols, to which it
listens. For example, if a host is listening on its base address for service metadata requests and has two
more endpoints with WSHttpBinding and NetTcpBinding bindings, then it has three channel
dispatchers: one channel dispatcher for the base address, one for the WSHttpBinding binding, and
another for the NetTcpBinding binding.

A single ChannelDispatcher can be created for more than one endpoint. For example, if your service has
endpoints for two contracts of the service, and both endpoints use the same binding and same address,
then your host creates a single ChannelDispatcher for both endpoints. When a message is received by
the ChannelDispatcher, it checks the message to see which endpoint should receive the message.

Each channel dispatcher contains information about the URI on which it listens, its channel stack, and the
endpoints (represented by EndpointDispatcher objects) that are used by this channel. When the channel
dispatcher receives a message, it passes the message across to the endpoint dispatchers that it contains,
to see which of them can handle the message. When the matching endpoint dispatcher is found, the
channel dispatcher passes the message to it for processing.

The channel dispatcher is also responsible for various behaviors of the service, such as error handling,
throttling, and time-out settings for receiving and sending messages. For example, you can extend the
channel dispatcher by creating a new error handler that logs all uncaught exceptions to a log file.

Endpoint Dispatcher
The EndpointDispatcher class is responsible for receiving messages that are sent to a specific endpoint,
and then passing them to the appropriate service operation by using a DispatchOperation object. The
endpoint dispatcher receives a message from the channel dispatcher, and then checks whether the message
was sent to it by examining the To and the Action elements that are specified in the message header. The
check is performed by using the AddressFilter and the ContractFilter properties. After the endpoint
dispatcher receives a message, it passes it to its DispatchRuntime object, which in turn passes the message
to the relevant DispatchOperation object, which is responsible for invoking the service instance method.

You can extend the endpoint dispatcher by supplying your own implementation of the address and contract
filters. For example, you can create a new address filter that ignores the host name in the address, to
support services that are hosted behind load balancers.
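A minimal sketch of that load-balancer scenario (the behavior class name is illustrative) replaces the default address filter with the built-in MatchAllMessageFilter, so the host name in the To header is ignored:

```csharp
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

// Sketch: an endpoint behavior that makes the endpoint dispatcher accept
// messages regardless of the host name in the To header, which is useful
// when requests arrive through a load balancer under a different host name.
public class IgnoreHostNameBehavior : IEndpointBehavior
{
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint,
        EndpointDispatcher endpointDispatcher)
    {
        endpointDispatcher.AddressFilter = new MatchAllMessageFilter();
    }

    public void AddBindingParameters(ServiceEndpoint endpoint,
        BindingParameterCollection bindingParameters) { }
    public void ApplyClientBehavior(ServiceEndpoint endpoint,
        ClientRuntime clientRuntime) { }
    public void Validate(ServiceEndpoint endpoint) { }
}
```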

Dispatch Runtime
Each endpoint dispatcher holds a DispatchRuntime object, which is responsible for selecting the most
suitable DispatchOperation object according to the content of the message. In addition to selecting the
DispatchOperation object, the dispatch runtime is also responsible for the following tasks:
Performing message inspection by using custom message inspectors.

Initializing the service instance context and managing its lifetime.

Applying the role provider and authorization manager on the service instance.

Note: The role provider and authorization manager will be discussed in Appendix B,
Implementing Security in WCF Services in Course 20487.

You can extend the dispatch runtime and add your own custom processing by adding message inspectors
and instancing providers to the dispatcher. The extensions that you add to the dispatch runtime affect all
the messages that relate to the endpoint that contains the dispatch runtime. For example, you can use the
dispatch runtime to add a custom message validation that guarantees that messages directed to a specific
endpoint contain a custom SOAP header. Or you can add a special service instance provider that supplies a
pool of service instances instead of using the default service instance creation techniques the dispatch
runtime offers (per-call, per-session, and single).
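As an illustration, the following sketch shows how such a pooling instance provider might look. The class is hypothetical; it assumes a service class named SimpleService, does not limit the pool size, and assumes the System.Collections.Concurrent, System.ServiceModel.Channels, and System.ServiceModel.Dispatcher namespaces are imported.

Creating a Pooling Instance Provider (Sketch)

public class PooledInstanceProvider : IInstanceProvider
{
    // A thread-safe pool of service instances
    private readonly ConcurrentBag<object> _pool = new ConcurrentBag<object>();

    public object GetInstance(InstanceContext instanceContext, Message message)
    {
        return GetInstance(instanceContext);
    }

    public object GetInstance(InstanceContext instanceContext)
    {
        object instance;
        if (_pool.TryTake(out instance))
            return instance;
        return new SimpleService(); // Create a new instance when the pool is empty
    }

    public void ReleaseInstance(InstanceContext instanceContext, object instance)
    {
        _pool.Add(instance); // Return the instance to the pool for reuse
    }
}

To use the provider, set the InstanceProvider property of the DispatchRuntime object from a custom behavior.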

Dispatch Operation
The DispatchOperation object is responsible for invoking the service method that is specified in the
message, and passing the parameters contained in the message to it. To do this, the DispatchOperation
object performs the following tasks:
1. Obtains the service implementation instance from the dispatch runtime.

2. Deserializes the message to the matching Common Language Runtime (CLR) types.

3. Applies parameter inspectors to all the parameters sent to the method.


4. Invokes the method with the appropriate parameters.

5. Serializes the response, and creates the response message object.

You can customize and extend each of these tasks. For example, you can add a custom parameter
inspector that validates CLR data after it is deserialized, to check for irregular values such as string
length and integer value range.

Note: You can also use message inspectors to validate data. However, this is more difficult
than using parameter inspectors, because message inspectors are called while the message is
still in XML format. The XML format requires more work to parse and validate specific parts of the
message.

By extending the dispatch operation, you can add custom processing at the operation level. The
processing affects all the messages sent to a specific operation that is specified in the contract,
regardless of the endpoint through which the message was received.

Client-Side Dispatchers
The dispatcher components that were mentioned earlier are used on the service side. On the client side,
other components are responsible for creating and handling messages that are sent to the service. The
proxies, generated in WCF clients by the Add Service Reference dialog box of Visual Studio 2012 and by
the ChannelFactory<T> generic class, are responsible for converting method calls and parameters to
outgoing messages, and for converting response messages back to return values.

When the client calls an operation, such as a proxy method, the ClientOperation object, which is a part
of the proxy, takes the parameters that are sent to the method, inspects them, and then serializes the
parameters to a message. After the client operation finishes building the message, it sends it to the
ClientRuntime object. The ClientRuntime object then inspects the message, creates the outgoing
channel, and passes the message to the channel and from there to the service.

You can use the ClientRuntime and the ClientOperation objects to customize the client-side behavior
of your proxy. You can customize the ClientRuntime to change the behavior of all the operations in the
contract that are exposed by the proxy, whereas the ClientOperation offers extensibility for a specific
operation. For example, you can build a message inspector that adds a custom SOAP header to all
outgoing messages, and then put this inspector in the MessageInspectors collection of the
ClientRuntime object.
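The following sketch outlines such a client-side message inspector. The header name, namespace, and value are hypothetical, and the sketch assumes the System.ServiceModel, System.ServiceModel.Channels, and System.ServiceModel.Dispatcher namespaces are imported.

Creating a Client Message Inspector That Adds a SOAP Header (Sketch)

public class VersionHeaderInspector : IClientMessageInspector
{
    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        // Add a custom SOAP header to every outgoing message
        request.Headers.Add(MessageHeader.CreateHeader(
            "ClientVersion", "http://contoso.com/headers", "2.0"));
        return null; // No correlation state is needed
    }

    public void AfterReceiveReply(ref Message reply, object correlationState)
    {
    }
}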
For additional information about the client runtime, see:

Extending Clients
http://go.microsoft.com/fwlink/?LinkID=298789&clcid=0x409

Creating Custom Runtime Components


You can use the extensibility points in the
dispatchers to add custom runtime components
that can handle various tasks, such as message
and parameter inspection, custom serialization
and deserialization of messages, and service
instance creation. To build custom runtime
components, you have to implement specific
interfaces according to the kind of the component
that you want to build. For example, to build a
custom runtime component that performs
message inspection, you have to write a class that
implements the IDispatchMessageInspector
interface and add it to the DispatchRuntime object that you want to customize. There are several
runtime components that you can create to control the message flow in your service. All the runtime
component interfaces are declared in the System.ServiceModel.Dispatcher namespace, which is a part
of the System.ServiceModel assembly.

Some of these interfaces are as follows:


Parameter inspectors. You can use parameter inspectors to check the value of parameters before
invoking the operation's method. You can use them to validate constraints on your data types, such
as maximum length of strings or the size of arrays. To implement a parameter inspector, you have to
create a class that implements the IParameterInspector interface. You can add the parameter
inspector component by adding it to the ParameterInspectors collection of the ClientOperation
object (on the client-side) or the DispatchOperation object (on the service-side).
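For example, the following sketch validates the maximum length of string parameters. The class name and fault message are illustrative, and the sketch assumes the System.ServiceModel and System.ServiceModel.Dispatcher namespaces are imported.

Creating a Parameter Inspector That Validates String Length (Sketch)

public class StringLengthParameterInspector : IParameterInspector
{
    private readonly int _maxLength;

    public StringLengthParameterInspector(int maxLength)
    {
        _maxLength = maxLength;
    }

    public object BeforeCall(string operationName, object[] inputs)
    {
        // Reject any string parameter that exceeds the allowed length
        foreach (object input in inputs)
        {
            string text = input as string;
            if (text != null && text.Length > _maxLength)
                throw new FaultException("A parameter value is too long");
        }
        return null; // No correlation state is needed
    }

    public void AfterCall(string operationName, object[] outputs,
        object returnValue, object correlationState)
    {
    }
}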

Message formatters. You can use message formatters to customize how messages deserialize to
parameters, how return values are serialized to messages on the service-side, how parameters
serialize to messages, and how response messages deserialize to return values on the client-side. If
you want to build a custom message formatter, you can implement either the
IDispatchMessageFormatter or the IClientMessageFormatter. IDispatchMessageFormatter is
required for customizing the service-side, and IClientMessageFormatter is required for customizing the
client-side. You can apply this runtime component by setting the Formatter property of your
DispatchOperation object on the service-side, or of the ClientOperation object on the client-side.

Message inspectors. You can use message inspectors to validate and extract information that is
contained inside the message sent from a client to the service, and inside the message sent from the
service to the client. You can implement the inspectors on either side: client or service. If you want to
build an inspector that you use in the service, you have to implement the
IDispatchMessageInspector interface. If you want to build an inspector for the client-side, you have
to implement the IClientMessageInspector interface. To use the message inspector runtime
component, add it to the MessageInspectors collection of your DispatchRuntime object on the
service-side, or of the ClientRuntime object on the client-side. You can have multiple custom
message inspectors in your service pipeline, each handling a different aspect of the message. For
example, you can have one message inspector to verify the presence of a SOAP header, and another
message inspector that adds missing elements in messages sent by older versions of the clients.
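For example, the following sketch shows a service-side message inspector that verifies the presence of a SOAP header. The header name and namespace are hypothetical, and the sketch assumes the System.ServiceModel, System.ServiceModel.Channels, and System.ServiceModel.Dispatcher namespaces are imported.

Creating a Message Inspector That Verifies a SOAP Header (Sketch)

public class RequiredHeaderInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request,
        IClientChannel channel, InstanceContext instanceContext)
    {
        // Reject messages that do not carry the expected custom header
        int headerIndex = request.Headers.FindHeader(
            "ClientVersion", "http://contoso.com/headers");
        if (headerIndex < 0)
            throw new FaultException("The ClientVersion header is missing");
        return null;
    }

    public void BeforeSendReply(ref Message reply, object correlationState)
    {
    }
}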

Operation selectors. You can create custom operation selectors to change the way the dispatch
runtime finds the dispatch operation that can process the incoming message. You can use operation
selectors when you have a new version of a service that has changes to operation names, and you
want the service to be backward compatible with older clients that still use the old operation names.
If you want to build a selector for the service-side, you have to implement the
IDispatchOperationSelector interface. If you want to build a selector for the client-side, you have to
implement the IClientOperationSelector interface. You can use the client-side operation selector to
map between proxy methods and service operations. You can apply this runtime component by
setting the OperationSelector property of your DispatchRuntime object on the service-side, or of
the ClientRuntime object on the client-side.
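The following sketch shows a service-side operation selector that maps an old operation name, used by older clients, to its new name. The operation names are hypothetical, and the sketch assumes that the Action header ends with the operation name, which depends on the contract's action naming.

Creating an Operation Selector for Backward Compatibility (Sketch)

public class RenamedOperationSelector : IDispatchOperationSelector
{
    public string SelectOperation(ref Message message)
    {
        // Derive the operation name from the last segment of the Action header
        string action = message.Headers.Action;
        string operationName = action.Substring(action.LastIndexOf('/') + 1);

        // Redirect older clients that still call the old operation name
        if (operationName == "GetData")
            return "GetDataV2";
        return operationName;
    }
}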

Operation invokers. You can change the way the operation invocations translate to method calls by
using the operation invoker runtime component. For example, you can change the orders of
parameters sent to the method or log each method call. The operation invoker runtime component
can only be applied on the service-side. To build an operation invoker, you have to implement the
IOperationInvoker interface, and apply it in your service by setting the Invoker property of your
DispatchOperation object.

Error handlers. You can create custom error handlers to extend the way WCF handles exceptions. For
example, you might want to catch every exception and log it to the database, or replace any
unhandled exception with a general fault message that contains the message "Please call the help
desk at (555) 555-5555 to report this problem". To create a custom error handler, you have to
implement the IErrorHandler interface, and apply it in your service by adding it to the
ErrorHandlers collection of the ChannelDispatcher object. Error handlers can only be applied on
the service-side, and you can have multiple error handlers in the same channel. For example, you can
create one error handler that is specific to Structured Query Language (SQL) Server exceptions, and
another error handler to handle other kinds of exceptions.
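The following sketch shows such an error handler. The log file path and fault message are illustrative, and the sketch assumes the System.IO, System.ServiceModel, System.ServiceModel.Channels, and System.ServiceModel.Dispatcher namespaces are imported.

Creating an Error Handler That Logs Exceptions (Sketch)

public class LoggingErrorHandler : IErrorHandler
{
    public bool HandleError(Exception error)
    {
        // Log every unhandled exception; returning false lets WCF
        // continue its normal error handling, such as aborting the session
        File.AppendAllText(@"c:\serviceLogs\errors.log",
            error.ToString() + Environment.NewLine);
        return false;
    }

    public void ProvideFault(Exception error, MessageVersion version, ref Message fault)
    {
        // Replace the unhandled exception with a general fault message
        FaultException faultException = new FaultException(
            "Please call the help desk at (555) 555-5555 to report this problem");
        MessageFault messageFault = faultException.CreateMessageFault();
        fault = Message.CreateMessage(version, messageFault, faultException.Action);
    }
}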

The following code example demonstrates how to create a custom operation invoker.

Creating a Custom Operation Invoker


public class TimingOperationInvoker : IOperationInvoker
{
    IOperationInvoker _previousInvoker = null;

    public TimingOperationInvoker(IOperationInvoker previousInvoker)
    {
        _previousInvoker = previousInvoker;
    }

    public object[] AllocateInputs()
    {
        return _previousInvoker.AllocateInputs();
    }

    public object Invoke(object instance, object[] inputs, out object[] outputs)
    {
        Stopwatch timer = new Stopwatch();
        timer.Start();
        object result = _previousInvoker.Invoke(instance, inputs, out outputs);
        timer.Stop();
        Console.WriteLine("Operation took {0} milliseconds to execute",
            timer.ElapsedMilliseconds);
        return result;
    }

    public IAsyncResult InvokeBegin(object instance, object[] inputs,
        AsyncCallback callback, object state)
    {
        // Use the default invoker. Don't time async methods
        return _previousInvoker.InvokeBegin(instance, inputs, callback, state);
    }

    public object InvokeEnd(object instance, out object[] outputs, IAsyncResult result)
    {
        // Use the default invoker. Don't time async methods
        return _previousInvoker.InvokeEnd(instance, out outputs, result);
    }

    public bool IsSynchronous
    {
        get { return _previousInvoker.IsSynchronous; }
    }
}

As you can see in this example, the operation invoker has a constructor that receives the previous invoker.
The previous invoker is the invoker currently used by the DispatchOperation object. If this is the first
custom operation invoker that is attached to the pipeline, the previous invoker is the default operation
invoker of WCF. The previous invoker contains the code required to invoke the service method, pass it
the parameters it requires, and return its return value.

Note: If you create several operation invokers for an operation, you must connect each of
them by sending each invoker to the constructor of its successor. The last custom operation
invoker receives the default operation invoker in its constructor. For example, you can have an
operation invoker that logs the result of each service method, followed by an operation invoker
that calculates the time of each method execution, followed by the default operation invoker that
performs the actual invocation of the service method.

In this code, the previous invoker is used to perform the actual invocation of the service method.
The custom invoker adds code before and after the invocation to measure how long the service method
took to execute.

The IOperationInvoker interface declares the following methods and properties:


AllocateInputs. This method is called before invoking the operation to obtain the parameters that
are to be sent to the invoked method. This method calls the message formatter, which deserializes
the message to an array of objects.

IsSynchronous. This property returns a value that specifies if the operation is to be invoked
synchronously or asynchronously.

InvokeBegin and InvokeEnd. These methods are called if the operation is to be invoked
asynchronously.

Invoke. This method is called if the operation is to be invoked synchronously.

Applying Runtime Components with Custom Behaviors


After you build custom runtime components, you
have to decide which dispatcher you want to
attach to these components. This must be done at
runtime, when your service host opens. To do this,
you must have a custom behavior. Custom
behaviors are the mechanism that you use to
control which runtime components are added to
the various dispatchers. Using custom behaviors,
you can access the ChannelDispatcher,
EndpointDispatcher, DispatchRuntime, and
DispatchOperation objects, and then add
runtime components to them. You can also use
custom behaviors to remove existing runtime components from the dispatchers.

When you build a custom behavior, you have to decide to which scope you want to apply the behavior.
For example, you can build a custom behavior and attach it to a specific endpoint so that it only applies to
the operations of that endpoint. Or you can build a custom behavior and attach it to the whole service so
that it applies to all the operations in all the different endpoints exposed by the service.

WCF offers you the following kinds of behaviors:

Service behaviors. By creating a service behavior and applying it to your service, you can change the
runtime components of all the operations in all the endpoints of your service. For example, if you
want to add a custom error handler runtime component that handles exceptions from all the service
operations, you have to add it using a service behavior. Only service behaviors have access to the
channel dispatcher where this runtime component is declared. In addition to customizing the runtime
components, you can also customize the service host itself. To create a custom service behavior, you
have to implement the IServiceBehavior interface. You can apply service behaviors to a service by
creating the custom behavior as a custom attribute and then adding it on the service implementation.
Or you can add the behavior to the service's behavior configuration in the configuration file.

Endpoint behaviors. You can use endpoint behaviors to apply changes to a specific endpoint and its
operations. Building an endpoint behavior instead of a service behavior is useful if you only have to
customize the runtime components of a specific endpoint. For example, you can take a message
inspector runtime component that performs message logging, create an endpoint behavior for it, and
only apply the behavior to endpoints that do not require clients to authenticate. To create a custom
endpoint behavior, you have to implement the IEndpointBehavior interface. You cannot apply
endpoint behaviors by using attributes, because they must be applied directly to an endpoint
declaration. Endpoint behaviors can be added to the endpoint's behavior configuration in the
configuration file.
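For example, the following sketch shows an endpoint behavior that adds a message inspector to the endpoint's dispatch runtime. The MyMessageInspector class is a placeholder for any IDispatchMessageInspector implementation, and the sketch assumes the System.ServiceModel.Channels, System.ServiceModel.Description, and System.ServiceModel.Dispatcher namespaces are imported.

Creating a Custom Endpoint Behavior (Sketch)

public class MessageLoggingEndpointBehavior : IEndpointBehavior
{
    public void AddBindingParameters(ServiceEndpoint endpoint,
        BindingParameterCollection bindingParameters)
    {
    }

    public void ApplyClientBehavior(ServiceEndpoint endpoint,
        ClientRuntime clientRuntime)
    {
    }

    public void ApplyDispatchBehavior(ServiceEndpoint endpoint,
        EndpointDispatcher endpointDispatcher)
    {
        // Attach the message inspector to this endpoint only
        endpointDispatcher.DispatchRuntime.MessageInspectors.Add(
            new MyMessageInspector());
    }

    public void Validate(ServiceEndpoint endpoint)
    {
    }
}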

Contract behaviors. When using a contract behavior, you can apply changes to all the operations of
a specific contract, regardless of the endpoint in which it is declared. To create a custom contract
behavior, you have to implement the IContractBehavior interface. You can apply contract behaviors
to your contract or service implementation only by using custom attributes, because there is no
contract configuration in the configuration file.

Operation behaviors. You can use an operation behavior to change the runtime components that
are used for a specific operation, regardless of the endpoint dispatcher through which it was invoked.
For example, you can create a custom operation invoker that logs the duration of time it takes an
operation to execute, and then apply it to several operations while testing the service. To create a
custom operation behavior, you have to implement the IOperationBehavior interface. Like the
contract behavior, operation behaviors can only be attached to an operation by using custom
attributes.
The following table lists which behavior type is most suitable to the scope that you want to control with
the custom runtime component.

Behavior type Message scope

Service behavior All the messages sent to the service

Endpoint behavior All the messages sent to a specific endpoint

Contract behavior All the messages sent to a specific contract in the service,
regardless of the endpoint that exposes it

Operation behavior All the messages sent to a specific service operation

Each interface mentioned earlier contains the ApplyDispatchBehavior method, which gives you access to
the dispatchers so that you can apply custom runtime components to them. If you want to customize the
runtime components on the client-side, you can create an endpoint behavior, a contract behavior, or an
operation behavior. Each behavior contains the ApplyClientBehavior method, which you can use to gain
access to either the client runtime or the client operation to apply the necessary runtime component to
them.

The following code example demonstrates how to build a custom operation behavior that attaches the
custom operation invoker shown in the previous code example, "Creating a Custom Operation Invoker".

Creating a Custom Operation Behavior


public class TimingOperationBehavior : IOperationBehavior
{
    public void AddBindingParameters(
        OperationDescription operationDescription,
        System.ServiceModel.Channels.BindingParameterCollection bindingParameters)
    {
    }

    public void ApplyClientBehavior(
        OperationDescription operationDescription,
        System.ServiceModel.Dispatcher.ClientOperation clientOperation)
    {
    }

    public void ApplyDispatchBehavior(
        OperationDescription operationDescription,
        System.ServiceModel.Dispatcher.DispatchOperation dispatchOperation)
    {
        dispatchOperation.Invoker = new
            TimingOperationInvoker(dispatchOperation.Invoker);
    }

    public void Validate(OperationDescription operationDescription)
    {
    }
}

You can create operation behaviors for either the client side or the service side. This is why the interface
exposes both the ApplyClientBehavior and ApplyDispatchBehavior methods. In the previous example,
the operation invoker is used on the service side, which is why the ApplyClientBehavior method is left
empty. By implementing the ApplyDispatchBehavior method, you can customize the runtime components
used by the dispatch operation. In this example, the method is used to plug in the new operation invoker
by wrapping the default invoker supplied by the WCF infrastructure.

In addition to ApplyClientBehavior and ApplyDispatchBehavior, the IOperationBehavior interface
exposes two more methods:
AddBindingParameters. This method controls the binding elements in the channel stack. You can
use this method to configure the binding elements. For example, you can use this method to change
the time-out properties of the binding.
Validate. Use this method to verify that you can successfully use the behavior by inspecting the
operation description, such as verifying that the operations support a required fault contract.

Attaching Custom Behaviors to Services


A common practice when applying custom
behaviors is to use custom attributes. If you write
a custom service behavior, you can apply it to
your service. If you write a contract behavior, you
can apply it to your contract. Or if you write an
operation behavior, you can apply it to your
operation. You can implement all of these options
by using custom attributes.

Note: You cannot use custom attributes to


apply endpoint behaviors. You can only attach
endpoint behaviors to endpoints by using the
configuration file.

To build your custom behavior so that you can use it as a custom attribute, you have to derive your
custom behavior from the Attribute class.

The following code example demonstrates how to create a custom service behavior by using a custom
attribute.

Creating a Custom Attribute for the Custom Behavior


[AttributeUsage(AttributeTargets.Method)]
public class TimingOperationBehaviorAttribute : Attribute, IOperationBehavior
{
    // The rest of the implementation is the same as in the
    // TimingOperationBehavior class shown earlier
}

You can create any custom behavior, whether it is a service, endpoint, contract, or operation behavior, as
a custom attribute by deriving from the Attribute class. You can also decorate the custom behavior class
with the [AttributeUsage] attribute to specify where the attribute can be used. The rest of the
implementation of the class remains the same as it was.

Note: In the previous example, the name of the class is changed from
TimingOperationBehavior to TimingOperationBehaviorAttribute, because the
naming convention for custom attribute classes is to add the Attribute suffix to the name of the
class.

After you create the new custom attribute, all that remains is to decorate the service, contract, or
operation with the new attribute.

The following code example demonstrates how to apply the custom behavior to a service operation.

Applying Custom Behaviors in Code


public class SimpleService : ISimpleService
{
[TimingOperationBehavior]
public string PerformLengthyTask()
{
// Do something

return "Done";
}
}

If you do not want to apply the custom behavior by using custom attributes, or if you build a custom
endpoint behavior that cannot be applied by using custom attributes, you can apply the behavior by
using the configuration. To use custom behaviors in configuration files, you have to create a class that
derives from the BehaviorExtensionElement class.
The following code example demonstrates a behavior extension element.

Creating a Behavior Extension Element Class


public class TimingBehaviorConfigurationElement : BehaviorExtensionElement
{
    public override Type BehaviorType
    {
        get { return typeof(TimingEndpointBehavior); }
    }

    protected override object CreateBehavior()
    {
        return new TimingEndpointBehavior();
    }
}

The BehaviorExtensionElement declares two abstract members that you must override when you build
your custom configuration extension:
1. BehaviorType: You must override this property to return a Type object that represents the kind of
the custom behavior this configuration element creates.

2. CreateBehavior: You must override this method to return an instance of your custom behavior.

Note: The preceding code example creates a behavior extension element class for the
TimingEndpointBehavior endpoint behavior class. The content of this class is not shown here.
However, the implementation of this class resembles the operation behavior that was
demonstrated before.

To apply this behavior in the configuration, you have to introduce this behavior as an XML element. To do
this, you add the newly created type to the <behaviorExtensions> element under the
<system.serviceModel> element.

The following code example demonstrates how to add the previous configuration extension to the
configuration file.

Adding a Custom Endpoint Behavior to the Configuration File


<system.serviceModel>
<extensions>
<behaviorExtensions>
<add name="timingBehavior" type="Service.TimingBehaviorConfigurationElement,
Service"/>
</behaviorExtensions>
</extensions>
</system.serviceModel>

First you add the behavior extension and set its name and type by setting the name and the type
attributes. Then you can use its name to apply the custom behavior in the configuration file of your
service or in your endpoint configuration, depending on the kind of custom behavior that you create.
The following code example demonstrates how to apply the custom endpoint behavior in configuration.

Applying a Custom Endpoint Behavior in Configuration


<system.serviceModel>
<extensions>
<behaviorExtensions>
<add name="timingBehavior" type="Service.TimingBehaviorConfigurationElement,
Service"/>
</behaviorExtensions>
</extensions>
<behaviors>
<endpointBehaviors>
<behavior name="timing">
<timingBehavior/>
</behavior>
</endpointBehaviors>
</behaviors>
<services>
<service name="Service.SimpleService">
<endpoint
address=""
binding="basicHttpBinding"
contract="Service.ISimpleService"
behaviorConfiguration="timing"/>
</service>
</services>
</system.serviceModel>

Note: The type attribute must be set to the fully qualified name of the class, including its
assembly name. If the class is from a different assembly, you should use the fully qualified name
of the assembly.

Adding State and Functionality with Extensible Objects


Another method that you can use to extend
various parts of your service is to use the
extensible object pattern that WCF implements.
By using the extensible object pattern, you can
add new functionality to your services, operations,
and channels.

The extensibility mechanism in WCF uses the


IExtensibleObject<T> generic interface, which
declares an Extensions property that holds a
collection of extension objects. There are several
types that implement this interface and enable
extensions to attach to them. An extension object
is any type that implements the IExtension<T> interface. By adding and removing extension objects
from the collection, you can customize the functionality of the extensible objects.

The IExtension<T> interface declares the following methods:

Attach. This method is called when the extension is added to the extensions collection of the
extensible object. You can use this method to apply the required functionality changes.

Detach. This method is called when the extension is removed from the extensions collection. Use this
method to undo the changes that you have made to the extensible object.

The following WCF types implement the IExtensibleObject<T> interface:


ServiceHostBase. This is the base type of the ServiceHost class. You can add extensions to this
object to extend the behavior of your service host.
InstanceContext. This class provides access to both the service instance that is created at run time,
and to its contained service host. By adding extensions to the instance context, you can add
functionality that is executed when the instance is created or destroyed.
OperationContext. This class provides access to all the information about the current operation, such
as the incoming message and the security identity of the client. By using extensions, you can change
the behavior of the endpoint dispatcher of the operation, examine and change message content, and
apply other customizations. For example, you can create a parameter inspector that stores the values
of the operation's input parameters in an extension object, and then use that extension object in an
error handler. If the operation throws an exception, the error handler can log the exception together
with the parameters, which might have been the cause of the exception.

IContextChannel. This interface is implemented by the service and client channels. You can add
extensions to the channels to attach custom data to them, and use it at some point by custom
behaviors and runtime components. For example, you can create an extension for the service-side
that keeps a list of message headers (name, namespace, and value) that are added to every outgoing
message. Because the channel can be accessed from anywhere in the service, any service operation
and runtime component can add headers to the extension object. When a message is ready to be
sent to the client, a message inspector can take all the headers from the extension, and include them
in the returned message.
In addition to adding functionality to the runtime classes by customizing the dispatchers and the host,
you can also use extensions to hold state information that you use in your runtime components or in
your service implementation, as in the examples described for the OperationContext and
IContextChannel extension objects.

For additional information about extensible objects in WCF, see:



Extensible Object
http://go.microsoft.com/fwlink/?LinkID=298790&clcid=0x409

You can use extensions to maintain state for different scopes of your service, to configure the behavior of
your service, and to add functionality to various parts of your service. For example, you can create a
service host extension that performs special processing when a host opens or closes.
The following code example demonstrates an extension that provides a singleton instance of a log
manager that can be accessed from anywhere in the service.

Creating an Extension
public class SingletonLoggerExtension : IExtension<ServiceHostBase>
{
    public LogManager Logger { get; private set; }

    public SingletonLoggerExtension(string outputLogFile)
    {
        Logger = new LogManager(outputLogFile);
    }

    public void LogMessage(string message)
    {
        Logger.LogMessage(message);
    }

    public void Attach(ServiceHostBase owner)
    {
        Logger.Initialize();
    }

    public void Detach(ServiceHostBase owner)
    {
    }
}

After you attach the extension to the host, a logger instance is created and becomes available to anyone
who needs it.

Note: Using extensions for singletons gives you more flexibility than using the standard
singleton design pattern or static classes, because with extensions you can also declare
singleton objects for scopes other than the whole service. For example, you can create a
singleton extension that you apply to the operation context. In that case, all the runtime
components and the service implementation code share the same singleton object, whereas
other operation contexts have their own singleton objects. This behavior can be very difficult to
achieve when using the standard singleton design pattern or static classes.

To add this extension to the service host, you have to use the service host's Extensions collection.
The following code example demonstrates how to add the new extension to the service host.

Adding an Extension Object to Service Host's Extensions Collection


ServiceHost host = new ServiceHost(typeof(SimpleService));
host.Extensions.Add(new SingletonLoggerExtension(@"c:\serviceLogs\log.txt"));
host.Open();

After you add the extension to the host, you can use it anywhere you want inside the service.

The following code example demonstrates how to use this extension in a service operation.

Using an Extension
public class SimpleService : ISimpleService
{
    public string PerformLengthyTask()
    {
        string result = String.Empty;

        // Do some work

        ServiceHostBase host = OperationContext.Current.Host;

        SingletonLoggerExtension extension =
            host.Extensions.Find<SingletonLoggerExtension>();
        extension.LogMessage("the result was: " + result);

        return result;
    }
}

By using the Find<T> generic method, you can locate a specific extension inside the extensions collection.
If you add several instances of the same extension type (for example, if the logger extension is added
several times to create different output files), you can use the FindAll<T> generic method, which returns
a collection of extension objects.
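For example, assuming the logger extension was added to the host several times, a sketch of using FindAll<T> might look like this (the log message is illustrative):

Using the FindAll<T> Method

foreach (SingletonLoggerExtension logger in
    host.Extensions.FindAll<SingletonLoggerExtension>())
{
    logger.LogMessage("Host is about to close");
}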

Demonstration: Extending the WCF Pipeline


This demonstration shows how to create a custom operation invoker and how to apply it to the service
implementation with a custom operation behavior. This demonstration also shows how to create an
extension object, and how to access it from the service host and a custom runtime component.

Demonstration Steps
1. Open the D:\Allfiles\Apx01\DemoFiles\CustomOperationInvoker\CustomOperationInvoker.sln
solution file. This solution shows how to create a custom operation invoker that prints the time taken to
execute each service operation.
2. Open the operation invoker code, and observe how the custom operation invoker uses the previous
operation invoker for the methods and properties.

In the Service project, open the TimingOperationInvoker.cs file and observe how the class implements
the IOperationInvoker interface.
View the code of the constructor method. The constructor receives the previous operation invoker, in
this case, the default invoker of WCF.

Inspect the AllocateInputs, Invoke, InvokeBegin, and InvokeEnd methods, and the IsSynchronous
property. Each method calls the matching method or property from the underlying
_previousInvoker object.

3. Observe how you retrieve the extension object and log the execution time in the Invoke method by
calling the LogMessage method of the SingletonLoggerExtension extension object.

4. View the code of the SingletonLoggerExtension class, and observe how it implements an extension
object for the service host.

5. Open the Main method and observe how the SingletonLoggerExtension instance is created and
added to the service host's Extensions collections.
6. View the implementation of the custom operation behavior (the TimingOperationBehavior class),
and observe how the new operation invoker replaces the existing one.

7. View the SimpleService class and observe how the custom operation behavior is applied to the
PerformLengthyTask operation method.

8. Run the project without debugging, open the WCF Test Client utility, and then connect to the service
by using the address http://localhost:8080/SimpleService. Verify the logger prints the execution
time in the console window after you invoke the service.

Open the WCF Test Client utility by double-clicking the WcfTestClient shortcut in the D:\AllFiles
folder.

After adding the service in the WCF Test Client, call the PerformLengthyTask method.

After you invoke the method, switch to the console window, and verify that you see the message
"Operation took XXXX milliseconds to execute" (XXXX will be replaced by the time that it took the
operation to execute).

Lab: Designing and Extending WCF Services


Scenario
After running several testing cycles on the newly created booking service, the Blue Yonder Airlines
software testing department wanted the error logs of the service to contain more information about the
input of operations so that it is easier for them to analyze service faults. In this lab, you will add an error
handler to the booking service, which logs the operation parameters for failed requests.

In addition, Blue Yonder Airlines wants to enable its Blue Badge members (from the Blue Yonder Airlines
frequent flyer program) to earn miles for checked-in flights. In this lab, you will update the WCF booking
service to call the frequent flyer WCF service, and update the two databases (reservations and frequent
flyers) by using a single distributed transaction.

Objectives
After completing this lab, you will be able to:
Create an error handling runtime component and apply it to a WCF service.

Configure a WCF service to support distributed transactions.

Lab Setup
Estimated Time: 60 minutes

Virtual Machine: 20487B-SEA-DEV-A


User name: Administrator

Password: Pa$$w0rd

For this lab, you will use the available virtual machine environment. Before you begin this lab, you must
complete the following steps:

1. On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.

2. In Hyper-V Manager, click MSL-TMG1, and in the Action pane, click Start.
3. In Hyper-V Manager, click the 20487B-SEA-DEV-A virtual machine.
4. In the Snapshots pane, right-click the StartingImage snapshot and then click Apply.

5. In the Apply Snapshot dialog box, click Apply.

6. In Hyper-V Manager, click 20487B-SEA-DEV-A, and in the Action pane, click Start.
7. In the Action pane, click Connect. Wait until the virtual machine starts.

8. Sign in using the following credentials:

User name: Administrator


Password: Pa$$w0rd

9. Verify that you have received credentials to sign in to the Azure portal from your training provider.
These credentials and the Azure account will be used throughout the labs of this course.

Exercise 1: Create a Custom Error Handler Runtime Component


Scenario
To log unhandled exceptions together with the parameters that were sent to the service operation, you
have to create several runtime components and attach them to the WCF message handling pipeline. You
can start by creating a parameter inspector which you can use to store the parameters sent to any service
operation in an extension object. Then you can continue by creating an error handler that can retrieve the

stored parameters, serialize them to XML, and output the exception and parameter values to a log file.
The last step of this exercise is to apply these custom runtime components to your service by using the
configuration file of your service host.

The main tasks for this exercise are as follows:


1. Create an Operation Extension to Hold the Parameter Values

2. Create a Parameter Inspector that Stores the Parameter Values in an Operation Extension

3. Create an Error Handler that Traces Parameter Values for Faulty Operations
4. Create a Custom Service Behavior for the Error Handler and Apply it to the Service

5. Configure Tracing for WCF and the Custom Trace

Task 1: Create an Operation Extension to Hold the Parameter Values


1. Open BlueYonder.Server.sln from D:\AllFiles\Apx01\LabFiles\begin\BlueYonder.Server.
2. In the BlueYonder.BookingService.Implementation project, create and implement an extensible
object class named ParametersInfo, under a new folder named Extensions.

Set the access modifier of the class to public.


Implement the IExtension<OperationContext> interface.

Create the Attach and Detach methods, but leave them empty, as you will not use them.

In the class, create a constructor that receives an array of objects. Store the array in a publicly
available property named Parameters.

Note: You do not have to add any code to the Attach and Detach methods, because you
only use the extension for state management, not to add functionality to the operation context.
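Putting the steps of this task together, the class might be sketched as follows. The member names come from the task steps, while the implementation details are one reasonable interpretation:

```csharp
using System.ServiceModel;

public class ParametersInfo : IExtension<OperationContext>
{
    public ParametersInfo(object[] parameters)
    {
        Parameters = parameters;
    }

    // The parameter values captured before the operation is invoked
    public object[] Parameters { get; private set; }

    // Left empty - the extension only carries state; it does not add
    // functionality to the operation context
    public void Attach(OperationContext owner) { }
    public void Detach(OperationContext owner) { }
}
```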

Task 2: Create a Parameter Inspector that Stores the Parameter Values in an Operation
Extension
1. In the BlueYonder.BookingService.Implementation project, create and implement a new
parameter inspector class named ParametersInspector, under the Extensions folder.

The class should implement the IParameterInspector interface.

In the BeforeCall method, store the list of parameters sent to the operation in a new extension object
of type ParametersInfo.

Add the new extension object to the current operation context's Extensions collection.
The method should return null when it completes.

Note: You have to implement the BeforeCall method to save the parameters of the
operation before the operation is invoked. You do not have to implement the AfterCall method,
because it only executes after the operation is complete without exceptions.
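The steps above can be sketched as the following parameter inspector. The ParametersInfo class is the extension from Task 1; the rest is one possible implementation:

```csharp
using System.ServiceModel;
using System.ServiceModel.Dispatcher;

public class ParametersInspector : IParameterInspector
{
    public object BeforeCall(string operationName, object[] inputs)
    {
        // Store the parameter values in the operation context so that the
        // error handler can retrieve them if the operation fails
        OperationContext.Current.Extensions.Add(new ParametersInfo(inputs));
        return null; // no correlation state is needed
    }

    public void AfterCall(string operationName, object[] outputs,
        object returnValue, object correlationState)
    {
        // Intentionally empty - AfterCall runs only when the operation
        // completes without throwing an exception
    }
}
```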

Task 3: Create an Error Handler that Traces Parameter Values for Faulty Operations
1. To the BlueYonder.BookingService.Implementation project, add
D:\Allfiles\Apx01\Labfiles\Assets\Extensions\ErrorLoggingUtils.cs under the Extensions folder.

2. In the BlueYonder.BookingService.Implementation project, create and implement a new error
handler class named LoggingErrorHandler, under the Extensions folder.

Set the access modifier of the class to public and implement the IErrorHandler interface.

Add a new private field of type TraceSource to the class and name it _traceSource. Initialize the
trace source to use the trace source ErrorHandlerTrace.
Implement the ProvideFault method and leave it empty.

Implement the HandleError method by retrieving the parameters that you stored in the extension
object.

To retrieve the parameters, use the Find method of the current operation context's Extensions collection.
If the parameters were found, create a string containing the type and message of the exception, and
the values of the parameters.

Use the ErrorLoggingUtils.GetObjectAsXml static method to convert each of the parameters to an
XML string.

Write the string to the log by using the _traceSource field.


Return true at the end of the method.
The resulting code of the HandleError method should resemble the following code.

ParametersInfo parametersInfo =
    OperationContext.Current.Extensions.Find<ParametersInfo>();
if (parametersInfo != null)
{
    string message = string.Format(
        "Exception of type {0} occurred: {1}\n operation parameters are:\n{2}\n",
        error.GetType().Name,
        error.Message,
        parametersInfo.Parameters.Select(o => ErrorLoggingUtils.GetObjectAsXml(o))
            .Aggregate((prev, next) => prev + "\n" + next));
    _traceSource.TraceEvent(TraceEventType.Error, 0, message);
}
return true;

Note: The IErrorHandler interface provides two methods, ProvideFault and HandleError.
You can implement the ProvideFault method to provide a fault message to WCF based on the
thrown exception. The HandleError is called after WCF returns the fault message to the client so
that you can log the thrown exception without making the client wait until the logging
procedure is complete.
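The surrounding class might be declared as follows. This is a sketch: the HandleError body is the code shown earlier in this task, and ProvideFault is intentionally left empty as the task instructs:

```csharp
using System;
using System.Diagnostics;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class LoggingErrorHandler : IErrorHandler
{
    // The trace source name must match the <source> element that
    // Task 5 adds to the configuration file
    private readonly TraceSource _traceSource =
        new TraceSource("ErrorHandlerTrace");

    public void ProvideFault(Exception error, MessageVersion version,
        ref Message fault)
    {
        // Intentionally empty - the default fault handling is used
    }

    public bool HandleError(Exception error)
    {
        // The body shown in the preceding code example goes here
        return true;
    }
}
```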

Task 4: Create a Custom Service Behavior for the Error Handler and Apply it to the
Service
1. In the BlueYonder.BookingService.Implementation project, create and implement a new service
behavior class named ErrorLoggingBehavior, under the Extensions folder.
Implement the IServiceBehavior interface.

Add the AddBindingParameters and Validate methods of the interface and leave them empty.

Implement the ApplyDispatchBehavior method by iterating the serviceHostBase.ChannelDispatchers
collection, and adding a new LoggingErrorHandler object to each channel dispatcher's
ErrorHandlers collection.

In each channel dispatcher iteration, iterate each of the endpoints, and in each endpoint, iterate the
endpoint's DispatchRuntime.Operations collection.

In each dispatch operation iteration, add a new ParametersInspector object to the operation's
ParameterInspectors collection.
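Step 1 can be sketched as the following IServiceBehavior implementation. The foreach over ChannelDispatchers assumes each dispatcher is a ChannelDispatcher, which holds for standard WCF hosts; LoggingErrorHandler and ParametersInspector are the classes from the earlier tasks:

```csharp
using System.Collections.ObjectModel;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

public class ErrorLoggingBehavior : IServiceBehavior
{
    public void AddBindingParameters(ServiceDescription serviceDescription,
        ServiceHostBase serviceHostBase,
        Collection<ServiceEndpoint> endpoints,
        BindingParameterCollection bindingParameters) { }

    public void Validate(ServiceDescription serviceDescription,
        ServiceHostBase serviceHostBase) { }

    public void ApplyDispatchBehavior(ServiceDescription serviceDescription,
        ServiceHostBase serviceHostBase)
    {
        foreach (ChannelDispatcher dispatcher in serviceHostBase.ChannelDispatchers)
        {
            // One error handler per channel dispatcher
            dispatcher.ErrorHandlers.Add(new LoggingErrorHandler());

            foreach (EndpointDispatcher endpoint in dispatcher.Endpoints)
            {
                // One parameter inspector per dispatch operation
                foreach (DispatchOperation operation in
                         endpoint.DispatchRuntime.Operations)
                {
                    operation.ParameterInspectors.Add(new ParametersInspector());
                }
            }
        }
    }
}
```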

2. In the BlueYonder.BookingService.Implementation project, add a reference to the
System.Configuration assembly.
3. In the BlueYonder.BookingService.Implementation project, create and implement a new behavior
extension element class named ErrorLoggingBehaviorExtensionElement, under the Extensions
folder.

Set the access modifier of the class to public and inherit it from the BehaviorExtensionElement
class.

Override the BehaviorType property by returning the Type object of the ErrorLoggingBehavior
class. Use the typeof operator to get the Type object of a class.

Override the CreateBehavior method by returning a new instance of the ErrorLoggingBehavior
class.

Note: You can use a custom behavior in the configuration file only if you create a class for
its configuration element. The configuration element class has to provide two things: the kind of
the custom behavior class and an instance of it.
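The configuration element class described in this step might be sketched as follows:

```csharp
using System;
using System.ServiceModel.Configuration;

public class ErrorLoggingBehaviorExtensionElement : BehaviorExtensionElement
{
    // The kind of behavior this configuration element represents
    public override Type BehaviorType
    {
        get { return typeof(ErrorLoggingBehavior); }
    }

    // The behavior instance that WCF attaches to the service
    protected override object CreateBehavior()
    {
        return new ErrorLoggingBehavior();
    }
}
```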

4. In the BlueYonder.BookingService.Host project, open the App.config file, and add a
<behaviorExtensions> element for the new configuration element that you created.

In the <system.serviceModel> element, create a new <extensions> element, and in it create a
<behaviorExtensions> element.

In the <behaviorExtensions> element, add an <add> element with the following values.

Attribute Value

name errorLoggingBehavior

type BlueYonder.BookingService.Implementation.Extensions.ErrorLoggingBehavior
ExtensionElement, BlueYonder.BookingService.Implementation

Note: The type attribute must be set to the qualified name of the configuration element
class, including the name of its containing assembly. You can set the value of the name attribute
to any name that you think best represents your custom behavior.
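With the attribute values from the preceding table, the resulting configuration might resemble the following sketch (in the actual file, the type attribute value must appear on a single line):

```xml
<system.serviceModel>
  <extensions>
    <behaviorExtensions>
      <add name="errorLoggingBehavior"
           type="BlueYonder.BookingService.Implementation.Extensions.ErrorLoggingBehaviorExtensionElement, BlueYonder.BookingService.Implementation" />
    </behaviorExtensions>
  </extensions>
</system.serviceModel>
```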

5. In the <serviceBehaviors> element, add the <errorLoggingBehavior/> element to the service's
<behavior> element.

Note: Visual Studio Intellisense uses built-in schemas to perform validations. Therefore, it
will not recognize errorLoggingBehavior behavior extension, and will display a warning. Please
disregard this warning.

Task 5: Configure Tracing for WCF and the Custom Trace


1. Remaining in the App.config, add a <system.diagnostics> element and set the trace to automatic
flush.

In the newly added <system.diagnostics> element, add a <trace> element and set its autoflush
attribute to true.

Note: The autoflush attribute controls whether log messages are immediately written to
the log, or cached in memory and periodically flushed. The value of the attribute is set to true so
that you can view the results immediately without waiting for the log to flush its content to the
file.

2. Add a shared listener to the <system.diagnostics> element.

In the <system.diagnostics> element, add a <sharedListeners> element, and in it, add an <add>
element with the following values.

Attribute Value

name ServiceModelTraceListener

type System.Diagnostics.XmlWriterTraceListener

initializeData D:\AllFiles\Apx01\LabFiles\WCFTrace.svclog

3. In the <system.diagnostics> element, add two sources, one for System.ServiceModel and another
for ErrorHandlerTrace. Set both sources to use the shared listener you created before.
In the <system.diagnostics> element, add a <sources> element, and in it, add two <source>
elements.

Set the attributes in the elements according to the following table.

Source element Attribute Value

First name System.ServiceModel

switchValue Error,ActivityTracing

Second name ErrorHandlerTrace

switchValue Error,ActivityTracing

In each of the <source> elements, add a <listeners> element with the following configuration.

<listeners>
<add name="ServiceModelTraceListener">
<filter type="" />
</add>
</listeners>

The resulting configuration should resemble the following configuration.

<source name="System.ServiceModel" switchValue="Error,ActivityTracing">
  <listeners>
    <add name="ServiceModelTraceListener">
      <filter type="" />
    </add>
  </listeners>
</source>
<source name="ErrorHandlerTrace" switchValue="Error,ActivityTracing">
  <listeners>
    <add name="ServiceModelTraceListener">
      <filter type="" />
    </add>
  </listeners>
</source>

Note: The System.ServiceModel source is used for tracing WCF activities, and the
ErrorHandlerTrace source is used by the LoggingErrorHandler class, in the TraceSource
constructor.
WCF tracing is covered in Lesson 2, "Configuring Service Diagnostics", of Module 10,
"Monitoring and Diagnostics".

4. Run the BlueYonder.BookingService.Host project without debugging, and test the service by using
the WCF Test Client utility.

After running the project, open the WCF Test Client utility by double-clicking the WcfTestClient
shortcut from the D:\AllFiles folder.

Add the service by using the address http://localhost/BlueYonder/Booking.


Test the UpdateTrip operation by sending a null object. Wait until an error dialog box appears that
states "The confirmation code of the reservation is invalid".

Close the WCF Test Client utility.

5. From D:\AllFiles\Apx01\LabFiles, open the WCFTrace.svclog trace log file and verify that you see
the exception with the XML of the TripUpdateDto parameter. Close the Microsoft Service Trace
Viewer utility and the service host console window, and return to Visual Studio 2012.

Results: You can use the WCF Test Client utility to test the service, cause exceptions to be thrown in the
code, and check the log files to verify that the exception message is logged together with the parameters
that are sent to the service operation.

Exercise 2: Add Support for Distributed Transactions to the WCF Booking Service
Scenario
To support distributed transactions, you have to update both the Frequent Flyer service and the Booking
service that acts as the client of the Frequent Flyer service. Start by updating the Frequent Flyer service
contract and the service host configuration file to support transaction flow, and then continue to update
the service implementation to use the flowed transactions. After updating the Frequent Flyer service,
apply the same binding configuration to the client-side configuration in the Booking service, and use a
transaction scope to create a distributed transaction that spans the service call and the changes that were
made to the local database.

The main tasks for this exercise are as follows:


1. Add Transaction Flow Attributes to the Frequent Flyer Service Contract

2. Configure the Service Endpoint's Binding to Flow Transactions

3. Add the Transaction Scope Attribute to the Service Implementation


4. Add Code to the WCF Booking Service that Calls the Frequent Flyer WCF Service

5. Execute the Service Call and the Reservations Database Updates in a Distributed Transaction

6. Update the WCF Client Configuration with the Frequent Flyer Service Endpoint and the Support for
Transaction Flow in the Bindings

Task 1: Add Transaction Flow Attributes to the Frequent Flyer Service Contract
1. In the BlueYonder.FrequentFlyerService.Contracts project, open the IFrequentFlyerService.cs file, and
set the AddFrequentFlyerMiles and RevokeFrequentFlyerMiles methods to allow the flow of
transactions.

Decorate the AddFrequentFlyerMiles and RevokeFrequentFlyerMiles methods with the


[TransactionFlow] attribute and set the flow option to TransactionFlowOption.Allowed.
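The decorated contract might look like the following sketch. The operation signatures are illustrative assumptions; the lab's contract file defines the actual parameters:

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IFrequentFlyerService
{
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void AddFrequentFlyerMiles(int travelerId, int miles);

    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void RevokeFrequentFlyerMiles(int travelerId, int miles);
}
```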

Task 2: Configure the Service Endpoint's Binding to Flow Transactions


1. In the BlueYonder.FrequentFlyerService.Host project, open the App.config file, and add a new
binding configuration for netTcpBinding with transaction flow.

In the <system.serviceModel> element, add a new <bindings> element with the following
configuration.

<bindings>
<netTcpBinding>
<binding name="TcpTransactionalBind" transactionFlow="true" />
</netTcpBinding>
</bindings>

2. Apply the new binding configuration to the existing service endpoint.

Locate the <endpoint> element for the Frequent Flyer service.


In the element, add the bindingConfiguration attribute, and set it to TcpTransactionalBind.

Task 3: Add the Transaction Scope Attribute to the Service Implementation


1. In the BlueYonder.FrequentFlyerService.Implementation project, open the FrequentFlyerService.cs file,
and add transaction scope setting to the AddFrequentFlyerMiles and RevokeFrequentFlyerMiles
methods.
Decorate the AddFrequentFlyerMiles and RevokeFrequentFlyerMiles methods with the
[OperationBehavior] attribute.
Set the parameters in each [OperationBehavior] attribute according to the following table.

Parameters Value

TransactionAutoComplete true

TransactionScopeRequired true
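Applied to a service operation, the attribute might look like this sketch (the method signature is an illustrative assumption):

```csharp
[OperationBehavior(TransactionScopeRequired = true,
                   TransactionAutoComplete = true)]
public void AddFrequentFlyerMiles(int travelerId, int miles)
{
    // The database update runs inside the flowed transaction; because
    // TransactionAutoComplete is true, the transaction votes to commit
    // automatically when the method returns without an exception
}
```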

Task 4: Add Code to the WCF Booking Service that Calls the Frequent Flyer WCF
Service
1. In the BlueYonder.BookingService.Implementation project, open the BookingService.cs file, and
add a private field named _frequentFlyerChannnelFactory to store the channel factory for the
IFrequentFlyerService service contract. Use the FrequentFlyerEP configuration name in the channel
factory constructor.
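The field described in this step might be declared as follows. The field name (including its triple-n spelling) matches the lab instructions:

```csharp
// Created once per service instance and reused, because channel factories
// are relatively expensive to construct. "FrequentFlyerEP" must match the
// client endpoint name added to App.config in Task 6.
private readonly ChannelFactory<IFrequentFlyerService> _frequentFlyerChannnelFactory =
    new ChannelFactory<IFrequentFlyerService>("FrequentFlyerEP");
```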

2. In the UpdateTrip method, check whether the traveler is checking in, and if so, call the Frequent
Flyer service to update their miles.
To check whether the traveler is checking in, verify that the original and new statuses differ, and
that the new status is FlightStatus.CheckedIn.

Retrieve the earned miles from the originalTrip.FlightInfo.Flight.FrequentFlyerMiles property.



Use the _frequentFlyerChannnelFactory.CreateChannel method to create a new proxy to the
Frequent Flyer service.

Call the Frequent Flyer service's AddFrequentFlyerMiles method, and pass it the traveler's ID and
the earned miles.
Call the service before saving the local changes to the database.

Task 5: Execute the Service Call and the Reservations Database Updates in a
Distributed Transaction
1. Add a reference to the System.Transactions assembly, and surround the service call and the
database update with a transaction scope. Make sure that you call the scope's Complete method
before leaving the scope. The resulting code segment should resemble the following code.

using (TransactionScope scope = new TransactionScope())
{
    // TODO: 2 - Call the Frequent Flyer service to add the miles
    // if the traveler has checked in
    if (originalStatus != newStatus && newStatus == FlightStatus.CheckedIn)
    {
        IFrequentFlyerService proxy = _frequentFlyerChannnelFactory.CreateChannel();
        int earnedMiles = originalTrip.FlightInfo.Flight.FrequentFlyerMiles;
        proxy.AddFrequentFlyerMiles(reservation.TravelerId, earnedMiles);
    }

    // TODO: 3 - Wrap the save and the service call in a transaction scope
    reservationRepository.Save();
    scope.Complete();
}

Task 6: Update the WCF Client Configuration with the Frequent Flyer Service Endpoint
and the Support for Transaction Flow in the Bindings
1. In the BlueYonder.BookingService.Host project, open the App.config file, add a client endpoint for
the Frequent Flyer service, and configure its binding to flow transactions. Name the client endpoint
FrequentFlyerEP to match the name that you used in the Booking service implementation.

Note: You can use the service endpoint configuration from the Frequent Flyer service host
to set the address, binding, and contract settings of the client endpoint. You can also copy the
binding configuration from the Frequent Flyer service host configuration.
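Combining the service-side binding configuration with the client endpoint, the client configuration might resemble the following sketch. The address and contract values shown are assumptions; copy the actual address, binding, and contract from the Frequent Flyer service host configuration:

```xml
<system.serviceModel>
  <bindings>
    <netTcpBinding>
      <binding name="TcpTransactionalBind" transactionFlow="true" />
    </netTcpBinding>
  </bindings>
  <client>
    <endpoint name="FrequentFlyerEP"
              address="net.tcp://localhost/BlueYonder/FrequentFlyer"
              binding="netTcpBinding"
              bindingConfiguration="TcpTransactionalBind"
              contract="BlueYonder.FrequentFlyerService.Contracts.IFrequentFlyerService" />
  </client>
</system.serviceModel>
```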

2. Make sure that the MSDTC service is running, and start both service hosts (Booking and Frequent
Flyer) without debugging.

To open the services list, on the Start screen, click the Administrative Tools tile, and in the
Administrative Tools window, double-click Services.
In the Services window, look for the Distributed Transaction Coordinator service and check its
Status column. If the status of the service is not Running, right-click it, and then click Start.

After you run both service hosts, wait until both console windows show the "service is running"
message.

3. Start the WCF Test Client utility, add the two services, and verify the distributed transaction works by
calling the UpdateTrip.

Open the WCF Test Client utility by double-clicking the WcfTestClient shortcut from the D:\AllFiles
folder.

Add the services by using the http://localhost/BlueYonder/Booking and
http://localhost/BlueYonder/FrequentFlyer service addresses.

Test the UpdateTrip method by using the following request values.

Parameter Value

FlightDirection Departing

ReservationConfirmationCode Aa123

TripToUpdate Property Value

Class First

FlightScheduleID 1

Status CheckedIn

4. Verify that the miles were added to the traveler by invoking the GetAccumulatedMiles operation with
traveler ID 1. Verify that the returned miles value is 5026. Close the WCF Test Client utility and the two
console windows when done.

Results: You can run the WCF Test Client utility, call an operation in the Booking service that starts a
distributed transaction, and verify that the Frequent Flyer service indeed committed its transaction.

Question: Why did you log the error in the HandleError method of the error handler class
and not in the ProvideFault method?

Module Review and Takeaways


In this module, you learned how to apply various design principles to your service contract and how to implement
complex communication scenarios, such as duplex services. You also learned how to support distributed
transactions with WCF services. In the last lesson of this module, you learned about one of the most
important parts of WCF: the message-handling pipeline. You learned how the pipeline is constructed, how a message flows through the
dispatchers, and how you can customize the pipeline by creating custom runtime components. This module is the
second of three modules that cover WCF. The next module will discuss the various ways by which you can secure
your WCF services.

Review Question(s)
Question: When is it useful to use asynchronous operations on the service-side?

Question: What are dispatchers?

Tools
Microsoft Service Configuration Editor, Microsoft Service Trace Viewer

Appendix B
Implementing Security in WCF Services
Contents:
Module Overview 14-1

Lesson 1: Introduction to Web Services Security 14-2

Lesson 2: Transport Security 14-6


Lesson 3: Message Security 14-15

Lesson 4: Configuring Service Authentication and Authorization 14-25


Lab: Securing a WCF Service 14-34
Module Review and Takeaways 14-41

Module Overview
Security is one of the major concerns for many distributed applications. Key security issues that you must
address when you design a web service include authentication, authorization, and secured
communication. Windows Communication Foundation (WCF) provides you an effective and extensible
infrastructure that can meet these challenges. To achieve flexibility, extensibility, and maintainability, WCF
separates the security infrastructure from the business implementation of the service operations.
Developers do not need to implement authentication and secure communication within the service
implementation because WCF manages this aspect. You can write code that concentrates only on the
business aspect and let the WCF infrastructure handle security.
This module provides an overview of web application security and WCF security capabilities, and then it
explains how to configure and consume WCF services that use the security infrastructure provided by
WCF.

Objectives
After you complete this module, you will be able to:

Describe web application security.

Configure a service for transport security.


Configure a service for message security.

Implement and configure authentication and authorization logic.



Lesson 1
Introduction to Web Services Security
Before you understand how to implement security in WCF services, it is important that you understand
why securing services is important and what security features are available to secure web services. This
lesson provides you with an overview of application security and the key features of WCF security.

Lesson Objectives
After you complete this lesson, you will be able to:

Describe application security fundamentals.


Describe WCF security fundamentals.

Introduction to Application Security


Applications work with information that might be
sensitive or private, and therefore, such
information must be protected. Web applications,
by definition, are exposed to a large number of
customers distributed geographically. A user with
malicious intent can try to steal information or
damage the intended execution of the
application. Therefore, to protect the integrity of
your data, you must implement adequate security
measures in your applications.

Security is the ability to protect an application against reasonably predictable dangers or threats.
When you design security for your applications, you must consider the various factors that the application
depends on. These factors include infrastructure considerations such as the existence of a network firewall
and human and business factors such as the complexity of a password and the different permissions that
should be assigned to different roles. If any one aspect of the application is not protected, an attacker can
take advantage of the breach, and compromise the data. However, it is impossible to achieve absolute
security. Therefore, you should take into account the available budget and resources, and then implement
risk assessment and the required mitigation steps in a security plan.

To secure an application, you need to consider three major aspects:

Human. People have access to information when they interact with an application. People can be
manipulated and can reveal information. Organizations educate their staff, and implement privacy
and security policies to reduce the risk of exposing sensitive information.
Infrastructure. Applications run on infrastructure such as operating systems and networks. You can
protect this infrastructure by regularly installing security updates and by using security networking
components such as firewalls and other dedicated equipment.

Application. Only the application layer knows the business of its organization, such as business rules,
system use-cases, user's permissions, and valid workflows. By validating user input and monitoring the
application for atypical behavior, you can minimize the risk of application-level attacks, such as invalid
user input and irregular transaction frequency.
Given the preceding security aspects, you need to keep in mind the following considerations when
designing and implementing secure applications:

Input validation. You must validate all the inputs of your service before you use them. Because clients
can sometimes be impersonated, and malicious users might even use a fake client application, it is
not enough if you implement validation on the client side, and you should repeat those validations
on the server side. Implementing validation is usually simple and inexpensive, but highly effective in
preventing many present and future attacks.

There are two main strategies that you can use to implement validation: the allow list and the block
list.
o Allow List. This list defines a pattern of valid inputs. All inputs that follow the pattern are
accepted, whereas other inputs are rejected.

o Block List. This is a list of bad inputs. If bad inputs are discovered, they are rejected.
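As an illustrative sketch (not part of the course materials), an allow-list validation for a reservation confirmation code such as "Aa123" might use a regular expression that accepts only the known-good pattern; the class name and the pattern are assumptions:

```csharp
using System.Text.RegularExpressions;

public static class InputValidator
{
    // Allow list: accept only confirmation codes that match the known-good
    // pattern (two letters followed by three digits, such as "Aa123").
    private static readonly Regex ConfirmationCodePattern =
        new Regex(@"^[A-Za-z]{2}\d{3}$");

    public static bool IsValidConfirmationCode(string input)
    {
        return input != null && ConfirmationCodePattern.IsMatch(input);
    }
}
```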

Authentication. You need to make sure that you know who your client is. Using authentication
techniques for client identification is a common way to secure the boundary of the service, and to
prevent unknown users from accessing your service.

Authorization. Knowing your client is important, but you also must make sure your client can perform
the action that they requested. Often, different users will have access to certain services and
operations, but not to others. Sometimes even the value of a parameter that is sent to an operation is
valid if sent by a certain client, but not valid when sent by another client. For example, a bank loan
operation can receive values up to $1,000 if requested by a clerk, and up to $100,000 if requested by
a manager.
Cryptography. To protect the content of your data, you will need to encrypt it and make sure no
attacker can access or change it. You can protect your data from being changed by signing it
digitally, and from being read by encrypting it with symmetric or asymmetric cryptography.

Sensitive Data. You will need to identify which data is sensitive, such as contact information, credit
card numbers, salaries, and confidential phone numbers. Sensitive information must be handled with
care. You will need to restrict access to it, encrypt it when persisted, and use a secured channel to
transmit it. Most of the previously mentioned aspects (validation, authentication, and cryptography)
serve the purpose of handling sensitive data.
 Session Management. If you want clients to perform several related operations in a session, and you
plan to save state information for that session, you need to secure both your data and the session
information. A common attack type is client spoofing, in which an external attacker assumes the
identity of your client and sends requests on its behalf. You need to ensure that your session cannot
be compromised in this way.
 Configuration Management. The configuration of your application might contain sensitive data,
such as server names, database passwords, and default values for operations, that you would not
want anyone to obtain. You need to protect your configuration by applying authorization and
encryption.
 Data Manipulation. When a client calls your service, and when your service responds, the message is
sent over the network, where a sufficiently resourceful malicious user can see and manipulate it. To
protect yourself from these threats, you need to prevent changes to the data, or at least be able to
detect whether the data was changed in transit. Encrypting or digitally signing the data helps
mitigate this threat.
 Exception Management. When your service throws an exception, it might contain information you do
not want to reveal to the outside world, such as server names or database table names. Revealing
these details can help potential attackers learn how your application works and which resources it
uses. Make sure that exceptions (faults) returned by your service contain only the minimal detail the
client requires to understand what happened, without revealing the inner workings of the service.
14-4 Appendix B: Implementing Security in WCF Services
 Auditing and Logging. Preventing an attack is important, but knowing that you were attacked is
equally important. You need to audit any failed attempt to use the service, any security violation, and
any abnormal situation your service encounters. Auditing can also help you determine the identity of
the attacker.
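As an illustrative sketch of the allow-list approach described above, the following validator accepts only inputs that match a known-good pattern. The booking-code format (two uppercase letters followed by four digits) is a hypothetical example, not part of the course scenario.

```csharp
using System.Text.RegularExpressions;

public static class InputValidator
{
    // Hypothetical allow-list pattern: two uppercase letters followed by four digits.
    private static readonly Regex BookingCodePattern =
        new Regex(@"^[A-Z]{2}[0-9]{4}$", RegexOptions.Compiled);

    public static bool IsValidBookingCode(string input)
    {
        // Inputs that follow the pattern are accepted; everything else is rejected.
        return input != null && BookingCodePattern.IsMatch(input);
    }
}
```

A service operation would call IsValidBookingCode before using the value, and reject the request (for example, by returning a fault) when it returns false.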
 Additional Reading: For more information about software security, see:
http://go.microsoft.com/fwlink/?LinkID=298791&clcid=0x409.

Introduction to WCF Security

When you design a web service, you need to consider the various security aspects detailed in the
previous topic. Your web service should validate all input, authenticate and authorize all clients, define
and protect sensitive data, sanitize exceptions to prevent internal details about the application
implementation, such as call stacks, from leaking, and audit the application execution.

The WCF framework provides the infrastructure to address the following security needs of web services:
 Communication Confidentiality. You can encrypt all the information transmitted on a WCF channel.
Encryption prevents information disclosure attacks, such as a malicious user monitoring the packets
transmitted in the network and capturing the information.
 Communication Integrity. Information that is sent on the network must be digitally signed to ensure
integrity. When the information is signed, the receiver is assured that the information was not tampered
with and that it originated from a legitimate source. WCF digitally signs messages when you use
encrypted channels. You can also digitally sign messages with WCF without using encrypted channels by
configuring your service contracts. Digital signatures prevent tampering attacks such as man-in-the-middle
(MITM), where the attacker tampers with the information before it reaches the receiver.
Configuring the service contract to sign messages digitally without using an encrypted channel is not
covered in this module. For instructions on performing this configuration, see:

Understanding Protection Level.
http://go.microsoft.com/fwlink/?LinkID=298792&clcid=0x409

 Authentication. To prove the identity of a user, you can implement an authentication process. When
users authenticate, they use some sort of credential that identifies them uniquely. A credential is a
set of claims that states who the user is, usually their user name, together with some proof of possession
that identifies the user as the rightful owner of the identity, such as a password. WCF can use different
types of credentials to identify users, such as Windows identities and client certificates, and decide
whether they have proven their identity. Authentication is used to prevent unknown users from
accessing the service.
 Authorization. By implementing authorization, you can grant or deny access to specific resources.
Authenticated and unauthenticated clients can access resources according to a security policy.
WCF authorization supports the .NET role-based security model that uses the IIdentity and
IPrincipal interfaces, and the claims-based security model, which is discussed in depth in Module 11,
"Identity Management and Access Control" in Course 20487. Authorization is used to reduce privilege
attacks, in which unauthorized clients access privileged resources.
Other security aspects, such as input validation, auditing, and exception handling, are not handled
automatically by WCF, and you must address them yourself. WCF has no dedicated infrastructure for
input validation, auditing, or exception handling, and these concepts are therefore out of the scope of
this course. For more information on how to implement these concepts, see:

Security Design Guidelines for Web Services
http://go.microsoft.com/fwlink/?LinkID=298793&clcid=0x409

Lesson 2
Transport Security
One of the fundamental requirements for securing a service is the ability to create a secured channel. This
secured channel is used to convey the message from a client to the service and from the service to the
client, without letting network sniffers read and understand the content of the message. With WCF, you
can create secured channels in several ways to match different scenarios. Each method has its advantages
and disadvantages.

In this lesson, you will learn how to create a WCF service that uses transport security, in which the
transport layer itself, and not WCF, is responsible for securing the communication channel.

Lesson Objectives
After you complete this lesson, you will be able to:

 Describe WCF transport security.

 Configure a WCF service for transport security.

 Describe how to use transport security in clients.

 Consume a WCF service that is secured by transport security.

Introduction to Transport Security

One security risk that you need to guard against is someone stealing or changing sensitive data that is
sent to and from your service. One way to build an encrypted channel is to use an encrypting transport
protocol. If you use HTTP, you can encrypt your channel by using the Secure Sockets Layer (SSL)
transport security mechanism. This mechanism creates a secured, encrypted transport, HTTPS, that you
can use in your WCF service endpoint.

By using SSL, you can encrypt each message that is sent to and from the service within the specific
client connection. An SSL connection needs to be opened for each client. Because SSL is implemented at
a lower level of the network stack, you do not have to build your own security mechanism.

Note: HTTPS and SSL were introduced in Module 4, "Extending and Securing ASP.NET Web
API Services", in Course 20487. Because the operating system and the transport layer implement
SSL, the usage of SSL in WCF is similar to that in ASP.NET Web API.

If you use the Transmission Control Protocol (TCP) transport, you can also use a secured transport, SSL
over TCP, which uses the Transport Layer Security (TLS) mechanism. TLS is a security protocol that
evolved from SSL.
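As a configuration sketch, SSL over TCP is controlled through the binding's security element. NetTcpBinding already defaults to transport security, so the explicit settings below mainly document the intent; the binding name is illustrative.

```xml
<bindings>
  <netTcpBinding>
    <!-- Transport security over TCP uses TLS; Windows is the default credential type. -->
    <binding name="securedTcp">
      <security mode="Transport">
        <transport clientCredentialType="Windows" />
      </security>
    </binding>
  </netTcpBinding>
</bindings>
```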

Benefits and Challenges of Transport Security

Using transport security is simple, because you do not have to write special code to implement message
encryption and decryption. The main advantage of transport security is that you gain throughput and
performance, because most of the transport security handling is implemented at the operating system
level. You can also use SSL hardware accelerators that perform the SSL handshake at the hardware level,
which provides faster encryption than security implemented at the operating system level.

However, you should be aware of the following limitations of transport security:
 Point-to-point. Transport security implements a secured channel between two points, or two
machines. When a more complex topology is required, a secure channel might not be possible. For
example, consider the following topology: Service A calls Service B through a proxy, and Service B
sends the response directly to Service A. If Service A does not trust the proxy, a secure channel cannot
be established through the proxy.

 Exposed data. Because transport security uses point-to-point security, when the message reaches the
server, it is decrypted. In the previous example, Service A calls Service B through the proxy, so in fact
there are two secured channels: between Service A and the proxy, and between the proxy and Service
B. When the encrypted messages reach the proxy, they are decrypted, and then re-encrypted before
they are delivered to Service B. If an attacker gains access to the proxy, the attacker can retrieve the
decrypted data.

 Limited types of client credentials. When you use a secure transport, the service identifies itself to
the client to prevent phishing, in which a malicious service poses as another service. In addition, you
can configure the channel to require the client to authenticate. The client authenticates by using the
HTTP Authorization header, which supports most authentication credential types, such as Windows
and X.509, but does not support custom authentication credentials.

For more information about the use of HTTPS in WCF, see:

HTTPS Transport Security.
http://go.microsoft.com/fwlink/?LinkID=298794&clcid=0x409

Configuring a Service for Transport Security

Because a service can have many endpoints with different transport types, such as HTTP and TCP, the
transport security settings are applied at the endpoint level.

Not all bindings use transport security by default. For example, BasicHttpBinding and WSHttpBinding
do not use transport security by default, but NetTcpBinding and NetNamedPipeBinding do.

To set up an endpoint for transport security, you need to change its binding configuration. The
following example demonstrates changing the security mode of a BasicHttpBinding.

Configuring a Binding for Transport Security
<endpoint
address="https://localhost/booking"
binding="basicHttpBinding"
bindingConfiguration="secured"
contract="Samples.IBookingService" />

<binding name="secured">
  <security mode="Transport">
    <transport clientCredentialType="None"/>
  </security>
</binding>

To change the security settings of a binding, you need to add the <security> element under the
configuration of the binding. To change the security mode, you set the mode attribute to Transport
(other values are discussed later in this module). For HTTP transport security, you also need to change
the scheme of the address from http:// to https://. The scheme change applies only to HTTP-based
bindings. When you use transport security with TCP or named pipes, you can leave the address
unchanged.
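The same settings can also be applied in code when the host is created programmatically. The following sketch mirrors the configuration above; the BookingService implementation type is assumed from the surrounding example.

```csharp
using System.ServiceModel;

// Programmatic equivalent of the "secured" binding configuration shown above.
// BookingService is an assumed implementation of the IBookingService contract.
var binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.None;

var host = new ServiceHost(typeof(BookingService));
// Note the https:// scheme, which HTTP transport security requires.
host.AddServiceEndpoint(typeof(IBookingService), binding, "https://localhost/booking");
host.Open();
```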

In addition to setting the transport security, you can also set the credential types that the client needs to
transfer. In the previous example, the client credentials are set to None, which means the client does not
need to transfer any credentials. The following table describes the different client credentials types.

Authentication
Description
scheme

None This is anonymous authentication. Everyone can access the service.

Basic The client is required to send their user name and password in a Base64-
encoded string. The string is not encrypted, and therefore sending user names
and passwords by using this method is the same as sending them in clear text.
If you use this scheme, you will also need to set the realm attribute to the
name of the domain against which you are authenticating. Basic
authentication is common in Internet environments where clients and services
do not use a shared domain controller.

Digest The client is required to send an encrypted user name and password. This
setting requires a domain account, and the service needs to trust that domain.
When the service receives the user name and password, it sends them to the
domain controller for validation. If you use this scheme, you will also need to
set the realm attribute to the name of the domain against which you are
authenticating. Digest is rarely used today, because it requires the domain
controller to store reversible passwords, which is considered a security risk.

NTLM With NT LAN Manager (NTLM), also called Integrated Windows


Authentication, the client does not send its password, but rather uses the
password to encrypt the user name. The service sends the response to the
domain controller, which validates the response and returns the users security
information. Using NTLM is common in small intranet environments, where a
domain controller does not exist.

Windows With Windows authentication, the client uses the Kerberos security token that
it obtains from a Ticket-Granting Service (TGS). The token is sent to the
service, which in turn sends it to the TGS for validation. This technique has
been used since Windows Server 2003 to authenticate users, and is commonly
used on Microsoft networks. Using Windows identities is common in modern
intranet environments.

Certificate The client is required to send an X.509 certificate that the service can validate.
A trusted certificate authority (CA) must issue the certificate. Using certificates
to identify clients is common in business-to-business (B2B) environments.

InheritedFromHost When the service is hosted in Internet Information Services (IIS), this option
will instruct the service host to inherit the configuration of the client
credential types set for the web application in IIS. You can have multiple client
credential types for a web application in IIS. For example, you can configure
your web application to support both Basic authentication for Internet users
Developing Windows Azure and Web Services 14-9

Authentication
Description
scheme
and Windows authentication for intranet users. Instead of creating two
endpoints for your service, one for each authentication type, you can create a
single endpoint and configure the binding to inherit the client credential
types set in IIS.

Note: The X.509 certificate is a standard for packaging a public key together with metadata
about the token, the issuer, and its owner. SSL certificates for Internet websites are X.509
certificates. X.509 certificates are also used for client authentication and digitally signing
messages.

For more information about client credential types, see:

Selecting a Credential Type.
http://go.microsoft.com/fwlink/?LinkID=298795&clcid=0x409

Configuring the Service for SSL and TLS

To use transport security with either SSL over HTTP or TLS, you need to configure your service to use a
specific certificate that will be sent to clients requesting access to the service.

If you host your service under IIS, you need to configure IIS to register the certificate, as explained in
Lesson 3, "Implementing Security in ASP.NET Web API Services", of Module 4, "Extending and Securing
ASP.NET Web API Services", in Course 20487.

If your service is self-hosted within a Windows application or a Windows service, you need to specify
the certificate that you wish to use. For the HTTP transport with SSL, you can use the netsh utility, on
Windows Vista and newer versions, to attach an SSL certificate to a specific port.

The following command shows how to attach a certificate to port 8000 by using the netsh tool. The
command is split across lines for readability; enter it as a single line, replacing the line breaks with
spaces.

Registering a Certificate to Port 8000 by Using Netsh
netsh http add sslcert ipport=0.0.0.0:8000
certhash=0000000000003ed9cd0c315bbb6dc1c08da5e6
appid={00112233-4455-6677-8899-AABBCCDDEEFF}

The certhash value is the hex string representing the SHA hash of the certificate, also referred to as the
certificate's thumbprint, which you can find by looking at the certificate's information. The appid value is
the globally unique identifier (GUID) of your service host assembly, which you can find in the
AssemblyInfo.cs file.
If your self-hosted service uses a secure TCP transport, you cannot use the netsh utility. You will need to
set the certificate of the service in code or in configuration.

The following configuration shows how to set the service's certificate in the configuration file.

Configuring a Service Certificate
<behaviors>
<serviceBehaviors>
<behavior>
<serviceCredentials>
<serviceCertificate
storeLocation="LocalMachine"

storeName="My"
findValue="0000000000003ed9cd0c315bbb6dc1c08da5e6"
x509FindType="FindByThumbprint" />
</serviceCredentials>
</behavior>
</serviceBehaviors>
</behaviors>
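Setting the certificate in code is also possible, as mentioned above. The following is a minimal sketch; BookingService is an assumed service type, and the thumbprint is the same placeholder value used in the netsh example.

```csharp
using System.ServiceModel;
using System.Security.Cryptography.X509Certificates;

// Programmatic equivalent of the <serviceCertificate> configuration above.
// Replace the placeholder thumbprint with your certificate's actual thumbprint.
var host = new ServiceHost(typeof(BookingService));
host.Credentials.ServiceCertificate.SetCertificate(
    StoreLocation.LocalMachine,
    StoreName.My,
    X509FindType.FindByThumbprint,
    "0000000000003ed9cd0c315bbb6dc1c08da5e6");
host.Open();
```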

For more information on netsh, and on configuring ports for SSL on Windows Server 2003 and Windows
XP, see:

Configuring a Port with an SSL Certificate.
http://go.microsoft.com/fwlink/?LinkID=298796&clcid=0x409

For more information about transport security, see:

Transport Security.
http://go.microsoft.com/fwlink/?LinkID=298797&clcid=0x409

Using Transport Security in Clients

To consume a WCF service that uses transport security, configure a client proxy with the appropriate
binding. The address, binding, and contract on the client have to match the corresponding server
configuration. If the binding's security configuration does not match the one defined by the service, an
exception is thrown.

The following code example illustrates how to configure a proxy to a WCF service that uses transport
security with X.509 certificate client credentials.

Configure a Proxy with Transport Security
var securedBinding = new WSHttpBinding(SecurityMode.Transport);
securedBinding.Security.Transport.ClientCredentialType =
HttpClientCredentialType.Certificate;
var chf = new ChannelFactory<IBookingService>(securedBinding,
"https://serverName/booking");
// The client must specify a certificate trusted by the server
chf.Credentials.ClientCertificate.SetCertificate(
StoreLocation.CurrentUser, StoreName.My, X509FindType.FindBySubjectName,
"BlueYonderClientCert");
var proxy = chf.CreateChannel();

All the bindings that support transport security, such as BasicHttpBinding, NetTcpBinding, and
NetNamedPipeBinding, have a constructor that receives an enumeration for the security mode. If you
want to change the security mode of the binding dynamically, you can use the binding's Security.Mode
property.

After you set the security mode of the binding, you need to set the credential type by setting the
Security.Transport.ClientCredentialType property to the credential type that the service endpoint
requires.

Instead of manually configuring the proxy in code, you can use the Add Service Reference dialog box of
Visual Studio 2012. The generated configuration will contain the binding configuration with the transport
security settings and the required client credential type. The generated configuration is demonstrated in
the next code example.
The last step of the configuration is to set the client credentials, which will be sent to the service along
with the message. In the above example, a certificate is attached to the proxy by calling the
ClientCertificate.SetCertificate method. However, there are other types of credentials you can set, which
are described in the following table.

ClientCredentialType | Property to set | Description

Basic | UserName | Use the UserName.UserName and UserName.Password properties to configure the user name and password that will be sent to the service.

Certificate | ClientCertificate | Use the ClientCertificate.SetCertificate method to locate the client certificate in the certificate store. Instead of loading a certificate from the store, you can set the Certificate property to a System.Security.Cryptography.X509Certificates.X509Certificate2 object. For example, you can load a certificate from a .cer file and use it as the client's certificate.

Digest | HttpDigest | Set the HttpDigest.ClientCredential property to a System.Net.NetworkCredential object. The NetworkCredential object should be initialized with the domain, user name, and password of the user.

NTLM, Windows | Windows | By default, the current user's Windows identity is used as the client credentials. If you want to set the credentials to a different Windows identity manually, set the Windows.ClientCredential property to an instance of the NetworkCredential class.

Note: The preceding table contains the client credential types for HTTP. If you use
transport security with TCP, you can only use the Windows or Certificate client credential types.

Instead of configuring the client credentials in code, you can also set them in the configuration file, just
like any other client endpoint setting. However, as the previous table shows, all the credential types
other than Certificate require a user name and password. Storing a user name and password in
configuration is considered insecure, and therefore only the client's certificate can be set in the
configuration file.
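For example, Basic credentials must be supplied in code. The following sketch reuses the contract and address from the earlier examples; the user name and password values are placeholders.

```csharp
using System.ServiceModel;

// Sketch: supplying Basic (user name/password) credentials in code.
var binding = new WSHttpBinding(SecurityMode.Transport);
binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Basic;

var factory = new ChannelFactory<IBookingService>(binding, "https://serverName/booking");
// Placeholder credentials; in real code, obtain these securely at run time.
factory.Credentials.UserName.UserName = "user";
factory.Credentials.UserName.Password = "password";
var proxy = factory.CreateChannel();
```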

The following code example shows how to configure a client endpoint with transport security and
Certificate client credentials in the configuration file.

Configuring Client Endpoint for Transport Security in the Configuration File
<client>
<endpoint address="https://serverName/booking"
binding="wsHttpBinding"
bindingConfiguration="securedBinding"
behaviorConfiguration="certificateEndpointBehavior"
contract="Contracts.IBookingService"
name="bookingServiceEP"></endpoint>
</client>

<bindings>
<wsHttpBinding>
<binding name="securedBinding">
<security mode="Transport">
<transport clientCredentialType="Certificate"/>
</security>
</binding>
</wsHttpBinding>
</bindings>

<behaviors>
<endpointBehaviors>
<behavior name="certificateEndpointBehavior">
<clientCredentials>
<clientCertificate storeLocation="CurrentUser" storeName="My"
findValue="BlueYonderClientCert" x509FindType="FindBySubjectName" />
</clientCredentials>
</behavior>
</endpointBehaviors>
</behaviors>

Similar to how service behaviors configure how services work, endpoint behaviors configure how
endpoints work. In the above example, an endpoint behavior is created to configure the certificate that
the client uses when authenticating itself to the service.

To point to a specific certificate, you need to specify where the certificate is stored and how to find it.
The following table describes the attributes that you use to find the certificate:

Attribute | Description

storeLocation | LocalMachine or CurrentUser. Set this attribute according to the store where the certificate was installed.

storeName | The certificate store to open. Possible values are: AddressBook (certificates of other users), AuthRoot (certificates of third-party certification authorities), CertificateAuthority (certificates of intermediate CAs), Disallowed (revoked certificates), My (personal certificates; default value), Root (certificates of trusted root CAs), TrustedPeople (certificates of specifically trusted people), and TrustedPublisher (certificates of specifically trusted publishers).

x509FindType | Specifies which field of the certificate is searched for the matching value. The value specified in the findValue attribute must be an exact match to the value in that field. The possible values are: FindByThumbprint, FindBySubjectName, FindBySubjectDistinguishedName (default value), FindByIssuerName, FindByIssuerDistinguishedName, FindBySerialNumber, FindByTimeValid, FindByTimeNotYetValid, FindByTemplateName, FindByApplicationPolicy, FindByCertificatePolicy, FindByExtension, FindByKeyUsage, and FindBySubjectKeyIdentifier.

findValue | The string value to match.

For more information about client credential types, see:

Selecting a Client Credential.
http://go.microsoft.com/fwlink/?LinkID=298798&clcid=0x409

Secure transport protocols, such as HTTPS, provide a guarantee to the client that the server is legitimate.
The client does not necessarily have to authenticate when establishing a secured channel, but the server
must supply a certificate to prove its identity. If the client trusts the CA and the certificate is valid, it can
communicate with the server. After a secured channel is established, the client can send additional
credentials, which the server-side application uses to authenticate the client.

When the server uses a certificate that was not issued by a well-known CA, the client does not trust the
issuer of the certificate, and certificate validation fails. Clients can introduce custom certificate validation
logic, and thereby approve such a certificate before establishing and securing a channel. To attach
custom certificate validation logic, add a delegate to the ServerCertificateValidationCallback static
property of the System.Net.ServicePointManager class.

The following code demonstrates how to attach custom certificate validation logic in the client.

Attaching Custom Certificate Validation Logic in the Client
ServicePointManager.ServerCertificateValidationCallback = (sender, cert, chain, error) =>
true;

The above example overrides the default certificate validation process by accepting any service certificate.
This override makes your client susceptible to phishing attacks, and is therefore not recommended for
use in production environments. Usually, this override is used only in development environments, where
the services you connect to often use a self-signed certificate that cannot be validated by your clients.
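A safer development-time alternative is to accept only one specific, known certificate. The following sketch pins a certificate by its thumbprint (the value shown is a placeholder) and falls back to the standard validation result for all other certificates.

```csharp
using System;
using System.Net;
using System.Net.Security;
using System.Security.Cryptography.X509Certificates;

// Accept only the development certificate with a known thumbprint (placeholder
// value); all other certificates get the standard validation result.
const string devThumbprint = "0000000000003ed9cd0c315bbb6dc1c08da5e6";
ServicePointManager.ServerCertificateValidationCallback =
    (sender, cert, chain, errors) =>
    {
        if (errors == SslPolicyErrors.None)
            return true; // Standard validation succeeded.
        var cert2 = cert as X509Certificate2;
        return cert2 != null &&
               string.Equals(cert2.Thumbprint, devThumbprint,
                   StringComparison.OrdinalIgnoreCase);
    };
```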

Note: The ServicePointManager.ServerCertificateValidationCallback property can only
be used to override certificate validation with HTTPS. Overriding certificate validation for other
protocols is done through the configuration. This technique is shown in Lesson 3, "Message
Security".

Demonstration: Securing a WCF Service with Transport Security

In this demonstration, you will see how to use WCF transport security.

Demonstration Steps
1. From D:\Allfiles\Apx02\DemoFiles\TransportSecurity\setup, run the
CreateAndRegisterCert.cmd file.

2. From the D:\Allfiles\Apx02\DemoFiles\TransportSecurity\begin folder, open the
TransportSecurity solution.

3. Examine the content of the Service and Client projects.

4. Run the client and service to verify that they communicate successfully.

5. Change the configuration of the service to use transport security, and change the service address to
use HTTPS.
Add a binding configuration named Secured for basicHttpBinding, and set the security mode to
Transport.

<bindings>
<basicHttpBinding>
<binding name="Secured">
<security mode="Transport"/>
</binding>
</basicHttpBinding>
</bindings>

 Change the endpoint's address from http to https.

 Add the bindingConfiguration attribute to the endpoint configuration, and set its value to Secured.

6. Run the service, and update the client's service reference.

7. Examine the client's App.config file. Observe the changes made to the client configuration.

8. Run the client application, and view the exception that occurs because of the certificate validation
failure.

9. In the Client project, open the client.cs file, and then register a delegate to the
ServicePointManager.ServerCertificateValidationCallback property. Set the delegate to return
True.

Note: Refer to Topic 3, "Using Transport Security in Clients", in this Lesson for a code sample of
setting the ServerCertificateValidationCallback property.

10. Run the client application and verify that the application communicates with the service successfully.

Lesson 3
Message Security
There might be scenarios in which you want to use a secured channel but transport security does not
meet your requirements. For example, if you need to make sure that the message remains encrypted,
so it cannot be deciphered even if it passes through a proxy, you can use the WCF message security
mechanism, which provides end-to-end encryption of messages.

This lesson describes the differences between transport and message security, the scenarios where each
technique is suitable for use, and the procedures for applying message security to your WCF services.

Lesson Objectives
After you complete this lesson, you will be able to:

 Describe WCF message security.

 Configure a WCF service for message security.

 Explain how to use message security in clients.

 Describe how to implement transport security with message credentials.

 Consume a WCF service that is secured by using message security.

Introduction to Message Security

WCF's message security provides an end-to-end secure channel, which means the message is encrypted
on the client side and decrypted only when it reaches the service. If other servers (such as proxies)
receive the message, they will not be able to decrypt it. You can use message security when you need to
create a secure channel in a complicated topology where transport security is not an acceptable solution.

Message security supports complex topologies, such as content-based routing or service buses, in which
intermediaries forward messages to the appropriate endpoint. Message security provides end-to-end
channel security because it does not rely on the transport layer. Because message security manipulates
messages directly, it is secure regardless of how many intermediaries are involved before the message
reaches its final destination. The WCF stack implements confidentiality, integrity, and authentication by
encrypting the content, digitally signing it, and securely transferring client credentials to the server.

Unlike transport security in which the entire message is encrypted, message security can encrypt and sign
parts of the message. If only a part of the message is encrypted, intermediaries can open the message,
view the clear text parts, and take some action or make a decision accordingly. Because the signature is
part of the message, the final destination can still verify that the message was not tampered with. You can
use this feature for implementing a content-based router that can route a message according to the
action header value. By default, WCF does not encrypt the action header. However, it does sign it when
using message security. Therefore, the action header is available to all intermediaries, but no one can
tamper with its value.

WCF message security uses the Web Services (WS)-Security specification to secure messages. This
specification defines the structure of a secured SOAP message. WCF message security also uses WS-Trust,
which specifies how to establish a secured channel, and how to exchange security tokens and credentials.

For more information about SOAP messages, security tokens, and credentials, see:
Understanding WS-Security.
http://go.microsoft.com/fwlink/?LinkID=298799&clcid=0x409

Message security supports all the client credential types supported by transport security. In addition to the
well-known credential types, it also supports the Security Assertion Markup Language (SAML) token and
custom tokens. The WS-Security specification, on which message security is based, provides an extensible
framework that can transmit any kind of token inside the SOAP message. Similar to transport security,
message security supports a set of easy-to-use built-in authentication providers. But unlike transport
security, it also provides the ability to add custom validation of credentials, where your custom code
performs authentication.

Although there are many benefits of using message security, you should be aware that there are some
disadvantages to this solution:
Because message security is implemented in WCF, which is the software layer, and not in the
operating system or in hardware, it has a higher performance cost than transport security.

The security negotiation that is required to create the secure channel involves sending large messages
through the channel, so there might be a noticeable delay when opening such a channel.

Secured messages are larger than normal messages, which might result in a longer transmission time.

For more information about message security in WCF, see:

Message Security in WCF.
http://go.microsoft.com/fwlink/?LinkID=298800&clcid=0x409

Question: How is message security different from transport security?

Configuring a Service for Message Security


To configure an endpoint to use message security,
you need to change the binding configuration of
the endpoint. If you also require the client to pass
their credentials, you need to configure that too.

The following code demonstrates how to configure a service binding with message security.

Configure a Service for Message Security


var securedBinding = new WSHttpBinding();
securedBinding.Security.Mode = SecurityMode.Message;
securedBinding.Security.Message.ClientCredentialType = MessageCredentialType.Certificate;
host.AddServiceEndpoint(typeof(IBookingService), securedBinding, "http://localhost/booking");
Developing Windows Azure and Web Services 14-17

In the previous example, the security mode was set to Message, and the client's credential type was set to
Certificate. The authentication types in message security are different from the ones for transport
security. The following table lists the possible authentication schemes:

Authentication scheme Description

None Anonymous authentication. Everyone can access the service.

Windows Uses the client's Windows credentials to secure the SOAP message.

UserName Requires the client to transfer a user name and password. WCF requires that the user
name and password are transferred in plain text. When using this scheme, WCF
automatically uses transport security.

Certificate Requires the client to send an X.509 certificate that the service can validate.

IssuedToken Requires the client to transfer a custom security token, such as a SAML token.

Note: In the previous example, you used the MessageCredentialType enum, which has
different values than the HttpClientCredentialType enum, which was used in the previous
lesson for transport security.

The same binding configuration can also be applied through the configuration file.

The following configuration demonstrates how to configure a binding for message security with certificate
client credentials.

Configuring a Binding for Message Security in the Configuration File.


<endpoint
address="http://localhost/booking"
binding="wsHttpBinding"
bindingConfiguration="secured"
contract="Samples.IBookingService" />

<wsHttpBinding>
<binding name="secured">
<security mode="Message">
<message clientCredentialType="Certificate"/>
</security>
</binding>
</wsHttpBinding>

Message security does not depend on transport security, and therefore you do not need to change the
address scheme from http:// to https://, as shown in the above example.

Note: The default binding configuration for WSHttpBinding is to use message security
with Windows client credentials. Therefore, in the previous XML binding configuration, omitting
the mode attribute would have the same effect. For other bindings, such as NetTcpBinding, you
will need to configure the mode attribute to Message explicitly.

When you use message security, you need to specify the certificate that the service will use for
authentication. (You cannot use IIS or netsh to register certificates for message security.) To do this, you
need to add the service credentials behavior to your service.
14-18 Appendix B: Implementing Security in WCF Services

The following code demonstrates how to configure a service certificate.

Configuring a Service Certificate


<serviceBehaviors>
<behavior>
<serviceCredentials>
<serviceCertificate findValue="BlueYonderService" storeLocation="LocalMachine"
storeName="My" x509FindType="FindBySubjectName" />
</serviceCredentials>
</behavior>
</serviceBehaviors>

X.509 certificates are a standard for packaging a public key together with metadata about the token, the
issuer, and its owner. An X.509 certificate is considered valid if the issuer of the certificate is trusted and
the digital signature is valid. The digital signature guarantees that the certificate was produced by its
issuer and was not tampered with.

If your client and service use the same Active Directory domain controller, you do not need to set the
service certificate. This is because the client can use the service's Windows identity (the identity that is
attached to the process that runs the service host) to identify the service and to secure the channel. To
create a secured channel with the service's Windows identity, make sure both client and service are on the
same network and are using the same domain, and set the client credential type to Windows.
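
If the client and the service run in the same domain, you can rely on the service's Windows identity instead of a certificate. The following configuration is a minimal sketch of such a binding (the binding name is illustrative); it mirrors the default behavior of wsHttpBinding, which uses message security with Windows client credentials.

```xml
<!-- Message security with Windows client credentials; the secured channel
     is built from the service's Windows identity, so no service
     certificate is required. -->
<wsHttpBinding>
  <binding name="windowsSecured">
    <security mode="Message">
      <message clientCredentialType="Windows"/>
    </security>
  </binding>
</wsHttpBinding>
```
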
The secured channel negotiation process of message security is very similar to that of SSL: The client uses
the certificate presented by the server to authenticate the server and to establish and secure a channel
with it. The client reads the server certificate's public key, then uses the public key to encrypt a random
number (the symmetric key), and then sends it to the server. The difference between this process and SSL
is that with message security, the negotiation is made by WCF, rather than by the operating system.

Best Practice: You can place the service credential configuration in configuration or in
code. However, because certificates are renewed over time and their information might change,
it is a best practice to set credential information in configuration, and not in code.

In the example shown at the beginning of this topic, the client credential type was set to Certificate.
Unlike with transport security, where the operating system validates the client certificate, in message
security, WCF is responsible for validating the client certificate. By default, a service will validate the
certificate if the server can find the issuer in a trust chain that leads to a root CA. If you know that a
trusted CA did not issue your client's certificate - for example, if you are working in a development
environment where your client's certificates are self-issued - then you can change the way your service
validates certificates.

The following code demonstrates how to change the client certificate validation mode.

Configuring the Client Certificate Validation Mode.


<serviceBehaviors>
<behavior>
<serviceCredentials>
<serviceCertificate findValue="BlueYonderService" storeLocation="LocalMachine"
storeName="My" x509FindType="FindBySubjectName" />
<clientCertificate>
<authentication certificateValidationMode="PeerOrChainTrust" />
</clientCertificate>
</serviceCredentials>
</behavior>
</serviceBehaviors>
Developing Windows Azure and Web Services 14-19

In the preceding example, the certificateValidationMode attribute was set to PeerOrChainTrust. The
various types of validation modes are described in the following table:

Value Description

None The certificate is not validated. Every certificate is acceptable. This setting is not
recommended.

PeerTrust The client's certificate must exist in the trusted people store.

ChainTrust The chain from the certificate's issuer must lead to a root CA.

PeerOrChainTrust Either the certificate is located in the trusted people store, or its issuer is part of a
root CA trust chain.

Custom Indicates that a custom X509CertificateValidator class should be used to validate
the certificate. If you use the Custom mode, you also need to set the
customCertificateValidatorType attribute to the type of your custom validator
class.

Note: Authenticating client certificates with a custom validation class will be explained in
Lesson 4, Configuring Service Authentication and Authorization.

The service behavior must also include the configuration of authentication and authorization handlers,
which will be explained in Lesson 4, Configuring Service Authentication and Authorization.

Using Message Security in Clients


To consume a WCF service that uses message
security, configure the client proxy with the
appropriate binding. The binding configuration
has to match the binding on the server. You can
configure the client binding in code or in
configuration.

The following code demonstrates how to configure a proxy to a WCF service that uses message security
and username-password credentials.

Configuring a Proxy to a WCF Service With Message Security


var securedBinding = new WSHttpBinding(SecurityMode.Message);
securedBinding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;
var chf = new ChannelFactory<IBookingService>(securedBinding,
"http://serverName/booking");
chf.Credentials.UserName.UserName = "myUserName";
chf.Credentials.UserName.Password = "myPassword";
var prox = chf.CreateChannel();

Similar to the code demonstrated in the topic Using Transport Security in Clients in Lesson 2, Transport
Security, configuring the client for message security requires setting the security mode to Message,
setting the client credential type according to the type required by the service, and lastly, configuring the
client credentials, which will be used to authenticate the client. In the above example, the binding is
configured to use the UserName credential type. Other credential types are detailed in the mentioned
topic.

Instead of configuring the proxy for message security manually, you can use the Add Service Reference
dialog box in Visual Studio 2012. The generated client configuration will contain the appropriate message
security settings, including the required client credential type. If you use the Add Service Reference
dialog box, and you configured your service behavior with a service certificate (instead of using the
service's Windows identity to secure the channel), then the generated client-side configuration will also
contain the information about the service's certificate. The client will use this information to compare the
certificate in the configuration to the certificate sent by the service upon opening a channel.

The following configuration shows a client endpoint configuration that contains the service certificate
information (the encoded value was shortened for brevity).

A Client Endpoint Configuration with Service Certificate Information


<client>
<endpoint address="http://localhost:8080/booking/"
binding="wsHttpBinding"
bindingConfiguration="WSHttpBinding_IBookingContract"
contract="BookingService.IBookingContract"
name="WSHttpBinding_IBookingContract">
<identity>
<certificate encodedValue=
"AwAAAAEAAAAUAAAA7YeDPqrps9YUN/5iBzGpI2fKYRwgAAAAAQAAANQBAAAwggHQMIIBeqADAgECAhDvO+Au9Wg/
tEfoLK0smo6XMA0GCSqGSIb3DQEBBAUAMBYxFDASBgNVBAMTC1Jvb3QgQWdlbmN5MB0yhqS9Jtcz/HQjdGg855sHb
Iy1cMLQVo6BdAiqfjEHpbyzLIFAdRvad6l9gHufN2jQ0=" />
</identity>
</endpoint>
</client>

The Base64 string in the above example is not the certificate's thumbprint - it is the service certificate,
without its private key, encoded as a Base64 string. At run time, the client decodes the string, loads it
into a certificate object, and then compares this certificate with the certificate sent by the service.

The generated configuration, however, will not include information about which certificate the client
needs to use - it is up to you to provide this information.

Note: Refer to the Topic Using Transport Security in Clients in Lesson 2, Transport
Security, for an example of setting the client certificate in code and in configuration.

To guarantee that the server is legitimate, the client has to validate the server's certificate. Unlike with
transport security, WCF handles the credential validation process, and you can control it in the endpoint
behavior of the client.

The following configuration demonstrates how to configure server certificate validation in the
configuration file.

Configuring the Server Certificate Validation Mode


<behaviors>
<endpointBehaviors>
<behavior name="securedClient">
<clientCredentials>
<serviceCertificate>
<authentication certificateValidationMode="PeerOrChainTrust"/>
</serviceCertificate>
</clientCredentials>
</behavior>
</endpointBehaviors>
</behaviors>

In the above example, the certificate validation mode is set to PeerOrChainTrust, which means that the
client will try to validate the service certificate by first checking if it has the same certificate in the Trusted
People certificate store. If the certificate is not in the certificate store, the client will try to verify its chain
trust (the CA that issued the certificate). For the client to locate the service certificate in the Trusted
People store, you need to receive the service certificate out-of-band as a .cer file (by email or other
means), and install it on the client computer, in the Trusted People certificate store.

Note: Other certificate validation modes are: None, PeerTrust, ChainTrust (default), and
Custom. Refer to the previous topic for more details on each of the other validation modes.

If the service does not use a service certificate, and instead uses its Windows identity to authenticate itself
and to create the secured channel, the server's identity needs to be specified in the client configuration
file. If you use the Add Service Reference dialog box, the generated configuration will be created with
the identity of the service. If you manually create the configuration, you will need to specify either the
User Principal Name (UPN) or the Service Principal Name (SPN) that identifies the service, according to
the type of the Windows account the service uses:

UPN. Used when the service uses a non-system user, such as a domain user.

SPN. Used when the service uses a system user, such as NetworkService or LocalSystem.

For more information about service identities, see:

Service Identity and Authentication.


http://go.microsoft.com/fwlink/?LinkID=298801&clcid=0x409
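
When you create the client configuration manually, the identity is specified with an <identity> element inside the client endpoint. The following fragment is a sketch showing both options; the address, contract, and account names are illustrative.

```xml
<endpoint address="http://serverName/booking"
          binding="wsHttpBinding"
          contract="BookingService.IBookingContract">
  <identity>
    <!-- SPN: the service runs under a system account, such as NetworkService -->
    <servicePrincipalName value="host/serverName" />
    <!-- UPN: use instead when the service runs under a domain user account:
    <userPrincipalName value="serviceAccount@domain.com" /> -->
  </identity>
</endpoint>
```
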

Establishing and securing a channel is an expensive process that involves exchange of several messages.
You can explore these messages by looking at the WCF message log.

Note: WCF Message logging is explained in depth in Module 10, Monitoring and
Diagnostics, in Course 20487.

The following figure displays the message flow of a simple service secured by message security.

FIGURE 14.1: MESSAGE SECURITY MESSAGING FLOW


The different messages shown in the above figure are part of the WS-Trust specification. The first pair of
messages, a request for security token (RST) and a request for security token response (RSTR), is used to
negotiate the service credentials. The second pair carries the client credentials to the service, and the
response contains the security token for encrypting messages. The third pair of RST/SCT and RSTR/SCT
messages exchanges the security context token (SCT) between the service and the client to create the
secured session.

When you send the first request to the service, WCF establishes the secured channel by negotiating with
the service and exchanging the service and client credentials. After the secured context is established,
WCF starts sending requests through the secured channel. After the channel is established and secured,
there is no need to execute the security protocol for each request.
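
On WSHttpBinding, this behavior is controlled by the EstablishSecurityContext property of the message security settings. The following sketch shows how you could disable the secure session so that no security context token is negotiated; this removes the session setup messages, but forces a full negotiation for every request, so it is mostly useful for clients that send only one or two messages.

```csharp
var binding = new WSHttpBinding(SecurityMode.Message);
binding.Security.Message.ClientCredentialType = MessageCredentialType.Certificate;
// When true (the default), WCF negotiates a security context token (SCT)
// once and reuses it for subsequent messages. When false, each request
// carries out the full key exchange.
binding.Security.Message.EstablishSecurityContext = false;
```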

Transport Security with Message Credentials


WCF offers another security mechanism that
incorporates both transport security and message
security to create a better secured channel.

The TransportWithMessageCredential security mode uses transport security, which offers better
performance and secured transport of messages, and the message-layer security mechanism, which
supports various credential types that you cannot use with the standard transport security mechanism.

When you use transport with message credentials over HTTP, WCF will use HTTPS. Over TCP, WCF will
use TLS, similar to the transport security mechanism.
When you use transport with message credentials, you need to apply the same rules as when using the
transport security mechanism with SSL certificates. If the service is hosted under IIS, IIS will manage the
certificate for you. If the service is self-hosted, you will need to use the netsh utility if the service uses an
HTTP endpoint, or specify the certificate yourself in configuration or in code, if the service endpoint uses
TCP.

The following configuration demonstrates how to configure your binding to use the
TransportWithMessageCredential security mode.

Configuring a Binding for Transport with Message Credentials


<wsHttpBinding>
<binding>
<security mode="TransportWithMessageCredential" >
<message clientCredentialType="UserName" />
</security>
</binding>
</wsHttpBinding>

In the above example, after setting the security mode attribute to TransportWithMessageCredential,
you configure the client credentials by adding the <message> element and setting the
clientCredentialType attribute to any of the message security client credential types, such as UserName,
Windows, custom tokens, or SAML.
For more information on configuring various bindings for the TransportWithMessageCredentials
security mode, see:

Use Transport Security and Message Credentials


http://go.microsoft.com/fwlink/?LinkID=298802&clcid=0x409
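
The same mode can also be configured in code on the binding. The following sketch (the endpoint address is illustrative) creates a client proxy that sends UserName credentials at the message layer over an HTTPS transport.

```csharp
var binding = new WSHttpBinding(SecurityMode.TransportWithMessageCredential);
binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;
// The transport layer requires HTTPS, so the address scheme must be https://
var factory = new ChannelFactory<IBookingService>(binding,
    "https://serverName/booking");
factory.Credentials.UserName.UserName = "myUserName";
factory.Credentials.UserName.Password = "myPassword";
var proxy = factory.CreateChannel();
```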

Demonstration: Securing a WCF Service with Message Security


This demonstration shows how to use WCF message security.

Demonstration Steps
1. From D:\Allfiles\Apx02\DemoFiles\MessageSecurity\setup, run the CreateCert.cmd file.

2. From the D:\Allfiles\Apx02\DemoFiles\MessageSecurity\begin folder, open the MessageSecurity
solution.

3. Explore the content of the Service project and Client project.

4. Run the client and service to see that they work.

5. Change the binding configuration of the service endpoint to use Message security instead of None.

6. Add a <serviceCredentials> element with a <serviceCertificate> element, and set the service
certificate to use the DemoCert certificate from the My store in the localMachine store location. The
resulting configuration should resemble the following code.

<serviceCredentials>
<serviceCertificate findValue="DemoCert" storeLocation="LocalMachine"
storeName="My" x509FindType="FindBySubjectName"/>
</serviceCredentials>

7. Use the WCF Service Configuration Editor tool to configure the service for message logging.
Configure the log to include the entire message.

In the Diagnostics configuration, enable Message Logging.


Expand the Diagnostics configuration node, and in the Message Logging configuration, set
LogEntireMessage to True.

8. Run the service, and update the client's service reference.

9. Open the client's App.config file and examine the newly generated configuration.

10. Run the client application and verify that the client can communicate with the service.

11. Open the message log file, and view the negotiation messages and the encrypted message.

Lesson 4
Configuring Service Authentication and Authorization
After defining the service endpoints with the security mode and the credential type, you should address
authentication and authorization of the client's identity. WCF contains several built-in providers and
extensibility points to customize these aspects of your service for your needs.

This lesson describes the WCF authentication and authorization models, the built-in providers and
extensibility points, and explains how you can perform authorization enforcement at run time.

Lesson Objectives
After you complete this lesson, you will be able to:

Authenticate clients.

Create a custom credential validator.

Access identity information.

Authorize clients.

Authenticating Clients
The security infrastructure provided by transport
and message security establishes and helps secure
a channel where the client can transfer credentials
to the server. After receiving the credentials, the
server has to execute some logic to identify the
calling client. This is the authentication logic.
When writing secured services, authenticating
clients is essential. If the calling client is
authenticated successfully, an identity object is
created and attached to the security context.

You can implement authentication by attaching custom credential validators or benefit from the
WCF integration with well-known identity infrastructures such as ASP.NET Membership and Active
Directory Domain Services (AD DS). For example, a WCF service can use AD DS to authenticate the client
credentials and retrieve its Windows identity. The service can also use ASP.NET Membership, which verifies
credentials in an SQL database.

Note: The authentication process only deals with determining the identity of the client.
Authorization, which is the process of determining what the user is allowed to do, will be
explained in the next topic.

How WCF authenticates the credentials of the client depends on the type of credentials the client
provides and the type of security mode the client is using. When the client and the service use Transport
security, the transport layer is responsible for authenticating the client's credentials. For example, if the
client and service are configured for Transport security with the Basic client credential type (sending
username and password as a Base64 string), then the transport layer will validate the username and
password against the domain's Active Directory. The only exception for this behavior is when the client
credential type is Certificate. If the client sends a certificate, then WCF is responsible for validating the
certificate.

If your client and service use Message security, then WCF is responsible for the validation of the client's
credentials. The validation process depends on the type of the client credential used, as detailed in the
following table:

Client credential type Credential authentication process

Windows Windows identities are tokens that the client receives from its Active Directory and
sends to the service as a means of identification. WCF can authenticate Windows
tokens by verifying them against Active Directory.

UserName Clients can send a set of username and password as their identity. By default, WCF
will validate the username and password against the domain's AD DS. You can
configure WCF to validate the username and password against an ASP.NET
Membership provider, or provide your own username/password validator logic.

Certificate WCF has several ways to authenticate the client's X.509 certificate. For example, the
default behavior of WCF is to validate the certificate by checking its expiration and its
chain of issuers. However, you can customize the validation process and even replace
it with a custom validation process. Refer to the Topic Configuring a Service for
Message Security in the previous lesson for a code example of changing the client
certificate validation mode.

IssuedToken When a client sends a token issued to it by a Security Token Service (STS), WCF can
validate the token according to prior information it received from the STS provider,
such as its name and digital signature key. Issued tokens, claims, and STS
authentication are covered in Module 11, "Identity Management and Access Control"
in Course 20487.

Note: If you want to customize the credentials validation process, but still be able to use
Transport security, consider using the TransportWithMessageCredential security mode, which
was explained in the previous lesson.

ASP.NET has a mature infrastructure for identity management, which is based on username and
password credentials. WCF has full support for integrating services with the ASP.NET Membership
infrastructure.

The following configuration demonstrates how to change the username/password validation from using
Active Directory to ASP.NET Membership.

Configuring Username/Password Validation with ASP.NET Membership


<bindings>
<wsHttpBinding>
<binding>
<security mode="Message">
<message clientCredentialType="UserName"/>
</security>
</binding>
</wsHttpBinding>
</bindings>

<serviceBehaviors>
<behavior>
<serviceCredentials>
<userNameAuthentication userNamePasswordValidationMode="MembershipProvider" />
</serviceCredentials>
</behavior>
</serviceBehaviors>

Changing the default value of the userNamePasswordValidationMode attribute from Windows to
MembershipProvider will make WCF validate the username and password provided by the client against
the ASP.NET Membership provider. You can also change the default configuration of the Membership
provider to control which SQL Server database will be used by WCF, as well as other identity-related
settings, such as whether users can be disabled. For instructions on how to configure the membership
provider, see:

Using the ASP.NET Membership Provider.


http://go.microsoft.com/fwlink/?LinkID=298803&clcid=0x409

Creating a Custom Credential Validator


If you prefer to validate the username/password credentials on your own, you can create a custom
validator class by deriving from the System.IdentityModel.Selectors.UserNamePasswordValidator
abstract class, and registering the custom validator in the service behavior configuration.

The following code is an example of a custom username/password validator class.

Custom Username/Password Validator


public class MyCustomUserNameValidator : UserNamePasswordValidator
{
    public override void Validate(string username, string password)
    {
        if (username == null || password == null)
        {
            throw new ArgumentNullException();
        }

        if (!(username == "user1" && password == "Pa$$w0rd"))
        {
            throw new SecurityTokenException("Unknown Username or Incorrect Password");
        }
    }
}

By implementing the Validate method, you can introduce your own validation logic, such as validating
the credentials against a list of users and passwords stored in your own database. The Validate method
does not have a return value; if the method completes its execution successfully, WCF will assume the
validation was successful. If the custom validation reveals a problem with the credentials, for example, if
the username does not appear in your users list, or the password does not match the user's password, you
should throw a SecurityTokenException. Throwing a security token exception will result in a security
exception on the client side. If you want to provide extra information about the validation failure, such as
a list of possible failure reasons and links where the user can reset their password, you can replace the
SecurityTokenException object with a FaultException or a FaultException<T> object.

Best Practice: As a best practice, avoid providing too much information about the failed
validation. For example, if a hacker tries to guess the username and password, letting them know
that the username is valid but the password is wrong will advance the hacker one step towards
finding the correct credentials.

To complete the process, modify the service configuration. Change the validation mode to Custom and
specify the newly created validator type.

The following configuration demonstrates how to attach a custom username/password validator by using
the service configuration file.

Attaching a Custom Username/Password Validator


<serviceBehaviors>
<behavior>
<serviceCredentials>
<userNameAuthentication userNamePasswordValidationMode="Custom"
customUserNamePasswordValidatorType="Namespace.ValidatorType, AssemblyName"
/>
</serviceCredentials>
</behavior>
</serviceBehaviors>

Make sure you set the customUserNamePasswordValidatorType attribute to the fully qualified name of
your custom validator class, including the fully qualified name of its containing assembly.
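
You can also attach the custom validator in code through the service host's credentials, similar to the certificate validator example shown later in this topic. The following sketch assumes a host variable of type ServiceHost and the MyCustomUserNameValidator class shown above.

```csharp
// Switch username/password validation to the custom validator.
host.Credentials.UserNameAuthentication.UserNamePasswordValidationMode =
    UserNamePasswordValidationMode.Custom;
host.Credentials.UserNameAuthentication.CustomUserNamePasswordValidator =
    new MyCustomUserNameValidator();
```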

If the service endpoint is configured to use the Certificate client credentials, you can create a custom
X.509 certificate validator by creating a class that derives from the
System.IdentityModel.Selectors.X509CertificateValidator class and implementing the Validate method.

The following code is an example of a custom X.509 certificate validator.

Creating a Customized X.509 Certificate Validator


public class MyCustomCertificateValidator : X509CertificateValidator
{
public override void Validate(X509Certificate2 certificate)
{
if (certificate.Subject != "CN=localhost")
{
throw new SecurityTokenException("wrong certificate");
}
}
}

Like the UserNamePasswordValidator abstract class, the X509CertificateValidator abstract class also
exposes a Validate method, which should either complete successfully or throw a
SecurityTokenException. The Validate method receives an X509Certificate2 parameter that provides
access to the certificate information, including its subject name, expiration date, and issuer information.

To attach the custom validator to your service, you can use either code or configuration.

The following code demonstrates how to attach a custom certificate validator through code.

Attaching a Custom Certificate Validator


host.Credentials.ClientCertificate.Authentication.CertificateValidationMode =
X509CertificateValidationMode.Custom;
host.Credentials.ClientCertificate.Authentication.CustomCertificateValidator =
new MyCustomCertificateValidator();

If you want to attach the custom validator by using the configuration file, add the <clientCertificate>
element to your service behavior. Add an <authentication> element to it, set the
certificateValidationMode attribute to Custom, and set the customCertificateValidatorType attribute
to the fully qualified name of the class and its containing assembly.

For an example of configuring a custom certificate validator in both code and configuration, see:

Create a Custom Certificate Validator.


http://go.microsoft.com/fwlink/?LinkID=298804&clcid=0x409

Accessing Identity Information


After the client has been authenticated, its identity information is attached to the security context so
that you can access it from your code for auditing purposes. To access the client's identity information,
use the System.ServiceModel.ServiceSecurityContext.Current static property, which returns a
ServiceSecurityContext object. You can use the PrimaryIdentity property of the class to retrieve the
identity information, such as the type and name of the identity.

The following code demonstrates how to access the user's identity from the service operation code.

Accessing the User's Identity Information


System.Security.Principal.IIdentity identity =
ServiceSecurityContext.Current.PrimaryIdentity;
string authenticationType = identity.AuthenticationType;
string name = identity.Name;
Logger.Log("Operation invoked by {0}. Authentication type: {1}", name,
authenticationType);

The PrimaryIdentity property returns an object that implements the IIdentity interface. The final type of
the object depends on the client credential type and the way the WCF service validates these credentials.
For example, if the client provides a username and a password, and WCF validates those credentials
against AD DS, WCF will create a WindowsIdentity object. If the validation process is against the ASP.NET
Membership provider, the created identity will be a GenericIdentity object.
The AuthenticationType property contains a string representing the name of the authentication
mechanism. The Name property contains a string representing the identity name, which could be a
user name or the name of the client's certificate, depending on the type of the client credentials. For
example, for the ASP.NET Membership provider, the AuthenticationType property will be set to
MembershipProviderValidator, and the Name property will be set to the username presented by the
client's credentials.
14-30 Appendix B: Implementing Security in WCF Services

Authorizing Clients
Even though the server can identify the client, that does not mean the client can execute every
service operation. The authorization process grants or denies access to a specific resource or action
according to the client's identity. Only after both authentication and authorization succeed does WCF
forward the request to the service operation.

The information WCF requires to authorize the client may or may not be part of the credentials
provided by the client. For example, if the client identifies with a Windows identity, WCF needs to
retrieve the identity's security groups from Active Directory. If the credentials used by the client were
validated against the ASP.NET Membership database, WCF can try to retrieve the roles of the
identity by calling the ASP.NET Role provider.

Active Directory and ASP.NET Membership are two stores that support role-based identities. Roles define a group of
users with a common set of permissions. For example, all the salespeople in a system will have the Sales
role. In code, you will permit all those who have the Sales role to invoke purchase-related service
operations, but you will prevent them from invoking salary-related operations. The Managers role, to
which several other identities will belong, will have permissions to invoke both purchase-related and
salary-related operations. To authorize an identity, you first need to retrieve its list of roles. For example,
AD DS stores role information, called groups, for each Windows identity. ASP.NET also supports role
storage with the ASP.NET Role provider, whose default store is a SQL Server database.

The following example shows how you can configure the service behavior to use the built-in ASP.NET
Membership and Role providers for client authentication and authorization.

Configuring the Service for Authorization with the ASP.NET Role Provider
<serviceBehaviors>
<behavior name="SecureCalcBehavior">
<serviceAuthorization principalPermissionMode="UseAspNetRoles" />
<serviceCredentials>
<userNameAuthentication
userNamePasswordValidationMode="MembershipProvider" />
</serviceCredentials>
</behavior>
</serviceBehaviors>

By adding the <serviceAuthorization> element to your service behavior, you can configure which
authorization technique you want to use. The default authorization mechanism used by WCF is Active
Directory Windows groups, but you can set the principalPermissionMode attribute either to
UseAspNetRoles, to use the ASP.NET Role provider, or to Custom, if you want to create your own
authorization manager.

Note: It is possible to mix authentication and authorization stores. For example, WCF can
authenticate the client credentials against Active Directory and retrieve the list of roles for the
Windows identity from the ASP.NET Role provider.

Creating a custom authorization manager is outside the scope of this course. For an example of how to
create a custom authorization manager, see:

Creating a Custom Authorization Manager

http://go.microsoft.com/fwlink/?LinkID=298805&clcid=0x409

Role-based identities are one type of identity that you can use to define permissions in a WCF service.
Another type is claims-based identities. With claims-based identities, the credentials provided
by the client contain, in addition to the identity information, sets of claims you can use to authorize the
client, so all the information required to authenticate and authorize the client is included in the
credentials the client sends to the service. Claims-based identities are discussed in
Module 11, "Identity Management and Access Control" in Course 20487. However, claims-based
authorization is outside the scope of this course. The remainder of this topic focuses on role-based
identities and role-based authorization.

Role-based security in the .NET Framework is based on the IIdentity and IPrincipal interfaces. The
IIdentity implementation contains information about the client's identity, and the IPrincipal
implementation contains information about the client's roles along with functionality for checking if the
client's identity has a certain role.

For example, in cases where you authenticate users by using Windows credentials, WCF creates the
corresponding WindowsIdentity and WindowsPrincipal objects, and attaches them to the thread's
security context.
Because the identity and principal are available throughout the entire lifetime of your service, you can use
their implementation to authorize the client against specific role requirements. In addition, you can use
the common .NET role-based security model through the PrincipalPermission class imperatively or
declaratively.
To use the PrincipalPermission class declaratively, decorate your service operation implementation with the
[PrincipalPermission] attribute, and specify the roles against which you want to authorize the identity.
The following code demonstrates how to configure a service operation with the [PrincipalPermission]
attribute, which requires that the calling user is a member of the Sales role.

Granting Access by Using the [PrincipalPermission] Attribute


[PrincipalPermission(SecurityAction.Demand, Role = "Sales")]
public string CreateReservation(Reservation reservation)
{
// Create the reservation and return its confirmation code

}

If the Sales role is not one of the identity's roles, a SecurityException will be thrown, and an Access
Denied fault message will be sent back to the client. The WCF client will turn this fault message into a
SecurityAccessDeniedException managed exception.

You can use the SecurityAction enum to require that the identity belong to a role, and you can also use
it to deny access from specific roles.

The following code demonstrates how to configure a service operation with the [PrincipalPermission]
attribute, denying access from the Intern role.

Denying Access by Using the [PrincipalPermission] Attribute


[PrincipalPermission(SecurityAction.Deny, Role = "Intern")]
public string CreateReservation(Reservation reservation)
{
// Create the reservation and return its confirmation code
}

You can also use the PrincipalPermission class imperatively by creating an instance of the
PrincipalPermission type, and then specifying which roles to check.

The following code demonstrates how to use the PrincipalPermission class in code.

Verifying Permission in Code


public string CreateReservation(Reservation reservation)
{
if (reservation.Price > 50000)
{
PrincipalPermission rolePermission = new PrincipalPermission(name:null,
role:"Manager");
rolePermission.Demand();
}
// Create the reservation and return its confirmation code

}

The [PrincipalPermission] attribute cannot be used in the preceding example, because the permission
check depends on the price of the reservation. That is why you need to use imperative authorization. The
call to the Demand method verifies that the user has the Manager role. If the demand fails, a
SecurityException is thrown.
You can use both the [PrincipalPermission] attribute and the PrincipalPermission class to authorize
individual users in addition to roles, by setting the name parameter.
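
For example, the following sketch demands that the caller is a specific user rather than a member of a role. The user name User1 is illustrative.

// Requires the System.Security.Permissions namespace.
// Demands that the caller's identity name is "User1" (illustrative user name);
// Demand throws a SecurityException if the check fails.
PrincipalPermission userPermission = new PrincipalPermission(name: "User1", role: null);
userPermission.Demand();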

For more information about role providers and authorization, see:

WCF Authorization

http://go.microsoft.com/fwlink/?LinkID=298806&clcid=0x409

For more information about authentication and authorization of identities, see:

Authentication, Authorization, and Identities in WCF

http://go.microsoft.com/fwlink/?LinkID=298807&clcid=0x409

Demonstration: Authenticating and Authorizing Clients


This demonstration shows how to authenticate and authorize users in WCF services.

Demonstration Steps
1. From D:\Allfiles\Apx02\DemoFiles\AuthenticatingUsers\setup, run the CreateCert.cmd file.

2. From the D:\Allfiles\Apx02\DemoFiles\AuthenticatingUsers\begin folder, open the


AuthenticatingUsers solution file.

3. Explore the contents of the Service and Client projects. Both the service and the client are configured
to use the wsHttpBinding binding, which uses message security with Windows authentication by default.
4. To authenticate user names with the ASP.NET Membership provider, add binding configuration for
the wsHttpBinding, and in the message security settings, set the client credential type to UserName.
The resulting configuration should resemble the following code.

<bindings>
<wsHttpBinding>
<binding>
<security>
<message clientCredentialType="UserName"/>
</security>
</binding>
</wsHttpBinding>
</bindings>

5. In the <serviceCredentials> element, add a <userNameAuthentication>, and set the


userNamePasswordValidationMode attribute to MembershipProvider.
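
The resulting <serviceCredentials> configuration should resemble the following code, mirroring the configuration shown earlier in this lesson.

<serviceCredentials>
<userNameAuthentication userNamePasswordValidationMode="MembershipProvider"/>
</serviceCredentials>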

6. To authorize users with ASP.NET Roles, add a <serviceAuthorization> element to the service
behavior configuration, and set the element's principalPermissionMode to UseAspNetRoles.
7. Enable the ASP.NET Role manager by adding the following configuration.

<system.web>
<roleManager enabled="true"/>
</system.web>

8. Add four principal permission checks to the service implementation in the following way:
Add: Principal permission attribute that demands the user has the StandardUsers role

Sub: Principal permission attribute that demands the user has the Managers role
Mul: Principal permission object that demands the user is User1
Div: Principal permission object that demands the user has the Managers role

9. In the client's App.config file, configure the binding for UserName authentication. The resulting
configuration should resemble the binding configuration you created in the service a few steps
earlier.

10. In the client configuration, add a default endpoint behavior to disable the service certificate check.
The resulting configuration should resemble the following code.

<behaviors>
<endpointBehaviors>
<behavior>
<clientCredentials>
<serviceCertificate>
<authentication certificateValidationMode="None"/>
</serviceCertificate>
</clientCredentials>
</behavior>
</endpointBehaviors>
</behaviors>

11. From the Client project, open the client.cs file, and add the code needed to set the credentials of the
proxy to username/password credentials.
Use the ClientCredentials.UserName property of the proxy object to set the client credentials.

Set the client username to User1 and the password to Pa$$w0rd.

12. Run the service and the client. Verify that the Add and Mul operations succeed, and that the Sub and
Div operations throw a SecurityAccessDeniedException.

Lab: Securing a WCF Service


Scenario
Blue Yonder Airlines plans to sign codeshare agreements with other airlines. Before entering into
negotiations, the booking systems must be able to differentiate between connecting clients and secure
the communication between the clients and the service. Because Blue Yonder Airlines cannot trust the
internal networks used by its partners, the service communication is secured end-to-end with message
security, and the client has to authenticate by using a certificate. In this lab, you will add the necessary
code and configuration to make the WCF service as secure as required.

Objectives
After you complete this lab, you will be able to:

Secure the WCF service by using message security.


Use authorization rules to validate clients' requests.

Configure the ASP.NET Web API client to authenticate by using a certificate.

Lab Setup
Estimated Time: 30 minutes
Virtual Machine: 20487B-SEA-DEV-A, 20487B-SEA-DEV-C

User name: Administrator, Admin


Password: Pa$$w0rd, Pa$$w0rd
For this lab, you will use the available virtual machine environment. Before you begin this lab, you must
complete the following steps:

1. On the host computer, click Start, point to Administrative Tools, and then click Hyper-V Manager.
2. In Hyper-V Manager, click MSL-TMG1, and in the Action pane, click Start.

3. If you executed a later lab before this one, follow these instructions:

In Hyper-V Manager, click the 20487B-SEA-DEV-A virtual machine.


In the Snapshots pane, right-click the StartingImage snapshot, and then click Apply.

In the Apply Snapshot dialog box, click Apply.


4. In Hyper-V Manager, click 20487B-SEA-DEV-A, and in the Action pane, click Start.

5. In the Action pane, click Connect. Wait until the virtual machine starts.

6. Sign in using the following credentials:

User name: Administrator

Password: Pa$$w0rd

7. Return to Hyper-V Manager, click 20487B-SEA-DEV-C, and in the Action pane, click Start.

8. In the Action pane, click Connect. Wait until the virtual machine starts.

9. Sign in using the following credentials:

User name: Admin

Password: Pa$$w0rd

10. Verify that you received credentials to sign in to the Azure portal from your training provider. These
credentials and the Azure account will be used throughout the labs in this course.

Exercise 1: Securing the WCF Service


Scenario
As part of securing the communication between the Web API service and the WCF service, configure the
WCF service to use message security and require that the client authenticate with a certificate.

The main tasks for this exercise are as follows:


1. Create a server certificate

2. Configure the service endpoint's binding

3. Configure the service behavior to use the newly created certificate

Task 1: Create a server certificate


1. In the 20487B-SEA-DEV-A virtual machine, run the CreateServerCertificate.cmd file from the
D:\AllFiles\Apx02\LabFiles\Setup folder.

Task 2: Configure the service endpoint's binding


1. Open Visual Studio 2012, and then from the D:\AllFiles\Apx02\LabFiles\begin\BlueYonder.Server
folder, open the BlueYonder.Server solution.
2. In the BlueYonder.Server.Booking.Host project, in the App.config file, add a new NetTcpBinding
binding configuration, and configure it to use Message security and client credentials of type
Certificate. Do not add the name attribute to the <binding> element.

The resulting configuration should resemble the following code.

<binding>
<security mode="Message">
<message clientCredentialType="Certificate"/>
</security>
</binding>

Note: You can create one default binding configuration without the name attribute for
each binding type. The default configuration will apply to any endpoint using that binding, which
does not have its own binding configuration.

Task 3: Configure the service behavior to use the newly created certificate

1. Add a <serviceCredentials> element to the default service behavior.

2. Add a <serviceCertificate> element to the <serviceCredentials> element. Use the service
certificate from the LocalMachine\My certificate store. Search for the certificate with the Server
subject name.

The resulting configuration should resemble the following code.

<serviceCertificate storeLocation="LocalMachine" storeName="My"
x509FindType="FindBySubjectName" findValue="Server"/>

3. In the <serviceCredentials> element, add the <clientCertificate> element and set the revocation
mode of the client certificate to NoCheck.

Note: You cannot check if the client certificate has been revoked, because it was generated
locally. If a real certification authority had issued the client certificate, it would have been possible
to check whether it was revoked.
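
The resulting element should resemble the following sketch; other authentication attributes keep their default values.

<clientCertificate>
<authentication revocationMode="NoCheck"/>
</clientCertificate>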

Results: You can test your changes at the end of the next exercise.

Exercise 2: Using Authorization Rules to Validate the Clients' Requests


Scenario
After authenticating the client, the next step is to authorize the user and verify that the client is permitted
to perform the requested operation. To authorize users by their certificate, create an authorization policy
that matches a set of roles for each certificate, and then use the roles with the principal permission
authorization check.

The main tasks for this exercise are as follows:

1. Create a custom authorization policy to attach roles to client certificates

2. Configure the service authorization to use the custom authorization policy

3. Apply principal permissions to the service operations

Task 1: Create a custom authorization policy to attach roles to client certificates


1. In the BlueYonder.BookingService.Implementation project, add a reference to the
System.IdentityModel assembly.
2. Add a new class to the BlueYonder.BookingService.Implementation project.

Name the new class CertificateAuthorizationPolicy.


Set the access modifier of the class to public.
Change the class to implement the System.IdentityModel.Policy.IAuthorizationPolicy interface.

3. Implement the Issuer property by returning the value of
System.IdentityModel.Claims.ClaimSet.System.

4. Implement the Id property by returning a new GUID. Initialize the new GUID in the class's constructor
and store it in a private field.
5. Create a static dictionary field called AuthorizationForUser in the class. Set the key to a string and
the value to a string array. Initialize it with the following key/value pair:

Key: CN=Client

Value: an array containing the string ReservationsManager


6. Implement the Evaluate method.

Get the Identities value from the evaluationContext.Properties collection and cast it to a list of
System.Security.Principal.IIdentity.
Verify the list contains identities, and select the first identity from the list.
Extract the part of the identity's Name property before the semicolon.
Check whether the roles dictionary contains roles for the partial name that you found.
Set the Principal value of the evaluationContext.Properties collection to a new
System.Security.Principal.GenericPrincipal. Initialize the generic principal with the identity and the
roles that you found (or a null value if no roles are found.)
If the principal was created, return true. Otherwise, return false.
The resulting code should resemble the following code.

public bool Evaluate(System.IdentityModel.Policy.EvaluationContext evaluationContext, ref object state)
{
    bool retValue = false;
    var identitiesList = evaluationContext.Properties["Identities"] as List<System.Security.Principal.IIdentity>;
    if (identitiesList != null && identitiesList.Count > 0)
    {
        System.Security.Principal.IIdentity identity = identitiesList.First();
        string name = identity.Name.Split(';').First();
        string[] roles = null;
        if (AuthorizationForUser.ContainsKey(name))
        {
            roles = AuthorizationForUser[name];
        }
        evaluationContext.Properties["Principal"] =
            new System.Security.Principal.GenericPrincipal(identity, roles);
        retValue = true;
    }
    return retValue;
}
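
For reference, the Issuer and Id members from steps 3 and 4 of this task can be sketched as follows. The id field name is illustrative.

// Inside the CertificateAuthorizationPolicy class.
// ClaimSet.System indicates that the system itself issued the policy's claims.
private readonly string id;

public CertificateAuthorizationPolicy()
{
    id = Guid.NewGuid().ToString();
}

public System.IdentityModel.Claims.ClaimSet Issuer
{
    get { return System.IdentityModel.Claims.ClaimSet.System; }
}

public string Id
{
    get { return id; }
}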

Task 2: Configure the service authorization to use the custom authorization policy
1. From the BlueYonder.Server.Booking.Host project, open the App.config file.
2. Add a <serviceAuthorization> element to the default service behavior.

Set the principalPermissionMode attribute to Custom.


Add an authorization policy element with the full name of the class that you created in the previous
task (use the full name of the class, including the assembly name).

The resulting configuration should resemble the following code.

<serviceAuthorization principalPermissionMode="Custom">
<authorizationPolicies>
<add policyType="BlueYonder.BookingService.Implementation.CertificateAuthorizationPolicy, BlueYonder.BookingService.Implementation"/>
</authorizationPolicies>
</serviceAuthorization>

Task 3: Apply principal permissions to the service operations


1. From the BlueYonder.BookingService.Implementation project, open the BookingService.cs file. Add
a [PrincipalPermission] attribute to the CreateReservation method, and set the principal permission
to demand that the user has the ReservationsManager role.

2. Insert a breakpoint at the beginning of the CreateReservation method, set the
BlueYonder.Server.Booking.Host project as the startup project, and press F5 to run the service host
in debug mode. Verify that the service host opens without throwing any exceptions.

Results: After you complete this exercise, the booking service host is opened successfully and can locate
the service certificate.

Exercise 3: Configure the ASP.NET Web API Booking Service for Secured
Communication
Scenario
To complete the message security configuration, you must also configure the client-side accordingly. In
this exercise, you update the endpoint configuration in the Web API service to use message security and
authenticate with a certificate.

The main tasks for this exercise are as follows:

1. Create a client authentication certificate for the ASP.NET Web API booking service

2. Configure the client-side endpoint binding to use message security

3. Configure the client-side endpoint behavior with the client's certificate

Task 1: Create a client authentication certificate for the ASP.NET Web API booking
service
1. From D:\AllFiles\Mod08\LabFiles\Setup, run the CreateClientCertificate.cmd file.

Task 2: Configure the client-side endpoint binding to use message security


1. Open the D:\AllFiles\Apx02\LabFiles\begin\BlueYonder.Server\BlueYonder.Companion.sln
solution file in a new instance of Visual Studio 2012.

2. From the BlueYonder.Companion.Host project, open the Web.config file.

3. Add a default NetTcpBinding binding configuration. Set it to use Message security with client
credentials of type Certificate.
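
This binding configuration mirrors the server-side binding you created in Exercise 1, and should resemble the following code.

<bindings>
<netTcpBinding>
<binding>
<security mode="Message">
<message clientCredentialType="Certificate"/>
</security>
</binding>
</netTcpBinding>
</bindings>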

Task 3: Configure the client-side endpoint behavior with the client's certificate
1. Add a <behaviors> element and in it, add a default endpoint behavior with a <clientCredentials>
element.
The resulting configuration should resemble the following code.

<behaviors>
<endpointBehaviors>
<behavior>
<clientCredentials>
</clientCredentials>
</behavior>
</endpointBehaviors>
</behaviors>

2. Add a <serviceCertificate> element to the <clientCredentials> element, and set the revocation
mode of the service certificate to NoCheck.
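
The resulting element should resemble the following sketch; other authentication attributes keep their default values.

<serviceCertificate>
<authentication revocationMode="NoCheck"/>
</serviceCertificate>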
3. Add the <clientCertificate> element to the <clientCredentials> element. Set the certificate to use
the client certificate from the LocalMachine\My certificate store. Search for the certificate with the
Client subject name.

The resulting configuration should resemble the following code.

<clientCertificate storeLocation="LocalMachine" storeName="My"
x509FindType="FindBySubjectName" findValue="Client"/>

4. Add an <identity> element in the client endpoint element. Configure the identity to the server
certificate from the LocalMachine\TrustedPeople certificate store. Search for the certificate with the
Server subject name.

The resulting configuration should resemble the following code.

<identity>
<certificateReference storeLocation="LocalMachine" storeName="TrustedPeople"
x509FindType="FindBySubjectName" findValue="Server"/>
</identity>

Note: The <identity> element contains the information about the service's certificate. The
client uses this configuration to verify that it is connected to the correct service.

5. Build the solution.

6. In the 20487B-SEA-DEV-C virtual machine, open the
D:\AllFiles\Apx02\LabFiles\begin\BlueYonder.Companion.Client solution file in Visual Studio
2012. Run the project without debugging.

7. Open the app bar and search for trips to New York. Purchase a trip from Seattle to New York.

8. Go back to the 20487B-SEA-DEV-A virtual machine to debug the WCF service. Use the QuickWatch
window to view the value of the ServiceSecurityContext.Current.PrimaryIdentity property, and verify that
it is set to the client certificate. Continue running, and verify that the client shows the new reservation.
Close the client and service applications.

Results: After you complete the exercise, you will be able to start the client application and create a
reservation.

Question: In this lab, you used a client certificate to authenticate the client. Why would you
use certificates instead of other authentication types, such as Windows identities or
username/password?

Module Review and Takeaways


In this module, you learned how to help secure a WCF service by using transport security, message
security, and the mixed mode of transport security with message credentials. You also explored the
different ways to authenticate clients, to create custom authentication mechanisms, and to authorize users
with roles.

Review Question(s)
Question: What types of credentials does WCF support?

Tools
WCF Services Configuration Editor

Microsoft Service Trace Viewer



Course Evaluation

Your evaluation of this course will help Microsoft understand the quality of your learning experience.
Please work with your training provider to access the course evaluation form.

Microsoft will keep your answers to this survey private and confidential and will use your responses to
improve your future learning experience. Your open and honest feedback is valuable and appreciated.

Module 1: Overview of Service and Cloud Technologies


Lab: Exploring the Work Environment
Exercise 1: Creating a Windows Azure SQL Database
Task 1: Create a new Windows Azure SQL database server
1. On the Start screen, click the Internet Explorer tile.

2. Browse to https://manage.windowsazure.com.

3. If the Sign In page appears, enter your email and password, and then click Sign In.

4. In the navigation pane on the left side, click SQL DATABASES.

5. In the upper-left corner of the sql databases page, click SERVERS.

6. Click ADD at the bottom of the page.

7. In the SQL Database server settings dialog box, in the LOGIN NAME box, type SQLAdmin.
8. In the LOGIN PASSWORD box and CONFIRM PASSWORD box, type Pa$$w0rd.

9. In the REGION list, select the region closest to you, and then click Complete (the check mark button).

10. Wait for the server to appear in the list of servers and for its status to change to Started.

11. Write down the name of the newly created SQL Database server.

Task 2: Manage the Windows Azure SQL Database Server from the SQL Server
Management Studio.
1. On the sql database page, click on the name of the newly created server.
2. Click the CONFIGURE tab.

3. In the allowed ip addresses section, add a new firewall rule by filling in the following information:

RULE NAME: OpenAllIPs


START IP ADDRESS: 0.0.0.0

END IP ADDRESS: 255.255.255.255

4. Click Save at the bottom of the page.

Note: As a best practice, you should allow only your IP address, or your organization's IP
address range, to access the database server. However, in this course you will use this database
server in future labs, and your IP address might change in the meantime; therefore, you are
required to allow access from all IP addresses.

5. On the Start screen, click the SQL Server Management Studio tile.

6. In the Connect to Server dialog box, in the Server name box, enter
SQLServerName.database.windows.net (Replace SQLServerName with the server name you wrote
down in the previous task).

7. In the Authentication drop-down list, select SQL Server Authentication.

8. In the Login box, enter SQLAdmin.

9. In the Password box, enter Pa$$w0rd.



10. Click Connect.

11. In Object Explorer, right-click the Databases node, and then click Import Data-tier Application.

12. In the Import Data-tier Application wizard, click Next, and then click Browse.

13. Browse to D:\AllFiles\Mod01\LabFiles\Assets.

14. Double-click BlueYonder.bacpac.

15. Click Next, click Next again, and then click Finish. Wait until the database import procedure is
finished, and then click Close.

16. Press F5 to refresh the database list, expand the Databases node, and verify you see the BlueYonder
database.

Results: After completing this exercise, you should have created a Windows Azure SQL Database in your
Windows Azure account.

Exercise 2: Creating an Entity Data model


Task 1: Create an Entity Framework Model
1. On the Start screen, click the Visual Studio 2012 tile.

2. On the File menu, point to New, and then click Project.


3. In the New Project dialog box, on the navigation pane, expand the Installed node. Expand the
Visual C# node. Click the Windows node. Select Class Library from the list of templates.

4. In the Name box, type BlueYonder.Model.

5. In the Location box, type D:\AllFiles\Mod01\LabFiles\begin.

6. Select the Create directory for solution check box, and then click OK.

7. In Solution Explorer, right-click the Class1.cs file, click Delete, and then click OK.
8. Right-click the BlueYonder.Model project, point to Add, and then click New Item.

9. In the Add New Item dialog box, on the navigation pane, expand the Visual C# Items node. Click the
Data node, and then select the ADO.NET Entity Data Model from the list of templates.

10. In the Name text box, enter EntityModel, and then click Add.

11. In the Entity Data Model Wizard, click Generate from database, and then click Next.
12. In the Choose Your Data Connection step, click New Connection.
13. If the Choose Data Source dialog box appears, click Microsoft SQL Server, and then click Continue.

14. In the Connection Properties dialog box, in the Server name box, enter
SQLServerName.database.windows.net (replace SQLServerName with the server name you have
written down in the previous exercise).

15. Select the Use SQL Server Authentication option.

16. In the User name text box, enter SQLAdmin.


17. In the Password text box, enter Pa$$w0rd.

18. Select the Save my password check box.


19. Click Test Connection, and verify the connection test succeeded. Click OK to close the Microsoft
Visual Studio dialog box that appeared.

20. From the Select or enter a database name list, select the BlueYonder database, and then click OK.

21. In the Choose Your Data Connection step, select the Yes, include the sensitive data in the
connection string option, and then click Next.

22. In the Choose Your Database Objects and Settings step, expand Tables, expand dbo, then check
Locations and Travelers, and then click Finish. Wait until Visual Studio 2012 creates the data model.

23. If the Security Warning dialog box appears, click OK (it may appear more than once).

24. To save the file, press Ctrl+S. If the Security Warning dialog box appears, click OK (it may appear
more than once).
25. Close the EntityModel.edmx diagram.

Results: After completing this exercise, you should have created Entity Framework wrappers for the
BlueYonder database.

Exercise 3: Managing the Entity Framework Model with an ASP.NET Web API Project

Task 1: Create an ASP.NET Web API Project
1. Right-click the BlueYonder.Model solution, point to Add, and then click New Project.

2. In the Add New Project dialog box, in the navigation pane, expand the Installed node, expand the
Visual C# node, and then click the Web node. From the list of templates, click ASP.NET MVC 4 Web
Application.

3. In the Name box, type BlueYonder.MVC, and then click OK.

4. In the New ASP.NET MVC 4 Project dialog box, click the Web API template, and then click OK.

Task 2: Add a Web API Controller with CRUD Actions, Using the Add Controller
Wizard
1. Right-click the BlueYonder.MVC project, and then click Add Reference.

2. In the Reference Manager dialog box, click Solution in the navigation pane, check the
BlueYonder.Model check box, and then click OK.

3. In Solution Explorer, in the BlueYonder.Model project, double-click App.config.

4. Locate the <connectionStrings> section, select the <add> element, including its attributes, and
press Ctrl+C to copy the element to the clipboard.
5. In Solution Explorer, in the BlueYonder.MVC project, double-click Web.config.

6. Locate the <connectionStrings> section, place the cursor after the <connectionStrings> tag, and
press Ctrl+V to paste the connection string.

7. To save the changes, press Ctrl+S.
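After the paste, the <connectionStrings> section of Web.config contains an Entity Framework connection string similar to the following sketch. The metadata resource names, server name, and credentials shown here are illustrative placeholders based on the values used earlier in this lab; the actual attribute values come from the element copied out of App.config.

```xml
<connectionStrings>
  <!-- Hypothetical example; the actual values come from App.config -->
  <add name="BlueYonderEntities"
       connectionString="metadata=res://*/EntityModel.csdl|res://*/EntityModel.ssdl|res://*/EntityModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=SQLServerName.database.windows.net;initial catalog=BlueYonder;user id=SQLAdmin;password=Pa$$w0rd;MultipleActiveResultSets=True&quot;"
       providerName="System.Data.EntityClient" />
</connectionStrings>
```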

8. On the Build menu, click Build Solution.


9. On the View menu, click Server Explorer.

10. In Server Explorer, right-click Data Connections, and then click Refresh. BlueYonderEntities will
show up under the Data Connections node.
11. In Solution Explorer, in the BlueYonder.MVC project, right-click Controllers, point to Add, and
then click Controller.

12. In the Add Controller dialog box, in the Controller name box, enter LocationsController.

13. From the Template drop-down list, select API controller with read/write actions, using Entity
Framework.

14. Select Location (BlueYonder.Model) from the Model class combo box.
15. Select BlueYonderEntities (BlueYonder.Model) from the Data context class combo box.

16. To create the controller, click Add.

Note: You now have a Web API controller for the Location model.
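The generated controller follows the standard Visual Studio 2012 Web API scaffolding shape. A trimmed sketch is shown below; the member names are illustrative of the template and are not copied from the lab files.

```csharp
public class LocationsController : ApiController
{
    // The scaffolded context generated from the BlueYonder database
    private BlueYonderEntities db = new BlueYonderEntities();

    // GET api/locations
    public IEnumerable<Location> GetLocations()
    {
        return db.Locations.AsEnumerable();
    }

    // GET api/locations/5
    public Location GetLocation(int id)
    {
        Location location = db.Locations.Find(id);
        if (location == null)
        {
            throw new HttpResponseException(
                Request.CreateResponse(HttpStatusCode.NotFound));
        }
        return location;
    }

    // The template also scaffolds PutLocation, PostLocation, and DeleteLocation
}
```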

17. In Solution Explorer, right-click the BlueYonder.MVC project, and then click Set as StartUp Project.

18. To start the application without debugging, press Ctrl+F5.



19. In the Internet Explorer window, in the address bar, append api/locations to the address, and then
press Enter.

20. At the bottom of the Internet Explorer window, a prompt appears. Click Open.
21. If you are prompted to select a program to open the file, in the Windows can't open this type of file
(.json) dialog box, click Try an app on this PC, and then select Notepad from the list of available
programs. When Notepad opens, you should see a list of Location entities, encoded in the JSON format.

Results: After completing this exercise, you will have a website that exposes the Web API for CRUD
operations on the BlueYonder database.

Exercise 4: Deploying a web application to Windows Azure


Task 1: Create a New Windows Azure Web Site
1. On the Start screen, click the Internet Explorer tile.

2. Browse to the Windows Azure Management Portal at https://manage.windowsazure.com.


3. If the Sign in page appears, enter your email and password, and then click Sign In.

4. In the navigation pane, click WEB SITES.

5. Click NEW, and then click QUICK CREATE.


6. In the URL box, enter BlueYonderWebSiteYourInitials (replace YourInitials with your initials).

Note: The name you enter is combined with the .azurewebsites.net suffix, providing a
unique host name that is used as your website URL.

7. In the REGION drop down list, select the region closest to your location.

8. Click CREATE WEB SITE. The website is added to the Web Sites table with the status Creating.
Wait until the status changes to Running.

9. On the web sites page, click the name of the new website.

10. On the DASHBOARD page, click Download the publish profile on the right side under quick
glance. An Internet Explorer dialog box appears at the bottom. Click the arrow within the Save
button. Select the Save as option and specify the location D:\AllFiles\Mod01\LabFiles.

11. Click Save.

Task 2: Deploy the Web Application to the Windows Azure Web Site
1. Return to Visual Studio 2012, and in Solution Explorer, right-click the BlueYonder.MVC project, and
then click Publish.

2. In the Publish Web dialog box, click Import, browse to D:\AllFiles\Mod01\LabFiles, then select the
profile settings file that you downloaded, and then click Open. Click OK in the Import Publish
Profile dialog box.

3. Click Publish. Visual Studio 2012 builds and publishes the application according to the settings that
are provided in the profile file.

4. After the deployment finishes, Visual Studio 2012 opens Internet Explorer and browses to the website.

Note: At this point, you can simply click Next at every step of the wizard, and then click
Publish to start the publishing process. Later in the course you will learn how the deployment
process works and how to configure it.

5. In the Internet Explorer window, in the address bar, append api/locations to the address, and then
press Enter.

6. At the bottom of the Internet Explorer window, a prompt appears. Click Open.

7. If you are prompted to select a program to open the file, select Notepad from the list of available
programs.

8. When Notepad opens, you should see a list of Location entities, encoded with the JSON format.

Task 3: Delete the Windows Azure SQL Server


1. On the Start screen, click the Internet Explorer tile.

2. Browse to https://manage.windowsazure.com.

3. If the Sign In page appears, enter your email and password, and then click Sign In.

4. In the navigation pane on the left side, click SQL DATABASES.

5. At the top of the sql databases page, on the left side, click SERVERS.

6. On the sql databases page, click the row of the newly created server in the STATUS column. The
row should be highlighted.

7. Click DELETE in the bottom pane.

8. In the Delete Server Confirmation dialog box, enter the name of the server, as suggested in the
description, and then click the check mark button to confirm the operation.

Note: Windows Azure free subscriptions have a resource limitation and a restriction on the
total working hours. To avoid exceeding those limitations, you have to delete the Windows Azure
SQL Databases.

Results: After completing this exercise, you will have hosted your application in the Windows Azure
cloud by using Windows Azure SQL Database and Windows Azure Web Sites.

Module 2: Querying and Manipulating Data Using Entity Framework
Lab: Creating a Data Access Layer by Using
Entity Framework
Exercise 1: Creating a Data Model
Task 1: Explore the existing Entity Framework data model project
1. On the Start screen, click the Visual Studio 2012 tile.

2. On the File menu, point to Open, and then click Project/Solution.


3. In the File name box, type
D:\AllFiles\Mod02\LabFiles\begin\BlueYonder.Companion\BlueYonder.Companion.sln, and
then click Open.

4. In Solution Explorer, expand the BlueYonder.Entities project, and then double-click


FlightSchedule.cs.

5. Explore the FlightScheduleId and the Flight properties of the FlightSchedule class, and note how
the DatabaseGenerated and ForeignKey attributes are used in this class.
6. In Solution Explorer, in the BlueYonder.DataAccess project, expand the Repositories folder, and
then double-click FlightRepository.cs.

7. Explore the methods of the FlightRepository class.

Note: The FlightRepository class implements the Repository pattern. The Repository
pattern is designed to decouple the data access strategy from the business logic layer that
handles the data. The repository exposes the data access functionality and implements it
internally by using a specific data access strategy, which in this case is Entity Framework. By using
repositories, you can easily create a mock, replacing the repository, and improve the testability of
the business logic.
For more information about the Repository pattern and its related patterns, see
http://go.microsoft.com/fwlink/?LinkID=298756&clcid=0x409.
In Lab 4, "Extending Travel Companions ASP.NET Web API Services", Module 4, "Extending and
Securing ASP.NET Web API Services", you will see how to increase testability by using mocked
repositories.
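As a minimal illustration of the pattern described in the note above, a repository wraps an Entity Framework context behind an interface. The interface and member names here are simplified for illustration and are not the exact ones used in the lab files.

```csharp
// Simplified sketch of the Repository pattern over Entity Framework
public interface IFlightRepository : IDisposable
{
    IQueryable<Flight> GetAll();
    void Add(Flight entity);
    void Save();
}

public class FlightRepository : IFlightRepository
{
    private TravelCompanionContext context = new TravelCompanionContext();

    public IQueryable<Flight> GetAll()
    {
        return context.Flights;
    }

    public void Add(Flight entity)
    {
        context.Flights.Add(entity);
    }

    public void Save()
    {
        context.SaveChanges();
    }

    public void Dispose()
    {
        if (context != null)
        {
            context.Dispose();
            context = null;
        }
    }
}
```

Because callers depend only on the interface, a test can substitute an in-memory implementation without touching the database.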

Task 2: Prepare the data model classes for Entity Framework


1. In Solution Explorer, expand the BlueYonder.Entities project, and then double-click Trip.cs to open
it.

2. Add the following using directives to the beginning of the file.

using System.ComponentModel.DataAnnotations.Schema;

3. Note the name of the TripId field that will be used as a key.

public int TripId { get; set; }



Note: You do not need to decorate the TripId property with the [Key] attribute, because
the property name follows the Code First convention for primary keys, which is the class
name suffixed with Id.

4. Enable lazy loading for the FlightInfo property by replacing the property with the following code.

[ForeignKey("FlightScheduleID")]
public virtual FlightSchedule FlightInfo { get; set; }

Note: Entity Framework will detect the virtual property in the Trip class and will create a
new derived proxy class that implements lazy loading for the FlightInfo property.
When you load trip entities from the database, the entity object will be of the derived trip proxy
type, and not of the Trip type.
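For example, with the virtual property in place, touching the navigation property on a loaded entity triggers a second query. This is a sketch only; it assumes a Trips DbSet on the context (added later in this lab) and an existing row with key 1.

```csharp
using (var context = new TravelCompanionContext())
{
    Trip trip = context.Trips.Find(1);         // loads the trip only
    FlightSchedule schedule = trip.FlightInfo; // the proxy issues a second query here (lazy load)
}
```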

5. Press Ctrl+S to save the file.


6. In Solution Explorer, in the BlueYonder.Entities project, double-click the Reservation.cs file.

7. Add the following using directives to the beginning of the file.

using System.ComponentModel.DataAnnotations.Schema;

8. Enable lazy loading for the DepartureFlight property by replacing the property with the following
code.

[ForeignKey("DepartFlightScheduleID")]
public virtual Trip DepartureFlight { get; set; }

9. Enable lazy loading for the ReturnFlight property by replacing the property with the following code.

[ForeignKey("ReturnFlightScheduleID")]
public virtual Trip ReturnFlight { get; set; }

10. Set the ReturnFlightScheduleID property as nullable to make it optional.

public int? ReturnFlightScheduleID { get; set; }

Note: Setting the ReturnFlightScheduleID foreign key property to a nullable int indicates
that this relation is not mandatory (0-N relation, meaning a reservation does not require a return
flight). The DepartFlightScheduleID foreign key property is not nullable and therefore indicates
the relation is mandatory (1-N relation, meaning every reservation must have a departing flight).

Task 3: Add the newly created entities to the context


1. In Solution Explorer, expand the BlueYonder.DataAccess project, and then double-click
TravelCompanionContext.cs.

2. Inside the TravelCompanionContext class, add the following code.

public DbSet<Reservation> Reservations { get; set; }

3. Inside the TravelCompanionContext class, add the following code.

public DbSet<Trip> Trips { get; set; }



4. To save the file, press Ctrl+S.

Task 4: Implement the reservation repository


1. In Solution Explorer, expand the BlueYonder.DataAccess project.

2. Expand the Repositories folder, and double-click ReservationRepository.cs.


3. Locate the GetSingle method, and replace its content with the following code.

var query = from r in context.Reservations
            where r.ReservationId == entityKey
            select r;
return query.SingleOrDefault();

4. Locate the Edit method, and replace its content with the following code.

var originalEntity = context.Reservations.Find(entity.ReservationId);
context.Entry(originalEntity).CurrentValues.SetValues(entity);

Note: You can refer to Lesson 4, "Manipulating Data", Topic 4, "Updating Entities", for an
explanation of when to use the SetValues method instead of manually setting the state of the entity to
Modified.
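The two update approaches can be contrasted with the following sketch. SetValues copies scalar values onto the entity already tracked by the context, whereas setting the state marks the whole entity as modified; the variable names are illustrative.

```csharp
// Option 1: copy values onto the entity tracked by the context
var original = context.Reservations.Find(entity.ReservationId);
context.Entry(original).CurrentValues.SetValues(entity);

// Option 2: attach a detached entity and mark every property as modified
context.Reservations.Attach(entity);
context.Entry(entity).State = EntityState.Modified;
```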
5. Locate the Dispose method, and replace its content with the following code.

if (context != null)
{
context.Dispose();
context = null;
}

6. Uncomment the code in the following methods:

GetAll
Add
Delete
7. Review the implementation of the Delete method to understand how cascade delete was
implemented, so that when a Reservation is deleted, its DepartureFlight and ReturnFlight objects
are deleted as well.

8. To save the file, press Ctrl+S.

Results: After you complete this exercise, the Entity Framework Code First model is ready for testing.

Exercise 2: Querying and Manipulating Data


Task 1: Explore the existing integration test project
1. In Solution Explorer, under the BlueYonder.IntegrationTests project, double-click
FlightQueries.cs.

2. Explore the query test methods in the FlightQueries class. The TestInitialize static method is
responsible for initializing the database and the test data, and all the other methods are intended to
test various queries with lazy load and eager load.

3. In Solution Explorer, under the BlueYonder.IntegrationTests project, double-click


FlightActions.cs.
4. Explore the InsertFlight, UpdateFlight, and DeleteFlight test methods in the FlightActions class.
Observe the use of the Assert static class to verify the results of the test.

Task 2: Create queries by using LINQ to Entities


1. In Solution Explorer, expand the BlueYonder.IntegrationTests project, and then double-click
ReservationQueries.cs.
2. In the ReservationQueries class, add the following code to the
GetReservationWithFlightsEagerLoad test method.

Reservation reservation;
using (var repository = new ReservationRepository())
{
var query = from r in repository.GetAll()
where r.ConfirmationCode == "1234"
select r;
query = query.Include(r => r.DepartureFlight).Include(r => r.ReturnFlight);
reservation = query.FirstOrDefault();
}
Assert.IsNotNull(reservation);
Assert.IsNotNull(reservation.DepartureFlight);
Assert.IsNotNull(reservation.ReturnFlight);

3. In the ReservationQueries class, in the GetReservationWithFlightsLazyLoad test method, add the
following code between the comment and the end of the using block.

Assert.IsNotNull(reservation.DepartureFlight);
Assert.IsNotNull(reservation.ReturnFlight);

Note: By examining the value of the navigation properties, you are invoking the lazy load
mechanism.

4. In the ReservationQueries class, in the GetReservationsWithSingleFlightWithoutLazyLoad test
method, locate the //TODO comment and add the following code below it.

context.Configuration.LazyLoadingEnabled = false;

5. To save the file, press Ctrl+S.

Note: The Assert method now checks for a null value, because lazy loading is turned off
and the navigation properties are not loaded.

Task 3: Create queries by using Entity SQL


1. Locate the GetOrderedReservations method of the ReservationQueries class.

2. Add the following code below the //TODO comment.

var sql = @"SELECT value r FROM reservations as r ORDER BY r.confirmationCode DESC";
ObjectQuery<Reservation> query =
    ((IObjectContextAdapter)context).ObjectContext.CreateQuery<Reservation>(sql);
reservations = query.ToList();

3. To save the file, press Ctrl+S.

Task 4: Create a test for manipulating data


1. In Solution Explorer, expand the BlueYonder.IntegrationTests project, and then double-click
FlightActions.cs.

2. In the FlightActions class, add the following code to the UpdateFlight method, below the comment
//TODO: Lab 02 Exercise 2, Task 4.1 : Implement the UpdateFlight Method.

FlightRepository repository;
using (repository = new FlightRepository())
{
repository.Edit(flight);
repository.Save();
}
using (repository = new FlightRepository())
{
Flight updatedFlight = repository.FindBy(f => f.FlightNumber ==
"BY002_updated").FirstOrDefault();
Assert.IsNotNull(updatedFlight);
}

Task 5: Use cross-repositories integration tests with System.Transactions


1. In the FlightActions class, locate the UpdateUsingTwoRepositories method.
2. Locate the code that creates the Location and Flight repositories. Each repository is created with a
separate context, meaning each repository will use a separate transaction when saving changes.

3. Locate the code for loading and updating the flight and location objects. Each entity is updated and
saved in a separate transaction, but because both transactions are located in the same transaction
scope, both transactions are not yet committed.

4. In the UpdateUsingTwoRepositories method, locate the query below the comment //TODO: Lab
02, Exercise 2 Task 5.2 : Review the query for the updated flight that is inside the transaction scope.

Note: When querying from inside a transaction scope, you will get the updated values of
entities, while other users, not participating in the transaction, will see the old values, until the
transaction commits.

5. Locate the commented call to the Complete method.

Note: Without setting the transaction as complete, both transactions will roll back after the
transaction scope closes.

6. Locate the query below the comment //TODO: Lab 02, Exercise 2 Task 5.4 : Review the query for the
updated flight that is outside the transaction scope.

Note: After the transaction is rolled back, attempts to locate the updated entity will fail.
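The pattern reviewed in this task can be summarized with the following sketch. The repository names are those used in the lab, but the exact code in the test method may differ; this only illustrates how TransactionScope from System.Transactions spans two contexts.

```csharp
using (var scope = new TransactionScope())
{
    using (var flights = new FlightRepository())
    using (var locations = new LocationRepository())
    {
        // ...update entities through each repository...
        // Each Save uses its own context and its own transaction,
        // but both are enlisted in the ambient transaction scope.
        flights.Save();
        locations.Save();
    }

    // Without this call, both transactions roll back when the scope closes
    scope.Complete();
}
```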

Task 6: Run the tests, and explore the database created by Entity Framework
1. In Solution Explorer, under the BlueYonder.IntegrationTests project, double-click
TravelCompanionDatabaseInitializer.cs.

2. Locate the Seed method, and add the following code at the end of the method.

context.Reservations.Add(reservation1);
context.Reservations.Add(reservation2);
context.SaveChanges();

3. To save the file, press Ctrl+S.

4. On the Test menu, point to Windows, and then click Test Explorer.
5. In Test Explorer, click Run All, and wait for all the tests to complete.

6. Explore the results, and verify that all 16 methods have passed the test.

7. On the Start screen, click the SQL Server Management Studio tile.
8. In the Connect to Server dialog box, enter the following information, and then click Connect:

Server name: .\SQLEXPRESS
Authentication: Windows Authentication
9. In Object Explorer, expand the Databases node, expand the BlueYonder.Companion.Lab02
database, and then expand the Tables node.

10. Explore the tables that were created by Entity Framework, and notice the creation of the
Reservations and Trips tables.

Results: The Entity Framework data model works as designed and is verified by tests.

Module 3: Creating and Consuming ASP.NET Web API Services
Lab: Creating the Travel Reservation
ASP.NET Web API Service
Exercise 1: Creating an ASP.NET Web API Service
Task 1: Create a new API Controller for the Traveler Entity
1. Log on to the 20487B-SEA-DEV-A virtual machine and on the Start screen, click the Visual Studio
2012 tile.

2. On the File menu, point to Open, and then click Project/Solution.


3. Type D:\AllFiles\Mod03\LabFiles\begin\BlueYonder.Companion\BlueYonder.Companion.sln in the File
name box, and then click Open.

4. In Solution Explorer, right-click the BlueYonder.Companion.Controllers project node, point to Add,


and then click Class.

5. In the Add New Item window type TravelersController in the Name box, and then click Add.
6. Add the following using directives at the beginning of the class.

using System.Web.Http;
using BlueYonder.DataAccess.Interfaces;
using BlueYonder.DataAccess.Repositories;
using System.Net.Http;
using BlueYonder.Entities;
using System.Net;

7. Replace the class declaration with the following code.

public class TravelersController : ApiController

8. Add the following property to the TravelersController class.

private ITravelerRepository Travelers { get; set; }

9. Add the following constructor code to the class.

public TravelersController()
{
Travelers = new TravelerRepository();
}

10. Create a public method called Get by adding the following code.

public HttpResponseMessage Get(string id)
{
}

11. Add code to retrieve a traveler from the Travelers property by adding the following code to the Get
method.

var traveler = Travelers.FindBy(t=>t.TravelerUserIdentity == id).FirstOrDefault();

12. Add the following code to the end of the Get method.

// Handling the HTTP status codes
if (traveler != null)
    return Request.CreateResponse<Traveler>(HttpStatusCode.OK, traveler);
else
    return Request.CreateResponse(HttpStatusCode.NotFound);

13. In the Get method, right-click the first line of code, point to Breakpoint, and then click Insert
Breakpoint.

14. Create a public method called Post by adding the following code.

public HttpResponseMessage Post(Traveler traveler)
{
}

15. Add new traveler to the database by adding the following code to the Post method.

// saving the new traveler to the database
Travelers.Add(traveler);
Travelers.Save();

16. Add the following code to the end of the Post method, to create an HttpResponseMessage.

// creating the response, the newly saved entity and 201 Created status code
var response = Request.CreateResponse(HttpStatusCode.Created, traveler);

17. Add the following code to the end of the Post method to set the Location header with the URI of
the new traveler.

response.Headers.Location = new Uri(Request.RequestUri,
    traveler.TravelerId.ToString());
return response;

18. In the Post method, right-click the first line of code, point to Breakpoint, and then click Insert
Breakpoint.

19. Create a public method called Put by adding the following code.

public HttpResponseMessage Put(string id, Traveler traveler)
{
}

20. Add the following code to the beginning of the Put method to validate that the traveler exists.

// returning 404 if the entity doesn't exist
if (Travelers.FindBy(t => t.TravelerUserIdentity == id).FirstOrDefault() == null)
    return Request.CreateResponse(HttpStatusCode.NotFound);

21. Update the existing traveler by adding the following code to the end of the method.

Travelers.Edit(traveler);
Travelers.Save();
return Request.CreateResponse(HttpStatusCode.OK);

Note: The HTTP PUT method can also be used to create resources. Checking whether the
resource exists is done here for simplicity.

22. In the Put method, right-click the first line of code, point to Breakpoint, and then click Insert
Breakpoint.

23. Create a public method called Delete by adding the following code.

public HttpResponseMessage Delete(string id)
{
}

24. Retrieve the traveler from the repository by adding the following code to the Delete method.

var traveler = Travelers.FindBy(t => t.TravelerUserIdentity == id).FirstOrDefault();

25. Validate that the traveler exists by adding the following code to the end of the Delete method.

// returning 404 if the entity doesn't exist
if (traveler == null)
    return Request.CreateResponse(HttpStatusCode.NotFound);

26. Delete the traveler from the repository by adding the following code to the end of the Delete
method.

Travelers.Delete(traveler);
Travelers.Save();
return Request.CreateResponse(HttpStatusCode.OK);

27. Press Ctrl+S to save the changes.

Results: After you complete this exercise, you will be able to run the project from Visual Studio 2012 and
access the travelers service.

Exercise 2: Consuming an ASP.NET Web API Service


Task 1: Consume the API Controller from a Client Application
1. In the 20487B-SEA-DEV-C virtual machine, on the Start screen, click the Visual Studio 2012 tile.

2. On the File menu, point to Open, and then click Project/Solution.


3. In the File name box, type
D:\AllFiles\Mod03\LabFiles\begin\BlueYonder.Companion.Client\BlueYonder.Companion.Client.sln,
and then click Open.

4. If you are prompted by a Developers License dialog box, click I Agree. If you are prompted by a
User Account Control dialog box, click Yes. Type your email address and a password in the
Windows Security dialog box, and then click Sign in. Click Close in the Developers License dialog
box.

Note: If you do not have valid email address, click Sign up and register for the service.
Write down these credentials and use them whenever a use of an email account is required.

5. In Solution Explorer, under the BlueYonder.Companion.Client project, expand the Helpers folder,
and then double-click DataManager.cs.
6. Locate the GetTravelerAsync method, and under the comment // TODO: Lab 03 Exercise 2: Task 1.3:
Implement the GetTravelelrAsync method, remove the return null line and add the following code.

var travelersUri = string.Format("{0}travelers/{1}", BaseUri, hardwareId);
HttpResponseMessage response = await client.GetAsync(new Uri(travelersUri));
if (response.IsSuccessStatusCode)
{
string resultJson = await response.Content.ReadAsStringAsync();
return await JsonConvert.DeserializeObjectAsync<Traveler>(resultJson);
}
else
{
return null;
}

7. In the GetTravelerAsync method, right-click the first line of code, point to Breakpoint, and then
click Insert Breakpoint.

8. Locate the comment // TODO: Lab 03 Exercise 2: Task 1.6: Review the UpdateTravelerAsync method,
and review the code of the CreateTravelerAsync method. The method sets the ContentType header
to request a JSON response. The method then uses the PostAsync method to send a POST request to
the server.

9. In the CreateTravelerAsync method, right-click the first line of code, point to Breakpoint, and then
click Insert Breakpoint.

10. Locate the comment // TODO: Lab 03 Exercise 2: Task 1.8: Review the UpdateTravelerAsync method,
and review the code of the UpdateTravelerAsync method. The method uses the client.PutAsync
method to send a PUT request to the server.

11. In the UpdateTravelerAsync method, right-click the first line of code, point to Breakpoint, and then
click Insert Breakpoint.
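The create and update calls reviewed above follow a common HttpClient pattern, sketched below. The URI composition and use of Json.NET match the lab's style, but the exact member names in the client app may differ; BaseUri and traveler are assumed to be in scope inside an async method.

```csharp
// Sketch of posting a new traveler as JSON with HttpClient (inside an async method)
var client = new HttpClient();
string json = JsonConvert.SerializeObject(traveler);
var content = new StringContent(json, Encoding.UTF8, "application/json");

// POST creates the resource; a PUT to travelers/{id} with the same content would update it
HttpResponseMessage response =
    await client.PostAsync(new Uri(BaseUri + "travelers"), content);
if (response.IsSuccessStatusCode)
{
    string resultJson = await response.Content.ReadAsStringAsync();
    Traveler created = JsonConvert.DeserializeObject<Traveler>(resultJson);
}
```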

Task 2: Debug the Client App


1. Go back to the virtual machine 20487B-SEA-DEV-A. In Solution Explorer, right-click the
BlueYonder.Companion.Host project, and then click Set as StartUp Project.

2. On the Debug menu, click Start Debugging.

3. Go back to the virtual machine 20487B-SEA-DEV-C. In Solution Explorer, right-click the


BlueYonder.Companion.Client project, and then click Set as StartUp Project.
4. On the Debug menu, click Start Debugging. The client app will start running.

5. Visual Studio 2012 with the BlueYonder.Companion.Client solution opens. The code execution
breaks inside the GetTravelerAsync method, and the line in breakpoint is highlighted in yellow.

6. To continue, press F5.


7. Go back to the virtual machine 20487B-SEA-DEV-A.

8. Visual Studio 2012 with the BlueYonder.Companion solution opens. The code execution breaks
inside the Get method, and the line in breakpoint is highlighted in yellow.

9. Position the mouse cursor over the id parameter to view its value.

10. To continue, press F5.

11. Go back to the virtual machine 20487B-SEA-DEV-C.


12. Visual Studio 2012 with the BlueYonder.Companion.Client solution opens. The code execution
breaks inside the CreateTravelerAsync method, and the line in breakpoint is highlighted in yellow.

13. To continue, press F5.


14. Go back to the virtual machine 20487B-SEA-DEV-A.

15. Visual Studio 2012 with the BlueYonder.Companion solution opens. The code execution breaks
inside the Post method, and the line in breakpoint is highlighted in yellow.
16. Position the mouse cursor over the traveler parameter to view its contents. Expand the traveler
object to view the object's properties.

17. Press F5 to continue.


18. Go back to the virtual machine 20487B-SEA-DEV-C.

19. If you are prompted to allow the app to run in the background, click Allow.
20. Display the app bar by right-clicking or by swiping from the bottom of the screen.

21. Click Search, and in the Search box on the right side enter New. If you are prompted to allow the
app to share your location, click Allow.
22. Wait for the app to show a list of flights from Seattle to New York.

23. Click Purchase this trip.

24. In the First Name box, type your first name.


25. In the Last Name box, type your last name.

26. In the Passport box, type Aa1234567.

27. In the Mobile Phone box, type 555-5555555.

28. In the Home Address box, type 423 Main St..

29. In the Email Address box, type your email address.

30. Click Purchase.

31. Visual Studio 2012 with the BlueYonder.Companion.Client solution opens. The code execution
breaks inside the UpdateTravelerAsync method, and the line in breakpoint is highlighted in yellow.

32. Press F5 to continue.

33. Go back to the virtual machine 20487B-SEA-DEV-A.

34. Visual Studio 2012 with the BlueYonder.Companion solution opens. The code execution breaks
inside the Put method, and the line in breakpoint is highlighted in yellow.

35. Position the mouse cursor over the traveler parameter to view its contents. Expand the traveler
object to view the object's properties.

36. Press F5 to continue.


37. Go back to the virtual machine 20487B-SEA-DEV-C.

38. In the client app, click Close to close the confirmation message, and then close the client app.
39. Go back to the virtual machine 20487B-SEA-DEV-A.

40. To stop debugging in Visual Studio 2012, press Shift+F5.

Results: After you complete this exercise, you will be able to run the BlueYonder Companion client
application and create a traveler when purchasing a trip. You will also be able to retrieve an existing
traveler and update its details.

Module 4: Extending and Securing ASP.NET Web API Services
Lab: Extending Travel Companions ASP.NET
Web API Services
Exercise 1: Creating a Dependency Resolver for Repositories
Task 1: Change FlightController constructor to support Injection
1. In the 20487B-SEA-DEV-A virtual machine, on the Start screen, click the Visual Studio 2012 tile.

2. On the File menu, point to Open, and then click Project/Solution.


3. Browse to D:\Allfiles\Mod04\LabFiles\begin\BlueYonder.Companion.

4. Select the file BlueYonder.Companion.sln and then click Open.

5. In Solution Explorer, under BlueYonder.Companion.Controllers project, double-click


LocationsController.cs.

6. Replace the LocationsController constructor method with the following code:

public LocationsController(ILocationRepository locations)
{
    Locations = locations;
}

7. Press Ctrl+S to save the file.

Note: The same pattern was already applied in the begin solution for the rest of the
controller classes (TravelersController, FlightsController, ReservationsController and
TripsController). Open those classes to review the constructor definition.

Task 2: Create a dependency resolver class


1. In Solution Explorer, double-click BlueYonderResolver.cs, located in the
BlueYonder.Companion.Controllers project.

Note: The BlueYonderResolver class implements the IDependencyResolver interface.

2. Locate the GetService method, and add the following code after the comment // TODO: Lab 4:
Exercise 1: Task 2.1: Add a resolver for the LocationsController class.

if (serviceType == typeof(LocationsController))
return new LocationsController(new LocationRepository());

Task 3: Register the dependency resolver class with HttpConfiguration


1. In Solution Explorer, expand the BlueYonder.Companion.Host project. Expand the App_Start
folder, and then double-click WebApiConfig.cs to open it.

2. Add the following code at the beginning of the Register method to register BlueYonderResolver
as the dependency resolver:

config.DependencyResolver = new BlueYonderResolver();

3. Press Ctrl+S to save the file.

4. In Solution Explorer, under the BlueYonder.Companion.Controllers project, open the


LocationsController.cs file.
5. Click the first line of code in the constructor method, and then press F9 to add a breakpoint.

6. In Solution Explorer, right-click the BlueYonder.Companion.Host project, and then click Set as
StartUp Project.

7. Press F5 to run the app.


8. After the browser opens, append the string Locations to the address in the address bar and press
Enter. The address should be: http://localhost:9239/Locations

9. Switch back to Visual Studio and make sure the code breaks on the breakpoint.

10. Move the mouse cursor over the constructor's parameter and verify it is not null.

11. Press Shift+F5 to stop the debugger.

12. In Solution Explorer, expand the Tests folder, expand the BlueYonder.Companion.Controllers.Tests
project, and then double-click LocationControllerTest.cs.

13. Locate the Initialize method and examine its code. The test initialization process uses the
StubLocationRepository type, which was auto-generated by the Fakes framework. This stub
repository mimics the real location repository. You use the fake repository to test the code instead of
the real repository, which would require a database for the test. When running unit tests, you
should use fake objects to replace external components, in order to reduce the complexity of creating
and executing the test.

Note: For additional information about Fakes, see:


http://go.microsoft.com/fwlink/?LinkID=298770&clcid=0x409
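To make the idea concrete, a stub-based test might look like the following sketch. The stub type and delegate names here are hypothetical stand-ins; the actual Fakes-generated names in LocationControllerTest.cs differ.

```csharp
// Hypothetical sketch of a stub-based unit test; the Fakes-generated
// stub in the lab exposes similar delegate properties under different names.
[TestMethod]
public void Get_ReturnsLocationsFromStubRepository()
{
    var stubRepository = new StubLocationRepository
    {
        // Replace GetAll with an in-memory data source, so the test
        // never touches a database.
        GetAll = () => new[] { new Location { City = "Seattle" } }.AsQueryable()
    };

    var controller = new LocationsController(stubRepository);

    var result = controller.Get();

    Assert.AreEqual(1, result.Count());
}
```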

14. On the Test menu, point to Run, and then click All Tests.

15. Ensure the test passes and then close the Test Explorer window that opened.

Results: You will be able to inject data repositories to the controllers instead of creating them explicitly
inside the controllers. This will decouple the controllers from the implementation of the repositories.

Exercise 2: Adding OData Capabilities to the Flight Schedule Service


Task 1: Add a Queryable action to the flight schedule service
1. In Solution Explorer, right-click the BlueYonder.Companion.Controllers project, and click Manage
NuGet Packages.

2. In the Manage NuGet Packages dialog box, on the navigation pane, expand the Online node and
then click the NuGet official package source node.
3. Press Ctrl+E and type WebApi.OData.

4. In the center pane, click the Microsoft ASP.NET Web API OData package, and then click Install.

5. If the License Acceptance dialog box appears, click I Accept.


6. Wait for installation completion. Click Close to close the dialog box.

7. In Solution Explorer, expand the BlueYonder.Companion.Controllers project, and then double-click LocationsController.cs.

8. Decorate the Get method overload, which has three parameters, with the following attribute.

[Queryable]

9. Remove the three parameters from the Get method and replace the IEnumerable return type with
the IQueryable type. The resulting method declaration should resemble the following code.

public IQueryable<Location> Get()

10. Replace the implementation of the method with the following code.

return Locations.GetAll();

11. Press Ctrl+S to save the file.
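The [Queryable] attribute parses OData query options ($filter, $top, $orderby, and so on) from the request URI and composes them onto the IQueryable returned by the action, so filtering runs against the data source rather than in memory in the controller. Roughly speaking (this translation is an approximation, not the exact code the attribute generates):

```csharp
// A request such as:
//   GET /Locations?$filter=substringof('sea', tolower(City))&$top=10
// is applied by [Queryable] to the returned IQueryable, roughly as if
// the action had been written as:
public IQueryable<Location> Get()
{
    return Locations.GetAll()
        .Where(location => location.City.ToLower().Contains("sea"))
        .Take(10);
}
```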

Task 2: Handle the search event in the client application and query the flight schedule
service by using OData filters
1. Log on to the virtual machine 20487B-SEA-DEV-C as Admin with the password Pa$$w0rd.

2. On the Start screen, click the Visual Studio 2012 tile.

3. On the File menu, point to Open, and then click Project/Solution.

4. Browse to D:\AllFiles\Mod04\LabFiles\begin\BlueYonder.Companion.Client.

5. Select the file BlueYonder.Companion.Client.sln and then click Open.

6. If you are prompted by a Developers License dialog box, click I Agree. If you are prompted by a User
Account Control dialog box, click Yes. Type your email address and a password in the Windows
Security dialog box and then click Sign in. Click Close in the Developers License dialog box.

Note: If you do not have a valid email address, click Sign up now and register for the service. Write
down these credentials and use them whenever an email account is required.

7. In Solution Explorer, expand the BlueYonder.Companion.Shared project, and then double-click the Addresses.cs file.

8. Locate the declaration of the GetLocationsWithQueryUri property, and change it to access the locations
service by using an OData query, replacing the returned value with the following code.

GetLocationsUri + "?$filter=substringof(tolower('{0}'),tolower(City))";
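With this template, a search for the term Sea, for example, produces a request URI similar to the following (the host and port shown are illustrative):

```
http://localhost:9239/Locations?$filter=substringof(tolower('Sea'),tolower(City))
```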

9. Press Ctrl+S to save the file.

Results: Your web application exposes the OData protocol, supporting GET requests for the locations data.

Exercise 3: Applying Validation Rules in the Booking Service


Task 1: Add Data Annotations to the Traveler Class
1. Go back to virtual machine 20487B-SEA-DEV-A.

2. In Solution Explorer, expand the BlueYonder.Entities project, and then double-click Traveler.cs.
3. Add the following using directives to the beginning of the file.

using System.ComponentModel.DataAnnotations;

4. Decorate the FirstName, LastName and HomeAddress properties with the following attribute.

[Required]

5. Decorate the MobilePhone property with the following attribute.

[Phone]

6. Decorate the Email property with the following attribute.

[EmailAddress]

7. Press Ctrl+S to save the file.
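After these steps, the relevant part of the Traveler class should resemble the following sketch (other members are omitted):

```csharp
// Sketch of the annotated properties; the actual Traveler class
// in BlueYonder.Entities contains additional members.
public class Traveler
{
    [Required]
    public string FirstName { get; set; }

    [Required]
    public string LastName { get; set; }

    [Required]
    public string HomeAddress { get; set; }

    [Phone]
    public string MobilePhone { get; set; }

    [EmailAddress]
    public string Email { get; set; }
}
```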

Task 2: Implement the Action Filter to Validate the Model


1. In Solution Explorer, right-click the ActionFilters folder in the BlueYonder.Companion.Controllers
project, point to Add, and then click Class.

2. In the Add New Item dialog box, type ModelValidationAttribute in the Name box. Click Add.

3. Add the following using directive to the beginning of the file.

using System.Web.Http.Filters;
using System.Net;
using System.Net.Http;
using System.Web.Http;

4. Replace the class declaration with the following code.

public class ModelValidationAttribute: ActionFilterAttribute

5. In ModelValidationAttribute class, add the following code.

public override void OnActionExecuting(System.Web.Http.Controllers.HttpActionContext actionContext)
{
    if (!actionContext.ModelState.IsValid)
        actionContext.Response = actionContext.Request.CreateErrorResponse(
            HttpStatusCode.BadRequest, new HttpError(actionContext.ModelState, true));
}

6. Press Ctrl+S to save the file.

Task 3: Apply the custom attribute to the PUT and POST actions in the booking
service
1. In Solution Explorer, in the BlueYonder.Companion.Controllers project, double-click
TravelersController.cs.

2. Add the following using directive in the beginning of the file.



using BlueYonder.Companion.Controllers.ActionFilters;

3. Decorate the Put and Post methods with the following attribute.

[ModelValidation]
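With the attribute applied, validation runs before the action body; an invalid model short-circuits the request with a 400 (Bad Request) response. As an illustrative sketch (the parameter list here is hypothetical, and the real Put implementation updates the repository), a decorated action looks like this:

```csharp
// Illustrative declaration only; the actual Put method in
// TravelersController contains the real update logic.
[ModelValidation]
public HttpResponseMessage Put(string id, Traveler traveler)
{
    // This body executes only when the Traveler model passed the
    // [Required], [Phone], and [EmailAddress] checks.
    return Request.CreateResponse(HttpStatusCode.OK);
}
```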

4. Press Ctrl+S to save the file.

5. On the Build menu, click Build Solution.

Results: Your web application will verify that the minimum necessary information is sent by the client
before trying to handle it.

Exercise 4: Secure the communication between client and server


Task 1: Add an HTTPS Binding in IIS
1. On the Start screen, click Computer to open the File Explorer window.

2. Browse to D:\AllFiles\Mod04\LabFiles\Setup.
3. Double-click the Setup.cmd file. Wait for the script to complete successfully and press any key to
close the window.

4. On the Start screen, click the Internet Information Services (IIS) Manager tile.

5. In the Connections pane, click SEA-DEV12-A (SEA-DEV12-A\Administrator).

6. If an Internet Information Services (IIS) Manager dialog box appears asking about the Microsoft Web
Platform, click No.

7. In the Features View, double-click the Server Certificates icon under the IIS group.

8. In the Server Certificates list, verify you see a certificate issued to SEA-DEV12-A. This certificate was
created by the script you ran in the previous task.
9. In the Connections pane, expand SEA-DEV12-A (SEA-DEV12-A\Administrator). Expand Sites, and then
click Default Web Site.

10. In the Actions pane, click Bindings.


11. In the Site Bindings dialog box, click Add.

12. In the Add Site Binding dialog box, select https in the Type combo box.
13. Select SEA-DEV12-A in the SSL Certificate combo box. Click OK. An HTTPS binding is added to the
Site Binding list.

14. Click Close to close the Site Bindings dialog box.

Note: When you add an HTTPS binding to the Web site bindings, all web applications in
the Web site will support HTTPS.

Task 2: Host ASP.NET Web API web application in IIS


1. Return to Visual Studio 2012.

2. In Solution Explorer, right-click the BlueYonder.Companion.Host project, and then click Properties.
3. On the navigation pane, click the Web tab.

4. On the Web tab, scroll to Servers group. Clear the Use IIS Express check box. If you get a
confirmation dialog for SQL Express, click Yes.

5. Click Create Virtual Directory.

6. Wait for a confirmation message, and click OK.

7. Press Ctrl+S to save the changes.

8. Return to IIS Manager.

9. In the Connections pane, right-click Default Web Site, and click Refresh.

10. Expand Default Web Site. Click BlueYonder.Companion.Host.

11. In the Actions pane, click Browse *:80 (http).



This action opens Internet Explorer and browses to the web application.

12. In Internet Explorer 10, locate the address bar and append locations to the end of the URL. Press
Enter.
13. A prompt appears at the bottom of Internet Explorer 10. Click Open.

14. If you are prompted to select a program to open the file, click Try an App on this PC and select
Notepad from the list of available programs.

15. Explore the contents of the file. It contains the list of locations from the database in the JSON format.
16. In Internet Explorer 10, change the URL in the address bar to https://SEA-DEV12-A/BlueYonder.Companion.Host/locations and press Enter.

17. A prompt appears at the bottom of Internet Explorer 10. Click Open.

18. If you are prompted to select a program to open the file, select Notepad from the list of available
programs.
19. Explore the contents of the file. It contains the list of locations from the database in the JSON format.

Task 3: Test the client application against the secure connection


1. Go back to virtual machine 20487B-SEA-DEV-C.

2. In Solution Explorer, under the BlueYonder.Companion.Shared project, double-click Addresses.cs.


3. In the Addresses class, locate and replace the content of the BaseUri property with the following URL.

https://SEA-DEV12-A/BlueYonder.Companion.Host/

4. Press Ctrl+S to save the changes.

5. Press Ctrl+F5 to start the client app without debugging.


6. If you are prompted to allow the app to run in the background, click Allow.

7. After the client app starts, display the app bar by right-clicking or by swiping from the bottom of the
screen.
8. Click Search, and in the Search box on the right side enter New. If you are prompted to allow the
app to share your location, click Allow.

Note: The search functionality now uses the OData based locations service.

9. Wait for the app to show a list of flights from Seattle to New York.

10. Click Purchase this trip.

11. In the First Name box, type your first name.

12. In the Last Name box, type your last name.

13. In the Passport box, type Aa1234567.

14. In the Mobile Phone box, type 555-5555555.

15. In the Home Address box, type 423 Main St..

16. In the Email Address box, type ABC.

17. Click Purchase.



18. Verify you receive an error message originating from the service saying The Email field is not a valid
e-mail address. Click Close.

19. In the Email Address box, replace ABC with your email address.
20. Click Purchase.

21. Click Close to close the confirmation message, and then close the client app.

Results: The communication with your web application will be secured using a certificate.

Module 5: Creating WCF Services


Lab: Creating and Consuming the WCF
Booking Service
Exercise 1: Creating the WCF Booking Service
Task 1: Create a data contract for the booking request
1. In the 20487B-SEA-DEV-A virtual machine, on the Start screen, click the Visual Studio 2012 tile.

2. On the File menu, point to Open, and then click Project/Solution.

3. In the File name box, type D:\AllFiles\Mod05\LabFiles\begin\BlueYonder.Server\BlueYonder.Server.sln, and then click Open.

4. In Solution Explorer, right-click the BlueYonder.BookingService.Contracts project, point to Add, and then click Class.
5. In the Add New Item dialog box, enter TripDto in the Name box (replace the existing name), and
then click Add.
6. In the TripDto.cs file that opened, add the following using directives.

using System.Runtime.Serialization;
using BlueYonder.Entities;

7. To the TripDto class declaration, add a public access modifier.


8. Add a [DataContract] attribute above the TripDto class declaration.

9. In the TripDto class, add the following code.

[DataMember]
public int FlightScheduleID { get; set; }
[DataMember]
public FlightStatus Status { get; set; }
[DataMember]
public SeatClass Class { get; set; }

10. To save the file, press Ctrl+S.

11. In Solution Explorer, right-click the BlueYonder.BookingService.Contracts project, point to Add, and then click Class.
12. In the Add New Item dialog box, enter ReservationDto in the Name box (replace the existing
name), and then click Add.

13. In the ReservationDto.cs file that opened, add the following using directives.

using System.Runtime.Serialization;
using BlueYonder.Entities;

14. To the ReservationDto class declaration, add a public access modifier.

15. Add a [DataContract] attribute above the ReservationDto class declaration.


16. In the ReservationDto class, add the following code.

[DataMember]
public int TravelerId { get; set; }
[DataMember]
public DateTime ReservationDate { get; set; }
[DataMember]
public TripDto DepartureFlight { get; set; }
[DataMember]
public TripDto ReturnFlight { get; set; }

17. To save the file, press Ctrl+S.

Task 2: Create a service contract for the booking service


1. In Solution Explorer, right-click the BlueYonder.BookingService.Contracts project, point to Add,
and then click New Item.

2. In the Add New Item dialog box, select Interface from the items list, enter IBookingService in the
Name box (replace the existing name), and then click Add.
3. In the IBookingService.cs file that opened, add the following using directives.

using System.ServiceModel;
using BlueYonder.BookingService.Contracts.Faults;

4. To the IBookingService interface declaration, add a public access modifier.


5. Above the IBookingService interface declaration, add the following attribute.

[ServiceContract(Namespace = "http://blueyonder.server.interfaces/")]

6. In the IBookingService interface, add the following code.

[OperationContract]
[FaultContract(typeof(ReservationCreationFault))]
string CreateReservation(ReservationDto request);

7. To save the file, press Ctrl+S.

Task 3: Implement the service contract


1. In Solution Explorer, expand BlueYonder.BookingService.Implementation, and then double-click
BookingService.cs to open it.

2. To implement the IBookingService interface, change the declaration of the class as follows.

public class BookingService : IBookingService

3. Above the BookingService class declaration, add the following attribute.

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]

4. Implement the interface by adding the following method code to the class.

public string CreateReservation(ReservationDto request)
{
}

Note: At this point, the class will not compile because no value is returned from the
method. Ignore this for now, as you will soon write the missing code.

5. In the CreateReservation method, type the following code.



if (request.DepartureFlight == null)
{
throw new FaultException<ReservationCreationFault>(
new ReservationCreationFault
{
Description = "Reservation must include a departure flight",
ReservationDate = request.ReservationDate
}, "Invalid flight info");
}

6. To the CreateReservation method, add the following code below the if statement block.

var reservation = new Reservation
{
TravelerId = request.TravelerId,
ReservationDate = request.ReservationDate,
DepartureFlight = new Trip
{
Class = request.DepartureFlight.Class,
Status = request.DepartureFlight.Status,
FlightScheduleID = request.DepartureFlight.FlightScheduleID
}
};

7. To the end of the CreateReservation method, add the following code.

if (request.ReturnFlight != null)
{
reservation.ReturnFlight = new Trip
{
Class = request.ReturnFlight.Class,
Status = request.ReturnFlight.Status,
FlightScheduleID = request.ReturnFlight.FlightScheduleID
};
}

8. To the CreateReservation method, add the following code below the last if statement block.

using (IReservationRepository reservationRepository = new ReservationRepository(ConnectionName))
{
reservation.ConfirmationCode =
ReservationUtils.GenerateConfirmationCode(reservationRepository);
reservationRepository.Add(reservation);
reservationRepository.Save();
return reservation.ConfirmationCode;
}

9. To save the file, press Ctrl+S.

10. In the CreateReservation method, right-click the first line of code, point to Breakpoint, and then
click Insert Breakpoint.

Results: You will be able to test your results only at the end of the second exercise.

Exercise 2: Configuring and Hosting the WCF Service


Task 1: Configure the console project to host the WCF service with TCP endpoint
1. In Solution Explorer, locate the BlueYonder.BookingService.Host project, and then expand its
References folder.

Note: The begin solution already contains all the project references that are needed for the
project. This includes the BlueYonder.BookingService.Contracts,
BlueYonder.BookingService.Implementation, BlueYonder.DataAccess, and
BlueYonder.Entities projects, as well as the Entity Framework 5.0 package assembly.

2. Right-click the References folder, and then click Add Reference.

3. In the Reference Manager dialog box, expand the Assemblies node in the pane on the left side, and
then click Framework.
4. Scroll down the assemblies list, point to the System.ServiceModel assembly, and then select the
check box next to the assembly name.
5. Click OK.

6. In Solution Explorer, under the BlueYonder.BookingService.Host project, double-click FlightScheduleDatabaseInitializer.cs.
7. Review the file to understand how the Seed method initializes the database with predefined locations
and flights.

8. In Solution Explorer, under the BlueYonder.BookingService.Host project, double-click App.config.

9. Before the </configuration> tag (the last tag in the file), add the following configuration.

<system.serviceModel>
<services>
<service name="BlueYonder.BookingService.Implementation.BookingService">
</service>
</services>
</system.serviceModel>

10. Between the <service> and </service> tags of the App.config file, add the following configuration.

<endpoint name="BookingTcp"
address="net.tcp://localhost:900/BlueYonder/Booking/"
binding="netTcpBinding"
contract="BlueYonder.BookingService.Contracts.IBookingService" />

11. In the App.config file, before the </configuration> tag (the last tag in the file), add the following
configuration.

<connectionStrings>
<add name="BlueYonderServer" connectionString="Data
Source=.\SQLEXPRESS;Database=BlueYonder.Server.Lab5;Integrated Security=SSPI"
providerName="System.Data.SqlClient" />
</connectionStrings>

12. To save the file, press Ctrl+S.

Task 2: Create the service hosting code


1. In Solution Explorer, in the BlueYonder.BookingService.Host project, double-click Program.cs.

2. To the Program class, after the Main method, enter the following methods.

private static void OnServiceOpened(object sender, EventArgs e)
{
    Console.WriteLine("Booking Service Is Running... Press [ENTER] to close.");
}

private static void OnServiceOpening(object sender, EventArgs e)
{
    Console.WriteLine("Booking Service Is Initializing...");
}

3. At the beginning of the file, add the following using directives.

using System.ServiceModel;
using BlueYonder.DataAccess;

4. In the Main method, enter the following code.

var dbInitializer = new FlightScheduleDatabaseInitializer();
dbInitializer.InitializeDatabase(new
TravelCompanionContext(Implementation.BookingService.ConnectionName));

5. In the Main method, after calling the InitializeDatabase method, enter the following code.

var host = new ServiceHost(typeof(Implementation.BookingService));
host.Opening += OnServiceOpening;
host.Opened += OnServiceOpened;
try
{
host.Open();
}
catch (Exception e)
{
host = null;
Console.WriteLine(" *** Error occurred while trying to open the booking service host *** \n\n{0}", e.Message);
Console.WriteLine("\n\n Press [ENTER] to exit.");
}
Console.ReadLine();
if (host == null) return;
try
{
host.Close();
}
catch (Exception)
{
host.Abort();
}

6. To save the file, press Ctrl+S.

7. In Solution Explorer, right-click the BlueYonder.BookingService.Host project, and then click Set as
StartUp Project.
8. To start debugging the project, press F5.

9. Wait until the initialization and running messages appear in the service host console window.

Note: Keep the console window open, as you will need to use it later in the lab.

Results: You will be able to start the console application and open the service host.

Exercise 3: Consuming the WCF Service from the ASP.NET Web API
Booking Service
Task 1: Add a reference to the service contract project in the ASP.NET Web API
projects
1. On the Start screen, right-click the Visual Studio 2012 tile, and then click Open new window at the
bottom.

2. On the File menu, point to Open, and then click Project/Solution.


3. In the File name box, type
D:\AllFiles\Mod05\LabFiles\begin\BlueYonder.Server\BlueYonder.Companion.sln, and then
click Open.

4. On the File menu, point to Add, and then click Existing Project.

5. In the Add Existing Project dialog box, browse to D:\Allfiles\Mod05\Labfiles\begin\BlueYonder.Server\BlueYonder.BookingService.Contracts, click BlueYonder.BookingService.Contracts.csproj, and then click Open.

6. In Solution Explorer, right-click the BlueYonder.Companion.Controllers project, and then click Add
Reference.
7. In the Reference Manager dialog box, in the pane on the left side, click Solution. In the pane on the
right side, point to BlueYonder.BookingService.Contracts, select the check box next to the project
name, and then click OK.

8. In Solution Explorer, right-click the BlueYonder.Companion.Host project, and then click Add
Reference.

9. In the Reference Manager dialog box, in the pane on the left side, click Solution. In the pane on the
right side, point to BlueYonder.BookingService.Contracts, select the check box next to the project
name, and then click OK.

Task 2: Add client configuration to Web.Config


1. In Solution Explorer, expand the BlueYonder.Companion.Host project, and then double-click
Web.config.

2. To the bottom of the file, before the </configuration> tag, add the following configuration.

<system.serviceModel>
<client>
<endpoint address="net.tcp://localhost:900/BlueYonder/Booking"
binding="netTcpBinding"
contract="BlueYonder.BookingService.Contracts.IBookingService" name="BookingTcp"/>
</client>
</system.serviceModel>

3. To save the file, press Ctrl+S.

Task 3: Call the Booking service by using ChannelFactory<T>


1. In Solution Explorer, expand the BlueYonder.Companion.Controllers project, and then double-click
ReservationsController.cs.

2. Add the following using directives.

using BlueYonder.BookingService.Contracts;
using BlueYonder.BookingService.Contracts.Faults;

3. In the ReservationsController class, locate the comment // TODO: Module 5: Exercise 3: Task 3.1.
Create an instance of the channel factory and then add the following code after it.

private ChannelFactory<IBookingService> factory =
    new ChannelFactory<IBookingService>("BookingTcp");

4. Locate the CreateReservationOnBackendSystem method, and then uncomment the code below
the comment // TODO: Module 5: Exercise 3: Task 3.2 Uncomment the Dto creation objects.

5. To create the channel, above the try block, add the following statement.

IBookingService proxy = factory.CreateChannel();

6. In the try block, add the following code.

string confirmationCode = proxy.CreateReservation(request);
(proxy as ICommunicationObject).Close();
return confirmationCode;

7. From the end of the method, remove the following code.

return null;

8. Locate the comment // TODO: Module 5: Exercise 3: Task 3.4: Call the service and return the result,
and replace the catch block with the following code.

catch (FaultException<ReservationCreationFault> fault)
{
    HttpResponseMessage faultedResponse =
        Request.CreateResponse(HttpStatusCode.BadRequest, fault.Detail.Description);
    throw new HttpResponseException(faultedResponse);
}

9. Locate the comment // TODO: Module 5: Exercise 3: Task 3.5: abort the communication in case of
Exception, and after the comment and before calling the throw statement, add the following code.

(proxy as ICommunicationObject).Abort();
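Taken together, steps 5 through 9 implement the common WCF channel-lifetime pattern: create a channel, call it, Close on success, and Abort on failure so the channel does not linger in a faulted state. A consolidated sketch (omitting the Dto creation code that surrounds it in the lab):

```csharp
// Illustrative consolidation of the channel call pattern used in this task.
IBookingService proxy = factory.CreateChannel();
try
{
    string confirmationCode = proxy.CreateReservation(request);
    // Close completes the session gracefully after a successful call.
    (proxy as ICommunicationObject).Close();
    return confirmationCode;
}
catch (FaultException<ReservationCreationFault> fault)
{
    // A declared fault is translated into an HTTP 400 response
    // for the Web API caller.
    HttpResponseMessage faultedResponse =
        Request.CreateResponse(HttpStatusCode.BadRequest, fault.Detail.Description);
    throw new HttpResponseException(faultedResponse);
}
catch (Exception)
{
    // Any other failure faults the channel; Abort releases it immediately.
    (proxy as ICommunicationObject).Abort();
    throw;
}
```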

10. In the ReservationsController class, look for the Post method, and in it locate the following
comment.

// TODO: Module 5: Exercise 3: Task 3.6: Call the booking service

11. After the comment, and before saving the entity, add the following code.

string confirmationCode = CreateReservationOnBackendSystem(newReservation);
newReservation.ConfirmationCode = confirmationCode;

12. To save the file, press Ctrl+S.

13. On the Build menu, click Build Solution.

Task 4: Debug the WCF service with the client app


1. If the ReservationsController.cs file is closed, in Solution Explorer, expand the
BlueYonder.Companion.Controllers project, and then double-click ReservationsController.cs.

2. In the Post method, right-click the line of code that starts with string confirmationCode, point to
Breakpoint, and then click Insert Breakpoint.

3. In Solution Explorer, right-click the BlueYonder.Companion.Host project, and then click Set as
StartUp Project.

4. To start debugging the web application, press F5.


5. In the 20487B-SEA-DEV-C virtual machine, on the Start screen, click the Visual Studio 2012 tile.

6. On the File menu, point to Open, and then click Project/Solution.

7. In the File name box, type D:\AllFiles\Mod05\LabFiles\begin\BlueYonder.Companion.Client\BlueYonder.Companion.Client.sln, and then click Open.

8. If you are prompted by a Developers License dialog box, click I Agree. If you are prompted by a
User Account Control dialog box, click Yes. In the Windows Security dialog box, enter your email
address and a password, and then click Sign in. In the Developers License dialog box, click Close.

Note: If you do not have a valid email address, click Sign up and register for the service.
Write down these credentials and use them whenever an email account is required.

9. In Solution Explorer, right-click the BlueYonder.Companion.Client project, and then click Set as
StartUp Project.
10. To start the client app without debugging, press Ctrl+F5.

11. If you are prompted to allow the app to run in the background, click Allow.

12. Display the app bar by right-clicking or by swiping from the bottom of the screen.
13. Click Search, and in the Search box on the right side enter New. If you are prompted to allow the
app to share your location, click Allow.

14. Wait for the app to show a list of flights from Seattle to New York.
15. Click Purchase this trip.

16. In the First Name box, enter your first name.

17. In the Last Name box, enter your last name.

18. In the Passport box, type Aa1234567.


19. In the Mobile Phone box, type 555-5555555.

20. In the Home Address box, type 423 Main St..


21. In the Email Address box, enter your email address.

22. Click Purchase.

23. Go back to the 20487B-SEA-DEV-A virtual machine, to the Visual Studio 2012 instance where the
BlueYonder.Companion solution is open. The code execution breaks, and the line with the breakpoint is
highlighted in yellow.

24. To step over the line, press F10. Switch to the Visual Studio 2012 instance where the BlueYonder.Server
solution is open. The code execution breaks, and the line with the breakpoint is highlighted in yellow.

25. To run the service code and return to the previous Visual Studio 2012 window, press F5.

26. Hover the mouse cursor over the confirmationCode variable and verify that it is now set to the
confirmation code returned by the WCF service.

27. Press F5 to resume execution, and then go back to the client app in the 20487B-SEA-DEV-C virtual
machine.

28. Click Close to close the confirmation message, and then close the client app.
29. To stop debugging the service, go back to the 20487B-SEA-DEV-A virtual machine, and close the
service host console window.

30. To stop debugging the web application, return to Visual Studio 2012 where the
BlueYonder.Companion solution is open and press Shift+F5.

Results: After you complete this exercise, you will be able to run the Blue Yonder Companion client
application and purchase a trip.

Module 6: Hosting Services


Lab: Hosting Services
Exercise 1: Hosting the WCF Services in IIS
Task 1: Create a web application project
1. On the Start screen, click Computer to open the File Explorer window.

2. Browse to D:\AllFiles\Mod06\LabFiles\Setup.

3. Double-click the setup.cmd file. Wait for the script to complete successfully and press any key to
close the window.

4. On the Start screen, click the Visual Studio 2012 tile.

5. On the File menu, point to Open, and then click Project/Solution.


6. In the File name box, type
D:\AllFiles\Mod06\LabFiles\begin\BlueYonder.Server\BlueYonder.Server.sln, and then click
Open.
7. On the File menu, point to Add, and then click New Project.

8. In the Add New Project dialog box, on the navigation pane, expand the Installed node. Expand the
Visual C# node. Click the Web node. Select ASP.NET Empty Web Application from the list of
templates.
9. In the Name box, type BlueYonder.Server.Booking.WebHost.

10. In the Location box, type D:\AllFiles\Mod06\LabFiles\begin\BlueYonder.Server


11. To add the new project, click OK.
12. In Solution Explorer, right-click the BlueYonder.Server.Booking.WebHost project node, and then
click Manage NuGet Packages.

13. In the Manage NuGet Packages dialog box, on the navigation pane, expand the Online node, and
then click the NuGet official package source node.

14. Press Ctrl+E and type EntityFramework.


15. In the center pane, click the EntityFramework package, and then click Install.
16. Wait for installation to complete. To close the window, click Close.

17. In Solution Explorer, right-click the BlueYonder.Server.Booking.WebHost project, and then click
Add Reference.

18. In the Reference Manager dialog box, expand the Assemblies node in the pane on the left side.
Click Framework.

19. Scroll down the assemblies list, point to the System.ServiceModel assembly, and select the check
box next to the assembly name.

20. In the pane on the left side, click Solution.

21. In the pane on the right side, point to each of the following projects, and select the check box next to
the project name:

BlueYonder.BookingService.Contracts
BlueYonder.BookingService.Implementation

BlueYonder.DataAccess

BlueYonder.Entities

22. To add the references, click OK.

23. In Solution Explorer, expand the BlueYonder.Server.Booking.Host project, right-click FlightScheduleDatabaseInitializer.cs, and click Copy.

24. In Solution Explorer, right-click BlueYonder.Server.Booking.WebHost, and click Paste.


25. In Solution Explorer, right-click the BlueYonder.Server.Booking.WebHost project, point to Add,
and then click New Item.

26. In the Add New Item dialog box, in the pane on the left side, expand the Installed node, expand the
Visual C# node, click the Web node, and then click Global Application Class in the list of items. To
finish, click Add.

27. In the Global.asax.cs file that opened, add the following using directives to the beginning of the file.

using System.Data.Entity;
using BlueYonder.DataAccess;
using BlueYonder.BookingService.Host;

28. Locate the Application_Start method, and add the following code to it.

var dbInitializer = new FlightScheduleDatabaseInitializer();
dbInitializer.InitializeDatabase(new
TravelCompanionContext(BlueYonder.BookingService.Implementation.BookingService.ConnectionName));

29. To save the file, press Ctrl+S.

Task 2: Configure the web application project to use IIS


1. In Solution Explorer, expand the BlueYonder.Server.Booking.Host project, and double-click the
App.config file.
2. To select the content of the file, press Ctrl+A, and then to copy the content of the App.config file to the clipboard, press Ctrl+C.

3. In Solution Explorer, expand the BlueYonder.Server.Booking.WebHost project, and open the Web.config file.

4. To select all the contents of the Web.config file, press Ctrl+A, and then press Delete.

5. To paste the contents of the clipboard into the Web.config file, press Ctrl+V.


6. Within the <system.serviceModel> section group, enter the following configuration.

<serviceHostingEnvironment>
<serviceActivations>
<add service="BlueYonder.BookingService.Implementation.BookingService"
relativeAddress="Booking.svc"/>
</serviceActivations>
</serviceHostingEnvironment>

7. Within the <configuration> section, add the following configuration at the end of the section.

<system.web>
<compilation debug="true" targetFramework="4.5" />
<httpRuntime targetFramework="4.5" />
</system.web>

Note: Make sure you are adding this code as the last child element of the
<configuration> element. Adding this code anywhere else within the Web.config file will make
the application fail.

8. In the <system.serviceModel> section group, locate the <behaviors> section, and in it, locate the
<serviceMetadata> element.

9. Remove the httpGetUrl attribute and value from the <serviceMetadata> element.
10. In the <system.serviceModel> section group, locate the <services> section, and in it, locate the
<endpoint> element.

11. Remove the address attribute and value from the <endpoint> element.

12. To save the changes, press Ctrl+S.

Note: IIS uses the address of the web application to create the service metadata address
and the service endpoint address.
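For illustration, after you complete steps 9 and 11, the trimmed elements should look similar to the following sketch. This is illustrative only; attribute values other than the ones you removed (such as the contract name) may differ in your file.

```xml
<!-- Sketch: metadata element after removing the httpGetUrl attribute -->
<serviceMetadata httpGetEnabled="true" />

<!-- Sketch: endpoint element after removing the address attribute.
     The contract value shown here is illustrative. -->
<endpoint binding="netTcpBinding" contract="IBookingService" />
```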

13. In Solution Explorer, right-click the BlueYonder.Server.Booking.WebHost project, and then click
Properties.

14. On the navigation pane, click the Web tab.


15. On the Web tab, under the Servers section, select the Use Local IIS Web server option. Clear the
Use IIS Express check box. Click Create Virtual Directory.
16. Wait for a confirmation message, and click OK.
17. To save the changes, press Ctrl+S.

18. On the Build menu, click Build Solution.

Task 3: Configure the web applications to support NET.TCP


1. On the Start screen, click the Internet Information Services (IIS) Manager tile.
2. In the Connections pane, expand SEA-DEV12-A (SEA-DEV12-A\Administrator).

3. If an Internet Information Services (IIS) Manager dialog box opens asking about the Microsoft
Web Platform, click No.
4. From the Connections pane, expand Sites, and then click Default Web Site.

5. In the Actions pane, click Bindings.


6. In the Site Bindings dialog box, verify that the net.tcp binding is listed. To close the Site Bindings dialog box, click Close.

Note: The site bindings configure which protocols are supported by the IIS Web Site and
which port, host name, and IP address are used with each protocol.
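Site bindings are stored in the applicationHost.config file of IIS. A default web site with a net.tcp binding looks similar to the following sketch; the bindingInformation values shown are typical defaults and may differ in the lab environment.

```xml
<site name="Default Web Site" id="1">
  <bindings>
    <!-- HTTP on port 80, any IP address, any host name -->
    <binding protocol="http" bindingInformation="*:80:" />
    <!-- net.tcp on port 808, any host name -->
    <binding protocol="net.tcp" bindingInformation="808:*" />
  </bindings>
</site>
```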

7. From the Connections pane, expand Default Web Site.

8. Click BlueYonder.Server.Booking.WebHost.

9. In the Actions pane, click Advanced Settings.

10. In the Advanced Settings dialog box, expand the Behavior node. In the Enabled Protocols box, type http,net.tcp (replace the current value).

11. To save the changes, click OK.

Note: In addition to adding net.tcp to the site bindings list, you also need to enable net.tcp
for each Web application you host in IIS. By enabling net.tcp, WCF will automatically create an
endpoint with NetTcpBinding.
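The value you typed in the Enabled Protocols box is stored on the application element in applicationHost.config, as in the following sketch (other attributes and child elements are omitted):

```xml
<application path="/BlueYonder.Server.Booking.WebHost"
             enabledProtocols="http,net.tcp">
  <!-- virtualDirectory settings omitted -->
</application>
```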

12. On the Start screen, click Computer to open File Explorer. Browse to D:\AllFiles, and double-click the
WcfTestClient shortcut.

13. In the WCF Test Client application, on the File menu, click Add Service, and in the Add Service
dialog box, type http://localhost/BlueYonder.Server.Booking.WebHost/Booking.svc, and then
click OK. Wait until you see the service and endpoints tree in the pane to the left.
14. Close the WCF Test Client application.

Results: You will be able to run the WCF Test Client application and verify that the services are running
properly in IIS.

Exercise 2: Hosting the ASP.NET Web API Services in a Windows Azure Web Role
Task 1: Create a new SQL database server and a new cloud service
1. On the Start screen, click the Internet Explorer tile.

2. Browse to the Windows Azure Management Portal at http://manage.windowsazure.com.

Note: The browser should automatically log you in to the portal. If you are redirected to
the Windows Live ID Sign in page, type your email and password, and then click Sign in.

3. If the Windows Azure Tour dialog box appears, close it.


4. Click SQL DATABASES in the left navigation pane, and then click SERVERS at the top of the page.

5. Click ADD at the bottom of the page. In the CREATE SERVER dialog box, enter the following
information:
LOGIN NAME: BlueYonderAdmin
LOGIN PASSWORD: Pa$$w0rd

CONFIRM PASSWORD: Pa$$w0rd


In the REGION box, select the region closest to your location.
6. Click the V icon at the bottom of the window to create the server, and wait until the server is created.

7. Write down the name of the newly created SQL Database Server. Later in this task, you will embed
this name within the connection string.

8. On the sql databases page, click the name of the newly created server.

9. Click the CONFIGURE tab.


10. In the allowed ip addresses section, add a new firewall rule by filling the following information:

RULE NAME: OpenAllIPs

START IP ADDRESS: 0.0.0.0


END IP ADDRESS: 255.255.255.255

11. Click Save at the bottom of the page.

Note: As a best practice, you should allow only your IP address or your organization's IP
address range to access the database server. However, in this course you will use this database
server for future labs, and your IP address might change in the meantime; therefore, you are
required to allow access from all IP addresses.

12. Click NEW on the lower left of the portal. Click COMPUTE. Click CLOUD SERVICE. Click QUICK CREATE. The URL and REGION/AFFINITY GROUP input boxes are displayed on the right side.

13. In the URL box, enter the following cloud service name: BlueYonderCompanionYourInitials
(YourInitials contains your initials).

14. In the REGION OR AFFINITY GROUP box, select the region closest to your location.

15. Click CREATE CLOUD SERVICE at the lower right corner of the portal. Wait until the cloud service is
created.

16. Click CLOUD SERVICES in the navigation pane.

17. Click the cloud service that you created in the previous step (the one named BlueYonderCompanionYourInitials, where YourInitials contains your initials), and then click CERTIFICATES at the top of the page.
18. Click the UPLOAD link at the bottom of the page.

19. In the UPLOAD CERTIFICATE dialog box, click the BROWSE FOR FILE link.

20. In the Choose File to Upload dialog box, in the File name box, type
D:\AllFiles\certs\CloudApp.pfx, and then click Open.

21. In the PASSWORD box, type 1.

22. Click the V icon at the right side bottom of the window, and wait for the upload to finish.

Note: In this lab, the ASP.NET Web API services are accessible through HTTP and HTTPS. To
use HTTPS, you need to upload a certificate to the Windows Azure cloud service.

23. On the Start screen, right-click the Visual Studio 2012 tile, and then click Open new window at the
bottom.

24. On the File menu, point to Open, and then click Project/Solution.

25. In the File name box, type D:\AllFiles\Mod06\LabFiles\begin\BlueYonder.Server\BlueYonder.Companion.sln, and then click Open.

26. In Solution Explorer, open the Web.config file under BlueYonder.Companion.Host project.
27. Find the <connectionStrings> section. Locate the connection string entry, whose name attribute is
set to TravelCompanion.

28. Locate the two occurrences of {ServerName} placeholder in the connectionString attribute, select
each of them, and replace them with the new SQL Database server name.
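For reference, the TravelCompanion connection string entry has a shape similar to the following sketch. Only the placement of the two {ServerName} placeholders, the login name, and the password are taken from this lab; the database name and remaining settings are omitted here.

```xml
<connectionStrings>
  <!-- {ServerName} appears both in the server address and in the user name,
       which is why there are two occurrences to replace -->
  <add name="TravelCompanion"
       providerName="System.Data.SqlClient"
       connectionString="Server=tcp:{ServerName}.database.windows.net;User ID=BlueYonderAdmin@{ServerName};Password=Pa$$w0rd;..." />
</connectionStrings>
```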
29. At the end of the file, locate the <system.serviceModel> section group, and in it, locate the
<client> section.
30. In the <client> section, locate the <endpoint> element, and change its address attribute value to
net.tcp://localhost/BlueYonder.Server.Booking.WebHost/Booking.svc.

31. To save the changes, press Ctrl+S.
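After steps 29 and 30, the <client> section should resemble the following sketch. The binding and contract attributes are illustrative; keep whatever values are already present in your file.

```xml
<client>
  <endpoint address="net.tcp://localhost/BlueYonder.Server.Booking.WebHost/Booking.svc"
            binding="netTcpBinding"
            contract="IBookingService" />
</client>
```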

Task 2: Add a cloud project to the solution


1. In Solution Explorer, right-click the BlueYonder.Companion.Host project.

2. Select Add Windows Azure Cloud Service Project.

3. Verify that a new BlueYonder.Companion.Host.Azure project was added to the solution and that it
has BlueYonder.Companion.Host as its Web Role.

Note: You can achieve the same result by adding a new Windows Azure Cloud Service
project to the solution, and then manually adding a Web Role project from an existing project.

4. In Solution Explorer, expand the BlueYonder.Companion.Host.Azure project. Expand the Roles folder. Double-click the BlueYonder.Companion.Host role.

5. Click the Certificates tab.



6. On the Certificates tab, click Add Certificate to add a new certificate row with the following information:

Name: BlueYonderCompanionSSL
Store Location: LocalMachine

Store Name: My

Click the Thumbprint box, and then click the ellipsis. Select the BlueYonderSSLCloud certificate, and
then click OK.
7. From the Service Configuration drop down list at the top of the tab, select Local.

8. In the BlueYonderCompanionSSL certificate line, click the Thumbprint box, and then click the
ellipsis. Select the BlueYonderSSLDev certificate, and then click OK.

9. From the Service Configuration drop down list at the top of the tab, select All Configurations.

Note: SSL certificates contain the name of the server so that clients can validate the
authenticity of the server. Therefore, there are different certificates for the local deployment, and
for the cloud deployment.
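These settings are persisted in two files of the cloud project. The certificate declaration (name, store location, and store name) goes to ServiceDefinition.csdef, while each service configuration (.cscfg) maps the name to a concrete thumbprint, which is how the Local and Cloud configurations can point to different certificates. In this sketch, the thumbprint is a placeholder:

```xml
<!-- ServiceDefinition.csdef -->
<Certificates>
  <Certificate name="BlueYonderCompanionSSL"
               storeLocation="LocalMachine" storeName="My" />
</Certificates>

<!-- ServiceConfiguration.*.cscfg -->
<Certificates>
  <Certificate name="BlueYonderCompanionSSL"
               thumbprint="..." thumbprintAlgorithm="sha1" />
</Certificates>
```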

10. Click the Endpoints tab.

11. On the Endpoints tab, click Add Endpoint to add a new endpoint row with the following information:
Name: Endpoint2

Type: Input

Protocol: https

Public Port: 443


SSL Certificate Name: BlueYonderCompanionSSL

12. To save the changes, press Ctrl+S.
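The endpoint you added is written to the Endpoints section of ServiceDefinition.csdef, similar to the following sketch. The name and port of the pre-existing HTTP endpoint are assumptions:

```xml
<Endpoints>
  <!-- Assumed pre-existing HTTP endpoint -->
  <InputEndpoint name="Endpoint1" protocol="http" port="80" />
  <!-- The HTTPS endpoint added in step 11 -->
  <InputEndpoint name="Endpoint2" protocol="https" port="443"
                 certificate="BlueYonderCompanionSSL" />
</Endpoints>
```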


13. In Solution Explorer, right-click the BlueYonder.Companion.Host.Azure project and click Set as
StartUp Project.

14. To start the Windows Azure compute emulator without debugging, press Ctrl+F5.

15. When the two web browsers open, verify they use the addresses http://127.0.0.1:81 and
https://127.0.0.1:444.

Note: The endpoint configuration of the role uses ports 80 and 443 for the HTTP and
HTTPS endpoints. However, the local IIS Web server already uses those ports, so the emulator
needs to use different ports.

16. Log on to the virtual machine 20487B-SEA-DEV-C as Admin with the password Pa$$w0rd.

17. On the Start screen, click the Visual Studio 2012 tile.

18. On the File menu, point to Open, and then click Project/Solution.

19. In the File name box, type D:\AllFiles\Mod06\LabFiles\begin\BlueYonder.Companion.Client\BlueYonder.Companion.Client.sln, and then click Open.

20. If you are prompted by a Developers License dialog box, click I Agree. If you are prompted by a
User Account Control dialog box, click Yes. Type your email address and a password in the
Windows Security dialog box, and then click Sign in. In the Developers License dialog box, click
Close.

Note: If you do not have a valid email address, click Sign up and register for the service.
Write down these credentials and use them whenever an email account is required.

21. In Solution Explorer, right-click the BlueYonder.Companion.Client project, and then click Set as
StartUp Project.

22. The client app is already configured to use the Windows Azure compute emulator. To start the client
app without debugging, press Ctrl+F5.

Note: Normally, the Windows Azure Emulator is not accessible from other computers on
the network. For purposes of testing this lab from a Windows 8 client, a routing module was
installed on the server's IIS, routing the incoming traffic to the emulator.

23. If you are prompted to allow the app to run in the background, click Allow.
24. After the client app starts, display the app bar by right-clicking or by swiping from the bottom of the
screen.

25. Click Search, and in the Search box on the right side enter New. If you are prompted to allow the
app to share your location, click Allow.

26. Wait for the app to show a list of flights from Seattle to New York.

27. Close the client app.


28. Leave Visual Studio 2012 open, and return to the virtual machine 20487B-SEA-DEV-A.

Task 3: Deploy the cloud project to Windows Azure


1. On the View menu, click Task List.

2. In Task List, select Comments in the drop-down list at the top.

3. Double-click the comments to view the code that was marked, and verify that no calls are made to the following methods:

UpdateReservationOnBackendSystem

CreateReservationOnBackendSystem

Note: Prior to the deployment of the cloud project to Azure, all the on-premises WCF calls
were disabled.
These include calls from the Reservation Controller class and the Trips Controller class.
After you deploy the ASP.NET Web API project to Windows Azure, it cannot call the on-premises
WCF service, so for now, the WCF Service calls are disabled. In Module 7, "Windows Azure Service
Bus" in Course 20487, you will learn how a cloud application can connect to an on-premises
service.

4. In Solution Explorer, right-click the BlueYonder.Companion.Host.Azure project, and then click


Publish.

5. If you already added your Windows Azure subscription information to Visual Studio 2012, select your
subscription from the drop down list and skip to step 9.

6. In the Publish Windows Azure Application dialog box, click the Sign in to download credentials
hyperlink.

Note: The browser automatically logs you in to the portal. When you are redirected to the
Windows Live ID Sign in page, type your email address and password, and then click Sign in.

7. The publish settings file is generated, and a Do you want to open or save... Internet Explorer dialog
box appears at the bottom. Click the arrow within the Save button. Select the Save as option and
specify the following location:
D:\AllFiles\Mod06\LabFiles. Click Save. If a Confirm Save As dialog box appears, click Yes.

8. In Visual Studio 2012, return to Publish Windows Azure Application dialog box. Click Import. Type
D:\AllFiles\Mod06\LabFiles and select the file that you downloaded in the previous step, and then
click Open.
9. Make sure that your subscription is selected under Choose your subscription section, and then click
Next.
10. If the Create Windows Azure Services dialog box appears, click Cancel.

11. On the Common Settings tab, click the Cloud Service box. Select BlueYonderCompanionYourInitials (YourInitials contains your initials).

12. On the Advanced Settings tab, click the Storage Account box. Select Create New. In the Create Windows Azure Services dialog box that opens, fill in the following information:

Name: byclyourinitials (yourinitials contains your initials, in lower-case).

Location: select the region closest to your location.

Note: The abbreviation bycl stands for Blue Yonder Companion Labs. An abbreviation is
used because storage account names are limited to 24 characters. The abbreviation is in lower-
case because storage account names are in lower-case. Windows Azure Storage is covered in
depth in Module 9, "Windows Azure Storage" in Course 20487.

13. Click OK and wait for the storage account to be created.

Note: If you get a message saying the service creation failed because you reached your
storage account limit, delete one of your existing storage accounts, and then retry the step. If you
do not know how to delete a storage account, consult the instructor.

14. In the Deployment label box, type Lab6.

15. Clear the Append current date and time check box.

16. Click Publish to start the publishing process. This might take several minutes to complete.

Task 4: Test the cloud service against the client application


1. Go back to the 20487B-SEA-DEV-C virtual machine.

2. In Solution Explorer, under the BlueYonder.Companion.Shared project, double-click Addresses.cs.



3. Locate the BaseUri property and change the string value to


https://blueyondercompanionYourInitials.cloudapp.net/ (replace YourInitials with your initials).

4. To save the file, press Ctrl+S.


5. To start the app without debugging, press Ctrl+F5.

6. If you are prompted to allow the app to run in the background, click Allow.

7. After the client app starts, display the app bar by right-clicking or by swiping from the bottom of the
screen.
8. Click Search, and in the Search box on the right side, type New. If you are prompted to allow the
app to share your location, click Allow.

9. Wait for the app to show a list of flights from Seattle to New York.

10. Close the client app.

Results: You will verify the application works locally in the Windows Azure compute emulator, and then
deploy it to Windows Azure and verify it works there too.

Exercise 3: Hosting the Flights Management Web Application in a Windows Azure Web Site
Task 1: Create a new Web Site in Azure
1. Go back to the 20487B-SEA-DEV-A virtual machine.

2. On the Start screen, click the Internet Explorer tile.


3. Browse to the Windows Azure Management Portal at http://manage.windowsazure.com

Note: The browser automatically logs you in to the portal. When you are redirected to the
Windows Live ID Sign in page, type your email address and password, and then click Sign in.

4. On the lower left of the portal, click NEW. Click COMPUTE. Select WEB SITE. Click QUICK CREATE.
The URL and REGION input boxes are displayed to the right side.

5. In the URL box, enter the following Web Site name: BlueYonderCompanionYourInitials (YourInitials
contains your initials).

6. In the REGION box, select the region closest to your location.

7. At the lower right corner of the portal, click CREATE WEB SITE, and wait for the Web Site creation to
complete.
8. From the pane to the left, click WEB SITES, and then click the name of your newly created Web Site.

9. At the top of the page, click DASHBOARD.


10. Under the QUICK GLANCE section, click the Download the publish profile link.

11. A Do you want to open or save Internet Explorer dialog box appears at the bottom. Click the arrow within the Save button, select the Save as option, and specify the following location: D:\AllFiles\Mod06\LabFiles. Click Save.

Note: The publishing profile file includes the information required to publish a Web
application to the Web Site. This is an alternative publish method to downloading the
subscription file, as shown in Lesson 2, "Hosting Services in Windows Azure", Demo 1, "Hosting in
Windows Azure" in Course 20487. The difference is that by importing the subscription file, you
can publish to any of the Web Sites managed by your Windows Azure subscription, whereas
importing the publish profile file of a Web Site will only allow you to publish to that specific Web
Site.

Task 2: Upload the Flights Management web application to the new Web Site by
using the Windows Azure Management Portal
1. On the Start screen, right-click the Visual Studio 2012 tile, and then click Open new window at the
bottom.

2. On the File menu, point to Open, and then click Project/Solution.


3. In the File name box, type D:\AllFiles\Mod06\LabFiles\begin\BlueYonder.Server\BlueYonder.Companion.FlightsManager.sln, and then click Open.

4. In Solution Explorer, expand the BlueYonder.FlightsManager project, and then double-click the
Web.config file.

5. In the <appSettings> section, locate the webapi:BlueYonderCompanionService key. In the value attribute, substitute the {YourInitials} placeholder with the initials you used when you created the Windows Azure cloud service earlier.

6. In Solution Explorer, right-click the BlueYonder.FlightsManager project, and then click Publish.
7. In the Publish Web dialog box, click Import.

8. In the Import Publish Settings dialog box, in the File name box, type D:\AllFiles\Mod06\LabFiles,
and then press Enter.

9. Select the publish settings file that you downloaded in the previous task, and then click Open.

10. Click Publish. The deployment process starts. When the process is complete, Internet Explorer
automatically opens with the URL of the deployed site.

11. In the browser, select Paris, France from the drop down list on the left, select Rome, Italy from the
drop down list on the right, and then click Filter. Verify you see flight schedules.

Results: After you publish the flights manager web application, you will open the web application in a
browser and verify that it works properly and can communicate with the web role you deployed in
the previous exercise.

Module 7: Windows Azure Service Bus


Lab: Windows Azure Service Bus
Exercise 1: Using a Service Bus Relay for the WCF Booking Service
Task 1: Create the Service Bus namespace by using the Windows Azure Management
Portal
1. In the 20487B-SEA-DEV-A virtual machine, on the Start screen, click the Computer tile to open File
Explorer.

2. Browse to D:\AllFiles\Mod07\LabFiles\Setup.
3. Double-click the Setup.cmd file. When prompted for information, provide it according to the
instructions.

Note: You may see a warning saying the client model does not match the server model.
This warning may appear if there is a newer version of the Windows Azure PowerShell cmdlets. If
this message is followed by an error message, inform the instructor; otherwise, you can
ignore the warning.

4. Write down the name of the cloud service that is shown in the script. You will use it later in the lab.
5. Wait for the script to finish, and then press any key to close the script window.

6. On the Start screen, click the Internet Explorer tile.


7. Browse to the Windows Azure Management Portal at http://manage.windowsazure.com.

Note: The browser should automatically log you in to the portal. If you are redirected to
the Windows Live ID Sign in page, type your email and password, and then click Sign in.

8. If the Windows Azure Tour dialog box appears, close it.


9. In the left navigation pane, click SERVICE BUS, and then at the bottom of the page, click CREATE.

10. In the CREATE A NAMESPACE dialog box enter the following information:

NAMESPACE NAME: BlueYonderServerLab07YourInitials (Replace YourInitials with your initials).


REGION: Select the region closest to your location.

11. To create the namespace, click the V icon at the bottom of the window, and wait until the namespace
is active.

12. To select the namespace you created, click its STATUS column.

13. At the bottom of the page, click CONNECTION INFORMATION.

14. In the ACCESS CONNECTION INFORMATION dialog box, locate the DEFAULT KEY field, and then click the Copy icon to the right of it. If you are prompted to allow access to your clipboard, click Allow access.

15. To close the dialog box, click the OK button on the right side of the window.

Task 2: Add a new WCF Endpoint with a relay binding


1. On the Start screen, click the Visual Studio 2012 tile.

2. On the File menu, point to Open, and then click Project/Solution.

3. In the File name box, type D:\AllFiles\Mod07\LabFiles\begin\BlueYonder.Server\BlueYonder.Server.sln, and then click Open.
4. In Solution Explorer, right-click the BlueYonder.Server.Booking.WebHost project node, and then click
Manage NuGet Packages.

5. In the Manage NuGet Packages dialog box, on the navigation pane, expand the Online node, and
then click the NuGet official package source node.

6. Press CTRL+E, and then type WindowsAzure.ServiceBus.

7. In the center pane, click the Windows Azure Service Bus package, and then click Install. If a License
Acceptance dialog box appears, click I Accept.

8. Wait for installation to complete. Click Close to close the window.

9. In Solution Explorer, expand the BlueYonder.Server.Booking.WebHost project, and then double-


click Web.config.

10. In the <system.serviceModel> section group, locate the <services> section, and inside it, locate the
<endpoint> element named BookingTcp.
11. In the <endpoint> element, change the value of the binding attribute from netTcpBinding to
netTcpRelayBinding.

12. Add the address attribute to the <endpoint> element with the value
sb://BlueYonderServerLab07YourInitials.servicebus.windows.net/booking (Replace YourInitials
with your initials).
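After steps 11 and 12, the BookingTcp endpoint should resemble the following sketch (the contract attribute is illustrative; keep the value already in your file):

```xml
<endpoint name="BookingTcp"
          address="sb://BlueYonderServerLab07YourInitials.servicebus.windows.net/booking"
          binding="netTcpRelayBinding"
          contract="IBookingService" />
```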

13. In the <system.serviceModel> section group, locate the <behaviors> section.

14. In the <behaviors> section, add the following configuration.

<endpointBehaviors>
<behavior name="sbTokenProvider">
<transportClientEndpointBehavior>
<tokenProvider>
<sharedSecret issuerName="owner" issuerSecret="{IssuerSecret}" />
</tokenProvider>
</transportClientEndpointBehavior>
</behavior>
</endpointBehaviors>

15. Substitute the {IssuerSecret} placeholder by pasting the Service Bus namespace access key that you
copied in the previous task.

Note: Visual Studio Intellisense uses built-in schemas to perform validations. Therefore, it will not
recognize the transportClientEndpointBehavior behavior extension, and will display a warning.
Disregard this warning.

16. In the <system.serviceModel> section group, locate the <services> section, and inside it, locate the
<endpoint> element.
17. Add a behaviorConfiguration attribute to the element, and then set its value to sbTokenProvider.

18. Locate the <system.webServer> section group, and then add the following configuration to it.

<applicationInitialization>
<add initializationPage="/booking.svc"/>
</applicationInitialization>

19. To save the changes, press CTRL+S.



Note: Application initialization automatically sends requests to specified addresses after the
Web application loads. Sending the request to the service will make the service host load and
initiate the Service Bus connection.

20. On the Start screen, click the Internet Information Services (IIS) Manager tile.

21. In the Connections pane, expand SEA-DEV12-A (SEA-DEV12-A\Administrator).

22. If an Internet Information Services (IIS) Manager dialog box pops up asking about the Microsoft
Web Platform, click No.

23. In the Connections pane, click the Application Pools node.

24. In the Features View, right-click DefaultAppPool, and then click Advanced Settings.
25. In the Advanced Settings dialog box, set the Start Mode option to AlwaysRunning.

26. To save the settings, click OK.

Note: Setting the start mode to AlwaysRunning will load the application pool
automatically after IIS loads. To use application initialization, the application pool must be
running.

27. From the Connections pane, expand the Sites node, and then expand the Default Web Site node.

28. Right-click the BlueYonder.Server.Booking.WebHost node, point to Manage Application, and then
click Advanced Settings.

29. In the Advanced Settings dialog box, set the Preload Enabled option to True. This setting will start
the service after IIS starts.
30. To save the changes, click OK.

Note: When preload is enabled, IIS will simulate requests after the application pool starts.
The list of requests is specified in the application initialization configuration that you already
created.
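Both settings are persisted in IIS's applicationHost.config. The following sketch shows the two changes, with unrelated attributes and elements omitted:

```xml
<!-- Application pool start mode (step 25) -->
<add name="DefaultAppPool" startMode="AlwaysRunning" />

<!-- Application preload (step 29) -->
<application path="/BlueYonder.Server.Booking.WebHost"
             preloadEnabled="true">
  <!-- remaining application settings unchanged -->
</application>
```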

31. Return to Visual Studio 2012, and in Solution Explorer, right-click the
BlueYonder.Server.Booking.WebHost project, and then click Build.

32. Return to IIS Manager, and in the Connections pane, click the Application Pools node.

33. In the Features View, right-click DefaultAppPool, and then click Recycle.

Task 3: Configure the ASP.NET Web API back-end service to use the new relay
endpoint
1. On the Start screen, right-click the Visual Studio 2012 tile, and then click Open new window at the
bottom.

2. On the File menu, point to Open, and then click Project/Solution.

3. In the File name box, type D:\AllFiles\Mod07\LabFiles\begin\BlueYonder.Server\BlueYonder.Companion.sln, and then click Open.

4. In Solution Explorer, right-click the BlueYonder.Companion.Host project node, and then click Manage
NuGet Packages.

5. In the Manage NuGet Packages dialog box, on the navigation pane, expand the Online node and
then click the NuGet official package source node.

6. Press Ctrl+E and type WindowsAzure.ServiceBus.


7. In the center pane, click the Windows Azure Service Bus package, and then click Install.

8. Wait for installation to complete. Click Close.

9. In Solution Explorer, expand the BlueYonder.Companion.Host project, and then double-click


Web.config.
10. In the <system.serviceModel> section group, locate the <client> section, and inside it, locate the
<endpoint> element.

11. Change the value of the binding attribute from netTcpBinding to netTcpRelayBinding.

12. Substitute the value of the address attribute with the following value:
sb://BlueYonderServerLab07YourInitials.servicebus.windows.net/booking (Replace YourInitials
with your initials).
13. In the <system.serviceModel> section group, add the following configuration.

<behaviors>
<endpointBehaviors>
<behavior>
<transportClientEndpointBehavior>
<tokenProvider>
<sharedSecret issuerName="owner" issuerSecret="{IssuerSecret}" />
</tokenProvider>
</transportClientEndpointBehavior>
</behavior>
</endpointBehaviors>
</behaviors>

14. Substitute the {IssuerSecret} placeholder by pasting the Service Bus namespace access key that you
copied in the first task.

15. To save the file, press Ctrl+S.

16. To close the file, press Ctrl+F4.


Note: Visual Studio Intellisense uses built-in schemas to perform validations. Therefore, it will not
recognize transportClientEndpointBehavior behavior extension, and will display a warning. Disregard
this warning.

Task 4: Test the WCF service


1. On the Start screen, click the Internet Explorer tile.

2. Browse to the Windows Azure Management Portal at http://manage.windowsazure.com.

3. In the left navigation pane, click SERVICE BUS, and then click the name of the BlueYonderServerLab07YourInitials Service Bus namespace (replace YourInitials with your initials).

4. Click RELAYS.

5. Verify that the booking relay is listed.

6. Go back to the Visual Studio 2012 instance with the open BlueYonder.Companion solution.

7. On the View menu, click Task List.

8. In Task List, click Comments in the drop-down list at the top.



9. Double-click the comment // TODO: Lab 07 Exercise 1: Task 4.3: Bring back the call to the backend
WCF service.

10. Uncomment the call to the CreateReservationOnBackendSystem method. Make sure the return
value of the method is stored in the confirmationCode variable.
11. To save the file, press Ctrl+S.

12. In Solution Explorer, right-click the BlueYonder.Companion.Host.Azure, and then click Publish.

13. In the Publish Windows Azure Application dialog box, click Import.
14. Type D:\AllFiles\Mod07\LabFiles in the File name box, and then click Open. Select your publish
settings file (the file should have the .publishsettings extension), and then click Open.

15. Click Next.

16. On the Common Settings tab, click the Cloud Service box, and then select the cloud service that
matches the name you wrote down at the beginning of the lab, after running the setup script.
17. Click Publish to start the publishing process. If a Deployment Environment In Use dialog box
appears, click Replace. This might take several minutes to complete.

18. Switch to the Visual Studio 2012 instance with the open BlueYonder.Server solution.
19. In Solution Explorer, expand the BlueYonder.BookingService.Implementation project, and then
double-click BookingService.cs.

20. In the CreateReservation method, right-click the line of code that starts with
if(request.DepartureFlight, point to Breakpoint, and then click Insert Breakpoint.

21. In Solution Explorer, right-click the BlueYonder.Server.Booking.WebHost project, and then click
Set as StartUp Project.

22. To start debugging the WCF application, press F5.


23. Log on to the virtual machine 20487B-SEA-DEV-C as Admin with the password Pa$$w0rd.

24. On the Start screen, click the Visual Studio 2012 tile.
25. On the File menu, point to Open, and then click Project/Solution.

26. In the File name box, type
D:\AllFiles\Mod07\LabFiles\begin\BlueYonder.Companion.Client\BlueYonder.Companion.Client.sln, and then click Open.

27. If you are prompted by a Developers License dialog box, click I Agree. If you are prompted by a
User Account Control dialog box, click Yes. Type your email address and a password in the
Windows Security dialog box, and then click Sign in. Click Close in the Developers License dialog
box.

Note: If you do not have a valid email address, click Sign up and register for the service.
Write down these credentials and use them whenever an email account is required.

28. In Solution Explorer, expand the BlueYonder.Companion.Shared project, and then double-click
Addresses.cs.

29. In the Addresses class, locate the BaseUri property, and then replace the {CloudService} string with
the Windows Azure Cloud Service name you wrote down at the beginning of this lab.

30. To save the changes, press Ctrl+S.



31. To start the client app without debugging, press Ctrl+F5.

32. If you are prompted to allow the app to run in the background, click Allow.

33. Display the app bar by right-clicking or by swiping from the bottom of the screen.

34. Click Search, and then in the Search box on the right side enter New. If you are prompted to allow
the app to share your location, click Allow.

35. Wait for the app to show a list of flights from Seattle to New York.
36. Click Purchase this trip.

37. In the First Name box, type your first name.

38. In the Last Name box, type your last name.


39. In the Passport box, type Aa1234567.

40. In the Mobile Phone box, type 555-5555555.

41. In the Home Address box, type 423 Main St.

42. In the Email Address box, type your email address.

43. Click Purchase.

44. Go back to the 20487B-SEA-DEV-A virtual machine, to the Visual Studio 2012 instance with the open
BlueYonder.Server solution. The code execution breaks, and the line with the breakpoint is highlighted in
yellow.

45. To resume execution, press F5, and then go back to the 20487B-SEA-DEV-C virtual machine, to the
client app.

46. To close the confirmation message, click Close, and then close the client app.

47. Go back to the 20487B-SEA-DEV-A virtual machine.

48. Return to Visual Studio 2012 where the BlueYonder.Server solution is open and press Shift+F5 to
stop debugging the WCF application.

Results: After you complete this exercise, you can run the client app and book a flight, and have the
ASP.NET Web API services running in the Windows Azure Web Role communicate with the on-premises
WCF services by using Windows Azure Service Bus Relays.

Exercise 2: Publishing Flight Updates to Clients by Using Windows Azure Service Bus Queues
Task 1: Send flight update messages to the Service Bus Queue
1. On the Start screen, click the Internet Explorer tile.

2. Browse to the Windows Azure Management Portal at http://manage.windowsazure.com. If the


Windows Azure Tour dialog box appears, close it.

Note: The browser should automatically log you in to the portal. If you are redirected to
the Windows Live ID Sign in page, type your email and password, and then click Sign in.

3. In the navigation pane, click SERVICE BUS, click in the STATUS column of the service bus row you
created in the previous exercise, and then at the bottom of the page, click CONNECTION
INFORMATION.

4. In the ACCESS CONNECTION INFORMATION dialog box, locate the CONNECTION STRING field, and then
click the Copy icon to the right of it.

If you are prompted to allow access to your clipboard, click Allow access.

5. Click OK on the right side of the window to close the dialog box.
6. Return to Visual Studio 2012 where the BlueYonder.Companion solution is open.

7. In Solution Explorer, expand the BlueYonder.Companion.Host.Azure project, expand the Roles


folder, and then double-click the BlueYonder.Companion.Host web role.

8. Click the Settings tab, and then click Add Setting.

9. Enter a setting with the following information:

Name: Microsoft.ServiceBus.ConnectionString
Type: String

Value: Press Ctrl+V to paste the Service Bus connection string you copied from the portal.
10. Press Ctrl+S to save the changes.

11. In Solution Explorer, expand the BlueYonder.Companion.Controllers project node, and then double-click
ServiceBusQueueHelper.cs to open it.

Note: The BlueYonder.Companion.Controllers project already contains the Windows Azure Service Bus
and Windows Azure Configuration Manager NuGet packages. When you install the Windows Azure
Service Bus NuGet package, the Windows Azure Configuration Manager package is installed automatically.

12. Replace the return null statement in the ConnectToQueue method with the following code.

string connectionString =
CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

13. Add the following code to the end of the method to create the queue if it does not exist.

if (!namespaceManager.QueueExists(QueueName))
{
    namespaceManager.CreateQueue(QueueName);
}

14. Add the following code to the end of the method to return a QueueClient object.

return QueueClient.CreateFromConnectionString(connectionString, QueueName);

15. Press Ctrl+S to save the changes.
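Taken together, steps 12 through 14 produce a ConnectToQueue method similar to the following sketch. The method's exact signature and the QueueName constant are defined in the lab's starter code; the version shown here is an assumption.

```csharp
public static QueueClient ConnectToQueue()
{
    // Read the Service Bus connection string from the role configuration (step 12)
    string connectionString =
        CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");
    var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

    // Create the queue if it does not already exist (step 13)
    if (!namespaceManager.QueueExists(QueueName))
    {
        namespaceManager.CreateQueue(QueueName);
    }

    // Return a client for sending messages to the queue (step 14)
    return QueueClient.CreateFromConnectionString(connectionString, QueueName);
}
```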

16. In Solution Explorer, under the BlueYonder.Companion.Controllers project, double-click
FlightsController.cs.
17. Add the following using directive to the beginning of the file.

using Microsoft.ServiceBus.Messaging;

18. Add the following static field to the class.

private static QueueClient Client;

19. Create a static constructor in the class by adding the following code to it.

static FlightsController()
{
Client = ServiceBusQueueHelper.ConnectToQueue();
}

20. Locate the Put method. Place the following code after the comment // TODO: Lab07, Exercise 2, Task
1.6 : Send a flight update message to the queue

updatedSchedule.FlightId = id;
var msg = new BrokeredMessage(updatedSchedule);
msg.ContentType = "UpdatedSchedule";
Client.Send(msg);

21. Press Ctrl+S to save the changes.


22. In Solution Explorer, under the BlueYonder.Companion.Controllers project, double-click
NotificationsController.cs.

23. Explore the content of the Register method and the static constructor of the
NotificationsController class. The same pattern of creating a QueueClient object in the static constructor
and then sending the update messages by using the BrokeredMessage class is applied in this controller.

Note: The Register method subscribes clients to flight update notifications. When a flight
update message is sent to the queue, every subscribed client waiting for that flight will be
notified by using the Windows Push Notification Services (WNS).
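The pattern described in the note can be sketched as follows. This is an illustration only, not the actual lab code; the Register method's real signature and the RegisterNotificationsRequest type come from the starter solution, and the "Subscription" content type matches the worker role's message handler added later in this lab.

```csharp
public class NotificationsController : ApiController
{
    // One QueueClient shared by all requests, created in the static constructor
    private static QueueClient Client;

    static NotificationsController()
    {
        Client = ServiceBusQueueHelper.ConnectToQueue();
    }

    public void Register(RegisterNotificationsRequest request)
    {
        // Send the subscription request to the Service Bus Queue;
        // the worker role dispatches on the message's ContentType
        var msg = new BrokeredMessage(request);
        msg.ContentType = "Subscription";
        Client.Send(msg);
    }
}
```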

Task 2: Create a Windows Azure Worker role that receives messages from a Service
Bus Queue
1. In Solution Explorer, under the BlueYonder.Companion.Host.Azure project, right-click Roles, then
point to Add, and then click New Worker Role Project.
2. In the Add New .NET Framework 4.5 Role Project dialog box, click Worker Role with Service Bus
Queue.

3. In the Name box, type BlueYonder.Companion.WNS.WorkerRole, and then click Add.



4. In Solution Explorer, under the BlueYonder.Companion.Host.Azure project, under the Roles folder,
double-click the BlueYonder.Companion.Host web role.

5. Click the Settings tab. In the Microsoft.ServiceBus.ConnectionString setting row, click the Value
cell, and then press Ctrl+C.
6. In Solution Explorer, under the BlueYonder.Companion.Host.Azure project, under the Roles folder,
double-click the BlueYonder.Companion.WNS.WorkerRole worker role.

7. Click the Settings tab. In the Microsoft.ServiceBus.ConnectionString setting row, click the Value
cell, and then press Ctrl+V.

8. Press Ctrl+S to save the changes.

Task 3: Handle the subscription and update messages


1. In Solution Explorer, right-click the root solution node, point to Add, and then click Existing Project.

2. In the Add Existing Project dialog box, browse to


D:\Allfiles\Mod07\LabFiles\begin\BlueYonder.Server\BlueYonder.Companion.WNS, select
BlueYonder.Companion.WNS.csproj, and then click Open.

Note: The BlueYonder.Companion.WNS project includes code that handles WNS
subscriptions and notifications. WNS is outside the scope of this course; however, you can open the
project's code and observe how WNS is used.

3. In Solution Explorer, under the BlueYonder.Companion.Host project, double-click Web.config.


4. Select the entire <connectionStrings> element, and then press Ctrl+C to copy the connection string
to the clipboard.

5. In Solution Explorer, expand the BlueYonder.Companion.WNS.WorkerRole project, and then


double-click App.config.

6. Place the text cursor between the <configuration> and <system.diagnostics> tags, and then press
Ctrl+V to paste the connection string in the configuration.
7. Locate the <appSettings> element and add the following configuration to it.

<add key="ClientSecret" value="1r7Bt7zllZLfDM4W4Q7BxAZEze2qnvuN" />


<add key="PackageSID" value="ms-app://s-1-15-2-1252400722-2342768715-2725817281-
1266214681-2802664595-2493784738-901281077" />

8. Press Ctrl+S to save the changes.

Note: You can find the above configuration in the WnsConfiguration.xml file, under the
lab's Assets folder.
The ClientSecret and PackageSID settings were retrieved by the Windows 8 client team during
the upload process of the client app to the Windows Store.

9. In Solution Explorer, right-click the BlueYonder.Companion.WNS.WorkerRole project, and then


click Add Reference.

10. In the Reference Manager dialog box, in the pane on the left side, click Solution.

11. In the pane on the right side, point to each of the following projects, and select the check box next to
the project name:

BlueYonder.Companion.WNS

BlueYonder.Companion.Entities

BlueYonder.DataAccess.Interfaces

BlueYonder.DataAccess

BlueYonder.Entities

12. Click OK to add the references.

13. In Solution Explorer, right-click the BlueYonder.Companion.WNS.WorkerRole project, point to


Add, and then click Existing Item.

14. Type D:\AllFiles\Mod07\LabFiles\Assets in the File name box, and press Enter.
15. Select MessageHandler.cs from the file list and then click Add.

Note: The MessageHandler class contains the code to subscribe clients to WNS and send
notifications to clients when their flights are rescheduled.

16. In Solution Explorer, under the BlueYonder.Companion.WNS.WorkerRole project, double-click


WorkerRole.cs.

17. Add the following code to the beginning of the OnStart method.

WNSManager.Authenticate();

18. In the WorkerRole class, locate the QueueName constant, and change its string value from
ProcessingQueue to FlightUpdatesQueue.

19. Add the following using directives to the beginning of the file.

using BlueYonder.Companion.Entities;
using BlueYonder.Entities;

20. In the WorkerRole class, locate the Run method. Locate the // Process the message comment and
add the following code after the Trace.WriteLine method call.

switch (receivedMessage.ContentType)
{
    case "Subscription":
        MessageHandler.CreateSubscription(receivedMessage.GetBody<RegisterNotificationsRequest>());
        break;
    case "UpdatedSchedule":
        MessageHandler.Publish(receivedMessage.GetBody<FlightSchedule>());
        break;
}

21. Press Ctrl+S to save the changes.

22. In Solution Explorer, right-click the BlueYonder.Companion.Host.Azure project, and then click Publish.
23. In the Publish Windows Azure Application dialog box, click Publish.

24. Click Publish to start the publishing process. When a Deployment Environment In Use dialog box
appears, click Replace. The publish process might take several minutes to complete.

Task 4: Test the Service Bus Queue with flight update messages
1. Arrange the two virtual machine windows so that you can work in the 20487B-SEA-DEV-A virtual
machine and still see the right side of 20487B-SEA-DEV-C.

2. Go back to the 20487B-SEA-DEV-C virtual machine, to Visual Studio 2012.


3. Press Ctrl+F5 to start the client app without debugging. The trip you purchased in the previous
exercise will show in the Current Trip list.

4. Write down the date of the trip.

5. Leave the client app open and go back to the 20487B-SEA-DEV-A virtual machine.
6. On the Start screen, right-click the Visual Studio 2012 tile, and then click Open new window at the
bottom.

7. On the File menu, point to Open, and then click Project/Solution.

8. Type
D:\AllFiles\Mod07\LabFiles\begin\BlueYonder.Server\BlueYonder.Companion.FlightsManager.sln
in the File name box, and then click Open.
9. In Solution Explorer, expand the BlueYonder.FlightsManager project, and then double-click the
Web.config file.

10. In the <appSettings> section, locate the webapi:BlueYonderCompanionService key. In the value
attribute, replace the {CloudService} string with the Windows Azure Cloud Service name you wrote
down at the beginning of this lab.
11. In Solution Explorer, right-click the BlueYonder.FlightsManager project, and then click Set as
StartUp Project.

12. Press Ctrl+F5 to start the web application. A browser will open.

13. In the browser, select Seattle, Washington United States from the drop-down list on the left, select
New York, New York United States from the drop-down list on the right, and then click the filter
icon.

14. Locate the row for the departure date of your purchased trip.
15. Click in the New Time cell. Select the hour 9:00 AM, and then click the save icon.

16. Wait for a couple of seconds and observe the toast notification in the 20487B-SEA-DEV-C virtual
machine (it should appear in the upper-right corner of the window).
17. Close the client app in virtual machine 20487B-SEA-DEV-C.

Results: After you complete this exercise, you will be able to run the Flight Manager Web application,
update the flight departure time of a flight you booked in advance in your client app, and receive
Windows push notifications directly to your computer.

Module 8: Deploying Services


Lab: Deploying Services
Exercise 1: Deploying an Updated Service to Windows Azure
Task 1: Add the New Weather Updates Service to the ASP.NET Web API project
1. In the 20487B-SEA-DEV-A virtual machine, on the Start screen, click the Computer tile to open File
Explorer.

2. Browse to D:\AllFiles\Mod08\LabFiles\Setup.

3. Double-click the setup.cmd file. When prompted for information, provide it according to the
instructions.

Note: You may see a warning saying the client model does not match the server model.
This warning may appear if there is a newer version of the Windows Azure PowerShell Cmdlets. If
this message is followed by an error message, please inform the instructor, otherwise you can
ignore the warning.

4. Wait for the deployment to complete successfully, write down the names of the Windows Azure
Service Bus namespace and Windows Azure Cloud Service, and press any key to close the window.
5. On the Start screen, click the Visual Studio 2012 tile.

6. On the File menu, point to Open, and then click Project/Solution.

7. Type D:\AllFiles\Mod08\LabFiles\begin\BlueYonder.Server\BlueYonder.Companion.sln in the


File name box, and then click Open.

8. In Solution Explorer, right-click the BlueYonder.Companion.Host.Azure project, and then click


Package.
9. In the Package Windows Azure Application dialog box, select Cloud in the Service configuration drop-down
list, select Debug in the Build configuration drop-down list, and then click Package.
10. Wait for the packaging process to complete, and then close the File Explorer window that opens.

11. On the Start screen, click the Internet Explorer tile.

12. Browse to the Windows Azure Management Portal at http://manage.windowsazure.com

13. If you are taken to the sign in page, enter your Windows Azure account email and password, and
then click Sign in.

14. In the navigation pane, click CLOUD SERVICES, and in the Cloud Services list, click the name of the
Cloud Service you wrote down at the beginning of this lab.
15. On the Cloud Service DASHBOARD tab, click PRODUCTION, and then click Update or Upload at
the bottom of the page (only one of the buttons should be visible).
16. In the dialog box that opened, enter Lab08 in the DEPLOYMENT NAME box.

17. Under PACKAGE, click FROM LOCAL, type


D:\AllFiles\Mod08\LabFiles\begin\BlueYonder.Server\BlueYonder.Companion.Host.Azure\bin\
Debug\app.publish\BlueYonder.Companion.Host.Azure.cspkg, and then click Open.

18. Under CONFIGURATION, click FROM LOCAL, select ServiceConfiguration.Cloud.cscfg from the
file list, and then click Open.

19. Select the Deploy even if one or more roles contain a single instance check box or the Update
even if one or more roles contain a single instance check box (only one of the check boxes will be
visible), and then click OK (lower-right icon in the dialog).

20. Click the INSTANCES tab, wait for the new instance to show in the list, and then wait until its status
changes to Running.

21. Close Internet Explorer, and return to Visual Studio 2012.

22. In Solution Explorer, expand BlueYonder.Companion.Controllers project, and then double-click


LocationController.cs to open it.

23. Locate the GetWeather method and replace its code with the following code.

var service = new WeatherService();


Location location = Locations.GetSingle(locationId);
return service.GetWeather(location, date);

24. To save the file, press Ctrl+S.


25. In Task List, double-click the comment // TODO: Mod08: Ex1: Task 1.5: Add route for the weather
updates, and then verify the WebApiConfig.cs file has opened.

26. Add the following code under the comment.

config.Routes.MapHttpRoute(
name: "LocationWeatherApi",
routeTemplate: "locations/{locationId}/weather",
defaults: new
{
controller = "locations",
action = "GetWeather"
},
constraints: new
{
httpMethod = new HttpMethodConstraint(HttpMethod.Get)
}
);

27. To save the file, press Ctrl+S.
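The route added above maps GET requests of the form locations/{locationId}/weather to the GetWeather action. After deployment, you could verify the route with a small client such as the following sketch; the host name, the location ID, and the binding of the date query parameter to the action's date argument are assumptions.

```csharp
using System;
using System.Net.Http;

class WeatherRouteCheck
{
    static void Main()
    {
        // Hypothetical base address; substitute your cloud service name
        var client = new HttpClient
        {
            BaseAddress = new Uri("http://yourcloudservice.cloudapp.net/")
        };

        // Matches the LocationWeatherApi route: locations/{locationId}/weather
        HttpResponseMessage response =
            client.GetAsync("locations/1/weather?date=2013-07-01").Result;

        Console.WriteLine(response.StatusCode);
        Console.WriteLine(response.Content.ReadAsStringAsync().Result);
    }
}
```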

Task 2: Deploy the updated project to staging by using the Windows Azure
Management Portal
1. In Solution Explorer, right-click the BlueYonder.Companion.Host.Azure project, and then click
Package.

2. In the Package Windows Azure Application dialog box, select Cloud in the Service configuration drop-down
list, select Debug in the Build configuration drop-down list, and then click Package.

3. Wait for the packaging process to complete, and then close the File Explorer window that opens.

4. On the Start screen, click the Internet Explorer tile.

5. Browse to the Windows Azure Management Portal at http://manage.windowsazure.com

6. If you are taken to the sign in page, enter your Windows Azure account email and password, and
then click Sign in.
7. In the navigation pane, click CLOUD SERVICES, and in the Cloud Services list, click the name of the
Cloud Service you wrote down at the beginning of this lab.

8. On the Cloud Service DASHBOARD tab, click STAGING, and then click UPLOAD A NEW STAGING
DEPLOYMENT.

9. In the Upload a package dialog box, enter Lab08 in the DEPLOYMENT NAME box.

10. Under PACKAGE, click FROM LOCAL, type


D:\AllFiles\Mod08\LabFiles\begin\BlueYonder.Server\BlueYonder.Companion.Host.Azure\bin\
Debug\app.publish\BlueYonder.Companion.Host.Azure.cspkg, and then click Open.
11. Under CONFIGURATION, click FROM LOCAL, select ServiceConfiguration.Cloud.cscfg from the
file list, and then click Open.

12. Select the Deploy even if one or more roles contain a single instance check box, and then click
OK (lower-right icon in the dialog).

13. Click the INSTANCES tab, wait for the new instance to show in the list, and then wait until its status
changes to Running.

Note: You are performing the exact same procedure as you did in Task 1 of this exercise,
with one difference: you are deploying to the Staging configuration and not to the Production
configuration.
Note: It may take several minutes until the instance starts to run.

Task 3: Test the client app with the production and staging deployments
1. In the 20487B-SEA-DEV-C virtual machine, on the Start screen, click the Visual Studio 2012 tile.
2. On the File menu, point to Open, and then click Project/Solution.

3. Type
D:\AllFiles\Mod08\LabFiles\begin\BlueYonder.Companion.Client\BlueYonder.Companion.Client.sln
in the File name box, and then click Open.
4. If you are prompted by a Developers License dialog box, click I Agree. If you are prompted by a
User Account Control dialog box, click Yes. Type your e-mail address and a password in the
Windows Security dialog box and then click Sign in. Click Close in the Developers License dialog
box.

Note: If you do not have a valid e-mail address, click Sign up and register for the service.
Write down these credentials and use them whenever an e-mail account is required.

5. In Solution Explorer, under the BlueYonder.Companion.Shared project, double-click Addresses.cs.

6. Locate the BaseUri property, and replace the {CloudService} string with the Windows Azure Cloud
Service name you wrote down at the beginning of this lab.

7. In Solution Explorer, right-click the BlueYonder.Companion.Client project, and then click Set as
StartUp Project.

8. To start the client app without debugging, press Ctrl+F5.

9. If you are prompted to allow the app to run in the background, click Allow.
10. Display the app bar by right-clicking or by swiping from the bottom of the screen.

11. Click Search, and in the Search box on the right side enter New. If you are prompted to allow the
app to share your location, click Allow.

12. Wait for the app to show a list of flights from Seattle to New York.
13. Click Purchase this trip.

14. In the First Name box, enter your first name.

15. In the Last Name box, enter your last name.

16. In the Passport box, type Aa1234567.

17. In the Mobile Phone box, type 555-5555555.

18. In the Home Address box, type 423 Main St.

19. In the Email Address box, enter your email address.

20. Click Purchase.

21. To close the confirmation message, click Close.

22. Verify the weather forecast does not show the temperature, only the degrees Fahrenheit sign.

23. Close the client app.

24. On the Start screen, click the Internet Explorer tile.

25. Browse to the Windows Azure Management Portal at http://manage.windowsazure.com.

26. If you are taken to the sign in page, enter your Windows Azure account email and password, and
then click Sign in.

27. In the navigation pane, click CLOUD SERVICES, and in the Cloud Services list, click the name of the
Cloud Service you wrote down at the beginning of this lab.
28. On the Cloud Service DASHBOARD tab, click STAGING, then in the quick glance pane on the right
side, right-click the link below SITE URL, and then click Copy shortcut.
29. Return to Visual Studio 2012. In Solution Explorer, under the BlueYonder.Companion.Shared
project, double-click Addresses.cs.

30. Switch the comments between the two BaseUri get implementations by placing the production URL
in comments and removing the comment from the staging URL. The resulting code should resemble
the following:

public static string BaseUri


{
//get { return "http://{CloudService}.cloudapp.net/"; } // production
get { return "{StagingAddress}"; } // staging
}

Note: {CloudService} is replaced with the name of the cloud service that was shown at the
beginning of the lab.

31. In the BaseUri property, select the value {StagingAddress}, and then press Ctrl+V to paste the
copied staging deployment address over it.

32. To save the file, press Ctrl+S.

33. To start the client app without debugging, press Ctrl+F5.

34. After the app starts, verify the weather forecast shows a temperature for the current trip.

35. Close the client app.



Note: The staging and the production deployments share the database, which is why the
current trip, which you created with the production deployment, is shown when connecting to
the staging deployment.

Task 4: Perform a VIP Swap by using the Windows Azure Management Portal and
retest the client app
1. Return to Internet Explorer, and then click SWAP on the bottom task bar.

2. In the dialog box, click YES, and wait until SWAP is enabled.
3. Leave the browser open, and return to Visual Studio 2012.

4. If the Addresses.cs file is not opened, in Solution Explorer, under the


BlueYonder.Companion.Shared project, double-click Addresses.cs.

5. Switch the comments between the two BaseUri get implementations by placing the staging URL in
comments and removing the comment from the production URL.

6. To save the file, press Ctrl+S.


7. To start the client app without debugging, press Ctrl+F5.

8. After the app starts, verify that the weather forecast shows a temperature for the current trip.

9. Close the client app.

10. Return to Internet Explorer, move the pointer over DELETE on the bottom task bar, and then click
Delete staging deployment for Cloud Service. Click YES, and wait until the message You have
nothing deployed to the staging environment appears.

Note: After the production deployment is running and has been tested, it is recommended
that you delete the staging deployment to reduce compute hour charges.

Results: After you complete this exercise, the client app will retrieve weather forecast information from
the production deployment in Windows Azure.

Exercise 2: Exporting and Importing an IIS Deployment Package


Task 1: Export the web applications containing the WCF booking and frequent flyer
services
1. In the 20487B-SEA-DEV-A virtual machine, on the Start screen, click the Internet Information
Services (IIS) Manager tile.

2. In the Connections pane, expand SEA-DEV12-A (SEA-DEV12-A\Administrator).

3. If an Internet Information Services (IIS) Manager dialog box pops up asking about the Microsoft
Web Platform, click No.

4. From the Connection pane, expand Sites, and then click Default Web Site.

5. In the Actions pane, click Export Application.


6. In the Export Application Package dialog box, click Manage Components.

7. In the Manage Components dialog box, select the first line in the grid, click Remove, and then click
Yes when you are asked whether to delete the selected entry.

8. Add a line to the grid with the following settings:


Provider Name: appHostConfig

Path: Default Web Site/BlueYonder.Server.Booking.WebHost


9. Add a line to the grid with the following settings:
Provider Name: appHostConfig

Path: Default Web Site/BlueYonder.Server.FrequentFlyer.WebHost

10. Add a line to the grid with the following settings:


Provider Name: appPoolConfig

Path: DefaultAppPool
11. Click OK to close the Manage Components dialog box.

12. Click Next, then click Next again, type C:\backup.zip in the Package path box, and then click Next.

13. Wait for the export to be created, and then click Finish.
14. Close the Internet Information Services (IIS) Manager window.
15. On the Start screen, click the Computer tile to open File Explorer, and browse to C:\.

16. Select the backup.zip file, and then press Ctrl+C to copy the file.

17. In File Explorer, browse to \\10.10.0.11\c$, and press Ctrl+V to paste the file.

Task 2: Import the deployment package to a second server


1. In the 20487B-SEA-DEV-B virtual machine, on the Start screen, click the Internet Information
Services (IIS) Manager tile.

2. In the Connections pane, expand SEA-DEV12-B (SEA-DEV12-B\Administrator).


3. If an Internet Information Services (IIS) Manager dialog pops up asking about the Microsoft Web
Platform, click No.

4. From the Connection pane, expand Sites, and then click Default Web Site.

5. In the Actions pane, click Import Application.



6. In the Import Application Package dialog box, type C:\backup.zip in the Package path box, and
then click Next.

7. In the Web Application Physical Path (/BlueYonder.Server.Booking.WebHost/) box, change the
physical path to C:\Services\BlueYonder.Server.Booking.WebHost.
8. In the Web Application Physical Path (/BlueYonder.Server.FrequentFlyer.WebHost/) box,
change the physical path to C:\Services\BlueYonder.Server.FrequentFlyer.WebHost.

9. Click Next, wait for the package to be installed, and then click Finish.

10. Close IIS Manager, and on the Start screen, click the Internet Explorer tile.

11. Browse to the Windows Azure Management Portal at http://manage.windowsazure.com

12. If you are taken to the sign in page, enter your Windows Azure account email and password, and
then click Sign in.

13. In the navigation pane, click SERVICE BUS, and then click the Service Bus namespace you wrote
down at the beginning of this lab on the right pane. Click the RELAYS tab and verify that you see the
booking relay with two listeners.

Results: As soon as both servers are online, they will listen to the same Service Bus relay, and will be load
balanced. You will verify that both servers are listening by checking the Service Bus relay listeners
information supplied by Service Bus in the Windows Azure Management Portal.

Module 9: Windows Azure Storage


Lab: Windows Azure Storage
Exercise 1: Storing Content in Windows Azure Storage
Task 1: Create a storage account
1. Log on to the virtual machine 20487B-SEA-DEV-A as Administrator with the password Pa$$w0rd.

2. On the Start screen, click the Computer tile to open File Explorer.

3. Browse to D:\AllFiles\Mod09\LabFiles\Setup.

4. Double-click the Setup.cmd file. When prompted for information, provide it according to the
instructions.

Note: You may see a warning saying the client model does not match the server model.
This warning may appear if there is a newer version of the Windows Azure PowerShell Cmdlets. If
this message is followed by an error message, please inform the instructor, otherwise you can
ignore the warning.

5. Write down the name of the cloud service that is shown in the script. You will use it later on during
the lab.

6. Wait for the script to finish, and then press any key to close the script window.

7. On the Start screen, click the Internet Explorer tile.


8. Browse to the Windows Azure Management Portal at http://manage.windowsazure.com.

Note: The browser should automatically log you in to the portal. If you are redirected to
the Windows Live ID Sign in page, type your email and password, and then click Sign in.

9. If the Windows Azure Tour dialog appears, close it.


10. Click NEW in the lower-left corner of the portal, click DATA SERVICES, click STORAGE, and then click
QUICK CREATE. The URL and REGION/AFFINITY GROUP boxes are displayed on the right side.

11. In the URL text box, enter the following storage account name: blueyonderlab09yourinitials
(replace yourinitials with your initials, in lowercase).

12. In the REGION box, select the region closest to your location.

13. Click CREATE STORAGE ACCOUNT at the lower right corner of the portal. Wait until the storage
account is created.

Note: If you get a message saying the storage account creation failed because you reached your
storage account limit, delete one of your existing storage accounts and retry the step. If you do not know
how to delete a storage account, consult the instructor.

14. Click STORAGE in the left navigation pane.

15. In the STORAGE pane, click the account name that you just created.
16. Click MANAGE ACCESS KEYS at the bottom of the page.

17. In the Manage Access Keys dialog, click the copy icon to the right of the PRIMARY ACCESS KEY
box.

18. If you are prompted to allow copying to the clipboard, click Allow access.
19. Close the dialog.

Task 2: Add a storage connection string to the Cloud project


1. On the Start screen, click the Visual Studio 2012 tile.

2. On the File menu, point to Open, and then click Project/Solution.


3. Type D:\AllFiles\Mod09\LabFiles\begin\BlueYonder.Server\BlueYonder.Companion.sln in the File
name text box, and then click Open.

4. In Solution Explorer, expand the BlueYonder.Companion.Host.Azure project. Expand the Roles
folder, and then double-click the BlueYonder.Companion.Host role.

5. Click the Settings tab, then click Add Setting and enter the following information:

Name: BlueYonderStore

Type: Connection String


Click the Value box, and then click the ellipsis.

6. In the Create Storage Connection String dialog box, enter the following information and click OK:

Connect using: Manually entered credentials

Account name: blueyonderlab09yourinitials (yourinitials contains your name's initials, in lowercase)

Account key: press Ctrl+V to paste the primary access key you copied in the previous task.

7. Click OK to close the dialog box.


8. Press Ctrl+S to save the changes.

Task 3: Create blob containers and upload files to them


1. In Solution Explorer, expand the BlueYonder.Companion.Storage project, and then double-click
AsyncStorageManager.cs.
2. In the AsyncStorageManager class, enter the following code in the default constructor:

string connectionString = CloudConfigurationManager.GetSetting("BlueYonderStore");
_account = CloudStorageAccount.Parse(connectionString);

3. Replace the content of the GetContainer method with the following code:

var blobClient = _account.CreateCloudBlobClient();
var container = blobClient.GetContainerReference(containerName);
container.CreateIfNotExists();
return container;

4. Replace the content of the GetBlob method with the following code:

CloudBlobContainer container = GetContainer(containerName);
if (isPublic)
{
    container.SetPermissions(new BlobContainerPermissions { PublicAccess =
    BlobContainerPublicAccessType.Blob });
}
return container.GetBlockBlobReference(fileName);

5. Press Ctrl+S to save the changes.

6. Locate the UploadStreamAsync method and explore its code. The method uses the previous
methods to retrieve a reference to the new blob, and then uploads the stream to it.

Task 4: Explore the asynchronous file upload action


1. In Solution Explorer, expand the BlueYonder.Companion.Controllers project, and then double-click
FilesController.cs.

2. Explore the UploadFile method of the FilesController class. Explore how the asynchronous
UploadStreamAsync method is called, and how the result is returned.
3. Explore the Public and Private methods of the FilesController class. Each method uploads a file to
either a public blob container or a private blob container.

Note: The client app calls these service actions to upload files as either public or private.
Public files can be viewed by any user, whereas private files can only be viewed by the user who
uploaded them.
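Note: The helper below is an illustrative assumption, not the lab's actual GetContainer logic; it only assumes that each trip gets a pair of blob containers whose names end with public or private, consistent with the container names you will see later in this lab.

```csharp
using System;

class ContainerNameDemo
{
    // Hypothetical helper: maps a trip ID and a visibility flag to a blob
    // container name. Blob container names must be lowercase.
    public static string GetContainerName(int tripId, bool isPublic)
    {
        return string.Format("trip{0}{1}", tripId, isPublic ? "public" : "private");
    }

    static void Main()
    {
        Console.WriteLine(GetContainerName(42, true));   // trip42public
        Console.WriteLine(GetContainerName(42, false));  // trip42private
    }
}
```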

Results: You can test your changes at the end of the lab.

Exercise 2: Storing Content in Windows Azure Table Storage


Task 1: Write the files metadata to Table storage
1. In Solution Explorer, in the BlueYonder.Companion.Storage project, expand the TableEntities
folder, and then double-click FileEntity.cs.

2. Add the following using directive to the beginning of the file:

using Microsoft.WindowsAzure.Storage.Table.DataServices;

3. Derive the FileEntity class from the TableServiceEntity abstract class by replacing the FileEntity
class declaration with the following code:

public class FileEntity : TableServiceEntity

4. Press Ctrl+S to save the file.

5. In Solution Explorer, in the BlueYonder.Companion.Storage project, double-click
AsyncStorageManager.cs.

6. Add the following using directive to the beginning of the file:

using Microsoft.WindowsAzure.Storage.Table;

7. In the AsyncStorageManager class, replace the content of the GetTableContext method with the
following code:

CloudTableClient tableClient = _account.CreateCloudTableClient();
CloudTable table = tableClient.GetTableReference(MetadataTable);
table.CreateIfNotExists();
TableServiceContext tableContext = tableClient.GetTableServiceContext();
return tableContext;

Note: You should make sure the table exists before you return a context for it, otherwise
the code will fail when running queries on the table. If you already created the table, you can skip
calling the GetTableReference and CreateIfNotExists methods.

8. In the SaveMetadataAsync method, add the following code after the // TODO: Lab 9 Exercise 2: Task
1.3: use a TableServiceContext to add the object comment:

tableContext.AddObject(MetadataTable, fileData);

9. Press Ctrl+S to save the file.

10. In Solution Explorer, in the BlueYonder.Companion.Controllers project, double-click
FilesController.cs.

11. In the CreateFileEntity method, before the return statement, add the following code:

entity.RowKey = HttpUtility.UrlEncode(fileData.Uri.ToString());
entity.PartitionKey= locationId.ToString();

Note: The RowKey property is set to the file's URL, because it has a unique value. The URL
is encoded because the forward slash (/) character is not valid in row keys. The PartitionKey
property is set to the locationId value, because the partition key groups all the files from a
single location in the same partition. By using the location's ID as the partition key, you can query
the table and get all the files uploaded for a specific location.
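Note: The following self-contained sketch shows the effect of encoding a blob URL before using it as a row key. It uses Uri.EscapeDataString instead of the lab's HttpUtility.UrlEncode only to avoid the System.Web dependency, so the exact output differs slightly; the point is that no forward slash survives the encoding.

```csharp
using System;

class RowKeyDemo
{
    // Table storage row keys may not contain '/', so the blob URL is
    // percent-encoded before it is stored as a row key.
    public static string ToRowKey(Uri fileUri)
    {
        return Uri.EscapeDataString(fileUri.ToString());
    }

    static void Main()
    {
        string key = ToRowKey(new Uri("https://account.blob.core.windows.net/public/photo.jpg"));
        Console.WriteLine(key);
        Console.WriteLine(key.Contains("/")); // False
    }
}
```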

12. Explore the code in the Metadata method. The method creates the FileEntity object and saves it to
the table.

Note: The client app calls this service action after it uploads the new file to Blob storage. By
storing the list of files in Table storage, the client app can use queries to find specific images,
either by trip or location.

Task 2: Query the Table storage


1. In Solution Explorer, in the BlueYonder.Companion.Storage project, double-click
AsyncStorageManager.cs.

2. In the AsyncStorageManager class, replace the content of the GetLocationMetadata method with
the following code:

TableServiceContext tableContext = GetTableContext();
var query = from file in tableContext.CreateQuery<FileEntity>(MetadataTable)
            where file.PartitionKey == locationId
            select file;
return query.ToList();

Note: Recall that the location ID was used as the entity's partition key.

3. In the GetFilesMetadata method, uncomment the following line:

//where file.RowKey == rowKey

4. Verify that the final method code is as follows:

TableServiceContext tableContext = GetTableContext();
foreach (var rowKey in rowKeys)
{
    var fileEntity = (from file in
                      tableContext.CreateQuery<FileEntity>(MetadataTable)
                      where file.RowKey == rowKey
                      select file).Single();
    yield return fileEntity;
}

5. Press Ctrl+S to save the file.

6. In Solution Explorer, in the BlueYonder.Companion.Controllers project, double-click
FilesController.cs.
7. In the FilesController class, review the content of the LocationMetadata method.

Note: The method calls the GetLocationMetadata method from the AsyncStorageManager
class, and converts the FileEntity objects that are marked as public to FileDto objects. The
client app calls this service action to get a list of all public files related to a specific location.

8. Locate the ToFileDto method of the FilesController class.

9. Uncomment the line:

LocationId = int.Parse(file.PartitionKey),

10. Press Ctrl+S to save the changes.

11. Open the FilesController class and explore the code in the TripMetadata method.

Note: The method retrieves the list of files in the trips public blob container, and then uses
the GetFilesMetadata method of the AsyncStorageManager class to get the FileEntity object
for each of the files. The client app calls this service action to get a list of all files related to a
specific trip. Currently the code retrieves only the public files. In the next exercise you will add the
code to retrieve both public and private files.

Results: You can test your changes at the end of the lab.

Exercise 3: Creating Shared Access Signatures for Blobs


Task 1: Change the public photos query to return private photos
1. In Solution Explorer, in the BlueYonder.Companion.Storage project, double-click
AsyncStorageManager.cs.

2. In the AsyncStorageManager class, replace the content of the CreateSharedAccessSignature
method with the following code:

var policy = new SharedAccessBlobPolicy()
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1)
};

3. In the CreateSharedAccessSignature method, add the following code to the end of the method:

BlobContainerPermissions blobPermissions = new BlobContainerPermissions();
blobPermissions.SharedAccessPolicies.Add("blueyonder", policy);
var container = GetContainer(containerName);
container.SetPermissions(blobPermissions);

4. Complete the CreateSharedAccessSignature method by adding the following code to the end of
the method:

return container.GetSharedAccessSignature(policy);

5. Press Ctrl+S to save the changes.

Note: The shared access signature is a URL query string that you append to blob URLs.
Without the query string, you cannot access private blobs.
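Note: This behavior can be demonstrated with plain string handling. The signature value below is a made-up placeholder, but the shape of the result matches what GetSharedAccessSignature returns (a query string that starts with ?).

```csharp
using System;

class SasUrlDemo
{
    // Appends a shared access signature query string to a blob URL so the
    // private blob can be read without the storage account key.
    public static string MakeSasUrl(string blobUrl, string sas)
    {
        // GetSharedAccessSignature returns a string that already starts
        // with '?', so simple concatenation yields a usable URL.
        return blobUrl + sas;
    }

    static void Main()
    {
        Console.WriteLine(MakeSasUrl(
            "https://blueyonderlab09.blob.core.windows.net/private/photo.jpg",
            "?sv=2012-02-12&sr=c&si=blueyonder&sig=placeholder"));
    }
}
```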

6. In Solution Explorer, in the BlueYonder.Companion.Controllers project, double-click
FilesController.cs.
7. In the FilesController class, locate the TripMetadata method.

8. Add the following code after the // TODO: Lab 9, Exercise 3, Task 1.4: get a list of files in the trip's
private folder comment:

var privateUris = storageManager.GetFileUris(GetContainer(id, true));
var allUris = publicUris.Union(privateUris);

9. In the allKeys variable assignment, replace the publicUris variable with the allUris variable. The
resulting code should resemble the following:

var allKeys = allUris.Select(u => HttpUtility.UrlEncode(u.ToString()));

10. Press Ctrl+S to save the file.

11. Locate the ToFileDto method and explore its code. If the requested file is private, you create a shared
access key for the blob's container, and then set the Uri property of the file to a URL containing the
shared access key.

12. In Solution Explorer, right-click the BlueYonder.Companion.Host.Azure project, and then click
Publish.

13. If you already added your Windows Azure subscription information to Visual Studio 2012, select your
subscription from the drop down list and skip to step 17.

14. In the Publish Windows Azure Application dialog box, click Import.
15. Type D:\AllFiles\Mod09\LabFiles in the File name text box, and then click Open. Select your
publish settings file and click Open.

16. Click Next.

17. On the Common Settings tab, click the Cloud Service box, and select the cloud service that matches
the name you wrote down in the beginning of the lab, while running the setup script.

18. Click Publish to start the publishing process. If a Deployment Environment In Use dialog box
appears, click Replace. The publish process might take several minutes to complete.

Task 2: Upload public and private files to Windows Azure Storage


1. Log on to the virtual machine 20487B-SEA-DEV-C as Admin with the password Pa$$w0rd.

2. On the Start screen, click the Visual Studio 2012 tile.


3. On the File menu, point to Open, and then click Project/Solution.

4. Browse to D:\AllFiles\Mod09\LabFiles\begin\BlueYonder.Companion.Client\, select the
BlueYonder.Companion.Client.sln file, and then click Open.
5. If you are prompted by a Developers License dialog box, click I Agree. If you are prompted by a
User Account Control dialog box, click Yes. Type your email address and a password in the
Windows Security dialog box and then click Sign in. Click Close in the Developers License dialog
box.

Note: If you do not have a valid email address, click Sign up and register for the service.
Write down these credentials and use them whenever an email account is required.

6. In Solution Explorer, under the BlueYonder.Companion.Shared project, double-click Addresses.cs.


7. Locate the BaseUri property, and replace the {CloudService} string with the Windows Azure Cloud
Service name you wrote down in the beginning of this lab.

8. Press Ctrl+S to save the changes.

9. In Solution Explorer, right-click the BlueYonder.Companion.Client project, and then click Set as
StartUp Project.
10. Press Ctrl+F5 to start the client app without debugging.

11. If you are prompted to allow the app to run in the background, click Allow.

12. After the client app starts, display the app bar by right-clicking or by swiping from the bottom of the
screen.

13. Click Search, and in the Search box on the right side enter New. If you are prompted to allow the
app to share your location, click Allow.

14. Wait for the app to show a list of flights from Seattle to New York.

15. Click Purchase this trip.

16. In the First Name text box, type your first name.

17. In the Last Name text box, type your last name.

18. In the Passport text box, type Aa1234567.

19. In the Mobile Phone text box, type 555-5555555.

20. In the Home Address text box, type 423 Main St..

21. In the Email Address text box, type your email address.

22. Click Purchase.

23. Click Close to close the confirmation message.

24. In the Blue Yonder Companion page, click the current trip from Seattle to New York.

25. In the Current Trip page, display the app bar by right-clicking or by swiping from the bottom of the
screen. Click Media.
26. In the Media page, display the app bar by right-clicking or by swiping from the bottom of the screen.
Click Add Files from Disk.

27. Browse to D:\Allfiles\Mod09\LabFiles\Assets, select StatueOfLiberty.jpg, and then click Open.

28. In the Media page, display the app bar by right-clicking or by swiping from the bottom of the screen.
Click Upload Item to Public Storage.

29. Wait until the file completes uploading.


30. In the Media page, display the app bar by right-clicking or by swiping from the bottom of the screen.
Click Add Files from Disk.

31. Select EmpireStateBuilding.jpg, and then click Open.

32. In the Media page, display the app bar by right-clicking or by swiping from the bottom of the screen.
Click Upload Item to Private Storage.

33. Wait until the file completes uploading.


34. Click the back button. In the Current Trip page, display the app bar by right-clicking or by swiping
from the bottom of the screen, and then click Media.

35. In the Media page, wait for a couple of seconds until the images are downloaded from storage.
Verify you see both the private and the public photos.
36. Click the back button to return to the Current Trip page, and then click the back button again to
return to the Blue Yonder Companion page. Under New York at a Glance, verify you see the photo
of the Statue of Liberty you uploaded to the public container.

Task 3: View the Content of the Blobs and Table


1. Return to the 20487B-SEA-DEV-A virtual machine, to Visual Studio 2012.

2. On the View menu, click Server Explorer.

3. In Server Explorer, right-click Windows Azure Storage, and then click Add New Storage Account.
4. If you already added your Windows Azure subscription information to Visual Studio 2012, select your
subscription from the drop down list and skip to step 8.

5. In the Add New Storage Account dialog box, click the Download Publish Settings hyperlink.

Note: The browser automatically logs you in to the portal. When you are redirected to the Windows
Live ID Sign in page, type your email address and password, and then click Sign in.
6. The publish settings file is generated, and a Do you want to open or save... Internet Explorer dialog
appears at the bottom. Click the arrow within the Save button. Select the Save as option and specify
the following location: D:\AllFiles\Mod09\LabFiles. Click Save. If a Confirm Save As dialog box
appears, click Yes.

7. Return to the Add New Storage Account dialog box in Visual Studio 2012. Click Import. Type
D:\AllFiles\Mod09\LabFiles and select the file that you downloaded in the previous step. Make sure
that your subscription is selected in the Subscription dropdown list.

8. In the Account name dropdown list, select the account named blueyonderlab09yourinitials
(yourinitials contains your name's initials, in lowercase). Click OK.

9. In Server Explorer, expand the blueyonderlab09yourinitials (yourinitials contains your name's
initials, in lowercase) node and then expand Blobs. Observe that two folders were created, one for
public photos and one for private photos.

10. Under Blobs, double-click the container that ends with public. The blob container holds one file.

11. In the container's file table, right-click the first line, and then click Copy URL.

12. On the Start screen, click the Internet Explorer tile.


13. In the browser's address bar, remove the existing address and press Ctrl+V to paste the copied
address. Press Enter and observe the uploaded photo.

14. Return to Visual Studio 2012, and in Server Explorer, double-click the container that ends with
private. The blob container holds one file.

15. In the container's file table, right-click the first line, and then click Copy URL.

16. On the Start screen, click the Internet Explorer tile.


17. In the browser's address bar, remove the existing address and press Ctrl+V to paste the copied
address. Press Enter.

18. The private photos cannot be accessed by a direct URL; therefore, an HTTP 404 (The webpage cannot
be found) page is shown.

Note: The client app is able to show the private photo because it uses a URL that contains a
shared access permission key.

19. In Server Explorer, expand the Tables node, and then double-click the FilesMetadata node.

20. View the content of the FilesMetadata table. The table contains metadata for both public and
private photos.

Results: After you complete the exercise, you will be able to use the client App to upload photos to the
private and public blob containers. You will also be able to view the content of the Blob and Table storage
by using Visual Studio 2012.

Module 10: Monitoring and Diagnostics


Lab: Monitoring and Diagnostics
Exercise 1: Configuring Message Logging
Task 1: Open the WCF service configuration editor
1. In the 20487B-SEA-DEV-A virtual machine, on the Start screen, click Computer to open the File
Explorer window.

2. Browse to D:\AllFiles\Mod10\LabFiles\Setup.
3. Double-click the setup.cmd file. When prompted for information, provide it according to the
instructions.

Note: You may see a warning saying the client model does not match the server model.
This warning may appear if there is a newer version of the Windows Azure PowerShell Cmdlets. If
this message is followed by an error message, please inform the instructor, otherwise you can
ignore the warning.

4. Wait for the script to complete successfully, write down the name of the Windows Azure Cloud
Service, and press any key to close the window.
5. On the Start screen, click the Visual Studio 2012 tile.

6. On the File menu, point to Open, and then click Project/Solution.


7. Browse to D:\AllFiles\Mod10\LabFiles\begin\BlueYonder.Server\, select the BlueYonder.Server.sln file,
and then click Open.

8. In Solution Explorer, expand the BlueYonder.Server.Booking.WebHost project.


9. Right-click the Web.config file, and then click Edit WCF Configuration.

10. When prompted that the Microsoft.ServiceBus assembly could not be found, click Yes. Browse to
D:\AllFiles\Mod10\LabFiles\begin\BlueYonder.Server\BlueYonder.Server.Booking.WebHost\bin, select
Microsoft.ServiceBus.dll, and then click Open. Click Yes in the dialog box.

Task 2: Configure WCF message logging


1. In the Configuration pane, click the Diagnostics node, and then click Enable MessageLogging
under the MessageLogging section.
2. In the Configuration pane, expand the Diagnostics node, and then click the Message Logging
node.

3. In the Message Logging pane, set the LogEntireMessage and LogMessagesAtServiceLevel settings to
True. Set the LogMessagesAtTransportLevel setting to False.
4. Press Ctrl+S to save the changes.

5. Click File on the menu bar, and select Exit to close the window.

Results: You can test your changes at the end of the lab.

Exercise 2: Configuring Windows Azure Diagnostics


Task 1: Add trace messages to the ASP.NET Web API service
1. On the Start screen, right-click the Visual Studio 2012 tile and then click Open new window.

2. On the File menu, point to Open, and then click Project/Solution.


3. Browse to D:\AllFiles\Mod10\LabFiles\begin\BlueYonder.Server\, select the
BlueYonder.Companion.sln file, and then click Open.

4. In Solution Explorer, expand the BlueYonder.Companion.Host project, and double-click
TraceWriter.cs.

5. Implement the Trace method by adding the following code to the method:

TraceRecord rec = new TraceRecord(request, category, level);
traceAction(rec);
string message = string.Format("{0};{1};{2}", rec.Operator, rec.Operation,
rec.Message);
System.Diagnostics.Trace.WriteLine(message, rec.Category);

6. Press Ctrl+S to save the changes.
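Note: The semicolon-separated message built by the string.Format call above can be reproduced in isolation. The field values here are illustrative assumptions; in the lab, the TraceRecord fields are populated by ASP.NET Web API.

```csharp
using System;

class TraceFormatDemo
{
    // Mirrors the lab's trace format: Operator;Operation;Message.
    public static string FormatTrace(string op, string operation, string message)
    {
        return string.Format("{0};{1};{2}", op, operation, message);
    }

    static void Main()
    {
        Console.WriteLine(FormatTrace("ReservationsController", "Post",
            "New reservation was created: ABC123"));
        // Prints: ReservationsController;Post;New reservation was created: ABC123
    }
}
```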

7. In Solution Explorer, expand the BlueYonder.Companion.Host project. Expand the App_Start
folder, and double-click WebApiConfig.cs.

8. Add the following using directive to the beginning of the file.

using System.Web.Http.Tracing;

9. Add the following code to the beginning of the Register method, before setting the dependency
resolver.

config.Services.Replace(typeof(ITraceWriter), new TraceWriter());

10. Press Ctrl+S to save the changes.

11. In Solution Explorer, expand the BlueYonder.Companion.Controllers project, and double-click
ReservationsController.cs.
12. Add the following using directive to the beginning of the file.

using System.Web.Http.Tracing;

13. Add the following code to the Post method, after the call to the Save method.

Configuration.Services.GetTraceWriter().Info(Request, "ReservationsController",
    "New reservation was created: {0}", confirmationCode);

14. Press Ctrl+S to save the changes.

Task 2: Configure Windows Azure Diagnostics for the Web Role


1. In Solution Explorer, expand the BlueYonder.Companion.Host.Azure project, expand Roles,
right-click the BlueYonder.Companion.Host role, and click Properties.
2. In the Properties window, on the Configuration tab, under Diagnostics, select the Custom plan
option, and then click Edit.

3. In the Diagnostics configuration dialog box, on the Application logs tab, change the Log level from
Error to Verbose.

4. On the Log directories tab, in the Transfer period combo box, select 1.

5. In the Buffer size box, type 1024.

6. In the Directories grid, in the IIS logs row, type 1024 in the Directory quota (MB) column.

7. Click OK, and then press Ctrl+S to save the changes to the role configuration.

Task 3: Deploy the ASP.NET Web API Application to Windows Azure


1. In Solution Explorer, right-click the BlueYonder.Companion.Host.Azure project, and then click
Publish.

2. If you already added your Windows Azure subscription information to Visual Studio 2012, select your
subscription from the drop down list and skip to step 6.

3. In the Publish Windows Azure Application dialog box, click the Sign in to download credentials
hyperlink.

Note: The browser automatically logs you in to the portal. When you are redirected to the
Windows Live ID Sign in page, type your email address and password, and then click Sign in.

4. The publish settings file is generated, and a Do you want to open or save... Internet Explorer dialog
appears at the bottom. Click the arrow within the Save button. Select the Save as option and specify
the following location: D:\AllFiles\Mod10\LabFiles. Click Save. If a Confirm Save As dialog box
appears, click Yes.
5. Return to the Publish Windows Azure Application dialog box in Visual Studio 2012. Click Import. Type
D:\AllFiles\Mod10\LabFiles and select the file that you downloaded in the previous step. Make sure
that your subscription is selected under the Choose your subscription section.
6. Click Next.

7. On the Common Settings tab, click the Cloud Service box, and select the cloud service that matches
the name you wrote down in the beginning of the lab, while running the setup script.

8. Click Publish to start the publishing process. If a Deployment Environment In Use dialog box
appears, click Replace. The publish process might take several minutes to complete.

Task 4: Run the client app to create logs


1. Log on to the virtual machine 20487B-SEA-DEV-C as Admin with the password Pa$$w0rd.

2. On the Start screen, click the Visual Studio 2012 tile.

3. On the File menu, point to Open, and then click Project/Solution.


4. Browse to D:\AllFiles\Mod10\LabFiles\begin\BlueYonder.Companion.Client\, select the
BlueYonder.Companion.Client.sln file, and then click Open.

5. If you are prompted by a Developers License dialog box, click I Agree. If you are prompted by a
User Account Control dialog box, click Yes. Type your email address and a password in the
Windows Security dialog box and then click Sign in. Click Close in the Developers License dialog
box.

Note: If you do not have a valid email address, click Sign up and register for the service.
Write down these credentials and use them whenever an email account is required.

6. In Solution Explorer, under the BlueYonder.Companion.Shared project, double-click Addresses.cs.

7. Locate the BaseUri property, and replace the {CloudService} string with the Windows Azure Cloud
Service name you wrote down in the beginning of this lab.
8. In Solution Explorer, right-click the BlueYonder.Companion.Client project, and then click Set as
StartUp Project.

9. Press Ctrl+F5 to start the client app without debugging.

10. If you are prompted to allow the app to run in the background, click Allow.

11. After the client app starts, display the app bar by right-clicking or by swiping from the bottom of the
screen.

12. Click Search, and in the Search box on the right side enter New. If you are prompted to allow the
app to share your location, click Allow.

13. Wait for the app to show a list of flights from Seattle to New York.

14. Click Purchase this trip.

15. In the First Name text box, type your first name.

16. In the Last Name text box, type your last name.

17. In the Passport text box, type Aa1234567.

18. In the Mobile Phone text box, type 555-5555555.

19. In the Home Address text box, type 423 Main St..

20. In the Email Address text box, type your email address.

21. Click Purchase.

22. Click Close to close the confirmation message, and then close the client app.

Task 5: View the collected diagnostics data


1. Go back to virtual machine 20487B-SEA-DEV-A.

2. In Solution Explorer, right-click the BlueYonder.Companion.Host.Azure project, and then click
Publish. The Publish Windows Azure Application dialog box opens with the publish summary.

3. Write down the name of the storage account.

4. Click Cancel.
5. On the View menu, click Server Explorer.

6. In Server Explorer, right-click Windows Azure Storage and click Add New Storage Account.

7. In the Add New Storage Account dialog box, in the Account name drop-down list, select the
storage account you wrote down in the previous step.

8. Click OK.

9. In Server Explorer, under Windows Azure Storage, expand the new storage account you added, and
then expand Tables.
10. Click Tables, and double-click WADLogsTable.

11. Scroll right and explore the table by looking at the Message column which presents the value of the
logged events.

12. Look for the message that starts with ReservationsController, and then double-click the row to open
its details.

Note: In addition to the trace message your code writes to the log, ASP.NET Web API
writes several other infrastructure trace messages.

13. In the Edit Entity dialog box, view the message and then click OK to close the dialog box.

14. In Server Explorer, under Windows Azure Storage, under the new storage account you added,
expand Blobs.

15. Double-click wad-iis-logfiles, and in the container's file list, double-click the first line to open it in
Notepad. If you are prompted to select how to open this type of file, click More options and then
choose to open the file with Notepad.

16. After the IIS log opens in Notepad, verify you see the requests for the Travelers, Locations, Flights,
and Reservations controllers. Close Notepad.

Note: It is possible it will take more than a minute from the time the request is sent and
until it is logged by IIS. If you do not yet see any logs, or the requests are missing from the log,
wait for another minute, refresh the blob container, and then download the log again.

17. On the Start screen, click the Computer tile to open File Explorer.

18. Browse to D:\AllFiles\Mod10\LabFiles\begin\BlueYonder.Server\blueyonder.server.booking.webhost\,
and then double-click web_messages.svclog to open the message logs file in the Service Trace Viewer.
19. Click the Message tab in the left pane, and then click the row with
http://blueyonder.server.interfaces/IBookingService/CreateReservation action.
20. Click the Message tab in the bottom-right pane to review the CreateReservation request message.
Scroll to the end of the message to view the <s:Body> element.

21. In the left side pane, click the row with the
http://blueyonder.server.interfaces/IBookingService/CreateReservationResponse action.

22. Review the CreateReservationResponse message on the Message tab in the bottom-right pane. Scroll
to the end of the message to view the <s:Body> element.

Results: After you complete the exercise, you will be able to use the client App to purchase a trip, and
then view the created log files, for both the Windows Azure deployment and the on-premises WCF
service.

Module 11: Identity Management and Access Control


Lab: Identity Management and Access
Control
Exercise 1: Configuring Windows Azure ACS
Task 1: Create a new ACS namespace
1. On the Start screen, click the Internet Explorer tile.

2. Browse to the Windows Azure Management Portal at http://manage.windowsazure.com.

Note: The browser should log you in automatically to the portal. If you are redirected to
the Windows Live ID Sign in page, type your email and password, and then click Sign in.

3. If the Windows Azure Tour dialog appears, click close (the X button).

4. Click NEW, then click APP SERVICES, then click ACCESS CONTROL, and then click QUICK CREATE.
5. Enter the following values:
Namespace: BlueYonderCompanionYourInitials (YourInitials will contain your initials).

Region: select the region closest to your location.


6. Click CREATE, and wait for a successful creation.

Task 2: Configure the Relying Party application


1. Click ACTIVE DIRECTORY in the navigation pane, click the ACCESS CONTROL NAMESPACES tab,
click the newly created namespace, and then click MANAGE at the bottom of the page. The Access
Control Service portal will open.
2. In the Access Control Service portal, in the pane on the left, click the Relying party applications
link under the Trust relationships section.

3. In the Relying Party Applications pane, click the Add link.


4. On the Add Relying Party Application page, enter the following values:

Name: BlueYonderCloud

Realm: urn:blueyonder.cloud
Return URL: https://CloudServiceName.cloudapp.net/federationcallback (CloudServiceName is
the name of the cloud service you wrote down in the beginning of the lab while running the setup
script)
Token format: SWT

5. Verify that the Create new rule group check box is selected.
6. Under Token Signing Settings, click the Generate button to generate a new token signing key.

7. Click Save at the bottom of the page.

Task 3: Create a rule group


1. In the Access Control Service portal, in the pane on the left, click the Rule groups link under the
Trust relationships section.

2. On the Rule Groups page, under the Rule Groups section, click Default Rule Group for
BlueYonderCloud.

3. On the Edit Rule Group page, click the Generate link above the Rules section.
4. On the Generate Rules page, click Generate.

5. On the Edit Rule Group page, click Save.

Results: After you complete this exercise, you will have created a new ACS namespace and configured
an RP for the ASP.NET Web API services. You will test the RP configuration at the end of the lab.

Exercise 2: Integrating ACS with the ASP.NET Web API Project


Task 1: Add the Thinktecture.IdentityModel NuGet package
1. Leave the ACS portal open, and on the Start screen, click the Visual Studio 2012 tile.

2. On the File menu, point to Open, and then click Project/Solution.


3. Type D:\AllFiles\Mod11\LabFiles\begin\BlueYonder.Server\BlueYonder.Companion.sln in the File
name text box, and then click Open.

4. On the Tools menu, point to Library Package Manager, and then click Package Manager Console.

5. In the Package Manager Console, enter install-package Thinktecture.IdentityModel -version 2.2.1 -ProjectName BlueYonder.Companion.Host, and then press Enter.

Note: The last known version of the ThinkTecture.IdentityModel NuGet package that
supports the SWT token is 2.2.1. Therefore, you need to use the Package Manager Console to
install this NuGet package, rather than using the Manage NuGet Packages dialog box.

Task 2: Add token validation to ASP.NET Web API


1. In Solution Explorer, expand the BlueYonder.Companion.Host.Azure project, expand the Roles
folder, and then double-click the BlueYonder.Companion.Host web role.
2. Click the Settings tab, and then click Add Settings.

3. Enter a setting with the following information:

Name: ACS.IssuerName
Type: String

Value: https://BlueYonderCompanionYourInitials.accesscontrol.windows.net/ (YourInitials will
contain your initials). Make sure there are no spaces at the end of the string.
4. Enter a setting with the following information:

Name: ACS.Realm

Type: String

Value: urn:blueyonder.cloud
5. Return to the Internet Explorer window, to the ACS portal, and in the pane on the left, click the
Certificates and Keys link under the Service settings section.

6. On the Certificates and Keys page, click BlueYonderCloud, and then click Show Key.
7. Select the generated key, and press Ctrl+C to copy it to the clipboard.

8. Return to Visual Studio 2012 (do not close the browser), to the Settings tab, and click Add Settings.
Enter a new setting with the following information:

Name: ACS.SigningKey

Type: String

Value: Press Ctrl+V to paste the copied signing key.

9. Press Ctrl+S to save the changes.

10. In Solution Explorer, right-click the BlueYonder.Companion.Host project, point to Add, and then
click New Folder. Name the folder Authentication, and press Enter.

11. In Solution Explorer, right-click the Authentication folder, point to Add, and then click Class.

12. In the Add New Item dialog box, type AuthenticationConfig in the Name text box, and then click
Add.
13. Add the following using directive to the beginning of the file.

using Thinktecture.IdentityModel.Tokens.Http;

14. Add the following method to the AuthenticationConfig class.

public static AuthenticationConfiguration CreateConfiguration()
{
var config = new AuthenticationConfiguration();
// Get the SWT configuration from the Web Role configuration
// Add an SWT authentication support
// Set defaults
return config;
}

15. Enter the following code after the // Get the SWT configuration comment.

string issuerName =
Microsoft.WindowsAzure.CloudConfigurationManager.GetSetting("ACS.IssuerName").Trim();
string realm =
Microsoft.WindowsAzure.CloudConfigurationManager.GetSetting("ACS.Realm").Trim();
string signingKey =
Microsoft.WindowsAzure.CloudConfigurationManager.GetSetting("ACS.SigningKey").Trim();

16. Add the following code after the // Add an SWT authentication support comment.

config.AddSimpleWebToken(
issuer: issuerName,
audience: realm,
signingKey: signingKey,
options: AuthenticationOptions.ForAuthorizationHeader("OAuth"));

Note: Realm is the unique identifier of your RP. Audience refers to the realm of the RP that
redirected the client to the STS. In most cases, the realm and audience are the same, because you
are redirected back to the application you came from. There are scenarios where the RP that got
the token is not the same RP that requested the token.
The AuthenticationOptions.ForAuthorizationHeader method sets the value of the HTTP
Authorization header that is added to unauthorized responses. The OAuth value specifies that
OAuth authentication should be used.

17. Add the following code after the // Set defaults comment.

config.DefaultAuthenticationScheme = "OAuth";
config.EnableSessionToken = true;

18. The resulting code should resemble the following.

public static AuthenticationConfiguration CreateConfiguration()
{
var config = new AuthenticationConfiguration();
// Get the SWT configuration from the Web Role configuration
string issuerName =
Microsoft.WindowsAzure.CloudConfigurationManager.GetSetting("ACS.IssuerName").Trim();
string realm =
Microsoft.WindowsAzure.CloudConfigurationManager.GetSetting("ACS.Realm").Trim();
string signingKey =
Microsoft.WindowsAzure.CloudConfigurationManager.GetSetting("ACS.SigningKey").Trim();
// Add an SWT authentication support
config.AddSimpleWebToken(
issuer: issuerName,
audience: realm,
signingKey: signingKey,
options: AuthenticationOptions.ForAuthorizationHeader("OAuth"));
// Set defaults
config.DefaultAuthenticationScheme = "OAuth";
config.EnableSessionToken = true;
return config;
}

19. Press Ctrl+S to save the file.

Task 3: Add a federation callback controller


1. In Solution Explorer, right-click the BlueYonder.Companion.Host project node, and then click
Manage NuGet Packages.
2. In the Manage NuGet Packages dialog box, in the navigation pane, expand the Online node and
then click the NuGet official package source node.

3. Press Ctrl+E and type WIF.SWT.

4. In the packages list, click the Simple Web Token Support for Windows Identity Foundation package,
and then click Install.

5. Wait for the installation to complete, and then click Close to close the window.


6. In Solution Explorer, right-click the BlueYonder.Companion.Host project, point to Add, and then
click Class.

7. In the Add New Item dialog box, type FederationCallbackController in the Name text box, and
then click Add.
8. Add the following using directives to the beginning of the file.

using System.Web.Http;
using System.Net.Http;
using System.Net;
using System.Web;
using Microsoft.IdentityModel;

9. Replace the class declaration with the following code.

public class FederationCallbackController : ApiController

10. Add the following method to the class, to handle active federation with POST requests.

public HttpResponseMessage Post()
{
var response = this.Request.CreateResponse(HttpStatusCode.Redirect);
response.Headers.Add("Location", @"FederationCallback/end?acsToken=" +
HttpContext.Current.User.BootstrapToken());
return response;
}

Note: The POST method extracts the token from the request's body, and returns a message
carrying the token in the Location HTTP header. The client application can then extract the token
from the response and use it to authenticate against the service in future requests.

The special redirect to FederationCallback/end indicates to the client that the authentication
process has completed successfully. This message flow is part of the passive federation process.
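As an illustration of the client side of this flow, the following sketch shows one way a client could pull the SWT out of that Location header value. TokenExtractor and ExtractAcsToken are hypothetical names for this example only; they are not part of the lab solution.

```csharp
using System;

// Hypothetical client-side helper: extracts the SWT from a Location header
// value of the form "FederationCallback/end?acsToken=<token>".
static class TokenExtractor
{
    public static string ExtractAcsToken(string locationHeader)
    {
        const string marker = "acsToken=";
        int index = locationHeader.IndexOf(marker, StringComparison.Ordinal);
        if (index < 0)
            return null; // no token in this redirect
        string token = locationHeader.Substring(index + marker.Length);
        int ampersand = token.IndexOf('&');
        // A real token is URL-encoded; decode it (for example, with
        // WebUtility.UrlDecode) before sending it back to the service.
        return ampersand < 0 ? token : token.Substring(0, ampersand);
    }
}
```

The client app you examine later in this lab performs an equivalent extraction before starting its session handshake.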

11. Press Ctrl+S to save the file.

12. Return to the browser window, to the ACS portal, and in the pane on the left, click the Certificates
and Keys link under the Service settings section.

13. On the Certificates and Keys page, click BlueYonderCloud, and then click Show Key.

14. Select the generated key, and press Ctrl+C to copy it to the clipboard.

15. Return to Visual Studio 2012 (do not close the browser), and in Solution Explorer, in the
BlueYonder.Companion.Host project, double-click Web.config.

16. Locate the <appSettings> section, and in it locate the <add> element whose key attribute is set to
SwtSigningKey.

17. Select the value [your 256-bit symmetric key configured in the STS/ACS], and then press Ctrl+V to
replace it with the token signing key you copied from the portal.
18. Locate the <microsoft.identityModel> section at the end of the file, and in it, locate the
<audienceUris> element.

19. Replace the <add> element with the following configuration.

<add value="urn:blueyonder.cloud" />

Note: WIF 4.5 uses the <system.identityModel> section. However, the WIF.SWT NuGet package you
installed still uses WIF 4, which uses the <microsoft.identityModel> section.

20. Locate the <trustedIssuers> element, and replace the string [youracsnamespace] with
blueyondercompanionyourinitials (yourinitials will contain your initials). Make sure the string you
type is in lowercase letters.

21. Add the following configuration between the <service> and <audienceUris> tags.

<federatedAuthentication>
<wsFederation passiveRedirectEnabled="false" issuer="urn:unused" realm="urn:unused"
requireHttps="false" />
</federatedAuthentication>

22. Press Ctrl+S to save the file.

23. In Solution Explorer, in the BlueYonder.Companion.Host project, expand the References folder.

24. Click Microsoft.IdentityModel, and in the Properties window, change Copy Local to True.

Note: WIF 4 is not installed by default in Windows Azure VMs. Therefore you need to make
sure the assembly is included in the deployed package.

Task 4: Update the Routing with the New Authentication Configuration


1. In Solution Explorer, in the BlueYonder.Companion.Host project, expand App_Start, and then
double-click WebApiConfig.cs.
2. Add the following using directive to the beginning of the file.

using Thinktecture.IdentityModel.Tokens.Http;

3. Add the following code to the Register static method, before the first call to the MapHttpRoute
method.

config.Routes.MapHttpRoute(
name: "callback",
routeTemplate: "FederationCallback",
defaults: new { Controller = "FederationCallback" });

Note: The order of routes is important; you must add the federation callback route before
adding the default route ({controller}/{id}) which handles all the other calls to the controllers. If
you add the default route first, it will be used even when you use a URL that ends with
FederationCallback.

4. Add the following using directive to the beginning of the file.

using BlueYonder.Companion.Host.Authentication;

5. Create a new authentication configuration by adding the following code to the beginning of the
Register method.

AuthenticationConfiguration authenticationConfig =
AuthenticationConfig.CreateConfiguration();

6. Locate the call to the MapHttpRoute method, which uses the name TravelerReservationsApi.
Replace the method call with the following code.

config.Routes.MapHttpRoute(
name: "TravelerReservationsApi",
routeTemplate: "travelers/{travelerId}/reservations",
defaults: new
{
controller = "reservations",
id = RouteParameter.Optional
},
constraints: null,
handler: new AuthenticationHandler(authenticationConfig,
GlobalConfiguration.Configuration));

7. Locate the call to the MapHttpRoute method, which uses the name ReservationApi. Replace the
method call with the following code.

config.Routes.MapHttpRoute(
name: "ReservationsApi",
routeTemplate: "Reservations/{id}",
defaults: new
{
controller = "Reservations",
action = "GetReservation"
},
constraints: new
{
httpMethod = new HttpMethodConstraint(HttpMethod.Get)
},
handler: new AuthenticationHandler(authenticationConfig,
GlobalConfiguration.Configuration));

8. Locate the call to the MapHttpRoute method, which uses the name DefaultApi. Replace the method
call with the following code.

config.Routes.MapHttpRoute(
name: "DefaultApi",
routeTemplate: "{controller}/{id}",
defaults: new { id = RouteParameter.Optional },
constraints: null,
handler: new AuthenticationHandler(authenticationConfig,
GlobalConfiguration.Configuration));

Note: The authentication handler is not used for the first two routes you just added,
because the requests to the FederationCallback controller are sent before the client is
authenticated. The authentication handler is not used for the location's weather route because
the GetWeather action is public and does not require any authentication.

9. Press Ctrl+S to save the file.

Task 5: Decorate the ASP.NET Web API controllers for authorization


1. In Solution Explorer, expand the BlueYonder.Companion.Controllers project, and double-click
ReservationsController.cs.

2. Decorate the class with the [Authorize] attribute. The resulting code should resemble the following.

[Authorize]
public class ReservationsController : ApiController
{
...
}

3. Press Ctrl+S to save the file.

4. Repeat the same process for the TravelersController.cs and the TripsController.cs files.

Results: After completing this exercise, you will have configured your ASP.NET Web API services to use
claims-based identities, authenticate users, and authorize users. You will test this configuration at the end
of the lab.
L11-9

Exercise 3: Deploying the Web Application to Windows Azure and Configuring the Client App
Task 1: Deploy the Web Application to Windows Azure
1. In Solution Explorer, right-click the BlueYonder.Companion.Host.Azure project, and then click
Publish.

2. If your subscription is already listed in the subscriptions drop-down list, skip to the next task;
otherwise, continue.

3. In the Publish Windows Azure Application dialog box, click the Sign in to download credentials
hyperlink.

Note: The browser automatically logs you in to the portal. When you are redirected to the
Windows Live ID Sign in page, type your email address and password, and then click Sign in.

4. The publish settings file is generated, and the Do you want to open or save... Internet Explorer dialog
box appears at the bottom. Click the arrow within the Save button, select the Save as option, and specify
the following location: D:\AllFiles\Mod11\LabFiles. Click Save. If a Confirm Save As dialog box appears,
click Yes.
5. Return to the Publish Windows Azure Application window in Visual Studio 2012. Click Import. Type
D:\AllFiles\Mod11\LabFiles and select the .publishsettings file that you downloaded in the previous
step. Make sure that your subscription is selected under the Choose your subscription section.
6. Click Next.

7. On the Common Settings tab, click the Cloud Service box, and select the cloud service that matches
the name you wrote down in the beginning of the lab, after running the setup script.

8. Click Publish to start the publishing process. If a Deployment Environment In Use dialog box
appears, click Replace. The publish process might take several minutes to complete.

Task 2: Configure the client app for authentication


1. In the 20487B-SEA-DEV-C virtual machine, on the Start screen, click the Visual Studio 2012 tile.
2. On the File menu, point to Open, and then click Project/Solution.

3. Type
D:\AllFiles\Mod11\LabFiles\begin\BlueYonder.Companion.Client\BlueYonder.Companion.Clie
nt.sln in the File name text box, and then click Open.

4. If you are prompted by a Developers License dialog box, click I Agree. If you are prompted by a
User Account Control dialog box, click Yes. Type your email address and a password in the
Windows Security dialog box and then click Sign in. Click Close in the Developers License dialog
box.

Note: If you do not have a valid email address, click Sign up and register for the service.
Write down these credentials and use them whenever an email account is required.

5. In Solution Explorer, under the BlueYonder.Companion.Shared project, double-click Addresses.cs.

6. Locate the BaseUri property, and replace the {CloudService} string with the Windows Azure Cloud
Service name you wrote down at the beginning of this lab.

7. Press Ctrl+S to save the changes.

8. In Solution Explorer, expand the BlueYonder.Companion.Client project, expand Helpers, and then
double-click DataManager.cs.
9. In the DataManager class, locate the private constant named AcsNamespace, and set its value to
BlueYonderCompanionYourInitials (YourInitials will contain your initials).

10. Press Ctrl+S to save the changes.

Note: The client is already configured to use the blueyonder.cloud realm.

Task 3: Examine the Client Code That Manages the Authentication Process
1. Locate the GetLiveIdUri method, and examine its code. The method sends a request to the ACS to
request the list of identity providers, and returns the address of the first identity provider.

Note: The ACS namespace you created automatically has the Windows Live ID identity
provider.

2. Locate the AuthenticateAsync method, and examine its code. The method uses the
WebAuthenticationBroker class to authenticate the client with the Windows Live ID identity
provider, and then sends the token to the ASP.NET Web API FederationCallback controller for
authentication.
3. Locate the GetSessionToken method and examine its code. The method uses the SWT returned from
the federation callback to perform a session handshake with the ASP.NET Web API service. The service
returns a session token, which is then stored in the static Token field.

4. Locate the CreateHttpClient method and examine its code. The method creates an HTTP request
and adds the HTTP Authorization header with the stored token.

5. In Solution Explorer, right-click the BlueYonder.Companion.Client project, and then click Set as
StartUp Project.
6. Press Ctrl+F5 to start the client without debugging.

7. Wait for the Connecting to a service page to show, enter your email and password, and click Sign in.

8. Wait until the connection is complete and you see the main window of the app.

9. Right-click the screen to open the app bar, and click Log out.

10. Close the client app.

Results: After completing this exercise, you will be able to run the client app, and log in using your
Windows Live ID credentials.

Module 12: Scaling Services


Lab: Scalability
Exercise 1: Use Windows Azure Caching
Task 1: Add a Caching Worker Role to the Cloud Project
1. In the 20487B-SEA-DEV-A virtual machine, on the Start screen, click the Computer tile to open File
Explorer.

2. Browse to D:\AllFiles\Mod12\LabFiles\Setup.
3. Double-click the setup.cmd file. When prompted for information, provide it according to the
instructions.

Note: You may see a warning saying the client model does not match the server model.
This warning may appear if there is a newer version of the Windows Azure PowerShell Cmdlets. If
this message is followed by an error message, please inform the instructor, otherwise you can
ignore the warning.

4. Wait for the script to finish, and then press any key to close the script window.
5. On the Start screen, click the Visual Studio 2012 tile.

6. On the File menu, point to Open, and then click Project/Solution.

7. Browse to D:\Allfiles\Mod12\LabFiles\begin\BlueYonder.Server.
8. Select the file BlueYonder.Companion.sln and then click Open.
9. In Solution Explorer, expand the BlueYonder.Companion.Host.Azure project, right-click Roles, then
point to Add, and then click New Worker Role Project.
10. In the Add New .NET Framework 4.5 Role Project dialog box, click Cache Worker Role.
11. In the Name text box, type BlueYonder.Companion.CacheWorkerRole, and then click Add.

12. On the Build menu, click Build Solution.

Task 2: Add the Windows Azure Caching NuGet Package to the ASP.NET Web API
Project
1. In Solution Explorer, right-click the BlueYonder.Companion.Controllers project, and click Manage
NuGet Packages.

2. In the Manage NuGet Packages dialog box, on the navigation pane, expand the Online node and
then click the NuGet official package source node.

3. Press Ctrl+E and type WindowsAzure.

4. In the center pane, click the Windows Azure Caching package, and then click Install.

5. If the License Acceptance dialog box appears, click I Accept.

6. Wait for installation completion. Click Close to close the dialog box.

7. In Solution Explorer, right-click the BlueYonder.Companion.Host project, and click Manage NuGet
Packages.

8. In the Manage NuGet Packages dialog box, on the navigation pane, expand the Online node and
then click the NuGet official package source node.

9. Press Ctrl+E and type WindowsAzure.


10. In the center pane, click the Windows Azure Caching package, and then click Install.

11. Wait for installation completion. Click Close to close the dialog box.

12. In Solution Explorer, under the BlueYonder.Companion.Host project, double-click Web.config.


13. Locate the <dataCacheClients> section at the end of the file, and in it locate the <autoDiscover>
element.

14. Change the value of the identifier attribute to BlueYonder.Companion.CacheWorkerRole.

15. Press Ctrl+S to save the file.

Task 3: Add Code to Cache the List of Searched Locations


1. In Solution Explorer, expand the BlueYonder.Companion.Controllers project, and then double-click
FlightsController.cs.
2. Add the following using directive to the beginning of the file:

using Microsoft.ApplicationServer.Caching;

3. Locate the comment // TODO: Place cache initialization here, and place the following code after it:

DataCacheFactory cacheFactory = new DataCacheFactory();
DataCache cache = cacheFactory.GetDefaultCache();

4. Locate the comment // TODO: Place cache check here, and place the following code after it:

string cacheKey = String.Format("{0};{1};{2}",
source.ToString(), destination.ToString(),
date.HasValue ? date.ToString() : String.Empty);
routesWithSchedules = cache.Get(cacheKey) as List<FlightWithSchedulesDTO>;

5. Locate the comment // TODO: Insert into cache here, and place the following code after it:

cache.Put(cacheKey, routesWithSchedules);

6. Press Ctrl+S to save the changes.
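To see the shape of the cache key this code produces, including the trailing empty segment when no date is supplied, here is a minimal stand-alone sketch. BuildKey and the Seattle/Rome sample values are illustrative only.

```csharp
using System;

static class CacheKeyDemo
{
    // Builds a key in the same "{source};{destination};{date}" format the
    // FlightsController code above composes before calling cache.Get.
    public static string BuildKey(object source, object destination, DateTime? date)
    {
        return String.Format("{0};{1};{2}",
            source.ToString(), destination.ToString(),
            date.HasValue ? date.ToString() : String.Empty);
    }
}
```

For example, BuildKey("Seattle", "Rome", null) yields "Seattle;Rome;", so date-less and dated searches for the same route never collide in the cache.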

Task 4: Debug Using the Client Application


1. Click the first line of code in the Get method you have edited in the previous task, and then press F9
to add a breakpoint.

2. In Solution Explorer, right-click the BlueYonder.Companion.Host.Azure project, and then click Set
as StartUp Project.

3. Press F5 to start the Windows Azure emulator.

4. Switch into the 20487B-SEA-DEV-C virtual machine.

5. On the Start screen, click the Visual Studio 2012 tile.

6. On the File menu, point to Open, and then click Project/Solution.

7. Type
D:\AllFiles\Mod12\LabFiles\begin\BlueYonder.Companion.Client\BlueYonder.Companion.Client.sln in
the File name text box, and then click Open.

8. If you are prompted by a Developers License dialog box, click I Agree. If you are prompted by a
User Account Control dialog box, click Yes. Type your email address and a password in the
Windows Security dialog box and then click Sign in. Click Close in the Developers License dialog
box.

Note: If you do not have a valid email address, click Sign up and register for the service.
Write down these credentials and use them whenever an email account is required.

9. In Solution Explorer, right-click the BlueYonder.Companion.Client project, and then click Set as
StartUp Project.

10. The client app is already configured to use the Windows Azure compute emulator. Press Ctrl+F5 to
start the client app without debugging.

Note: Normally, the Windows Azure Emulator is not accessible from other computers on
the network. For the purpose of testing this lab from a Windows 8 client, a routing module was
installed on the server's IIS, routing the incoming traffic to the emulator.

11. If you are prompted to allow the app to run in the background, click Allow.

12. After the client app starts, display the app bar by right-clicking or by swiping from the bottom of the
screen.

13. Click Search, and in the Search box on the right side enter N.
14. Go back to the 20487B-SEA-DEV-A virtual machine, to Visual Studio 2012. The code execution
breaks, and the line in breakpoint is highlighted in yellow.

15. Press F10 several times, until you reach the second if statement. Hover over the
routesWithSchedules object and verify that the value of the variable is null, meaning the item was
not found in the cache.

16. Press F5 to continue, and go back to the 20487B-SEA-DEV-C virtual machine, to the client app.
17. Close the client app, return to Visual Studio 2012, and press Ctrl+F5 to start the client app without
debugging.

18. Display the app bar by right-clicking or by swiping from the bottom of the screen. Click Search, and
in the Search box on the right side enter N.

19. Go back to the 20487B-SEA-DEV-A virtual machine, to Visual Studio 2012. The code execution
breaks, and the line in breakpoint is highlighted in yellow.

20. Press F10 several times, until you reach the second if statement. Hover over the
routesWithSchedules object and verify that the code now retrieves the data from the cache.

21. Press F5 to continue running the code, return to Visual Studio 2012, and on the Debug menu click
Stop Debugging.

22. Go back to the 20487B-SEA-DEV-C virtual machine, and close the client app.

Results: You will have added a caching worker role to the Cloud project, and implemented other
Windows Azure caching features.

Appendix A: Designing and Extending WCF Services


Lab: Designing and Extending WCF Services
Exercise 1: Create a Custom Error Handler Runtime Component
Task 1: Create an Operation Extension to Hold the Parameter Values
1. On the Start screen, click the Visual Studio 2012 tile.

2. On the File menu, point to Open, and then click Project/Solution.

3. Browse to D:\Allfiles\Apx01\LabFiles\begin\BlueYonder.Server.

4. Select the file BlueYonder.Server.sln and then click Open.

5. In Solution Explorer, right-click the BlueYonder.BookingService.Implementation project, point to


Add, and then click New Folder. Name the folder Extensions, and press Enter.
6. In Solution Explorer, right-click the Extensions folder, point to Add, and then click Class.
7. In the Add New Item dialog box, type ParametersInfo in the Name box, and then click Add.

8. To the beginning of the file, add the following using directive.

using System.ServiceModel;

9. Replace the class declaration with the following code.

public class ParametersInfo : IExtension<OperationContext>

10. To create a local field that can store the operation parameters, add the following code to the class.

public object[] Parameters { get; set; }

11. To the class, add the following constructor method.

public ParametersInfo(object[] parameters)
{
Parameters = parameters;
}

12. Implement the IExtension<OperationContext> interface by adding the following code to the class.

public void Attach(OperationContext owner)
{
}
public void Detach(OperationContext owner)
{
}

Note: You do not have to add any code to the Attach and Detach methods, because you
only use the extension for state management, and not for adding functionality to the operation
context.
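The following self-contained sketch mimics this state-only extension pattern with simplified stand-in types, so you can see how an extension is stored and later retrieved. FakeOperationContext is an illustrative substitute for the real WCF OperationContext, not an actual WCF class.

```csharp
using System.Collections.Generic;
using System.Linq;

// Simplified stand-in for WCF's IExtension<T> interface.
interface IExtension<T>
{
    void Attach(T owner);
    void Detach(T owner);
}

// Illustrative substitute for OperationContext with an extensions collection.
class FakeOperationContext
{
    public List<IExtension<FakeOperationContext>> Extensions { get; } =
        new List<IExtension<FakeOperationContext>>();

    // Mirrors the Find<T> lookup used later in the lab.
    public TExt Find<TExt>() where TExt : class
    {
        return Extensions.OfType<TExt>().FirstOrDefault();
    }
}

class ParametersInfo : IExtension<FakeOperationContext>
{
    public object[] Parameters { get; set; }
    public ParametersInfo(object[] parameters) { Parameters = parameters; }
    public void Attach(FakeOperationContext owner) { } // state only, no behavior
    public void Detach(FakeOperationContext owner) { }
}
```

In the real lab code, the parameter inspector adds the extension with OperationContext.Current.Extensions.Add, and the error handler retrieves it with Extensions.Find&lt;ParametersInfo&gt;().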

13. To save the file, press Ctrl+S.



Task 2: Create a Parameter Inspector that Stores the Parameter Values in an Operation
Extension
1. In Solution Explorer, in the BlueYonder.BookingService.Implementation project, right-click the
Extensions folder, point to Add, and then click Class.

2. In the Add New Item dialog box, type ParametersInspector in the Name box, and then click Add.
3. To the beginning of the file, add the following using directives.

using System.ServiceModel;
using System.ServiceModel.Dispatcher;

4. Replace the class declaration with the following code.

class ParametersInspector : IParameterInspector

5. Implement the IParameterInspector interface by adding the following code to the class.

public void AfterCall(string operationName, object[] outputs, object returnValue,
object correlationState)
{
}
public object BeforeCall(string operationName, object[] inputs)
{
}

6. To the BeforeCall method, add the following code to store the input parameters in an extension
object.

OperationContext.Current.Extensions.Add(new ParametersInfo(inputs));
return null;

Note: You have to implement the BeforeCall method to save the parameters of the
operation before the operation is invoked. You do not have to implement the AfterCall method,
because it executes only after the operation completes without exceptions.

7. To save the file, press Ctrl+S.

Task 3: Create an Error Handler that Traces Parameter Values for Faulty Operations
1. In Solution Explorer, in the BlueYonder.BookingService.Implementation project, right-click the
Extensions folder, point to Add, and then click Existing Item.

2. In the File name box, type D:\Allfiles\Apx01\Labfiles\Assets\Extensions, and press Enter.

3. Click the file ErrorLoggingUtils.cs, and then click Add.

4. In Solution Explorer, in the BlueYonder.BookingService.Implementation project, right-click the


Extensions folder, point to Add, and then click Class.

5. In the Add New Item dialog box, type LoggingErrorHandler in the Name box, and then click Add.

7. To the beginning of the file, add the following using directives.

using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;
using System.Diagnostics;

8. Replace the class declaration with the following code.

public class LoggingErrorHandler : IErrorHandler

9. Add the following code to the class.

private TraceSource _traceSource = new TraceSource("ErrorHandlerTrace");

Note: Use the System.Diagnostics.TraceSource class to write trace messages to the WCF
tracing log file.

10. To implement the IErrorHandler interface, add the following code to the class.

public void ProvideFault(Exception error, MessageVersion version, ref Message fault)
{
}
public bool HandleError(Exception error)
{
return true;
}

11. To obtain the extension object from the operation's context, in the HandleError method, add the
following code before the return statement.

ParametersInfo parametersInfo =
OperationContext.Current.Extensions.Find<ParametersInfo>();
if (parametersInfo != null)
{
}

Note: The IErrorHandler interface provides two methods, ProvideFault and HandleError.
You can implement the ProvideFault method to provide a fault message to WCF based on the
thrown exception. The HandleError method is called after WCF returns the fault message to the
client, so that you can log the thrown exception without making the client wait until the logging
procedure is complete.

12. To log the parameters, add the following code in the if block.

string message = string.Format
("Exception of type {0} occurred: {1}\n operation parameters are:\n{2}\n",
error.GetType().Name,
error.Message,
parametersInfo.Parameters.Select
(o => ErrorLoggingUtils.GetObjectAsXml(o)).Aggregate((prev, next) => prev +
"\n" + next));
_traceSource.TraceEvent(TraceEventType.Error, 0, message);

Note: The ErrorLoggingUtils.GetObjectAsXml method receives an object and returns it
as an XML string. The Aggregate extension method receives an IEnumerable<T> collection and
executes an aggregative function for every item in the collection. In the preceding code example,
the aggregate function concatenates the XML strings into one large string.
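The Select/Aggregate chain can be illustrated with a minimal stand-alone sketch. ToXml below is a fake stand-in for ErrorLoggingUtils.GetObjectAsXml that simply wraps each value in a <param> element; the real method produces full XML serialization.

```csharp
using System.Linq;

static class AggregateDemo
{
    // Fake stand-in for ErrorLoggingUtils.GetObjectAsXml.
    static string ToXml(object o) => "<param>" + o + "</param>";

    // Projects each parameter to an XML fragment, then folds the fragments
    // into one newline-separated string, as the error handler above does.
    public static string Concatenate(object[] parameters)
    {
        return parameters
            .Select(o => ToXml(o))
            .Aggregate((prev, next) => prev + "\n" + next);
    }
}
```

For example, Concatenate(new object[] { 1, "two" }) returns the two fragments joined by a newline.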

13. To save the file, press Ctrl+S.



Task 4: Create a Custom Service Behavior for the Error Handler and Apply it to the
Service
1. In Solution Explorer, in the BlueYonder.BookingService.Implementation project, right-click the
Extensions folder, point to Add, and then click Class.

2. In the Add New Item dialog box, type ErrorLoggingBehavior in the Name box, and then click Add.
3. To the beginning of the file, add the following using directives.

using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.Collections.ObjectModel;
using System.ServiceModel.Dispatcher;

4. Replace the class declaration with the following code.

class ErrorLoggingBehavior : IServiceBehavior

5. To implement the IServiceBehavior interface, add the following code to the class.

public void AddBindingParameters(ServiceDescription serviceDescription,
    ServiceHostBase serviceHostBase, Collection<ServiceEndpoint> endpoints,
    BindingParameterCollection bindingParameters)
{
}

public void ApplyDispatchBehavior(ServiceDescription serviceDescription,
    ServiceHostBase serviceHostBase)
{
}

public void Validate(ServiceDescription serviceDescription,
    ServiceHostBase serviceHostBase)
{
}

6. To attach the new error handler to each channel, add the following code to the
ApplyDispatchBehavior method.

foreach (ChannelDispatcher channelDispatcher in serviceHostBase.ChannelDispatchers)
{
    channelDispatcher.ErrorHandlers.Add(new LoggingErrorHandler());
}

7. To apply the custom parameter inspector to each operation, add the following code at the
end of the foreach block you added (after the error handler).

foreach (EndpointDispatcher endpoint in channelDispatcher.Endpoints)
{
    foreach (DispatchOperation operation in endpoint.DispatchRuntime.Operations)
    {
        operation.ParameterInspectors.Add(new ParametersInspector());
    }
}

8. To save the file, press Ctrl+S.

9. In Solution Explorer, right-click the BlueYonder.BookingService.Implementation project, and then
click Add Reference.

10. In the Reference Manager dialog box, expand the Assemblies node in the navigation pane, and
then click Framework.

11. Scroll down the assemblies list, point to the System.Configuration assembly, select the check box
next to the assembly name, and then click OK.

12. In Solution Explorer, in the BlueYonder.BookingService.Implementation project, right-click the
Extensions folder, point to Add, and then click Class.
13. In the Add New Item dialog box, type ErrorLoggingBehaviorExtensionElement in the Name box, and
then click Add.

14. To the beginning of the file, add the following using directive.

using System.ServiceModel.Configuration;

15. Replace the class declaration with the following code.

public class ErrorLoggingBehaviorExtensionElement : BehaviorExtensionElement

16. Implement the BehaviorExtensionElement abstract class by adding the following code to the class.

public override Type BehaviorType
{
    get
    {
    }
}

protected override object CreateBehavior()
{
}

17. To the get property accessor of the BehaviorType property, add the following code.

return typeof(ErrorLoggingBehavior);

18. To the CreateBehavior method, add the following code.

return new ErrorLoggingBehavior();

Note: You can use a custom behavior in the configuration file only if you create a class for
its configuration element. The configuration element class has to provide two things: the type of
the custom behavior class and an instance of it.

19. To save the file, press Ctrl+S.

20. In Solution Explorer, expand the BlueYonder.BookingService.Host project, and double-click
App.config.

21. Locate the <system.serviceModel> tag, and add the following section after it, before the
<services> section.

<extensions>
<behaviorExtensions>
<add name="errorLoggingBehavior"
     type="BlueYonder.BookingService.Implementation.Extensions.ErrorLoggingBehaviorExtensionElement, BlueYonder.BookingService.Implementation"/>
</behaviorExtensions>
</extensions>

22. Locate the <serviceBehaviors> element under the <behaviors> section, and add the following
element between the <behavior> and <serviceMetadata> tags.

<errorLoggingBehavior/>

23. The resulting <behaviors> section should resemble the following configuration.

<behaviors>
<serviceBehaviors>
<behavior>
<errorLoggingBehavior/>
<serviceMetadata httpGetEnabled="true"
httpGetUrl="http://localhost/BlueYonder/Booking"/>
</behavior>
</serviceBehaviors>
</behaviors>

Note: Visual Studio IntelliSense uses built-in schemas to perform validations. Therefore, it
will not recognize the errorLoggingBehavior behavior extension, and will display a warning.
Disregard this warning.

24. To save the file, press Ctrl+S, and leave the file open.

Task 5: Configure Tracing for WCF and the Custom Trace


1. While still in App.config, add the following configuration to the end of the file, before the
</configuration> tag.

<system.diagnostics>
<sources>
</sources>
<trace autoflush="true" />
</system.diagnostics>

Note: The autoflush attribute controls whether log messages are immediately written to
the log, or cached in memory and periodically flushed. The value of the attribute is set to true so
that you can view the results immediately without waiting for the log to flush its content to the
file.

2. In the <system.diagnostics> element, add the following configuration to configure where the logs
are written.

<sharedListeners>
<add name="ServiceModelTraceListener"
type="System.Diagnostics.XmlWriterTraceListener"
initializeData="D:\AllFiles\Apx01\LabFiles\WCFTrace.svclog" />
</sharedListeners>

3. In the <sources> element, add the following configuration to log both WCF trace messages
and your custom trace messages.

<source name="System.ServiceModel" switchValue="Error,ActivityTracing">
  <listeners>
    <add name="ServiceModelTraceListener">
      <filter type="" />
    </add>
  </listeners>
</source>
<source name="ErrorHandlerTrace" switchValue="Error,ActivityTracing">
  <listeners>
    <add name="ServiceModelTraceListener">
      <filter type="" />
    </add>
  </listeners>
</source>

Note: In the above configuration you have two trace sources; each one configures a
different event source. The System.ServiceModel source is used for tracing WCF activities, and
the ErrorHandlerTrace source is used by the LoggingErrorHandler class, in the TraceSource
constructor. WCF tracing is covered in Module 10, Monitoring and Diagnostics, Lesson 2,
Configuring Service Diagnostics, in Course 20487.

4. To save the file, press Ctrl+S.

5. In Solution Explorer, right-click the BlueYonder.BookingService.Host project, and then click Set as
StartUp Project.
6. To start the service host without debugging, press Ctrl+F5.
7. On the Start screen, click the Computer tile to open File Explorer. Browse to D:\AllFiles and
double-click the WcfTestClient shortcut.

8. In the WCF Test Client utility, on the File menu, click Add Service.
9. In the Add Service dialog box, type http://localhost/BlueYonder/Booking, and then click OK. Wait
until you see the service and endpoints tree in the pane to the left.

10. In the pane to the left, double-click the UpdateTrip() node, and then click Invoke in the UpdateTrip
tab. If a Security Warning dialog box appears, click OK.

11. Wait until an error dialog box appears that states "The confirmation code of the reservation is invalid".
To close the dialog box, click Close, and then close the WCF Test Client utility.
12. Return to File Explorer and browse to D:\AllFiles\Apx01\LabFiles. Double-click WCFTrace.svclog.

13. In the Microsoft Service Trace Viewer utility, in the Activity tab to the left, select the line marked in
red that says "Process action 'http://blueyonder.server.interfaces/IBookingService/UpdateTrip'".

14. In the pane to the right side, select the line marked in red that begins with "Exception of type
FaultException occurred: The confirmation code of the reservation is invalid".

15. In the Formatted tab on the lower-right side, locate the Application Data text area and verify that
you see the XML representation of the TripUpdateDto object.

16. Close the Microsoft Service Trace Viewer utility.


17. Close the service host console window, and return to Visual Studio 2012.

Results: You can use the WCF Test Client utility to test the service, cause exceptions to be thrown in the
code, and check the log files to verify that the exception message is logged together with the parameters
that are sent to the service operation.

Exercise 2: Add Support for Distributed Transactions to the WCF Booking Service
Task 1: Add Transaction Flow Attributes to the Frequent Flyer Service Contract
1. In Solution Explorer, expand the BlueYonder.FrequentFlyerService.Contracts project and double-click
IFrequentFlyerService.cs.

2. Add the following attribute to the AddFrequentFlyerMiles and RevokeFrequentFlyerMiles
methods.

[TransactionFlow(TransactionFlowOption.Allowed)]

3. To save the file, press Ctrl+S.

Task 2: Configure the Service Endpoint's Binding to Flow Transactions


1. In Solution Explorer, expand the BlueYonder.FrequentFlyerService.Host project and double-click
App.config.

2. Locate the <system.serviceModel> tag and add the following <bindings> section between the
<system.serviceModel> and <services> tags.

<bindings>
<netTcpBinding>
<binding name="TcpTransactionalBind" transactionFlow="true" />
</netTcpBinding>
</bindings>

3. Locate the <service> element whose name attribute is set to
BlueYonder.FrequentFlyerService.Implementation.FrequentFlyerService and replace its
<endpoint> with the following element.

<endpoint name="FrequentFlyerTcp"
address="net.tcp://localhost:5010/BlueYonder/FrequentFlyer" binding="netTcpBinding"
bindingConfiguration="TcpTransactionalBind"
contract="BlueYonder.FrequentFlyerService.Contracts.IFrequentFlyerService" />

Note: Make sure that you replace the existing endpoint and do not add a second endpoint,
because the two endpoints have the same contract and the same address, and will therefore
collide.

4. To save the file, press Ctrl+S.

Task 3: Add the Transaction Scope Attribute to the Service Implementation


1. In Solution Explorer, expand the BlueYonder.FrequentFlyerService.Implementation project and
double-click FrequentFlyerService.cs.

2. Decorate the AddFrequentFlyerMiles and RevokeFrequentFlyerMiles methods with the following
attribute.

[OperationBehavior(TransactionAutoComplete = true, TransactionScopeRequired = true)]

3. To save the file, press Ctrl+S.
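The two properties of the attribute work together: TransactionScopeRequired = true makes WCF execute the operation inside the transaction flowed from the client (or a new one if none is flowed), and TransactionAutoComplete = true votes to commit automatically when the operation returns without an exception. As a sketch only (the parameter names here are assumptions, not the lab's exact signature):

```csharp
[OperationBehavior(TransactionAutoComplete = true, TransactionScopeRequired = true)]
public void AddFrequentFlyerMiles(int travelerId, int miles)
{
    // Runs inside the flowed (or newly created) transaction.
    // Returning normally completes the operation's vote to commit;
    // an unhandled exception aborts the distributed transaction.
}
```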



Task 4: Add Code to the WCF Booking Service that Calls the Frequent Flyer WCF
Service
1. In Solution Explorer, in the BlueYonder.BookingService.Implementation project, double-click
BookingService.cs.

2. In the BookingService class, locate the comment // TODO: 1 - Create a channel factory for the
Frequent Flyer service, and add the following code after it.

private ChannelFactory<IFrequentFlyerService> _frequentFlyerChannnelFactory =
    new ChannelFactory<IFrequentFlyerService>("FrequentFlyerEP");

3. Locate the method UpdateTrip, and scroll to the end of the method until you see the comment //
TODO: 2 - Call the Frequent Flyer service to add the miles if the traveler has checked-in and add the
following code between the two comments.

if (originalStatus != newStatus && newStatus == FlightStatus.CheckedIn)
{
    IFrequentFlyerService proxy = _frequentFlyerChannnelFactory.CreateChannel();
    int earnedMiles = originalTrip.FlightInfo.Flight.FrequentFlyerMiles;
    proxy.AddFrequentFlyerMiles(reservation.TravelerId, earnedMiles);
}

4. To save the file, press Ctrl+S.

Task 5: Execute the Service Call and the Reservations Database Updates in a
Distributed Transaction
1. In Solution Explorer, right-click the BlueYonder.BookingService.Implementation project, and then
click Add Reference.
2. In the Reference Manager dialog box, expand the Assemblies node in the navigation pane, and
then click Framework.

3. Scroll down the assemblies list, point to the System.Transactions assembly, select the check box next
to the assembly name, and then click OK.

4. Return to the BookingService.cs file, and add the following using directive in the beginning of the
file.

using System.Transactions;

5. Locate the UpdateTrip method, and surround the code that begins with the if statement you
added earlier and ends with the Save method call, with the following using statement.

using (TransactionScope scope = new TransactionScope())
{
}

6. Add the following code at the end of the using block after the Save method.

scope.Complete();

7. The resulting code segment should resemble the following code.

using (TransactionScope scope = new TransactionScope())
{
    // TODO: 2 - Call the Frequent Flyer service to add the miles if the traveler
    // has checked-in
    if (originalStatus != newStatus && newStatus == FlightStatus.CheckedIn)
    {
        IFrequentFlyerService proxy = _frequentFlyerChannnelFactory.CreateChannel();
        int earnedMiles = originalTrip.FlightInfo.Flight.FrequentFlyerMiles;
        proxy.AddFrequentFlyerMiles(reservation.TravelerId, earnedMiles);
    }

    // TODO: 3 - Wrap the save and the service call in a transaction scope
    reservationRepository.Save();
    scope.Complete();
}

8. To save the file, press Ctrl+S.

Task 6: Update the WCF Client Configuration with the Frequent Flyer Service Endpoint
and the Support for Transaction Flow in the Bindings
1. In Solution Explorer, in the BlueYonder.BookingService.Host project, double-click App.config.

2. Locate the <system.serviceModel> element and add the following <bindings> section at the end
of the element, before the </system.serviceModel> tag.

<bindings>
<netTcpBinding>
<binding name="TcpTransactionalBind" transactionFlow="true" />
</netTcpBinding>
</bindings>

3. Before the </system.serviceModel> tag, add the following configuration element.

<client>
<endpoint address="net.tcp://localhost:5010/BlueYonder/FrequentFlyer"
binding="netTcpBinding" bindingConfiguration="TcpTransactionalBind"
contract="BlueYonder.FrequentFlyerService.Contracts.IFrequentFlyerService"
name="FrequentFlyerEP"></endpoint>
</client>

4. To save the file, press Ctrl+S.


5. On the Start screen, click the Administrative Tools tile, and in the Administrative Tools window,
double-click Services.

6. In the Services window, look for the Distributed Transaction Coordinator service and check its
Status column. If the status of the service is not Running, right-click it, and then click Start.

7. Return to Visual Studio 2012. In Solution Explorer, right-click the BlueYonder.BookingService.Host
project, click Set as StartUp Project, and then press Ctrl+F5 to start the project without debugging.
8. Wait until the message "Booking Service Is Running... Press [ENTER] to close." appears, and then return
to Visual Studio 2012.

9. In Solution Explorer, right-click the BlueYonder.FrequentFlyerService.Host project, click Set as
StartUp Project, and then press Ctrl+F5 to start the project without debugging.

10. Wait until the message "Frequent Flyer Service Is Running... Press [ENTER] to close." appears.

11. On the Start screen, click the Computer tile to open File Explorer. Browse to D:\AllFiles and
double-click the WcfTestClient shortcut.

12. In the WCF Test Client utility, on the File menu, click Add Service.

13. In the Add Service dialog box, type http://localhost/BlueYonder/Booking, and then click OK. Wait
until you see the service and endpoints tree in the pane to the left.

14. On the File menu, click Add Service.



15. In the Add Service dialog box, type http://localhost/BlueYonder/FrequentFlyer, and then click
OK. Wait until you see the service and endpoints tree in the pane to the left.

16. Double-click the UpdateTrip() node in the pane to the left and enter the following values in the
Request area of the UpdateTrip tab:
FlightDirection: Departing

ReservationConfirmationCode: Aa123

17. In the TripToUpdate property, click the null value, open the drop-down list, and select
BlueYonder.BookingService.Contracts.TripDto.

18. Expand the TripToUpdate node and enter the following values:

Class: First

FlightScheduleID: 1

Status: CheckedIn

19. In the UpdateTrip tab, click Invoke. If a Security Warning dialog box appears, click OK.
20. Wait until the service invocation is finished, and verify that no errors are shown.

21. In the pane to the left, double-click the GetAccumulatedMiles() node and set the travelerId
parameter to 1 in the Request area of the GetAccumulatedMiles tab.
22. Click Invoke, and if a Security Warning dialog box appears, click OK. Wait until the service
invocation is finished, and verify the return value in the Response area is 5026.

23. Close the WCF Test Client utility and the two console windows.

Results: You can run the WCF Test Client utility, call an operation in the Booking service that starts a
distributed transaction, and verify that the Frequent Flyer service indeed committed its transaction.

Appendix B: Implementing Security in WCF Services


Lab: Securing a WCF Service
Exercise 1: Securing the WCF Service
Task 1: Create a server certificate
1. In the 20487B-SEA-DEV-A virtual machine, on the Start screen, click Computer to open the File
Explorer window.

2. Browse to D:\AllFiles\Apx02\LabFiles\Setup.
3. Double-click CreateServerCertificate.cmd. When it completes, press any key to close the command
window.

Task 2: Configure the service endpoint's binding


1. On the Start screen, click the Visual Studio 2012 tile.
2. On the File menu, point to Open, and then click Project/Solution.

3. Type D:\AllFiles\Apx02\LabFiles\begin\BlueYonder.Server\BlueYonder.Server.sln in the File
name box, and then click Open.
4. In Solution Explorer, expand the BlueYonder.Server.Booking.Host project, and then double-click
App.config.

5. Locate the <bindings> section under the <system.serviceModel> section group.


6. Add the following binding configuration between the <netTcpBinding> and <binding> tags:

<binding>
<security mode="Message">
<message clientCredentialType="Certificate"/>
</security>
</binding>

Note: You can create one default binding configuration, without the name attribute, for
each binding type. The default configuration applies to any endpoint that uses that binding and
does not have its own binding configuration.
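For contrast, here is a sketch of how a default and a named binding configuration can coexist (the SecureTcp name is illustrative, not part of the lab): the named configuration applies only to endpoints that reference it explicitly through the bindingConfiguration attribute.

```xml
<bindings>
  <netTcpBinding>
    <!-- Default: applies to every netTcpBinding endpoint
         that has no bindingConfiguration attribute -->
    <binding>
      <security mode="Message">
        <message clientCredentialType="Certificate"/>
      </security>
    </binding>
    <!-- Named: applies only where bindingConfiguration="SecureTcp" is specified -->
    <binding name="SecureTcp">
      <security mode="Message">
        <message clientCredentialType="Certificate"/>
      </security>
    </binding>
  </netTcpBinding>
</bindings>
```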

Task 3: Configure the service behavior to use the newly created certificate
1. Under the <system.serviceModel> section group, locate the <behaviors> section.

2. Locate the <behavior> element under the <serviceBehaviors> element.

3. Add the following configuration to the <behavior> element.

<serviceCredentials>
</serviceCredentials>

4. Add the <serviceCertificate> element to the <serviceCredentials> element with the following
configuration.

<serviceCertificate storeLocation="LocalMachine" storeName="My"
                    x509FindType="FindBySubjectName" findValue="Server"/>

5. Add the <clientCertificate> element to the <serviceCredentials> element with the following
configuration.

<clientCertificate>
<authentication revocationMode="NoCheck"/>
</clientCertificate>

Note: You cannot check if the client certificate has been revoked, because it was generated
locally. If a real certification authority had issued the client certificate, it would have been possible
to check whether it was revoked.

Results: You can test your changes at the end of the next exercise.

Exercise 2: Using Authorization Rules to Validate the Clients' Requests


Task 1: Create a custom authorization policy to attach roles to client certificates
1. In Solution Explorer, right-click the BlueYonder.BookingService.Implementation project, and then
click Add Reference.

2. In the Reference Manager dialog box, expand the Assemblies node in the navigation pane, and
then click the Framework tab.
3. Select the System.IdentityModel assembly from the list, and then click OK.

4. In Solution Explorer, right-click the BlueYonder.BookingService.Implementation project, point to
Add, and then click Class.

5. In the Add New Item dialog box, type CertificateAuthorizationPolicy in the Name box, and then
click Add.

6. Change the declaration of the class to the following.

public class CertificateAuthorizationPolicy :
    System.IdentityModel.Policy.IAuthorizationPolicy

7. Add the following property to the class.

public System.IdentityModel.Claims.ClaimSet Issuer
{
    get
    {
        return System.IdentityModel.Claims.ClaimSet.System;
    }
}

8. Add the following field declaration to the class.

private Guid _id;

9. Add the following constructor method to the class.

public CertificateAuthorizationPolicy()
{
_id = Guid.NewGuid();
}

10. Add the following property to the class.

public string Id
{
get
{
return _id.ToString();
}
}

11. Add the following static field to the class.

private static Dictionary<string, string[]> AuthorizationForUser =
    new Dictionary<string, string[]>
    {
        { "CN=Client", new string[] { "ReservationsManager" } }
    };

12. Add the following method to the class.



public bool Evaluate(System.IdentityModel.Policy.EvaluationContext evaluationContext,
    ref object state)
{
    bool retValue = false;
    var identitiesList = evaluationContext.Properties["Identities"] as
        List<System.Security.Principal.IIdentity>;
    if (identitiesList != null && identitiesList.Count > 0)
    {
        System.Security.Principal.IIdentity identity = identitiesList.First();
        string name = identity.Name.Split(';').First();
        string[] roles = null;
        if (AuthorizationForUser.ContainsKey(name))
        {
            roles = AuthorizationForUser[name];
        }
        evaluationContext.Properties["Principal"] =
            new System.Security.Principal.GenericPrincipal(identity, roles);
        retValue = true;
    }
    return retValue;
}

Task 2: Configure the service authorization to use the custom authorization policy
1. In Solution Explorer, expand the BlueYonder.Server.Booking.Host project, and double-click
App.config.
2. Under the <system.serviceModel> section group, locate the <behaviors> section.

3. Under the <serviceBehaviors> element, locate the first <behavior> element.

4. Add the <serviceAuthorization> element to the behavior with the following configuration.

<serviceAuthorization principalPermissionMode="Custom">
  <authorizationPolicies>
    <add policyType="BlueYonder.BookingService.Implementation.CertificateAuthorizationPolicy, BlueYonder.BookingService.Implementation"/>
  </authorizationPolicies>
</serviceAuthorization>

5. To save the file, press Ctrl+S.

Task 3: Apply principal permissions to the service operations


1. In Solution Explorer, expand the BlueYonder.BookingService.Implementation project, and double-
click BookingService.cs.

2. Decorate the CreateReservation method with the following attribute.

[System.Security.Permissions.PrincipalPermission(
System.Security.Permissions.SecurityAction.Demand, Role="ReservationsManager")]

3. To save the file, press Ctrl+S.

4. In the CreateReservation method, right-click the first line of code (the if statement), point to
Breakpoint, and then click Insert Breakpoint.

5. In Solution Explorer, right-click the BlueYonder.Server.Booking.Host project, and then click Set as
StartUp Project.

6. To debug the service host, press F5.



7. Verify that the console window opens without throwing any exceptions.

8. Leave the console window open.

Results: After you complete this exercise, the booking service host is opened successfully and can locate
the service certificate.

Exercise 3: Configure the ASP.NET Web API Booking Service for Secured
Communication
Task 1: Create a client authentication certificate for the ASP.NET Web API booking
service
1. On the Start screen, click Computer to open the File Explorer window.

2. Browse to D:\AllFiles\Apx02\LabFiles\Setup.
3. Double-click CreateClientCertificate.cmd. When it completes, press any key to close the command
window.

Task 2: Configure the client-side endpoint binding to use message security


1. On the Start screen, right-click the Visual Studio 2012 tile, and then click Open new window at the
bottom.

2. On the File menu, point to Open, and then click Project/Solution.

3. Type D:\AllFiles\Apx02\LabFiles\begin\BlueYonder.Server\BlueYonder.Companion.sln in the
File name box, and then click Open.
4. In Solution Explorer, expand the BlueYonder.Companion.Host project, and then double-click
Web.config.

5. Under the <configuration> root element, locate the <system.serviceModel> section group.
6. Add a <bindings> section within the <system.serviceModel> section group, with the following
configuration.

<bindings>
<netTcpBinding>
<binding>
<security mode="Message">
<message clientCredentialType="Certificate"/>
</security>
</binding>
</netTcpBinding>
</bindings>

Task 3: Configure the client-side endpoint behavior with the client's certificate
1. Add a <behaviors> section within the <system.serviceModel> section group, with the following
configuration.

<behaviors>
<endpointBehaviors>
<behavior>
<clientCredentials>
</clientCredentials>
</behavior>
</endpointBehaviors>
</behaviors>

2. Add the <serviceCertificate> element to the <clientCredentials> element, with the following
configuration.

<serviceCertificate>
<authentication revocationMode="NoCheck" />
</serviceCertificate>

3. Add the <clientCertificate> element to the <clientCredentials> element, with the following
configuration.

<clientCertificate storeLocation="LocalMachine" storeName="My"
                   x509FindType="FindBySubjectName" findValue="Client"/>

4. Locate the <client> section, and add the following <identity> configuration to the <endpoint>
element.

<identity>
<certificateReference storeLocation="LocalMachine" storeName="TrustedPeople"
x509FindType="FindBySubjectName" findValue="Server"/>
</identity>

Note: The <identity> element contains the information about the service's certificate. The
client uses this configuration to verify that it is connected to the correct service.

5. To save the file, press Ctrl+S.

6. On the Build menu, click Build Solution.


7. In the 20487B-SEA-DEV-C virtual machine, on the Start screen, click the Visual Studio 2012 tile.
8. On the File menu, point to Open, and then click Project/Solution.

9. In the File name box, type
D:\AllFiles\Apx02\LabFiles\begin\BlueYonder.Companion.Client\BlueYonder.Companion.Client.sln,
and then click Open.

10. If you are prompted by a Developers License dialog box, click I Agree. If you are prompted by a
User Account Control dialog box, click Yes. Type your email address and a password in the
Windows Security dialog box, and then click Sign in. Click Close in the Developers License dialog
box.

Note: If you do not have a valid email address, click Sign up and register for the service.
Write down these credentials and use them whenever an email account is required.

11. In Solution Explorer, right-click the BlueYonder.Companion.Client project, and then click Set as
StartUp Project.
12. To start the client app without debugging, press Ctrl+F5.

13. If you are prompted to allow the app to run in the background, click Allow.

14. Display the app bar by right-clicking or by swiping from the bottom of the screen.
15. Click Search, and in the Search box on the right side enter New. If you are prompted to allow the
app to share your location, click Allow.

16. Wait for the app to show a list of flights from Seattle to New York.
17. Click Purchase this trip.

18. In the First Name box, enter your first name.


19. In the Last Name box, enter your last name.

20. In the Passport box, type Aa1234567.



21. In the Mobile Phone box, type 555-5555555.

22. In the Home Address box, type 423 Main St.

23. In the Email Address box, enter your email address.

24. Click Purchase.

25. Go back to the 20487B-SEA-DEV-A virtual machine, to Visual Studio 2012. The code execution
breaks, and the line in breakpoint is highlighted in yellow.
26. On the Debug menu, click Quick Watch.

27. In the QuickWatch dialog box, in the Expression combo box, enter
ServiceSecurityContext.Current.PrimaryIdentity, and then click Reevaluate.

28. In the Value grid, expand ServiceSecurityContext.Current.PrimaryIdentity. Verify that the
AuthenticationType property is set to X509, and the Name property value starts with CN=Client.

29. To close the QuickWatch dialog box, click Close and then press F5 to continue.
30. Go back to the 20487B-SEA-DEV-C virtual machine, to the client app.
31. To close the confirmation message, click Close, and then close the client app.

32. Go back to the 20487B-SEA-DEV-A virtual machine, and close the service host console window to
stop debugging the service.

Results: After you complete the exercise, you will be able to start the client application and create a
reservation.
