Detecting and Solving Memory Problems in .NET
Alexey Totin
This book is for sale at http://leanpub.com/detectingandsolvingmemoryproblemsinnet
This version was published on 2016-04-04

JetBrains - World's Leading Vendor of Professional Development Tools


We help developers work faster by automating common, repetitive tasks so they can stay
focused on code design and the big picture. We provide tools that let developers explore and
familiarize themselves with code bases faster. Our products make it easy for you to take care
of quality during all stages of development and spend less time on maintenance tasks.
Our JetBrains Technical Series of books focuses on general computer science topics as well as
tooling.
Copyright © 2000-2016, JetBrains

Contents

JetBrains Technical Series
About This Book
    How the Book Is Organized
    Acknowledgements
About the Author
Why Should I Care about Memory?
    Types of Memory Issues
    How Can I Find These Issues?
    What Does Memory Profiling Look Like Exactly?
What Tools Do I Need?
Memory Leaks
    How .NET Stores Objects in Memory
    Who Retains the Object? A Common Approach of Detecting Memory Leaks
    What If a Leak Is Not Obvious? Automatic Inspections
    How to Fight
    What If There Is Nothing Suspicious in the Object's Retention Path? GC Roots
    Summary
High Memory Traffic
    How Garbage Collection Works
    Considerations on Fighting High Memory Traffic
    Things Worth Looking For
    Summary
Ineffective Memory Usage
    Who Retains Memory?
    What Methods Allocate the Most Memory?
    Things Worth Looking For
    Summary
Memory Profiling in Unit Tests
    How It Works
    When to Use dotMemory Unit
    Example 1. Checking for Specific Objects
    Example 2. Selecting Objects by a Number of Conditions
    Example 3. Checking Memory Traffic
    Example 4. Complex Scenarios for Checking Memory Traffic
    Example 5. Comparing Snapshots
Memory Profiling in Continuous Integration
    Integration with JetBrains TeamCity
Conclusion
    dotMemory and dotTrace

JetBrains Technical Series


This book is a part of the JetBrains Technical Series of books covering a wide range of topics related
to software development. For more information on JetBrains, please visit the JetBrains website.

About This Book


Strictly speaking, the content of this book is mostly a compilation of a number of blog posts related
to .NET memory management from the JetBrains ReSharper blog. We had been publishing these posts
for the last two years until we realized that it would be great to put all this material in order. The
result of that attempt is this book.
The content is based on the experience we've had here at JetBrains when developing such well-known
products as ReSharper, dotCover, dotTrace, dotMemory, and others. In other words, this book
is our vision of best practices in finding and solving memory issues in .NET applications.
We hope this book will be useful to a wide range of developers interested in application optimization.
For those who are only starting out with .NET, this book can be an especially valuable read, as
it gives short insights into how .NET works under the hood.

How the Book Is Organized


Each chapter is devoted to a specific kind of memory issue, and each subsection features a particular
problem and its solution. This means you can start reading the book from any point. Nevertheless,
if you are new to .NET and these topics, we recommend following the book chapter by chapter. Every
chapter is prefaced with some theory on .NET memory management, but not so much as to bore
you to death (at least we hope so).

Acknowledgements
The author would like to specially thank Serjic Shkredov, Maarten Balliauw, Ed Pavlov, Fedor Reznik,
Anastasia Goncharova, and Mikhail Kropotov for their help with the content, and Hadi Hariri for
the idea of the book.
https://www.jetbrains.com
http://blog.jetbrains.com/dotnet/
With the exception of the last two chapters, which come as bonus content describing ways to automate the detection of memory issues in unit tests and continuous integration.

About the Author


Alexey started as a developer of measurement and analytical instruments for the nuclear
industry. He spent almost seven years developing desktop and embedded software for various
measurement solutions. Though creating user manuals and other guides was only a small part
of his job, after some time he found himself spending more and more time on technical
communication. Little by little, explaining complex things in simple words became his real
passion. Alexey is a regular contributor to JetBrains blogs and authors various educational
materials.

Alexey Totin

Why Should I Care about Memory?


Should you, really? Some might say, "Hey, .NET is a managed environment. It's the CLR that's supposed
to care about memory, not me." Unfortunately, this is true only to a certain extent. Although the
runtime totally controls objects' lifetimes, there are still many pitfalls related to memory usage that
can slow down our application (and that would be the best-case scenario) or even make it shut down
with an exception. So, what are these issues?

Types of Memory Issues


All memory issues in .NET applications can be divided into three fundamental categories.

Memory Leaks
A classic memory leak is a situation where an object in memory cannot be accessed by the running
code. This situation is impossible in a .NET application, as the runtime tracks all unused objects
(objects that are not referenced by other objects) and removes them from memory when they are no
longer needed. This mechanism is called garbage collection (GC). Nevertheless, it cannot prevent
the case when your application constantly creates objects that are referenced by other objects through
a reference you don't know about (from the garbage collector's point of view, such objects are still
needed). So, sooner or later your application may run into an OutOfMemory exception.

Ineffective Memory Usage


When your application consumes more memory than it should, or could, we call this ineffective
memory usage. There can be millions of reasons for it, most of which are related to poor
code design. For example, creating numerous string duplicates in memory is not efficient (e.g., when
parsing a text file with repetitive content such as a log).

High Memory Traffic


Memory traffic is an inherent part of a working application, as the CLR constantly allocates and collects
memory. Garbage collection is a resource-consuming operation; sometimes the garbage collector
even has to block all threads, including the UI thread, which results in a UI freeze. So, the more
memory the garbage collector has to collect, the slower our application. That's how high memory traffic
impacts application performance.
CLR, or Common Language Runtime, is the .NET Framework component that manages the execution of .NET applications.
The garbage collector is the CLR component that performs garbage collection.
The classic high memory traffic examples that everyone likes are the ones based on the immutability
of the string type. Immutability means that each time you explicitly change string content, a new string
is created. For example, if you reverse a string using the + operator in a loop, it will create as many
strings as there are characters in the original string.

How Can I Find These Issues?


Even if a program is just a few lines of code, because of .NET's high level of abstraction it will create
numerous objects in memory. For example, the following simple console application with just two
lines of meaningful code

class Program
{
    static void Main()
    {
        var s = Console.ReadLine();
        Console.WriteLine("Text: {0}", s);
    }
}

will create almost 300 objects and take up more than 30 kilobytes of memory.
For medium- or large-sized applications, it's millions of objects and hundreds of megabytes. And
this is just a static memory dump. In dynamics, things get much, much worse. The memory traffic of a
typical application is tens of thousands of objects created and removed from memory every second.
Even if we know that our app has, say, a leak, how on earth are we supposed to find it in such a
mess? The answer is memory profiling. Memory profiling is a process of dynamic program analysis
that allows you not only to identify but also to analyze instant data on objects in memory and memory
traffic data. Tools used to perform memory profiling are called memory profilers. With their help,
you can get answers to questions such as:
- Why is this object still in memory (what is causing the memory leak)?
- What takes so much memory (which exact objects)?
- How does garbage collection affect the performance of my application?
- What method is the origin of high memory traffic?
- Are any memory allocation/distribution patterns being violated?
We'll return to this example later in the book, so it's OK if this short explanation is not 100% clear.


What Does Memory Profiling Look Like Exactly?


The profiling workflow may vary depending on the issue you want to investigate. Nevertheless, as
a rule it will include the following steps:
1. Run your application under a memory profiler. In profiling terms, this is called starting a
profiling session (a profiling session is a period of time during which the profiler performs
measurements).
2. Work with your application for some time as you normally would. If you are looking for a certain
memory issue, reproduce the issue in the app.
3. Get a memory snapshot (an instant image of the managed heap).
4. For information about memory traffic, you may need to get additional snapshots (for example,
before and after reproducing the issue). Profilers typically allow you to compare snapshots and
get memory traffic data based on that comparison. In other words, you'll be able to see which
objects were created and collected in the time interval between the snapshots.
5. Analyze the snapshot(s) to find memory issues and determine their cause(s).

What Tools Do I Need?


To use the tips given in this book, we'll need two tools:
- The JetBrains dotMemory memory profiler. We recommend using dotMemory, though it may be
possible to use other profilers to reproduce the suggested solutions.
We don't want to turn this book into a step-by-step dotMemory manual. Therefore, it does
not contain instructions on basic dotMemory usage: how to start profiling, get memory
snapshots, navigate through snapshots, and so on. If desired, you can get these details from
the official dotMemory documentation.
- The Heap Allocations Viewer plugin for JetBrains ReSharper. The plugin highlights all places
in your code where memory is allocated. While not a must, it makes coding much more
convenient and in some sense forces you to avoid excessive allocations.
http://jetbrains.com/dotmemory/
https://www.jetbrains.com/dotmemory/help
https://resharper-plugins.jetbrains.com/packages/ReSharper.HeapView
Yes, you will need JetBrains ReSharper installed in your Visual Studio. Since dotMemory is now available only as a part of ReSharper Ultimate, having dotMemory on your machine implies having ReSharper as well.

Memory Leaks

Let's start with definitions. According to Wikipedia, a memory leak is a result of incorrect memory
management where an object is stored in memory but cannot be accessed by the running code. In
addition, memory leaks add up over time, and if they are not cleaned up, the system eventually runs
out of memory.
Actually, if we strictly follow the definition above, classic memory leaks in .NET applications are
impossible. The garbage collector fully controls memory release and removes all objects that cannot
be accessed by the code. Moreover, after an application is closed, the garbage collector entirely frees
the memory occupied by the application. Nevertheless, the second point (memory exhaustion because of
a leak) is quite real. This won't crash the system, but sooner or later the application will raise an
OutOfMemory exception.
The thing is, the garbage collector collects only unreferenced objects. If there's a reference to an object
you don't know about, the garbage collector will not collect it.
To better understand what all this "reference" stuff means, let's make a little digression and talk
about how .NET stores objects in memory.

How .NET Stores Objects in Memory


First, let's list the main concepts of how memory is allocated for a .NET application.

Memory Allocation
- When a new process is started, the runtime reserves a region of address space for the process
called the managed heap.
- Objects are allocated in the heap contiguously, one after another.
- Memory allocation is a very fast process, as it is just the adding of a value to a pointer.
- In addition to the managed heap, an app always consumes some amount of unmanaged
memory, which is not managed by the garbage collector. Generally, it is required by the CLR itself,
dynamic libraries employed by the app, the graphics buffer, and so on.
If you were thinking of some stack where the runtime places blocks (new objects) one after another,
put this idea out of your mind. It is correct to some extent, but not really helpful. To find a better
analogy, let's take a look at how the allocated memory is released.

Memory Release
- The process of releasing memory is called garbage collection. It is performed by a CLR
component called the garbage collector.
- When the garbage collector performs a collection, it releases only objects that are no longer in use
by the application. (For example, a local variable in a method can be accessed only while the
method executes. Afterwards, the variable is no longer needed.)
- To determine whether an object is used or not, the garbage collector examines the application's
roots - strong references that are global to the application. Typically, these are static object
pointers, local variables, and CPU registers.
- For each active root, the garbage collector builds a graph which contains all objects that are
reachable from these roots.
- If an object is unreachable, the garbage collector considers it no longer in use and removes the
object from the heap (releases the memory occupied by the object).
- After the object is removed, the garbage collector compacts the reachable objects in memory.


Therefore, the most appropriate representation of all objects in memory is a graph of objects. Next
time you think about how objects are stored in memory, instead of a plain stack imagine a
2D plan with interconnected blocks (e.g., a street map with buildings and roads).
So, what is the best way to fight memory leaks in your applications? Based on the theory above,
to fight a memory leak you need to determine the objects that add up over time causing the leak, and
then find the objects that prevent the former ones from being collected (that is, hold a reference to them).
Let's take a look at a more elaborate workflow.

Who Retains the Object? A Common Approach of Detecting Memory Leaks

Suppose you suspect a memory leak in your application, but you know nothing of its origins.
What should you do? Perform the following profiling steps:
1. Run a profiling session.
2. At some point while working with the application, take a memory snapshot.
3. Work with the application for some time so that the leak might reveal itself more obviously,
or reproduce the actions that, in your opinion, may lead to the leak.
4. Take one more memory snapshot.
5. By using specific dotMemory views:
   - Compare snapshots to find all objects that were not collected within your profiling time
   interval. Using dotMemory grouping views, determine the objects that should not be in
   memory at this execution point.
   - Using views that show object retention paths, determine what prevents these objects
   from being collected.


What If a Leak Is Not Obvious? Automatic Inspections

As you may have noticed, the suggested workflow is applicable only in two cases:
- The leak is very obvious: the application noticeably consumes a growing amount of memory
in a rather short time interval.
- You suspect a leak in some specific type and intentionally check snapshots for it.
But what if a leak doesn't reveal itself in an obvious way? Such a leak may, for example, become a
real problem for server-side applications that must work for weeks or even months without restarting.
Fortunately, most leaks (but not all) are a result of common developers' mistakes (for example,
an object is subscribed to an event of another object but never unsubscribed from it). This
allows memory profilers to automatically search for such patterns and detect the most common types
of memory leaks.
In dotMemory, this is implemented in the form of automatic inspections. You see the list of leaks (if
there are any) on the overview page right after you open a snapshot. Let's take a look at how you
can use dotMemory inspections to fight the most common types of memory leaks.

How to Fight

Binding Leak

There are a number of memory leaks related to WPF data binding patterns. These patterns, if not
followed correctly, can cause a memory leak. Consider the following example:

class Person
{
    public Person(string name)
    {
        Name = name;
    }

    public string Name { get; set; }
}

When we bind to an instance's Name property, the binding target starts listening for property
change notifications. If the property is not a DependencyProperty and the object does not implement the
INotifyPropertyChanged interface, WPF resorts to subscribing to the ValueChanged event of
the System.ComponentModel.PropertyDescriptor class to get notifications when the source object's
property value changes.
Why is this a problem? Since the runtime creates a reference to this PropertyDescriptor,
which in turn references our source object, and the runtime will never know when to deallocate
that initial reference (unless explicitly told), both the PropertyDescriptor and the source object
will remain in memory.
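To make this concrete, here is a minimal sketch of a binding that triggers the behavior described above; the textBlock element and the person variable are hypothetical names used for illustration:

// Person implements neither INotifyPropertyChanged nor exposes Name as a
// DependencyProperty, so WPF falls back to PropertyDescriptor.ValueChanged.
var person = new Person("John");
var binding = new Binding("Name") { Source = person };
textBlock.SetBinding(TextBlock.TextProperty, binding);
// Even after textBlock is removed from the UI, person stays in memory.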
Detecting

dotMemory has an automatic inspection for this issue. Suppose we have some control that binds to
our class and is then disposed of. After we profile our application and open the snapshot, the snapshot
overview page will immediately warn us about WPF binding leaks.

This should be all we need to know, but let's see if we can find proof of the theory above (about the
PropertyDescriptor's ValueChanged event handler keeping our objects in memory). After double-clicking
the list entry, we can see the object set open. When we navigate to the Group by Similar
Retention view, we see the proof: it is ValueChangedEventManager that is retaining our object.
This view groups objects by similarity of their retention paths. For each object set, the view shows the two shortest paths to roots. For more details, see jetbrains.com/dotmemory/help/Similar_Retention.html


Solving

The simplest fix for a WPF binding leak would be making our Name property a DependencyProperty,
or implementing the INotifyPropertyChanged interface correctly in our Person class and its Name
property. For example:

class Person : INotifyPropertyChanged
{
    private string _name;

    public Person(string name)
    {
        Name = name;
    }

    public string Name
    {
        get { return _name; }
        set
        {
            _name = value;
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs("Name"));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
}

If the object is of a type we cannot edit (say, it comes from a library we depend on), we can also explicitly clear the binding by calling BindingOperations.ClearBinding(textBox, TextBlock.TextProperty).
Note that if a binding uses the OneTime mode, this leak won't occur, as the binding is done only once
and the binding target won't listen for changes in the source object.
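For instance, a sketch of a OneTime binding created in code (again, textBlock and person are hypothetical names):

// BindingMode.OneTime: the value is transferred once and no change
// notifications are subscribed to, so no PropertyDescriptor leak occurs.
var binding = new Binding("Name") { Source = person, Mode = BindingMode.OneTime };
textBlock.SetBinding(TextBlock.TextProperty, binding);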

Collection Binding Leak

The next, very similar issue from the WPF binding leak list is the collection binding leak. If there
is a binding to a collection that does not implement the INotifyCollectionChanged interface, WPF
creates a strong reference to this collection. As a result, it stays in memory for the application's
entire lifetime.

Detecting

For example, some control, say a ListBox, is bound to a collection. The binding is removed
somewhere in the application by destroying the control. Let's assume we've taken a snapshot
using dotMemory right after this moment. We expect our collection to be removed from memory.
Nevertheless, if we look at the snapshot overview, we will see that a WPF collection binding leak
takes place.

If we now open this set of objects and look at the Group by Dominators view, we will see that
our collection is held in memory by the WPF DataBindEngine, an object that will be around for
the lifetime of our application. So, as long as our object's dominator stays in memory, the collection
stays as well.
This view allows you to answer the question, "Who exclusively retains the object?" We will discuss the concept of dominators in the later chapter titled Ineffective Memory Usage.


Solving

An easy way to fix the issue is to implement the INotifyCollectionChanged interface in our custom
collection type. If the collection does not need any specific implementation, we can also inherit
from the ObservableCollection type, as it handles the implementation for us.

public class MyBigCollection : ObservableCollection<int>
{
}

x:Name Leak

The WPF technology was a huge step forward for the .NET Framework that made working with user
interfaces much easier than before. Unfortunately, like any other technology, it has some pitfalls. For
example, such a common and easy operation as removing a UI control can cause a memory leak.
The thing is that WPF creates a strong global reference to any UI element that is declared in XAML
if it uses the x:Name directive.

<acmecompany:PersonEditorControl Grid.Row="0" x:Name="personEditor"/>

Removing an element from code will not remove the control from memory, not even if we remove
it from the parent control's Children collection. This can be a real problem for an application that
dynamically creates and removes numerous UI elements (e.g., points on some real-time diagram).

private void DeleteData_OnClick(object sender, RoutedEventArgs e)
{
    if (personEditor != null)
    {
        _grid.Children.Remove(personEditor);
        personEditor = null;
    }
}

Detecting

To detect the issue, we should take a snapshot in dotMemory right after the suspicious control is
removed. The leaked control will be shown on the snapshot overview in the corresponding inspection
section.

If we need more details, we can drill down and use the Key Retention Paths view to see how WPF
retains the object in memory.

Solving

To ensure the control gets removed from memory, we have to call the UnregisterName method
of the parent control. The updated code that removes the control could look like this:

private void DeleteData_OnClick(object sender, RoutedEventArgs e)
{
    if (personEditor != null)
    {
        this.UnregisterName("personEditor");
        _grid.Children.Remove(personEditor);
        personEditor = null;
    }
}

Event Handler Leak

This is the most classic type of leak, inherent to many modern managed frameworks. It is caused
by an oversight on the developer's part. For example, imagine we open an AdWindow window in our
application and let it update its contents every few seconds. We could instantiate a DispatcherTimer
in our constructor and subscribe to the Tick event to handle these updates.

public AdWindow()
{
    adTimer = new DispatcherTimer();
    adTimer.Interval = TimeSpan.FromSeconds(3);
    adTimer.Tick += ChangeAds;
    adTimer.Start();
}

Now what happens if we close this AdWindow? That depends. If we do nothing, the DispatcherTimer
will keep on firing Tick events, and since we're still subscribed to it, the ChangeAds event handler
will be called. Since this event handler has to remain in memory for it to be called, our AdWindow will
stay in memory too, even if we expect it to be released.

Detecting

There are a number of ways to detect this type of leak. The easiest is to capture a snapshot after the
object was expected to be released. On the snapshot overview page, we will immediately see if the
object stays in memory because of an event handler leak.

See our AdWindow there? Now we should find out who holds it in memory. If we double-click the entry,
we will see the details of the instance. The Key Retention Paths view will show us how the object
is retained: by the DispatcherTimer instance.


If we are familiar with the source code, we know where to look. But what if we're seeing the source
for the first time ever? How do we know where the subscription to this event handler takes place?
All we need to do is double-click the EventHandler entry (here, in the Key Retention Paths
diagram). This will open the specific event handler instance. The Creation Stack Trace view built
for this instance will show us that we subscribe to the event handler in the AdWindow constructor.

The Shortest Paths to Roots | Tree view will tell us exactly which event we're subscribing to.

Solving

From the investigation above, we know which event and which event handler we've forgotten to
unsubscribe from (the DispatcherTimer's Tick event), and where we subscribe to it in the first place
(the AdWindow constructor).
Unsubscribing from the event in the constructor is pointless in this case, as it would render our
functionality of rotating content every few seconds useless. A more logical place to unsubscribe is
when closing the AdWindow:

protected override void OnClosed(EventArgs e)
{
    adTimer.Tick -= ChangeAds;
    base.OnClosed(e);
}

The DispatcherTimer example here is a special case, as the above still does not ensure our
AdWindow is released from memory. If we profile the application, we will see that
the AdWindow instance is still there. The Key Retention Paths view will help us discover
that we have to set the private variable adTimer to null as well, in order to remove another
reference from the .NET runtime's DispatcherTimers collection. Or, how one memory leak
can hide another.
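A sketch of what the final version of OnClosed could look like, under the assumptions described in the note above:

protected override void OnClosed(EventArgs e)
{
    adTimer.Tick -= ChangeAds;
    adTimer.Stop();  // assumption: also stop the timer so it no longer fires
    adTimer = null;  // drop the reference found in the Key Retention Paths view
    base.OnClosed(e);
}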

What If There Is Nothing Suspicious in the Object's Retention Path? GC Roots

As we've already discussed in the section Who Retains the Object? A Common Approach of Detecting
Memory Leaks, to find a memory leak means to find out who prevents the object from being
collected. But what if you see nothing criminal in the object's retention path? In such a case, it's
worth looking at the GC roots in more detail. GC roots are one of those things that many
developers have heard of, but only a few know what they are exactly. Let's try to shed more light on
this topic.
The formal definition reads, "Roots identify storage locations which refer to objects on the managed
heap." Too vague, right? A more human-friendly, though simplified, definition might be: from the
perspective of the garbage collector, an application's roots are the entry-point references to the object graph
that definitely must not and will not be collected. This makes them the ideal (and the only possible)
starting point for building retention graphs. Let's look at the possible roots:
- Stack references: references to local objects. Such roots live while a method executes.
- Static references: references to static objects. These live for the entire lifetime of the app
domain.
- Handles: typically, these are references used for communication between managed and
unmanaged code. They must live at least as long as the unmanaged code needs the managed objects.
- Finalizer references: references to objects waiting to be finalized. These live until the
finalizer is run.
Knowing the reference type allows you to identify how the object is retained, or at least gives you
clues for further analysis.
Now let's take a more detailed look at the root types and how they are distinguished by the
dotMemory profiler.


Regular Local Variable

This is a local variable declared in a method (a variable on the stack). A reference to this variable
becomes a root for the duration of the method's lifetime. For example:

static void Main()
{
    ...
    var collection = new Collection<int>();
    ...
}

Here's how it looks in dotMemory:

Note that in release builds, a root's lifetime may be shorter: the JIT can discard the variable right after
it is no longer needed.

Static Reference

When the CLR meets a static object (class member, variable, or event), it creates a global instance of this
object. The object can be accessed during the entire app lifetime, so static objects are almost never
collected. Thus, references to static objects are one of the main root types.

class StaticClass
{
    public static Collection<string> StCollection;
}

After the collection is initialized, the CLR will create a static instance of the collection. The reference to
the instance will exist for the lifetime of the application domain.

When the static object is referenced through a field, dotMemory shows you the field's name. Of
course, unnamed static references can also take place. One obvious example of such a root is a
reference to a string declared in a method.

static void Main()
{
    ...
    string A = "This is a string";
    ...
}

Note that in the example above, the CLR also creates the Regular local variable reference. Nevertheless,
to simplify further analysis, dotMemory doesn't show you this root.

Pinning Handle

One additional problem for the garbage collector is the interaction between managed and unmanaged
code. For example, suppose you need to pass an object from the managed heap to, say, an external API library.
As the small object heap is compacted during collection, the object can be moved. This is an issue for
the unmanaged code if it relies on the exact object location. One solution is to fix the object in the
heap. In this case, the garbage collector gets a pinning handle to the object, which implies that the object
cannot be moved.
Considering the above, if you see the Pinning handle type, the object is probably retained by some
unmanaged code.
For example, the App object always has a pinning reference:

You can also pin objects intentionally using the fixed block.
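For illustration, a minimal sketch of intentional pinning with the fixed statement (this requires an unsafe context and compiling with /unsafe):

byte[] buffer = new byte[256];
unsafe
{
    fixed (byte* p = buffer)
    {
        // While inside this block, buffer is pinned and cannot be moved
        // by the GC, so the pointer could be handed to unmanaged code.
        p[0] = 42;
    }
}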

RefCounted Handle

This root prevents garbage collection while the object's reference count is greater than zero.

If an object is passed to a COM library using COM interop, the CLR creates a RefCounted handle to this
object. This root is needed because COM is unable to perform garbage collection; instead, it uses reference
counting. When the object is no longer needed, COM sets the count to 0. This means that the RefCounted
handle is no longer a root and the object can be collected.

Thus, if you see a RefCounted handle, the object has probably been passed as an argument to
unmanaged code.

Weak Handle

As opposed to other roots, a Weak handle does not prevent the referenced object from garbage
collection. Thus, the object can be collected at any time but can still be accessed by the application.
Access to such objects is performed via an intermediate object of the WeakReference class. Such an
approach can be efficient when working with temporary data structures like caches.
As weak references (typically) do not survive full garbage collection, you will mostly see weak references
in combination with other handles, for example, Weak, RefCounted handle.
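A minimal sketch of the WeakReference pattern mentioned above: because the target may be collected at any time, the code must always check for null before using it.

var weakCache = new WeakReference(new byte[1024 * 1024]);
// ... some time later ...
var data = weakCache.Target as byte[];
if (data != null)
{
    // The object survived and can be used.
}
else
{
    // The object was collected; recreate it.
}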

Regular Handle

When the handle type is undefined, dotMemory marks it as a Regular handle. Typically, these are
references to system objects required during the entire lifetime of the application. The
OutOfMemoryException object is a prime example: to prevent its collection, the environment references
the object through a regular handle.

Summary

- Though classic memory leaks are impossible in .NET, uncontrolled memory consumption
that ends up with an OutOfMemory exception is a definite possibility.
- To find a memory leak, answer two questions:
  - What objects cause the leak?
  - Who prevents these objects from being collected?
- If you don't know where to start, start with automatic memory inspections. They automatically
check your application for the most common types of memory leaks.

High Memory Traffic

As usual, let's begin with definitions.

An application's memory traffic is the amount of memory that is allocated and collected by the CLR
during some time interval.
Why is high memory traffic bad? Just as high road traffic slows down cars and leads to longer
trip times, high memory traffic slows down application execution and leads to UI freezes. This is
because garbage collection is a resource-consuming process. The more collections the garbage collector
has to make, the larger the CPU overhead and the poorer the application performance. Thus, memory
traffic is one of the first candidates to check when facing poor app performance.
What are the possible origins of high memory traffic? As with ineffective memory usage, they
typically have to do with bad code design. Fortunately, there is an upside: here we can pick out the
most common developers' mistakes and provide specific tips on how to fix them. A simple example:
if you see objects of a value type in the heap, then surely boxing is to blame. Boxing always implies
additional memory allocation, so removing it is very likely to make your app better.
If you want more examples, proceed right to the section titled Things Worth Looking For. If you
need more details on the issue and want to learn some .NET theory, just continue reading.

How Garbage Collection Works

So, why is garbage collection slow? Let's recall: to remove unused objects from memory, the garbage
collector has to determine whether a particular object is no longer needed (has no references from
any other objects). To do this, the garbage collector builds an object retention graph starting from the
application roots. As there are millions of objects in a typical application, building such a graph may
really take a while. Of course, .NET CLR developers have tried to lower the GC overhead with a number of
tricks: they organized the managed heap into generations, created another heap for large objects
(the Large Object Heap), and moved garbage collection onto a separate thread (background garbage
collection). Let's talk about these optimizations in more detail so you can better understand what
is going on when your application releases memory.

Generations

The first performance trick .NET runtime developers implemented was dividing the managed heap
into segments called generations: 0, 1, and 2. Why is this a trick? Because garbage collection is
also divided into separate steps, and performing each of those independently reduces the overall
performance impact. Here's how it works:
- Objects that are smaller than 85KB are allocated on the so-called Small Object Heap (SOH).
- When objects are just created, they are placed in the Generation 0 (Gen 0) segment of the SOH.
- When Gen 0 is full (the size of the heap and generations is defined by the GC), the GC performs a
garbage collection. During the collection, the GC removes all unreachable objects from the heap.
All reachable objects are promoted to Generation 1 (Gen 1).
- The garbage collection of Gen 0 is a rather cheap operation from the performance perspective.
- When Gen 1 is full, a Gen 1 garbage collection is performed. All objects that survive the
collection are promoted to Gen 2. A Gen 0 collection also takes place here.
- When Gen 2 is full, the GC performs a full garbage collection: first Gen 2 is collected,
then Gen 1 and Gen 0. If at this point there is still not enough memory for new
allocations, the GC raises the OutOfMemory exception.
- During a full garbage collection, the GC has to pass through all objects in the heap, so this process
may have a great impact on system resources.
This is by no means a full list, but it covers the main aspects of GC you should definitely know about.
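You can observe generation promotion from code; a minimal sketch using GC.GetGeneration (the printed values assume the object survives the forced collections, which it does here because it stays referenced):

var obj = new object();
Console.WriteLine(GC.GetGeneration(obj)); // 0: freshly allocated
GC.Collect();
Console.WriteLine(GC.GetGeneration(obj)); // 1: survived one collection
GC.Collect();
Console.WriteLine(GC.GetGeneration(obj)); // 2: survived two collections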

This means the worst-case scenario is high Gen 2 traffic, when your application allocates a lot of
objects that become no longer needed right after being promoted to Gen 2. This reduces the free space
in Gen 2 and means that heavy Gen 2 collections will occur more frequently.

Large Object Heap

The next performance trick applied in .NET comes from the fact that the garbage collector must not
only remove unused objects but also compact the managed heap. The compaction is done via simple
copying, which imposes additional performance penalties. Research has shown that these penalties
outweigh the benefits of heap compaction when the copied objects are larger than 85KB. For this reason, all
such objects are placed in a separate segment of the managed heap called the Large Object Heap (LOH).
Surviving objects in the LOH are not compacted, which means that the LOH becomes fragmented over time.
Starting from .NET Framework 4.5.1, you can force the GC to compact the LOH during full garbage
collection by using the GCSettings.LargeObjectHeapCompactionMode property.
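A minimal sketch of using this property (assuming .NET Framework 4.5.1 or later):

// Request a one-time LOH compaction; it happens on the next full blocking GC.
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();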


Background Garbage Collection

- For desktop applications, the .NET Framework offers a so-called workstation GC mode.
- There are two types of workstation garbage collection: foreground and background.
- To perform a foreground GC, the garbage collector suspends all managed threads except the one
that triggered the collection. This suspends the main thread as well and leads to a UI freeze.
This situation is typically called a blocking garbage collection.
- Background GC is performed by a separate GC thread and does not suspend managed threads
during the heaviest Gen 2 collections. Nevertheless, during Gen 0 and Gen 1 collections,
managed threads have to be suspended. Thus, background GC still involves short blocking
GC intervals. By default, background GC is turned on.
- For server applications, there is a special server GC mode. The main difference compared
to workstation GC is that in server GC mode each logical processor has its own managed
heap and a separate GC thread.
Consider the following short example for better understanding. Our application has two threads:
UI Thread and User Thread. At some point, there were too many allocations on the UI
thread. To free up some memory, UI Thread triggers blocking Gen 0 and Gen 1 garbage collections
(the A1 interval on the diagram below). If there is still not enough memory, a garbage collection thread
is created. It performs a full garbage collection, which includes blocking Gen 0 and Gen 1 collections
(C3) and a Gen 2 collection (B3 and D3) that does not block other threads. The user interface freezes
during the blocking garbage collections (A1 and C1).
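If you are not sure which GC mode your application actually runs under, you can check it at runtime; a minimal sketch using the System.Runtime.GCSettings class:

Console.WriteLine("Server GC: {0}", GCSettings.IsServerGC);
// Interactive typically corresponds to background/concurrent GC being enabled.
Console.WriteLine("Latency mode: {0}", GCSettings.LatencyMode);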


Considerations on Fighting High Memory Traffic

When investigating high memory traffic, you may need to use not only memory profiling, but
performance profiling as well. While you may be able to find the cause of high traffic with just a memory
profiler, performance profiling can significantly ease this process.
In light of this, fighting high memory traffic can be divided into two subtasks.

1. (Optional) Determine high memory traffic as the cause of performance flaws

This is where you will need a performance profiler, such as JetBrains dotTrace. Why dotTrace?
Because it has a special timeline profiling mode. Unlike classic performance profiling, during
timeline profiling dotTrace collects temporal call stack and thread state data. You get the same data
about call times, but bound to a timeline. This way, you can analyze not only typical issues like
"what is the slowest method?", but also ones where the order of events matters, such as UI
freezes, excessive garbage collections, and so on. You can learn more about timeline profiling here.
After you profile your application in the timeline mode, you'll be able to see all these events on a
timeline:
https://www.jetbrains.com/profiler/
https://confluence.jetbrains.com/display/NETCOM/Getting+Started+with+Timeline+Profiling


dotTrace can also show you the number of blocking garbage collections in a specific time interval
(high blocking GC values clearly identify high memory traffic as the main cause of performance
issues).

Switching the analysis subject from time (in ms) to memory allocation (in MB) will allow you to see
which threads and methods allocate the most memory.


Thus, using dotTrace and its timeline profiling mode, you are able to identify blocking garbage
collection as a cause of performance issues, as well as the threads and even methods that allocate the
most memory. But this doesn't give an exact answer as to what is wrong with your code. This is where
memory profiling comes to the rescue.

2. Determine the exact objects and methods responsible for high memory traffic

While timeline profiling allows you to establish the fact of high memory traffic and whether it causes
any performance penalties, memory profiling allows you to localize the problem. More specifically,
it allows you to find the exact objects that are allocated and collected in a specific time interval, and
which methods are behind this traffic. This data is typically enough to understand what is wrong
with your code.
The profiling workflow in dotMemory is quite easy:
1. Start profiling your application with memory traffic collection enabled.
2. Collect a memory snapshot after the method or functionality you're interested in finishes
working.
3. Open the snapshot and select the Memory Traffic view.
4. Analyze the data and determine the cause of the high traffic.


As we already mentioned in the beginning, the typical cause of high memory traffic is poor code
design. In the next section, Things Worth Looking For, well provide our vision on the issue: what
we consider poor code design, how to find its traces in memory, and, of course, what we consider
best practices.

High Memory Traffic

33

Things Worth Looking For

Boxing

Boxing is the conversion of a value type to the object type. For example:

int i = 5;
object o = i; // boxing takes place

Why is this a problem? Value types are stored on the stack, while reference types (object) are stored
in the managed heap. Therefore, to assign an integer value to an object, the CLR has to take the value
from the stack and copy it to the heap. Of course, this movement impacts app performance.
Detecting

With dotMemory, finding boxing is an elementary task:
1. Open a memory snapshot and select the Memory Traffic view.
2. Find objects of a value type. All these objects are the result of boxing.
3. Identify the methods that allocate these objects and generate a major portion of the traffic.


The Heap Allocations Viewer plugin also highlights allocations made because of boxing.

The main concern here is that the plugin shows you only the fact of a boxing allocation. But from
the performance perspective, you're more interested in how frequently this boxing takes place. For example,
if the code with a boxing allocation is called only once, then optimizing it won't help much. Taking this
into account, dotMemory is much more reliable in detecting whether boxing causes real problems.

Solving

First of all: before fixing the boxing issue, make sure it really is an issue, i.e. that it does generate
significant traffic. If it does, your task is clear-cut: rewrite your code to eliminate boxing. When you
introduce some struct type, make sure that the methods that work with this struct don't convert it
to a reference type anywhere in the code. For example, one common mistake is passing variables of
value types to methods that work with strings (e.g. String.Format):

int i = 5;
String.Format("i = {0}", i);

A simple fix is to call the ToString() method of the appropriate value type:

int i = 5;
String.Format("i = {0}", i.ToString());

Resizing Collections

Dynamically-sized collections such as Dictionary, List, HashSet, and StringBuilder have the
following specifics: when the collection size exceeds its current bounds, .NET resizes the collection
and copies the entire contents to a new location in memory. Obviously, if this happens frequently,
your app's performance will suffer.

Detecting

The insides of dynamic collections can be seen in the managed heap as arrays of a value type (e.g.
Int32 in the case of Dictionary) or of the String type (in the case of List<string>). The best way to find
resized collections is to use dotMemory. For example, to find out whether Dictionary or HashSet
objects in your app are resized too often:

1. Open a memory snapshot and select the Memory Traffic view.
2. Find arrays of the System.Int32 type.
3. Find the Dictionary<>.Resize and HashSet<>.SetCapacity methods and check the traffic
they generate.

The workflow for List collections is similar. The only difference is that you should check the
System.String arrays and the List<>.SetCapacity method that creates them.

In the case of StringBuilder, look for System.Char arrays created by the StringBuilder.ExpandByABlock
method.

Solving

If the traffic caused by the resize methods is significant, the only solution is to reduce the number
of cases where a resize is needed. Try to predict the required size and initialize a collection with
this size or larger:

List<string> list = new List<string>(1000);

In addition, keep in mind that any allocation greater than or equal to 85,000 bytes goes on the
Large Object Heap. Allocating memory in the LOH has some performance penalties: as the LOH is not
compacted, some additional interaction between the CLR and the free list is required at the time of
allocation. Nevertheless, in some cases allocating objects in the LOH makes sense, for example, in the
case of large collections that must live for the entire lifetime of an application (e.g. a cache).
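The same pre-sizing idea applies to the other collection types mentioned above; a small sketch (the capacities here are assumptions and should be based on the expected workload):

var dict = new Dictionary<string, int>(10000); // avoids repeated Dictionary<>.Resize calls
var sb = new StringBuilder(4096);              // avoids StringBuilder.ExpandByABlock calls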

Enumerating Collections

When working with dynamic collections, pay attention to the way you enumerate them. A typical
major headache here is enumerating a collection using foreach while knowing only that it implements the
IEnumerable interface. Consider the following example:

class EnumerableTest
{
    private void Foo(IEnumerable<string> sList)
    {
        foreach (var s in sList)
        {
        }
    }

    public void Goo()
    {
        var list = new List<string>();
        for (int i = 0; i < 1000; i++)
            Foo(list);
    }
}

The list in the Foo method is cast to the IEnumerable interface, which implies boxing of the
enumerator.

Detecting

As with any other boxing, the described behavior can be easily seen in dotMemory:
1. Open a memory snapshot and select the Memory Traffic view.
2. Find the System.Collections.Generic.List+Enumerator value type and check the generated
traffic.
3. Find the methods that originate these objects.

As you can see, a new enumerator was created each time we called the Foo method.
The same behavior applies to arrays as well. The only difference is that you should check the traffic for
the SZArrayHelper+SZGenericArrayEnumerator<> class.

The Heap Allocations Viewer plugin will also warn you about hidden allocations:

Solving

Avoid casting a collection to an interface. In our example above, the best solution would be to create
a Foo method overload that accepts the List<string> collection:

private void Foo(List<string> sList)
{
    foreach (var s in sList)
    {
    }
}

If we profile the code after the fix, we'll see that the Foo method doesn't create enumerators anymore.

Changing String Contents

String is an immutable type, meaning that the contents of a string object cannot be changed. When
you modify string contents, a new string object is created. This fact is the main source of performance
issues caused by strings. The more you change string contents, the more memory is allocated. This,
in turn, triggers garbage collections that impact app performance. The straightforward remedy is to
optimize your code so as to minimize the creation of new string objects.

Detecting

Check all string instances that are created not by your code, but by the methods of the String class.
The most obvious example is the String.Concat method, which creates a new string each time you
combine strings with the + operator.
To do this in dotMemory:
1. In the Memory Traffic view, locate and select the System.String class.
2. Find all methods of the String class that create the selected strings.
Consider this example of a function that reverses strings:

internal class StringReverser
{
    public string Reverse(string line)
    {
        char[] charArray = line.ToCharArray();
        string stringResult = null;
        for (int i = charArray.Length; i > 0; i--)
            stringResult += charArray[i - 1];
        return stringResult;
    }
}

An app that uses this function to reverse a 1000-character line generates enormous memory traffic
(more than 5 MB of allocated and collected memory). A memory snapshot taken with dotMemory
reveals that most of the traffic (4 MB of allocations) comes from the String.Concat method, which,
in turn, is called by the Reverse method.

The Heap Allocations Viewer plugin will also warn you about allocations by highlighting the
corresponding line of code:

Solving

In most cases, the fix is to use the StringBuilder class or to handle the string as an array of chars using
specific array methods. For the reverse-string example, the code could be as follows:

public string Reverse(string line)
{
    var sb = new StringBuilder(line.Length);
    for (int i = line.Length; i > 0; i--)
        sb.Append(line[i - 1]);
    return sb.ToString();
}

dotMemory shows that after the fix, traffic dropped by over 99%:

Improving Logging

When seeking ways to optimize your project, take a look at the logging subsystem. In complex
applications, for the sake of stability and supportability, almost all actions are logged. This
results in significant memory traffic from the logging subsystem. That's why it is important to
minimize allocations when writing messages to a log. There are multiple ways to improve logging.

Actually, the optimization approaches shown in this section are universal. The logging
subsystem was taken as an example because it works with strings most intensively.

Empty Arrays Allocation

A typical LogMessage method looks as follows:

void LogMessage(string message, params object[] args) {...}

What are the pitfalls of such an implementation? The main concern is how you call this method.
For example, the call

LogMessage("message");

will cause an empty array to be allocated. In other words, this line is equivalent to

LogMessage("message", new object[] { });

Detecting

The easiest way to detect the allocation of empty Object arrays is to use the Heap Allocations Viewer
plugin:

Finding such arrays in dotMemory is also possible, though more cumbersome:
1. In the Plain List view, find the objects of the System.Object[] type. Open them as a separate
object set.
2. Open the Instances view for this set to see all object instances in the set.
3. Sort the list of instances by size in ascending order. All empty arrays Object[0] will be at the
top of the list.

Solving

The best solution is to create a number of method overloads with explicitly specified arguments. For
instance:

void LogMessage(string message, params object[] args) {...}
void LogMessage(string message) {...}
void LogMessage(string message, object arg0) {...}
void LogMessage(string message, object arg0, object arg1) {...}

Hidden Boxing

The implementation above has a small drawback. What if you pass a value type to the following
method?

void LogMessage(string message, object arg0);

For example:

LogMessage("message", 123);

As the method accepts only the object argument, which is a reference type, boxing will take place.

Detecting

As with any other boxing, the main clue is a value type on the heap. So, all you need to do is look
at the memory traffic and find a value type. In our case, this will look as follows:

Of course, the Heap Allocations Viewer will also warn you:

Solving

The easiest fix is to use generics, a mechanism for deferring type specification until it is declared
by client code. The revised version of the LogMessage method should look as follows:

void LogMessage<T>(string message, T arg0) {...}
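With this overload, a call that passes a value type no longer allocates:

LogMessage("message", 123); // T is inferred as int; no boxing takes place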

Early String Allocation

Although we're stating the obvious here: variable allocation should be deferred as much as possible.
Consider the code below. Here the logmsg string is created regardless of whether logging is turned
on or off:

public void LogMessageConsole(string message, object arg)
{
    var logmsg = String.Format("{0}: {1} / {2}", DateTime.Now.ToString(), message, arg.ToString());
    if (logEnabled)
        Console.WriteLine(logmsg);
}

A better solution would be:

public void LogMessageConsole(string message, object arg)
{
    if (logEnabled)
    {
        var logmsg = String.Format("{0}: {1} / {2}", DateTime.Now.ToString(), message, arg.ToString());
        Console.WriteLine(logmsg);
    }
}

Excessive Logging

If you use logging for debugging purposes, make sure log calls never reach the release build. You
can do this by using the [Conditional] attribute.
In the example below, the LogMessage method will be called only if the DEBUG symbol is explicitly
defined:

#define DEBUG
...
[Conditional("DEBUG")]
public void LogMessage(string message, object arg) {...}

Lambda Expressions

Lambda expressions are a very powerful .NET feature that can significantly simplify your code
in certain situations. Unfortunately, convenience has its price. If used wrongly, lambdas can
significantly impact app performance. Let's look at what exactly can go wrong.
The trick is in how lambdas work. To implement a lambda (which is a sort of local function), the
compiler has to create a delegate. Each time the lambda is called, a delegate is needed as well. This
means that if the lambda stays on a hot path (is called frequently), it can generate huge memory
traffic.
Is there anything we can do? Fortunately, .NET Framework developers have already thought about
this and implemented a caching mechanism for delegates. For better understanding, consider the
example below:

class LambdaTest
{
    void Foo(Func<string, string> goo)
    {
    }

    public void Hoo()
    {
        Foo((x) => x);
    }
}

Now look at this code decompiled in dotPeek:

http://www.jetbrains.com/decompiler/

 1  internal class LambdaTest
 2  {
 3      [CompilerGenerated]
 4      private static Func<string, string> CS$<>9__CachedAnonymousMethodDelegate1;
 5      public LambdaTest()
 6      {
 7          base..ctor();
 8      }
 9      private void Foo(Func<string, string> goo)
10      {
11      }
12      private void Hoo()
13      {
14          LambdaTest lambdaTest = this;
15          if (LambdaTest.CS$<>9__CachedAnonymousMethodDelegate1 == null)
16          {
17              // ISSUE: method pointer
18              LambdaTest.CS$<>9__CachedAnonymousMethodDelegate1 =
19                  new Func<string, string>((object) null, __methodptr(<Hoo>b__0));
20          }
21          Func<string, string> goo = LambdaTest.CS$<>9__CachedAnonymousMethodDelegate1;
22          lambdaTest.Foo(goo);
23      }
24      [CompilerGenerated]
25      private static string <Hoo>b__0(string news)
26      {
27          return news;
28      }
29  }

As you can see, the delegate is stored in a static field (CS$<>9__CachedAnonymousMethodDelegate1) and created only once.
So, what pitfalls should we watch out for? At first glance, this behavior won't generate any traffic.
That's true unless your lambda contains a closure. If you pass any context (this, an instance member,
or a local variable) to a lambda, caching won't work. That makes sense: the context may change
at any time, and that's what closures are made for - passing context.
Let's look at a more elaborate example. Suppose your app uses some Substring method to get


substrings from strings:


private string Substring(string x)
{
    var ret = x.Substring(1);
    return ret;
}

Suppose this code is called frequently and strings on input are often the same. To optimize the
algorithm, you can create a cache that stores results:
private Dictionary<string, string> myCache = new Dictionary<string, string>();

Your algorithm should check whether the substring is already in the cache:
private string GetOrCreate(string key, Func<string> evaluator)
{
    string ret;
    if (myCache.TryGetValue(key, out ret))
        return ret;
    ret = evaluator();
    myCache[key] = ret;
    return ret;
}

The Substring method now looks as follows:


public string Substring(string x)
{
    var ret = GetOrCreate(x, () => x.Substring(1));
    return ret;
}

As you pass the parameter x to the lambda, the compiler is unable to cache the created delegate.
Let's look at the decompiled code:


private string Substring(string x)
{
    LambdaCacheTest.<>c__DisplayClass1 cDisplayClass1 = new LambdaCacheTest.<>c__DisplayClass1();
    cDisplayClass1.x = x;
    // ISSUE: method pointer
    return this.GetOrCreate(cDisplayClass1.x, new Func<string>((object) cDisplayClass1, __methodptr(<Substring>b__0)));
}

[CompilerGenerated]
private sealed class <>c__DisplayClass1
{
    public string x;

    public <>c__DisplayClass1()
    {
        base..ctor();
    }

    public string <Substring>b__0()
    {
        return this.x.Substring(1);
    }
}

There it is. A new instance of <>c__DisplayClass1 is created each time the Substring method
is called. The parameter x we pass to the lambda is implemented as a public field of <>c__DisplayClass1.
Detecting

As with any other example in this series, first of all, make sure that a certain lambda is in fact causing
performance issues, i.e. generating huge traffic. This can be easily checked in dotMemory.

1. Open a memory snapshot and select the Memory Traffic view.
2. Find delegates that generate significant traffic. Objects of ...+c__DisplayClassN are also a hint.
3. Identify the methods responsible for this traffic.
For instance, if the Substring method from the example above is run 10,000 times, the Memory
Traffic view will look as follows:


As you can see, the app has allocated and collected 10,000 delegates.
When working with lambdas, the Heap Allocation Viewer also helps a lot as it can proactively detect
delegate allocation. In our case, the plugin's warning will look like this:

But once again, data gathered by dotMemory is more reliable, because it shows you whether this
lambda is a real issue (i.e. whether it does in fact generate lots of traffic).
Solving
Considering how tricky lambda expressions may be, some companies even prohibit using lambdas
in their development processes. We believe that lambdas are a very powerful instrument which
definitely can and should be used - as long as particular caution is exercised.
The main strategy when using lambdas is avoiding closures. In such a case, a created delegate will
always be cached with no impact on traffic.
Thus, for our example, one solution is to not pass the parameter x to the lambda. The fix would look
as follows:


private string GetOrCreate(string key, Func<string, string> evaluator)
{
    string ret;
    if (myCache.TryGetValue(key, out ret))
        return ret;
    ret = evaluator(key);
    myCache[key] = ret;
    return ret;
}

private string Substring(string x)
{
    var ret = GetOrCreate(x, (newX) => newX.Substring(1));
    return ret;
}

The updated lambda doesn't capture any variables; therefore, its delegate should be cached. This
can be confirmed by dotMemory:

As you can see, now only one instance of Func is created.


If you need to pass some additional context to GetOrCreate, a similar approach (avoiding variable


closure) should be used. For example:


private string GetOrCreate<T>(string key, T context, Func<string, T, string> evaluator)
{
    ...
}

private string Substring(string x, string anotherValue)
{
    var ret = GetOrCreate(x, anotherValue, (newX, newAnotherValue) => newX + newAnotherValue);
    return ret;
}

LINQ Queries
As we just saw in the previous section, lambda expressions always involve creating a delegate.
What about LINQ? The concepts of LINQ queries and lambda expressions are closely connected and
have very similar implementations under the hood. This means that all the concerns we've discussed
for lambdas are also valid for LINQ queries.
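
To see the connection, note that the compiler lowers query syntax into chains of extension-method calls that take lambdas. A rough sketch of this lowering (reusing the names from the example below):

var result = from s in inList
             where s.Length > threshold
             select s;

// is compiled roughly as:
var result = inList.Where(s => s.Length > threshold);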
If your LINQ query contains a closure, the compiler won't cache the corresponding delegate. For
example:
public List<string> GetLongNames(List<string> inList, int threshold)
{
    var result =
        from s in inList
        where s.Length > threshold
        select s;
    return result.ToList();
}

As the threshold parameter is captured by the query, its delegate will be created each time the
method is called. As with lambdas, traffic from delegates can be checked in dotMemory:


Unfortunately, there's one more pitfall to avoid when using LINQ. Any LINQ query (as any other
query) assumes iteration over some data collection, which, in turn, assumes creating an iterator. The
subsequent chain of reasoning should already be familiar: if this LINQ query stays on a hot path,
then constant allocation of iterators will generate significant traffic.
Consider this example:
class LinqTest
{
    private List<string> companies;

    private List<string> GetLongNames(List<string> inList)
    {
        var result =
            from s in inList
            where s.Length > 3
            select s;
        return result.ToList();
    }

    public void Foo()
    {
        var longNames = GetLongNames(companies);
    }
}

Each time GetLongNames is called, the LINQ query will create an iterator.


Detecting

With dotMemory, finding excessive iterator allocations is an easy task:

1. Open a memory snapshot and select the Memory Traffic view.
2. Find objects from the System.Linq namespace that contain the word "Iterator". In our example we use the Where LINQ method, so we look for System.Linq.Enumerable+WhereListIterator<string> objects.
3. Determine the methods responsible for this traffic.

For instance, if we call the Foo method from our example 10,000 times, the Memory Traffic view
will look as follows:

The Heap Allocation Viewer plugin also warns us about allocations in LINQ queries, but only if they
explicitly call LINQ methods. For example:

Solving
Unfortunately, the only answer here is to not use LINQ queries on hot paths. In most cases, a LINQ
query can be replaced with a foreach loop. In our example, a fix could look like this:


private List<string> GetLongNames(List<string> inList)
{
    var result = new List<string>();
    foreach (var s in inList)
    {
        if (s.Length > 3)
            result.Add(s);
    }
    return result;
}

As no LINQ is used, no iterators will be created.

Summary
Garbage collection is a resource-consuming process. The golden rule is: the more memory is
allocated, the more has to be collected, and the slower your application runs.
To find out whether GC is the cause of performance issues, use a performance profiler.
Use a memory profiler to find the exact objects and methods that cause high memory traffic.
A number of bad design patterns are known to cause memory traffic. Exercise caution when
working with value types (boxing), collections, strings, lambdas, and LINQ queries.

Ineffective Memory Usage

What is ineffective usage? When your app consumes more memory than it should, or could, we
call this ineffective. Sometimes you just feel that a particular algorithm consumes too much, but
nothing seems to explain why it does.
As we said earlier, triggers of ineffective memory usage are numerous. Typically, they are all related
to bad code design, which is also why this chapter doesn't suggest any exact solutions. Nevertheless,
there are some basic considerations you should keep in mind when faced with ineffective memory
usage.
First and foremost, if you're not satisfied with memory consumption in your application, you should
perform memory profiling.
Then, after you have a memory snapshot, use your profiler to answer two main questions:
What objects retain the most memory?
What methods allocate the most memory?


In addition to our recommendations on how you can answer these questions, this chapter also
covers a number of basic considerations concerning ineffective memory usage (in the Things Worth
Looking For section).

Who Retains Memory?


Understanding how memory is retained in your application is essential for successful optimization.
For example, you know that a major part of memory in your application is consumed by strings.
Nevertheless, in all likelihood the subject of your optimizations is not these strings per se but the data
structures that store them. That's why "Who retains the memory?" is one of the main questions when
analyzing ineffective memory usage. To fully answer this question, you should be familiar with the
concept of dominators.

Dominators
Object A dominates object B if every path to B from an application's roots goes through A. In other
words, object B is retained in memory exclusively by object A: if A is garbage-collected, B is also
garbage-collected. For example, an array is a dominator for its elements (in case there are no other
references to the array elements). If there are multiple paths to an object from the app's roots, it is not
dominated by anyone.

The amount of memory exclusively retained (dominated) by an object is one of the most useful
characteristics when analyzing ineffective memory usage. Consider an example.


There are 5 dominators in this figure: A, B, F, G, and I. C, D, and E are not dominators, as none of
them dominates F. H and J do not dominate K.
Object I retains 8 bytes of memory (J). If I is removed from memory, K will stay, as it will still be
retained through the G - H path.
Object F retains 52 bytes (G + I + H + J + K).
Question (answer given below): how much memory does B retain?
So, when your application seems to consume too much memory, first determine the largest
dominators and analyze what objects they retain (and how).
B retains 120 bytes.


What are the possible ways of doing this? Earlier dotMemory versions offered just one way of
analyzing dominators - the Group by Dominators view, which shows the tree of dominators sorted
by retained memory size:

Starting with version 4.3, dotMemory offers a new visual way of analyzing dominators: the
Sunburst Chart. In this view, the dominators hierarchy is shown on a sunburst chart. The more
memory a dominator retains, the larger the central angle.


A quick look at this chart shows what objects are crucial for your app and helps you evaluate the
largest structures.
If you click a particular dominator, the Domination Path on the right will show you the retention
path of this dominator. Double-click a dominator to zoom into the chart showing the objects retained
by this dominator in more detail.

Our experience shows that the Dominators chart is also very effective when you need to quickly
evaluate how a certain piece of functionality works in your app. For example, below are two charts built for
an image editor application. The first one was plotted before anything was done in the app, and the
second one reflects memory usage after the user has applied an image filter.


After some time, if you profile your app regularly, you'll be able to see not only how your
app works, but also how particular changes in the code affect memory usage.

What Methods Allocate the Most Memory?


This task is typically solved with the help of a call tree. Most memory profilers have a special call
tree view for determining the calls that have allocated the largest amounts of memory. While this
doesn't sound too complicated, in fact even for mid-size projects digging into the tree may become
a headache.


Of course, this algorithm is applicable to dotMemory as well. However, dotMemory 4.0 and later
offers a much easier way called Call Tree as Icicle Chart.
The idea behind the chart is simple - it's a graphical representation of the call tree. Each call is shown
as a horizontal bar whose length depends on the size of objects allocated in the call's subtree. The
more memory allocated in the underlying subtree, the longer the bar. The bar's color value serves
as an additional indicator - the more memory allocated by the call itself, the darker the bar.

So instead of looking at lots of numbers, start your analysis by opening the Call Tree as Icicle Chart
view. At just a glance, you can match a certain call with how much memory it allocates.
For example, the following chart shows the same data as the Call Tree table from the picture above.
Notice how there's no need to dive into the call tree: the main memory allocations can be seen instantly.


Things Worth Looking For


As we said in the beginning, ineffective memory usage is typically caused by wrong code design
decisions. Though these are probably countless, in this section we'll try to pick out some that are
worth looking for if you suspect your application uses memory ineffectively.

Object Lifetime
Its possible that your application stores objects longer than they are needed. Here are some basic
considerations on object lifetime in .NET.


Unmanaged Resources. Dispose Pattern


Caution must be exercised when working with managed objects that use unmanaged resources
such as files, streams, window handles, and so on. The thing is that unmanaged resources are not
released by the garbage collector until you do it yourself in a destructor or Finalize method - a finalizer
(this is where you should perform all cleanup operations, e.g. close a file or a handle, and so on).
As the complexity of collecting such an object is unpredictable from the garbage collector's point of
view (you may put any operation in the finalizer), the CLR calls the finalizer asynchronously in a
separate thread. This thread runs periodically and independently of garbage collection. To make
this work, .NET Framework developers created a special finalization queue. All finalizable
objects, once created, get an additional root reference from this queue. Therefore, when such an
object becomes unreachable, it is not collected but only promoted to the next generation (as it is still
referenced by the queue). The reference from the finalization queue is then removed, but a new one
is created - the object gets a reference from the special f(inalize)Reachable, or fReachable, queue. The
finalization thread (remember, it's the one that runs periodically) goes through the objects in the
fReachable queue and runs their destructors or Finalize methods. Only after this does the finalizable
object become eligible for collection. So, where is the pitfall?
The problem is that if you follow this finalization pattern, you extend the lifetime of a finalizable
object by at least one more GC cycle. To avoid this, we suggest employing the dispose pattern. This means
implementing the IDisposable interface on your class and adding the corresponding Dispose method.
public void Dispose()
{
    ... // clean up unmanaged resources
    GC.SuppressFinalize(this); // delete the reference from the finalization queue
}

The GC.SuppressFinalize(this) call deletes the reference from the finalization queue, thereby
eliminating the problem of extended lifetime. All you need to do is either call the Dispose method
explicitly:
var fObj = new MyFinalizableClass();
... // do something
fObj.Dispose();

or implicitly via the using statement:


using (var fObj = new MyFinalizableClass())
{
    ... // do something
}

Cache
Try looking at your cache from the perspective of how long it stores data. For example, the simplest
Dictionary-based cache implementations store cached data forever. Thus, they may hold a lot of data that
will never be used again. To prevent this problem, consider implementing your cache using the Most
Recently Used (MRU) or Least Recently Used (LRU) model, as in the sketch below.
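
For illustration, here is a minimal LRU cache sketch built on a dictionary and a linked list that tracks access order (all names are hypothetical; a production cache would also need thread safety and eviction tuning):

class LruCache<TKey, TValue>
{
    private readonly int _capacity;
    private readonly Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>> _map =
        new Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>>();
    private readonly LinkedList<KeyValuePair<TKey, TValue>> _order =
        new LinkedList<KeyValuePair<TKey, TValue>>();

    public LruCache(int capacity)
    {
        _capacity = capacity;
    }

    public bool TryGet(TKey key, out TValue value)
    {
        LinkedListNode<KeyValuePair<TKey, TValue>> node;
        if (_map.TryGetValue(key, out node))
        {
            _order.Remove(node);   // move the entry to the front:
            _order.AddFirst(node); // it is now the most recently used
            value = node.Value.Value;
            return true;
        }
        value = default(TValue);
        return false;
    }

    public void Set(TKey key, TValue value)
    {
        LinkedListNode<KeyValuePair<TKey, TValue>> node;
        if (_map.TryGetValue(key, out node))
        {
            _order.Remove(node);
        }
        else if (_map.Count >= _capacity)
        {
            _map.Remove(_order.Last.Value.Key); // evict the least recently used entry
            _order.RemoveLast();
        }
        _map[key] = _order.AddFirst(new KeyValuePair<TKey, TValue>(key, value));
    }
}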
On the other hand, consider a cache implemented on weak references (each object in the cache is stored
via WeakReference). Though it is not really useful as an ordinary cache (the data is wiped after each
full garbage collection), it may come in handy in some specific cases. In addition, you can use weak
references to enhance your MRU or LRU cache. For example, instead of removing a cache item that
is no longer needed, you can change its reference from strong to weak. Thus, there is a chance that
it will still be alive when needed (in this case, you return a strong reference to the object).
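
A sketch of the weak-reference idea (WeakReference<T> requires .NET Framework 4.5 or later; names are illustrative):

class WeakCache<TKey, TValue> where TValue : class
{
    private readonly Dictionary<TKey, WeakReference<TValue>> _cache =
        new Dictionary<TKey, WeakReference<TValue>>();

    public void Set(TKey key, TValue value)
    {
        _cache[key] = new WeakReference<TValue>(value);
    }

    public bool TryGet(TKey key, out TValue value)
    {
        WeakReference<TValue> weakRef;
        if (_cache.TryGetValue(key, out weakRef) && weakRef.TryGetTarget(out value))
            return true; // still alive: hand out a strong reference
        value = null;
        return false; // never cached, or already garbage-collected
    }
}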

WPF Controls Undo


Continuing the topic of ineffective cache implementation, let's take a look at WPF controls. One
of the great things about WPF is that it enables Undo on several controls, e.g. on a textbox. For
every change we make to its contents, WPF will keep the actions in memory so we can easily undo
them with Ctrl+Z. Now imagine that our application has a textbox in which lots of changes are
being made. By default, the WPF UndoManager will keep up to 100 of these actions (in earlier WPF
versions, there was no limit at all).
So, having a high undo limit on textboxes in our applications may cause excessive memory usage.
Detecting
After running our application and making many changes in textbox contents, the dotMemory
snapshot overview will show a large number of Char[] objects.


If we drill deeper into this object set and look at its dominators (the Group by Dominators view),
we can see that these object types are held in memory by several others. The first dominator here
(TextTreeRootNode) is our textbox control itself. Of course it needs a few Char[] arrays to hold its
contents. The second one however, UndoManager, is more interesting.

It seems the UndoManager is keeping a few Char[] arrays as well. This is because WPF's undo
behavior will need this information to be able to undo/redo changes made to the textbox.
Solving
First of all, this is not really an issue. It's a feature! It is important to know it's there, though, for two
reasons. First, when profiling WPF applications we may see a number of Char[] arrays being created.
Don't get distracted by the UndoManager; try focusing on other dominators if the allocations are
too excessive. Second, when building applications where a lot of text editing is done, high memory
usage can be explained by this undo behavior.
To limit the number of entries the undo and redo stacks can hold, we can set the textbox's
UndoLimit property to a smaller number. The default value was -1 (unlimited) in earlier .NET
versions, but in recent ones it defaults to 100.


<Grid>
    <TextBox UndoLimit="10" HorizontalAlignment="Left"
             TextWrapping="Wrap" AcceptsReturn="True" />
</Grid>

We could also turn off undo entirely, by changing the IsUndoEnabled property.
<Grid>
    <TextBox IsUndoEnabled="False" HorizontalAlignment="Left"
             TextWrapping="Wrap" AcceptsReturn="True" />
</Grid>

String Interning
One more automatic inspection in dotMemory that helps you fight ineffective memory usage is the
String duplicates inspection. The idea behind it is quite simple: it automatically checks memory for
string objects with the same value. After you open a memory snapshot, you will see the list of such
strings:

Why are string duplicates bad? We'll answer with another question: why create a new string if it is
already in memory?
Imagine, for example, that in the background our app parses some text files with repetitive content
(say, some XML logs).


public void ProcessLogFile(string file)
{
    using (XmlReader reader = XmlReader.Create(new StreamReader(file)))
    {
        ...
        // read XML element
        // On this step, the CLR allocates a new string in memory
        string logEntry = reader.ReadElementContentAsString();
        LogFileData.Add(logEntry); // add the string to the list
        ...
        // some list processing goes here
        ...
    }
}

So, dotMemory finds a lot of strings with identical content. What can we do?

The obvious answer is: rewrite our app so that it allocates strings with unique content just once.
Actually, there are at least two ways this can be done. The first one is to use the string interning
mechanism provided by .NET.
CLR Intern Pool
.NET automatically performs string interning for all string literals. This is done by means of an
intern pool - a special table that stores references to all unique strings. But why aren't the strings
in our example interned? The thing is that only explicitly declared string literals are interned at
compile time. Strings created at runtime are not checked against the pool. For example:
string s = "ABC"; // will be interned
string s1 = "A";
string s2 = s1 + "BC"; // will not be interned
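
You can verify this behavior with reference comparisons:

string s = "ABC";      // interned literal
string s1 = "A";
string s2 = s1 + "BC"; // created at runtime, not interned

Console.WriteLine(object.ReferenceEquals(s, s2));                 // False: s2 is a separate heap object
Console.WriteLine(object.ReferenceEquals(s, String.Intern(s2)));  // True: Intern returns the pooled "ABC"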

You can circumvent this limitation by working with the intern pool directly. For this purpose,
.NET offers two methods: String.Intern and String.IsInterned. If the string value passed to
String.Intern is already in the pool, the method returns the reference to the string. Otherwise, the
method adds the string to the pool and returns the reference to it. If you want to just check if a string


is already interned, use the String.IsInterned method. It returns the reference to the string if its
value is in the pool, or null if it isn't.
Thus, the fix for our log parsing algorithm could look as follows:
public void ProcessLogFile(string file)
{
    using (XmlReader reader = XmlReader.Create(new StreamReader(file)))
    {
        ...
        // read XML element
        string logEntry = String.Intern(reader.ReadElementContentAsString());
        LogFileData.Add(logEntry); // add the string to the list
        ...
        // some list processing goes here
        ...
    }
}

Further memory profiling would show that strings are successfully interned.

Nevertheless, such an implementation has one disadvantage - the interned strings will stay in
memory forever (or, to be more precise, they will persist for the lifetime of the process that hosts
our application, as the intern pool will store references to the strings even if they are no longer
needed).
If, for example, our app has to parse a large number of different log files, this could be a problem. In
such a case, a better solution would be to create a local analogue of the intern pool.
Local Intern Pool
The simplest (though very far from optimal) implementation might look like this:


class LocalPool
{
    private readonly Dictionary<string, string> _stringPool = new Dictionary<string, string>();

    public string GetOrCreate(string entry)
    {
        string result;
        if (!_stringPool.TryGetValue(entry, out result))
        {
            _stringPool[entry] = entry;
            result = entry;
        }
        return result;
    }
}

The processing algorithm will change a little bit as well:


public void ProcessLogFile(string file)
{
    LocalPool pool = new LocalPool();
    using (XmlReader reader = XmlReader.Create(new StreamReader(file)))
    {
        ...
        // read XML element
        string logEntry = pool.GetOrCreate(reader.ReadElementContentAsString());
        LogFileData.Add(logEntry); // add the string to the list
        ...
        // some list processing goes here
        ...
    }
}

In this case, pool will be removed from memory with the next garbage collection after ProcessLogFile is done working.

Summary
Ineffective memory usage occurs when your application consumes more memory than it
should or could.


To fight ineffective usage, you should answer two questions:

What objects retain the most memory?
What methods allocate the most memory?

Things worth checking include object lifetime and unmanaged resources, string interning,
and the WPF controls' undo limit.

Memory Profiling in Unit Tests


In this book we talk a lot about memory profiling. But let's be honest, memory profilers can hardly
be called an everyday tool. Developers don't like to profile, and often start thinking about profiling
their product just before its release to market. Sometimes this approach works, and sometimes a
last-minute issue like a leak or huge memory traffic crushes all your deadlines. The proactive approach
would be to profile your app's functionality on a daily basis, but no one's got the resources to do
that.
We think there may be a solution.
If you employ unit testing in your development process, it is likely that you regularly run a number
of tests on app logic. Now imagine that you could write some special memory profiling tests, e.g.
a test that identifies leaks by checking memory for objects of a particular type, or a test that tracks
memory traffic and fails if the traffic exceeds some threshold.
This is exactly what the dotMemory Unit framework allows you to do. The framework is distributed
as a NuGet package and can be used to perform the following scenarios:

Checking memory for objects of a certain type.
Checking memory traffic.
Getting the difference between memory snapshots.
Saving memory snapshots for further investigation in dotMemory.

In other words, dotMemory Unit extends your unit testing framework with the functionality of a
memory profiler.

How It Works
dotMemory Unit is distributed as a NuGet package installed to your test project:
PM> Install-Package JetBrains.DotMemoryUnit

dotMemory Unit requires the ReSharper unit test runner. This means you should have either
ReSharper 9.1 (or later) or dotCover 3.1 (or later) installed on your machine. Another option is
to run tests using the standalone dotMemory Unit launcher. You can take the launcher either
from the NuGet package or from the zip package available for download on the dotMemory
Unit page.
https://www.jetbrains.com/dotmemory/unit/


After you install the dotMemory Unit package, ReSharper's menus for unit tests will include
an additional item, Run Unit Tests under dotMemory Unit. In this mode, the test runner
will execute dotMemory Unit calls as well as ordinary test logic. If you run a test the normal
way (without dotMemory Unit support), all dotMemory Unit calls will be ignored.

dotMemory Unit works with MSTest, NUnit and most of the other unit-testing frameworks
available on the market.
dotMemory Unit can be integrated with any continuous integration system using a standalone
launcher. JetBrains TeamCity provides support for dotMemory Unit with a special plugin. For
more details please turn to the chapter Memory Profiling in Continuous Integration.

When to Use dotMemory Unit


Use memory tests in the same way as unit tests on application logic:
After you manually find an issue (such as a leak), write a memory test that covers it. Thus,
you can employ the same red / green workflow you use for ordinary tests.
Write tests for proactive testing - to ensure that new product features do not create any
memory issues, like objects left in memory or large memory traffic.
Now let's take a look at some examples to better understand what dotMemory Unit does.

Example 1. Checking for Specific Objects


Let's start with something simple. One of the most useful cases can be finding a leak by checking
memory for objects of a specific type.


[Test]
public void TestMethod1()
{
    var foo = new Foo();
    foo.Bar();
    // 1
    dotMemory.Check(memory => // 2
        Assert.That(memory.GetObjects(where => where.Type.Is<Goo>()).ObjectsCount,
            Is.EqualTo(0))); // 3
    GC.KeepAlive(foo); // protect objects from GC if this is implied by test logic
}

1. A lambda is passed to the Check method of the static dotMemory type. This method creates a
dump of the managed heap and will be called only if you run the test using Run Unit Tests under
dotMemory Unit.
2. The memory object of the Memory type passed to the lambda contains all memory data for the
current execution point.
3. The GetObjects method returns a set of objects that match the condition passed in another
lambda. This line slices the memory by leaving only objects of the Goo type. The NUnit Assert
expression asserts that there should be 0 objects of the Goo type.
Note that dotMemory Unit does not force you to use any specific Assert syntax. Simply use the
syntax of the framework your test is written for. For example, the shown assertion uses the NUnit
syntax but could be easily modified for MSTest:
Assert.AreEqual(0, memory.GetObjects(where => where.Type.Is<Goo>()).ObjectsCount);

Example 2. Selecting Objects by a Number of Conditions

To slice data by a number of conditions, you can build chains of GetObjects calls. The ObjectSet
type has two properties that you can use in test assertions: ObjectsCount and SizeInBytes.


Assert.That(memory.GetObjects(where => where.Type.Is<Foo>())
    .GetObjects(where => where.Generation.Is(Generation.LOH)).ObjectsCount,
    Is.EqualTo(0));

Example 3. Checking Memory Traffic


The test for checking memory traffic is even simpler. When you just need to evaluate the amount of
memory allocated in a test, you can use the AssertTraffic attribute. The attribute is quite flexible
and allows you to filter traffic data by object type, interface, or namespace.
In the example below, we assert that the amount of memory allocated to string objects by the code in
TestMethod1 does not exceed 1,000 bytes.
[AssertTraffic(AllocatedSizeInBytes = 1000, Types = new[] { typeof(string) })]
[Test]
public void TestMethod1()
{
    ... // Some user code
}

Example 4. Complex Scenarios for Checking Memory Traffic

If you need to get more complex information about memory traffic (say, check for traffic of objects
of a particular type in some specific time interval), you can use an approach similar to the one in the
first example above. The lambdas passed to the dotMemory.Check method slice and dice traffic data
by various conditions.
[DotMemoryUnit(CollectAllocations = true)] // collect traffic data
[Test]
public void TestMethod1()
{
    var memoryCheckPoint1 = dotMemory.Check(); // 1

    foo.Bar();

    var memoryCheckPoint2 = dotMemory.Check(memory =>
    {
        // 2
        Assert.That(memory.GetTrafficFrom(memoryCheckPoint1)
            .Where(obj => obj.Interface.Is<IFoo>())
            .AllocatedMemory.SizeInBytes, Is.LessThan(1000));
    });

    bar.Foo();

    dotMemory.Check(memory =>
    {
        // 3
        Assert.That(memory.GetTrafficFrom(memoryCheckPoint2)
            .Where(obj => obj.Type.Is<Bar>())
            .AllocatedMemory.ObjectsCount, Is.LessThan(10));
    });
}

1. To mark time intervals where memory traffic can be analyzed, use checkpoints created by
dotMemory.Check (as you've probably guessed, this method simply takes a memory snapshot).
2. The checkpoint that defines the starting point of the interval is passed to the GetTrafficFrom
method. For example, this line asserts that the total size of objects implementing the IFoo
interface created in the interval between memoryCheckPoint1 and memoryCheckPoint2 is less
than 1,000 bytes.
3. You can use any checkpoint created earlier as a base for analysis. Thus, this line gets traffic
data between the current dotMemory.Check call and memoryCheckPoint2.

Example 5. Comparing Snapshots

Like in the standalone dotMemory profiler, you can use checkpoints not only to compare traffic
but for all kinds of snapshot comparisons. In the example below we assert that no objects from the
MyApp namespace survived garbage collection in the interval between memoryCheckPoint1 and the
second dotMemory.Check call.
var memoryCheckPoint1 = dotMemory.Check();

foo.Bar();

dotMemory.Check(memory =>
{
    Assert.That(memory.GetDifference(memoryCheckPoint1)
        .GetSurvivedObjects().GetObjects(where => where.Namespace.Like("MyApp"))
        .ObjectsCount, Is.EqualTo(0));
});

Memory Profiling in Continuous Integration

When we say unit testing, we also imply continuous integration. Indeed, these two terms have
become inextricable, and running unit tests is now an obligatory CI build step. What does dotMemory
Unit have to offer here? A standalone dotMemory Unit launcher. dotMemoryUnit.exe is a command-line
tool that works as a mediator: it runs a particular standalone unit test runner and provides support
for dotMemory Unit calls in the running tests.
The dotMemory Unit standalone launcher is distributed as a zip archive available for download on the
dotMemory site. The dotMemory Unit NuGet package also contains the standalone launcher.
Using the tool is easy. This is how you can run NUnit tests from some MainTests.dll file:
dotMemoryUnit.exe -targetExecutable="C:\NUnit 2.6.4\bin\nunit-console.exe" -returnTargetExitCode
--"E:\MyProject\bin\Release\MainTests.dll"

Here:
-targetExecutable is the path to the unit test runner that will run the tests.
-returnTargetExitCode makes the launcher return the unit test runner's exit code. This is
important for CI, as the build step must fail if any memory tests fail (test runners return a
nonzero exit code in this case).
The parameters passed after the double dash (--) are unit test runner arguments (in our case,
it's the path to the dll with the tests).
Now it's easier than ever to make memory tests a part of your continuous integration builds. Simply
add the command shown above as a build step on your CI server, and it will run your tests with
dotMemory Unit support.
The tool's output contains data on successful and failed tests. For example:

https://www.jetbrains.com/dotmemory/download/#section=dotmemoryunit
https://www.nuget.org/packages/JetBrains.DotMemoryUnit/


...
Tests run: 3, Errors: 1, Failures: 0, Inconclusive: 0, Time: 28.3051788194675 seconds
Not run: 0, Invalid: 0, Ignored: 0, Skipped: 0

Errors and Failures:
1) Test Error : MainTests.IntegrationTests.Method2
   AssertTrafficException : Allocated memory amount
   Expected: 50,000,000
   But was:  195,344,723
...

If you use JetBrains TeamCity as your CI server, you are a little bit luckier than others.

Integration with JetBrains TeamCity

For TeamCity users we created a special plugin that adds support for dotMemory Unit to all .NET
test runner types. Let's take a more detailed look at how this works.
1. On your TeamCity server, copy dotMemoryUnit.zip (get the latest version from Artifacts on
the JetBrains build server) to the plugins directory located in your TeamCity data directory.
2. Restart the TeamCity Server service. Now, all .NET test runners in TeamCity provide support
for dotMemory Unit.
3. As the dotMemory Unit standalone launcher is required for the plugin to work, you should
provide it to your build agent. There are two ways to do this:
Download and unzip the launcher to any directory on a TeamCity build agent. Don't
forget to unblock the zip!
(Recommended) Use the launcher from the dotMemory Unit NuGet package referenced
by your project.
Note that if you omit binaries from the source control repository, you can use TeamCity's NuGet
Installer runner type. It will perform a NuGet Package Restore before the build. All you need is to add
the NuGet Installer build step and specify the path to your solution.
https://teamcity.jetbrains.com/project.html?projectId=TeamCityPluginsByJetBrains_DotMemoryUnit&tab=projectOverview
https://www.jetbrains.com/dotmemory/unit/


4. Now, update the step used to run tests in your build configuration. Open the corresponding
build step in your build configuration.
5. Note that after we installed the dotMemory Unit plugin, this build step now additionally
contains the JetBrains dotMemory Unit section. Here you should:
Turn on Run build step under JetBrains dotMemory Unit.
Specify the path to the dotMemory Unit standalone launcher directory in Path to
dotMemory Unit. Note that as we decided to use the launcher from the NuGet package
referenced by our project (see step 3), we specify the path relative to the project checkout
directory.
In Memory snapshots artifacts path, specify a path to the directory (relative to the
build artifacts directory) where dotMemory Unit will store snapshots in case memory
test(s) fail.


6. Save the configuration.


Done! Now, this build step supports tests that use dotMemory Unit.
From the end user's point of view, nothing has changed. If you run the configuration and any of the
memory tests fails, the results will be shown in the overview:

The Tests tab will show you the exact tests that have failed. For example, here the reason had to do
with the amount of memory traffic:

Click on a failed test to see exactly what has gone wrong:


Now, you can investigate the issue more thoroughly by analyzing a memory snapshot that is saved
in build artifacts:

Conclusion
We hope our little book helped you get a general understanding of, or at least refresh your knowledge
of, how you can fight memory issues in .NET applications. In fact, one of the main takeaways of this book
is that you should not be afraid of memory profiling. With modern memory profilers (and the little
background knowledge we hope you now have), it's not as complex and time-consuming as
is commonly believed. Moreover, with frameworks like dotMemory Unit, you can automate this
process once and for all.
If, for some reason, you missed the book's intro, it's worth mentioning one more time that the content of
this book is based on various posts from our ReSharper blog. If you liked the book, you'll definitely
like the blog as well. It's a great place to learn something new not only about our tools but also about
best .NET practices in the way we see them.

dotMemory and dotTrace


If you're interested in learning more about dotMemory and dotTrace, please note that you can download
a fully functional evaluation of the products from our website at https://www.jetbrains.com/dotnet
as part of the ReSharper Ultimate package.

http://blog.jetbrains.com/dotnet/

