2012

Volume 27

JavaScript - Managing Memory in Windows Store Apps

By David Tepper | 2012

Windows 8 is designed to feel fluid and alive, letting users rapidly switch among multiple apps to accomplish various tasks and activities. Users expect to quickly pop in and out of different experiences, and they never want to feel like they have to wait for an app when they need to use it. In this model, apps are rarely terminated by the user; instead they’re frequently toggled between a state of execution and suspension. Apps are brought to the foreground for use and then moved to the background when the user switches to another app—and all the while users expect their machines not to slow down or feel sluggish, even as they open more and more apps.

In the Microsoft Windows Application Experience team’s investigations, we’ve seen that some Windows Store apps begin to encounter resource issues during prolonged use. Memory management bugs in apps can compound over time, leading to unnecessary memory usage and negatively impacting the machine overall. In our efforts to squash these bugs in our own products, we’ve identified a number of recurring problem patterns, as well as common fixes and techniques to escape them. In this article, I’ll discuss how to think about memory management in your Windows Store apps as well as ways to identify potential memory leaks. I’ll also provide some codified solutions to common issues the team has observed.

What Are Memory Leaks?

Any scenario in an app that leads to resources that can be neither reclaimed nor used is considered a memory leak. In other words, if the app is holding a chunk of memory that the rest of the system will never be able to use until the app is terminated, and the app itself is not using it, there’s a problem. This is a broader definition than the typical explanation of a memory leak, “Dynamically allocated memory that’s unreachable in code,” but it’s also more useful because it encompasses other, similar resource-utilization problems that can negatively affect both the user and the system. For example, if an app is storing data that’s reachable from all parts of the code, but the data is used only once and never released afterward, it’s a leak according to this definition.

It’s important to keep in mind that sometimes data is stored in memory that will never be used simply due to the user’s actions in that particular instance. So long as this information is potentially useable throughout the lifetime of the app or is freed when it’s no longer needed, it’s not considered a leak, despite never being used.

What Is the Impact?

Gone are the days when machines were in a race to the sky for resource availability. PCs are getting smaller and more portable, with fewer available resources than their predecessors. This is fundamentally at odds with increasingly common usage patterns that involve switching among multiple experiences rapidly, with the expectation of a snappy UI and all content immediately available. Today, apps are multitudinous and alive for longer periods of time. At the same time, machines have less memory to support them all, and user expectations of performance have never been higher.

But does leaking a few megabytes really make that big a difference? Well, the issue isn’t that a few megabytes leaked once, it’s that memory leaks in code often compound over time as use of the app continues. If a scenario leads to unrecoverable resources, the amount of unrecoverable resources will grow, usually without bounds, as the user continues to repeat that scenario. This rapidly degrades the usability of the system as a whole as less memory is available for other processes, and it leads users to attribute poor system performance to your app. Memory leaks are most severe when they appear in:

  • Frequent tasks (such as decoding the next frame of a video)
  • Tasks that don’t require user interaction to initiate (for example, auto-saving a document periodically)
  • Scenarios that run for extended periods (such as background tasks)

Leaks in these situations (and in general) can dramatically increase the memory footprint of your app. Not only can this lead to a resource-utilization crisis for the entire system, it also makes your app much more likely to be terminated instead of suspended when not in use. Terminated apps take longer to reactivate than suspended apps, reducing the ease with which users can experience your scenarios. For full details on how Windows uses a process lifetime manager to reclaim memory from unused apps, see the Building Windows 8 blog post at bit.ly/JAqexg.

So, memory leaks are bad—but how do you find them? In the next few sections I’ll go over where and how to look for these issues, and then take a look at why they occur and what you can do about them.

Different Kinds of Memory

Not all bits are allocated equally. Windows keeps track of different tallies, or views, of an app’s memory use to make performance-analysis tasks easier. To better understand how to detect memory leaks, it’s useful to know about these different memory classifications. (This section assumes some knowledge of OS memory management via paging.)

Private Working Set The set of pages your app is currently using to store its own unique data. When you think of “my app’s memory usage,” this is probably what you’re thinking of.

Shared Working Set The set of pages your app is using that aren’t owned by your process. If your app is using a shared runtime or framework, common DLLs or other multiprocess resources, those resources will take up some amount of memory. The shared working set is the measure of those shared resources.

Total Working Set (TWS) Sometimes simply called “working set,” this is the sum of the private working set and the shared working set.

The TWS represents your app’s full impact on the system, so the measurement techniques I’ll describe will use this number. However, when tracking down potential issues, you may find it useful to investigate the private or shared working sets separately, as this can tell you whether it’s your app that’s leaking, or a resource that the app is using.

Discovering Memory Leaks

The easiest way to discover how much memory your app is using in each category is to use the built-in Windows Task Manager.

  1. Launch the Task Manager by pressing Ctrl+Shift+Esc, and click on More Details near the bottom.
  2. Click on the Options menu item and make sure that “Always on top” is checked. This prevents your app from going to the background and suspending while you’re looking at Task Manager.
  3. Launch your app. Once the app appears in Task Manager, right-click on it and click “Go to details.”
  4. Near the top, right-click on any column and go to “Select Columns.”
  5. You’ll notice options here for shared and private working set (among others), but for the time being, just make sure that “Working set (memory)” is checked and click OK (see Figure 1).
  6. The value you’ll see is the TWS for your app.

Checking the Total Working Set in Windows Task Manager
Figure 1 Checking the Total Working Set in Windows Task Manager

To quickly discover potential memory leaks, leave your app and Task Manager open and write down your app’s TWS. Now pick a scenario in your app that you want to test. A scenario consists of actions a typical user would execute often, usually involving no more than four steps (navigating between pages, performing a search and so forth). Perform the scenario as a user would, and note any increase in the TWS. Then, without closing the app, go through the scenario again, starting from the beginning. Do this 10 times and record the TWS after each step. It’s normal for the TWS to increase for the first few iterations and then plateau.

Did your app’s memory usage increase each time the scenario was performed, without ever resetting to its original level? If so, it’s possible you have a memory leak in that scenario and you’ll want to take a look at the following suggestions. If not, great! But make sure to check other scenarios in your app, particularly those that are very common or that use large resources, such as images. Avoid performing this process on a virtual machine or over Remote Desktop, however; these environments can lead to false positives when looking for leaks and increase your memory usage numbers beyond their real value.
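The same repeat-and-measure technique can be sketched in code. The following is a hypothetical illustration in plain JavaScript (runnable under Node.js, not inside a Windows Store app): process.memoryUsage().heapUsed stands in for Task Manager’s working-set column, and the detectGrowth name, the iteration count and the 1.5x threshold are all arbitrary choices for the sketch.

```javascript
// A sketch of the repeat-and-measure technique, using Node's
// process.memoryUsage() as a stand-in for Task Manager's working-set column.
// detectGrowth, the iteration count and the 1.5x threshold are illustrative.
function detectGrowth(runScenario, iterations = 10) {
  const readings = [];
  for (let i = 0; i < iterations; i++) {
    runScenario();
    readings.push(process.memoryUsage().heapUsed);
  }
  // A plateau after the first few iterations is normal; sustained growth
  // across every repetition of the scenario suggests a leak.
  const avg = (a) => a.reduce((s, v) => s + v, 0) / a.length;
  const firstHalf = readings.slice(0, iterations / 2);
  const secondHalf = readings.slice(iterations / 2);
  return avg(secondHalf) > avg(firstHalf) * 1.5;
}
```

A scenario whose memory readings keep climbing on every iteration trips the check; one that plateaus does not.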

Using Pre-Windows 8 Memory-Leak Detection Tools

You might wonder if you can use existing memory-leak detection tools to identify issues with your Windows Store app. Unless these tools are updated to work with Windows 8, it’s very likely they’ll be “confused” by the app’s lack of normal shutdown (which has been replaced by suspension). To get around this, you can use the AppObject “Exit” functionality to directly close the app in an orderly fashion, rather than forcefully closing it via external termination:

  • C++—CoreApplication::Exit();
  • C#—Application.Current.Exit();
  • JavaScript—window.close();

When using this technique, make sure you don’t ship your product with this code in place. Your app won’t invoke any code that triggers on suspension and will need to be reactivated (instead of resumed) each time it’s opened. This technique should be used only for debugging purposes and removed before you submit the app to the Windows Store.

Common Sources of Memory Leaks

In this section I’ll discuss some common pitfalls we’ve seen developers run into across all kinds of apps and languages, as well as how to address these issues in your apps.

Event Handlers Event handlers are by far the most common sources of memory leaks we’ve seen in Windows Store apps. The fundamental issue is a lack of understanding about how event handlers work. Event handlers are not just code that gets executed; they are allocated data objects. They hold references to other things, and what they hold references to may not be obvious. Conceptually, the instantiation and registration of an event handler consists of three parts:

  1. The source of the event
  2. The event handler method (its implementation)
  3. The object that hosts the method

As an example, let’s look at an app called LeakyApp, shown in Figure 2.

Figure 2 LeakyApp

public sealed partial class ItemDetailPage : 
  LeakyApp.Common.LayoutAwarePage
{
  public ItemDetailPage()
  {
    this.InitializeComponent();
  }
  protected override void OnNavigatedTo(NavigationEventArgs e)
  {
    Window.Current.SizeChanged += WindowSizeChanged;
  }
  private void WindowSizeChanged(object sender,
    Windows.UI.Core.WindowSizeChangedEventArgs e)
  {
    // Respond to size change
  }
  // Other code
}

The LeakyApp code shows the three parts of an event handler:

  • Window.Current is the object that originates (fires) the event.
  • An ItemDetailPage instance is the object that receives (sinks) the event.
  • WindowSizeChanged is the event handler method in the ItemDetailPage instance.

After registering for the event notification, the current window object has a reference to the event handler in an ItemDetailPage object, as shown in Figure 3. This reference causes the ItemDetailPage object to remain alive as long as the current window object remains alive, or until the current window object drops the reference to the ItemDetailPage instance (ignoring, for now, other external references to these objects).

A Reference to the Event Handler
Figure 3 A Reference to the Event Handler

Note that while the ItemDetailPage instance is alive, the Windows Runtime (WinRT) transitively keeps alive all the resources the instance is using, so that the instance can operate properly. Should the instance contain references to large allocations such as arrays or images, those allocations will stay alive for the lifetime of the instance. In effect, registering an event handler extends the lifetime of the object instance containing the event handler, and of all its dependencies, to match the lifetime of the event source. Of course, so far, this isn’t a resource leak. It’s simply the consequence of subscribing to an event.

The ItemDetailPage is similar to all pages in an app. It’s used when the user navigates to the page, but is no longer needed when they navigate to a different page. When the user navigates back to the ItemDetailPage, the application typically creates a new instance of the page, and the new instance registers with the current window to receive SizeChanged events. The bug in this example, however, is that when the user navigates away from the ItemDetailPage, the page fails to unregister its event handler from the current window’s SizeChanged event. As a result, the current window still holds a reference to the previous page and continues to fire SizeChanged events to it. When the user navigates back to the ItemDetailPage, the new instance also registers with the current window, as shown in Figure 4.

A Second Instance Registered with the Current Window
Figure 4 A Second Instance Registered with the Current Window

Five navigations later, five ItemDetailPage objects are registered with the current window (see Figure 5) and all their dependent resources are kept alive.

Five Objects Registered with the Current Window
Figure 5 Five Objects Registered with the Current Window

These no-longer-used ItemDetailPage instances are resources that can never be used or reclaimed; they are effectively leaked. If you take one thing away from this article, make sure it’s that unregistering event handlers when they’re no longer needed is the best way to prevent the most common memory leaks.

To fix the problem in LeakyApp, we need to remove the reference to the SizeChanged event handler from the current window. This can be done by unsubscribing from the event handler when the page goes out of view, like so:

protected override void OnNavigatedFrom(NavigationEventArgs e)
{
  Window.Current.SizeChanged -= WindowSizeChanged;
}

After adding this override to the ItemDetailPage class, the ItemDetailPage instances no longer accumulate and the leak is fixed.

Note that this type of problem can occur with any object—any long-lived object keeps alive everything it references. I call out event handlers here because they are by far the most common source of this issue—but, as I’ll discuss, cleaning up objects as they’re no longer needed is the best way to avoid large memory leaks.

Circular References in Event Handlers that Cross GC Boundaries When creating a handler for a particular event, you start by specifying a function that will be called when the event is triggered, and then you attach that handler to an object that will receive the event in question. When the event actually fires, the handling function has a parameter that represents the object that initially received the event, known as the “event source.” In the button click event handler that follows, the “sender” parameter is the event source:

private void Button_Click(object sender, RoutedEventArgs e)
{
}

By definition, the event source has a reference to the event handler or else the source couldn’t fire the event. If you capture a reference to the source inside the event handler, the handler now has a reference back to the source and you’ve created a circular reference. Let’s look at a fairly common pattern of this in action:

// gl is declared at a scope where it will be accessible to multiple methods
Geolocator gl = new Geolocator();
public void CreateLeak()
{           
  // Handle the PositionChanged event with an inline function
  gl.PositionChanged += (sender, args) =>
    {
      // Referencing gl here creates a circular reference
      gl.DesiredAccuracy = PositionAccuracy.Default;
    };
}

In this example, gl and sender are the same. Referencing gl in the lambda function creates a circular reference because the source is referencing the handler and vice versa. Normally this kind of circular reference wouldn’t be a problem because the CLR and JavaScript garbage collectors (GCs) are intelligent enough to handle such cases. However, issues can emerge when one side of the circular reference doesn’t belong to a GC environment or belongs to a different GC environment.

Geolocator is a WinRT object. WinRT objects are implemented in C/C++ and therefore use a reference-counting system instead of a GC. When the CLR GC tries to clean up this circular reference, it can’t clean up gl on its own. Similarly, the reference count for gl will never reach zero, so the C/C++ side of things won’t get cleaned up either.

Of course, this is a very simple example to demonstrate the issue. What if it wasn’t a single object but instead a large grouping of UI elements such as a panel (or in JavaScript, a div)? The leak would encompass all of those objects and tracking down the source would be extremely difficult.

There are various mitigations in place so that many of these circularities can be detected and cleaned up by the GC. For example, circular references involving a WinRT event source that’s in a cycle with JavaScript code (or circular references with a XAML object as the event source) are correctly reclaimed. However, not all forms of circularities are covered (such as a JavaScript event with a C# event handler), and as the number and complexity of references to the event source grow, the GC’s special mitigations become less guaranteed.

If you need to create a reference to the event source, you can always explicitly unregister the event handler or null out the reference later to tear down the circularity and prevent any leaks (this goes back to reasoning about the lifetime of objects you create). But if the event handler never holds a reference to the source, you don’t need to rely on platform-supplied mitigation or explicit code to prevent what can be a very large resource-utilization issue.

Using Unbounded Data Structures for Caching In many apps, it makes sense to store some information about the user’s recent activities to improve the experience. For example, imagine a search app that displays the last five queries the user entered. One coding pattern to achieve this is to simply store each query in a list or other data structure and, when the time comes to give suggestions, retrieve the top five. The problem with this approach is that if the app is left open for long periods, the list will grow without bounds, eventually taking up a large amount of unnecessary memory.

Unfortunately, a GC (or any other memory manager) has no way to reason about very large, yet reachable, data structures that will never be used. To avoid the problem, keep a hard limit on the number of items you store in a cache. Phase out older data regularly and don’t rely on your app being terminated to release these kinds of data structures. If the information being stored is particularly time-sensitive or easy to reconstitute, you might consider emptying the cache entirely when suspending. If not, save the cache to local state and release the in-memory resource; it can be reacquired on resume.
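As an illustration of the hard-limit advice, here’s a minimal sketch of a bounded recent-queries cache in JavaScript. The RecentQueries name and the five-item limit are hypothetical, matching the search example above.

```javascript
// A bounded cache of recent search queries. Older entries are phased out as
// new ones arrive, so the structure can never grow without bounds.
class RecentQueries {
  constructor(limit = 5) {
    this.limit = limit;
    this.items = [];
  }
  add(query) {
    // Move a repeated query to the front rather than duplicating it
    this.items = this.items.filter((q) => q !== query);
    this.items.unshift(query);
    if (this.items.length > this.limit) {
      this.items.length = this.limit; // drop the oldest entries
    }
  }
  suggestions() {
    return this.items.slice();
  }
}
```

However many queries the user enters, the in-memory footprint stays fixed at the limit.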

Avoid Holding Large References on Suspend

No matter the language, holding large references while suspended can lead to UX problems. Your app will stay suspended for as long as the system is able to service the requests of other running processes without needing additional memory that can only be retrieved by terminating apps. Because staying suspended means your app can be accessed more easily by the user, it’s in your best interest to keep your memory footprint small during suspension.

A simple way to accomplish this is to simply free any references to large objects when suspending that can be reconstituted on resume. For example, if your app is holding an in-memory reference to local application data, releasing the reference may significantly lower your private working set, and it’s easy to reacquire on resume because this data isn’t going anywhere. (For more information on application data, see bit.ly/MDzzIr.)

To release a variable completely, set the variable (and all references to it) to null. In C++, this will immediately reclaim the memory. For Microsoft .NET Framework and JavaScript apps, the GC will run when the app is suspended to reclaim the memory for these variables. This is a defense-in-depth approach to ensuring correct memory management.

Note, however, that if your app is written in JavaScript and has some .NET components, then the .NET GC won’t be run on suspend.
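The suspend/resume pattern can be sketched as follows. This is plain JavaScript with hypothetical stand-ins, not the actual WinRT suspension API: localState plays the role of the persistent application data store, and app is your application object with suspend and resume callbacks.

```javascript
// Hypothetical stand-ins: `localState` for the persistent application data
// store, `app` for an application object with suspend/resume callbacks.
const localState = {};
const app = {
  cache: { items: new Array(1000).fill('data') },
  onSuspending() {
    // Persist anything that's expensive to recompute...
    localState.cache = JSON.stringify(this.cache);
    // ...then drop the in-memory reference so it can be reclaimed,
    // shrinking the private working set while suspended.
    this.cache = null;
  },
  onResuming() {
    // Reacquire the data when the app comes back to the foreground.
    this.cache = JSON.parse(localState.cache);
  }
};
```

While suspended, the app holds none of the cache in memory; on resume, the data is rebuilt from local state transparently.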

Memory Management in JavaScript Windows Store Apps

Here are some tips for creating resource-efficient Windows Store apps in JavaScript. These are the recommended fixes for common issues we’ve seen in our own apps, and designing with them in mind will help stave off many potential issues before they cause headaches.

Use Code-Quality Tools An often-overlooked resource, freeware code-quality tools are available to all JavaScript developers on the Web. These tools inspect your code for lots of common issues, including memory leaks, and can be your best bet for catching issues early. Two useful tools are JSHint (jshint.com) and JSLint (jslint.com).

Use Strict Mode JavaScript has a “strict” mode that limits the way you can use variables in your code. These limitations present themselves as runtime errors that get thrown when the extra rules are violated. Such coding restrictions can help you avoid common memory leaks, such as implicitly declaring variables at global scope. For more information on strict mode, its use and the imposed restrictions, check out the MSDN Library article, “Strict Mode (JavaScript),” at bit.ly/RrnjeU.

Avoid Circular Closure References JavaScript has a fairly complicated system of storing references to variables whenever a lambda (or inline) function is used. Basically, in order for the inline function to execute correctly when it’s called, JavaScript stores the context of available variables in a set of references known as a closure. These variables are kept alive in memory until such time that the inline function itself is no longer referenced. Let’s take a look at an example:

myClass.prototype.myMethod = function (paramA, paramB) {
  var that = this;
  // Some code
  var someObject = new someClass(
    // This inline function's closure contains references to the "that" variable,
    // as well as the "paramA" and "paramB" variables
    function foo() {
      that.somethingElse();
    }
  );
  // Some code: someObject is persisted elsewhere
}

After someObject is persisted, the memory referenced by “that,” “paramA” and “paramB” won’t be reclaimed until someObject is destroyed or releases its reference to the inline function it was passed in the someClass constructor.

Issues can arise with the closures of inline functions if the reference to the inline function isn’t released, as the closure references will reside permanently in memory, causing a leak. The most common way this occurs is when a closure contains a circular reference to itself. This usually happens when an inline function references a variable that references the inline function:

function addClickHandler(domObj, paramA, paramB, largeObject) {
  domObj.addEventListener("click",
  // This inline function's closure refers to "domObj", "paramA",
  // "paramB", and "largeObject"
    function () {
      paramA.doSomething();
      paramB.somethingElse();
    },
  false);
}

In this example, domObj contains a reference to the inline function (through the event listener), and the inline function’s closure contains a reference back to it. Because largeObject isn’t being used, the intent is that it will go out of scope and get reclaimed; however, the closure reference keeps it and domObj alive in memory. This circular reference will result in a leak until domObj removes the event listener reference or gets nulled out and garbage collected. The proper way to accomplish something like this is to use a function that returns a function that performs your tasks, as shown in Figure 6.

Figure 6 Using Function Scope to Avoid Circular Closure References

function getOnClick(paramA, paramB) {
  // This function's closure contains references to "paramA" and "paramB"
  return function () {
    paramA.doSomething();
    paramB.somethingElse();
  };
}
function addClickHandlerCorrectly(domObj, paramA, paramB, largeObject) {
  domObj.addEventListener(
    "click",
  // Because largeObject isn't passed to getOnClick, no closure reference
  // to it will be created and it won't be leaked
  getOnClick(paramA, paramB),
  false);
}

With this solution, the closure reference to domObj is eliminated, but the references to paramA and paramB still exist, as they’re necessary for the event handler implementation. To make sure you don’t leak paramA or paramB, you still need to either unregister the event listener or wait for them to be automatically reclaimed when domObj gets garbage collected.

Revoke All URLs Created by URL.createObjectURL A common way to load media for an audio, video or img element is to use the URL.createObjectURL method to create a URL the element can use. When you use this method, it tells the system to keep an internal reference to your media. The system uses this internal reference to stream the object to the appropriate element. However, the system doesn’t know when the data is no longer needed, so it keeps the internal reference alive in memory until it’s explicitly told to release it. These internal references can consume large amounts of memory, and it’s easy to accidentally retain them unnecessarily. There are two ways to release these references:

  1. You can revoke the URL explicitly by calling the URL.revokeObjectURL method and passing it the URL.
  2. You can tell the system to automatically revoke the URL after it’s used once by passing { oneTimeOnly: true } as the second parameter to URL.createObjectURL:
var url = URL.createObjectURL(blob, {oneTimeOnly: true});

Use Weak References for Temporary Objects Imagine you have a large object referenced by a Document Object Model (DOM) node that you need to use in various parts of your app. Now suppose that at any point the object can be released (for example, node.innerHTML = “”). How do you make sure to avoid holding references to the object so it can be fully reclaimed at any point? Thankfully, the Windows Runtime provides a solution to this problem, which allows you to store “weak” references to objects. A weak reference doesn’t block the GC from cleaning up the object it refers to and, when dereferenced, it can return either the object or null. To better understand how this can be useful, take a look at the example in Figure 7.

Figure 7 A JavaScript Memory Leak

function addOptionsChangedListener () {
  // A WinRT object
  var query = Windows.Storage.KnownFolders.picturesLibrary.createFileQuery();
  // 'data' is a JS object whose lifetime will be associated with the  
  // behavior of the application. Imagine it is referenced by a DOM node, which
  // may be released at any point.
  // For this example, it just goes out of scope immediately,
  // simulating the problem.
  var data = {
    _query: query,
    big: new Array(1000).map(function (i) { return i; }),
    someFunction: function () {
      // Do something
    }
  };
  // An event on the WinRT object handled by a JavaScript callback,
  // which captures a reference to data.
  query.addEventListener("optionschanged", function () {
    if (data)
      data.someFunction();
  });
  // Other code ...
}

In this example, the data object isn’t being reclaimed because it’s being referenced by the event listener on query. Because the intent of the app was to clear the data object (and no further attempts to do so will be made), this is now a memory leak. To avoid this, the WeakWinRTProperty API group can be used with the following syntax:

msSetWeakWinRTProperty(WinRTObj, "objectName", objectToStore);

WinRTObj is any WinRT object that supports IWeakReference, objectName is the key to access the data and objectToStore is the data to be stored.

To retrieve the info, use:

var weakPropertyValue = msGetWeakWinRTProperty(WinRTObj, "objectName");

WinRTObj is the WinRT object where the property was stored and objectName is the key under which the data was stored.

The return value is null or the value originally stored (objectToStore).

Figure 8 shows one way to fix the leak in the addOptionsChangedListener function.

Figure 8 Using Weak References to Avoid a Memory Leak

function addOptionsChangedListener() {
  var query = Windows.Storage.KnownFolders.picturesLibrary.createFileQuery();
  var data = {
    big: new Array(1000).map(function (i) { return i; }),
    someFunction: function () {
      // Do something
    }
  };
  msSetWeakWinRTProperty(query, "data", data);
  query.addEventListener("optionschanged", function (ev) {
    var data = msGetWeakWinRTProperty(ev.target, "data");
    if (data) data.someFunction();
  });
}

Because the reference to the data object is weak, when other references to it are removed, it will be garbage collected and its memory reclaimed.
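As an aside, msSetWeakWinRTProperty is specific to the WinRT interop layer. In standard JavaScript engines that support ES2021, the built-in WeakRef type provides the same “doesn’t keep its target alive” behavior; a minimal sketch:

```javascript
// A WeakRef doesn't keep its target alive: deref() returns the target while
// it's still strongly referenced elsewhere, and undefined once the GC has
// reclaimed it. The `data` object here is illustrative.
const data = { someFunction() { return 'did something'; } };
const weak = new WeakRef(data);

const target = weak.deref(); // `data` is still strongly referenced here
const result = target ? target.someFunction() : null;
```

As with msGetWeakWinRTProperty, the caller must handle the case where the target has already been reclaimed.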

Architecting Windows Store Apps Using JavaScript

Designing your application with resource utilization in mind can reduce the need for spot fixes and memory-management-specific coding practices by making your app more resistant to leaks from the start. It also enables you to build in safeguards that make it easy to identify leaks when they do happen. In this section I’ll discuss two methods of architecting a Windows Store app written with JavaScript that can be used independently or together to create a resource-efficient app that’s easy to maintain.

Dispose Architecture The Dispose architecture is a great way to stop memory leaks at their onset by having a consistent, easy and robust way to reclaim resources. The first step in designing your app with this pattern in mind is to ensure that each class or large object implements a function (typically named dispose) that reclaims memory associated with each object it references. The second step is to implement a broadly reachable function (also typically named dispose) that calls the dispose method on an object passed in as a parameter and then nulls out the object itself:

var dispose = function (obj) {
  /// <summary>Safe object dispose call.</summary>
  /// <param name="obj">Object to dispose.</param>
  if (obj && obj.dispose) {
    obj.dispose();                
  }
  obj = null;
};

The goal is that the app takes on a tree-like structure, with each object having an internal dispose method that frees up its own resources by calling the dispose method on all objects it references, and so on. That way, to entirely release an object and all of its references, all you need to do is call dispose(obj)!
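To make the tree structure concrete, here’s a minimal sketch. The dispose helper from the snippet above is repeated so the example is self-contained; the page and image objects are hypothetical.

```javascript
// The dispose helper shown earlier, repeated for a runnable sketch.
var dispose = function (obj) {
  if (obj && obj.dispose) {
    obj.dispose();
  }
  obj = null;
};

// A hypothetical page object owning a large child resource. Each level
// frees its own resources, then recursively disposes its children.
function makePage() {
  var image = {
    pixels: new Array(10000).fill(0),
    dispose: function () { this.pixels = null; } // free own resources
  };
  return {
    image: image,
    dispose: function () {
      dispose(this.image); // recurse into children first...
      this.image = null;   // ...then drop the references themselves
    }
  };
}
```

One dispose(page) call at a scenario transition releases the whole subtree in a single, predictable sweep.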

At each major scenario transition in your app, simply call dispose on all of the top-level objects that are no longer necessary. If you want to get fancy, you can have all of these top-level objects be part of one major “scenario” object. When switching among scenarios, you simply call dispose on the top-level scenario object and instantiate a new one for the scenario to which the app is switching.

Bloat Architecture The “Bloat” architecture allows you to more easily identify when memory leaks are occurring by making objects really large right before you release them. That way, if the object isn’t actually released, the impact on your app’s TWS will be obvious. Of course, this pattern should only be used during development. An app should never ship with this code in place, as spiking memory usage (even temporarily) can force a user’s machine to terminate other suspended apps.

To artificially bloat an object, you can do something as simple as attaching a very large string to it. Joining a long, empty array with a separator quickly produces such a string, making any object it’s attached to noticeably larger:

var bloatArray = [];
bloatArray.length = 50000;
itemToBloat.leakDetector = bloatArray.join("#");

To use this pattern effectively, you need a good way to identify when an object is supposed to be freed by the code. You can do this manually for each object you release, but there are two better ways. If you’re using the Dispose architecture just discussed, simply add the bloat code in the dispose method for the object in question. That way, once dispose is called, you’ll know whether the object truly had all of its references removed or not. The second approach is to use the JavaScript event DOMNodeRemoved for any elements that are on the DOM. Because this event fires before the node is removed, you can bloat the size of these objects and see if they’re truly reclaimed.
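Combining the two patterns, a dispose method can bloat the object just before releasing it, so each leaked instance adds a visible chunk (roughly 50KB here) to the TWS. A hedged sketch, with hypothetical object names:

```javascript
// A hypothetical tracked object whose dispose method bloats it first, so a
// leaked instance shows up as a visible jump in the total working set.
function makeTrackedObject() {
  return {
    data: { /* real resources would live here */ },
    dispose: function () {
      var bloatArray = [];
      bloatArray.length = 50000;
      this.leakDetector = bloatArray.join('#'); // ~50KB marker string
      this.data = null;
      // If every reference to this object really is released, the marker is
      // reclaimed with it; if not, each leaked instance inflates the TWS.
    }
  };
}
```

Remember that this instrumentation is for development only and must be removed before shipping.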

Note that sometimes the GC will take some time to actually reclaim unused memory. When testing a scenario for leaks, if the app appears to have grown very rapidly, wait a while to confirm a leak; the GC may not have done a pass yet. If, after waiting, the TWS is still high, try the scenario again. If the app’s TWS is still large, it’s extremely likely there’s a leak. You can home in on the source by systematically removing this bloat code from the objects in your app.

Going Forward

I hope I’ve given you a strong foundation for identifying, diagnosing and repairing memory leaks in your Windows Store apps. Leaks often result from misunderstandings of how data allocation and reclamation occur. Knowledge of these nuances—combined with easy tricks such as explicitly nulling out references to large variables—will go a long way toward ensuring efficient apps that don’t slow down users’ machines, even over days of use. If you’re looking for more information you can check out an MSDN Library article by the Internet Explorer team that covers related topics, “Understanding and Solving Internet Explorer Leak Patterns,” at bit.ly/Rrta3P.


David Tepper is a program manager on the Windows Application Experience team. He has been working on application model design and application deployment since 2008, primarily focusing on the performance of Windows Store apps and how those apps can extend Windows to provide deeply integrated functionality.

Thanks to the following technical experts for reviewing this article: Jerry Dunietz, Mike Hillberg, Mathias Jourdain, Kamen Moutafov, Brent Rector and Chipalo Street