Memory Leaks in Lightning

Oh no, if you’re here it probably means you have a memory leak (or you’re just curious). Either way, memory leaks are never fun and they can be a complete pain in the rear to deal with. Fortunately you’re not the first, nor the last, to run into this, and this article is written to get you going in the right direction. So let’s find that leak, patch it up, seal it down and enjoy smooth(er) sailing from now on!

Be sure to have a look at the embedded development principles too!

What is a memory leak?

A memory leak is a phenomenon where memory resources are allocated but not released, or sometimes not released quickly enough (more on that later). Memory leaks are hard to trace and require special tools and knowledge to figure out. There are several strategies for hunting memory leaks, and in this article we hope to give you some tools, resources and strategies for dealing with them.

In this article we will define a memory leak as follows: “the browser allocating system or graphics memory at a rate at which it becomes a problem for the experience of the user”.

Why so specific?
Not all leaks are problems! WHAT? I thought we were FIXING leaks here, not ignoring them!! Yep - not all leaks need to be solved, let’s start with that. It is entirely possible your application is leaking without it ever being a problem. If the amount of KB or MB leaked per time period stays within bounds for the time a typical user spends in your app, this can be totally acceptable. For example, say you’re leaking 1 or 2 MB per hour, you have more than 150 MB of “room” to run your application, and your application sees a typical usage time of less than 2 hours: you’re totally fine! You won’t hit the memory limit any time soon. Does that mean you can ignore the memory leak? Possibly… yeah. Is it worthwhile to investigate? Sure! But it isn’t an alarmingly high priority if you have that much margin.

On the other hand, if you are leaking 30 MB per 5 minutes, have less than 100 MB of “room” and your application is typically used for more than an hour, that changes the priority: this is a blocker and will need to be investigated pronto!

But if I stress test it! it leaks much harder!
Stress tests are great! No doubt. But the value of the test diminishes if it over-aggressively opens/closes screens far beyond typical usage. The reason is that you have to give the browser room to initiate Garbage Collection (read more on that below), which happens on a timing the JavaScript itself cannot control. If your test is too aggressive you might hit a memory limit before the GC can kick in effectively, which in turn nullifies the test because your test scenario is not realistic. The test needs to stay within normal user behavior boundaries, allowing the browser and underlying stack to run GC, clear caches and do its cleanup routines. Building in “grace periods” where you let the stack take a breather is imperative to a proper test.
Along similar lines, the test timeline needs to be sensible: you cannot expect a user to navigate/watch video/play video games in your application for extremely long periods. Align your expectations with what you expect a typical user to do, then add a little margin on top of that to keep the test scenario realistic. Now that we have the disclaimers out of the way, let’s get back to memory leak hunting!
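The grace-period idea above can be sketched as a tiny soak-test driver. The `openScreen`/`closeScreen` callbacks and the timings here are placeholders; tune them to what a real user of your app would actually do:

```javascript
// Minimal soak-test sketch with "grace periods" so GC and caches can settle.
// openScreen/closeScreen are hypothetical hooks into your app's navigation.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function soakTest(openScreen, closeScreen, rounds = 20, dwellMs = 30000, graceMs = 10000) {
  for (let i = 0; i < rounds; i++) {
    openScreen();
    await sleep(dwellMs); // the user actually looks at the screen for a while
    closeScreen();
    await sleep(graceMs); // grace period: give the browser room to run GC
  }
}
```

Keeping the dwell and grace timings realistic is what makes the memory measurements from such a test meaningful.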

Memory leaks can be devastating, ranging from a complete crash/hang of the application to graphical “black spots” on screen (read “Types of memory leak” below). Either way, the user of your application is not going to be pleased when you run out of memory: they will likely be annoyed and the experience will be terrible. It goes without saying that no matter how cool your app looks or how fast it renders, going Out Of Memory (OOM) is by far the worst experience possible. Let’s do something about that!

Types of memory leak

WHAT? There are multiple variants of memory leaks?
Yes, and don’t worry, we’ll give you tools to identify the most common ones. Luckily there are only 3 really common types:

  1. JavaScript Memory Leak
  2. Graphics Memory Leak
  3. Browser Memory Leak

Type 1 and 2 are easy to distinguish, type 3 is really uncommon. So nothing to fear!

JavaScript Memory leak

A JavaScript memory leak means the application will grow in memory, indirectly causing the browser to grow to a point where there is no memory left to use. No matter how smart the JavaScript compiler/browser is, it won’t stop a memory leak from happening. This type is easy to detect, as the JS Heap grows indefinitely or the browser process gets very large in your process list. Unfortunately these leaks are hard to deal with, as they can be caused by a multitude of things within JavaScript and require going deep into the inspector/heap to figure out what is going wrong. To make things worse, the JS memory leak is one of the most impactful ones: best case, your app gets rebooted as a watchdog kicks in and restarts the browser process, hopefully returning the user to the original URL. Worst case (and more common) is a black screen where the user can only frantically push remote buttons hoping to get back to a main menu. Very bad indeed.

We’ll give you some strategies, tools and background in “JavaScript memory leaks”.

Graphics Memory leak

Yep - the GPU (Graphics Processing Unit) has memory too, and Lightning really likes to use it well. Every picture, drawing or piece of text turns into a texture on the GPU, and each texture takes a certain amount of memory out of the available GPU memory. Things you draw from Lightning templates (rectangles, circles, boxes, shadows, reflections) are pretty lightweight, fonts and text sit in the middle, and the biggest consumers of memory are pictures. Fancy, beautiful posters of all those wonderful action movies you’re trying to render are by far the largest consumers of texture memory.

Thankfully, the graphics memory going OOM is really easy to detect: are you seeing black spots on the screen? Big black holes where a poster, text or something else should have been? Yep - that’s it, you’ve exhausted the GPU memory and there is no space left to create that shiny new texture in the next render frame. The user will be annoyed for sure, but thankfully this should not crash your application (though that is entirely dependent on the browser). It will only recover once some textures can be garbage collected to make more space on the GPU.

Read more about GFX memory leaks in “Graphics Memory leaks”.

Browser Memory leak

Okay, this is a rough one. It is very similar to a JS memory leak in terms of the experience, impossible to reproduce on a desktop, and more common with new integrations of exotic browsers on new platforms. Thankfully this is a very rare case, but there is no reason to exclude the lower layers from a memory leak. You can only prove this if you’ve fully covered your bases on JavaScript memory leaks, so if you haven’t done your homework on that part, you can’t claim this is a browser leak.

However, are you confident your application does not leak JS Heap memory? Validated and confirmed with DevTools?
And it doesn’t crash on other devices, just this one particular one? E.g. an RPi can run forever and device ABC does fine, but device DEF does not? Does it happen with similar sorts of applications?

Validate that the device has enough memory to begin with; it might not be a leak in the first place but just a shortage of available memory. For example, loading a 450 MB application on a device that has less than 250 MB of runtime memory available will push you over the threshold, though it might not be a leak at all (just trying to fit a square peg into a round hole). It means your application will need to go on a diet before you can run there (which is a different, though related, problem).

If the device has enough memory for your application, it is time to loop in your favorite embedded browser integration team. It is beyond the scope of this article to debug that, as it requires extensive knowledge of embedded engineering with low-level memory allocations, browser-specific bmalloc handling and other funky tools like Valgrind to help debug it. Godspeed dear friend, hope they find it soon and peace out.

JavaScript memory leaks

Okay let’s dive into the first case of memory leaks. JS memory! What is it? Why is it a thing? How do I deal with it? Let’s get into the details.

But JAVASCRIPT?!?!11oneone

You cannot blame JavaScript. I’m sorry, I don’t want to be harsh, but it’s really you or one of your dependencies. JavaScript, like any other programming language, uses memory.

But I don’t deal with memory in JavaScript

Yeah, you do. Okay, maybe not directly like C++ with malloc/free and pointers, but the basic concepts of a binary computer don’t change with the programming language of choice. The device still has a CPU to process things, memory to temporarily hold data, a hard drive to persist data and a graphics processor to draw things. So indirectly, whether you like it or not, you are allocating memory when you write your JavaScript application. JavaScript, thankfully, makes this a lot easier to deal with than a low-level programming language.

So what does that mean?

JavaScript is a high-level programming language, meaning certain primitives of running software on a binary computer, such as memory allocations, are abstracted and handled by the JS engine. JavaScript is a Just-In-Time (JIT) compiled language that compiles the code at runtime, and it’s hella crazy smart, stacked with optimization tiers to make your code run lightning fast (pun intended). Like for reals, go read about the LLVM or FTL compiler tiers in JavaScriptCore if you like to get into the weeds on that.

However you are still indirectly in control over the data that is being retained in your application. If you are leaking JS memory, something is not being released and the JS runtime will not garbage collect it.

Did you just call my app garbage?

No! Maybe? Just kidding. The concept of cleaning memory within JavaScript is called “Garbage Collection”, where old, no longer used variables are marked as “garbage” and, periodically, a process starts that “collects” all the garbage and “evicts” the memory. This means that once you delete a variable it is not immediately released from memory; only once the GC process comes and collects it will the memory be freed. That process is entirely controlled by the JS engine and you, as a developer, have no direct control over when it happens (typically).

How does memory work in JS?

Within JavaScript you roughly have the following lifecycle:

  1. Assign a variable (var eww, let or const)**
  2. Use the variable
  3. Clear the variable
  4. Garbage collection/eviction of memory

** Note: if you use globals a lot (or even a bit) and are reading this because of a memory leak, there’s a high chance we’d get really angry with you; globals are evil. Please remove all of them, read up on why globals are evil and try again.

In a high-level language like JavaScript you have direct control over steps 1 and 2; step 3 is where it gets complex (and where your memory leak likely lives) and step 4 is done automatically. As a programmer you have no direct control over the Garbage Collection timing; you can only dereference, delete or clear a variable. Garbage Collection is performed entirely autonomously by the JS engine and will kick in periodically or when the engine reaches critical memory levels on the OS.

If you’re leaking memory in JavaScript, it’s likely a variable (or set of variables) that is not being cleared properly or still has a lingering reference somewhere. The Garbage Collector looks at a variable and checks whether it is still being used somewhere. If you reference it or hold on to it, the variable will not be cleaned up, and well, here we go: this is a leak. For more details on memory management in JavaScript please read this.
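A minimal sketch of such a lingering reference, with hypothetical names: a module-level array keeps callbacks alive, and every callback keeps its captured data alive, so the GC can never evict it until the array is emptied:

```javascript
// Hypothetical screen code: `subscribers` outlives the screen itself.
const subscribers = [];

function openDetailsScreen() {
  const metadata = { cast: new Array(50000).fill('actor') }; // big object
  subscribers.push(() => metadata.cast.length); // closure references metadata
}

function closeDetailsScreen() {
  // Without this line, metadata stays reachable forever: a classic leak.
  subscribers.length = 0;
}
```

Open the screen a few hundred times without closing it and the heap grows on every call; clearing the array makes every captured `metadata` unreachable again.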

Does that mean I have to go over ALL my variable assignments?
That would be like finding a needle in a haystack, not very viable. A typical project will have hundreds, if not thousands, of variable declarations. Before we dive into how to find a memory leak, let’s talk about how to determine whether you have one.

Determining a JS memory leak

Before we actually start looking for a memory leak, let’s determine if there is really a leak and if the leak is really a problem.

Is there a memory leak?

There’s a few ways to determine a memory leak and they each have a level of precision:

  1. Look at the browser’s total memory consumption (least precise)
  2. Look at the web process (1 tab - JS runtime) memory consumption (pretty close)
  3. Look at the JS Heap (very precise)

Though it is important to understand why different levels matter, as they may tell you different stories about the origin of the leak.
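If you want a rough in-app signal before reaching for DevTools, one option is polling the non-standard, Chromium-only performance.memory API; treat the numbers as an indication of trend, not gospel:

```javascript
// Returns the used JS heap size in bytes, or null where the API is missing.
// performance.memory is non-standard and only exposed by Chromium browsers.
function sampleHeap() {
  if (typeof performance !== 'undefined' && performance.memory) {
    return performance.memory.usedJSHeapSize;
  }
  return null;
}

// Log a sample every 30 seconds; a line that only ever climbs suggests a leak.
function startHeapLogger(intervalMs = 30000) {
  return setInterval(() => console.log('heap:', sampleHeap()), intervalMs);
}
```
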

My entire browser process is growing in memory

OK - this isn’t cool, but other than “it’s leaking” it doesn’t quite tell you where the leak might be. It may or may not be inside the JavaScript code that you’ve written. What, so I don’t have a memory leak? No no, there’s definitely something leaking if the browser process increases in memory indefinitely and never releases the memory in time before it goes out of memory. But there is a lot more happening inside the browser than just the JavaScript runtime. For example:

  • Playing video happens within the browser process (and decryption sometimes too!)
  • Fetching data from the network
  • Caching images that are being referenced through the image tag
  • The actual JavaScript runtime

The best way to find out whether your code or its dependencies are leaking is to read “JS Heap” down below.

If you do not have access to Chrome or Safari DevTools (which really shouldn’t be the case, you can’t safely develop applications without DevTools on your development machine, but for the sake of argument let’s assume you don’t), try the following things to isolate your problem:

  • Run your app without video enabled, does it still increase? If yes - this is likely video playback related and unrelated to your JS code (or Lightning). You may also want to check if switching video playback techniques or turning off DRM changes the situation. However helping out with video playback issues is outside of the scope of this article and Lightning.
  • Run your app without fetching big data (like EPG data or Catalogue data), turn off XHR for a test and see if it still increases. If it does, we can exclude network process handling. If it doesn’t it could still be JS code that handles the response and you’ve just isolated a bit of where the leak might be.
  • Are there other browser processes/tabs running? Since you can only see the browser process as a whole, try to exclude other tabs from your test. Turn them off and see if that changes the memory behaviour.
  • Does navigating to about:blank release the memory? If yes, it is very likely an application/JS Heap related issue. If not, your browser might be holding on to stuff we cannot control. Check with your browser integration team.

I can see the browser process/thread that I’m running increase in memory

Great! Much of the above applies: try disabling/enabling video, stop navigating, stop fetching data, turn off images and see if that changes the behavior. The biggest difference from watching the entire browser process is that you now know for sure it is your application running in that precise tab.

Just to confirm, which process do you see growing? A typical modern browser may run 4 different types of processes:

  • UI Process (where the rendering happens)
  • WebProcess (where JS is being processed)
  • NetworkProcess (where data is fetched from the network)
  • DatabaseProcess (indexeddb or storage related)

note If you’re only seeing 1 process, you may be running an older browser with a WebKit1 architecture or something different entirely. Use the previous chapter to investigate.

If the UIProcess or WebProcess increases (dependent on whether you are looking at a WebKit- or Chrome-based browser; if it’s Firefox ¯\_(ツ)_/¯), it is likely application or rendering related. Best is to dive into the JS Heap or start excluding bits of code to isolate it.

If the NetworkProcess is growing, it is related to fetching stuff from remote servers. You might be doing extremely large XHR requests, for example retrieving a massive JSON or XML file. Or fetching overly large images (>25 MB?). This will make the network process very unhappy. Try to chunk your data requests, fetch smaller images (big images are never a good idea on an embedded device) and see if the problem goes away. If it doesn’t, it might be a good idea to loop in your browser integration specialists.
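Chunking can be as simple as paging the request. A hedged sketch (the endpoint shape and the `items` field are hypothetical): fetch one page at a time so each response object can be garbage collected before the next one arrives:

```javascript
// Build paged request URLs instead of one giant request.
function pageUrls(base, total, pageSize) {
  const urls = [];
  for (let offset = 0; offset < total; offset += pageSize) {
    urls.push(`${base}?offset=${offset}&limit=${pageSize}`);
  }
  return urls;
}

// Fetch sequentially and copy out only what you need from each page.
async function fetchCatalogue(base, total, pageSize, fetchFn = fetch) {
  const items = [];
  for (const url of pageUrls(base, total, pageSize)) {
    const page = await fetchFn(url).then((res) => res.json());
    items.push(...page.items); // the previous page object is now unreferenced
  }
  return items;
}
```

Fetching sequentially trades a bit of latency for a much flatter memory profile, which is usually the right trade on an embedded device.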

If the DatabaseProcess increases, this could be related to misuse of IndexedDB. If you are using IndexedDB features and this process grows beyond control, check what you’re storing in IndexedDB. Is something stuck writing the same data over and over? Or is the data you are trying to store just too big to fit on a local device? This should be easy to pinpoint by simply disabling the portion of your app that writes to IndexedDB and checking whether that matters.
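A quick way to run that experiment is a feature flag around your persistence layer. Everything here is hypothetical naming; the point is being able to switch writes off for one test run without ripping out code:

```javascript
// Gate IndexedDB writes behind a flag for the memory test.
let enableIdbWrites = true;

function persistRecord(record, writeToIdb) {
  if (!enableIdbWrites) return false; // test run: skip writes entirely
  writeToIdb(record);                 // your real IndexedDB write routine
  return true;
}
```

Flip the flag, rerun the soak test, and compare DatabaseProcess growth with and without writes.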

JS Heap

OK we’ve made it this far. Congrats, we’re about to dive a little deeper into JavaScript. Buckle up and let’s talk JavaScript data handling!

When storing data in JavaScript we can store it into two different buckets: static data allocations and dynamic data allocations.

static allocations

Static data allocations go into a place called the stack. It is a data structure that JavaScript uses to store static data, where the engine knows the size of the allocation at compile time. This includes all primitives such as booleans, strings, numbers, undefined and good ol’ null. Since the size of the allocations is known, the engine allocates a fixed amount of memory. Browsers have a limit to how large such primitives can be, and your memory leak is probably not in this category; if you exceed those limits, the engine will throw errors at you before it even executes a single line of your code.

If you’ve played around with breakpoints before, or are familiar with the JavaScript event loop, I’m sure you’ve seen the call stack, which is the list of function calls currently being executed and falls in the same category as static allocations.

And yes, using const foo = 'bar' results in a static allocation (which makes consts so favorable!).

dynamic allocation

Dynamic allocations are stored in the HEAP! Now you know. This is where the engine stores objects and functions, and these are dynamically allocated as needed, meaning they will grow as required by the program without a limit. The JS engine itself has no cap on how much memory it will allocate; only the physical limit of available memory provided by the kernel determines whether or not the application will go out of memory. If your application is leaking user-space memory, this is where it will be.

Common JS Heap memory leaks

Okay, now that we understand what a memory leak is, let’s go over a few common ones:

  • Global variables. I won’t go into much detail on why globals are bad; there are plenty of documents on the internet and I highly encourage you to google “javascript and globals” and spend a night reading up on it.
  • Forgotten timers/intervals. Make sure to clear your timers/intervals when going out of code; if you don’t clear them they will be kept in memory by the JavaScript engine. This is by a large extent the most common cause of memory issues.
  • Lingering listeners. Using events is great! But be sure to use removeEventListener when exiting a particular piece of code, to make sure the browser can mark your callback function ready for GC.
  • Large XHR data requests. Trying to parse a 10 MB JSON file? Probably not a good idea. See if you can trim down the data from the server, or write a custom preprocessor that runs in the cloud to minimize the amount of data you need to send to the client. If you do have to handle a large payload, make absolutely sure you trim the JS object once it’s parsed and no references are left. Best practice is to create a new JS object, copy over the data you require and dereference/null the parsed JSON object as soon as possible to mark it for Garbage Collection. It gets very tricky here with the frequency at which the data is fetched versus how fast the GC routine can clean it.

These are typical patterns to look for in your own code. Also be very mindful of the projects you pull in as dependencies; not every framework/tool/library out there is mindful of memory if it was built for the desktop. Please follow embedded first development principles.
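The two most common patterns above (forgotten timers and lingering listeners) both come down to symmetrical setup/teardown. A hedged sketch with hypothetical names; the important part is that detach undoes everything attach did:

```javascript
// A screen that tears down exactly what it set up, so GC can collect it.
class ClockScreen {
  attach(eventSource) {
    this._source = eventSource;
    this._onTick = () => { this.lastTick = Date.now(); };
    this._source.addEventListener('tick', this._onTick); // lingers if never removed
    this._interval = setInterval(this._onTick, 1000);    // leaks if never cleared
  }
  detach() {
    clearInterval(this._interval);                          // forgotten timers
    this._source.removeEventListener('tick', this._onTick); // lingering listeners
    this._interval = this._source = this._onTick = null;    // drop references for GC
  }
}
```

If every attach has a matching detach (and you actually call it when leaving the screen), this whole category of leak disappears.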

How to inspect the JS Heap?

Chrome and WebKit come with “Developer Tools” that provide neat-o features to help you debug memory leaks. You should be quite familiar with those by now; if not, I highly suggest you google “Chrome devtools” and start reading up on it.

Go into DevTools by doing Chrome Settings -> More Tools -> Developer Tools

This should give you a neat little window like this:

Once you are in the devtools go to the Memory tab:

Click the Take heap snapshot button:

This creates an “at the time” snapshot of the dynamic memory allocations within your application. Congratulations! You’ve taken your first step toward finding that leak. A few more things we suggest:

Create multiple JS Heap snapshots and compare the size.

Does the size remain stable? Does the size increase if you navigate? Are there particular screens that have a larger delta? Does the JS Heap drop if you exit screens? What if you leave the app alone for 5 minutes: does it drop down, remain the same or increase?

This will help you determine where the leak might be. It could be bound to a particular section of the application; if it is only part of one particular screen, try to isolate that screen in a standalone application to further pinpoint where the issue might sit.

Next, let’s inspect the JS Heap.

In the table on the left you see a tree of different Objects/Arrays/Closures, sorted by their size in the HEAP. Compare different JS Heap snapshots and see which of these top-level categories increases drastically in size. Once you see it, dive in by expanding the category in question.

The largest size should be on top, and the Developer Tools give you a reference to which function is responsible for the allocation. Try to find a position in the stack that relates to code that sits within your project.

This should give you a pretty good approximation of where your memory leak sits, combined with the typical patterns above. Happy hunting, friend!

Still not sure what to do? Here are some quick suggestions:

  • Disable that portion of the code, does it go away? Bingo! You’re in the right spot.
  • Don’t see a global, forgotten timer or lingering listener? Try forcing a delete user or user = null (where user is the data variable in question) to force the runtime to clean it.
  • Are there references higher up the call stack that are not being released? Try placing break points everywhere the object/function is being used to see if it might linger higher up the stack.
  • Get another set of eyes! The amount of time I’ve lost staring at a problem that was right in front of me, but I couldn’t see anymore because I’m so used to the code is beyond expression. Having another dev take a look is refreshing! Don’t be shy, pair programming rocks.
  • Does the stack lead to another library or dependency? Well shux. That is a tough one: either bug hunt in someone else’s code, create a ticket (though they might not care if it’s a desktop-oriented project) OR re-evaluate the real need for the dependency.
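The delete/null suggestion in the list above looks like this in practice (names are hypothetical):

```javascript
// Two ways to drop the last reference so the GC can reclaim the data.
const session = { user: { name: 'alice', history: new Array(10000).fill(0) } };
delete session.user; // removes the property; the object becomes unreachable

let cache = { posters: new Array(10000).fill('jpg') };
cache = null; // reassigning a standalone variable has the same effect
```

Remember this only marks the data for collection; the GC decides when the memory is actually freed.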

Determining a graphics memory leak

Okay, so we’ve covered JavaScript-related memory leaks. Now let’s talk about our second potential case of memory issues: graphics memory. Each device has a Graphics Processing Unit (GPU) that draws stuff on the screen. Within Lightning, each item you see on the screen is a texture that has been uploaded from JavaScript to the GPU. Note that the term uploaded is different from sending a file to a remote server, but there is a step where Lightning pushes or uploads one or more textures to the GPU. This is important to know, as it affects how JS memory and GPU memory are handled.

How does memory work on the GPU?

Okay, let’s talk graphics processors (GPUs) real quick. These are dedicated processing units in a computer system that render frames. Each frame is a picture that consists of a graphical composition of what we want to draw on screen: basically all elements, images, text, shadows and effects mapped down to one image make a frame. The faster the graphics processor can render frames, the smoother the experience of the User Interface will be.

Okay, so far so good, right? More frames per second is better. So how do GPUs draw faster? By having a dedicated section of memory where they can access essential data as fast as they can. If a GPU had to wait for the other components (CPU, memory, etc.) to provide it with the textures to draw, this would be really slow. So instead GPUs have their own memory bucket (also known as VRAM - Video Random Access Memory) where they can store the things they need for drawing, separated from the rest of the system.

In a dedicated GPU card the VRAM is a special type of memory, typically super fast and highly available. In embedded systems, however, this is typically not the case; instead, memory is shared between components, meaning one physical block of memory hardware is divided between GPU and CPU, and the firmware/kernel configuration determines how that memory is split between the two processing units. So a physical block of 2 GB of memory might be split up into 1 GB for Linux user space (kernel + programs, this is where your JS engine will live), 225 MB for graphics memory and the rest for video buffers (we’re ignoring those here).

This means there is much less memory available to render graphics than on your PC/Mac, and the memory isn’t as fast as that of your dedicated graphics card (GPU VRAM tends to have high clock speeds and a wide access bus). Your typical video card will have multiple gigabytes of memory, while embedded systems are likely to have a few hundred (at best) megabytes. This is where embedded first development principles really come into play.

What does the memory look like on a GPU?

Great question, thought you’d never ask. Everything stored on the GPU sits in a frame buffer (a buffer for the frame; frames are the things we’re trying to draw, get it?). All textures, shadows and depth buffers are stored in GPU memory, and the GPU uses that memory to draw things really fast. Once a frame is drawn on the screen the memory does not go away, as the next frame might require (or not!) the same objects to be rendered again. It is up to the process in control of the graphical output to clean up textures.

This is very different from JS Garbage Collection, and this is where it gets interesting. If you are rendering using CSS3 and the DOM tree, the browser will take care of cleaning up the graphics. However, since we’re using Lightning (assuming you are, reading this article) and Lightning renders in WebGL, this is an entirely different case! Lightning has the responsibility to run garbage collection and free up memory as elements/textures are no longer in use. But before we dive into memory management in Lightning, let’s talk about textures first.

Textures, why should I care?

Explaining how a GPU works is complex; it would take a book or two to do it right. However, here is an overly simplified, short introduction to some of the basics. Why is this important? Well, it will help you understand the effects of your code on the piece of hardware that actually does the drawing. To understand the memory leak you have to understand the impact of rendering on a GPU.

Drawing graphics starts with vertices. See it as geometry: triangles that outline where the shape will be on the screen. These can be rasterized and filled with a texture; once a shape has been drawn we can apply effects through shader programs (such as greyscale, shadows, reflections, etc.). We’re ignoring rasterization/shaders for now and we’ll focus on texturizing: that’s where we fill a shape with an image.

Easy peasy, right? Now why do we care? Each of these steps takes up memory and processing on the GPU. The more shaders you apply, the higher the hit on performance. The majority of memory on a GPU is consumed by texturizing components; mostly images and large background effects take up a significant portion of GPU memory. The larger the image, the higher the impact on the GPU memory.

Couple of things to keep in mind with images:

  • Use an image file size/compression that makes sense, e.g. trying to load 20 JPG images of 20 MB each while you only have 120 MB Graphics memory is probably not a good idea
  • Use an image size (width x height) that makes sense, avoid loading a HD resolution thumbnail for a poster
  • Re-use textures if you can, instead of loading multiple copies of the same kind.
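Reuse in a Lightning template often just means pointing at the same source, so the texture cache can serve one GPU texture to every element. A hedged template fragment (names and paths are hypothetical):

```javascript
// One shared icon source instead of several near-identical copies.
const PLAY_ICON = 'images/play-icon.png';

const rowTemplate = {
  Tile1: { Icon: { src: PLAY_ICON, w: 64, h: 64 } },
  Tile2: { Icon: { src: PLAY_ICON, w: 64, h: 64 } }, // same src, same texture
};
```

Ten tiles pointing at one 64x64 source cost roughly one texture; ten slightly different crops of the same artwork cost ten.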

Lightning graphics memory lifecycle

Much like JS memory handling, graphics memory has a life cycle: each element you draw gets created, memory gets allocated, and it will be cleaned up once the garbage collection routine runs.

The lifecycle is pretty similar:

  1. Create a component (new item in a template)
  2. Place the component on screen
  3. Remove component from screen/destroy screen
  4. Garbage collection/eviction of memory

As a developer you have direct control over steps 1, 2 and 3. You can trigger a Lightning garbage collection by calling this.stage.gc(); however, it will only kick in once the GC threshold has been reached.
For more information on the GC API, please see Garbage Collection in Lightning.

Please note that running the GC takes a hit on the CPU, so it may affect FPS on certain low-end devices. Use with care!
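If you want to hint a texture GC after tearing down a heavy screen, a small guard keeps the call safe on stages that might not expose it. A sketch; this.stage.gc() is the Lightning call mentioned above:

```javascript
// Hint Lightning's texture GC after destroying a texture-heavy screen.
function requestTextureGc(stage) {
  if (stage && typeof stage.gc === 'function') {
    stage.gc(); // per the note above, only frees once the GC threshold allows
  }
}
```
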

Lightning memoryPressure and strategy

The Lightning runtime does not know exactly how many megabytes of memory are being used, as we cannot determine how many MB the textures/shaders/vertices will take. Instead, memory usage is expressed as the number of pixels we are actively using on the GPU. The default memoryPressure is set to 24e6, which amounts to ~125 MB of graphics memory. Please check your device to see whether it has 125 MB of available memory; if not, lower the memoryPressure to force Lightning to run GC earlier and stay below the threshold, or increase it if your device has more.

For more details on how to set the memory pressure on Lightning please see this bit here.
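Setting it typically happens in your application’s stage settings. A hedged sketch; the exact shape of the settings object depends on how you bootstrap Lightning:

```javascript
// memoryPressure is expressed in pixels, not bytes (default: 24e6).
const appSettings = {
  stage: {
    memoryPressure: 16e6, // lowered for a device with less graphics memory
  },
};
```
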

But even with memoryPressure I’m still going OOM! Just like the JS engine, Lightning will grow in memory indefinitely if you do not release things. The same principles apply: you have to make sure your Lightning component is destroyed, nullified and dereferenced before it can be cleaned up. If you see your graphics memory increase and never go down, regardless of the memoryPressure setting, you have a leak. Not sure how to see that? We’ll dive into detecting a graphics leak in the next chapter.

My graphics memory usage is always so high! Lightning will fill up the graphics memory up to the configured memoryPressure and only then start cleaning its cache. The reason is that creating/uploading textures to the GPU takes a hit on performance. In order to guarantee higher speeds, Lightning will attempt to cache as much as it can: the more we can cache, the lower the load on the device and, inherently, the higher the FPS in return. So yes, this is normal behavior for Lightning. The higher rendering speeds aren’t some magic trick; it is Lightning using the GPU to its fullest extent.

Common graphic memory leaks

There are a few patterns you should be aware of with graphical memory leaks:

  • Holding on to Lightning elements; just like JS variables, these should be released when no longer required
  • Using excessively large images; be mindful of the quality/resolution of the images you pull in
  • Repetition of images; reusing an image? It’s cheaper to reference it than to recreate it
  • Creating large amounts of images offscreen; yes, pre-creating sections of the screen outside of the viewport is great, but use it with care
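The “reference, don’t recreate” point can be sketched with a toy texture cache in plain JavaScript (Lightning does its own texture de-duplication internally; this just shows the principle):

```javascript
// Toy model of texture reuse: key textures by source URL so ten tiles
// showing the same image cost one texture, not ten.
const textureCache = new Map();

function getTexture(src) {
  // Upload only on first sight of this source; afterwards hand out the
  // cached entry so identical images share one texture.
  if (!textureCache.has(src)) {
    textureCache.set(src, { src, uploaded: true }); // pretend GPU upload
  }
  return textureCache.get(src);
}

const tiles = Array.from({ length: 10 }, () => getTexture('poster.png'));
console.log(textureCache.size); // 1 — ten tiles, one shared texture
```

Recreating the same image ten times would cost ten uploads and ten buffers; referencing it costs one.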

When you run out of memory the same logic applies as with the browser, before you claim it is a memory leak:

  • Does the device have enough memory in the first place? If not, let’s put your app on a diet before calling it a leak
  • Does the graphics memory increase and never go down? Yep, let’s talk about how to detect a memory leak in the next chapter

If the device does not have enough memory to run your application (e.g. your application typically needs 125 MB of graphics memory but the device only has 95 MB), there are a couple of things you can try to “put it on a diet”:

  • Use lower resolution images, this is by far the lowest hanging fruit
  • Instead of a background image, use a background shader (shaders are cheap, images are expensive)
  • Load less content offscreen; this will affect performance, but fewer preloaded textures means less memory
  • Only load the screens you need in memory, see below:

Lightning Router lazy load/unload and GC options. In order to lower the graphics memory used (and indirectly the JS memory too!), you can leverage the lazy create/unload options of the router:

    "router": {
      "lazyCreate": true,
      "lazyDestroy": true,
      "gcOnUnload": true,
      "backtracking": true,
      "reuseInstance": false,
      "destroyOnHistoryBack": false
    }

Lazy create will only load a screen (and thus its textures) when its route is called, and likewise unload/destroy the screen when it is no longer in use and the user navigates away. This costs a little performance, but keeps memory allocations limited to just the screens the user needs.

For more information on the Lightning Router settings, please find them here.
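Coming back to the “use lower resolution images” tip: a rough rule of thumb for what a bitmap costs on the GPU is width × height × 4 bytes, assuming an uncompressed RGBA texture (an approximation; actual driver allocation and padding vary per device):

```javascript
// Rough RGBA texture size estimate: ~4 bytes per pixel.
// (Approximation only; real allocations differ per driver/device.)
function textureBytes(width, height) {
  return width * height * 4;
}
const toMB = (bytes) => (bytes / (1024 * 1024)).toFixed(1);

console.log(toMB(textureBytes(1920, 1080))); // "7.9" MB for a full-HD image
console.log(toMB(textureBytes(960, 540)));   // "2.0" MB at half resolution
```

Halving the resolution in both dimensions quarters the GPU cost, which is why lower-resolution images are by far the lowest-hanging fruit.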

Detecting a graphics memory leak

Dealing with a graphics memory leak is slightly less common than dealing with JS memory leaks. In the desktop world especially this is mostly a non-issue, so there are fewer refined, readily available tools to help you pinpoint your memory leak. Unfortunately there is no equivalent of the JS heap inspector for graphics, so we’ll have to go about this a little differently.

It’s important to know the memory situation on your target device … we can’t stress this enough. You need to know your target devices; embedded-first development principles apply. Without knowing what your target device can handle you are completely in the dark, and it is hard to find a solution. You need to have:

  • A target device to test on, lowest denominator preferred (though having an additional fast device is nice too)
  • Tools to see the CPU usage and memory consumption, including graphics memory consumption, on your target device

If you didn’t get these for your project, demand them. It’s really hard to be successful while flying blind, since you can’t tell whether the memory leak is really an issue or not. Worst case, you might need to hop onto the console of the device and execute some bash/terminal commands to get the CPU, memory and graphics memory numbers out of the device; it’s better than not knowing at all.
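If all you have is a shell on the device, a few generic Linux commands will at least get you CPU and system memory. Graphics memory has no portable command; the tooling for it is vendor-specific, so treat that part as an assumption to verify with your SoC vendor:

```shell
# Generic Linux snapshot of CPU and memory; works on most embedded targets.
top -b -n 1 | head -n 5          # batch-mode CPU/load snapshot
free -m                          # system memory usage in MB
grep MemAvailable /proc/meminfo  # kernel's view of available memory
# Graphics memory is vendor-specific; check your SoC vendor's tooling.
```

Run these before, during and after exercising your app; a steadily shrinking MemAvailable with an idle UI is the shell-only version of the leak curves described below.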

Luckily there are some tools to help you on your development machine. Though these are not 100% accurate, they should give you an idea of whether or not you are leaking memory.

What does a graphical memory leak look like?

The application will typically not crash; instead, new uploads to the GPU turn into black “voids” because the GPU has no room left to create the buffers. It is really easy to detect: once you start seeing black holes in your UI, it is very likely a graphics memory issue.

For example:

Note the black hole where text is supposed to be. Compared to the original:

This is an example case where a texture can no longer be created because the Graphics Memory is exhausted. This doesn’t mean the memory leak is where the black hole is. The black hole is just where it starts failing and this will likely be different every time it runs out of memory.

The Chrome Task Manager

Okay, it isn’t as fancy as the Memory tab in the Chrome DevTools, but it is better than nothing. The Chrome Task Manager will give you a list of running Chrome processes, including the graphics memory consumption of a particular tab.

You can find the Chrome Task Manager under Settings -> More Tools -> Chrome Task Manager:

This should give you a window with your Chrome processes. Load up your application in your local development environment and find the tab responsible for running your app. It should look something like this:

Now this is where it gets a little harder: there are a bunch of processes listed, including your Chrome extensions. Don’t worry, we’ll help you figure this out. But first: in some cases the GPU memory column might not be shown in the Task Manager. You can right-click the task column header and enable it:

Now that you have a bunch of processes listed, what you want to do is:

  • Find the tab that runs your application and monitor the “GPU Memory” section of your tab

Note that the first row, Browser, lists the total amount of memory used; below it you find the GPU Process. The GPU Process is the UI process of Chrome, where the actual rendering happens; however, this is not the process we’re looking for. We’re not interested in the total amount of memory used, as Chrome launches a bunch of extensions and other processes that add to the total and would not paint a fair picture of how your app performs.

In this case, just find the tab that has your local app loaded. It should say something like Tab: <address of your application>, and that’s the GPU Memory value we’re interested in.

Please be aware that if the Chrome Task Manager says 47 MB in your desktop Chrome browser, it does not mean the app will also use exactly 47 MB on the target device. This is where having a device, with tools to measure this, is crucial to success. Why is there a difference? Do textures take different amounts of memory on different devices? Oh no, absolutely not, that would be crazy. However, once you create a WebGL surface in the browser, each browser allocates a different buffer for that graphical surface. Since Lightning uses WebGL, we’re always going to create such a surface to draw pretty things on. On one device an empty surface might take 15 MB while another does it in 5 MB; this is entirely dependent on the browser and how it was configured when it was integrated. Either way, Chrome on your desktop does not even include that surface buffer in the Task Manager numbers for the tab (it is part of the GPU Process), giving you a false impression of having enough room to run your app. In the end, measuring on the device is the only way to know for sure whether it fits.

Let’s isolate the leak
So now that we can see the graphics memory being consumed, let’s check whether we’re leaking. Let the app sit for a bit, navigate around for a while and let it idle again. Does it go down? Does it keep increasing? If it continues to increase, congratulations: we’ve established there is a graphics memory leak.

Before we get to that, we need to make sure you are not holding on to Lightning components unnecessarily or using excessively large images that push you over the memory threshold.

Unfortunately there is no “inspector” like with the JS Heap case. You will need to pin-point the memory leak yourself by excluding/including different parts of the code. You can use the following guidelines to help determine where the issue sits:

  • Does the memory leak happen when you are not navigating around? Check for Clock components, focus animations or anything that still renders without user interaction
  • Does the memory leak happen in a particular screen? If so, exclude other screens from your test
  • Once you have determined the screen, try turning components on and off and see if that makes a difference
  • Does it happen in all screens? Check which components are re-used across all screens and focus on those
  • Does your animation leak or just the component? Disable animations and see if that matters
  • Disable shaders/effects and see if this improves the memory situation. If so the shader you are using might be leaking (this might be time to loop in an expert)

Once you have isolated the component in question, check that it doesn’t have any lingering references, circular references or excessive spawning of components. Make sure it is locked down and properly destroyed once you clean up the element.
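As a minimal, framework-free sketch of what “proper destruction” means: anything a component registers while active (timers, event listeners) must be released when it deactivates, or the callback’s closure keeps the component and everything it references alive. This is plain Node, not Lightning’s API; in a real Lightning component you would typically do this in lifecycle hooks such as _inactive.

```javascript
// A component-like object that registers a timer while "active".
// If deactivate() is never called, the interval callback closes over `this`
// and pins the whole object (and, in a real app, its textures) in memory.
class ClockLike {
  activate() {
    this.ticks = 0;
    this.interval = setInterval(() => this.ticks++, 1000);
  }
  deactivate() {
    clearInterval(this.interval); // release the timer...
    this.interval = null;         // ...and drop our reference to it
  }
}

const clock = new ClockLike();
clock.activate();
clock.deactivate();
console.log(clock.interval); // null — nothing left pinning this object
```

The same symmetry applies to event listeners, fetch polls and animation loops: whatever activation registers, deactivation must unregister.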

Does that mean I can create a Lightning bug?
Hold your horses, cowboy. The majority of graphical leaks are covered by the above, though it is entirely possible you ran into a Lightning bug. It is imperative to do your due diligence in isolating the problem. If you feel the issue sits within a shader or specific Lightning core functionality, and your code is 100% valid and doesn’t leak but the graphics memory still increases, it might be time to loop us in. Make sure it happens across all devices, so we’re not looking at a deployment-specific problem. But we’re human too and make mistakes; Lightning bugs happen and we’re all for making it better.

A standalone test app is king. Once you’ve isolated your screen or component, please isolate the element/code in question and share it in a standalone test application. This makes it much easier to share with different developers/teams and is a lot more respectful of everyone’s time.

You can share your test app in a ticket, be verbose, add logs and such. However before you go create a Lightning ticket be sure to read the Lightning Ticket guidelines.