Performance issues in Lightning?

Performance

Performance!!! Do you speak it? … Wait, I thought Lightning was all about performance, why do I have to speak it??! Yes, it is. However, performance doesn’t come for free. Lightning is a tool, and although it certainly helps, that doesn’t mean performance comes without being aware of it. Not to confuse you: we definitely deliver best-in-class performance, but like any tool you have to use it right. Can you use a screwdriver to smack down a nail? Certainly. Is it the best way to use the tool? Nah, it’s better with screws (typically). The same holds for Lightning: use it right and you’ll get the best out of the tool. Use it incorrectly and, well, it’s not going to give you the same benefits.

So let’s talk about performance. In this article we’ll talk about Lightning and its effects on performance: what performance is, how to identify different “cases” of performance, and tools/guidelines to improve performance for a Lightning-based application. Before we dive into the nitty-gritty details of performance, let’s establish a couple of ground rules to make sure we’re all talking about the same thing, because, well… the meaning of performance isn’t always straightforward.

Preamble

There’s a saying in Dutch that loosely translates to:

He who doesn't honor the small, isn't worth the big.

And it’s very applicable to performance: every gain you can get, even the small ones, helps in the larger scheme of things. Don’t discount minor improvements; of course, prioritize them accordingly, but do not dismiss small optimizations. Enough small optimizations combined cascade into a larger performance gain across the entire project.

Sure, being a few ticks faster in sorting an array isn’t going to massively boost the overall performance of your application. But being more efficient in all operations slowly adds up to complete milliseconds per frame, and suddenly, boom, your app performs better. Every performance gain, small or big, is worth it in the end.

When you scope out your project’s development tasks, a separate task that says “optimizations” or “performance”, for taking a bit of time to optimize the hell out of your application, is absolutely worth it. And if you run into a fanatic OCD-scrum-powered-non-code-writing-agile-hipster Project Manager pressing you on which story/epic/feature performance is attached to, you can tell them: all of them. A large portion of the success of a project is defined by how well the application performs; you can’t discount it. And it should be non-negotiable: nobody wants an app that doesn’t perform.

Now that we’ve got that out of the way, let’s get into it.

What is Performance

When I looked up the meaning of Performance I got this:

an action, task, or operation, seen in terms of how successfully it was performed.

And I like it, a lot. It’s a good explanation, but it requires a little more detail to apply it to a Lightning-based application. Typically the context here is a user interface and how well Lightning-based applications “perform” on an embedded device. New to embedded devices? Please read the principles here.

So what does it mean when we say: “how well does Lightning perform”? If we further elaborate on the above we can turn it into: “how well does Lightning perform a certain action”. OK - so what are those actions precisely?

Good performance can be measured by various metrics. Each metric tells a specific side of the performance story, and all those metrics combined determine whether the collective actions performed by Lightning are good or bad. Ehhh… yes, a collective set of actions? Hope you’re still with me, I promise to make it less abstract soon.

One very commonly used metric, but a notoriously vague one as well, is the User Experience. The User Experience is what a user, using a Lightning app, experiences when the collective set of actions is being performed. Oof, you might want to read that again. To expand on that and mix in the above: how successfully does Lightning perform a set of collective actions that result in a good User Experience? Okay, I’ll stop with the lingo, but I hope you get the point. It’s not one thing, not even two things; it’s the overall ability to run everything that makes a user happy when running a Lightning application. Before we dig a little deeper, let’s break down the User Experience performance into the following:

  • User doesn’t have to wait very long → responsiveness
  • The interface is intuitive → design
  • The interface runs smoothly → rendering speed
  • It looks pretty → look and feel with animations and effects

Well that’s a pretty good start, now where and how does Lightning help with that?

  • Responsiveness: The speed at which the User Interface can react to input or changes.
  • Rendering speed: The number of Frames Per Second (more on that later) we can draw on screen
  • Animations and effects: Animations of the elements on screen and rendering effects (reflections, blurs, color changes) applied

What about design and look and feel? Well… design and look & feel are ultimately determined by the application’s implementation itself. In fact, everything is directly or indirectly determined by the actual implementation (more on that later), but these two categories are solely determined by the designer and developer. Meaning the designer, the UX designer and the application developer(s) are the only ones in control of the look and feel of the application. They set the design, implement it and determine how it works. Lightning imposes no limits, and you can go much further with a WebGL application framework than with a CSS3 counterpart, as Lightning provides pixel-for-pixel control.

That said, we can’t stress enough the importance of an intuitive design. Design, just like the actual development of the application, shares the burden of making it work well and providing a great experience. Designing a proper, easy-to-use User Interface is one of the major factors of success. Hire a professional! UX designers are fantastic resources with a unique set of skills to improve the experience of your application.

Animations and effects are tools available to provide a pleasant look and feel. Having rounded corners, blurred/in-focus sections to guide the user’s attention, and nifty transitions between different states of the user interface is inherently determined by the design of the User Interface. Lightning provides a rich set of tools to achieve beautiful, smooth animations and to apply effects to different elements; ultimately, the design determines how they are used.

So far we haven’t really determined anything that objectively quantifies performance; we’ve established some ground rules for what counts as an action that determines performance. Look and feel, or User Experience, is very subjective: for example, what my wife deems a good user experience might be vastly different from what I do. In fact, I know it is, because I’m forever tainted by looking at 3D graphics and video encoding too much in my life, leaving me with a low threshold for hiccups and issues. Okay… so what does that mean? It means we need to agree on some basic metrics to objectively determine good performance related to certain actions. Easy peasy, right? Let’s get into it.

Metrics of performance

So what are measurable, relevant metrics for good performance? Since we’re going to park items related to design and look and feel, as those are subjective and vary with each implementation, what objective metrics can we use to determine performance?

Let’s get into some of the basic ones:

  • Rendering Frames per Second (FPS) → smooth UI
  • Remote control / key input processing → responsive
  • Startup time of the application (TTMU) → responsive

Now let’s spend some more of the article diving into the above three metrics. I’ll save FPS for last, as that’s the key metric I want to work off of. Not discounting input response or TTMU, but there is less variability in them than in FPS. Meaning once you have those set up properly, they’ll behave relatively the same for the duration of the project, whereas FPS is far more influenceable by internal and external factors and needs constant monitoring as the project ages. Plus, there is extra depth to “FPS” that we need to understand.

But before we go into FPS, let’s talk input processing.

Input processing

Having a snappy, hyper-responsive User Interface starts with processing user input. After all, any time spent here goes on top of how fast you can render what needs to happen as a result of it. Now, there is very little you can influence from a Lightning application perspective, but that doesn’t mean it isn’t something you should look out for.

In this particular explanation I’ll only cover remote control processing and exclude touch-based input, as those two have very distinct paths: touch typically has a much more direct input path than a remote control. Why is it more direct? Well, you’re touching the device. That’s why it’s called touch (doh!). And that’s pretty darn direct, as the input from your fingers goes through a capacitive screen directly into the driver, up to the browser, and then gets processed. In a simplified view it’s physically finger to touchscreen (1 physical interface), then driver → browser → JavaScript (3 SW layers, ideally).

With a remote control this is something entirely different. You’re touching a device that is telling another device it needs to do something. See where I am going? It is not as direct; the device you are interacting with is not the device actually processing the input. So there are levels of indirection that always add to the processing time. By the same deduction it’s finger to button, then remote control to device (2 physical interfaces), then software running on the remote → remote protocol → driver on the device → browser → JavaScript (at least 5 SW layers). Now, there might be more software layers in between, as devices typically have a “routing” layer that routes the key to the right application running in the foreground, so this could add another layer or two on top. But for the sake of keeping this comprehensible, let’s assume it’s either really fast or doesn’t exist.

Now, from a browser perspective, keys come in through its C input API and follow a pretty straight path into the JavaScript engine. For Lightning applications using a remote control, these are typically provided as a KeyboardEvent with keydown and keyup listeners. This is where the timer starts from the Lightning perspective; anything below the keydown or keyup event emitter is beyond our control. Lightning automatically listens to these key events and routes each one to the appropriate screen/element in focus.

For more information on key handling in Lightning please read this. Or if you are interested in learning more about focus management please find it here.

There are three things you’d want to check:

  • When did the browser get the event?
  • When did Lightning pass the event to my view?
  • What did my view do with it?

The easiest way to do this is to put some console.log calls in place and measure the time. Use the performance API in the browser to get a high-accuracy time report.

When did the browser get the event?

In order to figure that out, bind a keydown/keyup event handler from your top-level JavaScript and listen for events. These should come in at the same time as Lightning receives them, which allows you to mark them with a timestamp.
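A minimal sketch of such timestamping (the listener wiring is commented out because it is browser-only; all names are illustrative):

```javascript
// Record every incoming key with a high-resolution timestamp.
// performance.now() has sub-millisecond precision relative to page load.
const keyLog = [];

function onKeyDown(e) {
  keyLog.push({ keyCode: e.keyCode, t: performance.now() });
}

// In the browser you would bind it once, at the top of your app:
// window.addEventListener('keydown', onKeyDown);

// Simulated presses, just to show the shape of the log:
onKeyDown({ keyCode: 37 }); // "left"
onKeyDown({ keyCode: 13 }); // "enter"
console.log(keyLog.map((k) => k.keyCode)); // → [ 37, 13 ]
```

Comparing these timestamps against the marks you set deeper in the stack tells you where time is being lost.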

Not sure about the Lightning portion of it? Check out a local copy of Lightning and sprinkle some logs here. Rebuild and measure when the keys go into Lightning.

When did Lightning pass the event to my view?

Easy: just make a performance mark in the _handleKey() {} handler of your component; that’s where Lightning delegates the key to the appropriate component in focus. Mark a timestamp here to see when Lightning delivered the key to your view logic. It may be interesting to see the difference between when the browser first delivered the key and when Lightning delivered it. If there are issues, something is obviously holding it up, likely an overloaded CPU that is not able to provide the ticks we need to handle keys. Check your CPU usage. Is it insanely high? That’s it.
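A sketch of that measurement (names are illustrative; in a real app MyScreen would extend lng.Component, and keyReceivedAt would be set by the top-level keydown listener from the previous step):

```javascript
// Compare when the browser received the key vs. when Lightning delivered it.
let keyReceivedAt = 0;
// browser-only wiring, bound once at startup:
// window.addEventListener('keydown', () => { keyReceivedAt = performance.now(); });

class MyScreen /* extends lng.Component in a real app */ {
  _handleKey(event) {
    // Lightning calls _handleKey on the focused component
    const delta = performance.now() - keyReceivedAt;
    console.log(`browser → view delivery took ${delta.toFixed(2)} ms`);
    return delta;
  }
}

// Simulate one key press end-to-end:
keyReceivedAt = performance.now();
const delta = new MyScreen()._handleKey({ keyCode: 13 });
```

On a healthy device the delta should be a handful of milliseconds; tens of milliseconds point at a busy CPU.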

Your CPU usage is nominal but Lightning doesn’t deliver the keys fast enough? This is uncommon, as it’s a frequently tested part; try to see if you can figure out where it’s stuck and/or create an issue on the Lightning GitHub for further follow-up. Be sure to read the ultimate guide to creating Lightning issues here.

What did my view do with it?

Now, this is hard to capture in a guide, as there are so many different scenarios. It’s important to think about what happens next once the key comes in. Did you start by firing off a large XHR request and waiting for it to return? Probably not a good idea; this creates a “perceived” delay rather than a real one. Start with a loader or animation indicating to the user that something is happening and they just need to be patient. Is it related to animation or to creating new components in Lightning? Be sure to read our sections on animation performance and object creation.

Start with a simple print on screen or a simple loader to indicate the command has been received, to validate whether the input processing is performant or not.
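The perceived-delay point can be sketched like this: give instant feedback first, then do the slow work (ui and api are hypothetical stand-ins for your own view and backend helpers):

```javascript
// Show feedback immediately; the XHR time then reads as "loading",
// not as an unresponsive UI.
async function onSelect(item, ui, api) {
  ui.showSpinner();                                // instant reaction to the key
  try {
    const data = await api.fetchDetails(item.id);  // the slow part
    ui.render(data);
  } finally {
    ui.hideSpinner();
  }
}

// usage with stub helpers:
const calls = [];
const ui = {
  showSpinner: () => calls.push('spin'),
  hideSpinner: () => calls.push('unspin'),
  render: (d) => calls.push(d.title),
};
const api = { fetchDetails: async (id) => ({ title: `Movie ${id}` }) };
onSelect({ id: 7 }, ui, api).then(() => console.log(calls));
// → [ 'spin', 'Movie 7', 'unspin' ]
```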

Now that covers input processing, let’s go into Startup time of the application (TTMU).

Startup time of the application (TTMU)

Another important factor in performance is the startup time of your application. I won’t go full lingo anymore, but “startup” is a very grey definition and can mean different things to different people. Depending on who you ask, startup runs from the first time your app renders something to it actually displaying a gallery full of content. To avoid ambiguity, let’s state the “end state” as the first time you can render your splash screen, and park Time to minimal usable for a little bit later.

Startup time needs two points in time, so what is the start state? Yeahhhh. That’s a fun one. For Lightning, it’s when the browser loads us (there’s some nuance to that, more below). But typically there’s a bunch of stuff happening before we even get loaded. For example, and definitely not limited to this, an elaborate startup flow could be:

User clicks app icon -> Browser gets started -> Browser loads URL -> 
Resources are fetched -> App is started -> Lightning starts to render stuff

Which results in something like this:

As you can imagine, starting the browser and the browser loading the URL are totally outside of our control. This typically takes up the majority of the time; a browser is quite an extensive piece of software, and it generates an inherent load on the system when it gets started. In order to speed those up, someone would need to look into keeping the browser in hot standby or working with a pool of browser tabs, both of which are far beyond the scope of this document.

So what can we influence from our application? Fair question. A couple of key things to look out for:

Application size, how big is your JavaScript?

Larger JavaScript bundles take more time to load over the network; once they are loaded, the browser’s JavaScript engine parses the code and starts its initial tick. Obviously a <1 MB library will load tremendously faster than an 8 MB library. Simple physics of code and binary computers that can’t be avoided.

Take care when pulling in a dependency and be careful with autogenerated code. There are tools like Bundlephobia that help you get a better idea of the size of your dependencies. Otherwise, there are tools out there to generate a “map” of your dependencies straight from your bundler, like webpack or Rollup.
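For example, if your project happens to use Rollup, a plugin such as rollup-plugin-visualizer can emit such a map. A sketch (check the plugin’s docs for the exact import in your version):

```javascript
// rollup.config.js — emit an interactive treemap of what ends up in the bundle
import { visualizer } from 'rollup-plugin-visualizer';

export default {
  input: 'src/index.js',
  output: { file: 'dist/app.js', format: 'iife' },
  plugins: [
    visualizer({ filename: 'bundle-stats.html' }), // open this after each build
  ],
};
```

Open the generated report after each build and watch for dependencies that take a disproportionate slice of the bundle.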

App Initialization

Once the application size is good and we aren’t pulling in massive dependencies, this is where 99% of your gains will be. Although it won’t make the browser start faster or load your application faster, this part is 100% under your control. Once Lightning is initialized, which varies from platform to platform but usually happens within 2 s, everything is under the control of the application developer.

Typically your app will need to do several things in order to be fully functional:

  • Initialize libraries/dependencies
  • Get an authentication token
  • Get data set for your first screen
  • Start rendering components

And you could have your entire application “stall” in a splash screen or loader while it waits for all the dependencies, data, tokens and metrics to be initialized, which looks something like this:

That’s by far “the easiest” approach, but also the one that makes the user wait an awfully long time. And well, users don’t like to wait. They’re likely jumping into your application for a specific reason: “I want to watch X” or “I want to play Y”. Their time is precious, and the longer you keep them away from their goal, the less happy they’ll be. This is the Time to minimal usable (TTMU): the point where the user can start interacting with the application to get to their destination or to start exploring (if they don’t have a specific goal in mind). Lightning itself only plays a small role here, as the majority of time is spent on initialization of data, dependencies and whatnot. The actual rendering of elements on the screen is the end stage. Regardless, let’s talk about some common practices to improve TTMU.

In the end we want an application that loads only the minimum required to render the first landing page, starts with placeholders and gives the user the ability to start interacting while you are still loading more data, initializing more libraries and whatnot. Interaction will be available earlier, and the inherent load on the system will be lighter. We want to end up with something like this:

So what can we do to improve the TTMU? These are some simple practices to make it better:

Store your auth token (if it’s safe and secure, obviously) in local storage or a cookie (yuck, I know) and try to use that previously stored token on initialization. It might have expired by then or it might not; be ready to handle an expired token and just get a new one if that’s the case. If the endpoint you need to talk to answers (or the expiration date checks out) and the token is still valid, you’ve just saved a token refresh step, and the app is one step closer to TTMU.
Does that mean we can continue to use the previously stored token? Probably, but you might want to double-check the expiration date, and if it is really close to the current time, still do a token refresh, decoupled from the init flow. Meaning you can refresh the token while the user is navigating around or when the app is idle, without blocking init, because the token you’ve previously stored is still valid and is no longer a dependency for initialization.
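That decision can be sketched as a small helper (names and the 5-minute refresh margin are illustrative; the stored record would come from local storage or a cookie):

```javascript
// Decide what to do with a previously stored token during init.
const REFRESH_MARGIN_MS = 5 * 60 * 1000; // refresh a bit before actual expiry

function classifyToken(stored, now = Date.now()) {
  if (!stored || stored.expiresAt <= now) {
    return 'fetch';            // missing or expired: init must wait for a new one
  }
  if (stored.expiresAt - now < REFRESH_MARGIN_MS) {
    return 'use-and-refresh';  // still valid: use it now, refresh in the background
  }
  return 'use';                // comfortably valid: skip the refresh entirely
}

const stored = { value: 'abc', expiresAt: Date.now() + 60 * 60 * 1000 };
console.log(classifyToken(stored)); // → "use": no token refresh blocking init
```

Only the 'fetch' case should ever block initialization; the other two let the app proceed straight towards TTMU.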

Can we defer initialization of dependencies? Certain dependencies you’ll need immediately; Lightning itself, for example, you’ll want to initialize early because we want to start drawing stuff on the screen as soon as possible. But think about video metrics collection or video playback libraries: if your first landing screen doesn’t contain video and the first video is a few clicks away, consider deferring the initialization of those libraries until after init. This allows the user to start interacting with your application right away; that you’re still initializing libraries in the background should be all right (as long as they don’t hog the CPU too much). At least the user can already start navigating, bringing them closer to their end goal.

Do we really need all that data right away? It’s easy to fetch everything data-wise and have it all neatly “loaded” when your app is ready. But do we really need all that data? Try to load only what is needed for the first screen: basic user information, the user’s subscription status if that’s a thing, and just the posters/titles of the first few rows of the landing gallery. You can load the bigger set of data later, once the user gets close to that particular section. Effectively, cut the data requests for the initial screen down to a bare minimum so we can load the landing page as quickly as possible.

Now, there are also some Lightning tricks we can apply! And certainly we play our own role here. Some things that might help:

Use lazy creation Don’t try to load all screens at the start, unless you want to break the device. Your app might have 5, 10 or more screens defined; be sure to use the Lightning Router and enable lazy creation/lazy destroy so you only spawn/create the screens you need to satisfy the initial load. Lightning supports loading a “trail” of screens, so the upper screens are loaded too if the user is deep linking into your application. If it’s just the gallery/landing page, be sure to only load that screen. For more details on the lazy creation/lazy destroy functionality, see the router configuration.
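A sketch of what that looks like in a Lightning SDK Router config (route and component names are illustrative; see the router configuration docs for the full option set):

```javascript
// routes.js — only build the pages the user actually visits
import { Home, Details } from './pages.js'; // illustrative page components

export default {
  root: 'home',
  lazyCreate: true,   // construct a page the first time it is navigated to
  lazyDestroy: true,  // tear pages down on navigate-away, freeing memory
  routes: [
    { path: 'home', component: Home },
    { path: 'details/:id', component: Details },
  ],
};
```

With this in place, a cold start only pays for the landing page; a deep link pays for its trail of pages and nothing more.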

Use a component pool Creating components is one of the more expensive operations in Lightning, and creating a large number of components from the get-go will hog the system. Try loading a limited set: use a component pool to spawn a few components, then re-use them as the user navigates. When items drop off, add them back to the pool; when the app needs new objects, take them out of the pool. This saves resources and thus makes the application start faster.
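The idea in its simplest form (an illustrative sketch; a real pool would hand out Lightning components rather than plain objects):

```javascript
// Minimal object pool: pre-create a fixed set, then recycle instead of
// constructing new instances on every scroll/navigation change.
class Pool {
  constructor(factory, size) {
    this._factory = factory;
    this._free = Array.from({ length: size }, factory); // created once, upfront
  }
  acquire() {
    return this._free.pop() || this._factory(); // reuse first, create as last resort
  }
  release(item) {
    this._free.push(item); // hand back instead of destroying
  }
}

const pool = new Pool(() => ({ label: '' }), 2);
const tile = pool.acquire();
tile.label = 'Poster 1';
pool.release(tile);
console.log(pool.acquire() === tile); // → true: the same instance is reused
```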

Render placeholders You can already start rendering your screen with grey animating placeholders where the final data/items will be. This speeds up TTMU by letting the user start looking at your screen while you’re loading the items you need. Load items asynchronously and in small chunks, so you can render the first items as soon as possible while the ones at the bottom and off screen still show placeholders and are being loaded. The added benefit is that by loading small “chunks” of data you’re not hitting the CPU with a massive JSON (or equivalent) parse, easing the CPU load while you are still initializing the application.
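The chunked loading part can be sketched like this (fetchPage and the callback names are hypothetical):

```javascript
// Fetch the gallery page by page; each small chunk replaces its placeholders
// as soon as it arrives, instead of one giant blocking request + parse.
async function loadInChunks(fetchPage, pageCount, onChunk) {
  for (let page = 0; page < pageCount; page++) {
    const items = await fetchPage(page); // small request, small JSON parse
    onChunk(items, page);                // swap placeholders for real tiles now
  }
}

// usage with a fake pager:
const fakeFetch = async (page) => [`poster-${page}a`, `poster-${page}b`];
loadInChunks(fakeFetch, 3, (items, page) => console.log(page, items));
```

Because each chunk yields back to the event loop, key handling and animations keep running between parses.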

So that’s about it for app initialization and key processing. So far we’ve touched upon design and UX, briefly talked about key processing and app initialization, and got a better understanding of TTMU. Now it’s time for the centerpiece: Frames Per Second. Yeah baby, I hope you are as excited as I am.

Frames per Second

OK! Time for the main event. Frames Per Second, why does it matter?
In brief: everything you see on a screen is just pictures rendered at a certain speed, tricking your brain into seeing fluid motion. Neat, huh? Our brain processes these frames and turns them into fluid motion, which adds a human element to rendering-speed performance in terms of Frames per Second.

An average human can process somewhere between 30 and 60 images per second (each image is a frame → FPS). Generally, anything above 24 FPS is considered fluid by humans. That means anything rendered above this magic 24 FPS number is something we consider pleasant or smooth. Because smoother animations result in better perceived performance, one of the goals is to stay above the 24 FPS mark. Fun fact: this is why many movies are recorded at 24 FPS (or higher).

So what does that mean for your application? Lightning renders WebGL, so everything will be fine, right? Yeah… absolutely, WebGL is hyper fast. But we run on embedded systems, not a desktop, which means Lightning has to work with limited CPU/GPU/memory resources, and they’re all intertwined. In order to achieve 60 FPS, Lightning needs to be ready to draw the next frame within about 16 milliseconds (1000 ms / 60 FPS ≈ 16.7 ms). That should be OK, but there are factors that affect Lightning’s rendering performance, such as a hogged CPU, where we cannot get the next frame’s instructions out to the GPU quickly enough.
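The budget math, and how you would derive FPS from frame timestamps (as collected via requestAnimationFrame in a browser), in a small sketch:

```javascript
// 60 FPS leaves a budget of 1000 ms / 60 ≈ 16.7 ms per frame.
const frameBudgetMs = 1000 / 60;

// Average FPS over a list of frame timestamps in milliseconds
// (in a browser these come from requestAnimationFrame callbacks).
function averageFps(timestamps) {
  if (timestamps.length < 2) return 0;
  const elapsedMs = timestamps[timestamps.length - 1] - timestamps[0];
  return ((timestamps.length - 1) / elapsedMs) * 1000;
}

const frames = [0, 16.7, 33.4, 50.1]; // evenly spaced ≈ 60 FPS
console.log(frameBudgetMs.toFixed(1), Math.round(averageFps(frames)));
// → 16.7 60
```

If the measured average drifts towards the 24 FPS floor, frames are taking 40 ms or more and the CPU is likely the culprit, as explained next.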

Why does my CPU matter for FPS? … Fair question. You would think this is only about rendering, so only the GPU load matters. I wish it were, truly. But 9 out of 10 performance cases I run into on embedded devices are because the CPU is at 100%. Yes, Lightning renders in WebGL, and yes, that primarily runs on the GPU. However, the instructions sent to the GPU are uploaded by the CPU. Meaning the CPU has to tell the GPU what to do, and it has about 16 ms to do that in order to hit 60 FPS. If it’s too busy itself, it can’t send the instructions to the GPU fast enough, resulting in a lower FPS.
Sure, we can generate a bunch of instructions ahead of time, right?? Sure… however, if the user interacts and the context changes, that logic sits in JavaScript. The JavaScript engine contains the logic of what to do in a certain context when the user interacts: what animations to run, what images to load, what text to change. There’s a lot happening in animations, and all these instructions originate from the JavaScript engine (running on the CPU) and are eventually turned into GPU instructions by Lightning. So it’s imperative we keep the CPU load within bounds to avoid turning the CPU into the bottleneck.

That means each device has a certain CPU-load danger zone. A brief peak at 100% CPU isn’t going to matter; however, a constant 100% CPU load will eventually cascade and turn the CPU into the bottleneck. It looks something like this:

Get too close to the danger zone from a CPU-load perspective and it will eventually hurt performance. Each device has a different danger zone: for some devices that might be 90% CPU load for ~2 seconds, for others it might be much closer to 100% for a longer period of time. This varies per device and depends on the number of cores, the operation you are doing (multithreaded or not) and how fast those CPUs really are. The slower the device, the lower the danger zone and the sooner the CPU turns into the bottleneck.

Establish a baseline

Before we start making any changes that could affect performance, let’s establish a baseline. There are a couple of metrics that are key to understanding what’s going on, and they will vary as you make changes to improve performance.

Idle CPU load What is the idle CPU load (when not rendering/animating anything on the device)? Why is this important? Well, if the device itself is already at 60% CPU load while doing absolutely nothing, that leaves only 40% of the capacity for rendering/animations/transitions, and that is a bad thing.

Ideally you want a relatively idle CPU and appropriate bandwidth to do stuff from an application perspective. It should look something like this:

In the picture above you have a healthy amount of bandwidth available to do rendering in your application. This gives an appropriate amount of resources for fetching data, loading images and running animations. A nice 70–80% of CPU headroom is very healthy. Do we always need 80% CPU to run our applications? No, absolutely not, but the more bandwidth we have, the more actions you can do in parallel; less time spent waiting, more animations and eye candy equals a happier user. It is primarily there to leave room for “spikes” without worrying (too much) about frame drops. With a smaller bandwidth we need to be more careful about how much we do simultaneously. Additionally, 80% CPU availability is not the same across all devices: 80% “room” on a dual-core MIPS device with 3k DMIPS is much, much less than the same percentage on a quad-core 14k DMIPS ARM processor. The speed at which the CPU can handle things plays a very important role. This means you will have to establish a baseline for the device you are working on, test the limits, and monitor the resources as you push new animations and boundaries. The reason I bring this up is to carry the idea that you have limited resources and need to take care in how you stage the steps you want to do in your application.

When your baseline CPU is too high, your problem might be the available headroom and not the animations/textures themselves. This would be a bad situation:

I’d recommend figuring out what’s taking so much CPU in the baseline/background, and whether that can be reduced, before diving into your Lightning application. For example, another C++ process (outside of the browser) might have a bunch of stuck threads and need to be debugged. If it is the browser itself taking up that much CPU while doing nothing, you’ll have to figure out what is consuming it. Is there a super-aggressive for loop? Is something parsing massive amounts of data, or maintaining a big data store in memory? Stop that, we need to render smooth UIs here, do you mind? Try to reduce it; see if you can drop the dependency or upgrade to a newer version that doesn’t have the issue. See if there are configuration parameters to ease the load (refresh timers, timeouts, etc.). You will need appropriate computing bandwidth to run animations, and anything you can free up will directly benefit your animation/rendering speed.

An example Say we want to render 3 posters, a title, a description and a background image for the screen we are in. Imagine it looks something like this:

Now when the user clicks left, you want to do a couple of things:

  • Update the title and description with new text
  • Move the current poster out of the screen
  • Add a new poster to the stack
  • Move the next poster forward
  • Update the background image

This results in 3 animations, 1 text update and 1 image update, something like this:

Now, the easiest thing to do is just run all 5 “steps” in one go, but this creates an inherent load on the system, as the application will need to fetch new images, create new Lightning components and start providing animations and instructions to the GPU. Doing all that creates a load on the system that might look like:

For the sake of the argument, this hits the danger zone. As explained above, the danger zone is where the CPU starts to lag behind because it has to process too much simultaneously, taking longer than 16 ms, even up to 50–100 ms, to provide new frame instructions to the GPU. The FPS on this particular setup will look something like this:

This would be bad, very bad indeed. Remember that 24 FPS magic number we talked about earlier? That’s the number at which humans (on average) experience animations as smooth; below it (typically around 15 FPS, depending on the person) the human brain can process the images faster than we are rendering them, and it comes across as choppy. Very bad indeed; this will very negatively impact the user experience. The user will assume the device/UI/app is slow, and that’s what we’re trying to avoid.

So how do we avoid this? How can we ensure we do not hit that danger zone? There are several answers. There is no one-size-fits-all solution; you will have to figure out where your bottleneck is and what approach works in your particular case. Luckily, there are multiple strategies we can use to tackle this, and there’s always an answer.

How to solve performance issues

All right, so we have performance issues, because we’re hogging the CPU or overloading the GPU. What can we do? There are several approaches you can choose from, ranging from high impact and low complexity to low impact and higher complexity.

In a nutshell these are typical patterns we can deploy:

  1. Simplify your animations
  2. Load data/images ahead of the animation or defer loading more data until the animations are done
  3. Check XHR requests, ensure they are fast
  4. Profile your app to see if anything else is hogging up the CPU

Other things to look out for:

  • Use small images, avoid using very large images where you do not need them
  • Load small data; many small chunks of data are much easier on the system than trying to parse massive chunks in the background
  • Make sure your images/data load fast enough (from a CDN, for example); slow data transfers can hold up the loading of items.

Simplifying your animations

OK - simplifying your animation sounds simple. But it certainly isn’t. Of course removing animations, speeding them up or applying fewer effects sounds appealing when trying to solve performance issues. But those should be a last resort, as they diminish the user experience. Less eye candy is an expensive price to pay for performance. So what else can we try?

Are we applying too many effects? For example, if you’re doing a blur, greyscale, color change and rounded corners, each of these “effects” takes some processing. See if you can offload the rounded corners/greyscale to an image server in the cloud. Can we lower the number of effects? This could be a small gain to apply.

Avoid doing the same effect over and over. For example, have a focus animation? Do not re-instantiate the focus component each and every time the focus changes. Instead create one focus component and move it to the next object. Moving one existing object with an animation is much cheaper than constantly creating new components, lowering the CPU load on the device.

In general, component creation is expensive. I highly recommend using a component pool to save the system from constantly creating Lightning components from scratch. This makes a huge difference in the CPU load of the application.
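As a minimal sketch of the idea in plain JavaScript (the names here are illustrative, not part of the Lightning API):

```javascript
// Minimal sketch of a component pool: pre-create instances once and
// reuse them, instead of constructing new components on every change.
class ComponentPool {
  constructor(factory, size) {
    this._factory = factory;
    this._free = [];
    for (let i = 0; i < size; i++) {
      this._free.push(factory());
    }
  }

  // Hand out a pooled instance; only create a new one if the pool ran dry.
  acquire() {
    return this._free.pop() || this._factory();
  }

  // Give the instance back so it can be reused later.
  release(component) {
    this._free.push(component);
  }
}
```

In a Lightning app the factory would create your focus/tile component, and screens would acquire/release instances instead of constructing new components in hot paths.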

Spread out the animations

Make your animations go longer! What? Longer? Yep, hear me out. Trying to do all the animations at the same time and completing them in 1.5 s creates a bottleneck. Taking an extra 750 ms to 1000 ms isn’t going to scare the user away, but it does buy you quite some time to stagger the animations. Instead of running them all at once you can “trick” the user into thinking it’s all one animation when in fact there are three smaller animations running in serial or in very close succession. Yes… your animation now takes 2.25 s to 2.5 s instead of the initial 1.5 s. But that extra time might be enough to avoid the danger zone explained above and keep the average FPS above 24.

Instead of running all animations at once, it would look like this:

Yes, this makes the animation/transition of the screen take longer overall. But it spreads the load over time, avoiding the danger zone and thus a bottleneck in your application. How far you need to spread your animations, and to what extent, depends on the device you are targeting: faster devices need less spread, slower devices more.
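As a back-of-the-napkin sketch, staggering can be as simple as computing offset start times for each animation (a hypothetical helper, not a Lightning API):

```javascript
// Given animation durations (ms), compute staggered start times so the
// animations run in serial with an optional overlap, instead of all at once.
function staggerStartTimes(durations, overlapMs = 0) {
  const starts = [];
  let t = 0;
  for (const duration of durations) {
    starts.push(t);
    t += duration - overlapMs; // next one starts just before this one ends
  }
  return starts;
}

// Three 500 ms animations, each overlapping the previous by 250 ms:
// their load now spans 1000 ms instead of all peaking at t = 0.
const starts = staggerStartTimes([500, 500, 500], 250);
```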

However, there is a point where spreading out the actions becomes too obvious or a hindrance to the actual UX look and feel. Once you hit that point you have two options left:

  • Remove the animation entirely, just make it instant
  • Use a placeholder

Placeholders

Now placeholders can only be used in two places: 1) where an image needs to be loaded and 2) where text needs to be updated. However, it is a generally accepted way to signal “I’m still working on this section of the UI”. Be sure to spawn a generic pool of placeholders so that you don’t have to create them every time.

You will have to create a placeholder animation first; align it with your favorite designer. We’ve all seen those grey animating “placeholders” shown where data is still loading. Make sure you can give it the right size for your UI, add the famous loading animation to it and use it as a standalone component in your application.

With a placeholder it could go in the following order:

  1. Position the placeholder for the image and text
  2. Start loading (XHR) the image/data (hopefully a small request)
  3. Run the animations that are not image/text related in sequence
  4. By now your animations should be ending and you can place the final image/text in the placeholder

You, as the developer of the application, will need to play with the timings. Depending on the device you are working on, you can do certain things in parallel or not. Also, these solutions are not mutually exclusive; the answer might be a combination of reducing the effects applied, using a component pool and stretching out the animations by staggering them.

The last resort

Does all of the above not help? Animations still dropping below the magic 24 FPS? Well shoot… Be sure to check out the other things that might be hurting performance below. But otherwise there’s little left to optimize, and it effectively means dropping the animation partly or completely, depending on how bad it is.

So what does that mean? Make the transition instant: instead of smoothly moving something from position X to Y over a period of time, instantly move the object to its end state. It’s not pretty, but we’re at a last resort here. You could try to play with shorter animations or reduce the complexity of what you’re trying to do. Does it have a wobble or bounce effect? Try reducing that first before instantly moving the component. You get the drill: create a “priority” list of things you’re willing to give up in order to meet the performance bar and keep simplifying until you hit the mark.

Also, once you get closer to that magical 24 FPS, a few things to be aware of:

  • It might also be okay to stay around 15 to 24 FPS; it isn’t an absolute number but a margin where motion starts to come across as choppy
  • Once you get closer to 24 FPS it might not always be perfect. It might perform at or close to 24 FPS 80% of the time, and that should be fine. You can’t predict what else is happening on the CPU 100% of the time, so take some margin and don’t get stuck in absolutes
  • Talk to your UX friends! They can help think of other “tricks” to make it look fancy with a simpler animation. The UX design should never be “written in stone” and should be “open” to feedback if you want to hit performance marks. Small changes to the UX design can mean the world in the final implementation (just give those UX designers some chocolate when you do, okay?).

Other things that might be hurting my performance

Of course it isn’t always just animations. As always it’s typically a combination of things, but you have to realize everything you do has an affect on the overall performance of the device. Doing small chunks, only load what is needed, lazy create and load items ahead of time are key improvements to play with. There is no bespoke “solution” to performance and almost everything requires effort and experimentation to get it right.

So what other things should we look out for in your application?

Fetching too many images or too large images

Fetching excessively large images or simply too many images certainly doesn’t help performance: bigger images need more processing to handle, and more images need more processing. Simple physics. Try to find an appropriate balance between the size of the images you require and the resolution of the UI you are running on, and find the right number of images to load ahead of time to give a fluid “run” of the UI without stalling to load more.

Be wary of loading too many images ahead of time; load what you need plus a safe margin so the user gets continuously smooth behavior.

Fetching of images is slow

Use the network tab of your browser to measure how long fetching the images really takes. Doing this directly against the device with a WebInspector is especially helpful. If the actual load time of an image takes an excessive amount of time (>400 ms), it might actually be the app waiting on the network request to return the image, which has nothing to do with the processing power of the device. If you do not use placeholders, this can stall the responsiveness of the UI as the app waits for the image requests to complete.

Be sure to host the images on a CDN (such as CloudFront or Akamai) with Points-of-Presence near your end users, to ensure quick delivery of the images to the application.

Processing too much data

All apps require data, we get it. However, it is best practice to only fetch what you need, typically from an API that’s tailored to the application, avoiding pulling in generic data that you do not need. The more concise the data, the better the performance. Being able to fetch what you need, when you need it, for a particular screen, with a minimum amount of processing, is key to performance.

Loading a massive data object and then applying additional JavaScript processing to deal with that data is an expensive toll to pay. Consider working with your backend (or going full-stack) to make the API more tailored to the screens you’re planning to build, reducing the overall “overhead” in the data transfer.

Additionally, use a data format that makes sense. Using XML-SOAP? Get outta here, the early 2000s are calling and they’re looking for ya. No hate, but you have to use a format that makes sense for the environment you’re going to use it in. Typically JSON works best, and leveraging the browser’s native JSON parser is king. Anything else, like XML or YAML, requires expensive processing and handling by the CPU, further limiting your options to create a performant app.

Limit the processing on data

Sure, sometimes you need to shuffle the data around a little bit to make it more applicable to your view. However, if you need to do massive operations shifting parts of the data around or restructuring it to make it fit the UI, you are spending precious resources on data work that shouldn’t be needed in the first place.

Of course, we should first check if there is a better way to use the data in your template/view without going through an expensive restructuring step. If there’s not, we should look into making the API a little more “UI friendly” for your project. Maybe adding a glue layer to adapt it server side? Or a new API with a better fitting structure?

Spending too many cycles on data should be avoided, or at least optimized with great care; avoid burning resources on the client in JavaScript just to reorganize data structures for your view.
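If reshaping on the client is unavoidable, keep it to one cheap pass right after the fetch. A sketch, with made-up field names for illustration:

```javascript
// Hypothetical API rows carrying more fields than the view needs.
const apiRows = [
  { asset_id: 7, asset_title: 'Bunny', extra: { tags: ['movie'] } },
  { asset_id: 9, asset_title: 'Sintel', extra: { tags: ['short'] } },
];

// One .map pass, picking only what the template actually uses,
// done once after the fetch instead of on every render.
const viewRows = apiRows.map(r => ({ id: r.asset_id, title: r.asset_title }));
```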

Limit caching data

Avoid caching too much; that’s the CDN’s job, not yours. Sure, it’s nice to have some data in memory and that’s totally fine, but it should stay within the limits of what the JS heap can handle on the box, otherwise it chips away at the available performance of the device. Only cache the minimum required to satisfy the screens you want to load quickly. Otherwise play with the loader/placeholder tricks and let the CDN and browser cache things for you. This reduces the pressure on the device, giving you more room to spend performance on animations and fluid UIs.
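A sketch of what “within limits” could look like: a small size-bounded cache that evicts the oldest entry, so the heap footprint stays predictable (illustrative, not a Lightning facility):

```javascript
// Size-bounded cache: keeps at most maxEntries items, evicting the
// oldest, so memory use stays fixed on constrained devices.
class BoundedCache {
  constructor(maxEntries) {
    this._max = maxEntries;
    this._map = new Map(); // Map remembers insertion order
  }

  set(key, value) {
    if (this._map.has(key)) this._map.delete(key); // refresh position
    this._map.set(key, value);
    if (this._map.size > this._max) {
      // evict the least recently inserted entry
      this._map.delete(this._map.keys().next().value);
    }
  }

  get(key) {
    return this._map.get(key);
  }
}
```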

Fetching of data is slow

I think by now you can see where I’m going: make sure the data loads fast. Is an API taking more than 500 ms to respond? That should be resolved: use a CDN to speed up the transfer to your device, or reduce the “size” of the call you are making so the data is more manageable for the API (smaller chunks are always better to begin with). We want to avoid holding up the user because we’re either making a really big request or the API is just too slow to respond.

Go to the WebInspector -> network tab and filter by XHR requests to see which calls are potentially slowing your application down.

Take care with dependencies

Let’s face it, that project that you’re pulling in from the internet does not care about embedded… or well typically. It’s rarely the case a project is designed for the same use case as what we’re trying to do here. Surely some projects absolutely care about performance and go through great lengths to ensure its lean & mean. But they’re often not and I’d like you to assume it is not. So when you’re evaluating a dependency ask yourself:

  • Do you really need it? Do you need the full extent of the dependency, or is there a smaller alternative?
  • Is it configurable, can you lower timers/timeouts to “ease” the load on the system?
  • Does it cache? If so, can you limit how much it caches?
  • How easy is it to debug?
  • Have you measured its performance on the embedded device you are working on?

For example, let’s take REDUX. REDUX is a great project, don’t get me wrong. But it’s often used for the sake of “propagating data properties” which is a very small portion of what REDUX can do. But it does take a significant portion of memory to do what it’s designed to do and it’s rather hard to debug due it’s more “black box”-y nature. Is there another way to do data property binding? Can we live without REDUX? If the answer is yes, try to drop it as a dependency and BOOM! you’ve made a step into making the performance slightly better.

Want to continue using REDUX? Sure, that’s fine. Just make sure you properly probe it for memory leaks and that it doesn’t leave listeners lingering when transitioning through components/screens in Lightning. Performance test it to ensure the throughput of the project is adequate for what we’re trying to do and it doesn’t turn into the bottleneck of the application.

My code is slow

EEK. Yes, it happens; we’ve all written code that doesn’t perform well. It is okay. So how do we go about code performance? Let’s dedicate a whole chapter to this, there’s much more to talk about.

Writing performant JavaScript code

Okay … well, if you google you can find 101 articles on writing performant JavaScript code. However, since we are on the subject of performance, I feel it is important enough to justify a place in this guide. Let me start by saying that opinions vary; with JavaScript there is no single solution to a problem, which leaves a lot of room for personal preference. Because JavaScript is a loosely typed language with relatively simple data primitives and no direct memory control, there is a lot of room to choose different approaches. Add in years of evolution of ECMAScript (the specification behind JavaScript) with full backwards compatibility for previous versions, and there are just a lot of tools/approaches to choose from.

It’s important to do your own due diligence, don’t just take this guide as the single source of truth. This is not the single source of truth, there are likely more ways to do achieve performance and please alongside this guide spend some time searching the web for similar articles. Try to distill whatworks for you and see what you can apply that fits in your own code style.

Small functions!

I like talking to the older generation of engineers; this one is from Suresh Kumar (C/C++ developer):

If I need to scroll to read my entire function 
I break it up in separate functions

Of course that’s very hard to be concise with different resolutions, he was thankfully running a low resolution because that’s easier to read at an older age. And not even thinking about those freak engineers who use their screen in portrait mode (if you’re one of them, jk jk). But you have to understand the gist of the approach, smaller functions matter!

Why do they matter? Well, for two reasons:

  • Smaller functions are easier to understand. Our brains are bad at remembering stuff, and the moment you scroll you might forget what was happening up top. It keeps things readable
  • Smaller functions can be JIT’ed better!

JIT’ed? Yeah, weird term, I know; it stands for “Just In Time”, the compilation approach the JS engine uses. JS engines have different optimization tiers. They are hella smart and will start applying optimization strategies to your code, and small functions simply get optimized a lot better by the engine, resulting in higher performance overall! Neat huh?

On top of that, the engine uses a heat map to determine which functions get called a lot. The more frequently a function gets called, the higher the JS engine will push it through the optimization tiers. That means more frequently used functions get optimized more! YEET!

Use native functions!

Using browser native functions is typically faster than writing your own. I mainly mean .map, .filter and .reduce over DIY for loops or while loops. This is simply because the JS engine is more optimized to handle the precise operation associated with .filter than a generic for loop.

For example:

let result = [];
for (let i = 0; i < data.length; i++) {
  if (data[i] === 'condition') {
    result.push(data[i]);
  }
}

Is much slower compared to:

let result = data.filter(d => d === 'condition');

Plus, let’s be honest, it reads much better, ey? (Sorry, that’s subjective, I know; sue me.)

But this is not limited to just array handling; always prioritize JSON.parse() over parsing in JavaScript yourself, it’s going to be faster.

The same goes for crypto functions (if your browser supports them) and the rest: the more native browser APIs you can use, the better.
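Putting a couple of those natives together in one sketch:

```javascript
// Native JSON parsing plus native array helpers, instead of hand-rolled
// string handling and DIY loops.
const raw = '[{"views":3},{"views":5},{"views":7}]';
const items = JSON.parse(raw);                             // native parse
const doubled = items.map(i => i.views * 2);               // [6, 10, 14]
const total = items.reduce((sum, i) => sum + i.views, 0);  // 15
```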

Go async, the ES6/ES7 way!

Now this is harder to quantify, but overall: don’t stay stuck in ES5 or older. Using the ES5 .prototype approach and nested callbacks will perform worse than classes and promises.

But more importantly, get comfortable with writing async code in JavaScript using promises or async/await (though be careful, as you can create blocking code with await). In general, deferring processing by writing async code allows the JS engine to optimize better than with callbacks (and don’t even get me started on excessive timeouts, that’s a boo boo).
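A small sketch of the style (fakeFetch is a stand-in for a real network call, not an actual API):

```javascript
// Stand-in for an XHR/fetch call; resolves asynchronously.
function fakeFetch(value) {
  return Promise.resolve(value);
}

// Promise.all runs both "requests" concurrently, and await keeps the
// code flat instead of nesting callbacks.
async function loadRow() {
  const [meta, art] = await Promise.all([
    fakeFetch({ title: 'Row 1' }),
    fakeFetch({ poster: 'row1.png' }),
  ]);
  return { ...meta, ...art };
}
```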

Defer non-essential tasks!

It’s very likely you have a lot of code running in your application, and not all of it is mission critical. In fact, I double-dare you: there is at least something happening that isn’t mission critical. Well, say hello to my little friend named window.requestIdleCallback()! This registers a function that the browser will call when it’s idle, ain’t that a beauty? Put your non-essential code in there and wait for the browser to let you know it’s got nothing to do (just be sure it’s not so busy that that never happens, but hey, that’s what this article is about!).

For example:

const handleId = window.requestIdleCallback(() => {
  Analytics.sendEvent('appNeedsBacon'....);
}, {
  timeout: 2000
});

This lets you send the bacon event once nothing is happening, ensuring it doesn’t get in the way when we’re animating and trying to stay out of the danger zone and above the magic 24 FPS.

RegEx is King :crown:

I know Erik will love this part. RegEx is the absolute king of processing in JavaScript. I hate it, and I avoid RegEx to a large extent because I’m just not that good at it. But I should be, and so should you. Need to find something in a data structure? Need to process something, like a string or a document? RegEx should be the first thing you think of.

Why is that? RegEx runs in a separate engine! No way, you say? YEAH! It has its own RegEx parsing engine. I believe Irregexp is the most commonly used one, but don’t hold that against me.

That means your regular expression will not be executed by the JS engine; instead it is handed off to another engine that has its own parser, compiler, code generator and interpreter! Neat, right? Those regex engines are also optimized to the wazoo, to such an extent that you cannot beat them with regular JavaScript processing by trying to dissect the data with .split, .splice and whatnot. Sure, for a simple operation it might be fair game, but for anything with even the slightest complexity the RegEx engine will do it better, faster, quicker, meaner and without breaking a sweat.

Plus think of it this way, we’re trying to offload the JS Engine remember? Lower CPU load equals better performance, hand it off to the regex engine! Let it deal with it so we get some more ticks back for our friend the JS Engine.
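For example, pulling every id out of a query-string-like blob is one short expression for the regex engine, instead of a chain of .split calls:

```javascript
// Let the regex engine do the dissecting in one pass.
const blob = 'id=12&name=foo&id=34&name=bar&id=56';
const ids = [...blob.matchAll(/id=(\d+)/g)].map(m => m[1]); // ['12', '34', '56']
```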

HEY We got Arrays too you know?

Okay, I know this one is a bit more marginal than the ones above. But in JavaScript you typically run into the same primitives: functions, objects, strings, arrays, numbers and booleans (ok, ok, classes and some more, but those are typically just functions/objects under the hood, so I’m ignoring them). And it seems most of the code I run into leans heavily on objects to store its data, state, structures, etc.

And I get it! Objects are neat and nice to work with; they’re sort of organized and you can key/value stuff. Neat, I agree. However, arrays are faster! And sometimes they can do the same thing just fine.

Something that stores:

let data = {
	'a' : <data for a>,
	'b' : <data for b>,
	'c' : <data for c>,
}

Works really well. But doing this:

let data = [<data for a>, <data for b>, <data for c>];

Will perform better when you need to pull things out and process them, compared to an object based approach in the JS engine. I can’t really explain it other than that objects have a little more overhead than arrays do, and we tend to call Object.keys() to create an array before processing anyway, basically adding an extra step. Having your data directly in an array gives it immediate access to .map, .filter and .reduce without converting it first.
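A quick sketch of the extra step in question:

```javascript
// Object: needs an Object.keys() conversion before the array helpers apply.
const asObject = { a: 1, b: 2, c: 3 };
const viaObject = Object.keys(asObject).map(k => asObject[k] * 2); // extra step

// Array: same data, direct access to .map/.filter/.reduce.
const asArray = [1, 2, 3];
const viaArray = asArray.map(v => v * 2);
```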

OK, OK, so much for the totally objective stuff. If you don’t like biased stuff, I suggest this is the end of the article; call it quits and go get a coffee. If you’re not afraid of my opinion, feel free to continue reading and take it all with a grain of salt. Do your own due diligence on these topics; most of the stuff below is personal. Be sure to call me out on something in the comments, I love constructive feedback or debating endlessly about subjective things with random strangers on the internet.

Early returns are queen!

Ever seen something like this:

let resp;
if (thing) {
  resp = thing;
} else if (anotherThing) {
  resp = anotherThing;
} else if (yetAnotherThing) {
  resp = yetAnotherThing;
} else {
  resp = unknownThing;
}

return resp;

Yikes, right? It’s hard to read, hard to maintain and I find it just really ugly. Instead, early returns are the way to success:

if (thing)
  return thing

if (anotherThing)
  return anotherThing

if (yetAnotherThing)
  return yetAnotherThing

return unknownThing

Easier to read and easier to maintain, though this only works up to a certain number of things :slight_smile: but it’s by far the way to go.
Typically when I start a function I do my checks at the beginning, implement early returns if those checks don’t pan out, and then do the actual processing/handling of the function safely below the early returns.

For larger mappings of things to a response, consider a simple lookup object instead. For example:

let thingMapper = {
  'thing': thing,
  'anotherThing': anotherThing,
  'yetAnotherThing': yetAnotherThing
}

return thingMapper[value] || unknownThing

This leans into the “smaller functions get optimized better” point: if your function often follows a similar path in its returns (e.g. 99% of the time it early-returns on thing), the JS engine can optimize that path to the moon and back! So it’s a win-win: readability and performance.

Functional or Classes?

I prefer both! In fact, you can call me bi-classional (sorry, not sorry). In short, there are two “mainstream” approaches to writing ES6/ES7 code, classes or functional programming, and I like to use both in different contexts.

Classes

Anyone with an Object Oriented history or a severe love for OO programming will typically like classes. Why do I like classes? They’re easy to read! You know exactly what something is and the scope is clean.

For example I like:

class DataThingy {
    constructor(data) {
        this._data = data;
    }

    get data() { return this._data; }
    set data(data) { this._data = data; }

    doThingy() {
        ...do thingies...
    }
}

once I do:

let dataThingy = new DataThingy(props)

Everyone understands I just instantiated a new DataThingy: it has its own scope and you can expect me to create more of those. That’s what I like to use classes for: cases where I plan to instantiate many dataThingies, each with its own scope/lifecycle, a data model for example. I like them a lot for that.

BUT classes have their ugliness; the whole this._data thing is just a pain. Having the getter/setter names clash with the private data fields, creating the this._data pattern where we add underscores to avoid clashes, is just not very pretty. I know this is solved in newer ES specs and I can’t wait. For now it’s a thorn in my side and my absolute biggest gripe about classes.

Also, extends can get very messy. It sounds great on paper, having the ability to extend something, but in practice people start chain-extending stuff and I have to dig through 6 or 7 layers to figure out what I’m inheriting. No thank you.

Don’t even get me started on static functions on classes; if you end up using statics, or a class with only statics, you should be looking at functional instead.

Functional

Let’s start off with const, and const is good, mmmkay. Very good! Once you const a binding it can never be reassigned, which makes the code easier to reason about and gives the JS engine a guarantee it can lean on when optimizing. Yay for const!
Why does that matter? Well, functional programming relies heavily on const usage, for example:

const coolFunction = (value) => { ... }

Is something you’re going to see a lot, and it’s nice! It means the coolFunction binding is now immutable and no one can override that function name and make it do something else (we all remember the ES5 prototype horror, don’t we?).

Typically the pattern goes further, with a separate export where you determine the “API” of your file, for example:

const coolFunction = (value) => { ... }

export default {
	coolFunction
}

Now this is very readable: you can see I’ve exported coolFunction, and you can find coolFunction above to see what it does. If I were to add a private function that I don’t want exported, it turns into:

const coolFunction = (value) => { ... }
const privateCoolFunction = (value) => { ... }

export default {
	coolFunction
}

You can’t access privateCoolFunction from the outside, as it isn’t exported, but I can use it freely within my module. Another big plus over classes, where every method on the class is exposed (unless you define helpers above the class, but then the scoping gets messy).

But the thing I like best about this approach is the scope! Module state is shared across imports if you keep it in a variable. For example, if I do this:

let coolVariable = 'a'
const coolFunction = (value) => { ... }
const getCoolVariable = () => { return coolVariable }
const privateCoolFunction = (value) => { ... }

export default {
	coolFunction,
	getCoolVariable
}

I can access coolVariable once I import the module; if, for example, coolFunction changes coolVariable and I import the file somewhere else, that same coolVariable is available to me. And that’s just freaking neat when dealing with stuff that you initialize once and then need in the same scope all over the place. I can just import the module and directly do what I need with it. With classes you’d have to instantiate it somewhere and then pass the instance along to make the scope work the same across the codebase.
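Collapsed into a single file for illustration (in a real project the export lives in its own module and the consumers import it):

```javascript
// Module-level state: every importer of this module shares the same
// coolVariable through the exported functions.
let coolVariable = 'a';
const setCoolVariable = (value) => { coolVariable = value; };
const getCoolVariable = () => coolVariable;

// In a real project: export default { setCoolVariable, getCoolVariable }
const coolModule = { setCoolVariable, getCoolVariable };
```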

Now this is just the tip of the Functional Programming iceberg, obviously; there are entire books on the subject and it goes much, much further. I’m just highlighting what I like about it to justify why I mix the two approaches. If you want to learn more there are tons of really great videos/articles/books on the internet around functional programming. I don’t have a favorite I can recommend, so search around until you find something that makes sense for you!

And that’s where I do my balance between the two approaches:

Is it a module that I need to instantiate many of, each with its own scope, such as a data model? I’ll prefer a class.

Is it a module that I need to instantiate once and then share across my project? I’ll prefer a functional approach.

Anyway - that’s about all I had. Hope it helps!
