Web Application Optimization Basics: Tips and Best Practices


In this article, we present some basic approaches to improving performance. They focus on web application optimization, but several apply equally well to “ordinary” programs. We will cover profiling, batch processing, asynchronous request processing, and more.

When should you optimize?

The very first question to ask before you start optimizing anything is whether you are satisfied with the current performance. For example, if you are developing a game: what is the minimum FPS on “average” hardware at “medium” settings? If it drops below, say, 30, players will notice. Even if the average frame rate is 60 FPS, it is the minimum FPS that determines how the game feels — whether it “stutters” or “runs smoothly”.

Let’s say you realize that you are not satisfied with the application’s current performance. What should you do?


Measure first!

Any optimization should start with numbers. If you don’t have execution times for individual parts of your application, you can’t optimize effectively: without a complete picture of what is happening, you tend to spend time on the fragments that are easiest to optimize rather than on the ones that actually cause the slowdown.
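The article’s tooling examples are for PHP, but the “start with numbers” habit is language-agnostic. As a minimal sketch (in Python, with hypothetical function names), a decorator can report the wall-clock time of any function you suspect:

```python
import time
from functools import wraps

def timed(label):
    """Decorator that reports how long the wrapped function took."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                print(f"{label}: {elapsed_ms:.1f} ms")
        return wrapper
    return decorator

@timed("render_page")
def render_page():
    time.sleep(0.01)  # stand-in for real work
    return "<html>...</html>"

render_page()
```

Cheap timers like this can stay in the code permanently, unlike a full profiler.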

During development, it is often convenient to use profiling to identify bottlenecks. For PHP, for example, there is a good profiler called xhprof. If you are working on a large project you are unfamiliar with, a profiler is almost the only way to quickly find bottlenecks in the code, if there are any. However, a “regular” profiler is rarely used during day-to-day development, for several reasons:

even the best profilers significantly slow down the application;

viewing the results requires going to a separate place or running separate viewers;

the results themselves need to be stored somewhere (profiling data usually takes up a significant amount of space).

For these reasons, in web development a separate profiling tool is often replaced by so-called “debug panels” embedded in the development version of the site. These provide a summary (with drill-down into details) of various metrics collected right in the code: usually the number and execution time of SQL queries, and less often the number and duration of requests to other services (for example, Memcache). Almost always, the total execution time, the size of the server response, and the amount of memory consumed are measured as well.
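The core of such a debug panel is just an aggregator of per-category counts and durations. Here is a minimal sketch in Python (the class and category names are illustrative, not from any framework):

```python
import time
from collections import defaultdict
from contextlib import contextmanager

class DebugPanel:
    """Collects per-category call counts and total elapsed time for one request."""

    def __init__(self):
        self.stats = defaultdict(lambda: {"count": 0, "total_s": 0.0})

    @contextmanager
    def timer(self, category):
        start = time.perf_counter()
        try:
            yield
        finally:
            entry = self.stats[category]
            entry["count"] += 1
            entry["total_s"] += time.perf_counter() - start

    def summary(self):
        """Return {category: (count, total_ms)} for rendering in the panel."""
        return {cat: (s["count"], round(s["total_s"] * 1000, 1))
                for cat, s in self.stats.items()}

panel = DebugPanel()
with panel.timer("sql"):
    pass  # execute a query here
with panel.timer("sql"):
    pass  # execute another query
print(panel.summary())
```

Wrapping your database and cache clients so every call goes through such a timer gives you the query counts and timings “for free” on every development page load.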

Measure again!

So you have covered everything with timers, everything looks wonderful in the development environment, but production behaves differently? How about measuring production performance? If you are writing in PHP, for example, there is a great UDP-based production measurement tool called Pinba. With it, you can keep the debug panel for development and, in addition, get real-time statistics from the same timers in production.
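Pinba has its own wire format, but the underlying idea — fire-and-forget timer packets over UDP so measurement never blocks the request — is easy to illustrate. The sketch below uses the statsd text format rather than Pinba’s protocol, purely as an example of the approach:

```python
import socket

def format_timer(metric, elapsed_ms):
    """Encode a timer in the statsd text format, e.g. 'page.render:42|ms'."""
    return f"{metric}:{elapsed_ms}|ms"

def send_timer(metric, elapsed_ms, host="127.0.0.1", port=8125):
    """Fire-and-forget: a UDP send does not wait for the collector at all."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(format_timer(metric, elapsed_ms).encode(), (host, port))
    finally:
        sock.close()

send_timer("page.render", 42)
```

Because the packet is UDP, a down or overloaded metrics collector costs your application essentially nothing — which is exactly why this style of instrumentation is safe to leave on in production.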

Also measure the size of the returned data and the performance counters built into the browser. Perhaps your site needs a CDN to serve static assets, or perhaps you just need to change your hosting provider.

Be lazy

While analyzing performance, you will inevitably find many things that can simply be removed from the code, or “wrapped in an if” so that the expensive piece runs only when it is really required.

It is very common for large fragments of a page to remain almost unchanged between requests (for example, the page “header” and “footer”). If this is the case, the obvious solution is to either pre-generate that content or put it in a cache, instead of regenerating it on every request.

Use (asynchronous) batch processing

In a wide variety of projects you will encounter the same trivial design error: processing dozens or hundreds of records one at a time, instead of processing them all together in a single request.
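The difference is easy to show with any database driver. Using Python’s built-in sqlite3 module as a stand-in for a real database, one batched statement replaces a hundred round trips:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
rows = [(i, f"user{i}") for i in range(100)]

# Slow: one statement (and, over a network, one round trip) per record.
# for row in rows:
#     conn.execute("INSERT INTO users (id, name) VALUES (?, ?)", row)

# Fast: all records in a single batched call.
conn.executemany("INSERT INTO users (id, name) VALUES (?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
```

Over a real network connection the gap is far larger than on an in-memory database, because each per-record statement pays the full round-trip latency.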

Another way to speed up batch processing is asynchrony. If your language allows it and you can group requests to different services and execute them concurrently, you will get a noticeable reduction in response time — and the more services involved, the greater the gain. This is of limited use with MySQL, but works well with slow remote APIs such as the Google Datastore API.
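As a sketch of the idea in Python’s asyncio (the service names and delays are made up): issuing independent calls concurrently makes the total latency roughly that of the slowest single call, instead of the sum of all of them.

```python
import asyncio

async def fetch(service, delay_s):
    """Stand-in for a slow network call to an external service."""
    await asyncio.sleep(delay_s)
    return f"{service}: ok"

async def handle_request():
    # Sequential awaits would take ~0.05 + 0.02 + 0.03 s in total;
    # gather() runs them concurrently, so the total is ~0.05 s.
    return await asyncio.gather(
        fetch("datastore", 0.05),
        fetch("memcache", 0.02),
        fetch("search", 0.03),
    )

results = asyncio.run(handle_request())
```

The same pattern applies to any API that lets you start several requests before waiting on their results (futures, promises, curl_multi, and so on).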

So, we’ve covered the basic ways to improve the performance of (web) applications. Using these optimization approaches, you can often speed up a project several times over — sometimes by an order of magnitude — spending no more than a few days even on the largest of them. I hope this article, if it does not teach you optimization outright, at least pushes you in the right direction.
