The web is chock full of web performance advice. There are books on the subject, I've written articles about it, and there are countless case studies proving just how critical web performance is. All of these tips, patterns and "best practices" are important to understand and apply when appropriate, but the current state of the art, science and technology that drives the web (not to mention the politics!) has created a climate in which one simple trick will likely provide a bigger boost to your site's performance than any other:

High Speed

Enable Transport Layer Security!

Wait a second. What? Did I just say Transport Layer Security? Haven't we always been told that TLS/SSL/HTTPS actually degrades web performance?

Perhaps. But we'll get to that in a moment.

What's really important to realize is that the web has evolved. HTTPS used to be a progressive enhancement that provided additional security and trust. For better or for worse, it's now a key that unlocks many of the web's most useful features - features which greatly improve performance, including:

  • HTTP2
  • Brotli Compression
  • Service Workers
  • Any other feature spec'ed with a [SecureContext] attribute

Let's take a moment to unpack each of these.


HTTP2

Technically, nothing in the HTTP2 specification requires TLS. In fact, the authors of the spec comment on this directly in the HTTP2 FAQ:

> Does HTTP/2 require encryption?

> No. After extensive discussion, the Working Group did not have consensus to require the use of encryption (e.g., TLS) for the new protocol.

> However, some implementations have stated that they will only support HTTP/2 when it is used over an encrypted connection, and currently no browser supports HTTP/2 unencrypted.

Browser support for HTTP2 is surprisingly strong, coming in at more than 70% of the global market share, but notice that little number "2" on every green box in Can I Use?

HTTP2 Usage

Its fine print says "Only supports HTTP2 over TLS (https)". That's too bad, because HTTP2 can provide a pretty hefty boost to performance. To illustrate the improvement, CloudFlare has created this nifty comparison of HTTP1 and HTTP2.


In my tests, HTTP2 was ~2.5 times faster than HTTP1. Of course, your mileage may vary, but you don't get any benefit unless you enable TLS.
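
Most of that gap comes from multiplexing. An HTTP1 browser opens only a handful of parallel connections per origin (commonly six) and queues everything else, while HTTP2 streams every request over a single connection. Here's a back-of-envelope sketch of that difference - the connection limit and the idea of counting request "waves" are my own illustrative simplifications, not a benchmark:

```typescript
// Rough model: how many round-trip "waves" does it take to fetch N
// resources? HTTP1 is capped at a few parallel connections per origin
// (commonly 6), so requests queue in waves; HTTP2 multiplexes every
// request over a single connection.
function http1Waves(resources: number, maxConnections: number = 6): number {
  return Math.ceil(resources / maxConnections);
}

function http2Waves(_resources: number): number {
  return 1; // all streams share one multiplexed connection
}
```

For a demo page with 30 small images - roughly what CloudFlare's comparison loads - that's five waves of requests versus one, which is exactly the effect the side-by-side demo makes visible.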


Brotli Compression

GZip has been the standard for HTTP compression for 15+ years - ever since RFC 2616 was published. Compressing responses with GZip provides a huge boost to web performance and bandwidth utilization - but the algorithms that drive GZip have been around since the early '80s. In all that time, surely we've come up with something better, right?

Well, there have been several attempts at "better than GZip" compression on the web, but the list reads like the Chicago Cubs' historical record - lots of losses and disappointment.

Brotli is the latest attempt at better compression for the web, and it could one day join that list of failures. Only time will tell, but I have hope. Why will Brotli be different? Eric Lawrence provides insight into one of the biggest reasons so many compression technologies have failed to take off on the web, and how Brotli sidesteps the problem:

> Past attempts to add new compression algorithms have demonstrated that a non-trivial number of intermediaries (proxies, gateway scanners) fail when Content-Encodings other than GZIP and DEFLATE are specified, so Brotli will probably only be supported over HTTPS connections, where intermediaries are less likely to interfere.
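
That negotiation happens through the Accept-Encoding request header: a server only sends `Content-Encoding: br` when the client explicitly advertises `br`, and browsers only advertise it over HTTPS. A minimal sketch of that server-side decision (the helper name and the fallback order are my own, not from any particular framework):

```typescript
// Pick a Content-Encoding based on the client's Accept-Encoding header.
// Prefer Brotli ("br") when advertised, fall back to gzip, and send the
// response uncompressed ("identity") otherwise.
function pickEncoding(acceptEncoding: string): string {
  const offered = acceptEncoding
    .split(",")
    .map((token) => token.split(";")[0].trim().toLowerCase());
  if (offered.includes("br")) return "br";
  if (offered.includes("gzip")) return "gzip";
  return "identity";
}
```

A misbehaving proxy that strips or mangles unknown tokens is exactly why, over plain HTTP, browsers never put `br` in that header in the first place.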

At the moment, Brotli is available in Firefox and behind a flag in Chrome and Opera - but only over TLS connections. It's "Under Consideration" by Microsoft's Edge team. I expect a global majority of users to support Brotli well before the end of the year.

How much better is Brotli compression? I used Fiddler's Compressibility extension on the Chicago Cubs' site as a point of reference. They could reduce their page weight by 15.1% simply by switching to Brotli.


Not only would 15% fewer bytes improve page speed, it would also significantly reduce any site's bandwidth bill.

But once again, you're going to need an HTTPS connection to reap these benefits.

Service Workers

Service Workers are currently the web's "hottest new thing". In fact, Wired has gone so far as to call them the web's savior. I could write several articles about what Service Workers are, but for the sake of brevity in an already long post, I'll give you the quick facts:

  • Service Workers are a JavaScript proxy that gives web developers ultimate control over how browsers interact with the network for their site.
  • This means they have programmatic control of the browser cache, and can leverage that to not only make sites work offline, but also make them faster than ever.
  • They already work in a majority of the browsers in use.
  • It is under consideration in Edge and WebKit.
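
To make the "JavaScript proxy" idea concrete, here's the shape a minimal cache-first fetch handler takes in a Service Worker script. This is an illustrative sketch under my own assumptions, not a production recipe; the guard at the bottom simply keeps the file inert outside a worker context:

```typescript
// In a real Service Worker, `self` is the ServiceWorkerGlobalScope, which
// exposes `caches` (CacheStorage) and `fetch` - both secure-context-only.
const sw: any = (globalThis as any).self;

// Only plain GET requests are candidates for cache-first handling.
function isCacheable(method: string): boolean {
  return method.toUpperCase() === "GET";
}

if (sw && typeof sw.addEventListener === "function") {
  sw.addEventListener("fetch", (event: any) => {
    if (!isCacheable(event.request.method)) {
      return; // let the browser hit the network as usual
    }
    // Cache-first: answer from the cache when possible, otherwise fall
    // back to the network.
    event.respondWith(
      sw.caches
        .match(event.request)
        .then((cached: any) => cached || sw.fetch(event.request))
    );
  });
}
```

Because the worker sits between the page and the network, this handful of lines can make repeat visits load with zero network requests - which is the performance angle hiding behind the "offline" headline.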

If you're not yet up to speed with this whole Service Worker thing, I highly recommend you watch Jake Archibald's intro to the topic:

Service Workers are coming, look busy

Even if you decide to skip the video, let me clue you in on one key point: All the power of Service Workers is only available with TLS.


By now, you might have realized that the coolest new features that the web has to offer are only available in a secure context. HTTP2, Brotli, Service Workers, heck - even Web Sockets basically require TLS.

This is a trend that will continue. In fact, the web's spec writers now have a shorthand - the [SecureContext] attribute - to mark "powerful features" that will only work within a secure context. It seems clear to me that if they are going out of their way to define a shorthand, they plan on using it a lot more.

So if improved connection handling, compression and caching (not to mention security and trust) aren't enough motivation to get you to use TLS, consider all of the future's cool "powerful features" that you'll be missing out on as well.

But Doesn't TLS Degrade Web Performance?

The truth of the matter is that yes, TLS in and of itself can degrade web performance. However, by optimizing your TLS and certificate configuration, you can drastically reduce that overhead. Don't trust me? Take the word of Google's Adam Langley:

> On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10 KB of memory per connection and less than 2% of network overhead. Many people believe that SSL/TLS takes a lot of CPU time and we hope the preceding numbers will help to dispel that.
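
Taking those numbers at face value, the wire cost is tiny. A quick sanity check of what a worst-case 2% network overhead means in bytes (the page sizes here are arbitrary examples of my own):

```typescript
// Extra bytes on the wire at a given TLS network-overhead rate. The 2%
// default is the upper bound Langley quotes; real numbers are often lower.
function tlsOverheadBytes(pageBytes: number, overheadRate: number = 0.02): number {
  return Math.round(pageBytes * overheadRate);
}
```

For a 2 MB page that's roughly 40 KB of overhead - less than a single unoptimized hero image, and far less than what Brotli or HTTP2 give back.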

How exactly does Google optimize its TLS usage? It must be some form of black magic, right? Nope. It's 100% documented, covered by Ilya Grigorik on his site and in this corresponding presentation:

Is TLS Fast Yet?

Does TLS degrade web performance? After you watch Ilya's presentation, I think you'll agree. I'm calling that myth busted.

TLS can be optimized so that its overhead, in and of itself, is marginal - but the real power comes from all the features it unlocks - the one weird trick which unequivocally improves web performance.