Jamie Gaskins

Ruby/Rails developer, coffee addict

Don't Change Perceived Browser Functionality

Jul 30, 2016 @ 11:16pm

When developing front-end apps, you have the ability to override some basic browser functionality, and this is a bit of a double-edged sword. Some functionality is required to be overridden in front-end apps, like link clicks and form submissions. If you don't override these, your front-end app starts working less like an app and more like a document-based web page. Overriding these is okay as long as they still appear to work just like they do on web pages — links take me to a different place in the app and forms store my input somewhere, whether locally or sent to the server.

However, even though you can override some functionality, there is a list of basic functionality the browser provides with which you should not interfere. Users expect these features to work:

  • Copy/paste
  • Right click
  • Command/Control-click to open a new tab
  • Shift-click (opens a new window in Chrome, adds to Reading List in Safari)
  • Searching within a page
  • Back button
  • Refresh
  • Cmd-# to navigate directly to a specific tab

Copy/Paste

Copy/paste in web pages is useful for so many things. The thing I personally use it for most is pasting a password stored in 1Password, but some websites disable copy/paste in password fields "for security purposes". This is a misguided attempt to keep their users safe. There's just no way I'm going to type in a 50-character password (assuming their app even allows passwords that long).

If there is ever a reason to disable copy/paste, I haven't come across it. Maybe detecting a copy/paste is reasonable sometimes, but preventing it entirely is never what a user wants.
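
If you ever do need to know when a paste happens, you can observe the event without blocking it. A minimal sketch:

// Observe paste events without interfering with them; note the absence
// of event.preventDefault(), so the paste still goes through.
document.addEventListener('paste', function(event) {
  console.log('Paste detected in a ' + event.target.tagName + ' element');
});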

Right Click

Right-clicking anywhere on a web page traditionally brings up a context menu for the element clicked. On a Mac, Ctrl-clicking does the same thing (a throwback to when Macs only had one mouse button). This context menu might have different information based on the type of element you're right-clicking.

For example, a link's context menu might provide options to open its target in a new tab or window, download the link target, etc. A video element's context menu might let you open it in full-screen mode, show/hide controls, etc.

Overriding this is extremely situational. Apps like Google Docs get a pass because their target audience expects it to work like Microsoft Office, but think hard about whether taking the default right click from the user is actually improving their experience.
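
If you decide your app really does warrant a custom context menu, at least scope it to the element that needs one instead of the whole document. A sketch, where the .editor selector and showCustomMenu are stand-ins for your own app's pieces:

var editor = document.querySelector('.editor'); // assumed editing surface

editor.addEventListener('contextmenu', function(event) {
  event.preventDefault();
  showCustomMenu(event.clientX, event.clientY); // hypothetical menu code
});

// Right-clicking anything outside the editor still gets the browser menu.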

Command/Control-click to open a new tab

This one frustrates me the most and is frequently unintentional on the developer's part. When you command-click a link, it fires a click event on that element, so if a click handler calls event.preventDefault() indiscriminately, it keeps its users from opening the link's target in a new tab unless they right-click and select "Open in New Tab" (another reason not to override right click).

Don't feel bad if you've broken this before by mistake. It's very common. Even a giant like Twitter still breaks it in their desktop web app. To fix it, you can put something like this at the top of your click handlers:

function handleClick(event) {
  var hasModifiers = (
    event.metaKey ||
    event.shiftKey ||
    event.altKey ||
    event.ctrlKey
  );

  // Only handle unmodified left click. Leave everything else alone.
  if (hasModifiers || event.button !== 0) return;

  event.preventDefault(); // Only prevent AFTER confirming you should handle this.
}

Notice we check the value of event.button in there. A right click doesn't trigger a normal click event, so we don't need to worry about it, so why do we need that check at all?

Turns out, clicking the scroll wheel (a middle click) does trigger a click event in some browsers, with event.button === 1 (a left click is 0). You don't want to handle that the same way as a left click.
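
For completeness, here's a sketch of how that guard might be wired up for client-side navigation; router.navigate is an assumption standing in for whatever your framework provides:

document.addEventListener('click', function(event) {
  var link = event.target.closest('a');
  if (!link) return;

  handleClick(event); // the guard above; modified clicks return early

  // The guard only calls preventDefault() for plain left clicks, so this
  // is where your client-side navigation goes. `router.navigate` is an
  // assumed stand-in for your framework's API.
  if (event.defaultPrevented) {
    router.navigate(link.getAttribute('href'));
  }
});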

Shift-click

This is closely related to Command/Control-click. In many browsers, Shift-click opens a link's target in a new window. In Safari, it adds the link to the user's Reading List.

Web apps like Gmail force all Shift-clicked links to open in a new window, so if I'm using Safari and I find a great link in Ruby Weekly, I can't Shift-click it to add it to my Reading List. I have to right-click and select "Add to Reading List" instead (as you may have guessed, placing the right-click section above this one was deliberate).

Searching within a page

There are two violations of this behavior that I've seen. The first is overriding Cmd-F to activate your app's own search. This is a misguided attempt to improve searching for content, but if I'm using Cmd-F, I probably want to use the search feature provided by the browser. Gitter used to override Cmd-F this way, but they've since removed it.

Offering your own search with Cmd-F will indeed let users know it's there, but it's a frustrating way to find out, because at that moment it's probably not what they want. Use Shift-Cmd-F for your search if you like, but leave the basic one alone. If your search bar stays on your page, you can label it "Search (Shift-Cmd-F)" to let users know how to get to it quickly with the keyboard. Many of your power users will likely appreciate that.
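
Binding the alternate shortcut is only a few lines. A sketch, where openAppSearch is a hypothetical function that focuses your app's search bar:

document.addEventListener('keydown', function(event) {
  var cmdOrCtrl = event.metaKey || event.ctrlKey;

  if (cmdOrCtrl && event.shiftKey && event.key.toLowerCase() === 'f') {
    event.preventDefault();
    openAppSearch(); // hypothetical: focus your app's own search bar
  }
  // A plain Cmd-F falls through to the browser's find-in-page.
});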

The other violation I've seen is moving DOM nodes around while the user scrolls. This is usually done for performance reasons, but it's annoying when you know a particular word or phrase appears on the page and it's not coming up in a search. The Facebook timeline and Twitter's mobile web timeline both do this.

If you want to save memory, only render images that appear within the viewport (and remove them when they are scrolled out of the viewport), but please leave text there.
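
IntersectionObserver makes this approach cheap. A minimal sketch, assuming images carry their URL in a data-src attribute:

var observer = new IntersectionObserver(function(entries) {
  entries.forEach(function(entry) {
    var img = entry.target;
    if (entry.isIntersecting) {
      img.src = img.dataset.src;  // load the image as it scrolls into view
    } else {
      img.removeAttribute('src'); // release it when it scrolls back out
    }
  });
});

document.querySelectorAll('img[data-src]').forEach(function(img) {
  observer.observe(img);
});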

Alternatively, in the case of Facebook and Twitter, reducing the number of DOM nodes per item in the timeline would go a long way toward reducing memory usage. The Twitter desktop web timeline is unbearably slow sometimes, and it's no wonder: it uses nearly 50,000 DOM elements (not counting text nodes) to render 400 tweets:

[Screenshot: JavaScript console showing the tweet count vs DOM element count]

A 400-element list isn't lightweight, but you don't need list items to average 125 elements inside them.

Back Button

If I click an element and it swaps out a large portion of the page content, that appears as a navigation to me, regardless of whether it triggered a browser-level navigation. When I click the back button (for brevity, I'll file all similar functionality under "clicking the back button"), I expect to be "taken back" to that previous content.

For example, if I'm viewing a list of messages and I click one of them, it might replace the list of messages with the contents of that one message's thread. As the user, I don't care if this is "technically" a navigation. It looks like navigation from the perspective of someone who doesn't know or care about the internal implementation. If I then hit the back button, I expect to see the list of messages again.

The easiest way to handle this is to use a web framework that provides a router. Ember, Clearwater, React (with React Router), and even Backbone provide this functionality.
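
If you'd rather roll your own, the History API is the primitive those routers build on. A bare-bones sketch, where renderView stands in for whatever swaps out your page content:

// renderView is a stand-in for whatever renders the view for a path.
function renderView(path) {
  document.querySelector('main').innerHTML = ''; // ...render `path` here
}

function navigateTo(path) {
  history.pushState({}, '', path); // give the back button something to return to
  renderView(path);
}

// Back/forward fire popstate; re-render whatever view the URL now names.
window.addEventListener('popstate', function() {
  renderView(location.pathname);
});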

Refresh

If an app is stalled for some reason (waiting on incoming data that's taking too long to load, a JavaScript exception broke my click handlers, etc), the most common thing for users to do (besides closing the tab) is refreshing the page. If I'm not in the same spot I was in before the refresh (for example, I have to drill back down through several layers of content to get back there), that's a frustrating user experience.

Losing some internal app state is understandable, but I should at least be in the same spot.

Handling this case is also important for the mobile web. Mobile browsers frequently dump pages to save memory. When you go back to them, they have to reload the page from scratch. If you're shown the app's entry point again, this is probably going to be frustrating. Instead, when the page reloads, I should be right where I was when I left off.

Using some sort of routing to store where you are in the app is essential to providing this kind of user experience.
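
Concretely, with the same hypothetical renderView from the back-button sketch above, restoring the user's place on load is one listener:

// On a full page load (including refreshes, and mobile browsers
// reloading a dumped tab), render the view the URL already describes
// instead of the app's entry point.
window.addEventListener('DOMContentLoaded', function() {
  renderView(location.pathname);
});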

Tab Navigation via Cmd-#

In most browsers these days, Cmd-# (where # is a number from 1 to 9) selects the corresponding tab. Some WYSIWYG editors override this by mapping Cmd-1 through Cmd-6 to headings h1 through h6.

This is problematic because you may need to swap between a few different tabs to gather all the information you need to write up a document. If your app modifies your document instead of switching tabs, that gets old real quick.

Conclusion

The app that users expect to be using is the browser; your app just runs inside it. Be respectful of that context. If you do override functionality, ensure that the functionality you're providing in its place feels similar — don't override functionality with entirely different functionality.

The best way to avoid breaking functionality by accident is to use an app framework that handles the minutiae for you. Clearwater, Ember.js, and React Router for React.js are frameworks/libraries that I've personally used that handle all the necessary link- and routing-related functionality for you. You'll never need to worry about breaking modified link clicks, the back button, or page refreshes in these ways.

The rest of the browser features listed above (copy/paste, right-click, page searching, and Cmd-#) are things you have to go out of your way to break. Push back against any product manager or client who decides they want to override any of those features. Their job is to make decisions that improve the product, and overriding these features works against that goal; make sure they understand that.

On the Interview Process

Apr 17, 2016 @ 02:50pm

I was just reading a series of tweets about interviewing for a job and this one in particular reminded me of an interview process I endured once:

Let me tell you, the thing that makes a candidate do the worst is when the interviewer just does NOT GIVE A FUCK and doesn't even listen

For this particular job, I applied online. It was the first time I'd actually approached a company in a while. I was excited because it seemed like a great company doing cool things with sweet tech.

Start with a Code Challenge

Their first response to me was "here, take this 2-hour code challenge". Obviously, they were more diplomatic about it, but that was the meat of their response. There wasn't any real conversation, just a request to schedule a code challenge. I thought, okay, sure, this is dumb, but I'll just get through it and then the interview process will begin for real.

The thing I don't like about on-your-own code challenges as part of the interview process is that you can't talk about your own process as you go through it. Well, I mean, you could, but they won't hear you, so it doesn't count. If you get stuck on something, they can't hear you say "at this point, these are the possibilities I've got in mind for how to solve this …". All they see is the finished product (for some arbitrary definition of "finished") of the time you spent on a contrived, gotcha-laden problem you only learned about minutes before you started. If you did get stuck, it only looks like you didn't finish, not like you thought of three different ways to go about it, only to realize partway in that two of them didn't work because of the contrivedness of the problem.

What makes it even worse in this particular case is that they knew nothing about me yet. They had an interview-quality program from me with zero humanity attached to it, because they hadn't spoken with me at all at this point. It's significantly easier to dismiss a piece of code in a vacuum than it is when there's an actual person attached to it, one you've actually had a conversation with.

After this, I had to submit it college-style by zipping it up and emailing it to them. Surely we'd discuss my code during the interview, right?

Finally, the Interview

I got a response the following week. Someone at the company set up a 30-minute video interview with me. Thirty minutes. They made me do a 2-hour code challenge but would only spend a quarter of that time talking with me? Totally not getting a good vibe here.

She launched straight into the interview questions after minimal pleasantries. She didn't tell me what her role was at the company, and I didn't feel comfortable asking (I didn't want her to think I assumed she wasn't an engineer because she's a woman), so I decided to roll with it and try to figure it out from the questions she asked. This made me a bit nervous because who you're talking to matters. An HR manager's eyes will glaze over if your answers are overly technical, and a developer will likely not care about "HR-style" responses.

Her first question: "What are your career goals?" Well, that sounds like a very HR-like question. I also had no friggin' idea how to answer it. I dunno, I just wanna work with great people and fun tech on cool stuff that gets people what they need or want.

Maybe I should've gone with that answer, but I'm never sure what kind of answer people want to that question and I'm not comfortable saying "I just wanna use fun tech to make great software" because then I feel like I sound like a novice.

Second question: "What is one thing you're strong at?" Another very HR-like question. Also another question I'm not comfortable with. Talking about what I think I'm good at feels indistinguishable from bragging. I'm not even sure how I responded, but I probably stumbled through something for at least 2 full minutes trying not to sound like an idiot and failing miserably.

Third question: "What is another thing you're strong at?" Uhhh … shit. Another one?

Fourth question: "What is another thing you're strong at?" Wait, what? Three times in a row? Asking this as three separate questions really made me nervous. Why didn't she just ask for 3 things in a single question? Is she repeating it because she didn't like my first two answers and is trying to give me a third try?

Fifth: "What is one thing you're weak at?" Well, I saw this one coming after "what are you strong at", so at least I wasn't surprised.

Sixth: "What is another thing you're weak at?" I probably should've seen this coming.

Her next question was surprisingly not a third repetition of that one: "What are the names of your previous 3 bosses?" Finally, not a subjective question! I responded and then realized that that was a really odd question to ask. "We contact them as part of the interview process to rate your performance." Ah, right, because why would you care about references I supply willingly?

You might recall from the beginning of this post a quote about the interviewer not giving a fuck about your responses. Well, during this entire video call, she hadn't been looking at the camera at all and her facial expression never changed. She was clearly not interested in this interview from the get-go. This was the equivalent of having lunch with someone and they're dicking around on their phone the whole time. Even if they're actually holding conversation with you, it doesn't feel like you have their attention. What was the point of this being a video call? Wouldn't audio have sufficed?

I knew by this point that this interview was pointless, but I kept going because we only had 10 minutes left out of 30.

She asked if I had any questions of her. Bear in mind that I still wasn't 100% sure she was an HR manager.

I asked what technologies they use. "Rails on the back end, legacy stuff is Angular, and there's some React — without JSX — and all new stuff is in Elm."

"React without JSX". The fact that she specified that means she's almost certainly a developer. Shit. I'd been wrong this whole time. That means I gave pretty stupid responses. Ugh.

We discussed the tech a bit more and suddenly she's looking at the camera and her eyes are lit up and she's actually showing facial expressions, especially when we discussed how they don't use JSX. This was the interview I'd wanted the entire time: flapping our gums about nerd shit. Unfortunately, it accounted for less than 5 minutes of the interview.

This seemed like a great time to bring up my submission for their code challenge. "Oh, no, I didn't evaluate it." The dev interviewing me for a dev position had never laid eyes on the code that got me this interview.

But then she realized our 30 minutes were up and signed off the call pretty quickly. I was sure I bombed the interview.

The Verdict

The next day, I received an email from someone else saying, sure enough, they didn't want to move forward:

After some internal conversations, we decided that the developer we're seeking right now has a different set of strengths.

I can only assume this is in reference to the three things the interviewer asked about me being strong at — what else could they possibly know about my strengths? Or maybe he was talking about the code submission that nobody ever once talked to me about.

This isn't some enterprise megacorp. This is a reasonably well known startup that's doing great things with cool tech and they want awesome devs to do it, but this doesn't seem like a good way to hire awesome devs. Nothing about this went well. Nothing.

Ways to Improve This

Obviously, complaining about this interview process is one thing, but without talking about how they could've done better, I'm just whining. This can be cathartic, but it isn't helpful.

Talk to me first

If you want to get to know me, talk to me. Appreciate me as a human being and let me know it. The majority of that interview wasn't an interview. It was an interrogation. There was no discussion. It was "I ask a question, you answer it". They were open-ended questions, certainly, but after I responded she offered no conversation in return. Just went straight to the next question. That does nothing to put people at ease, especially if you launch into it almost right away.

If a candidate approaches your company about a job, they're putting themselves out there. That's hard for some people, even exceptionally talented ones. I personally suffer from anxiety, which makes it pretty difficult to talk one-on-one with someone for the first time even under optimal conditions. If you don't treat me like you actually care to learn about me, I guarantee you won't learn much.

If the candidate approaches your company in earnest, it means you probably have the upper hand because they want the job. You have to appreciate that and not abuse it.

Code with me

Pairing is a great way to learn how someone works. Even if the candidate is both driving and navigating, just get them to talk through what they're doing, why they're doing it, and what their thought process is when they're not actively writing code. Even while they are actively writing code, their thought process is still useful, since you'll get to see what alternative approaches they're considering and discuss why they aren't choosing those.

Pairing also helps you connect with the candidate as a person. Both driver and navigator have to appreciate each other as people in order to get anything done.

Talk to me about my code

Demanding code before you talk to me is bad. Demanding code that you're never going to discuss with me is unacceptable.

If you're not going to pair with a candidate but instead require them to submit to a code challenge, discuss it with them. Tell them what you liked. Don't tell them straight away what you didn't like, but instead ask them why they made decisions you don't agree with. Just because someone doesn't solve a particular problem the way you would have, it doesn't mean their solution is wrong.

Also keep in mind that a code challenge is essentially unpaid work. I'm not going to handcraft a 100% artisanal, locally sourced, free-range solution to your contrived problem because that takes time.

In this particular case, I had a 2-hour time limit, so every moment I spent thinking about one aspect of my solution was a moment I couldn't spend thinking about it another way. When I got stuck because I overlooked something silly, the time I spent figuring out where I got stuck was yet more time I couldn't spend on being productive.

If you don't give the candidate an opportunity to talk about their code, you have no way to know about things like this. You only make assumptions about what you think you can intuit. These assumptions may be dead-on, but they may also be way off. You won't know unless you bring it up with them.

Interviewing Is Hard

Yes, I get it. Interviewing is one of the hardest parts about hiring. People are different. Some people don't interview well (myself included). Some people don't do well on code challenges. How do you evaluate them as candidates?

As if that's not bad enough, maybe you don't have time for interviews because you've got too much to do. Your feature-request and bug-report backlogs are getting out of hand and time spent on interviews is time you're not chipping away at these "more urgent" matters.

The thing is, the more time you spend on making your candidates feel comfortable, the higher the probability of finding the right candidate — or at least narrowing it down to a few, at which point it probably doesn't matter whom you choose. If you interview everyone like the experience I had here, your chance of finding the right person is no better than if you just flip a coin for each candidate.

And finding the right candidate will boost your team's productivity significantly, helping you reduce the workload that's keeping you from spending time on interviews. If finding the right candidate is truly a priority for you, you'll find more than 30 minutes to invest in talking to them and your team will be better off for it.

Followup: Turbolinks vs the Virtual DOM

Mar 13, 2016 @ 04:31am

A couple weekends ago, I wrote an article comparing re-render times between Turbolinks and Clearwater. It wasn't focused on Clearwater, exactly, but I'd just gotten done with some work on it so that's where my mind was.

Before I get too far into this, I want to point out that I wasn't trying to call out Nate as the sole human being that carries the point of view that Turbolinks is necessarily better than a virtual DOM. I've seen at least a dozen people tweeting about how a vdom is too complex, likely spawned by DHH's tweets on the subject. Nate's tweets just happened to be the ones that spurred me into running this experiment in the first place. I'd like to take this opportunity to apologize to Nate for making it look like I'm calling him out.

I posted an example app, which proved to be more problematic than I'd realized because, in true internet fashion, people then began scrutinizing it in an effort to find ways to tell me I was doing it wrong.

In their defense, I didn't make it abundantly clear that I was comparing only the performance of re-rendering and that first render was completely out of scope of the article. I feel like I made my point clear that re-rendering with a virtual DOM was faster, but I didn't make it clear that re-rendering on navigation was the sole focus.

In my defense, I didn't realize I had to, considering that's the only thing for which Turbolinks is in any way useful.

Let's go into a couple of examples of people's responses to that article:

totally unfair comparison though https://turbolinks-vs-clearwater.herokuapp.com/ 1 second wait to click "turbolinks" 8 second wait after clicking "clearwater" — @samsaffron

It's unfair if I cared about first-render time. Of course Turbolinks is great for the first render. It's almost the same performance on first render as not using Turbolinks — or any other JavaScript — because it's really only useful for reusing the existing JS and CSS in memory and only throwing away the DOM.

A virtual DOM, when you're not server-rendering, is always going to lose the first-render race. I didn't spend any time on first-render optimizations because I only spent an hour or so on the app to begin with. I didn't gzip the response or send only the initial data needed to render the list (such as ids and titles), either of which would have saved a whole lot of that first-render time. In light of that, I'm surprised it was only 8x slower, considering how much data it sent.

fairer comparing "click->ajax->json->render" vs "click->render" vs "click->pjax->replace" — @samsaffron

This is probably the fairest criticism I received. He's saying I should've included an AJAX call in there to compare fetching from the server with Turbolinks.

If you look at the Turbolinks performance screenshot, though, you'll notice there is nothing between the click event and the JS to render it to the DOM. I eliminated the request by using a link I'd previously clicked so Turbolinks would have it cached. This allowed me to focus on CPU usage.

Adding server requests convolutes the experiment with more variables we can't control for. Sam would see very little difference between a Turbolinks request and an AJAX call plus vdom render, considering he lives in Australia, which is about the farthest you can get from the Heroku infrastructure without leaving the atmosphere. The latency he'd see would fuzz the results, making them look nearly identical. However, someone on the east coast of the US who has 20ms latency to the server would see something quite different.

so they're using Clearwater caching to get that speed, but not Rails template caching? 💩 — @seanlinsley

"Why isn't this person making every possible optimization to this clearly contrived app?"

Come on.

Maybe they didn't realize that I was using a Turbolinks-cached endpoint and that any server round-trip time, if it did happen, wouldn't be factored into the JS+render time, which only counts CPU usage and not time spent on I/O. But even so, that's really reaching to find something to complain about.

It goes the other way, too

There was even some destructive criticism directed at Nate based on very little context. Nearly every person I saw who jumped into this with more than a "hey, this looks interesting" went straight to the conclusion that nobody but them had put any thought into it.

In fact, the one person who had the most level head on this whole topic was the person whose tweets I was criticizing in the first place. Nate asked me several questions about Clearwater to make sure he had enough information before he posted a response to mine. I'm sure he wants to mention the tradeoffs of either approach. I think it'll be an awesome read and I hope he can find the time to finish writing it.

But please, when you read a technical comparison of two technologies, think about it a bit and assume the author has done the same. Ask questions before criticizing; they may have already considered the conclusion you're jumping to but just haven't articulated it fully.

B'more on Rails Attendance

Mar 09, 2016 @ 12:45am

Tonight was a massive turnout at the talk night for B'more on Rails, one of the meetup groups I co-organize. When the group was first created on meetup.com, the original organizers put an arbitrary limit of 65 attendees at the meetup, thinking it'd probably never be reached. Tonight, we not only reached that, but we had at least 18 people on a waiting list.

This is due in large part to Natasha Jones' Workshop for Women last month. Women from that workshop accounted for over a quarter of the attendees tonight, and half of Hack Night a few weeks ago (Talk Night is the 2nd Tuesday of the month, Hack Night is the 4th Tuesday).

We also had a pretty impressive attendance from Towson University — two professors and close to a dozen students. And another professor from another university in Baltimore. There are probably a half dozen universities with a footprint in the city and he didn't specify, so I don't know which one.

One of the things that made me happiest about it was that even with that many people there, it wasn't a giant mess of white dudes. There were definitely some there, but I'd estimate they (well, I suppose "we" is more accurate, considering I'm a white guy) were only 25-30%, tops. Considering that that percentage is usually at least 75%, this was a refreshing change. I love knowing that the outreach that we've been doing has been working!

In fact, two of the three presenters tonight were women of color. They gave presentations that were perfect for the audience we had tonight. Vaidehi Joshi talked about state machines (a preview of her Ruby on Ales talk) and Ashley Jean talked about password hashing with BCrypt — her first tech talk ever. I spoke to several people who attended who were still very new to programming and they said they got a lot out of both talks, which is fantastic when you consider that neither of those are beginner-level topics.

As you can probably tell, I'm bursting with excitement and happiness at how well this meetup went. This is despite the fact that there were so many people there that we ran out of seats and the temperature inside the room was 10° higher than it was outside it.

It was something we learned very quickly we weren't set up for logistically, though. We only barely had room for everyone, so we may need to lower that 65-person limit. We also had no idea how much food to buy for that many people (we way overshot it), and it's a little awkward to tell sponsors "oh, by the way, the food bill is probably going to be 2-3x higher for the foreseeable future, especially until we can figure out how much food to get for this crowd". :-)

Turbolinks vs the Virtual DOM

Feb 28, 2016 @ 08:41pm

On Friday, Nate Berkopec tweeted out:

Side effect of the Turbolinks-enabled mobile app approach - guaranteed to be fast on old/low-spec devices b/c the Javascript is so simple. — @nateberkopec

Think about the operational complexity of React versus Turbolinks. An entire virtual DOM versus "$('body').innerHTML(someAjaxResponse)". — @nateberkopec

He justified his hypothesis by showing that the Ember TodoMVC takes 4x as long to update as a Turbolinks version, which I found odd because his original claim was about virtual DOMs, but the Ember TodoMVC uses an old version of Ember that doesn't use a virtual DOM (Ember's virtual DOM, called Glimmer, didn't appear until 2.0). It injects HTML, exactly what Turbolinks does. The only difference is that the HTML is generated in the browser; it trades a round trip to the server for CPU usage on the client.

Having spent the last year or so studying the performance advantages and disadvantages of virtual-DOM implementations and trying to ensure that Clearwater is fast enough for any app you want to write (including outperforming React), I had a sneaking suspicion that Turbolinks would not be faster than a virtual DOM that uses intelligent caching. I base that on the way HTML rendering in a browser works. This is kinda how node.innerHTML = html works in JS:

  • Parse HTML, find tags, text nodes, etc.
  • Generate DOM nodes for each of those tags and wire them together into the same structure represented in the HTML
  • Remove the existing nodes from the rendered DOM tree
  • Replace the removed nodes with the newly generated nodes
  • Match CSS rules to each DOM node to determine styles
  • Determine layout based on those styles
  • Paint to the screen

With a virtual DOM, there is no HTML parsing at all. This is why you never have to worry about sanitizing HTML with Clearwater or React. It's not that "it's sanitized for you" (which I've heard people say a lot); it's that the parser is never even invoked.

Instead, our intermediate representation is a JS object with properties that mirror those of the actual DOM node. Copying these to a real DOM node is trivial. The advantage the HTML-parsing method has here is that it can be done in native code rather than through the JS API.

The part where replacing HTML really bogs down is in rendering. Removing all those DOM nodes and regenerating them from scratch is not cheap when you have a lot of them. When very little actually changes in the DOM (Nate's example was adding an item to a todo list, so the net change is that one li and its children get added to the DOM), you're doing all that work for nothing. All CSS rules, styles, and layouts have to be recalculated instead of being able to reuse most of them.
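
To put that in miniature (the list and todo names here are just illustrations): replacing a list's innerHTML re-parses and rebuilds every node, while a targeted update creates only the node that changed.

var list = document.querySelector('ul'); // some rendered list
var todos = [{ title: 'Buy milk' }, { title: 'Walk dog' }];
var newTodo = { title: 'Write blog post' };

// Full replacement: re-parse the HTML and rebuild every <li>, then
// recalculate styles and layout for all of them.
list.innerHTML = todos.concat([newTodo]).map(function(todo) {
  return '<li>' + todo.title + '</li>';
}).join('');

// Targeted update: create only the node that's actually new.
var li = document.createElement('li');
li.textContent = newTodo.title;
list.appendChild(li);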

Even with persistent data structures (data structures that return a new version of themselves with the requested changes rather than modifying themselves in place), adding an item to an array only creates a new container; all the elements in the array are the exact same objects in memory as in the previous version. This is why persistent data structures are still fast, despite the copy taking O(n) time. If they had to duplicate the elements (and all the objects they point to, recursively), they would be far too slow for anything you do frequently.
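
That structural sharing is easy to see in plain JS:

var prev = [{ id: 1 }, { id: 2 }];
var next = prev.concat([{ id: 3 }]); // new container; O(n) copy of references

console.log(next[0] === prev[0]); // true: the elements are the same objects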

Injecting a nearly identical DOM tree is exactly that worst case: it generates entirely new objects all the way down. We had exactly this problem at OrderUp before moving our real-time delivery dashboard from Backbone/Marionette to React.

The Benchmark

I built a primitive blog-style app using Rails 5.0.0.beta3 that generates 2000 articles using the faker gem and added routes for a Turbolinks version and a Clearwater app. I then clicked around both. Here's what I found:

[Screenshot: Turbolinks timeline]

Turbolinks took 101ms, 62ms of which was rendering. I'm not sure why it had to execute JS for 32ms, but it did. I even helped Turbolinks out here by not including the major GC run that occurred on every single render. I only mention it here to acknowledge that it did happen.

[Screenshot: Clearwater timeline showing 8ms total]

Clearwater took 8ms. Not 8ms of JS. Not 8ms of rendering and style calculation. Just 8ms. From the link click to the final paint on the screen, it was 4x as fast as Turbolinks' JS execution alone and nearly 8x as fast as Turbolinks' rendering. Overall, it's an order of magnitude faster than the Turbolinks version, despite rendering inside the browser. That's huge on an old/low-spec device, the same kind of device Nate advised using Turbolinks for.

Using intelligent caching is what allows it to perform so quickly. All I did was use a cached version of the articles list if the articles array was the same array as before.
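
Clearwater itself is Ruby (running on Opal), but the caching idea translates directly to JS. A sketch of that reference-equality check, with renderArticle standing in for a per-article render function:

var cachedArticles = null;
var cachedList = null;

// Stand-in for building a per-article virtual-DOM node.
function renderArticle(article) {
  return { tag: 'article', children: [article.title] };
}

function renderArticleList(articles) {
  // Same array object as the last render? Reuse the cached virtual
  // tree and skip rebuilding (and diffing) this entire subtree.
  if (articles === cachedArticles) return cachedList;

  cachedArticles = articles;
  cachedList = articles.map(renderArticle);
  return cachedList;
}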

Partial Replacement Support?

Nate did mention that Turbolinks 5 does not "yet" support partial replacement, so maybe that will be implemented and it won't have to blow away the entire DOM. But the coupling I noticed in the Turbolinks 3 README between the controller and the rendered HTML was a little off-putting; it seems like a weird server-side Backbone thing. Note that Turbolinks 3 was never actually released, though.

Celso Fernandez also pointed out that the Turbolinks README contains a section explaining that partial replacement was intentionally removed from Turbolinks 5, so it looks like this performance won't improve in Rails 5.
