Why Developing for Firefox is Torturous

I’m going to argue for something unpopular in this post: Firefox has been lagging too far behind the curve for too long. When it comes to web standards, Firefox has not been fit to compete for years. Firefox’s high adoption rate is hindering the progress of the web, which is currently being led by other browsers, such as Google Chrome.

Let me explain why the above argument is unpopular in the open source community.

Back in the early 2000s, one of Firefox’s main goals was to combat the stagnation of web standards for which Internet Explorer was responsible. I began using Firefox way back when it was called Phoenix, and back then, the browser was a truly necessary, awesome revolution that helped shape the web into what it is today. It was an ambitious project to create a revolutionary, open, responsibly developed web browser, and it became an example to be followed by all open source projects. Firefox helped establish modern web standards.

Many people who grew up using Firefox develop for it today, in the spirit of furthering the open web and of pushing forward the dream that eventually led web technologies past the atrocious limits set by Internet Explorer and Netscape. Those developers, who defend Firefox today because they remember how necessary it was when it replaced Internet Explorer, do not realize that Firefox has long since stopped being at the front of the race when it comes to web frameworks and technologies. They get very passionate when I complain about Firefox contributing to the stagnation of web standards, not realizing that Firefox has been holding the web back since at least 2011.

I have been a third-party developer for Chrome, Firefox and Safari for almost two years. I write code that has to be seamlessly interoperable between all three browsers, and that is deployed to tens of thousands of users. Most of my users use Firefox and install the Firefox version of my browser extension. However, Firefox’s stagnation is so severe that it has been the platform requiring by far the greatest effort and special attention for my project. It has been the platform that requires, without exaggeration, twice, if not three times, the effort I spend on Chrome or Safari.

The release schedule for my browser extension can be held back for almost a month due to Firefox’s painfully slow addon update review process. While I’ve had Apple (a far more closed, developer-unfriendly company) review my updates within five days to a week on average, this same review process takes an average of three weeks just for a preliminary review in the case of Firefox. Last February, I submitted a detailed request to Mozilla’s Bug Tracker asking for changes to the review process, possibly introducing a “trusted developer” program — my request has not received any response. Some of my updates include critical bug fixes, and even when they don’t, I am faced with the choice of either letting Firefox users lag almost a month behind other platforms, or delaying update launches altogether.

In terms of engine and codebase, the problems run far deeper. Here are just a few examples: Code that runs fast on Chrome’s V8 engine runs almost twice as slowly on Firefox. Myriad HTML5 features that are fully available in Chrome are only partially available in Firefox, or are just starting to become fully available. A secure random number generator, available in Chrome, Safari and Opera for years, took Firefox more than three years to implement, and only became available in Firefox 21. Packaged, local web applications, which have been standard in Google Chrome for years, remain unavailable in Firefox (except on Firefox OS), leaving developers stuck with the almost satirically archaic XULRunner framework. On top of all of this, Firefox is replete with CPU hogs and memory leaks that I cannot, for the life of me, reproduce on other browsers while developing my extension, and it still lacks proper sandboxing.

This is just an incomplete summary of problems that run down to Firefox’s very nature, problems severe enough that Firefox may need a total rebuild in order to compete with a browser like Google Chrome. I am not a Google employee, and I am not a Chrome fanatic. Much of my frustration lies simply in how stark the difference is between developing for the two browsers. When I’m developing for Chrome, code runs quickly. Developer tools work. I don’t get surprise CPU or memory issues that I can’t reproduce in other browsers. On top of this, Chrome offers so many APIs that in many cases you’re left to cherry-pick which one is better for your project (a packaged app, a native Chrome app, an extension?), and it uses modern, simple structures, quite unlike XULRunner, to tie everything to the browser. Code-signing your extension isn’t a weakly documented process. Updating your extension doesn’t run you into three-week delays. Chrome is leading the way forward, and Firefox is at this point just a popular platform I am starting to wish I never had to account for in the first place.

While Firefox almost single-handedly revolutionized the web in the Internet Explorer era, the era of Firefox as the standard-bearer of web standards has ended, and it’s about time developers set aside their nostalgia and realized it. Firefox cannot keep up with the advancements being made in the current Google Chrome era, and is therefore, through its large user base (28% as of April 2013, according to one statistic), holding web development back. If Firefox wants to compete, it must first realize that it is currently failing to compete.

Posted May 27, 2013 by Nadim in Computing, Internet

Thoughts on Critiques of JavaScript Cryptography

Updated with new information and re-published on May 25, 2013.

In 2011, Matasano Security published “JavaScript Cryptography Considered Harmful,” an article in which it argued that web JavaScript cryptography is “doomed” due to a number of factors. Matasano suggested SSL/TLS (HTTPS) as a viable solution for the many use cases where JavaScript cryptography is being deployed. I myself deploy JavaScript cryptography in Cryptocat, my open source project.

While some of Matasano’s points against JavaScript cryptography are, of course, reasonable and worthy of investigation, many of their criticisms lack a complete understanding of how the technology has advanced in recent years. The main point Matasano seems to be missing is that while JavaScript cryptography is, admittedly, very difficult to deploy correctly, it can still be used to counter problems that HTTPS cannot counter alone.

One of these problems is that data sent via HTTPS will always be readable as plaintext on the server, even in situations where the server is supposed to simply store ciphertext without being able to read its plaintext equivalent. However, while JavaScript cryptography is theoretically capable of sending ciphertext that cannot be decrypted server-side, Matasano raises many interesting issues against using JavaScript cryptography in this scenario:

“Secure delivery of JavaScript to browsers is a chicken-egg problem.”

With Chrome, Firefox, and Safari, it is possible to deliver web apps as signed, local browser plugins, applications and extensions. This greatly helps with solving the problem. In fact, I believe that it is necessary to deliver JavaScript cryptography-using webapps as signed browser extensions, as any other method of delivery is too vulnerable to man-in-the-middle attacks to be considered secure.

“Any attacker who could swipe an unencrypted secret can, with almost total certainty, intercept and alter a web request.”

This problem is mitigated simply by deploying JavaScript cryptography in conjunction with HTTPS. While HTTPS has its own set of problems, not taking advantage of its benefits in the sensitive context of JavaScript cryptography is lazy.
However, while Matasano encourages the usage of HTTPS (referred to in their article as SSL/TLS), they say that “having established a secure channel with SSL, you no longer need JavaScript cryptography; you have ‘real’ cryptography.” They forget that HTTPS is incapable of sending ciphertext that the server cannot decrypt. JavaScript cryptography can do this, and it remains very useful in such a context.

“JavaScript lacks a secure random number generator.”

This is in fact not the case. window.crypto.getRandomValues() is a secure random number generator available in JavaScript, and supported by major web browsers.

“JavaScript cryptography hasn’t implemented well-known standards such as PGP; such a high-level language can’t be trusted with cryptography.”

Crypto-JS is a JavaScript cryptography library that implements many cryptographic primitives in a fashion that is compatible with OpenSSL.

“JavaScript’s ‘view-source transparency’ is illusory.”

Every cryptography implementation needs to be reviewed by an expert before it can be declared secure; this is no different in JavaScript, C, or Python. The real benefit of view-source transparency is that the very code being executed by the browser can be read, which is impossible with compiled binary crypto. From that standpoint, “view-source transparency” is not illusory; it only becomes illusory when we take the idea too far. With a locally installed, signed browser plugin, it becomes necessary to audit the plugin code only once.

The Real Issue with Browser Crypto

There is no question that JavaScript cryptography is still a growing field that requires much work, partly on the part of browser vendors, who need to clamp down on DOM sharing and other issues in their code, and partly on the part of the crypto community, which needs to find ways to safely implement cryptographic functions in JavaScript. JavaScript cryptography is difficult, heck, downright dangerous, to implement correctly, but creative solutions to difficult problems, such as those discussed above, are a way forward.

However, the ultimate problem with browser cryptography is that there is no standard for native, in-browser encryption. Much like HTML5 and CSS, there needs to be an international, vetted, audited, cross-browser standard for securely encrypting and communicating sensitive information in the browser. There’s no denying the urgent need for such a standard, considering the ridiculous rate at which the browser is becoming the mainstream central hub for personal information.

Fortunately, the W3C Web Cryptography Working Group, which I am a member of, is working hard on solving this problem.

Update (Sep 14, 2011): This post apparently sent Matasano Security employee Thomas Ptacek on a passive-aggressive tirade against me on his Twitter feed, where he posted a bunch of angry attacks that I feel really don’t speak well of Matasano Security’s reputation as a company. As a security researcher, I don’t appreciate having a particular company react against me with public personal attacks and rudeness when I take the time to write a considerate, respectful reply to their analysis.

Posted May 25, 2013 by Nadim in Computing, Internet, Security