Memory Management Remains A Security Risk For Chrome, Other Browsers

As many as 70 percent of security risks in Chrome and other browsers arise from memory management problems. That's according to a recent report compiling details and statistics shared by both Microsoft and Google. While the underlying problems vary, ranging from quirks of old programming languages to code that's simply poorly written, use-after-free vulnerabilities alone account for half of that figure.

That represents a hard problem that engineers from both companies are looking to address.

What exactly are the memory management security risks in browsers?

The figures here are based on two separate reports, as noted above. For its part, Google analyzed 912 security bugs with high or critical severity ratings that it has fixed since 2015. Microsoft examined 12 years of its own security fixes. Both analyses hinge on the same kinds of security issues: chiefly the use-after-free vulnerabilities mentioned above, but also buffer overflows, race conditions, double frees, wild pointers, and others.
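To make the dominant bug class concrete, here is a deliberately simplified C++ sketch of a use-after-free. The names are hypothetical and nothing here comes from Chrome's codebase; real browser bugs involve far more indirection, but the core mistake is the same: memory is freed while a pointer to it is still reachable.

```cpp
#include <iostream>
#include <string>

struct Session {
    std::string user;
};

int main() {
    // Allocate an object and keep two raw pointers to the same memory.
    Session* session = new Session{"alice"};
    Session* alias = session;

    std::cout << alias->user << std::endl;  // fine: the object is still alive

    delete session;  // the object is freed here, but `alias` still points at it

    // Dereferencing `alias` now would be a use-after-free: undefined behavior
    // that an attacker can often escalate to code execution by controlling
    // what gets allocated into the reused memory.
    // std::cout << alias->user << std::endl;  // BUG if uncommented
    return 0;
}
```

The other classes the two companies call out, such as buffer overflows, double frees, and wild pointers, follow the same pattern: the language lets a program touch memory it no longer, or never did, own.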

For Google specifically, there are three areas of concern: code that handles untrustworthy inputs, code that runs without a sandbox, and code written in unsafe programming languages. Sandboxing effectively isolates the code of one site or app from that of others.

In fact, the search giant is presently enforcing a "rule of 2." That rule requires new Chrome features to be written so they don't combine more than two of those problem points at once.

Regardless, those memory safety problems are near-universal across browsers, not just Chrome.

Now, the chief concern for each of the browser makers, from Chrome to Firefox, is the use of unsafe languages. Google Chrome and others have already placed a lot of emphasis on solutions built around site isolation; Google has been actively incorporating site isolation in some form in Chrome since at least 2018. And untrustworthy inputs are likely to remain unavoidable, since browsers rely so heavily on input from users and other sources.

Site isolation has gone a long way toward mitigating the problem but, as Google puts it, that approach has reached its maximum benefit. More work in that area is likely to severely impact performance.

While safer programming languages are a common thread among the proposed solutions, not every company is taking the same approach.

What's the solution?

Mozilla has spearheaded efforts on that front, for instance, investing heavily in the exploration and development of the Rust programming language. Microsoft is exploring the use of Rust as well, but its efforts, especially now that Edge is Chromium-based, extend well beyond that. The company has also examined retrofitting safety onto unsafe languages via its Checked C project, and it's building out a programming language of its own, Project Verona, that's similar to Rust.

For Google's part, the approach is three-pronged as well. The search giant has said that it is looking into developing custom C++ libraries, for instance, which it would use in Chrome's codebase to offer better protection against prevalent vulnerabilities. It's also exploring Rust, Swift, JavaScript, Kotlin, and Java as possible replacements. The company has explored Kotlin since at least 2017, when it officially added support for the language on Android.
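The report doesn't detail what those custom libraries would look like, but the general idea behind hardened C++ helpers is well established: wrap raw memory operations in interfaces that make misuse fail loudly instead of silently corrupting memory. Purely as a hypothetical sketch, and not Google's actual code, a bounds-checked accessor might look like this:

```cpp
#include <cstddef>
#include <cstdlib>
#include <vector>

// Hypothetical hardened accessor: every index is validated, and an
// out-of-bounds access becomes an immediate, deterministic crash rather
// than a silent read or write of adjacent memory.
template <typename T>
T& checked_at(std::vector<T>& v, std::size_t index) {
    if (index >= v.size()) {
        std::abort();  // fail fast instead of corrupting memory
    }
    return v[index];
}

int main() {
    std::vector<int> pixels = {10, 20, 30};
    checked_at(pixels, 2) = 42;   // in bounds: behaves like pixels[2]
    // checked_at(pixels, 9) = 7; // out of bounds: would abort() immediately
    return 0;
}
```

Standard C++ already offers std::vector::at, which throws on out-of-range access; a browser-grade library would typically prefer an immediate crash over an exception that might be caught and ignored.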

Among the more novel solutions, Google is exploring whether use-after-free vulnerabilities, in particular, could simply be turned into security crashes. By doing so, Google might effectively shut down those bugs as exploit vectors, with only a minimal impact on performance.
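The report doesn't spell out how Chrome would do that, but a toy sketch can illustrate the concept. Assume a hypothetical GuardedPtr wrapper that checks a shared liveness flag before every dereference; in a real browser this kind of check would sit much lower in the stack, likely at the allocator level, but the effect is the same: a dangling access becomes a plain crash rather than an exploitable read of freed memory.

```cpp
#include <cstdlib>
#include <iostream>
#include <memory>
#include <string>
#include <utility>

// Hypothetical guarded pointer: the owner flips a shared flag when the
// object dies, and any later dereference aborts the process instead of
// touching freed memory. A toy illustration only, not Chrome's mechanism.
template <typename T>
class GuardedPtr {
public:
    GuardedPtr(T* ptr, std::shared_ptr<bool> alive)
        : ptr_(ptr), alive_(std::move(alive)) {}

    T* operator->() const {
        if (!*alive_) {
            std::abort();  // a would-be use-after-free becomes a plain crash
        }
        return ptr_;
    }

private:
    T* ptr_;
    std::shared_ptr<bool> alive_;
};

struct Session {
    std::string user;
};

int main() {
    auto alive = std::make_shared<bool>(true);
    Session* raw = new Session{"alice"};
    GuardedPtr<Session> guarded(raw, alive);

    std::cout << guarded->user << std::endl;  // valid: the object is live

    *alive = false;  // mark the object dead...
    delete raw;      // ...and free it

    // guarded->user;  // would now abort() instead of being exploitable
    return 0;
}
```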
