Dweb: Identity for the Decentralized Web with IndieAuth

In the Dweb series, we are covering projects that explore what is possible when the web becomes decentralized or distributed. These projects aren’t affiliated with Mozilla, and some of them rewrite the rules of how we think about a web browser. What they have in common: These projects are open source and open for participation, and they share Mozilla’s mission to keep the web open and accessible for all.

We’ve covered a number of projects so far in this series that require foundation-level changes to the network architecture of the web. But sometimes big things can come from just changing how we use the web we have today.

Imagine if you never had to remember a password to log into a website or app ever again. IndieAuth is a simple but powerful way to manage and verify identity using the decentralization already built into the web itself. We’re happy to introduce Aaron Parecki, co-founder of the IndieWeb movement, who will show you how to set up your own independent identity on the web with IndieAuth.

– Dietrich Ayala

Introducing IndieAuth

IndieAuth is a decentralized login protocol that enables users of your software to log in to other apps.

From the user perspective, it lets you use an existing account to log in to various apps without having to create a new password everywhere.

IndieAuth builds on existing web technologies, using URLs as identifiers. This makes it broadly applicable to the web today, and it can be quickly integrated into existing websites and web platforms.

IndieAuth has been developed over several years in the IndieWeb community, a loosely connected group of people working to enable individuals to own their online presence, and was published as a W3C Note in 2018.

IndieAuth Architecture

IndieAuth is an extension to OAuth 2.0 that enables any website to become its own identity provider. It builds on OAuth 2.0, taking advantage of all the existing security considerations and best practices in the industry around authorization and authentication.

IndieAuth starts with the assumption that every identifier is a URL. Users as well as applications are identified and represented by a URL.

When a user logs in to an application, they start by entering their personal home page URL. The application fetches that URL and finds where to send the user to authenticate, then sends the user there, and can later verify that the authentication was successful. The flow diagram below walks through each step of the exchange:

Diagram showing IndieAuth work-flow, from browser to client, to user URL to endpoint
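To make the discovery step concrete, here’s a minimal sketch of how a client might locate a user’s authorization endpoint. The function name and profile URL are hypothetical, and a real client would typically do this fetch server-side (avoiding CORS restrictions) and also check HTTP Link headers.

// Minimal sketch of IndieAuth endpoint discovery (not a full client).
async function discoverAuthorizationEndpoint(profileUrl) {
  const response = await fetch(profileUrl);
  const html = await response.text();
  const doc = new DOMParser().parseFromString(html, 'text/html');
  const link = doc.querySelector('link[rel~="authorization_endpoint"]');
  if (!link) {
    throw new Error('No authorization_endpoint found at ' + profileUrl);
  }
  // Resolve a possibly relative href against the profile URL.
  return new URL(link.getAttribute('href'), profileUrl).toString();
}

// Example (hypothetical URL):
// discoverAuthorizationEndpoint('https://example.com/').then(console.log);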

Get Started with IndieAuth

The quickest way to use your existing website as your IndieAuth identity is to let an existing service handle the protocol bits and tell apps where to find the service you’re using.

If your website is using WordPress, you can easily get started by installing the IndieAuth plugin! After you install and activate the plugin, your website will be a full-featured IndieAuth provider and you can log in to websites like https://indieweb.org right away!

To set up your website manually, you’ll need to choose an IndieAuth server such as https://indieauth.com and add a few links to your home page. Add a link to the indieauth.com authorization endpoint in an HTML <link> tag so that apps will know where to send you to log in.

<link rel="authorization_endpoint" href="https://indieauth.com/auth">

Then tell indieauth.com how to authenticate you by linking to either a GitHub account or email address.

<a href="https://github.com/username" rel="me">GitHub</a>
<a href="mailto:you@example.com" rel="me">Email</a>

Note: This last step is unique to indieauth.com and isn’t part of the IndieAuth spec. This is how indieauth.com can authenticate you without you creating a password there. It lets you switch out the mechanism you use to authenticate, for example in case you decide to stop using GitHub, without changing your identity at the site you’re logging in to.

If you don’t want to rely on any third party services at all, then you can host your own IndieAuth authorization endpoint using an existing open source solution or build your own. In any case, it’s fine to start using a service for this today, because you can always swap it out later without your identity changing.

Now you’re ready! When logging in to a website like https://indieweb.org, you’ll be asked to enter your URL, then you’ll be sent to your chosen IndieAuth server to authenticate!

Learn More

If you’d like to learn more, OAuth for the Open Web talks about more of the technical details and motivations behind the IndieAuth spec.

You can learn how to build your own IndieAuth server at the links below:

You can find the latest spec at indieauth.spec.indieweb.org. If you have any questions, feel free to drop by the #indieweb-dev channel in the IndieWeb chat, or you can find me on Twitter, or my website.

The post Dweb: Identity for the Decentralized Web with IndieAuth appeared first on Mozilla Hacks – the Web developer blog.

Firefox 63 – Tricks and Treats!

It’s that time of the year again, when we put on costumes and pass out goodies to all. It’s Firefox release week! Join me for a spook-tacular[1] look at the latest goodies shipping this release.

Web Components, Oh My!

After a rather long gestation, I’m pleased to announce that support for modern Web Components APIs has shipped in Firefox! Expect a more thorough write-up, but let’s cover what these new APIs make possible.

Custom Elements

To put it simply, Custom Elements makes it possible to define new HTML tags outside the standard set included in the web platform. It does this by letting JS classes extend the built-in HTMLElement object, by adding an API for registering new elements, and by adding special “lifecycle” methods that detect when a custom element is appended or removed, or when its attributes are updated:

class FancyList extends HTMLElement {
  constructor() {
    super();
    this.style.fontFamily = 'cursive'; // very fancy
  }
  connectedCallback() {
    console.log('Make Way!');
  }
  disconnectedCallback() {
    console.log('I Bid You Adieu.');
  }
}

customElements.define('fancy-list', FancyList);

Shadow DOM

The web has long had reusable widgets people can use when building a site. One of the most common challenges when using third-party widgets on a page is making sure that the styles of the page don’t mess up the appearance of the widget and vice-versa. This can be frustrating (to put it mildly), and leads to lots of long, overly specific CSS selectors, or the use of complex third-party tools to re-write all the styles on the page to not conflict.

Cue frustrated developer:

There has to be a better way…

Now, there is!

The Shadow DOM is not a secretive underground society of web developers, but instead a foundational web technology that lets developers create encapsulated HTML trees that aren’t affected by outside styles, can have their own styles that don’t leak out, and in fact can be made unreachable from normal DOM traversal methods (querySelector, .childNodes, etc.).

let div = document.createElement('div');
document.body.appendChild(div);
let shadow = div.attachShadow({ mode: 'open' });
let inner = document.createElement('b');
inner.appendChild(document.createTextNode('I was born in the shadows'));
shadow.appendChild(inner);
div.querySelector('b'); // null: the <b> lives inside the shadow root

Custom elements and shadow roots can be used independently of one another, but they really shine when used together. For instance, imagine you have a <media-player> element with playback controls. You can put the controls in a shadow root and keep the page’s DOM clean! In fact, both Firefox and Chrome now use Shadow DOM for the implementation of the <video> element.
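Here’s a rough sketch of that idea, assuming a hypothetical <media-player> element; the element name and markup are made up for illustration.

// Sketch: a custom element that keeps its internals in a shadow root.
class MediaPlayer extends HTMLElement {
  constructor() {
    super();
    const shadow = this.attachShadow({ mode: 'open' });
    shadow.innerHTML = `
      <style>button { font-size: 1.5em; }</style>
      <button id="play">Play</button>
    `;
    shadow.querySelector('#play')
      .addEventListener('click', () => console.log('playing!'));
  }
}
customElements.define('media-player', MediaPlayer);

// <media-player></media-player> now renders its own controls, and the
// page's CSS and querySelector calls can't reach inside them.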

Expect a deeper dive on building full-fledged components here on Hacks soon! In the meantime, you can plunge into the Web Components docs on MDN as well as see the code for a bunch of sample custom elements on GitHub.

Fonts Editor

a screenshot of the fonts panel being used to adjust a variable font

The Inspector’s Fonts panel is a handy way to see what local and web fonts are being used on a page. Already useful for debugging webfonts, in Firefox 63 the Fonts panel gains new powers! You can adjust the parameters of the font on the currently selected element, and if the current font supports Font Variations, you can view and fine-tune those parameters as well. The syntax for adjusting variable fonts can be a little unfamiliar and it’s not otherwise possible to discover all the variations built into a font, so this tool can be a life saver.

Read all about how to use the new Fonts panel on MDN Web Docs.

Reduced motion preferences for CSS

Slick animations can give a polished and unique feel to a digital experience. However, for some people, animated effects like parallax and sliding/zooming transitions can cause vertigo and headaches. In addition, some older/less powerful devices can struggle to render animations smoothly. To respond to this, some devices and operating systems offer a “reduce motion” option. In Firefox 63, you can now detect this preference using CSS media queries and adjust/reduce your use of transitions and animations to ensure more people have a pleasant experience using your site. CSS Tricks has a great overview of both how to detect reduced motion and why you should care.
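The same (prefers-reduced-motion: reduce) media query that works in a CSS @media rule can also be checked from JavaScript. A small sketch (the animation itself is just an example):

// Sketch: skip or simplify effects when the user prefers reduced motion.
const prefersReducedMotion =
  window.matchMedia('(prefers-reduced-motion: reduce)');

function slideIn(element) {
  if (prefersReducedMotion.matches) {
    // No animation: just leave the element in its final position.
    return;
  }
  element.animate(
    [{ transform: 'translateY(40px)', opacity: 0 },
     { transform: 'translateY(0)', opacity: 1 }],
    { duration: 600, easing: 'ease-out' }
  );
}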

Conclusion

There is, as always, a bunch more in this release of Firefox. MDN Web Docs has the full run-down of developer-facing changes, and more highlights can be found in the official release notes. Happy Browsing!

[1] not particularly spook-tacular

The post Firefox 63 – Tricks and Treats! appeared first on Mozilla Hacks – the Web developer blog.

WebAssembly’s post-MVP future: A cartoon skill tree


People have a misconception about WebAssembly. They think that the WebAssembly that landed in browsers back in 2017—which we called the minimum viable product (or MVP) of WebAssembly—is the final version of WebAssembly.

I can understand where that misconception comes from. The WebAssembly community group is really committed to backwards compatibility. This means that the WebAssembly that you create today will continue working on browsers into the future.

But that doesn’t mean that WebAssembly is feature complete. In fact, that’s far from the case. There are many features that are coming to WebAssembly which will fundamentally alter what you can do with WebAssembly.

I think of these future features kind of like the skill tree in a video game. We’ve fully filled in the top few of these skills, but there is still this whole skill tree below that we need to fill in to unlock all of the applications.

Skill tree showing WebAssembly skills which will be filled in through the course of the post.

So let’s look at what’s been filled in already, and then we can see what’s yet to come.

Minimum Viable Product (MVP)

The very beginning of WebAssembly’s story starts with Emscripten, which made it possible to run C++ code on the web by transpiling it to JavaScript. This made it possible to bring large existing C++ code bases, for things like games and desktop applications, to the web.

The JS it automatically generated was still significantly slower than the comparable native code, though. But Mozilla engineers found a type system hiding inside the generated JavaScript, and figured out how to make this JavaScript run really fast. This subset of JavaScript was named asm.js.

Once the other browser vendors saw how fast asm.js was, they started adding the same optimizations to their engines, too.

But that wasn’t the end of the story. It was just the beginning. There were still things that engines could do to make this faster.

But they couldn’t do it in JavaScript itself. Instead, they needed a new language—one that was designed specifically to be compiled to. And that was WebAssembly.

So what skills were needed for the first version of WebAssembly? What did we need to get to a minimum viable product that could actually run C and C++ efficiently on the web?

Skill: Compile target

The folks working on WebAssembly knew they didn’t want to just support C and C++. They wanted many different languages to be able to compile to WebAssembly. So they needed a language-agnostic compile target.

They needed something like the assembly language that things like desktop applications are compiled to—like x86. But this assembly language wouldn’t be for an actual, physical machine. It would be for a conceptual machine.

Skill: Fast execution

That compiler target had to be designed so that it could run very fast. Otherwise, WebAssembly applications running on the web wouldn’t keep up with users’ expectations for smooth interactions and game play.

Skill: Compact

In addition to execution time, load time needed to be fast, too. Users have certain expectations about how quickly something will load. For desktop applications, that expectation is that they will load quickly because the application is already installed on your computer. For web apps, the expectation is also that load times will be fast, because web apps usually don’t have to load nearly as much code as desktop apps.

When you combine these two things, though, it gets tricky. Desktop applications are usually pretty large code bases. So if they are on the web, there’s a lot to download and compile when the user first goes to the URL.

To meet these expectations, we needed our compiler target to be compact. That way, it could go over the web quickly.

Skill: Linear memory

These languages also needed to be able to use memory differently from how JavaScript uses memory. They needed to be able to directly manage their memory—to say which bytes go together.

This is because languages like C and C++ have a low-level feature called pointers. You can have a variable that doesn’t have a value in it, but instead has the memory address of the value. So if you’re going to support pointers, the program needs to be able to write and read from particular addresses.

But you can’t have a program you downloaded from the web just accessing bytes in memory willy-nilly, using whatever addresses they want. So in order to create a secure way of giving access to memory, like a native program is used to, we had to create something that could give access to a very specific part of memory and nothing else.

To do this, WebAssembly uses a linear memory model. This is implemented using TypedArrays. It’s basically just like a JavaScript array, except this array only contains bytes of memory. When you access data in it, you just use array indexes, which you can treat as though they were memory addresses. This means you can pretend this array is C++ memory.
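As a rough sketch of what that looks like from the JavaScript side (one WebAssembly page is 64 KiB):

// Sketch: linear memory is one resizable buffer of raw bytes.
const memory = new WebAssembly.Memory({ initial: 1 }); // 1 page = 64 KiB
const bytes = new Uint8Array(memory.buffer);

// JS can treat indexes into this array like memory addresses...
bytes[0] = 42;

// ...and a module instantiated with this memory sees the same byte at
// the same "address" through its own loads and stores.
console.log(bytes[0]); // 42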

Achievement unlocked

So with all of these skills in place, people could run desktop applications and games in your browser as if they were running natively on their computer.

And that was pretty much the skill set that WebAssembly had when it was released as an MVP. It was truly an MVP—a minimum viable product.

This allowed certain kinds of applications to work, but there were still a whole host of others to unlock.

Heavy-weight Desktop Applications

The next achievement to unlock is heavier weight desktop applications.

Can you imagine if something like Photoshop were running in your browser? If you could instantaneously load it on any device like you do with Gmail?

We’ve already started seeing things like this. For example, Autodesk’s AutoCAD team has made their CAD software available in the browser. And Adobe has made Lightroom available through the browser using WebAssembly.

But there are still a few features that we need to put in place to make sure that all of these applications—even the heaviest of heavy weight—can run well in the browser.

Skill: Threading

First, we need support for multithreading. Modern-day computers have multiple cores. These are basically multiple brains that can all be working at the same time on your problem. That can make things go much faster, but to make use of these cores, you need support for threading.
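On the JavaScript side, the threads proposal builds on shared linear memory. A minimal sketch (still behind flags in browsers at the time of writing):

// Sketch: a wasm memory marked as shared is backed by a SharedArrayBuffer,
// so several workers can instantiate the same module against it and
// operate on the same bytes at the same time.
const memory = new WebAssembly.Memory({
  initial: 1,
  maximum: 16,   // a maximum is required for shared memories
  shared: true
});

// e.g. worker.postMessage({ module, memory }) and instantiate in the worker.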

Skill: SIMD

Alongside threading, there’s another technique that utilizes modern hardware, and which enables you to process things in parallel.

That is SIMD: single instruction multiple data. With SIMD, it’s possible to take a chunk of memory and split it up across different execution units, which are kind of like cores. Then you have the same bit of code—the same instruction—run across all of those execution units, but they each apply that instruction to their own bit of the data.

Skill: 64-bit addressing

Another hardware capability that WebAssembly needs to take full advantage of is 64-bit addressing.

Memory addresses are just numbers, so if your memory addresses are only 32 bits long, you can only have so many memory addresses—enough for 4 gigabytes of linear memory.

But with 64-bit addressing, you have 16 exabytes. Of course, you don’t have 16 exabytes of actual memory in your computer. So the maximum is subject to however much memory the system can actually give you. But this will take the artificial limitation on address space out of WebAssembly.

Skill: Streaming compilation

For these applications, we don’t just need them to run fast. We needed load times to be even faster than they already were. There are a few skills that we need specifically to improve load times.

One big step is to do streaming compilation—to compile a WebAssembly file while it’s still being downloaded. WebAssembly was designed specifically to enable easy streaming compilation. In Firefox, we actually compile it so fast—faster than it is coming in over the network—that it’s pretty much done compiling by the time you download the file. And other browsers are adding streaming, too.
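From a developer’s point of view, opting in is mostly a matter of using the streaming APIs. A minimal sketch (the module URL and import object are placeholders):

// Sketch: compile while the bytes are still downloading.
async function loadStreaming() {
  // The server needs to send the file with the application/wasm MIME type.
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch('/app/module.wasm'),   // hypothetical URL
    { env: {} }                  // imports, if the module needs any
  );
  return instance.exports;
}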

Another thing that helps is having a tiered compiler.

For us in Firefox, that means having two compilers. The first one—the baseline compiler—kicks in as soon as the file starts downloading. It compiles the code really quickly so that it starts up quickly.

The code it generates is fast, but not 100% as fast as it could be. To get that extra bit of performance, we run another compiler—the optimizing compiler—on several threads in the background. This one takes longer to compile, but generates extremely fast code. Once it’s done, we swap out the baseline version with the fully optimized version.

This way, we get quick start up times with the baseline compiler, and fast execution times with the optimizing compiler.

In addition, we’re working on a new optimizing compiler called Cranelift. Cranelift is designed to compile code quickly, in parallel, at a function-by-function level. At the same time, the code it generates gets even better performance than our current optimizing compiler.

Cranelift is in the development version of Firefox right now, but disabled by default. Once we enable it, we’ll get to the fully optimized code even quicker, and that code will run even faster.

But there’s an even better trick we can use to make it so we don’t have to compile at all most of the time…

Skill: Implicit HTTP caching

With WebAssembly, if you load the same code on two page loads, it will compile to the same machine code. It doesn’t need to change based on what data is flowing through it, like the JS JIT compiler needs to.

This means that we can store the compiled code in the HTTP cache. Then when the page is loading and goes to fetch the .wasm file, it will instead just pull out the precompiled machine code from the cache. This skips compiling completely for any page that you’ve already visited that’s in cache.

Skill: Other improvements

Many discussions are currently percolating around other ways to improve this, skipping even more work, so stay tuned for other load-time improvements.

Where are we with this?

Where are we with supporting these heavyweight applications right now?

Threading
For the threading, we have a proposal that’s pretty much done, but a key piece of that—SharedArrayBuffers—had to be turned off in browsers earlier this year.
They will be turned on again. Turning them off was just a temporary measure to reduce the impact of the Spectre security issue that was discovered in CPUs and disclosed earlier this year, but progress is being made, so stay tuned.
SIMD
SIMD is under very active development at the moment.
64-bit addressing
For wasm-64, we have a good picture of how we will add this, and it is pretty similar to how x86 or ARM got support for 64 bit addressing.
Streaming compilation
We added streaming compilation in late 2017, and other browsers are working on it too.
Tiered compilation
We added our baseline compiler in late 2017 as well, and other browsers have been adding the same kind of architecture over the past year.
Implicit HTTP caching
In Firefox, we’re getting close to landing support for implicit HTTP caching.
Other improvements
Other improvements are currently in discussion.

Even though this is all still in progress, you already see some of these heavyweight applications coming out today, because WebAssembly already gives these apps the performance that they need.

But once these features are all in place, that’s going to be another achievement unlocked, and more of these heavyweight applications will be able to come to the browser.

Small modules interoperating with JavaScript

But WebAssembly isn’t just for games and for heavyweight applications. It’s also meant for regular web development… for the kind of web development folks are used to: the small modules kind of web development.

Sometimes you have little corners of your app that do a lot of heavy processing, and in some cases, this processing can be faster with WebAssembly. We want to make it easy to port these bits to WebAssembly.

Again, this is a case where some of it’s already happening. Developers are already incorporating WebAssembly modules in places where there are tiny modules doing lots of heavy lifting.

One example is the parser in the source map library that’s used in Firefox’s DevTools and webpack. It was rewritten in Rust, compiled to WebAssembly, which made it 11x faster. And WordPress’s Gutenberg parser is on average 86x faster after doing the same kind of rewrite.

But for this kind of use to really be widespread—for people to be really comfortable doing it—we need to have a few more things in place.

Skill: Fast calls between JS and WebAssembly

First, we need fast calls between JS and WebAssembly, because if you’re integrating a small module into an existing JS system, there’s a good chance you’ll need to call between the two a lot. So you’ll need those calls to be fast.

But when WebAssembly first came out, these calls weren’t fast. This is where we get back to that whole MVP thing—the engines had the minimum support for calls between the two. They just made the calls work, they didn’t make them fast. So engines need to optimize these.

We’ve recently finished our work on this in Firefox. Now some of these calls are actually faster than non-inlined JavaScript to JavaScript calls. And others engines are also working on this.

Skill: Fast and easy data exchange

That brings us to another thing, though. When you’re calling between JavaScript and WebAssembly, you often need to pass data between them.

You need to pass values into the WebAssembly function or return a value from it. This can also be slow, and it can be difficult too.

There are a couple of reasons it’s hard. One is because, at the moment, WebAssembly only understands numbers. This means that you can’t pass more complex values, like objects, in as parameters. You need to convert that object into numbers and put it in the linear memory. Then you pass WebAssembly the location in the linear memory.

That’s kind of complicated. And it takes some time to convert the data into linear memory. So we need this to be easier and faster.
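For example, passing a string today might look roughly like this sketch; the exported names (memory, alloc, process) are hypothetical and depend entirely on how the module was compiled:

// Sketch: hand wasm an "address" and length instead of the string itself.
function callWithString(instance, text) {
  const bytes = new TextEncoder().encode(text);
  const ptr = instance.exports.alloc(bytes.length);   // hypothetical export
  new Uint8Array(instance.exports.memory.buffer, ptr, bytes.length).set(bytes);
  return instance.exports.process(ptr, bytes.length); // hypothetical export
}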

Skill: ES module integration

Another thing we need is integration with the browser’s built in ES module support. Right now, you instantiate a WebAssembly module using an imperative API. You call a function and it gives you back a module.

But that means that the WebAssembly module isn’t really part of the JS module graph. In order to use import and export like you do with JS modules, you need to have ES module integration.
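In other words, the wiring today looks something like the sketch below rather than a plain import statement; the URL is hypothetical, and the import form shown in the comment is still a proposal:

// Sketch: the current imperative way to get a wasm module into JS.
async function loadImperatively() {
  const response = await fetch('./module.wasm');
  const bytes = await response.arrayBuffer();
  const { instance } = await WebAssembly.instantiate(bytes, {});
  return instance.exports;
}

// With ES module integration, something like this could work instead
// (not yet standard):
// import { process } from './module.wasm';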

Skill: Toolchain integration

Just being able to import and export doesn’t get us all the way there, though. We need a place to distribute these modules, and download them from, and tools to bundle them up.

What’s the npm for WebAssembly? Well, what about npm?

And what’s the webpack or Parcel for WebAssembly? Well, what about webpack and Parcel?

These modules shouldn’t look any different to the people who are using them, so no reason to create a separate ecosystem. We just need tools to integrate with them.

Skill: Backwards compatibility

There’s one more thing that we need to really do well in existing JS applications—support older versions of browsers, even those browsers that don’t know what WebAssembly is. We need to make sure that you don’t have to write a whole second implementation of your module in JavaScript just so that you can support IE11.

Where are we on this?

So where are we on this?

 

Fast calls between JS and WebAssembly
Calls between JS and WebAssembly are fast in Firefox now, and other browsers are also working on this.
Easy and fast data exchange
For easy and fast data exchange, there are a few proposals that will help with this.

As I mentioned before, one reason you have to use linear memory for more complex kinds of data is because WebAssembly only understands numbers. The only types it has are ints and floats.

With the reference types proposal, this will change. This proposal adds a new type that WebAssembly functions can take as arguments and return. And this type is a reference to an object from outside WebAssembly—for example, a JavaScript object.

But WebAssembly can’t operate directly on this object. To actually do things like call a method on it, it will still need to use some JavaScript glue. This means it works, but it’s slower than it needs to be.

To speed things up, there’s a proposal that we’ve been calling the host bindings proposal. It lets a wasm module declare what glue must be applied to its imports and exports, so that the glue doesn’t need to be written in JS. By pulling glue from JS into wasm, the glue can be optimized away completely when calling builtin Web APIs.

There’s one more part of the interaction that we can make easier. And that has to do with keeping track of how long data needs to stay in memory. If you have some data in linear memory that JS needs access to, then you have to leave it there until the JS reads the data. But if you leave it in there forever, you have what’s called a memory leak. How do you know when you can delete the data? How do you know when JS is done with it? Currently, you have to manage this yourself.

Once the JS is done with the data, the JS code has to call something like a free function to free the memory. But this is tedious and error prone. To make this process easier, we’re adding WeakRefs to JavaScript. With this, you will be able to observe objects on the JS side. Then you can do cleanup on the WebAssembly side when that object is garbage collected.

So these proposals are all in flight. In the meantime, the Rust ecosystem has created tools that automate this all for you, and that polyfill the proposals that are in flight.

One tool in particular is worth mentioning, because other languages can use it too. It’s called wasm-bindgen. When it sees that your Rust code should do something like receive or return certain kinds of JS values or DOM objects, it will automatically create JavaScript glue code that does this for you, so you don’t need to think about it. And because it’s written in a language independent way, other language toolchains can adopt it.

ES module integration
For ES module integration, the proposal is pretty far along. We are starting work with the browser vendors to implement it.
Toolchain support
For toolchain support, there are tools like wasm-pack in the Rust ecosystem which automatically runs everything you need to package your code for npm. And the bundlers are also actively working on support.
Backwards compatibility
Finally, for backwards compatibility, there’s the wasm2js tool. That takes a wasm file and spits out the equivalent JS. That JS isn’t going to be fast, but at least that means it will work in older versions of browsers that don’t understand WebAssembly.

So we’re getting close to unlocking this achievement. And once we unlock it, we open the path to another two.

JS frameworks and compile-to-JS languages

One is rewriting large parts of things like JavaScript frameworks in WebAssembly.

The other is making it possible for statically-typed compile-to-js languages to compile to WebAssembly instead—for example, having languages like Scala.js, or Reason, or Elm compile to WebAssembly.

For both of these use cases, WebAssembly needs to support high-level language features.

Skill: GC

We need integration with the browser’s garbage collector for a couple of reasons.

First, let’s look at rewriting parts of JS frameworks. This could be good for a couple of reasons. For example, in React, one thing you could do is rewrite the DOM diffing algorithm in Rust, which has very ergonomic multithreading support, and parallelize that algorithm.

You could also speed things up by allocating memory differently. In the virtual DOM, instead of creating a bunch of objects that need to be garbage collected, you could used a special memory allocation scheme. For example, you could use a bump allocator scheme which has extremely cheap allocation and all-at-once deallocation. That could potentially help speed things up and reduce memory usage.

But you’d still need to interact with JS objects—things like components—from that code. You can’t just continually copy everything in and out of linear memory, because that would be difficult and inefficient.

So you need to be able to integrate with the browser’s GC so you can work with components that are managed by the JavaScript VM. Some of these JS objects need to point to data in linear memory, and sometimes the data in linear memory will need to point out to JS objects.

If this ends up creating cycles, it can mean trouble for the garbage collector. It means the garbage collector won’t be able to tell if the objects are used anymore, so they will never be collected. WebAssembly needs integration with the GC to make sure these kinds of cross-language data dependencies work.

This will also help statically-typed languages that compile to JS, like Scala.js, Reason, Kotlin or Elm. These languages use JavaScript’s garbage collector when they compile to JS. Because WebAssembly can use that same GC—the one that’s built into the engine—these languages will be able to compile to WebAssembly instead and use that same garbage collector. They won’t need to change how GC works for them.

Skill: Exception handling

We also need better support for handling exceptions.

Some languages, like Rust, do without exceptions. But in other languages, like C++, JS or C#, exception handling is sometimes used extensively.

You can polyfill exception handling currently, but the polyfill makes the code run really slowly. So the default when compiling to WebAssembly is currently to compile without exception handling.

However, since JavaScript has exceptions, even if you’ve compiled your code to not use them, JS may throw one into the works. If your WebAssembly function calls a JS function that throws, then the WebAssembly module won’t be able to correctly handle the exception. So languages like Rust choose to abort in this case. We need to make this work better.

Skill: Debugging

Another thing that people working with JS and compile-to-JS languages are used to having is good debugging support. Devtools in all of the major browsers make it easy to step through JS. We need this same level of support for debugging WebAssembly in browsers.

Skill: Tail calls

And finally, for many functional languages, you need to have support for something called tail calls. I’m not going to get too into the details on this, but basically it lets you call a new function without adding a new stack frame to the stack. So for functional languages that support this, we want WebAssembly to support it too.

Where are we on this?

So where are we on this?

Garbage collection
For garbage collection, there are two proposals currently underway:

The Typed Objects proposal for JS, and the GC proposal for WebAssembly. Typed Objects will make it possible to describe an object’s fixed structure. There is an explainer for this, and the proposal will be discussed at an upcoming TC39 meeting.

The WebAssembly GC proposal will make it possible to directly access that structure. This proposal is under active development.

With both of these in place, both JS and WebAssembly know what an object looks like and can share that object and efficiently access the data stored on it. Our team actually already has a prototype of this working. However, it still will take some time for these to go through standardization so we’re probably looking at sometime next year.

Exception handling
Exception handling is still in the research and development phase, and there’s work now to see if it can take advantage of other proposals like the reference types proposal I mentioned before.
Debugging
For debugging, there is currently some support in browser devtools. For example, you can step through the text format of WebAssembly in the Firefox debugger. But it’s still not ideal. We want to be able to show you where you are in your actual source code, not in the assembly. The thing that we need to do for that is figure out how source maps—or a source maps type thing—work for WebAssembly. So there’s a subgroup of the WebAssembly CG working on specifying that.
Tail calls
The tail calls proposal is also underway.

Once those are all in place, we’ll have unlocked JS frameworks and many compile-to-JS languages.

So, those are all achievements we can unlock inside the browser. But what about outside the browser?

Outside the Browser

Now, you may be confused when I talk about “outside the browser”. Because isn’t the browser what you use to view the web? And isn’t that right in the name—WebAssembly?

But the truth is the things you see in the browser—the HTML and CSS and JavaScript—are only part of what makes the web. They are the visible part—they are what you use to create a user interface—so they are the most obvious.

But there’s another really important part of the web which has properties that aren’t as visible.

That is the link. And it is a very special kind of link.

The innovation of this link is that I can link to your page without having to put it in a central registry, and without having to ask you or even know who you are. I can just put that link there.

It’s this ease of linking, without any oversight or approval bottlenecks, that enabled our web. That’s what enabled us to form these global communities with people we didn’t know.

But if all we have is the link, there are two problems here that we haven’t addressed.

The first one is… you go visit this site and it delivers some code to you. How does it know what kind of code it should deliver to you? Because if you’re running on a Mac, then you need different machine code than you do on Windows. That’s why you have different versions of programs for different operating systems.

Then should a web site have a different version of the code for every possible device? No.

Instead, the site has one version of the code—the source code. This is what’s delivered to the user. Then it gets translated to machine code on the user’s device.

The name for this concept is portability.

So that’s great, you can load code from people who don’t know you and don’t know what kind of device you’re running.

But that brings us to a second problem. If you don’t know these people whose web pages you’re loading, how do you know what kind of code they’re giving you? It could be malicious code. It could be trying to take over your system.

Doesn’t this vision of the web—running code from anybody whose link you follow—mean that you have to blindly trust anyone who’s on the web?

This is where the other key concept from the web comes in.

That’s the security model. I’m going to call it the sandbox.

Basically, the browser takes the page—that other person’s code—and instead of letting it run around willy-nilly in your system, it puts it in a sandbox. It puts a couple of toys that aren’t dangerous into that sandbox so that the code can do some things, but it leaves the dangerous things outside of the sandbox.

So the utility of the link is based on these two things:

  • Portability—the ability to deliver code to users and have it run on any type of device that can run a browser.
  • And the sandbox—the security model that lets you run that code without risking the integrity of your machine.

So why does this distinction matter? Why does it make a difference if we think of the web as something that the browser shows us using HTML, CSS, and JS, or if we think of the web in terms of portability and the sandbox?

Because it changes how you think about WebAssembly.

You can think about WebAssembly as just another tool in the browser’s toolbox… which it is.

It is another tool in the browser’s toolbox. But it’s not just that. It also gives us a way to take these other two capabilities of the web—the portability and the security model—and take them to other use cases that need them too.

We can expand the web past the boundaries of the browser. Now let’s look at where these attributes of the web would be useful.

Node.js

How could WebAssembly help Node? It could bring full portability to Node.

Node gives you most of the portability of JavaScript on the web. But there are lots of cases where Node’s JS modules aren’t quite enough—where you need to improve performance or reuse existing code that’s not written in JS.

In these cases, you need Node’s native modules. These modules are written in languages like C, and they need to be compiled for the specific kind of machine that the user is running on.

Native modules are either compiled when the user installs, or precompiled into binaries for a wide matrix of different systems. One of these approaches is a pain for the user, the other is a pain for the package maintainer.

Now, if these native modules were written in WebAssembly instead, then they wouldn’t need to be compiled specifically for the target architecture. Instead, they’d just run like the JavaScript in Node runs. But they’d do it at nearly native performance.

So we get to full portability for the code running in Node. You could take the exact same Node app and run it across all different kinds of devices without having to compile anything.

But WebAssembly doesn’t have direct access to the system’s resources. Native modules in Node aren’t sandboxed—they have full access to all of the dangerous toys that the browser keeps out of the sandbox. In Node, JS modules also have access to these dangerous toys because Node makes them available. For example, Node provides methods for reading from and writing files to the system.

For Node’s use case, it makes a certain amount of sense for modules to have this kind of access to dangerous system APIs. So if WebAssembly modules don’t have that kind of access by default (like Node’s current modules do), how could we give WebAssembly modules the access they need? We’d need to pass in functions so that the WebAssembly module can work with the operating system, just as Node does with JS.

For Node, this will probably include a lot of the functionality that’s in things like the C standard library. It would also likely include things that are part of POSIX—the Portable Operating System Interface—which is an older standard that helps with compatibility. It provides one API for interacting with the system across a bunch of different Unix-like OSs. Modules would definitely need a bunch of POSIX-like functions.
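Mechanically, a host hands these capabilities to a module through its import object. The sketch below shows the general shape, though the function name and namespace are made up—deciding on a standard set is exactly the open question here:

// Sketch: the host decides which system-like functions a module gets.
const hostImports = {
  env: {
    // Hypothetical POSIX-like function implemented by the host: Node
    // could back it with the real filesystem, the web with a virtual one.
    read_file(pathPtr, pathLen) {
      return -1; // hypothetical "not implemented" status code
    }
  }
};

// WebAssembly.instantiate(wasmBytes, hostImports) then gives the module
// access to exactly these functions and nothing else.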

Skill: Portable interface

What the Node core folks would need to do is figure out the set of functions to expose and the API to use.

But wouldn’t it be nice if that were actually something standard? Not something that was specific to just Node, but could be used across other runtimes and use cases too?

A POSIX for WebAssembly if you will. A PWSIX? A portable WebAssembly system interface.

And if that were done in the right way, you could even implement the same API for the web. These standard APIs could be polyfilled onto existing Web APIs.

These functions wouldn’t be part of the WebAssembly spec. And there would be WebAssembly hosts that wouldn’t have them available. But for those platforms that could make use of them, there would be a unified API for calling these functions, no matter which platform the code was running on. And this would make universal modules—ones that run across both the web and Node—so much easier.

Where are we with this?

So, is this something that could actually happen?

A few things are working in this idea’s favor. There’s a proposal called package name maps that will provide a mechanism for mapping a module name to a path to load the module from. And that will likely be supported by both browsers and Node, which can use it to provide different paths, and thus load entirely different modules, but with the same API. This way, the .wasm module itself can specify a single (module-name, function-name) import pair that Just Works on different environments, even the web.

With that mechanism in place, what’s left to do is actually figure out what functions make sense and what their interface should be.

There’s no active work on this at the moment. But a lot of discussions are heading in this direction right now. And it looks likely to happen, in one form or another.

Which is good, because unlocking this gets us halfway to unlocking some other use cases outside the browser. And with this in place, we can accelerate the pace.

So, what are some examples of these other use cases?

CDNs, Serverless, and Edge Computing

One example is things like CDNs, and Serverless, and Edge Computing. These are cases where you’re putting your code on someone else’s server, and they make sure that the server is maintained and that the code is close to all of your users.

Why would you want to use WebAssembly in these cases? There was a great talk explaining exactly this at a conference recently.

Fastly is a company that provides CDNs and edge computing. And their CTO, Tyler McMullen, explained it this way (and I’m paraphrasing here):

If you look at how a process works, code in that process doesn’t have boundaries. Functions have access to whatever memory in that process they want, and they can call whatever other functions they want.

When you’re running a bunch of different people’s services in the same process, this is an issue. Sandboxing could be a way to get around this. But then you get to a scale problem.

For example, if you use a JavaScript VM like Firefox’s SpiderMonkey or Chrome’s V8, you get a sandbox and you can put hundreds of instances into a process. But with the numbers of requests that Fastly is servicing, you don’t just need hundreds per process—you need tens of thousands.

Tyler does a better job of explaining all of it in his talk, so you should go watch that. But the point is that WebAssembly gives Fastly the safety, speed, and scale needed for this use case.

So what did they need to make this work?

Skill: Runtime

They needed to create their own runtime. That means taking a WebAssembly compiler—something that can compile WebAssembly down to machine code—and combining it with the functions for interacting with the system that I mentioned before.

For the WebAssembly compiler, Fastly used Cranelift, the compiler that we’re also building into Firefox. It’s designed to be very fast and doesn’t use much memory.

Now, for the functions that interact with the rest of the system, they had to create their own, because we don’t have that portable interface available yet.

So it’s possible to create your own runtime today, but it takes some effort. And it’s effort that will have to be duplicated across different companies.

What if we didn’t just have the portable interface, but we also had a common runtime that could be used across all of these companies and other use cases? That would definitely speed up development.

Then other companies could just use that runtime—like they do Node today—instead of creating their own from scratch.

Where are we on this?

So what’s the status of this?

Even though there’s no standard runtime yet, there are a few runtime projects in flight right now. These include WAVM, which is built on top of LLVM, and wasmjit.

In addition, we’re planning a runtime that’s built on top of Cranelift, called wasmtime.

And once we have a common runtime, that speeds up development for a bunch of different use cases. For example…

Portable CLI tools


WebAssembly can also be used in more traditional operating systems. Now to be clear, I’m not talking about in the kernel (although brave souls are trying that, too) but WebAssembly running in Ring 3—in user mode.

Then you could do things like have portable CLI tools that could be used across all different kinds of operating systems.

And this is pretty close to another use case…

Internet of Things


The internet of things includes devices like wearable technology, and smart home appliances.

These devices are usually resource constrained—they don’t pack much computing power and they don’t have much memory. And this is exactly the kind of situation where a compiler like Cranelift and a runtime like wasmtime would shine, because they would be efficient and low-memory. And in the extremely-resource-constrained case, WebAssembly makes it possible to fully compile to machine code before loading the application on the device.

There’s also the fact that there are so many of these different devices, and they are all slightly different. WebAssembly’s portability would really help with that.

So that’s one more place where WebAssembly has a future.

Conclusion

Now let’s zoom back out and look at this skill tree.

Skill tree showing WebAssembly skills which will be filled in through the course of the post.

I said at the beginning of this post that people have a misconception about WebAssembly—this idea that the WebAssembly that landed in the MVP was the final version of WebAssembly.

I think you can see now why this is a misconception.

Yes, the MVP opened up a lot of opportunities. It made it possible to bring a lot of desktop applications to the web. But we still have many use cases to unlock, from heavy-weight desktop applications, to small modules, to JS frameworks, to all the things outside the browser… Node.js, and serverless, and the blockchain, and portable CLI tools, and the internet of things.

So the WebAssembly that we have today is not the end of this story—it’s just the beginning.

The post WebAssembly’s post-MVP future: A cartoon skill tree appeared first on Mozilla Hacks – the Web developer blog.

Introducing Opus 1.3

The Opus Audio Codec gets another major update with the release of version 1.3 (demo).

Opus is a totally open, royalty-free audio codec that can be used for all audio applications, from music streaming and storage to high-quality video-conferencing and VoIP. Six years after its standardization by the IETF, Opus is now included in all major browsers and mobile operating systems. It has been adopted for a wide range of applications, and is the default WebRTC codec.

This release brings quality improvements to both speech and music compression, while remaining fully compatible with RFC 6716. Here’s a few of the upgrades that users and implementers will care about the most.

speech vs music probabilities graph with Opus 1.3

Opus 1.3 includes a brand new speech/music detector. It is based on a recurrent neural network and is both simpler and more reliable than the detector that has been used since version 1.1. The new detector should improve the Opus performance on mixed content encoding, especially at bitrates below 48 kb/s.

There are also many improvements for speech encoding at lower bitrates, both for mono and stereo. The demo contains many more details, as well as some audio samples. This new release also includes a cool new feature: ambisonics support. Ambisonics can be used to encode 3D audio soundtracks for VR and 360 videos.

You can read all the details in The Release of Opus 1.3.


The post Introducing Opus 1.3 appeared first on Mozilla Hacks – the Web developer blog.

Trustworthy Chrome Extensions, by default

Incredibly, it’s been nearly a decade since we launched the Chrome extensions system. Thanks to the hard work and innovation of our developer community, there are now more than 180,000 extensions in the Chrome Web Store, and nearly half of Chrome desktop users actively use extensions to customize Chrome and their experience on the web.


The extensions team’s dual mission is to help users tailor Chrome’s functionality to their individual needs and interests, and to empower developers to build rich and useful extensions. But, first and foremost, it’s crucial that users be able to trust the extensions they install are safe, privacy-preserving, and performant. Users should always have full transparency about the scope of their extensions’ capabilities and data access.


We’ve recently taken a number of steps toward improved extension security with the launch of out-of-process iframes, the removal of inline installation, and significant advancements in our ability to detect and block malicious extensions using machine learning. Looking ahead, there are more fundamental changes needed so that all Chrome extensions are trustworthy by default.


Today we’re announcing some upcoming changes and plans for the future:


User controls for host permissions
Beginning in Chrome 70, users will have the choice to restrict extension host access to a custom list of sites, or to configure extensions to require a click to gain access to the current page.



While host permissions have enabled thousands of powerful and creative extension use cases, they have also led to a broad range of misuse – both malicious and unintentional – because they allow extensions to automatically read and change data on websites. Our aim is to improve user transparency and control over when extensions are able to access site data. In subsequent milestones, we’ll continue to optimize the user experience toward this goal while improving usability. If your extension requests host permissions, we encourage you to review our transition guide and begin testing as soon as possible.



Changes to the extensions review process
Going forward, extensions that request powerful permissions will be subject to additional compliance review. We’re also looking very closely at extensions that use remotely hosted code, with ongoing monitoring. Your extension’s permissions should be as narrowly-scoped as possible, and all your code should be included directly in the extension package, to minimize review time.



New code readability requirements
Starting today, Chrome Web Store will no longer allow extensions with obfuscated code. This includes code within the extension package as well as any external code or resource fetched from the web. This policy applies immediately to all new extension submissions. Existing extensions with obfuscated code can continue to submit updates over the next 90 days, but will be removed from the Chrome Web Store in early January if not compliant.


Today over 70% of malicious and policy violating extensions that we block from Chrome Web Store contain obfuscated code. At the same time, because obfuscation is mainly used to conceal code functionality, it adds a great deal of complexity to our review process. This is no longer acceptable given the aforementioned review process changes.


Additionally, since JavaScript code is always running locally on the user’s machine, obfuscation is insufficient to protect proprietary code from a truly motivated reverse engineer. Obfuscation techniques also come with hefty performance costs such as slower execution and increased file and memory footprints.


Ordinary minification, on the other hand, typically speeds up code execution as it reduces code size, and is much more straightforward to review. Thus, minification will still be allowed, including the following techniques:

  • Removal of whitespace, newlines, code comments, and block delimiters
  • Shortening of variable and function names
  • Collapsing the number of JavaScript files


If you have an extension in the store with obfuscated code, please review our updated content policies as well as our recommended minification techniques for Google Developers, and submit a new compliant version before January 1st, 2019.

Required 2-Step Verification
In 2019, enrollment in 2-Step Verification will be required for Chrome Web Store developer accounts. If your extension becomes popular, it can attract attackers who want to steal it by hijacking your account, and 2-Step Verification adds an extra layer of security by requiring a second authentication step from your phone or a physical security key. We strongly recommend that you enroll as soon as possible.

For even stronger account security, consider the Advanced Protection Program. Advanced protection offers the same level of security that Google relies on for its own employees, requiring a physical security key to provide the strongest defense against phishing attacks.



Looking ahead: Manifest v3
In 2019 we will introduce the next extensions manifest version. Manifest v3 will entail additional platform changes that aim to create stronger security, privacy, and performance guarantees. We want to help all developers fall into the pit of success; writing a secure and performant extension in Manifest v3 should be easy, while writing an insecure or non-performant extension should be difficult.


Some key goals of manifest v3 include:
  • More narrowly-scoped and declarative APIs, to decrease the need for overly-broad access and enable more performant implementation by the browser, while preserving important functionality
  • Additional, easier mechanisms for users to control the permissions granted to extensions
  • Modernizing to align with new web capabilities, such as supporting Service Workers as a new type of background process

We intend to make the transition to manifest v3 as smooth as possible and we’re thinking carefully about the rollout plan. We’ll be in touch soon with more specific details.


We recognize that some of the changes announced today may require effort in the future, depending on your extension. But we believe the collective result will be worth that effort for all users, developers, and for the long term health of the Chrome extensions ecosystem. We’re committed to working with you to transition through these changes and are very interested in your feedback. If you have questions or comments, please get in touch with us on the Chromium extensions forum.


Posted by James Wagner, Chrome Extensions Product Manager



Chrome 70 beta: shape detection, web authentication, and more

Unless otherwise noted, changes described below apply to the newest Chrome Beta channel release for Android, Chrome OS, Linux, macOS, and Windows. View a complete list of the features in Chrome 70 on ChromeStatus.com. Chrome 70 is beta as of September 13, 2018.

Shape Detection Origin Trial

The Shape Detection API makes a device’s shape detection capabilities available on the web, letting you identify faces, barcodes, and text in images. It does this without the use of a performance-killing library. As of Chrome 70, this API is available for experimentation through a Chrome origin trial.

The Shape Detection API consists of three APIs: A Face Detection API, a Barcode Detection API and a Text Detection API. Given an image bitmap or a blob, the Face Detection API returns the location of faces and the locations of eyes, noses, and mouths within those faces. To give you rudimentary control of performance, you can limit the number of returned faces and prioritize speed over accuracy. The Barcode Detection API decodes barcodes and QR codes into strings. (There is a QR code demo at https://qrsnapper.com/) The value can be anything from a single set of digits to multi-line text. The Text Detection API reads Latin-1 text (as defined in iso8859-1) in images. All of these APIs only expose what’s supported in the underlying device.

A short example will give you a taste of these APIs, all of which work in a similar fashion. The code below finds all barcodes or QR codes in a given image and prints their values to the console.

// Find all barcodes or QR codes in the image and log their raw values.
// detect() returns a promise, so this must run in an async context.
const image = document.getElementById('someImage');
(async () => {
  try {
    const barcodeDetector = new BarcodeDetector();
    const barcodes = await barcodeDetector.detect(image);
    barcodes.forEach(barcode => console.log(barcode.rawValue));
  } catch (exception) {
    console.error('Boo! Barcode detection failed:', exception);
  }
})();

Web Authentication

Chrome 70 has two updates to the Web Authentication API related to the PublicKeyCredential type.

The Credential Management API, enabled in Chrome 51, defined a framework for handling credentials that included semantics for creating, getting, and storing them. It did this through two credential types: PasswordCredential and FederatedCredential. The Web Authentication API adds a third credential type, PublicKeyCredential, which allows web applications to create and use strong, cryptographically attested, and application-scoped credentials to strongly authenticate users. The PublicKeyCredential type was enabled by default on desktop in Chrome 67. In Chrome 70 it is also enabled by default on Android.

Also enabled by default are macOS’s TouchID and Android’s fingerprint sensor via Web Authentication. These allow developers to access biometric authenticators through the Credential Management API’s PublicKeyCredential type.
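To give a feel for the API, here is a minimal, illustrative sketch of registering a credential with navigator.credentials.create(). The relying-party name, user details, and challenge below are placeholder values; in a real deployment the challenge and user ID come from your server, which also verifies the returned attestation.

// Minimal registration sketch (run inside an async function, after a user
// gesture). All rp/user values below are placeholders.
const credential = await navigator.credentials.create({
  publicKey: {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // normally server-issued
    rp: { name: 'Example Site' },
    user: {
      id: crypto.getRandomValues(new Uint8Array(16)),
      name: 'user@example.com',
      displayName: 'Example User'
    },
    pubKeyCredParams: [{ type: 'public-key', alg: -7 }] // -7 = ES256
  }
});
console.log(credential.id); // send credential.response to the server for verification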

A Web Authentication verification screen

Other Features in this Release

Displaying a dialog causes pages to lose fullscreen

Dialog boxes, specifically authentication prompts, payments, and file pickers, require context for users to make decisions. Fullscreen is, by definition, immersive and removes the context a user needs to make a decision. Chrome now exits fullscreen whenever a page shows a dialog box.

HTML

Add referrerpolicy support to <script> elements

Many resource-fetching elements support the referrerpolicy attribute, which lets developers provide a keyword to influence the value of the Referer HTTP header that accompanies a request. The <link> element already supports this, so it has been technically possible to preload a script with a developer-set referrer policy. Starting in this version of Chrome, the <script> element supports the referrerpolicy attribute as well.
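As a quick, hedged example, the same policy can be applied to a dynamically injected script through the reflected referrerPolicy property; the script URL below is only a placeholder:

// Load a script while suppressing the Referer header entirely.
const script = document.createElement('script');
script.src = 'https://example.com/widget.js';  // placeholder URL
script.referrerPolicy = 'no-referrer';         // same as referrerpolicy="no-referrer"
document.head.appendChild(script);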

The <rp> element defaults to display:none

The default style of the <rp> element has changed from display:inline to display:none, even when it is not inside a <ruby> element, as defined in the HTML specification. This behavior is implemented in the user agent stylesheet, but a web author can override it.

Intervention Reports

An intervention is when a user agent does not honor an application request for security, performance, or annoyance reasons. With this change, Chrome can be configured to send intervention and deprecation messages to your servers using the Report-To HTTP Response header field and surface them in the ReportingObserver interface. This is the first of several proposed uses for the Report-To header. Follow these links to learn more about the header and the interface.
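On the JavaScript side, a rough sketch of observing these reports with ReportingObserver looks like this:

// Log intervention and deprecation reports, including ones generated
// before the observer was created (buffered: true).
const observer = new ReportingObserver((reports) => {
  for (const report of reports) {
    console.log(report.type, report.url, report.body);
  }
}, { types: ['intervention', 'deprecation'], buffered: true });
observer.observe();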

Media

Support codec and container switching with MSE using SourceBuffer.changeType()

This change adds the SourceBuffer.changeType() method to improve cross-codec or cross-bytestream transitions during playback with Media Source Extensions.
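A rough sketch of a cross-codec transition might look like the following; the MIME strings are illustrative and must match the segments you actually append:

const video = document.querySelector('video');
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);
mediaSource.addEventListener('sourceopen', () => {
  const sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vp9"');
  // ...append WebM/VP9 segments with sourceBuffer.appendBuffer()...
  // Before switching to segments in a different container or codec:
  sourceBuffer.changeType('video/mp4; codecs="avc1.4d401f"');
  // ...append MP4/H.264 segments...
});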

Support Opus in mp4 with Media Source Extensions

Opus is an audio codec already supported via the src attribute on HTML5 <audio> and <video> elements. It is now also supported by Media Source Extensions.

‘name’ attribute for dedicated workers

Dedicated workers now have a name attribute, which is specified by an optional argument on the worker’s constructor. This lets you distinguish dedicated workers by name when you have multiple workers with the same URL. Developers can print the name in the DevTools console, which makes it easier to debug workers. When the name parameter is omitted, an empty string is used as the default value. For more information, see the discussion on GitHub.
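For example (the script path and worker name below are hypothetical):

// main.js: the name passed here shows up in the DevTools console.
const worker = new Worker('worker.js', { name: 'image-resizer' });

// worker.js: the name is exposed on the worker's global scope.
console.log(self.name); // "image-resizer"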

ontouch* APIs default to disabled on desktop

To avoid confusion on touch feature detection, ontouch* members on window, document, and element are disabled by default on desktop (Mac, Windows, Linux, ChromeOS). Note that this is not disabling touches, and usage such as addEventListener("touchstart", ...) is not affected.
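If your code previously sniffed for ontouchstart to detect touch support, a more robust check might look roughly like this:

// Feature-detect touch support without relying on ontouch* members.
const hasTouch = navigator.maxTouchPoints > 0;
console.log('Touch available:', hasTouch);
// Touch events themselves still fire where supported:
document.addEventListener('touchstart', () => console.log('touch!'));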

Options dictionary for postMessage methods

An optional PostMessageOptions object has been added as an argument to postMessage() for 6 of the 7 interfaces where it’s supported, specifically DedicatedWorkerGlobalScope, MessagePort, ServiceWorker, ServiceWorker.Client, Window, and Worker. This gives the method a consistent signature across those interfaces and allows it to be extended in the future. Because BroadcastChannel.postMessage() doesn’t take additional arguments (such as transfer), it is not being changed.
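For example, transferring an ArrayBuffer to a worker can now be written with the options dictionary (the worker script name is a placeholder):

const worker = new Worker('worker.js'); // placeholder worker script
const buffer = new ArrayBuffer(1024);
// The options dictionary replaces the positional transfer-list argument.
worker.postMessage({ buffer }, { transfer: [buffer] });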

RTCPeerConnection.getConfiguration()

getConfiguration() is implemented according to the WebRTC 1.0 specification. Specifically, it returns the last configuration applied via setConfiguration(), or, if setConfiguration() hasn’t been called, the configuration the RTCPeerConnection was constructed with.
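A quick sketch (the STUN server URL is a placeholder):

const pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.example.org' }]
});
console.log(pc.getConfiguration().iceServers); // from the constructor
pc.setConfiguration({ iceServers: [] });
console.log(pc.getConfiguration().iceServers); // last applied value: []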

Symbol.prototype.description

A description property is being added to Symbol.prototype to provide a more ergonomic way of accessing the description of a Symbol. Previously, the description could only be accessed indirectly through Symbol.prototype.toString().
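For example:

const sym = Symbol('session id');
console.log(sym.toString());   // "Symbol(session id)"  (the old, indirect route)
console.log(sym.description);  // "session id"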

TLS 1.3

TLS 1.3 is an overhaul of the TLS protocol with a simpler, less error-prone design that improves both efficiency and security. The new design reduces the number of round-trips required to establish a connection and removes legacy insecure options, making it easier to securely configure a server. It additionally encrypts more of the handshake and makes the resumption mode more resilient to key compromise.

Update behavior of CSS Grid Layout percentage row tracks and gutters

This change affects how percentage row tracks and gutters are resolved in grid containers with indefinite heights. Previously, they behaved like percentage heights in regular blocks, but the CSS Working Group has resolved to make them behave the same as columns, so the two axes are symmetric. Percentages are now ignored when computing the intrinsic height and resolved afterwards against that height, giving the column and row axes the same behavior when resolving percentage tracks and gutters.

Web Bluetooth available on Windows 10

Web Bluetooth allows websites to communicate over GATT with nearby user-selected Bluetooth devices in a secure and privacy-preserving way. In Chrome 56, this shipped on Android, ChromeOS, and macOS. In Chrome 70 it is shipping on Windows 10. For earlier versions of Windows and Linux, it is still behind a flag (chrome://flags/#enable-experimental-web-platform-features).
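As a hedged sketch, reading a battery level from a user-selected device looks roughly like this; it must run inside an async function and be triggered by a user gesture:

const device = await navigator.bluetooth.requestDevice({
  filters: [{ services: ['battery_service'] }]
});
const server = await device.gatt.connect();
const service = await server.getPrimaryService('battery_service');
const characteristic = await service.getCharacteristic('battery_level');
const value = await characteristic.readValue();
console.log(`Battery level: ${value.getUint8(0)}%`);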

WebUSB on Dedicated Workers

WebUSB is enabled inside dedicated workers. This allows developers to perform heavy I/O and processing of data from a USB device on a separate thread to reduce the performance impact on the main thread.
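Inside a dedicated worker, already-authorized devices can be enumerated roughly like this (permission must have been granted on the main thread via requestDevice() first):

// worker.js: run inside an async function.
const devices = await navigator.usb.getDevices();
for (const device of devices) {
  console.log(device.productName, device.vendorId);
}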

Deprecations and Interoperability Improvements

Chrome sometimes deprecates, removes, or changes features to increase interoperability with other browsers. This version of Chrome includes the following such changes.

Remove AppCache from insecure contexts.

AppCache is now removed from insecure contexts. AppCache is a powerful feature that allows offline and persistent access to an origin, which makes it a potent privilege-escalation target for cross-site scripting. This change removes that attack vector by allowing AppCache only over HTTPS. Developers looking for an alternative to AppCache are encouraged to use service workers. An experimental library is available to ease that transition.

Remove anonymous getter for HTMLFrameSetElement

The anonymous getter for HTMLFrameSetElement is non-standard and therefore removed.

Deprecate and remove Gamepads.item()

The legacy item() accessor is removed from the Gamepads array. This change improves compatibility with Firefox, which is so far the only browser to implement GamepadList.
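If you were using the accessor, plain array indexing covers the same ground:

const pads = navigator.getGamepads();
const firstPad = pads[0];         // works in all browsers
// const firstPad = pads.item(0); // legacy accessor, no longer available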

Deprecate Custom Elements v0

Custom Elements are a web components technology that lets you create new HTML tags, beef up existing tags, or extend components authored by other developers. Custom Elements v1 have been implemented in Chrome since version 54, which shipped in October 2016. Custom Elements v0 is now deprecated with removal expected in Chrome 73, around March 2019.

Deprecate HTML Imports

HTML Imports, which allow HTML to be imported from one document to another, are now deprecated. Removal is expected in Chrome 73, around March 2019. Sites depending on HTML imports already require a polyfill on non-Chromium browsers. When HTML imports is removed, sites that have the polyfill should continue to work on Chrome.

Deprecate Shadow DOM v0

Shadow DOM is a web components technology that uses scoped subtrees inside elements. Shadow DOM v1 has been implemented in Chrome since version 53, which shipped in August 2016. Shadow DOM v0 is now deprecated, with removal expected in Chrome 73, around March 2019. Sites depending on Shadow DOM v0 already require a polyfill on non-Chromium browsers. When Shadow DOM v0 is removed, sites that have the polyfill should continue to work on Chrome.

Deprecate SpeechSynthesis.speak() without user activation

The speechSynthesis.speak() function now throws an error if the document has not received a user activation. This API is being abused by sites since it is one of the only remaining APIs which does not adhere to autoplay policies in Chrome.
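In practice this means tying speak() to an explicit interaction; the button selector below is a placeholder:

document.querySelector('#read-aloud').addEventListener('click', () => {
  const utterance = new SpeechSynthesisUtterance('Hello from Chrome 70!');
  speechSynthesis.speak(utterance); // allowed: it follows a user activation
});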

10 years of Speed in Chrome


Speed has been one of Chrome’s four core principles since it was first launched ten years ago. We’ve always wanted to enable web developers to provide users with fast, engaging web experiences. On Chrome’s 10th birthday, we thought it would be fun to look at what we’ve done to improve speed over the years and what we’re doing next.

Many components in the browser contribute to speed

V8 is Chrome’s JavaScript and WebAssembly engine. With web pages using an increasing amount of JavaScript, a fast engine to handle it is an important cornerstone. Over the years, we worked on a new JavaScript execution pipeline for V8, launching Ignition (a new interpreter) and TurboFan (an optimizing compiler). These allowed us to improve performance on the Speedometer benchmark by 5-10%. Script streaming enabled us to parse JavaScript on a background thread as soon as it began downloading, improving page loads by up to 10%. We then added background compilation reducing main-thread compile time by up to 20%.
Our work on project Orinoco enabled concurrent garbage collection, freeing up the main thread and reducing jank. Over time, we also shifted to focusing on real-world JavaScript performance, helping us double the performance of the React.js runtime and improve performance for libraries like Vue.js, Preact, and Angular up to 40%. Parallel, concurrent, and incremental garbage collection reduced garbage collection induced jank by 100× since the initial V8 commit. We also implemented WebAssembly, enabling developers to run non-JavaScript code on the web with predictable performance, and launched the Liftoff baseline compiler to ensure fast startup times of WASM apps. These new components are just the latest in a 10-year effort that has improved V8’s performance release-to-release for an improvement of 20x over the years.


V8 Bench scores for a range of Chrome releases over the years. V8 Bench is the predecessor to the old Octane benchmark. We’ve used it for this chart because unlike Octane, it can run in all Chrome versions, including the initial Chrome Beta.

Chrome has also played a key role in helping evolve network protocols and transport layers through SPDY, HTTP/2 and QUIC. SPDY aimed to address limitations of HTTP/1.1 and became the foundation of HTTP/2 protocol, which is now supported by all modern browsers. In parallel, the team has been actively iterating on QUIC, which aims to further improve latency and user experience and now has an active IETF effort behind it. QUIC’s benefits are noticeable for video services like YouTube. Users reported 30% fewer rebuffers when watching videos over QUIC.



Next up is Chrome’s rendering pipeline. This is responsible for ensuring webpages are responsive to users and display at 60 frames per second (fps). To display content at 60fps, Chrome has 16ms to render each frame. This includes JavaScript execution, style, layout, paint and pushing pixels to the user’s screen. Of this 16ms, the less Chrome uses, the more web developers have to delight their users. Improvements to our rendering pipeline have included optimizing how we identify which elements in a page need to be redrawn and better tracking visually non-overlapping sets. This reduced the time to paint new frames to the screen by up to 35%.

In 2015, a user-centric performance model called RAIL was introduced by the Chrome team. We recently updated it.


What about memory consumption? Between Chrome 63 and 66, a ~20-30% improvement in renderer process memory usage was observed. We hope to continue exploring ways to build on this now that site-isolation has landed. Ignition and TurboFan reduced V8’s overall memory footprint, slimming it down by 5-10% on all devices and platforms supported by V8. Some sleuthing this year also discovered memory leaks impacting 7% of sites on the web, which we’ve now fully fixed. Many components contribute to Chrome’s speed including the DOM, CSS and storage systems like IndexedDB. To learn more about our improvements to performance, continue keeping an eye on the Chromium Blog.

Give web developers more power to measure and optimize their web pages


Understanding where to start with improving your site can be a tedious process. To help, we explored several tools for understanding the lab signals and the real-world experience felt by your users. Over the years, the Chrome DevTools Performance panel became a powerful way to visualize a play-by-play breakdown of how web pages were displayed in a lab setting. To further lower the friction of finding the performance opportunities a site has, we then worked on Lighthouse, a tool for analyzing the quality of your website that gives you clear measurements of your site’s performance and specific guidance for improving your users’ experience. Lighthouse can be accessed directly from inside the DevTools Audits panel, run from the command line, or integrated with other development products like WebPageTest.
Lighthouse running in the Chrome DevTools Audits panel


To complement the lab data provided by Lighthouse, we released the Chrome User Experience Report to help developers get access to field metrics for the experience their users feel in the real-world, such as First Contentful Paint and First Input Delay. Now, developers can build out their own custom site performance reports and track progress over millions of origins using the CrUX Dashboard.

We also introduced a number of Web Platform capabilities to help developers optimize the loading of their sites. We shipped Resource Hints and <link rel=preload> to allow developers to inform the browser what resources are critical to load early on. Chrome was one of the first browsers to implement support for byte-saving approaches like Brotli for compression, WOFF2 for smaller web fonts and WebP support for images.
We’ve been excited to see an increasing number of browsers support these features over time. Chrome implemented Service Workers, enabling offline caching & network resilience for repeat visits to pages. We’re delighted to see broad modern browser support for the feature.

In fact, Google Search now uses Service Worker and navigation preload for opportunistic caching on repeat searches. This led to a 2x improvement in page load times for repeat visits.
As we look to the future, we are also excited about the potential that emerging standards like native lazy-loading for images & iframes, and image formats like AV1 have to help deliver content to users efficiently.

Enjoy more of the web on your data-plan with Chrome

Over the last 10 years, the size of web pages has been ever-increasing, but for many users coming online for the first time, data can be prohibitively expensive or painfully slow. To help, Chrome released data-conscious features over the years like Chrome’s Data Saver. Data Saver intelligently optimizes pages, saving up to 92% on data consumption.

Going ahead, we are exploring new ways to help you save data. For users on the slowest connections, we’ve been working on Chrome for Android, allowing for smarter page optimizations to show essential content earlier. These page transformations load far faster than the full page, and we’re continuing to improve our fidelity, coverage, and performance.

We’ve also been experimenting with putting guardrails in place for users who are data- or network- constrained. For example, we’re exploring bringing native lazy-loading to Chrome, as well as providing users the option to stop additional requests from a page when it uses a lot of data.


We’re just getting started…


Together, these changes help developers and businesses deliver useful content to their users sooner. We know there’s still work to be done. Here’s to offering improvements to page load performance over another 10 years!

Posted by Addy Osmani, JavaScript Janitor

10 Years of Chrome DevTools


Chrome is turning 10! Thank you for making the web development community so open, collaborative, and supportive. DevTools draws inspiration from countless other projects. Here’s a look back at how DevTools came about, and how it’s changed over the years.

In the beginning, there was Firebug

Imagine for a moment that browsers didn’t ship with developer tools. How would you debug JavaScript? You’d basically have 3 options:
  • Sprinkle window.alert() calls throughout your code.
  • Comment out sections of code.
  • Stare at the code for a long time until the JavaScript gods bless you with a solution.
What about layout issues? Network errors? Again, all you could really do is conduct painstaking experiments in your code. This was the reality of web development up until 2006. Then a little tool called Firebug came along and changed everything.

A screenshot of Firebug’s Net panel, taken from Saying Goodbye to Firebug (source and license)

Firebug was a Firefox extension that let you debug, edit, and monitor pages in real-time. As a web developer, you suddenly went from having no visibility into your pages to having what are essentially the core features of modern developer tools. The ability to understand exactly why Firefox was behaving as it was unleashed a flood of creativity on the web. Without Firebug, the Web 2.0 era wouldn’t have been possible.

WebKit Web Inspector

Around the same time as Firebug’s launch, a few Google engineers started working on a project which would eventually lead to Chrome. From the start, Chrome was a mashup of different code libraries. For rendering, the Chrome engineers opted for WebKit, the open-source project that still powers Safari to this day. An added bonus of using WebKit was that it came with a handy tool called the Web Inspector.

A screenshot of the Web Inspector, taken from Web Inspector Redesign (source and license)

Like the Net panel of Firebug, the original Web Inspector probably looks familiar. Much of its functionality lives on to this day as the Elements panel in Chrome DevTools. Web Inspector launched a few days after Firebug, and Safari was the first browser to bundle developer tooling directly into the browser.

The “Inspect Element” era


Chrome brought many innovative ideas to the browser ecosystem, such as the omnibox that combined search and the address bar, and a multi-process architecture that prevented one hanging tab from crashing the entire browser. But the innovation we like the most was providing developer tools in every build to every user, exposed with the click of a mouse.

“Inspect Element” in 2010


Before Chrome, developer tooling was an opt-in experience. You either had to install an extension, like Firebug, or enable some flags, as is still the case in Safari today. Chrome was the first browser to make developer tooling accessible from every browser instance. We’d like to claim that we had a grand vision for creating a developer-friendly browser from the start, but the reality is that Chrome had a lot of compatibility issues in its early days (which makes sense, since no one was building for it) and we needed to give web developers an easy way to fix these issues. Web developers told us that it was a useful feature, and we kept it.

The mobile era

For the first few years of the DevTools project, we essentially added chapters to the stories that Firebug and Web Inspector started. The next big shift in how we approached DevTools happened when it became clear that smartphones were here to stay.
Our first mission in this brave new world was to enable developers to debug real mobile devices from their development machines, which we call remote debugging. DevTools was actually well-positioned to handle remote debugging, thanks to another consequence of Chrome’s multi-process architecture. Early on in the DevTools project we realized that the only way a debugger could reliably access a multi-process browser was through a client-server protocol, with the browser being the server, and the debugger being the client. When mobile Chrome came around, the protocol was already baked into it, so we just had to make the DevTools running on your development machine communicate with the Chrome running on your mobile device over the protocol. This protocol still forms the backbone of DevTools today, and is now known as the Chrome DevTools Protocol.

Remote Debugging

Remote debugging was a step in the right direction, and to this day is still the primary tool for making sure that your sites behave reasonably on real mobile devices. Over time, however, we realized that remote debugging could be a bit tedious. When you’re in the early phases of building out a site or feature, you usually just need a first-order approximation of the mobile experience. This prompted us to create a set of mobile simulation features, such as:

  • Precisely emulating the mobile viewport, simulating touch-based input and device orientation.
  • Throttling the network connection to simulate 3G, and throttling the CPU to simulate less-powerful mobile hardware.
  • Spoofing user agent, geolocation, accelerometer data and more.

We collectively refer to these features as Device Mode.

An early prototype of Device Mode

Device Mode in 2018

The performance era

While the mobile era unfolded, big apps like Gmail were pushing the limits of the web’s capabilities. Gmail-scale bugs called for Gmail-scale tools. One of our first major contributions to the tooling ecosystem was to show a play-by-play breakdown of exactly everything that Chrome had to do in order to display a page.

The original Timeline panel, as announced in Do More with Chrome Developer Tools

The Performance panel in 2018


These tools were a step in the right direction, but in order to spot optimization opportunities you needed to learn the nitty-gritty details about how browsers work and sift through a lot of data. Lately we’ve been building on this foundation to provide more guided performance insights. The new Lighthouse engine powers the Audits panel, and is also available as a Node module for integration with CI systems.

Performance suggestions in the Audits panel


The Node.js era

Up until 2014 or so, we mainly thought of DevTools as a tool for building great experiences on Chrome. The rise of Node prompted us to rethink our role in the web ecosystem.
For the first few years of Node’s existence, Node developers were in a situation similar to that of web developers before Firebug, or Gmail developers before the Timeline panel: the scale of Node apps outpaced the scale of Node tools. Given that Node runs on Chrome’s JavaScript engine, V8, DevTools was a natural candidate to fill the gap. Support for debugging Node with DevTools landed in 2016 and includes the usual DevTools features, such as breakpoints, code stepping, blackboxing, source maps for transpiled code, and so on.

Node Connection Manager


The DevTools protocol ecosystem

The name Chrome DevTools Protocol (CDP) suggests an API that only DevTools can use. The reality is more general than that: it’s the API that enables programmatic access to Chrome. Over the last few years, we’ve seen a few third-party libraries and applications join the protocol ecosystem:

  • chrome-remote-interface provides low level JavaScript access to the protocol
  • Puppeteer brings it to the next level of abstraction and enables automation of the evergreen headless Chrome browser with a modern JavaScript API
  • Lighthouse automates the process of finding ways to improve the performance and quality of pages

We’re excited to see thousands of projects depend on these packages to enable rich interaction with Chrome. If you’re in the tooling or automation business, it’s worth checking out the protocol to see if it opens up any opportunities in your domain. For example, the VS Code and WebStorm teams both use it to enable JavaScript debugging within their respective IDEs.
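For instance, a minimal Puppeteer script that drives headless Chrome over the protocol looks roughly like this:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await page.screenshot({ path: 'example.png' });
  await browser.close();
})();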

What’s next?

Our core mission is to build tools that help you create great experiences on the web. We very much rely on your feedback to help us determine what products or features to build.

Thank you for creating such a vibrant community. We look forward to another 10 years of building the web with you.

Posted by the Chrome DevTools team

7 Ways to Increase Website Security, Without Sacrificing UX


Did you know that the longest word in the English dictionary is 189,819 letters long?

It is (and you can see a full spelling here). This word is the name for a protein dubbed “Titin” and would take you more than 3 and a half hours to say out loud. Pretty crazy, right?

But why in the world am I sharing this with you today? Because 189,819 letters is way too long! It could have been reduced to 50, 25, or even 10 letters, but it wasn’t.

Why?

Because people love to complicate things.

And nowhere is this more true than in the realm of online security and user experience. You see, most self-proclaimed gurus will try and overwhelm you with fancy industry jargon and advanced (but entirely unnecessary) security recommendations in order to appear like they know what they’re talking about and sell you on overpriced services.

But the truth of the matter is that increasing the security of your website without sacrificing the user experience is actually pretty simple.

If you implement a few key tactics and adhere to some basic security standards, you can sleep soundly at night knowing that your website is not only easy to use but keeps you, and your users, safe.

With that in mind, here are 7 quick and easy tips to boost your website’s security without destroying the user experience.

1. Use reCaptcha to Verify Form Submissions

Out of all the security recommendations I’m going to make in this guide, this particular recommendation is the only one that will have an appreciable impact on the quality of your users’ experience.

Setting up reCaptcha for your various lead and purchase forms is, admittedly, not a user-friendly thing to do.

However, when you consider that this simple tactic will protect you from 90% of the possible spam tactics and form hacking, it’s well worth the 0.02 seconds it will require your users to validate that they are indeed “Not a Robot”.

To set up reCaptcha, simply follow the steps outlined on this page.

2. Limit the Plugins You Install (and Keep Them Updated)

One of the biggest mistakes that most webmasters make is that they download far too many plugins in order to improve the UX of their site.

Like most things in the online space, a good user experience comes down to only a small handful of things such as a clean design, easy navigability, and fast load times.

Adding dozens of unvetted plugins to optimize the minutiae of your UX is a guaranteed way to compromise the integrity of your website and expose your users’ private data to hackers.

Instead, I recommend picking a small handful of plugins that are well reviewed and approved by your CMS or website builder, and then sticking to those. This will mitigate the chances that your website is (successfully) attacked and will make managing your site significantly easier.

Just be sure to keep them up to date in order to patch potential weaknesses as they arise.

3. Create a Secret WP Login Page

By far the easiest way for a hacker to gain entry to your admin dashboard and wreak havoc on your previously immaculate website is through a brute force attack.

A brute force attack is simply an attack where a hacker goes to your login page and then uses automated software to rapidly guess different number and letter combinations until they crack your username and password.

However, a hacker cannot execute a brute force attack if they don’t know the login URL to your WordPress page.

Most WordPress websites use the traditional /wp-admin/ URL to log in to the site, meaning that hackers know exactly where to go if they want to brute force their way into your dashboard.

By using a plugin like ManageWP you can change this URL to a custom address like /my-secret-login/ and stop 99% of brute force attacks in their tracks.

4. Invest in an SSL Certificate

I’m baffled at how often I have to repeat this recommendation…

SSL (Secure Sockets Layer) is a standard security protocol that establishes an encrypted link between a web server and a browser.

This means that the information your customers and audience members submit on your website (such as names, email addresses, and credit card numbers) cannot be intercepted by hackers. This is good for the user experience and great for your site’s security.

The installation of an SSL certificate costs around $60 and many top tier web hosts will provide them free of charge.

5. Upgrade (or Change) Your Web Hosting Provider

The security (or lack thereof) of your website is largely dependent on the quality of the web hosting provider that is hosting your website.

A high-quality web host acts as your first layer of defense against hackers, providing a free SSL certificate, network monitoring, firewalls, anti-malware, and damage recovery programs, just to name a few features.

Do your research and find the web host that is right for you. Regardless of price, most popular web hosts today offer these types of security features and also include one free website migration, meaning that you can typically switch your hosting over to a new provider over the course of an afternoon.

6. Use a Separate Platform for Your Checkout Pages

A simple and (relatively) easy to implement tactic for improving your website security is to use a separate platform for your checkout pages.

For example, many marketers will offer a list of products on their website and then send customers to a secure Clickfunnels checkout page to complete their purchase.

This strategy will take nothing away from the user experience but will add another barrier to entry for potential hackers ensuring that you and your customers remain safe and secure.

7. Setup Daily Backups

The worst user-experience failure a webmaster can commit is letting thousands of fans lose access to one of their favorite sites overnight.

And this scenario is much more common than you might imagine.

By setting up daily (or at least weekly) website backups you will prevent your data and content from being lost in the event of a security breach and you will ensure that your fans always have a place to go for the latest and greatest in your particular niche.

You can do this manually or ask your web host to back up your website on a recurring basis.

3 Handy New Features in Chrome DevTools


The Most Useful New Additions

Chrome DevTools are consistently marching forward and providing us with helpful new features that make our lives as web designers easier.

In this roundup we’ll be checking out three of the handiest recent additions. Let’s jump in!

1. Inspect and Modify CSS Variable Values

If you were using CSS variables in your CSS, up until now when you inspected your code with Chrome DevTools all you would see in the Styles tab would be something like font-size: var(--fontsize), with no indication of what that variable actually represented. To make sense of your code you’d have to find the variable name in the :root section of your CSS, look up the value, remember it, go back to the section of code you were inspecting and try to work with it from there.

Now, DevTools will compute CSS variables in the Styles tab so you can inspect their values inline, as well as see previews of colors. You’ll also be able to make modifications to your CSS variable values. Here’s how it all works.

Note: You’ll need Chrome 67 to try these features. Chromium 67 does not yet have them included.

How to Inspect Values Inline

In the screenshot below you’ll see an example of the new variable inspection functionality. In the Styles tab, look at the color and font-size properties highlighted by the red box. In front of the variable var(--text) you’ll see a small colored square: this is a preview of the color defined by this variable. Below it you’ll see the mouse hovering over var(--fontsize), which makes its computed value appear in a tooltip. This hovering functionality can be used to inspect the value of any type of CSS variable.

Dev Tools Var Color Value Preview

Introducing the Variables Color Palette

The bottom section of the color picker in Chrome DevTools already gave us access to different types of color palettes, such as “Material” colors, or the colors detected as being in use by the current page. When using the color picker on CSS variables an additional palette now becomes available, one that shows you all the colors defined as variables in your style sheet.

To open the color picker, click on the little color preview square in front of a color variable. To access the list of available palettes click the toggle switch located on the right side of the palette section.

Dev Tools Switch Palettes

This will show you the available color palettes, one per line. Here you’ll see a line labeled “CSS Variables”, and if you hover over the colors here you’ll see a tooltip showing you the variables they represent. Select the “CSS Variables” palette by clicking on that line.

Dev Tools Select CSS Variable Palette

Now when you select a different color from this palette, the name of that new variable will be substituted into the code. For example, in the image below the variable name --text has been replaced by --alternative-text after clicking on a different color in the palette.

Dev Tools Switch Color Var

Editing Variable Values

Not only can you inspect and modify CSS variable code for individual styles, you can also do so at the root level.

Select any element, scroll down to the bottom of the Styles tab, and find :root, where all your variables are defined. You can change any variable value here and the changes will be applied across your whole page.

To edit a color value, click the color preview box next to it and use the color picker. To edit other types of values, such as font sizes for example, double-click them and type in your new value.

In the example below you can see we’re using the color picker to adjust the value of the --text variable controlling our text color.

Dev Tools Edit Values
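The same change can also be scripted if you prefer; here is a rough DOM sketch using the --text variable from the example above:

// Read the value currently applied to :root...
const current = getComputedStyle(document.documentElement)
  .getPropertyValue('--text')
  .trim();
console.log('Current text color:', current);
// ...then override it; every rule using var(--text) updates immediately.
document.documentElement.style.setProperty('--text', '#cc0000');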

If you’d like to give this new functionality a go, we’ve set up this pen so you can use DevTools to tinker with the background color, text color and font size of a simple page:


2. Pretty Print / Format Minified Code

If you’ve ever tried to debug minified code you know it can be a painful process. Thanks to Chrome / Chromium 66 you won’t need to deal with that issue any longer, as it now has the ability to pretty print / format minified code for better readability.

To use this feature go into the Sources tab, in there go to the Page tab, then select a minified file from the list of loaded files in the left panel. This will display the file’s contents in the center window. At the bottom of this window you’ll see a pair of curly brackets {}, and if you hover over them you’ll see a tooltip that reads “Pretty print”.

Dev Tools Pretty Print

Click this button and your file will be “prettily printed”, and made far more debugging friendly:

Dev Tools Pretty Printed

You can do the same thing for minified CSS files, but instead of the tooltip saying “Pretty print” it will say “Format”.

Dev Tools Format CSS Button

Click the little {} button and your CSS will become nice and readable as well:

Dev Tools Formatted CSS

3. Preserve Tweaks Between Reloads

Have you ever been working on some tweaks to a page via DevTools, but then needed to refresh the page and had to lose all the adjustments you’d been testing out?

As of Chrome / Chromium 65 you’ll never have that problem again, thanks to the “Local Overrides” feature. With this feature active, any tweaks you make via DevTools can be automatically saved in a local file on your hard drive, and when you refresh the page that file will be loaded in, thereby preserving your adjustments.

To turn on local overrides go to the Sources tab, and in there to the Overrides tab. If you can’t see the tab, expand the selection by clicking the >> button.

Dev Tools Sources Overrides

The first time you activate local overrides you’ll see a little button labeled + Select folder for overrides. As the name suggests, this will let you specify where your local override files should be saved. Hit that button and select the folder you want to use:

Dev Tools Select Folder for Overrides

Once you select a folder you’ll see a security warning appear at the top of the browser. Go ahead and click Allow to proceed:

Now try making a change to a web page using DevTools, such as changing the text color on a live site for example. Refresh the page and you’ll see your color change will still be present.

If you go back into the Overrides section now you’ll see a newly added file. Assuming the tweak you made was to a text color, it will be a CSS file. This is the file that contains your local overrides, and as long as local overrides are active this file will be loaded instead of the site’s true file. Brilliant!

You can disable local overrides at any time by unchecking the Enable Local Overrides box. You can also delete individual override files from the Overrides tab, or manually through your system’s file manager.

Wrapping Up

There we have them: three great new features from Chrome DevTools that should prove super useful! CSS variables are now easier to work with, minified code can now be made readable, and using DevTools to test tweaks to sites is more powerful.

We’ll look forward to seeing what the Chrome DevTools team has in store for us next!
