Vue.js And SEO: How To Optimize Reactive Websites For Search Engines And Bots

Paolo Mioni

Reactive JavaScript Frameworks (such as React, Vue.js, and Angular) are all the rage lately, and it’s no wonder that they are being used in more and more websites and applications due to their flexibility, modularity, and ease of automated testing.

These frameworks allow one to achieve new, previously-unthinkable things on a website or app, but how do they perform in terms of SEO? Do the pages that have been created with these frameworks get indexed by Google? Since with these frameworks all — or most — of the page rendering gets done in JavaScript (and the HTML that gets downloaded by bots is mostly empty), it seems that they’re a no-go if you want your websites to be indexed in search engines or even parsed by bots in general.

In this article, I will talk mostly about Vue.js, since it is the framework I’ve used most, and with which I have direct experience in terms of indexing by search engines on major projects, but I can assume that most of what I will cover is valid for other frameworks, too.

Some Background On The Problem

How Indexing Works

For your website to be indexed by Google, it needs to be crawled by Googlebot (automated indexing software that visits your website and saves the contents of pages to its index), which follows the links within each page. Googlebot also looks for special Sitemap XML files on websites to find pages that might not be linked correctly from your public site, and to receive extra information on how often the pages on the website change and when they last changed.

A Little Bit Of History

Until a few years ago (before 2009), Google used to index the content of a website’s HTML — excluding all the content created by JavaScript. It was common SEO knowledge that important links and content should not be written by JavaScript since it would not get indexed by Google, and it might cause a penalty for the website because Google might consider it “fake content” as if the website’s owner was trying to show users something different from what was shown to the search engines and trying to fool the latter.

It was very common practice by scammers to put a lot of SEO-friendly content in the HTML and hide it from users via JavaScript, for example. Google has always warned against this practice:

“Serving Googlebot different content than a normal user would see is considered cloaking, and would be against our Webmaster Guidelines.”

You could get penalized for this. In some cases, you could be penalized for serving different content to different user agents on the server side, but also for switching content via JavaScript after the page has loaded. I think this shows us that Google has been indexing websites executing JavaScript for a long time — at least for the sake of comparing the final HTML of the website (after JavaScript execution) and the raw HTML it was parsing for its indexes. But Googlebot did not execute JavaScript all the time, and Google was not using the JavaScript-generated content for indexing purposes.

Then, given the increased usage of AJAX to deliver dynamic content on websites, Google proposed an “AJAX crawling scheme” to help site owners get AJAX-based websites indexed. It was very complicated; it basically required the website to produce a rendering of pages with the AJAX content included. When requested by Google, the server would provide a version of the page with all (or most) of the content that would have been generated dynamically by JavaScript included in the HTML page — pre-rendered as an HTML snapshot of the content. This process of having a server-side solution deliver content that was (for all other purposes) meant to be generated client-side implied that those wanting a site that heavily relied on JavaScript indexed in Google had to go through a lot of technical hassle.

For example, if the content read by AJAX came from an external web service, it was necessary to duplicate the same web service calls server-side, and to produce, server-side, the same HTML that would have been produced client-side by JavaScript — or at least a very similar one. This was very complicated because, before the advent of Node.js, it required at least partially duplicating the same rendering logic in two different programming languages: JavaScript for the frontend, and PHP, Java, Python, Ruby, and so on, on the backend. This is called “server-side rendering”, and it could lead to maintenance hell: if you made important changes to how you were rendering content in the frontend, you had to duplicate those changes on the backend.

The only alternative to avoid duplicating the logic was to parse your own site with a browser executing JavaScript and save the end results to your server and serve those to Googlebot. This is sort of similar to what is now called “pre-rendering”.

Google (with its AJAX crawling scheme) also guaranteed that you would avoid penalties due to the fact that in this case you were serving different content to Googlebot and to the user. However, since 2015, Google has deprecated that practice with an official blog post that told website managers the following:

“Today, as long as you’re not blocking Googlebot from crawling your JavaScript or CSS files, we are generally able to render and understand your web pages like modern browsers.”

What this told us was not that Googlebot had suddenly acquired the capability of executing JavaScript when indexing web pages, since we know that it had done so for a very long time (at least to check for fake content and scams). Instead, it told us that the result of JavaScript execution would be indexed and used in SERPs.

This seems to imply that we no longer have to worry about providing Google with server-side rendered HTML. However, given all the tools for server-side rendering and pre-rendering made available for JavaScript frameworks, this seems not to be the case. Also, when dealing with SEO agencies on big projects, pre-rendering seems to be considered mandatory. How come?

How Does Google Actually Index Pages Created With Front-End Frameworks?

The Experiment

In order to see what Google actually indexes in websites that have been created with a front-end framework, I built a little experiment. It does not cover all use cases, but it is at least a means to find out more about Google’s behavior. I built a small website with Vue.js and had different parts of text rendered differently.

The website’s contents are taken from the description of the book Infinite Jest by David Foster Wallace in the Infinite Jest Wiki (thanks guys!). There are a couple of introductory texts for the whole book, and a list of characters with their individual biography:

  • Some text in the static HTML, outside of the Vue.js main container;
  • Some text is rendered immediately by Vue.js because it is contained in variables which are already present in the application’s code: they are defined in the component’s data object;
  • Some text is rendered by Vue.js from the data object, but with a delay of 300ms;
  • The character bios come from a set of REST APIs, which I’ve built on purpose using Sandbox. Since I was assuming that Google would execute the website’s code and stop after some time to take a snapshot of the current state of the page, I set each web service to respond with an incremental delay: the first with 0ms, the second with 300ms, the third with 600ms, and so on up to 2700ms.

Each character bio is shortened and contains a link to a sub-page, which is available only through Vue.js (URLs are generated by Vue.js using the history API), but not server-side (if you call the URL of the page directly, you get no response from the server), to check if those got indexed too. I assumed that these would not get indexed, since they are not proper links which render server-side, and there’s no way that Google can direct users to those links directly. But I just wanted to check.
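The staggered web-service delays described above can be sketched as follows. This is a minimal, hypothetical reconstruction: only the 300ms increments come from the experiment, while the data shape and function names are illustrative assumptions.

```javascript
// Each character-bio endpoint responds 300ms later than the previous one,
// so the last of ten bios arrives 2700ms after the page loads.
function delayFor(index) {
  return index * 300; // 0ms, 300ms, 600ms, ... 2700ms
}

// Simulate fetching the i-th bio after its configured delay (hypothetical data).
function fetchBio(index) {
  return new Promise((resolve) => {
    setTimeout(() => resolve({ id: index, bio: `Character ${index} bio` }), delayFor(index));
  });
}

// Content that resolves after Googlebot takes its render snapshot
// may never make it into the index.
fetchBio(9).then((bio) => console.log(bio.id, 'arrived after', delayFor(9), 'ms'));
```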

I published this little test site to GitHub Pages and requested indexing — take a look.

The Results

The results of the experiment (concerning the homepage) are the following:

  • The contents which are already in the static HTML content get indexed by Google (which is rather obvious);
  • The contents which are generated by Vue in real-time always get indexed by Google;
  • The contents which are generated by Vue, but rendered after 300ms get indexed as well;
  • The contents which come from the web service, with some delay, might get indexed, but not always. I’ve checked Google’s indexing of the page in different moments, and the content which was inserted last (after a couple of seconds) sometimes got indexed, sometimes it didn’t. The content that gets rendered pretty quickly does get indexed most of the time, even if it comes from an asynchronous call to an external web service. This depends on Google having a render budget for each page and site, which depends on its internal algorithms, and it might vary wildly depending on the ranking of your site and the current state of Googlebot’s rendering queue. So you cannot rely on content coming from external web services to get indexed;
  • The subpages (as they are not accessible via a direct link), as expected, do not get indexed.

What does this experiment tell us? Basically, that Google does index dynamically generated content, even if it comes from an external web service, but content is not guaranteed to be indexed if it “arrives too late”. I have had similar experiences with other real, production websites besides this experiment.

Competitive SEO

Okay, so the content gets indexed, but what this experiment doesn’t tell us is: will the content be ranked competitively? Will Google prefer a website with static content to a dynamically-generated website? This is not an easy question to answer.

From my experience, I can tell that dynamically-generated content can rank in the top positions of the SERPs. I’ve worked on the website for a new model of a major car company, launching a new website with a new third-level domain. The site was fully generated with Vue.js — with very little content in the static HTML besides <title> tags and meta descriptions.

The site started ranking for minor searches in the first few days after publication, and the text snippets in the SERPs reported words coming directly from the dynamic content.

Within three months it was ranking first for most searches related to that car model — which was relatively easy since it was hosted on an official domain belonging to the car’s manufacturer, and the domain was heavily linked from reputable websites.

But given the fact that we had to face strong opposition from the SEO company that was in charge of the project, I think that the result was still remarkable.

Due to the tight deadlines and lack of time given for the project, we were going to publish the site without pre-rendering.

Animated Text

What Google does not index is heavily-animated text. The site of one of the companies I work with, Rabbit Hole Consulting, contains lots of text animations, which are performed while the user scrolls, and require the text to be split into several chunks across different tags.

The main texts in the website’s home page are not meant for search engine indexing since they are not optimized for SEO. They are not made of tech-speak and do not use keywords: they are only meant to accompany the user on a conceptual journey about the company. The text gets inserted dynamically when the user enters the various sections of the home page.

(Image source: Rabbit Hole Consulting)

None of the texts in these sections of the website gets indexed by Google. In order to get Google to show something meaningful in the SERPs, we added some static text in the footer below the contact form, and this content does show as part of the page’s content in SERPs.

The text in the footer gets indexed and shown in SERPs, even though it is not immediately visible to the users unless they scroll to the bottom of the page and click on the “Questions” button to open the contact form. This confirms my opinion that content does get indexed even if it is not shown immediately to the user, as long as it is rendered soon to the HTML — as opposed to being rendered on-demand or after a long delay.

What About Pre-Rendering?

So, why all the fuss about pre-rendering — be it done server-side or at project compilation time? Is it really necessary? Although some frameworks, like Nuxt, make it much easier to perform, it is still no picnic, so the choice whether to set it up or not is not a light one.

I think it is not compulsory. It is certainly a requirement if a lot of the content you want indexed by Google comes from external web services and is not immediately available at rendering time, and might — in some unfortunate cases — not be available at all due to, for example, web service downtime. If, during Googlebot’s visits, some of your content arrives too slowly, then it might not be indexed. If Googlebot indexes your page at exactly the moment in which you are performing maintenance on your web services, it might not index any dynamic content at all.

Furthermore, I have no proof of ranking differences between static content and dynamically-generated content. That might require another experiment. I think it very likely that, if content comes from an external web service and does not load immediately, it might impact Google’s perception of your site’s performance, which is a very important factor for ranking.


Other Considerations


Up until recently, Googlebot used a fairly old version of Chromium (the open-source project on which Google Chrome is based), namely version 41. This meant that some recent JavaScript or CSS features could not be rendered by Google correctly (e.g. IntersectionObserver, ES6 syntax, and so on).

Google has recently announced that it is now running the latest version of Chromium (74, at the time of writing) in Googlebot, and that the version will be updated regularly. The fact that Google was running Chromium 41 might have had big implications for sites which decided to disregard compatibility with IE11 and other old browsers.

You can see a comparison of Chromium 41 and Chromium 74’s support for features here. However, if your site was already polyfilling missing features to stay compatible with older browsers, there should have been no problem.

Always use polyfills since you never know which browser misses support for features that you think are commonplace. For example, Safari did not support a major and very useful new feature like IntersectionObserver until version 12.1, which came out in March 2019.
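As an illustration, a guarded use of IntersectionObserver might look like the sketch below. The fallback strategy (showing the content immediately where support is missing) is my own assumption, not something from the experiment.

```javascript
// Reveal content when an element scrolls into view, but fall back to
// showing it immediately in browsers (or bot renderers) that lack
// IntersectionObserver support, so the content is never lost.
function onVisible(element, callback) {
  if (typeof IntersectionObserver === 'undefined') {
    callback(); // no support: render immediately rather than never
    return;
  }
  const observer = new IntersectionObserver((entries) => {
    entries.forEach((entry) => {
      if (entry.isIntersecting) {
        callback();
        observer.disconnect();
      }
    });
  });
  observer.observe(element);
}
```

With a polyfill loaded instead, the fallback branch would never run; the guard simply ensures that content does not depend on a feature the rendering engine may not have.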

JavaScript Errors

If you rely on Googlebot executing your JavaScript to render vital content, then major JavaScript errors which could prevent the content from rendering must be avoided at all costs. While bots might parse and index HTML which is not perfectly valid (although it is always preferable to have valid HTML on any site!), if there is a JavaScript error that prevents the loading of some content, then there is no way Google will index that content.

In any case, if you rely on JavaScript to render vital content to your end users, then it is likely that you already have extensive unit tests to check for blocking errors of any kind. Keep in mind, however, that JavaScript errors can arise from unpredictable scenarios, for example, improper handling of errors in API responses.
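As a sketch of that kind of defensive handling (the `bios` field is a hypothetical API shape, not from the original experiment):

```javascript
// Parse an API response defensively: a malformed body or an unexpected
// shape degrades to an empty list instead of throwing an uncaught error
// that would stop the rest of the page from rendering (and being indexed).
function parseBios(responseBody) {
  try {
    const data = JSON.parse(responseBody);
    return Array.isArray(data.bios) ? data.bios : [];
  } catch (err) {
    return []; // render the page without this section
  }
}

console.log(parseBios('{"bios": ["Hal", "Orin"]}')); // → [ 'Hal', 'Orin' ]
console.log(parseBios('<html>502 Bad Gateway</html>')); // → []
```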

It is better to have some real-time error-checking software in place (such as Sentry or LogRocket) which will alert you to any edge-case errors you might not pick up during unit or manual testing. This adds to the complexity of relying on JavaScript for SEO content.

Other Search Engines

The other search engines do not work as well as Google with dynamic content. Bing does not seem to index dynamic content at all, nor do DuckDuckGo or Baidu. Probably those search engines lack the resources and computing power that Google has in spades.

Parsing a page with a headless browser and executing JavaScript for a couple of seconds to parse the rendered content is certainly more resource-heavy than just reading plain HTML. Or maybe these search engines have made the choice not to scan dynamic content for some other reasons. Whatever the cause of this, if your project needs to support any of those search engines, you need to set up pre-rendering.

Note: To get more information on other search engines’ rendering capabilities, you can check this article by Bartosz Góralewicz. It is a bit old, but according to my experience, it is still valid.

Other Bots

Remember that your site will be visited by other bots as well. The most important examples are Twitter, Facebook, and other social media bots that need to fetch meta information about your pages in order to show a preview of your page when it is linked by their users. These bots will not index dynamic content, and will only show the meta information that they find in the static HTML. This leads us to the next consideration.


If your site is a so-called “One Page website”, and all the relevant content is located in one main HTML, you will have no problem having that content indexed by Google. However, if you need Google to index and show any secondary page on the website, you will still need to create static HTML for each of those — even if you rely on your JavaScript Framework to check the current URL and provide the relevant content to put in that page. My advice, in this case, is to create server-side (or static) pages that at least provide the correct title tag and meta description/information.
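One way to sketch that advice is a small build step that writes a static HTML shell per route. The routes, titles, and descriptions below are hypothetical; the point is only that each URL gets its own correct title tag and meta description before any client-side rendering happens.

```javascript
// Generate a minimal static HTML shell per route so crawlers and social
// bots receive a correct <title> and meta description, while the
// JavaScript framework still renders the page body client-side.
const routes = [
  { path: '/', title: 'Home', description: 'Welcome to the site' },
  { path: '/about', title: 'About', description: 'Who we are' },
];

function renderShell(route) {
  return [
    '<!DOCTYPE html>',
    '<html><head>',
    `  <title>${route.title}</title>`,
    `  <meta name="description" content="${route.description}">`,
    '</head>',
    '<body><div id="app"></div></body></html>',
  ].join('\n');
}

routes.forEach((route) => console.log(route.path, '→', renderShell(route).length, 'bytes'));
```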


The conclusions I’ve come to while researching this article are the following:

  1. If you only target Google, it is not mandatory to use pre-rendering to have your site fully indexed, however:
  2. You should not rely on third-party web services for content that needs to be indexed, especially if they don’t reply quickly.
  3. The content you insert into your HTML immediately via Vue.js rendering does get indexed, but you shouldn’t use animated text or text that gets inserted in the DOM after user actions like scrolling, etc.
  4. Make sure you test for JavaScript errors, as they could result in entire pages/sections not being indexed, or your site not being indexed at all.
  5. If your site has multiple pages, you still need to have some logic to create pages that, while relying on the same front-end rendering system as the home page, can be indexed by Google as individual URLs.
  6. If you need to have different description and preview images for social media between different pages, you will need to address this too, either server-side or by compiling static pages for each URL.
  7. If you need your site to perform on search engines other than Google, you will definitely need pre-rendering of some sort.
Smashing Editorial (dm, yk, il)

Acknowledgements: Many thanks to Sigrid Holzner of SEO Bavaria / Rabbit Hole Consulting for her review of this article.

4 Ways to Extract Website Content From a Client

In a perfect world, we’d all have all the content we needed before we ever touched a wireframe, on paper or otherwise. The hate for lorem ipsum is real, and I do understand why, but it’s a simple fact that we do not live in a perfect world. Clients are often ready to hand over a down payment, but not actually ready to build the site yet.

If you find yourself in this situation (and it will happen a lot at the beginning of your career), you’ll need to help your client get ready. And if they don’t hire a copywriter, you’ll need to help them write the content themselves.

1. Give Them Constraints

If your client is writing their own content, they may need to be told what to write. Most people are not writers by nature. It’s a skill that can be learned by just about anyone, but it takes some doing, whether you have a natural talent for it or not. Most people, when told to write some content for a website, are probably going to stare at the blank screen for a while.

Most people are not writers by nature

Then, hesitantly, they might begin to pick out letters on their keyboard, one by one. It’ll be a slog, but they’ll have that first grand sentence: “Hi! Welcome to the home page of our website.” And then they might write a bunch of stuff that would be better suited to the “About Us” page.

People have long made the argument that total creative freedom doesn’t make for good design; constraints do. Constraints force us to solve problems, but they also give us direction, and purpose. Yes, it means doing some of their website planning and strategy for them, but no one said you had to do it for free.

2. Go Through The Process With Them Before They Write

Even instructions like, “Okay, you need a paragraph of introductory text for the home page,” might be a bit vague for people unfamiliar with writing website copy. Get on Skype, or even meet them in person, to take your client through the plan you have for their website (wireframes or other prototypes may come in handy here), and give them examples of what they might say.

Also be sure to tell them how much content is intended for each page, page section, or UI element. If only a sentence or two will reasonably fit, make sure they know this. If they can go nuts on the “About Us” page, make sure they know that, too.

And yes, giving them a space to go nuts is probably a good idea. Everyone wants to unleash their inner Hemingway, and if the “About” page ends up being as long and annoying as The Old Man and the Sea, that’s the price we pay for good relationships with our clients.

As you go through your instructions, write them down, and send them to your client via email for reference. This way, they’ll always know what the plan is.

Charge by the hour for this bit, at least.

3. Go Ahead And Annoy Them A Bit

Ever had a client give you a deadline, then disappear? You have no obligation to take that lying down. Now, they might be busy, and have other legitimate priorities. If they tell you a family member is sick, just work for another client for a while.

But if they just disappear on you, don’t be afraid to remind them once in a while. They might genuinely forget, and need the reminder. Even if they haven’t forgotten, they might need a little motivation. And yes, you might annoy them a bit, but clients should respect your time, too.

If they can’t finish even one project, there probably isn’t a long-term relationship on the table

Now don’t e-mail them every day. That’s excessive. An e-mail per week should be fine to start with, and you can always increase that number as deadlines approach. If they e-mail you back with something like, “Thanks, I’m working on it!”, or, “For god’s sake please stop, I’m working on it!”… you can safely stop sending them e-mails for a while.

Don’t worry too much about annoying them. If they can’t finish even one project, there probably isn’t a long-term relationship on the table.

4. Use Software To Make It All A Bit Easier

Of course, this is all a fair bit of work. You can automate the process of getting content from your clients just a little bit, though. If you’ve got the budget for one more darned SaaS product in your pipeline, you could try out Content Snare.

You literally just set up forms that specifically request the content you need. You can put in character limits, and basically define the information required with various kinds of inputs. You want constraints? They’ve got constraints, and automatic email reminders.

Now the downside to this software is the cost. At the time of this writing, the cheapest plan is $24US per month (billed yearly). It’s affordable, probably, for a designer with plenty of clients already. But when every dollar counts, this is one tool you can probably do without.

For anyone who’s a little cash-strapped, you can replicate the basic functionality for requesting content with a much simpler tool like Google Forms. Just make one for each page, and go. You can embed these forms, too, so if you already have something like a “client area” set up on your website, you could theoretically set each client up with their own set of forms to fill out, all in one place.

Automated reminder emails? Well, there’s no shortage of mass mailing applications out there. If you’re already using one, you could schedule some reminders pretty easily. Just be sure to turn them off once you’ve gotten a response.

Annoying them is one thing. Using robots to do it is another.


Featured image via Unsplash.



The “Inside” Problem

So, you’re working on a design. You need a full-width container element because the design has a background-color that goes from edge-to-edge horizontally. But the content inside doesn’t necessarily need to be edge-to-edge. You want to:

  1. Limit the width (for large screens)
  2. Pad the edges
  3. Center the content

It’s “the inside problem” in web layout. It’s not hard, it’s just that there are lots of considerations.

The classic solution is an outer and inner container.

The parent element is naturally as wide as its own parent, and let’s assume that’s the <body> element, or the entire width of the browser window. That parent takes the background-color and pads the left and right sides. The inside element is what limits the width and centers the content.

footer {
  --contentWidth: 400px;
  background: lightcoral;
  padding: 2rem 1rem;
}

.inside {
  max-width: var(--contentWidth);
  margin: 0 auto;
}

This is what my brain reaches for first. Doesn’t use anything fancy and feels perfectly understandable. That “inside” element isn’t wonderfully desirable, only because it feels like busywork to remember to add it to the markup each time this pattern is used, but it does the trick with few other downsides.

See the Pen “Classic ‘inside’ element” by Chris Coyier (@chriscoyier) on CodePen.

What if you only can use a single element?

These types of limitations aren’t my favorite, because I feel like a healthy project allows designers and developers to have whatever kind of control over the final HTML, CSS, and JavaScript output they need to do the best possible job. But, alas, sometimes you’re in a weird position as a contractor, or have legacy CMS issues, or whatever.

If you only have a single element to work with, padding sorrrrrta kinnnnda works. The trick is to use calc() and subtract half of the content’s maximum width from 100%.

<footer>
  Content
</footer>

footer {
  --contentWidth: 600px;
  background: lightcoral;
  padding: 2rem calc((100% - var(--contentWidth)) / 2);
}

See the Pen by Chris Coyier (@chriscoyier) on CodePen.

The problem here is that it doesn’t prevent edge-touching, which might make this entirely unacceptable. Maybe you could select elements inside (paragraphs and whatnot…) and add padding to those (with a universal selector, like footer > *). It’s tempting to put padding way up on the <body> or something to prevent edge-touching, but that doesn’t work because we want that edge-to-edge background color.

What if you’re already inside a container you can’t control and need to break out of it?

Well, you can always do the ol’ full-width utility thing. This will work in a centered container of any width:

.full-width {
  width: 100vw;
  margin-left: 50%;
  transform: translateX(-50%);
}

But that leaves the content inside at full width as well. So, you’d need to turn to an inside element again.

See the Pen “Full width element with inside element” by Chris Coyier (@chriscoyier) on CodePen.

Also, as soon as you have a vertical scrollbar, that 100vw value will trigger an obnoxious horizontal scrollbar. Some sites can pull off something like this to get rid of that scroll:

body { overflow-x: hidden; }

That’s pretty nice. If you can’t do that, though, you might need to set an explicit width on the scrollbar, then subtract that from 100vw.

body {
  scrollbar-width: 20px; /* future standards way */
}

body::-webkit-scrollbar { /* long-standing WebKit way */
  width: 20px;
}

.full-width {
  width: calc(100vw - 20px);
}

Even that kinda sucks as it means the full-width container isn’t quite full width when there is no vertical scrolling. I’d love to see CSS step it up here and help, probably with improved viewport units handling.

There are a variety of other ways of handling this full-width container thing, like yanking to the edges with margins and such. However, they all ultimately need viewport units and suffer from the same scrollbar-related fate as a result.

If you can definitely hide the overflow-x on the parent, then extreme negative-margin and positive-padding can do the trick.

This is kinda cool in that it uses nothing modern at all. All very old school CSS properties.

See the Pen “Full Width Bars Using Negative Margins” by Chris Coyier (@chriscoyier) on CodePen.

Can CSS Grid or Flexbox help here?

Meh. Not really.

I mean, sure, you could set up a three-column grid and place the content in the center column, while using the outside columns as padding. I don’t think that’s a particularly compelling use of grid and it adds complication for no benefit — that is, unless you’re already using and taking advantage of grid at this scope.

Fake the edges instead.

There is no law that the background-color needs to come from one single continuous element. You could always “fake” the left and right sides by kicking out a huge box-shadow or placing a pseudo element wherever needed.

We cover various techniques around that here.

The post The “Inside” Problem appeared first on CSS-Tricks.

Designing For Users Across Cultures: An Interview With Jenny Shen

Rachel Andrew

In this video, we are pleased to feature Jenny Shen who is a UX Consultant and has worked with numerous startups and brands including Neiman Marcus, Crate&Barrel, eBuddy, IBM, TravelBird and Randstad. Her current focus is helping businesses innovate and designing inclusive product experiences for global users. She is interviewed by Jason Pamental, who has already spoken at our San Francisco conference. Jason is a strategist, designer, technologist, and author of Responsive Typography from O’Reilly.

In their conversation, we discover how we can approach localizing and internationalizing our websites, over and above simply offering a translation of the material. This is something that Jenny will also focus on in her talk at our Toronto SmashingConf.

Vitaly: Okay, hello everyone. I’m looking forward to having a wonderful conversation today. We have Jason with us today. Jason, how are we doing today?

Jason: I’m doing very well. I’m excited about this.

Vitaly: Oh yes.

Jason: Something new and fun.

Vitaly: This is new and fun. Some of you might know we have Smashing TV and Smashing TV is all about planning some sort of webinars and sessions and interviews and all that. We always look for new adventures. Jason, you like adventures?

Jason: Very much.

Vitaly: Who doesn’t like adventures? In this particular adventures, we’re looking into actually just having conversations. Like, you know, you take a cup of coffee, sit down with a person you admire or you like or you feel like they have something to share. You just have a conversation. This is not about slides, it’s not about presenting, it’s all about really just kind of human interaction between two people genuinely interested in a particular topic. And so, with that, I’m very privileged to have Jason with us today, who’s going to be the interviewer, and who’s going to introduce the speaker or the person who he’s going to speak with. We just came from Smashing Con, San Francisco two weeks ago. It was a wonderful experience because Jason would just come on stage, sit down, take a cup of coffee, work through his design process and stuff. And he’s very curious, right? This is something that you need in a person who can run interviews really well. You might see Jason more often in the future. Maybe, Jason, you can introduce yourself. What do you do for life? What’s the meaning of life for you?

Jason: Well, I suppose in the order of frequency, it’s spending time with my wife, walking the dogs (which most people see on Instagram), riding my bike, and then a whole lot of stuff about typography, which is what I was demonstrating when I was at SmashingConf San Francisco. The thing that is sort of common for me, that runs through all of it, is just being curious about stuff and learning new things, so the chance to actually learn from more amazing people who are gonna be getting on stage at other Smashing events was too good to pass up. So, I’m pretty excited about this.

Vitaly: We couldn’t be more excited to have you. I think it’s time for me to get my breakfast. I’m sorry, I’m so hungry. I woke up four hours ago, was all about meetings and Jason will take over. Jason, have a wonderful time. I’m looking forward to seeing you when they’re wrapping up this session. Okay? Jason, the stage is yours.

Jason: Thanks, Vitaly. Well, I’m super excited about this for a whole bunch of reasons. But, the main one is I get to introduce to you someone who, correct me if I’m wrong, but I think this is the first time you’re speaking at a Smashing Event? Is that true?

Jenny Shen: Yes. It is the first time.

Jason: Okay. Well, the voice that you’re hearing and the face that you’re seeing is Jenny Shen, who is a UX and localization consultant who’s worked with all kinds of big brands including Neiman Marcus, Crate and Barrel, and IBM. Over the course of your travels over a number of years, you’ve built a pretty amazing list of credentials. I mean, some things really stood out to me, that I think made you a little bit more compelling in terms of who I really wanted to talk to first: not only are you doing all of this incredible work, but you’re also the regional director for EMEA for Ladies that UX, which is an amazing organization, and you also started your own mentorship program. That teaching aspect, you know, I think is one of the things that I love about getting up on stage and giving talks and workshops and stuff. So, before we actually jump into what you’re gonna be talking about, I’d really love to hear a little bit more from you, about your journey from Taipei, to where you are now, to how you came to be in this industry.

Jenny Shen: Yeah, sure. Thank you, Jason, for the amazing introduction. Yeah. So, as you were saying, I started from Taipei. I was born in Taipei, Taiwan. My journey was… I moved around to a lot of places. My family moved to Canada and I studied there. I studied in Vancouver, Canada.

Jason: Oh, wow.

Jenny Shen: Yeah. I studied Interaction Design. At the time it was like Human-Computer Interaction.

Jason: Right.

Jenny Shen: And then I moved to Singapore, and now I’m based in the Netherlands, consulting on UX and localization projects. And just like you mentioned, I am a volunteer EMEA director at Ladies that UX and I also run my own mentorship program in my spare time. Yeah. I’ve also been speaking in [crosstalk 00:04:59]

Jason: Because you must have a load of spare time then? So, tell me a little bit about the typical day for you if there is one.

Jenny Shen: Mm-hmm (affirmative). Typical day. These days I have more of a typical day because I’m working with clients, and then I am basically just taking my to-do list and doing the job that can help the organization: helping shape product strategy, offering feedback to designers, doing some consulting on localization, working on research. And, yeah, on a typical day I could be reviewing a design, giving feedback to my design team. I could be helping a client with an approach to hiring a designer, and I could be running a workshop on product strategy, really talking about, “This is the business model canvas and the value proposition.” Some days I’m drafting a user research strategy, and on some days I am flying over to a different country to actually conduct on-site localization and culture research. So, yeah, there’s not really a typical day because I do different types of work, different types of projects, and I get to work with really amazing clients.

Jason: That’s amazing. I’ve looked at your resume. Your speaking schedule last year was incredible. You were at some of the best events on the web. You were speaking on all kinds of different things. It makes me feel so monotonous. All I ever do is talk about web typography and you seem to cover an incredible range of topics. That really is fascinating to me. And I love that your focus is so well-rounded in that it’s not just about UX it’s also about how design can impact the business. And that’s something that I think is really fascinating and it’s really starting to gain a lot of prominence with research from InVision and McKinsey about what design can bring to the rest of the organization. So, how long has that been more of the focus about business model innovation and all those kinds of strategic topics?

Jenny Shen: Yeah. I actually just transitioned from a designer to a strategist a little bit more than a year ago. I’d been in the design-

Jason: Really?

Jenny Shen: Yeah. I’d been in the design industry for like six, seven years and I’d been doing, you know, wireframing, the same type of thing any designer would do. Wireframes, prototypes, icons, and stuff like that, and it got to the point where I really wanted to be more involved on the business side of things. Now that I’ve been in this role for more than a year, I really see how being more business-minded and being aware of the business goals, and how that needs to work together with a strategy and a design, can actually move the needle. Really, the starting point was just that I’d been a designer for six, seven years and I really wanted to do more. I really wanted to actually see the impact of my designs. So, that seemed like the natural step. And I think it helped, learning from a lot of experts in my community, as well as going to different conferences and listening and learning from people who do strategy and are leading design. So, I’m very honored to have had the chance to be at those conferences and learn from these leaders.

Jason: That’s really amazing. I hope we’ll have time to come back to that a little bit because I think a lot of designers, as they advance in their career, really look for ways that they can achieve a greater level of impact than just this one project they’re working on. And I think it’s really hard for designers to kind of figure out where they can go.

Jenny Shen: Yeah.

Jason: So, that’s amazing to hear that you’ve made such a great transition. I can’t help but think that there’s a really great relationship between multi-lingual, multi-cultural work and localization as this sort of central part of business strategy and how it relates to design, and I gather that’s kind of what you’re gonna be talking about in San Francisco. Is that…I’m sorry, in Toronto. Is that true?

Jenny Shen: Yeah. So, my talk will be more about how culture affects design, and then I’ll also be touching on how…what are some of the reasons…how can companies benefit from localization. How can companies benefit from expanding to a new market? These are the type of things that I want to talk about in my talk in Toronto, as well as showcasing some case studies. How do reputable companies…how do big companies approach localization and market expansion? I have been doing this, specifically designing for multiple cultures, since 2013, and I’ve definitely learned a lot, and also learned from the companies who are really experts in doing internationalization and localization. So, yeah. I am really excited to share about this.

Jason: That’s really great. And I think for a lot of people, when they think about a language addition to a website, they kind of lump adding a language into what people refer to when they say internationalization. But I know I learned a ton when I listened to Robyn Larsen from Shopify talk about their work over the past year or so in adding multiple languages to their system. But the phrase you used was localization, and that was the thing that really stuck out to me and what I wanted to ask you about, because that was something Robyn spoke about, where it’s not just the language but it’s the phrasing and it’s the cultural things about other aspects of design. So, I’d love to hear more about what that means to you when you’re designing, and the kinds of things that you consider in adding a language…whether it’s English and Chinese or Korean or whatever…the other kinds of cultural implications that go along with that.

Jenny Shen: Yeah. So, regarding localization, for me, it means in all kinds of ways how to adapt a product, an interface, an application to meet the needs, the preferences, expectations, and behaviors of the local users. Like you mentioned, it’s not just about translation; there are many things, from icons to symbols and colors, and sometimes text direction, and of course the content…all these sorts of things that can help a local user feel like, “Hey, this app or this software is designed with me in mind. It’s not just some foreign company that only hired some translators and expects me to feel connected to the product.” So, localization, that’s what it means for me, and that’s the kind of work that I like to do.

Jason: Mm-hmm (affirmative). And so how often is that work…just for frame of reference, I’ve mostly worked on web content management systems. So, when that first comes up, the first thing that comes to mind is, “Okay. I need to add a language pack. I need to factor this into the language encodings for the theme,” and that sort of thing. But I know there are a lot of other considerations, and there’s a whole range of what people work with. From things that are sort of static sites, where you have a lot of freedom to customize things. But I think a lot of us end up dealing with either an app infrastructure or a website infrastructure that has to support those multiple languages. So what kind of scenarios have you had to deal with in terms of the technology behind…you know, how you…I’m trying to phrase this better. You know, sort of implementing that design and finding the freedom to change the icons, to change the phrasing, to chan — you know, to make it feel connected. Are you often involved in the technical implementation of that, or at least mapping things properly?

Jenny Shen: So actually, on the technical side, not really, and there are really different kinds of clients. With some of them, I come into a project and they already have things mapped out, and usually when I come in is when they have decided on a market, or maybe they are thinking about localization. They haven’t decided what market, but they have the infrastructure in place, so I can’t really speak to the technical infrastructure. But I’m thinking about what might be useful for someone to know, like the whole process, and how to actually even think about, “Well, should we change this icon?” It’s all related to the business case of localization. I mean, we don’t do it just because we can, we don’t do it just because it’s fun; for localization, or expanding to a different market, or supporting multiple languages, there must be, well, there should be a business reason behind it: it’s because we want to expand to a different market, we want to reach the users, and definitely we are hoping for some success metrics from that market. And maybe we deem that this is a market that is likely to succeed, or we want to experiment, and the users of that culture, of that market, have a strong tendency to, let’s say, go for applications that feel more native, feel more intuitive. And as user experience practitioners, we know that designing the user experience is going to make the users more loyal, more engaged. So it’s also considering the user experience and business metrics to decide: okay, do we want to have this customization available, do we even want to customize it, or do we just want to go for the minimum localization effort, which typically is translations and content localization.

Jason: And so how often does it go further than that? So I mean the things that come to mind that we had kinda gone back and forth a little bit in the questions beforehand: language length, or color, or typography, or layout. How often does that come into play?

Jenny Shen: Mm-hmm (affirmative). That’s a really good question. I would say that it really depends on the industry, it depends on the company stage, it also depends on where they are, what their business goals are. For a start-up, it’s unlikely that they will fully customize it. They may not even expand into multiple markets while they are just figuring out their product-market fit. But let’s say for a really established company like Spotify, or Shopify as well, they already have a home market that’s doing really well, and they want to expand. And some target markets have a really distinct culture, like Japan, for example, where there are a lot of different influences that can actually affect the layout or the localization elements; or, for example, Singapore or China. And then we look at evaluating: what do we have to do to be successful in the market? For some markets, it might not be necessary; some markets might require fewer changes than others. So, I would say, this is a really “it depends” kind of [inaudible 00:17:06] answer to kind of, for us to know, what is actually required and how often it actually goes beyond the basic localization.

Jason: Right, and so in your role, sort of advising your clients on these sorts of things… Do you actually go so far as think about what would the team look like that could do this successfully? Like, what kinds of designers and skills sets you would want to see, to help them be successful?

Jenny Shen: Hmm. Yeah, in my experience, the localization team — then again, depending on the stage, depending on whether they are at the beginning of setting up the team — let’s say once they have gotten a team set up, usually there is a localization team that takes care of the localization elements, to make sure there is consistency but also certain customization elements for the differing markets, while the other product teams can be focused on specific features. So let’s say the home market team will design the checkout flow; the localization team will then take that checkout flow and customize it for a different market. And, depending on the company size, some more established companies could have, like, the Germany team, the Netherlands team, the Nordics team, the Latin team, and actually hire people who are aware of the cultural differences, the local expectations, the legal requirements, and all those things that can actually make or break the product. They either hire people on the ground, or they hire people with that experience, with that knowledge, in their office.

Jason: Right.

Jenny Shen: But there are really multiple ways we could go about it. What’s really most necessary is people with that knowledge, people with that cultural understanding, who can actually design for that target market.

Jason: That’s great. I think that leads into a couple of other things that I really wanted to ask you about. One is, I mean, your background is so geographically varied. How much has that influenced your career direction, in terms of what interests you, and the kinds of things you’ve wanted to focus on?

Jenny Shen: When I was still studying [inaudible 00:19:36] people always, like, set out their career goal, and what they wanted to be in 5 years, 10 years, what I want to do. I honestly never thought that I would be in this industry, in the localization industry. And I really love what I do, and I think the reason why I’m doing this, and maybe what shaped my path here, is just having curiosity, you know, towards other cultures and towards the world. I guess, as I traveled more and more, my mind started to open and to really understand cultural differences, the local ways of life. And, being a UX designer, I understand how important it is to have our products user-centered. Then I look at people who are living in other countries and I see, you know, what kind of things they actually use: what kind of apps, what kind of websites, and how that’s so different from what we know and what we’re accustomed to. That’s one of the reasons: curiosity. I really love to travel, and I have also moved to many countries just to really be immersed in the local culture, really connect with the local people, try to learn some local language. I’m terrible at Dutch (laughs), but I try where I can. I think it really has enriched my life, it really has enriched my professional experience. I mean, when I moved to Singapore, that’s actually what gave me the opportunity to design for Malaysia, the Philippines, and Indonesia, and countries in that region. When I moved to Amsterdam, I was able to design for Spain, and France, and Germany, and Turkey, all the countries in this region. I feel very blessed and I really love what I do. I think, again, my curiosity and passion for traveling definitely have played a role in this.

Jason: Yeah, sure sounds like it. So, if you were to try and take — well, there are two parts to this one: I’m wondering if there’s something that you would do differently, if you could go back? Like, is there something you wish you had learned more about? You’ve moved into business strategy quite a bit more; do you wish you had studied business? What are the sorts of things that you’re looking to fill in now, that you maybe wish you had learned earlier?

Jenny Shen: Yeah, I think about this sometimes. I think it might have been quite helpful if I had studied business administration. But, at the same time, having a degree in design, and having a solid training background in research…I think that’s also a huge asset. Oftentimes I talk to clients and they actually need a researcher. They need somebody who has done this a lot, somebody who understands the science behind user interfaces, usability testing, and how to minimize bias in the whole research process. So I feel like maybe I should’ve studied business, but at the same time, I’m also really happy that I studied design.

Jason: Sure.

Jenny Shen: But something I’m definitely trying to make up for is the business side, where I don’t have so much expertise. I am talking to experts in this area, I’m reading books and listening to podcasts. And definitely, for someone who wants to take on a strategist role, I would say that would be really helpful. Right now that’s actually what I’m more interested in learning about, rather than the design and what tools to use: the business side of things.

Jason: Mm-hmm (affirmative). At an agency I worked at a few years ago, a bunch of us actually took a Coursera class together, and had a little discussion every week about — It was an MBA focused program to learn about business models, structures, and what is the business model canvas and all those kinds of things. That was really fascinating, I certainly appreciated that. So, the other side of that last question was: your advice to designers who are looking to do more work like this. What are the kinds of things that if a designer wants to understand localization more, and start to move into this world, what kinds of advice would you have for them?

Jenny Shen: I think one thing that quite helped me to do my work in localization is just to be, again, curious. Not just curious, but also physically traveling. Let’s say a designer might not have the opportunity to go abroad and do a research trip in another country; we can at least look at international tech news. I still [inaudible 00:24:38] my contacts in Singapore, and I read tech news in South East Asia, in Taiwan, in other countries where there are English versions available, or at least in a language that I can read. You can also download apps or go on websites and really just try to be more aware of how designs or how software can be different. And definitely keep an eye out for what other companies are doing in other markets. That is definitely really interesting. We can follow news sources like TechCrunch or The Next Web; there are a lot of news sources, just to keep an eye out and also learn what people are actually doing regarding localization.

Jason: That’s amazing, great. That’s awesome advice, thank you. The last thing I’m going to ask you about — I think we’re probably getting close to a good time to wrap up but, for you, now, with all these things that you’re doing, what’s getting you really excited? What’s the new thing that you see coming that you’re really excited to learn about and incorporate in the work that you do?

Jenny Shen: Something that’s really new and really exciting… For me personally, I’m just really happy that more and more people are thinking about localization and sharing that knowledge. Like what you just said, Robyn is great, and I really like the work that she does. People like her, people like me, are sharing and raising awareness of the importance of considering the local cultures, considering the nuances when developing a localized product. Overall, I’m just happy that people are raising awareness of this issue. I really hope more and more companies who are actually doing it will get on stage, or write or speak more about it, so other people can ultimately learn from the successful companies. I’m sure Facebook does a lot of things, Dropbox does a lot of things, but so far we haven’t seen people actively talking about localization or internationalization, so that’s something I’m really excited about.

Jason: That’s great. Well, this has been absolutely amazing, I can’t thank you enough. For anyone who is going to be in Toronto — if any of you are listening, I hope that you take this to heart. Go say hi to Jenny, tell her how much this work has influenced you. It’s such a big part of being at these events to be able to meet people and learn more about what they’re working on. Don’t hesitate, that’s what we’re all there for. We’re all learning together; some of us have just read a couple of pages ahead, and we’re happy to share what we’ve learned. Thank you so much, Jenny, this has been amazing.

Jenny Shen: Thank you so much, Jason. I’m so happy to take part in this, thank you.

Vitaly: Thank you to both of you for actually making it all happen. A wonderful interview, and also wonderful insights from you, Jenny, thank you so much for that. Just a quick note from me: this is just one of the little sessions that we have about people who are going to speak at our conferences, but also just interesting people doing interesting work. This is important. I think at this point we often fail to highlight the people who are passionately working hard behind the scenes, doing incredible work to change the world. So this is just our humble attempt to bring a little bit of spotlight to those people. With this in mind, thank you so much for watching. Looking forward to the next one.

That’s A Wrap!

We’re looking forward to welcoming Jenny at SmashingConf Toronto 2019, with a live session on designing for users across cultures. We’d love to see you there as well!

Please let us know if you find this series of interviews useful, and whom you’d love us to interview, or what topics you’d like us to cover and we’ll get right to it!

Smashing Editorial (ra, il)
Internet Censorship is Here: How Far Will it Go?

Within hours of the recent mass shooting at a New Zealand mosque by a far-right terrorist, the country’s authorities were scrambling to ensure a sickening video the murderer streamed on Facebook was barred from the nation’s screens. Due to the nature of the Internet, the task of removal proved very difficult. But eventually, the government succeeded — using controversial tactics usually associated with Internet censorship by authoritarian regimes.

For some, the action of one highly democratic nation was a worrying reminder that Internet freedom should not be taken for granted. For others it was a triumph of taste and decency over a Wild West online community that still refuses to accept regulation while simultaneously failing to take responsibility for its actions.

Versions of this debate are being played out around the world, as authorities, online companies, journalists and web professionals try to strike a balance between free speech and protecting Internet users from highly offensive — and potentially also subversive — content. The spread of “fake news”, alleged attempts by foreign powers to meddle in elections, and the age-old difficulty of defining what should be permitted in a free society, are all part of this debate.

With the technology and the excuses for Internet censorship already in place, it’s a debate that will shape the future of the Web. Or should that be ‘futures’, plural?

Full Censorship Can Be Achieved

In China, a billion Internet users are barely aware that Facebook and Google exist. Authorities have no difficulty in ensuring unpleasant content is not seen on the search engines and social media boards that are available there: The Christchurch video was blocked just as effectively as disturbing footage of the Tiananmen Square massacre is, because the Chinese government has built a system of highly effective controls on the Internet known as “the Great Firewall of China”.

Officially called the Golden Shield Project, China’s system of Internet controls has made fools of the experts who said that the Internet could not be tamed or censored. Jon Penney, a Fellow at Harvard’s Berkman Center for Internet & Society and Toronto’s Citizen Lab, told Open Democracy recently that although China’s technology is not yet fully understood by the west, it is:

…among the most technically sophisticated Internet filtering/censorship systems in the world.

“Basically, access to the Internet in China is provided by eight Internet Service Providers, which are licensed and controlled by the Ministry of Industry and Information Technology,” he said. “These ISPs are important, because we’re learning that they do a lot of the heavy lifting in terms of content filtering and censorship.”

Controlling ISPs was one crucial brick of that firewall that allowed New Zealand to take the Christchurch killer’s video down. Indeed, what was controversial for many was the use of such an approach — and the fact that the government used a set of unpublished ‘blacklists’ of the sites it required to be blocked. Kalev Leetaru, a big data expert, wrote on Forbes: “The secret nature of the blacklist and opaque manner in which the companies decided which websites to add to the list or how to appeal an incorrect listing, echoed similar systems deployed around the world in countries like China.”

A Different Internet

China’s great firewall also tracks and filters keywords used in search engines; blocks many IP addresses; and can ‘hijack’ the Domain Name System to ensure attempts to access banned sites draw a blank. This is thought to be done at ISP level, but also further along the system as well, ensuring that browsing even a permitted foreign site from within China can be frustratingly slow. But with sites such as Google, Facebook, Twitter and Wikipedia blocked, most Chinese users simply view an entirely different Internet and App ecosystem.

Adrian Shahbaz, the research director for technology and democracy at Freedom House, an independent watchdog for democracy, says other authoritarian regimes — including Saudi Arabia and the United Arab Emirates — are already showing interest in China’s technology and censorship system. Russia is building its own version, which will allow it to totally isolate the domestic web from the rest of the Internet; ostensibly, this is to ensure the country’s ability to defend itself from a “catastrophic cyber attack”.

There are concerns that this censorship will spread to the West, where attempts to clamp down on hate speech, and to stop foreign ‘trolls’ pushing fake news in a bid to cause instability and influence elections, mean there is no shortage of justification for introducing controls. French President Emmanuel Macron and US President Donald Trump are among the democratic leaders who have threatened crackdowns in the last few months alone.

Censorship or Responsible Regulation?

ISP controls and direct censorship are not the only threats to a unified and ‘free’ internet. With most people consuming their Internet through just a few very popular social media platforms or mainstream news providers, governments can also lean directly on these companies. Singapore — a country that admittedly sits in the bottom 30 of the Press Freedom Index — has just introduced a new “anti-fake news law” allowing authorities in the city-state to remove articles deemed to breach government regulations.

The country’s prime minister said the law will require media outlets to correct fake news articles, and “show corrections or display warnings about online falsehoods so that readers or viewers can see all sides and make up their own minds about the matter.”

Internet giants such as Facebook, Twitter and Google have their Asia headquarters in Singapore and are expected to come under pressure to aid implementation, meaning that those sites could look different when viewed from the city-state. Singapore may not be known for its freedom of speech, but its approach is telling as to how less authoritarian regimes — and those without China’s technology — can impose a creeping web censorship by leaning on the big tech companies that deliver most of the Internet that users see.

The Singaporean premier added that “in extreme and urgent cases, the legislation will also require online news sources to take down fake news before irreparable damage is done.” It is not hard to imagine these words coming from a Western leader, or a judge.

Facebook is Already on Board

Facebook itself, after coming under intense pressure over the use of the site to spread everything from dubious news reports to videos promoting suicide, has now joined the calls for regulation. “From what I’ve learned, I believe we need new regulation in four areas: harmful content, election integrity, privacy and data portability,” Mark Zuckerberg said in a statement recently.

Copyright as Censorship

On the subject of data, Zuckerberg cited Europe’s GDPR — a set of regulations governing the use and storage of personal data — as an example to follow. But it is another EU law, passed in recent weeks, that threatens further Internet fragmentation.

The new Copyright Directive will require tech firms to automatically screen for and remove unauthorised copyrighted material from their platforms. Many campaigners have argued the directive will be harmful to free expression, since the only way to guarantee compliance is to simply block any user-generated content that references other copyrighted material in any way, including criticism, remixes, or even simple quotes.

While the EU directive aims to bolster quality online news journalism by banning its wholesale re-use, sites that rely on user-generated content could end up looking very different when viewed from within Europe, compared to the US for example. Experts talk of a “splintering”, which means that there will effectively be different Internets in different jurisdictions.

Copyright enforcement, of course, is not censorship. And there have always been categories of images, for example, that are illegal in most jurisdictions. But until now, people have been relatively free to publish material online and then suffer the consequences, as was the case in the days of print. Proponents of tighter controls at source argue that simply removing material from sites once it is known to be illegal is a never-ending and ultimately pointless task, especially in the face of organized ’trolls’ who can re-post at will.

During the first 24 hours after the Christchurch attack, Facebook removed 1.5 million re-posts of the murderer’s video, for example. It was only the introduction of controls at ISP level that finally blocked it in New Zealand, at least.

The Human Element

“Extremist content” and “fake news” look set to be the next targets for politicians who favor stricter Internet controls or, as they may argue, greater responsibility from ISPs and major websites. Unlike copyright, this is at least partially subjective, and would require real people, employed by the authorities, to decide what is acceptable on our screens. China, naturally, already employs an army of such censors; it even pays another large group to post material that is explicitly favorable to its policies.

Leetaru said: “Like New Zealand’s recent blocking efforts, China’s system officially exists for the same reason: to block access to disturbing content and content that would disrupt social order. In the Chinese case, however, the system has famously morphed to envelop all content that might threaten the government’s official narratives or call into question its actions.

“In New Zealand’s case, website censorship was limited to a small set of sites allegedly hosting sensitive content relating to the attack. Yet, the government’s apparent comfort with instituting such a nation-wide ban so swiftly and without debate reminds us of how Chinese-style censorship begins.”

Can’t imagine it happening? Britain’s government recently published a ‘White Paper’ — a way of signalling possible legislation — which proposed that social media companies should be forced to take down, within 24 hours, “unacceptable material” that “undermines our democratic values and principles”.

What Constitutes Fake News?

Exactly what constitutes “fake news” has always been open to interpretation: during election campaigns, some democratic leaders have already learned that it is a good label to discredit critical reports with. In Russia, fake news was banned recently, and is defined as anything that “exhibits blatant disrespect for the society, government, official government symbols, constitution or governmental bodies of Russia.”

One area that is being actively targeted in Europe is “extremist” material fostering violence or hatred. In Germany, which already has a system to force platforms to remove “hate speech,” this has recently included the censure of a woman who posted pictures of the Iranian women’s volleyball team to contrast their attire in the 1970s (shorts and vests) with their attire now (headscarves and long sleeves).

The following joke was deemed hateful enough to land the poster a social media ban: “Muslim men are taking a second wife. To finance their lives, Germans are taking a second job.”

Another area that Western governments are showing increasing concern about is private groups that carefully regulate membership, designed to allow like-minded people to share their views unchallenged. Already, there have been calls for Facebook to clamp down on these closed groups or “echo chambers”, on the grounds that they are able to serve undiluted misinformation without challenge. While these requests may once again sound reasonable, it is unclear what would constitute an echo chamber and what kind of ‘misinformation’ could be considered unacceptable — or indeed, who would decide that.

How to Beat the Censors

For those wanting to beat EU copyright laws and, for example, see a meme their friend in California is ‘lol-ing’ about, a virtual private network (VPN) should be a good solution. Already recommended by many security experts, VPNs are encrypted proxy servers that hide your own IP address and can make it look like you are browsing from a different country. For occasional use, even a public proxy site, a ‘browser within a browser’, may well work.

There are various levels of VPN service, and an in-depth look at the options is worth taking before choosing one. However, sophisticated censorship systems such as the Great Firewall of China are capable of detecting VPN use and blocking that too.

A popular alternative to VPN use is the Tor browser, which is designed with anonymity in mind. Although experts rate Tor’s privacy features (and therefore its anti-censorship abilities) higher than VPNs, Tor can also be blocked. What’s more, you have to install the browser on your device and using Tor does not hide the fact that you are using Tor. Both Tor and VPNs are illegal in some countries and their use could put you at risk.

Tor is also the gateway of preference for accessing the Deep Web or Dark Web — which are also used heavily by activists and journalists who are trying to circumvent curbs on their freedom of expression. In a detailed article explaining how to access and use the Dark Web, technology journalist Conor Shiels says:

The Deep Web has been heralded by many as the last bastion of internet privacy in an increasingly intrusive age, while others consider it one of the evilest places on the internet.

The Deep Web is technically any site not indexed by search engines. Such sites would be an obvious place for private groups to base themselves if they are thrown off Facebook or even banned — although of course they may find it harder to recruit new members if they remain hidden from the casual user.

Although the Deep or Dark Web is a popular place for illegal activity, it is not illegal in itself. For those seeking an uncensored experience, it remains a place hidden from the authorities, but of course, the flip side is that you will be hiding your own postings from the vast majority of web users. This aspect of censorship will perhaps be the hardest to bypass as authorities move to cut off the most popular sites and platforms from certain news, views and activities.


Featured image via Unsplash.


Popular Design News of the Week: May 13, 2019 – May 19, 2019

Every week users submit a lot of interesting stuff on our sister site Webdesigner News, highlighting great content from around the web that can be of interest to web designers. 

The best way to keep track of all the great stories and news being posted is simply to check out the Webdesigner News site. However, in case you missed some, here’s a quick and useful compilation of the most popular designer news that we curated from the past week.

Note that this is only a very small selection of the links that were posted, so don’t miss out and subscribe to our newsletter and follow the site daily for all the news.

Polypane: The Browser for Responsive Web Development and Design


PHP Isn’t the Same Old Crappy Language it was Ten Years Ago


Does Anyone Use Social Sharing Buttons?


What’s up with the New Facebook App Logo?


CSS Grid Based Website Builder


Is ‘The Fold’ Still Relevant in Today’s Scrolling and Skimming Culture?


Google Fonts is Adding Font-display


Animating CSS Grid Rows and Columns


46 Form Design Best Practices


5 Things to Be Mindful of When You Design Filters


Things that Come Back to Haunt Web Designers


Cut your Forms in Half


Design Notes: A Free Resource Library for Product Designers


Material Design Guidelines for Dark Theme


I Wrote the Book on User-friendly Design. What I See Today Horrifies Me


How Design Boosts Conversion


From Zero to Hero: Look at Hero Images in Web Design


Free Online Tools for UI/UX to Try in 2019


How to Choose the Best Static Site Generator in 2019


3 Big Mistakes I Made as an Illustrator and How You Can Avoid Them


Will You Get Sued for Using Old Adobe Apps?


The Flexible Future of Branding and the Death of the Logo as We Know it


Site Design: Lusion


Why is Pricing Design so Hard?


I Turned my Designer Interview Task for Google into a Startup


Want more? No problem! Keep track of top design news from around the web with Webdesigner News.


Footnotes That Work in RSS Readers

Feedbin is the RSS reader I’m using at the moment. I was reading one of Harry’s blog posts on it the other day, and I noticed a nice little interactive touch right inside Feedbin. There was a button-looking element with the number one which, as it turned out, was a footnote. I hovered over it, and it revealed the note.

The HTML for the footnote on the blog post itself looks like this:

<p>...they’d managed to place 27.9MB of images onto the Critical Path. Almost 30MB of previously non-render blocking assets had just been turned into blocking ones on purpose with no escape hatch. Start render time was as high as 27.1s over a cable connection<sup id="fnref:1">
<a href="#fn:1" class="footnote">1</a></sup>.</p>

Just an anchor link that points to #fn:1, and the <sup> makes it look like a footnote link. This is how the styling would look by default:

The HTML for the list of footnotes at the bottom of the blog post looks like this:

  1. 5Mb up, 1Mb down, 28ms RTT. 
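For reference, footnote lists of this shape are typically generated with markup like the following. This is a sketch, not the blog’s verbatim source: the `fn:1` and `fnref:1` IDs match the anchor shown earlier, but the wrapper and the `reversefootnote` class name (kramdown’s default) are assumptions:

```html
<div class="footnotes">
  <ol>
    <li id="fn:1">
      <p>5Mb up, 1Mb down, 28ms RTT.
        <!-- back link that pops the reader back up to the in-text reference -->
        <a href="#fnref:1" class="reversefootnote">&#8617;</a>
      </p>
    </li>
  </ol>
</div>
```

The back link is the second half of the pattern: one plain anchor down to the note, one plain anchor back up.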

As a little side note, I notice Harry is using scroll-behavior to smooth the scroll. He’s also got some nice :target styling in there.

All in all, we have:

  1. a link to go down and read the note
  2. a link to pop back up

Nothing special there. No fancy libraries or anything. Just semantic HTML. That should work in any RSS reader, assuming they don’t futz with the hash links and maintain the IDs on the elements as written.

It’s Feedbin that sees this markup pattern and decides to do the extra UI styling and fancy interaction. By inspecting what’s going on, it looks like they hide the originals and replace them with their own special stuff:

Ah ha! A Bigfoot spotting! It’s right in their source.

That means they fire off Bigfoot when articles are loaded and it does the trick. Like this:

See the Pen Bigfoot Footnotes by Chris Coyier (@chriscoyier) on CodePen.

That said, it’s based on an already functional foundation. Lemme end this with that same markup pattern, and I’ll try to look at it in different RSS readers to see what they do. Feel free to report what it does in your RSS reader of choice in the comments, if it does anything at all.

Azul is an abstract board game designed by Michael Kiesling and released by Plan B Games[1] in 2017. From two to four players collect tiles to fill up a 5×5 player board. Players collect tiles by taking all the tiles of one color from a repository, and placing them in a row, taking turns until all the tiles for that round are taken. At that point, one tile from every filled row moves over to each player’s 5×5 board, while the rest of the tiles in the filled row are discarded. Each tile scores based on where it is placed in relation to other tiles on the board. Rounds continue until at least one player has made a row of tiles all the way across their 5×5 board.

  1. Plan B makes other cool games like Century and Reef. 

The post Footnotes That Work in RSS Readers appeared first on CSS-Tricks.

Everything You Ever Wanted to Know About inputmode

The inputmode global attribute provides a hint to browsers for devices with onscreen keyboards to help them decide which keyboard to display when a user has selected any input or textarea element.

<input type="text" inputmode="" />
<textarea inputmode=""></textarea>

Unlike changing the type of the form, inputmode doesn’t change the way the browser interprets the input — it instructs the browser which keyboard to display.

The inputmode attribute has a long history but has only very recently been adopted by the two major mobile browsers: Safari for iOS and Chrome for Android. Before that, it was implemented in Firefox for Android way back in 2012, and then subsequently removed several months later (though it is still available via a flag).

Almost six years later, Chrome for Android implemented the feature — and with the recent release of iOS 12.2, Safari now supports it too.

This browser support data is from Caniuse, which has more detail. A number indicates that the browser supports the feature at that version and up.




But before we go deep into the ins and outs of the attribute, consider that the WHATWG living standard provides inputmode documentation, while the W3C 5.2 spec no longer lists it in its contents, which suggests they consider it obsolete. Given that WHATWG has documented it and browsers have worked toward supporting it, we’re going to assume the WHATWG specification is the standard.

inputmode accepts a number of values. Let’s go through them, one by one.


<input type="text" inputmode="none" />

We’re starting here because it’s very possible we don’t want any type of keyboard on an input. Using inputmode="none" will not show a keyboard at all on Chrome for Android. iOS 12.2 will still show its default alphanumeric keyboard, so specifying none could be sort of a reset for iOS in that regard. Regardless, none is intended for content that renders its own keyboard control.


<input type="text" inputmode="numeric" />

This is probably one of the more common inputmode values out in the wild because it’s ideal for inputs that require numbers but no letters — things like PIN entry, zip codes, credit card numbers, etc. Using the numeric value with an input of type="text" may actually make more sense than setting the input to type="number" alone because, unlike a numeric input, inputmode="numeric" can be used with the maxlength, minlength and pattern attributes, making it more versatile for different use cases.

The numeric value on Chrome Android (left) and iOS 12.2 (right)

I’ve often seen sites using type=tel on an input to display a numeric keyboard, and that checks out as a workaround, but it isn’t semantically correct. If that bums you out, remember that inputs support the pattern attribute; we can add pattern="\d*" to the input for the same effect. That said, only use this if you are certain the input should only allow numeric input, because Android (unlike iOS) doesn’t allow the user to switch the keyboard to use letters, which might inadvertently prevent users from submitting valid data.
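Putting the pieces above together, here is a minimal sketch of a numeric-only field; the id, name and maxlength values are illustrative assumptions, not part of any particular site:

```html
<!-- numeric on-screen keyboard, plus length/pattern validation
     that a bare type="number" input wouldn't allow -->
<label for="pin">PIN</label>
<input type="text" id="pin" name="pin"
       inputmode="numeric"
       pattern="\d*"
       maxlength="4" />
```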


<input type="text" inputmode="tel" />

Entering a telephone number using a standard alphanumeric keyboard can be a pain. For one, each number on a telephone keyboard (except 1 and 0) represents three letters (e.g. 2 represents A, B and C) which are displayed with the number. The alphanumeric keyboard does not reference those letters, so decoding a telephone number containing letters (e.g. 1-800-COLLECT) takes more mental power.

The tel value on Chrome Android (left) and iOS 12.2 (right)

Using inputmode set to tel will produce a standard-looking telephone keyboard, including keys for digits 0 to 9, the pound (#) character, and the asterisk (*) character. Plus, we get those alphabetic mnemonic labels (e.g. ABC).


<input type="text" inputmode="decimal" />
The decimal value on Chrome Android (left) and iOS 12.2 (right)

An inputmode set to the decimal value results in a subtle change in iOS where the keyboard appears to be exactly the same as the tel value, but replaces the +*# key with a simple decimal (.). On the flip side, this has no effect on Android, which will simply use the numeric keyboard.


<input type="text" inputmode="email" />

I’m sure you (and at least someone you know) have filled out a form that asks for an email address, only to have to swap keyboards to access the @ character. It’s not life-threatening or anything, but certainly not a great user experience either.

That’s where the email value comes in. It brings the @ character into the fray, as well as the decimal (.) character.

The email value on Chrome Android (left) and iOS 12.2 (right)

This could also be a useful keyboard to show users who need to enter a Twitter username, given that @ is a core Twitter character for identifying users. However, the email address suggestions that iOS displays above the keyboard may cause confusion.

Another use case could be if you have your own email validation script and don’t want to use the browser’s built-in email validation.


<input type="text" inputmode="url" />
The url value on Chrome Android (left) and iOS 12.2 (right)

The url value provides a handy shortcut for users to append TLDs (e.g. .com) with a single key, as well as keys typically used in web addresses, like the dot (.) and forward slash (/) characters. The exact TLD displayed on the keyboard is tied to the user’s locale.

This could also be a useful keyboard to show users if your input accepts domain names as well as full URIs. Use type="url" instead if your input requires validation.


<input type="text" inputmode="search" />
The search value on Chrome Android (left) and iOS 12.2 (right)

This displays a blue Go key on iOS and a green Enter key on Android, both in place of the Return key. As you may have guessed by the value’s name, search is useful for search forms, providing that submission key to make a query.

If you’d like to show Search instead of Enter on iOS and a magnifying glass icon on Android in place of the green arrow, use type=search instead.
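To make the contrast concrete, the two variants can be written side by side; this hypothetical snippet exists only to illustrate the difference described above:

```html
<!-- blue "Go" key on iOS, green "Enter" key on Android -->
<input type="text" inputmode="search" />

<!-- "Search" key on iOS, magnifying-glass icon on Android -->
<input type="search" />
```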

Other things you oughta know

  • Chromium-based browsers on Android — such as Microsoft Edge, Brave and Opera — share the same inputmode behavior as Chrome.
  • I haven’t included details of keyboards on iPad for the sake of brevity. It’s mostly the same as iPhone but includes more keys. Same is true for Android tablets, save for third-party keyboards, which might be another topic worth covering.
  • The original proposed spec had the values kana and katakana for Japanese input but they were never implemented by any browser and have since been removed from the spec.
  • latin-name was also one of the values of the original spec and has since been removed. Interestingly, if it’s used now on Safari for iOS, it will display the user’s name as a suggestion above the keyboard.

    The latin-name value displays my name as an auto-fill suggestion


Oh, you want to see how all of these input modes work for yourself? Here’s a demo you can use on a device with a touchscreen keyboard to see the differences.


Weekly news: PWA Issue on iOS, Performance Culture, Anti-Tracking in Browsers

Šime posts regular content for web developers. Each week, he covers timely news at the intersection of development standards and the tools that make them available on the web.

Installed PWAs cannot easily be restarted on iOS

Maximiliano Firtman: On iOS, it is not possible to restart an installed PWA by closing it from the recently used apps screen and then immediately reopening it. Instead of restarting the app, iOS restores its state. This can be a problem for users if the PWA gets stuck in a broken state.

After some undefined time, the saved context seems to disappear. So if you get out of the PWA, do nothing with your phone and wait some hours to go back to the PWA, it restarts from scratch.

Instilling a performance culture at The Telegraph

Gareth Clubb: At The Telegraph (a major UK newspaper), we set up a web performance working group to tackle our “organizational” performance challenges and instill a performance culture. The group meets regularly to review third-party tags and work on improving our site’s performance.

We’ve started deferring all JavaScript (including our own) using the <script defer> attribute. This change alone nearly doubled our (un-throttled) Lighthouse performance score.

Deferring our JavaScript hasn’t skewed any existing analytics and it certainly hasn’t delayed any advertising. […] The First Ad Loaded metric improved by an average of four seconds.

We also removed 1 MB of third-party payload from our new front end. When one of our teams requests the addition of any new script, we now test the script in isolation and reject it if it degrades our metrics (first contentful paint, etc.).

When we started this process, we had a collection of very old scripts and couldn’t track the original requester. We removed those on the premise that, if they were important, people would get back in touch — no one did.
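The deferral technique described above is a one-attribute switch; a minimal sketch (the script path is a placeholder, not The Telegraph’s actual bundle):

```html
<!-- defer: fetch in parallel with HTML parsing, execute only after the
     document is parsed, preserving order between deferred scripts -->
<script defer src="/assets/app.js"></script>
```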

Microsoft plans to add tracking prevention to the Edge browser

Kyle Pflug: Microsoft has announced plans to add options for blocking trackers to the Edge browser. Malicious trackers would be blocked automatically, and the user would have the option to additionally block all potential trackers.

This would make Edge the fourth major browser with some form of built-in anti-tracking feature (two other major browsers, Opera and UC Browser, include ad blockers instead).

  1. In 2015, Firefox added Tracking Protection — recently renamed to Content Blocking — becoming the first major browser to protect users from third-party trackers (when browsing the web in private mode).
  2. Since 2017, Safari prevents cross-site tracking by default, through a feature called Intelligent Tracking Prevention (ITP). Users are prompted to allow tracking when they try to interact with third-party widgets on websites.
  3. Earlier this year, Samsung Internet added an experimental feature called Smart Anti-Tracking that denies third-party trackers access to cookies.

The post Weekly news: PWA Issue on iOS, Performance Culture, Anti-Tracking in Browsers appeared first on CSS-Tricks.
