The Case For Brand Systems: Aligning Teams Around A Common Story

Laura Busche

There’s a disconnect. If you’ve been in business long enough, you’ve definitely felt it. Teams pull companies in different directions with internal processes and specialized frameworks for absolutely everything. They’re focused on building their “part” of the product, and doing it incredibly well. There’s just one problem: our internal divisions mean nothing to customers.

In customers’ eyes, it’s the same brand replying to their support tickets and explaining something in a modal; the same entity is behind the quality of the product they use and the ads that led them to that product in the first place. To them, there are no compartments, no org charts, no project managers. There is only one brand experience. At the end of the day, customers are not tasting individual ingredients, or grabbing a bite during each stage of preparation, they’re eating the entire meal. At once. In sit-downs that keep getting shorter.

Something is getting lost in the process. Somewhere between our obsession with specializing and our desire to excel at individual roles, we’ve lost sight of the entire brand experience. We spend our days polishing individual touchpoints, but not enough time zooming out to understand how they come together to shape our customer’s journey. The time for shying away from the word “holistic” is over — it’s the only way customers will ever perceive you. And that is the ultimate inspiration behind Brand Systems, a tool we will explore throughout this article.

What Is A Brand System?

At its core, a Brand System is a shared library that helps define, preserve, and improve the customer’s experience with the brand.

Just like we can’t assume that the sum of quality ingredients will result in a great meal, our individual contributions won’t magically translate into a memorable brand experience for customers. Their brand experiences are blended, multifaceted, and owner-blind. It’s time for the product, design, engineering, marketing, support, and various other functions to establish a single source of truth about the brand that is accessible to, understood, and used by all. That source is a Brand System.

Why Is It Useful? The Benefits Of Building A Brand System

Your brand is the story that customers recall when they think about you. When you document that story, your team can align to live up to and protect those standards consistently. There are multiple benefits to building and maintaining a Brand System:

  • Cohesion
    It helps preserve a uniform presence for the brand.
  • Access
    It empowers teams to apply assets consistently. A common language facilitates collaboration across different functional areas.
  • Flexibility
    This system is in continuous iteration, providing an up-to-date source of truth for the team.
  • Speed
    When your building blocks are readily available, everything is easier to construct. Collaboration is also accelerated as individuals get a fundamental understanding of how others’ work contributes to the brand experience.
  • Ethos
    Documenting the brand’s ethos will motivate the team to preserve, protect, and extend it. That level of care is perceived and appreciated by users.

To accomplish the above most effectively, the Brand System must remain accessible, empowering, holistic, extensible, flexible, and iterative. Let’s take a closer look at each of these traits and some examples of how they can be put in practice.

Accessible

Open to all team members, regardless of their level of technical expertise or direct involvement with each element. Systems that perpetuate divisions within organizations can be visually mesmerizing, but end up fragmenting the brand experience.

Help Scout provides a visual example of what the concepts of illustration scale and grid mean for those who may not be familiar with them.

Empowering

Full of examples that encourage continuous application. Some aspects of the brand experience can be hard to grasp for those who don’t interact with them daily, but a Brand System should empower everyone to recognize what is fundamentally on and off-brand — regardless of the touchpoint involved. Does the company’s support team feel confident about how the brand is visually represented? Does the design team have a general idea of how the brand responds to customers’ requests?

Webflow provides clear examples and instructions to empower different team members to write for the product, even if they’re not used to it.

Holistic

No aspect of the brand’s execution is too foreign to include. Brand Systems surface the tools and frameworks various teams are using to clarify the bigger picture. Too often the customer’s experience appears disjointed because we’ve compartmentalized it. The brand experience “belongs” to marketing, or design, or some kind of executive-level team, so resources like UI components or support scripts are dealt with separately. As the unifying backbone of all business functions, the brand experience informs execution at every level.

For WeWork, physical spaces are a key part of the entire experience. Their brand system includes guidelines in this respect.

Extensible

Individual teams can develop specific subsections within the system to fit their needs. Regardless of how specialized these different areas get, the Brand System remains accessible to all. Team members decide how deeply they want to dive into each element, but the information is readily available.

Atlassian makes use of its Confluence platform to host guidelines related to different parts of the brand experience. Sections have varying levels of access and are regularly maintained by different teams.

Flexible

There can be alternative versions of the Brand System for external audiences, including media and partners. While we’d like the brand experience to unfold in owned channels, the reality is that sometimes customers interact with it through earned channels controlled by third parties. In those cases, a minimized version of the Brand System can help preserve the story, voice, and visual style.

Instagram, for example, offers a special set of guidelines for users broadcasting content about the brand.

Iterative

The Brand System should evolve and be continuously updated over time. To that end, host and serve the content in a way that makes it easy to access and edit as things change.

Salesforce surfaces ongoing updates to its Lightning system in a section called Release Notes.

What Does A Brand System Contain?

Brand Systems solve a fundamental disconnect in teams. Nobody outside of marketing feels empowered to market the product, nobody outside of design feels like they have a solid grasp of the brand’s style guidelines, nobody outside of support feels confident responding to a customer. Sound familiar? That’s why it’s crucial to build a Brand System that is as all-encompassing and transparent as possible.

While Design Systems are the closest implementation of a resource like this, a Brand System is centered around the brand story, understanding symbols and strategy as necessary layers in its communication. It is informed by product and design teams, but also functional areas like support, marketing, and executive leadership that are key to shaping the entire brand experience. In Brand Systems, visual components are presented in the context of a business model, a company mission, positioning, and other strategic guidelines that must be equally internalized by the team.

Comet, a Design System by Discovery Education.

A Brand Style Guide is another commonly used tool to align teams around a set of principles. While useful, these guides focus heavily on marketing and design specifications. Their main objective is to impart a shared visual language for the brand so that it is applied cohesively across the board. That resolves part of the problem but doesn’t reveal how other functional areas contribute to the overall brand experience. Therefore, these guidelines could be considered as part of the larger Brand System.

Brand Guidelines for the Wildlife Conservation Society (Designed by Pentagram)

That said, these are some of the sections and components you should consider including in your Brand System:

Story

If the brand experience is embroidery, the story is the thread weaving all the touchpoints together. It reveals what has been and will continue to be important, who the brand is trying to serve, and how it plans to do so most effectively. Start your Brand System by answering questions like:

  • What is this brand’s core value proposition?
  • What does its history look like? Include a timeline and key milestones.
  • How do you summarize both of the above in an “About Us” blurb?
  • What are the brand’s core values? What propels this team forward?
  • Who is the audience? Who are the main personas being addressed and what are these individuals trying to get done?
The Smithsonian Institution included a section called Audience Segments in the strategy section of their branding website.
  • What does their journey with us look like? If known, what is their sentiment towards the brand?
  • What is the brand’s mission as it relates to its target customers? How about the team itself and society at large?
  • How does this business model work, in general terms? (Specifics will be included in the “Strategy” section).
  • Who are the brand’s direct competitors? What sets it apart from them?
  • How does this brand define product/service quality? Are there any values around that?
  • What is the brand’s ultimate vision?
Shopify summarizes the brand’s Product Experience Principles in their Polaris design system.

Symbols

You know what drives the brand and the narrative it brings to the table. This part of the Brand System is about actually representing that. Symbols are sensory: they define what the brand looks, sounds, and feels like. Because they help us convey the story across multiple touchpoints, it’s vital for the entire team to have a grasp of what they stand for and how they can be properly deployed.

Here are some examples of building blocks you’ll want to define in this section:

  • Main and alternative versions of the brand’s logo
  • Color palette
  • Typography scheme
  • Icons, photography, and illustration style
  • Patterns, lines, and textures
  • Layout and spacing
  • Slide deck styling
  • Stationery and merchandising applications
  • Motion & sound, including audiobranding
  • Packaging (if applicable)
  • Scent (if applicable).

For brands built around digital products:

  • User Interface components
  • Interaction patterns
  • Key user views & states
  • User Experience guidelines
  • Code standards
  • Tech stack overview.
IBM includes patterns for common actions like “Next” in its Carbon design system.

Bear in mind that the nature of your product will determine how you complete the section above. A tech-based company might list elements like interaction patterns, but a retail brand might look at aspects like store layout.
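Many teams make building blocks like the ones above easier to apply consistently by encoding them as design tokens that every function can consume. The following is only a rough sketch in TypeScript; the token names and values are invented for illustration, not a prescribed format.

```typescript
// brand-tokens.ts: a hypothetical, simplified design-token module.
// Values are placeholders; a real Brand System would document each
// token's intent and approved usage alongside it.

export const color = {
  primary: "#1b4fd8",      // main brand color
  primaryDark: "#12339a",  // hover / pressed states
  surface: "#ffffff",
  textBody: "#1f2933",
  textInverse: "#ffffff",
} as const;

export const type = {
  fontFamily: "'Inter', 'Helvetica Neue', sans-serif",
  scale: { body: "1rem", h3: "1.25rem", h2: "1.563rem", h1: "1.953rem" },
} as const;

export const spacing = {
  // 8px base grid, expressed in rem
  xs: "0.5rem",
  sm: "1rem",
  md: "1.5rem",
  lg: "3rem",
} as const;

// Product, marketing, and support tooling can all import the same source of truth:
export type BrandColor = keyof typeof color;
```

Because every team imports the same module, a change to the palette or type scale propagates everywhere at once, which is exactly the kind of single source of truth a Brand System aims for.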

Strategy

Once you’ve clarified the brand’s story, and the symbols that will represent it, define how you go about growing it. This part of the Brand System addresses both messaging and method, narrative and tactics.

It’s not enough for the team to be aware of why this brand is being built: the goal is to get everyone familiar with how it is sustained, grown, and communicated.

Just like customer support can feel alienated from the design function, these growth-related tenets are often not surfaced beyond marketing, acquisition, or executive conversations. However, those kinds of strategic ideas do impact customers at every touchpoint. A Brand System is a crash course that makes high-level perspective accessible to all, unlocking innovation across the entire team.

Here are some guiding questions and components to include:

  • How does this brand define success? What growth metric does the team focus on to track performance? (GMV, revenue, EBITDA, sales, etc).
  • What does the conversion funnel look like? How does this brand acquire, activate and retain customers?
    • What is the average Customer Acquisition Cost (CAC)?
    • What is the retention rate?
    • What is the Customer’s Lifetime Value (LTV)?
    • Which channels drive the most revenue? Traffic?
    • Which products and categories drive the most revenue? Traffic?
    • Which of the buyer personas defined in the “Story” section bring the most revenue? Traffic?
  • What are the brand’s key objectives for this year (to be updated)? These are company-wide goals.
  • What initiatives or themes is the team focusing on to meet those objectives this year?
  • How is the team organized to meet those objectives?
  • What channels does this brand use to communicate with its target audience? Name them and specify how big they are in terms of reach and/or traffic.
    • Owned
      • Website, blog, or other web properties
      • Social channels
      • Email
    • Earned
      • Ongoing brand partnerships
      • Referrals
    • Paid
      • Specific ad platforms
      • Influencers
      • Affiliates
  • What is this brand’s personality like?
  • What defines the brand’s voice? How does it adopt different tones in specific situations?
  • How does the brand’s voice come to life in owned, earned, and paid channels?
    • Company website and landing pages
    • Social channels. Clarify if tone varies
    • Email
    • Ads
    • Sales collateral
    • Press releases
    • Careers page & job sites
  • How does the brand’s voice come to life in the customer’s experience with the product, before, during, and after purchase?
    • Call to action
    • Navigation
    • Promotion
    • Education
    • Success/error
    • Apology/regret
    • Reassurance/support
  • How does the brand’s voice come to life internally (within the team)?
    • Employee handbooks
    • Onboarding process.
  • What topics should this brand aim to be a thought leader on?
UC Berkeley defines the themes and key messages to highlight when speaking to different audiences.
  • Grammar and spelling-wise, does this brand adhere to a specific style manual?
  • What are some common words and expressions users can expect from this brand?
Mailchimp’s Content Style Guide
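If it helps to make the funnel questions above more concrete, the sketch below shows how the metrics they reference (CAC, retention, LTV) are commonly calculated. The input figures are invented, and your team’s definitions may differ, so treat this as an illustration rather than a standard.

```typescript
// Hypothetical figures for one period; replace with your own data sources.
const marketingSpend = 50_000;          // total acquisition spend
const newCustomers = 1_250;
const avgOrderValue = 48;
const purchasesPerYear = 5;
const avgCustomerLifespanYears = 3;
const customersAtStart = 10_000;
const customersAtEnd = 10_400;

// Customer Acquisition Cost: what it costs, on average, to win one customer.
const cac = marketingSpend / newCustomers; // 40

// A simple, revenue-based Customer Lifetime Value estimate.
const ltv = avgOrderValue * purchasesPerYear * avgCustomerLifespanYears; // 720

// Retention rate for the period, excluding newly acquired customers.
const retention = (customersAtEnd - newCustomers) / customersAtStart; // 0.915

console.log({ cac, ltv, ltvToCac: ltv / cac, retention });
```

Documenting how these numbers are defined and calculated in the Brand System keeps every team reading the funnel the same way.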

How To Get Started With Your Own Brand System

If you’re feeling ready to shape your own Brand System, here are some steps you can take to approach the challenge as a team:

1. Kickoff

Set up a company-wide planning session. If you can’t have everyone attend, make sure there’s someone representing product, design, marketing, support, and the executive team. Lead a conversation with two goals:

  • Brand audit
    How is the brand represented in various contexts? Is there brand debt to account for? Consider the brand’s story, symbols, and strategy.
  • Brand definition
    Discuss how this brand:
    • sounds,
    • looks,
    • speaks,
    • feels,
    • interacts,
    • introduces itself in a few words.

2. Production

After having that open discussion, delegate the various sections described above (Story, Symbols, Strategy) to the teams that interact most closely with each of them. Ask specific individuals to document the brand’s point of view on each system component listed in the previous section.

3. Publication

Upload the full Brand System to a central location where it is readily available and editable. Schedule a team-wide presentation to get everyone on the same page.

4. Summary

Condense the main guidelines in a one-page document called Brand-At-a-Glance. Here are some of the key points you could include:

  • Brand mission, vision, and values
  • Visual identity: color palette, main logo versions, type scheme, illustration style
  • About Us blurb.

5. Revision

Establish a periodic cross-functional meeting where the brand is protected and defined. Frequency is up to you and your team. This is also where the Brand System is formally updated, if needed. In my experience, unless this space is scheduled, the revisions simply won’t happen. Time will pass, customers’ needs will change, the brand experience will shift, and there won’t be a common space to document and discuss it all.

Brand Systems: Aligning Teams Around A Common Story

When customers interact with your brand, they’re not aware of what’s going on backstage. And there is no reason they should. All they perceive is the play you’re presenting, the story you’re sharing, and the solution it represents for them. When the individual actors go off script, as great as they might sound solo, the brand experience breaks.

Enter the Brand System: a shared library that defines the brand’s story, symbols, and strategy. In doing so, this tool aligns all actors around the multi-faceted experience customers truly perceive. Within teams, Brand Systems help avoid fragmentation, confusion, and exclusion. Externally, they’re a script to deliver the most compelling brand experience possible — at every touchpoint, every single time.

Smashing Editorial (ah, yk, il)
What I Learned From Designing AR Apps

Gleb Kuznetsov

The digital and technological landscape is constantly changing — new products and technologies are popping up every day. Designers have to keep track of what is trending and where creative opportunities are. A great designer has the vision to analyze new technology, identify its potential, and use it to design better products or services.

Among the various technologies that we have today, there’s one that gets a lot of attention: Augmented Reality. Companies like Apple and Google realize the potential of AR and invest significant amounts of resources into this technology. But when it comes to creating an AR experience, many designers find themselves in unfamiliar territory. Does AR require a different kind of UX and design process?

As for me, I’m a big fan of learning-by-doing, and I was lucky enough to work on the Airbus mobile app as well as the Rokid AR glasses OS product design. I’ve established a few practical rules that will help designers to get started creating compelling AR experiences. The rules work both for mobile augmented reality (MAR) and AR glasses experiences.

Rokid Glasses motion design exploration by Gleb Kuznetsov

Glossary

Let’s quickly define the key terms that we will use in the article:

  • Mobile Augmented Reality (MAR) delivers augmented reality experiences on mobile devices (smartphones and tablets);
  • AR Glasses are wearable smart displays with see-through lenses for viewing an augmented reality experience.

1. Get Buy-In From Stakeholders

Similar to any other project you work on, it is vital to get support from stakeholders as early in the process as possible. Despite the buzz around AR in recent years, many stakeholders have never used an AR product. As a result, they may question the technology simply because they don’t understand the value it delivers. Our objective is to get their agreement.

“Why do we want to use AR? What problem does it solve?” are questions that stakeholders ask when they evaluate the design. It’s vital to connect your design decisions to the goals and objectives of the business. Before reaching stakeholders, you need to evaluate your product for AR potential. Here are three areas where AR can bring a lot of value:

  • Business Goals
    Understand the business goals you’re trying to address with AR. Stakeholders always appreciate seeing design solutions connected to the goals of the business, and much of the time the business responds to quantifiable numbers. Be ready to explain how your design is intended to help the company make more money or save money.
  • Helpfulness For Users
    AR will provide a better user experience and make the user journey a lot easier. Stakeholders appreciate technologies that improve the main use of the app. Think about the specific value that AR brings to users.
  • Creativity
    AR is excellent when it comes to creating a more memorable experience and improving the design language of a product. Businesses often have a specific image they are trying to portray, and product design has to reflect this.

Only when you have a clear answer to the question “Why is this better with AR?” should you share your thoughts with stakeholders. Invest time in preparing a presentation. Seeing is believing, and you’ll have a better chance of buy-in from management when you show them a demo. The demo should make it clear what you are proposing.

2. Discovery And Ideation

Explore And Use Solutions From Other Fields

No matter what product you design, you have to spend enough time researching the subject. When it comes to designing for AR, look for innovations and successful examples with similar solutions from other industries. For example, when my team was designing audio output for AR glasses, we learned a lot from headphones and speakers on mobile phones.

Design User Journey Using “As A User I Want” Technique

One of the fundamental things you should remember when designing AR experiences is that AR exists outside of the phone or glasses. AR technology is just a medium that people use to receive information. The tasks that users want to accomplish using this technology are what is really important.

“How to define a key feature set and be sure it will be valuable for our users?” is a crucial question you need to answer before designing your product. Since the core idea of user-centered design is to keep the user in the center, your design must be based on the understanding of users, their goals and contexts of use. In other words, we need to embrace the user journey.

When I work on a new project, I use a simple technique: “As a [type of user], I want [goal] because [reason].” I put myself in the user’s shoes and think about what will be valuable for them. This technique is handy during brainstorming sessions. Used together with storyboarding, it allows you to explore various scenarios of interaction.

In the article “Designing Tomorrow Today: the Airbus iflyA380 App,” I’ve described in detail the process that my team followed when we created the app. The critical element of the design process was getting into the passenger’s mind, looking for insights into what the best user experience would be before, during and after their flight.

To understand what travelers like and dislike about the travel experience, we held a lot of brainstorming sessions together with Airbus. Those sessions revealed a lot of valuable insights. For example, we found that visiting the cabin (from home) before flying on the A380 was one of the common things users want to do. The app uses augmented reality so people can explore the cabin and virtually visit the upper deck, the cockpit, the lounges — wherever they want to go — even before boarding the plane.

IFLY A380 iOS app design by Gleb Kuznetsov.

The app also accompanies passengers from the beginning to the end of their journey — basically, everything a traveler wants to do with the trip is wrapped up in a single app. Finding your seat is one of the features we implemented: it uses AR to show your seat in the plane. As a frequent traveler, I love that feature; you don’t need to search for your seat when you enter the cabin, you can do it beforehand — from the comfort of your couch. Users can access this feature right from the boarding pass, by tapping the ‘glass’ icon.

IFLY A380 app users can access the AR feature by tapping on the ‘glass’ icon.

Narrow Down Use Cases

It might be tempting to use AR to solve a few different problems for users. But in many cases, it’s better to resist this temptation. Why? Because by adding too many features to your product, you make it not only more complex but also more expensive. This rule is even more critical for AR experiences, which generally require more effort. It’s always better to start with one simple but well-designed AR experience rather than multiple complex but loosely designed ones.

Here are a few simple rules to follow:

  • Prioritize the problems and focus on the critical ones.
  • Use storyboarding to understand exactly how users will interact with your app.
  • Remember to be realistic. Being realistic means that you need to strike a balance between creativity and technical capabilities.

Use Prototypes To Assess Ideas

When we design traditional apps, we often use static sketches to assess ideas. But this approach won’t work for AR apps.

Whether a particular idea is good or bad cannot be judged from a static sketch; quite often, ideas that look great on paper don’t work in a real-life context.

Thus, we need to interact with a prototype to get this understanding. That’s why it’s essential to get to the prototyping state as soon as possible.

It’s important to mention that by ‘prototyping state’ I don’t mean a polished, high-fidelity prototype that looks and works like a real product. What I mean is using rapid prototyping to build something that lets you experience the interaction. You need to make prototypes really fast — remember that the goal of rapid prototyping is to evaluate your ideas, not to demonstrate your skills as a visual designer.

3. Design

Similar to any other product you design, when you work on an AR product your ultimate goal is to create an intuitive, engaging, and clean interface. This can be challenging since the interface in AR apps accounts for both input and output.

Physical Environment

AR is inherently an environmental medium. That’s why the first step in designing an AR experience is defining where the user will be using your app. It’s vital to select the environment up front. And when I say ‘environment’, I mean the physical environment where the user will experience the app — it could be indoors or outdoors.

Here are three crucial moments that you should consider:

  1. How much space do users need to experience AR? Users should have a clear understanding of the amount of space they’ll need for your app. Help them understand the ideal conditions for using the app before they start the experience.
  2. Anticipate that people will use your app in environments that aren’t optimal for AR. Most physical environments have limitations. For example, your app might be an AR table tennis game, but your users might not have a large horizontal surface. In this case, you might want to use a virtual table generated based on the device’s orientation.
  3. Light estimation is essential. Your app should analyze the environment automatically and provide contextual guidance if the environment is not good enough. If it is too dark or too bright, tell the user to find a better place to use your app. ARCore and ARKit have built-in systems for light estimation (see the sketch after this list).
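To make the light-estimation guidance in point 3 concrete, here is a minimal sketch of how an app might react to an ambient-light estimate before starting the experience. The helper functions are hypothetical stand-ins; in practice you would read the value from your framework’s light estimation (for example, ARKit’s ARLightEstimate or ARCore’s LightEstimate) and route the hint through your own UI.

```typescript
// Hypothetical hooks into your AR framework and UI layer.
declare function getAmbientLightIntensity(): number; // assumed range: 0 = pitch black, 1 = very bright
declare function showHint(message: string): void;

const MIN_INTENSITY = 0.25;
const MAX_INTENSITY = 0.9;

export function environmentIsUsable(): boolean {
  const intensity = getAmbientLightIntensity();

  if (intensity < MIN_INTENSITY) {
    showHint("It looks quite dark here. Try moving somewhere brighter.");
    return false;
  }
  if (intensity > MAX_INTENSITY) {
    showHint("Strong glare detected. Avoid pointing the camera at direct light.");
    return false;
  }
  return true; // lighting looks workable, continue into the AR session
}
```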

When my team designed the Airbus iflyA380 mobile AR experience, we took the available physical space into account. We also considered other aspects of the interaction, such as the speed at which the user has to make decisions. For instance, a user who wants to find her seat during boarding won’t have much time.

We sketched the environment (in our case, it was a plane inside and outside) and put AR objects in our sketch. By making our ideas tangible, we got an understanding of how the user will want to interact with our app and how our app will adapt to the constraints of the environment.

AR Realism And AR Objects Aesthetics

After you define the environment and its required properties, you will need to design the AR objects. One of the goals of creating an AR experience is to blend the virtual with the real. The objects you design should fit into the environment — people should believe that the AR objects are real. That’s why it’s important to render digital content in context with the highest possible level of realism.

Here are a few rules to follow:

  • Focus on the level of detail and design 3D assets with lifelike textures. I recommend using a multi-layer texture model such as PBR (Physically Based Rendering). Most AR development tools support it, and it is the most cost-effective way to achieve an advanced degree of detail for your AR objects (a short sketch follows this list).
  • Get the lighting right. Lighting is a crucial factor in creating realism — the wrong light instantly breaks the immersion. Use dynamic lighting, reflect environmental lighting conditions on virtual objects, and cast object shadows and reflections on real-world surfaces to create more realistic objects. Your app should also react to changes in real-world lighting.
  • Minimize the size of textures. Mobile devices are generally less powerful than desktops. Thus, to let your scene load faster, don’t make textures too large. Strive to use 2k resolution at most.
  • Add visual noise to AR textures. Flat-colored surfaces will look fake to the user’s eye. Textures will appear more lifelike when you introduce rips, pattern disruptions, and other forms of visual noise.
  • Prevent flickering. Update the scene 60 times per second to prevent flickering of AR objects.
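As a rough illustration of the texturing advice above, here is how a web-based MAR scene built with three.js might set up a physically based material. The texture paths are placeholders, and native ARKit or ARCore pipelines would use their own equivalents.

```typescript
import * as THREE from "three";

// Placeholder texture paths; keep the maps at 2k resolution or below.
const loader = new THREE.TextureLoader();
const colorMap = loader.load("/textures/crate_basecolor.jpg");
const normalMap = loader.load("/textures/crate_normal.jpg");
const roughnessMap = loader.load("/textures/crate_roughness.jpg");

// A physically based material responds to scene lighting, so the same asset
// looks plausible as the real-world lighting estimate changes.
const crateMaterial = new THREE.MeshStandardMaterial({
  map: colorMap,
  normalMap,
  roughnessMap,
  metalness: 0.1,
});

const crate = new THREE.Mesh(new THREE.BoxGeometry(0.4, 0.4, 0.4), crateMaterial);
crate.castShadow = true; // grounded shadows help sell the illusion

export { crate };
```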

Design For Safety And Comfort

AR is usually accompanied by the word ‘immersive.’ Creating an immersive experience is a great goal, but AR immersion can be dangerous — people can become so immersed in their smartphones or glasses that they forget what is happening around them, and this can cause problems. Users might not notice hazards nearby and bump into objects. This phenomenon, known as cognitive tunneling, has caused plenty of physical injuries.

  • Prevent users from having to do anything uncomfortable — for example, physically demanding actions or rapid, expansive motion.
  • Keep the user safe. Avoid situations in which users have to walk backward.
  • Avoid long AR sessions. Users can get fatigued when using AR for extended periods. Design stopping points and in-app notifications that remind them to take a break. For instance, if you design an AR game, let users pause or save their progress.

Placement For Virtual Objects

There are two ways of placing virtual objects — on the screen or in the world. Depending on the needs of your project and the device’s capabilities, you can follow either approach. Generally, virtual elements should be placed in world space if they are supposed to act like real objects (e.g., a virtual statue in AR space), and as an on-screen overlay if they are intended to be UI controls or information messages (e.g., a notification).

Rokid Glasses.

‘Should every object in AR space be 3D?’ is a common question among designers who work on AR experiences. The answer is no. Not everything in the AR space should be 3D. In fact, in some cases like in-app notifications, it’s preferable to use flat 2D objects because they will be less visually distracting.
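The distinction between world-space and screen-space placement can be sketched in a few lines. This example assumes a web-based AR setup with three.js for world-space content and a plain DOM element for the 2D overlay; the class name and positions are invented.

```typescript
import * as THREE from "three";

const scene = new THREE.Scene();

// World-space placement: the statue is anchored about a meter in front of the
// session origin and stays put as the camera moves, like a real object would.
const statue = new THREE.Mesh(
  new THREE.ConeGeometry(0.15, 0.5, 32),
  new THREE.MeshStandardMaterial({ color: 0x8899aa })
);
statue.position.set(0, 0, -1); // meters
scene.add(statue);

// Screen-space placement: a flat 2D notification overlaid on the viewport.
// It deliberately lives outside the 3D scene, so it reads as UI, not as an object.
function showNotification(text: string): void {
  const el = document.createElement("div");
  el.className = "ar-toast"; // hypothetical class, styled elsewhere
  el.textContent = text;
  document.body.appendChild(el);
  setTimeout(() => el.remove(), 4000);
}

showNotification("Tap the statue to learn more");
```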

Rokid Glasses motion design exploration by Gleb Kuznetsov.

Avoid Using Haptic Feedback

Phone vibrations are frequently used to send feedback in mobile apps. But using the same approach in AR can cause a lot of problems — haptic feedback introduces extra noise and makes the experience less enjoyable (especially for AR Glasses users). In most cases, it’s better to use a sound effect for feedback.

Make A Clear Transition Into AR

For both MAR and AR glasses experiences, you should let users know they’re about to transition into AR. Design a transition state. For the iflyA380 app, we used an animated transition — a simple animated effect that the user sees when tapping the AR mode icon.

Trim all the fat.

Devote as much of the screen as possible to viewing the physical world and your app’s virtual objects:

  • Reduce the total number of interactive elements available on the screen at any one moment.
  • Avoid placing visible UI controls and text messages in your viewport unless they are necessary for the interaction. A visually clean UI lends itself seamlessly to the immersive experience you’re building.
  • Prevent distractions. Limit the number of times objects appear on the user’s screen out of the blue. Anything that appears out of nowhere instantly kills realism and makes the user fixate on that object.

AR Object Manipulation And Delineating Boundaries Between The ‘Augment’ And The ‘Reality’

When it comes to designing a mechanism of interaction with virtual objects, favor direct manipulation — the user should be able to touch an object on the screen and interact with it using standard, familiar gestures, rather than via separate visible UI controls.

Also, users should have a clear understanding of what elements they can interact with and what elements are static. Make it easy for users to spot interactive objects and then interact with them by providing visual signifiers for interactive objects. Use glowing outlines or other visual highlights to let users know what’s interactive.

Scan object effect for outdoor MAR by Gleb Kuznetsov.

When the user interacts with an object, you need to communicate visually that the object is selected. Design a selection state — either highlight the entire object or the space underneath it to give the user a clear indication that it’s selected.
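One hedged way to implement both the tap-to-select gesture and a visible selection state in a three.js-based MAR scene is to raycast from the touch point and tint the hit object’s emissive color. The camera and interactiveObjects references are assumptions that would be defined elsewhere in your app.

```typescript
import * as THREE from "three";

// Assumed to be defined elsewhere in the app.
declare const camera: THREE.Camera;
declare const interactiveObjects: THREE.Mesh[]; // only objects users may manipulate

const raycaster = new THREE.Raycaster();
const pointer = new THREE.Vector2();
let selected: THREE.Mesh | null = null;

window.addEventListener("pointerdown", (event: PointerEvent) => {
  // Convert the touch position to normalized device coordinates (-1 to +1).
  pointer.x = (event.clientX / window.innerWidth) * 2 - 1;
  pointer.y = -(event.clientY / window.innerHeight) * 2 + 1;

  raycaster.setFromCamera(pointer, camera);
  const hits = raycaster.intersectObjects(interactiveObjects, false);

  // Clear the previous selection highlight.
  if (selected) {
    (selected.material as THREE.MeshStandardMaterial).emissive.set(0x000000);
  }

  if (hits.length > 0) {
    selected = hits[0].object as THREE.Mesh;
    // A subtle glow signals "this object is selected and interactive".
    (selected.material as THREE.MeshStandardMaterial).emissive.set(0x335577);
  } else {
    selected = null;
  }
});
```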

Last but not least, follow the rules of physics for your objects. Just like real objects, AR objects should react to the real-world environment.

Design For Freedom Of Camera

AR invites movement and motion from the user. One of the significant challenges when designing for AR is giving users the ability to control the camera. When you give users control of the view, they will swing the device around in an attempt to find points of interest. And not all apps are designed to help the user control the viewfinder.

Google identifies four different ways that a user can move in AR space:

  1. Seated, with hands fixed.
  2. Seated, with hands moving.
  3. Standing still, with hands fixed.
  4. Moving around in a real-world space.

The first three ways are common for mobile AR while the last one is common for AR glasses.

In some cases, MAR users want to rotate the device for ease of use. Don’t interrupt the camera view with a rotation animation.

Consider Accessibility When Designing AR

As with any other product we design, our goal is to make augmented reality technology accessible for people. Here are a few general recommendations on how to address real-world accessibility issues:

  • Blind users. Visual information is not accessible to blind users. To make AR accessible for blind users, you might want to use audio or haptic feedback to deliver navigation instructions and other important information.
  • Deaf or hard-of-hearing users. For an AR experience that requires voice interaction, you can use visual signals as an input method (also known as speechreading). The app can learn to analyze lip movement and translate this data into commands.

If you’re interested in learning more practical tips on how to create accessible AR apps, consider watching the video talk by Leah Findlater.

Encourage Users To Move

If your experience demands exploration, remind users they can move around. Many users have never experienced a 360-degree virtual environment before, and you need to motivate them to change the position of their device. You can use an interactive object to do that. For example, during I/O 2018, Google used an animated fox in Google Maps that guided users to their destination.

This AR experience uses an animated bird to guide users.

Remember That Animation Is A Designer’s Best Friend

Animation can serve multiple purposes. First, you can use a combination of visual cues and animation to teach users. For example, an animation of a phone moving around will make it clear what users have to do to initialize the app.

Second, you can use animation to create emotions.

One second of emotion can change the whole reality for people engaging with a product.

Well-designed animated effects help to create a connection between the user and the product — they make the object feel tangible. Even a simple object such as a loading indicator can build a bridge of trust between users and the device.

Rokid Alien motion design by Gleb Kuznetsov.

A critical point about animation: after settling on the design elements and the basic direction of the animation, it’s essential to spend enough time creating a proper animated effect. It took lots of iterations to finish the loading animation you see above. You have to test every animation to be sure it works for your design, and be ready to adjust color, positioning, and so on to get the best effect.

Prototype On The Actual Device

In an interview with the Rokid team, Jeshua Nanthakumar mentioned that the most effective AR prototypes are always physical. That’s because when you prototype on the actual device from the beginning, you make your design work well on the hardware and software that people actually use. When it comes to unique displays like the one on the Rokid Glasses, this methodology is especially important. By doing that, you’ll ensure your design is implementable.

Motion design language exploration for AR Glasses Rokid by Gleb Kuznetsov.

My team was responsible for designing the AR motion design language and the loading animation for AR glasses. We decided to use a 3D sphere that would rotate during loading and have nice reflections on its edges. The design of the animated effect took two weeks of hard work by motion designers, and it looked gorgeous on our design team’s high-res monitors, but the final result wasn’t good enough because the animation caused motion sickness.

Motion sickness is often caused by discrepancies between the motion perceived on the screen of the AR glasses and the actual movement of the user’s head. In our case, though, the root cause was different: because we put so much attention into polishing details like shapes and reflections, we unintentionally made users focus on those details while the sphere was moving.

As a result, the motion happened in the periphery, and since humans are more sensitive to moving objects in the periphery, this caused motion sickness. We solved the problem by simplifying the animation. But it’s critical to mention that we wouldn’t have been able to find this problem without testing on the actual device.

If we compare the procedure for testing AR apps with that for traditional GUI apps, it becomes evident that testing AR apps requires more manual interaction. The person conducting the test has to determine whether the app provides the correct output based on the current context.

Here are a few tips that I have for conducting efficient usability testing sessions:

  • Prepare a physical environment to test in. Try to create real-world conditions for your app — test it with various physical objects, in different scenes with different lighting. But the environment might not be limited to scene and lighting.
  • Don’t try to test everything all at once. Use a technique of chunking. Breaking down a complex flow into smaller pieces and testing them separately is always beneficial.
  • Always record your testing session. Record everything that you see in the AR glass. A session record will be beneficial during discussions with your team.
  • Test for motion sickness.
  • Share your testing results with developers. Try to mitigate the gap between design and development. Make sure your engineering team knows what problem you face.

Conclusion

Similar to any other new technology, AR comes with many unknowns. Designers who work on AR projects play the role of explorers — they experiment and try various approaches in order to find the one that works best for their product and delivers value to the people who will use it.

Personally, I believe that it’s always great to explore new mediums and find new original ways of solving old problems. We are still at the beginning stages of the new technological revolution — the exciting time when technologies like AR will be an expected part of our daily routines — and it’s our opportunity to create a solid foundation for the future generation of designers.

Smashing Editorial (cc, yk, il)
Popular Design News of the Week: June 17, 2019 – June 23, 2019

Every week users submit a lot of interesting stuff on our sister site Webdesigner News, highlighting great content from around the web that can be of interest to web designers. 

The best way to keep track of all the great stories and news being posted is simply to check out the Webdesigner News site. However, in case you missed some, here’s a quick and useful compilation of the most popular designer news that we curated from the past week.

Note that this is only a very small selection of the links that were posted, so don’t miss out and subscribe to our newsletter and follow the site daily for all the news.

  • Internet has Meltdown Over Facebook’s Libra Logo
  • How to Section your HTML
  • Clarity 2.0 Released – Open Source Design System
  • Facebook Libra
  • One Designer’s Struggle to Redesign his Website
  • Building a Web-Based Motion Graphics Editor
  • Styling in Modern Web Apps
  • 7 Ways to Design Better Forms
  • Why I Don’t Use Web Components
  • Birds’ Eye Logos
  • Introducing: Whimsical Mind Maps
  • How to Build an Animated CSS Thermometer Chart
  • Death of IE; its Aftermath on Cross Browser Compatibility
  • Vivaldi 2.6
  • Gitting your First Dev Job
  • Building a Component Library Using Figma
  • Sustainable Web Manifesto
  • What are Design Systems and Why do You Need One?
  • Do Animations Deliver a Better User Experience?
  • A Complete Guide to Iconography
  • Adobe has an Ambitious Plan to Help the Public Spot Fake Images
  • Drop Caps & Design Systems
  • Thinking Beyond Screens
  • Optimizing Google Fonts Performance
  • Baseline Grid

Want more? No problem! Keep track of top design news from around the web with Webdesigner News.


Design Your Mobile Emails To Increase On-Site Conversion

Suzanne Scacca

I find it interesting that Google has pushed so hard for web designers to shift from designing primarily for desktop to now designing primarily for mobile. Google’s right… but why only focus on designing websites to appeal to mobile users? After all, Gmail is a leader within the email client ranks, too.

Email can be an incredibly powerful driver of conversions for websites, according to a 2019 report from Barilliance.

On average, emails convert at a rate of 1.48%. That figure covers all sent emails, though — including the ones that go unopened as well as the ones that bounce. If you look only at emails that are opened by recipients, the average conversion rate jumps to 17.75%.

Let’s go even further with this:

Recent data from Litmus reveals that more emails are opened on mobile than on any other device:

Litmus data reveals that 43% of email opens occur on mobile devices. (Source: Litmus)

Many sources even put the average mobile open rate at well over 50%. But whether it’s 43% or 50%+, it’s clear that mobile is most commonly the first device people reach for to check their emails.

Which brings us to the following conclusion:

If users are more likely to open email on mobile and we know that opened emails convert at a higher rate than those that go unopened, wouldn’t it make sense for designers to prioritize the mobile experience when designing emails?

Mobile Email Design Tips to Increase Conversions

Let’s explore what the latest research says about designing emails for mobile users and how that can be used to increase opens, clicks and, later, your website’s conversion rates (on mobile and desktop).

Design the Same Email for Mobile and Desktop

Although email is often ranked as the most effective marketing channel for acquiring and retaining customers, that’s not really an accurate picture of what’s going on.

According to Campaign Monitor, here’s what’s actually happening with mobile email subscribers:

Campaign Monitor charts the progression from mobile email opens to click-through rate. (Source: Campaign Monitor)

The open rates on mobile are somewhat on par with the Litmus data earlier.

However, it can take multiple opens before the email recipient actually clicks on a link or offer within an email. And guess what? About a third of them make their way over to desktop — where they convert at a higher rate than those that stay on mobile.

As the report states:

Data from nearly 6 million email marketing campaigns suggests the shift to mobile has made it more difficult to get readers to engage with your content, unless you can drive subsequent opens in a different environment.

I’ve reconstructed the graphic above and filled it with the number of people who would take action from an email list of 1,000 recipients:

An example of how Campaign Monitor’s data translates into real-world numbers. (Source: Campaign Monitor)

At first glance, it looks as though mobile is the clear winner — at least in terms of driving traffic to the website. After the initial mobile open, 32 subscribers go straight to the website. After a few more opens on mobile, 5 more head over there.

Without a breakdown of what the user journey looks like when opened on desktop, though, the calculation of additional clicks you’d get from that portion of the list isn’t so cut-and-dried.

However, let’s assume that Litmus’s estimate of 18% desktop opens is accurate and that Campaign Monitor’s 12.9% click-through rate holds true whether subscribers open the email first on mobile or desktop. I think it’s safe to say that 23 clicks from desktop-only opens can be added to the total.

So, that brings it to:

37 clicks on mobile vs 26 on desktop.
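For anyone who wants to trace the arithmetic, here is a small sketch that reproduces the estimate above. The open and click-through rates are the Litmus and Campaign Monitor figures cited in this article, the mobile click counts come from the reconstructed example, and the desktop-switcher figure is implied by the totals; the whole thing assumes the hypothetical 1,000-subscriber list.

```typescript
const listSize = 1_000;

// Rates cited above; treat them as rough industry averages, not guarantees.
const desktopOnlyOpenRate = 0.18; // Litmus: ~18% of opens happen on desktop
const clickThroughRate = 0.129;   // Campaign Monitor: ~12.9% of opens lead to a click

// Clicks from subscribers who only ever open on desktop.
const desktopOnlyClicks = Math.round(listSize * desktopOnlyOpenRate * clickThroughRate); // 23

// Clicks traced from mobile opens in the worked example above.
const clicksAfterFirstMobileOpen = 32;
const clicksAfterRepeatMobileOpens = 5;
const clicksAfterSwitchingToDesktop = 3; // implied by the totals (26 - 23)

const mobileTotal = clicksAfterFirstMobileOpen + clicksAfterRepeatMobileOpens; // 37
const desktopTotal = desktopOnlyClicks + clicksAfterSwitchingToDesktop;        // 26

console.log({ mobileTotal, desktopTotal });
```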

Bottom line: while mobile certainly gets more email subscribers over to a website, the conversion-friendliness of desktop cannot be ignored.

Which is why you don’t want to segment your lists based on device. If you want to maximize the number of conversions you get from a campaign, enable subscribers to seamlessly move from one device to the other as they decide what action to take with your emails.

In other words, design the same exact email for desktop and mobile. But assume that the majority of subscribers will open the email on their mobile device (unless historical data of your campaigns says otherwise). Then, use mobile-first design and marketing tips to create an email that’s suitable for all subscribers, regardless of device.

Factor in Dark Mode When Choosing Your Colors

You don’t want there to be anything that stands in your users’ way when it comes to moving from email to website. That’s why you have to consider how their choice of color and brightness settings on their mobile screens affects the readability and general appearance of your email design.

There are a number of ways in which this can become an issue.

As we hear more and more about how harmful blue light from our devices can be, it’s no surprise that Dark Mode options are beginning to roll out. While it’s prevalent on desktops right now, it’s mostly in beta for mobile. The same goes for email apps.

That said, smartphone users can hack their own “Dark Mode” of sorts. This type of color inversion can be enabled through the iPhone’s “Accessibility” settings.

Gadget Hacks shows how iPhone users can hack their own ‘Dark Mode’. (Source: Gadget Hacks)

Essentially, what this does is invert all of the colors on the screen from light to dark and vice versa.

Unfortunately, the screenshotting tool on my iPhone won’t allow me to capture the colors exactly as they appear. However, what I can show you is how the inversion tool alters the color of your email design.

This is an email I received from Amtrak last week. It’s pretty standard with its branded color palette and brightly colored notices and CTA buttons:

What a promotional email from Amtrak looks like on the Gmail mobile app. (Source: Amtrak)

Now, here is what that same email looks like when viewed through my iPhone’s “Smart Invert” setting:

What a promotional email from Amtrak looks like in Gmail when colors are inverted. (Source: Amtrak)

The clean design of the original with the white font on the deep blue brand colors is gone. Now, there’s a harsh mix of colors and a hard-to-read Amtrak logo in its place.

You can imagine how this kind of inconsistent and disjointed color display would create an off-putting experience for mobile users.

But what do you expect them to do? If they’re struggling with the glare from their mobile device, Dark Mode (or some other brightness adjustment) will make it easier for them to open and read emails in the first place. Even if it means compromising the appearance of the email you so carefully designed.

One bright spot in all this is that the official “Dark Mode” being rolled out to iPhone (and, hopefully, other smartphones) soon won’t alter the look of HTML emails. Only plain-text messages will be affected.

However, it’s still important to be mindful of how your design choices may clash with a surrounding black background. Brightly colored backgrounds, in particular, are likely to clash with the black chrome of the email app.

How do you solve this issue? Unfortunately, you can’t serve different versions of your email to users based on whether they’re viewing it in Dark Mode or otherwise. You’ll just have to rely on your own tests to confirm that potential views in Dark Mode don’t interfere with your design or message.

In addition to the standard testing you do, set your own smartphone up with Dark Mode (or its hack counterpart). Then, run your test email through the filter and see what happens to the colors. It won’t take long to determine what sort of colors you can and cannot design with for email.

Design the Subject Line

The subject line is the very first thing your email subscribers are going to see, whether it shows up as a push notification or they first encounter it in their inbox. What do you think affects their initial decision to click open an email rather than throw it in the Trash or Spam box immediately? Recognizing the Sender will help, but so will the attractiveness of the subject line.

As for how exactly you go about “designing” a subject line, there are a few things to think about. The first is the length.

Marketo conducted a study across 200+ email campaigns and 2 million emails sent to subscribers. Here is what the test revealed about subject line length:

Marketo tested over 2 million sent emails to determine the ideal subject line length. (Source: Marketo)

Although the 4-word subject line resulted in the highest open rate, it had a poor showing in terms of clicks. It was actually the 7-word subject line that seemed to have struck the perfect balance with subscribers, leading 15.2% of them to open the email and then 10.8% of them to click on it.

While you should still test this with your own email list, it appears that seven words is the ideal length for a subject line.
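If you want a quick first pass before running subject lines through a dedicated tool, a rough check like the one below can flag obvious problems. The 7-word target comes from the Marketo test discussed above; the character budget and the tiny spam-word sample are assumptions made for this example, so calibrate both against your own results.

```typescript
// Heuristic thresholds; adjust to match your own list's test results.
const TARGET_WORDS = 7;                 // from the Marketo findings above
const ASSUMED_MOBILE_CHAR_BUDGET = 40;  // assumption about mobile inbox truncation

// Tiny illustrative sample; use a full spam-trigger list (e.g. Yesware's) in practice.
const SPAM_TRIGGERS = ["free", "act now", "guarantee", "winner"];

export function reviewSubjectLine(subject: string): string[] {
  const notes: string[] = [];
  const words = subject.trim().split(/\s+/).filter(Boolean);

  if (words.length > TARGET_WORDS + 2) {
    notes.push(`Long: ${words.length} words (aim for about ${TARGET_WORDS}).`);
  }
  if (subject.length > ASSUMED_MOBILE_CHAR_BUDGET) {
    notes.push(`May be truncated on mobile (${subject.length} characters).`);
  }
  for (const trigger of SPAM_TRIGGERS) {
    if (subject.toLowerCase().includes(trigger)) {
      notes.push(`Contains a possible spam trigger: "${trigger}".`);
    }
  }
  return notes;
}

console.log(reviewSubjectLine("Act now: your FREE summer travel guide is waiting"));
```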

Next, you have to think about the buzzwords used in the subject line.

To start, keep this list of Yesware’s Spam Trigger Words out of it:

Yesware’s analysis and list of the top spam-trigger words. (Source: Yesware)

If you want to increase the chance the email will be opened, read, clicked on and eventually convert on-site, you have to be savvy about which words will appear within the subject line.

What I’d suggest you do is bookmark CoSchedule’s Email Subject Line Tester tool.

CoSchedule tests and scores your email subject lines with one click. (Source: CoSchedule)

Here’s an example of how CoSchedule analyzes your subject lines and clues you in to what increases and decreases your open rates:

The first part of CoSchedule’s subject line analysis and scoring tool. (Source: CoSchedule)

As you can see, CoSchedule tells you which kinds of words increase open rates as well as those that don’t. Do enough of these subject line tests and you should be able to compile a good set of wording guidelines for your writers and marketers.

Further down, you’ll get more insight into what makes for a strongly worded and designed subject line:

The second part of CoSchedule’s subject line assessment and recommendations. (Source: CoSchedule)

CoSchedule will provide recommendations on how to shorten or lengthen the character and word counts based on best practices and results.

Finally, at the very bottom of your subject line test you’ll see this:

The final part of CoSchedule’s subject line tester includes an email client preview. (Source: CoSchedule)

This gives you (or, rather, your writer) a chance to see how the subject line will appear within the “design” of an email client. It’s not a problem if the words get cut off on mobile. It’s bound to happen. However, you still want everything that does appear to be appealing enough to click on.

Also, don’t forget about dressing up your subject lines with emoji.

When you think about it, emoji in mobile email subject lines make a lot of sense. Text messaging and social media are rife with them. It’s only natural to use this fun and truncated form of language in email, too.

Campaign Monitor makes a good point about this:

If you replace words with recognizable emoji, you’ll create shorter subject lines for mobile users. Even if it doesn’t shorten your subject line enough to fit on a mobile screen, it’s still an awesome way to make it stand out from the rest of your subscribers’ cluttered inboxes.

The CoSchedule test will actually score you based on how (or if) you used emoji, too:

CoSchedule suggests that the use of emoji in subject lines will give you an edge. (Source: CoSchedule)

As you can see, CoSchedule considers this a competitive advantage in marketing.

Even just looking at my own email client, my eye is instantly drawn to the subject line from Sephora which contains a “NEW” emoji:

A Sephora email containing an emoji stands out from others in the inbox. (Source: Sephora)

Just be careful with which emoji you use. For starters, emoji are displayed differently from device to device, so a more obscure choice may not have the same effect on some of your subscribers.

There’s also the localization aspect to think about. Not every emoji is perceived the same way around the globe. As Day Translations points out, the fire symbol is one that could cause confusion: some countries interpret it as a literal fire, whereas others may view it as a symbol for attraction.

That said, emoji have proven to increase both open rates and read rates of emails. Find the right mobile-friendly emoji to include in your subject line and you could effectively increase the number of subscribers who visit your website as a result.

Wrap-Up

There are so many different kinds of emails that go out from websites:

  • Welcome message
  • Post-purchase transaction email
  • Abandoned cart reminder
  • Promotional news
  • Product featurette
  • New content available
  • Account/rewards points
  • And more.

In other words, there are plenty of ways to get in front of email subscribers.

Just remember that the majority of them will first open your email on mobile. And some will reopen it on mobile over and over again until they’re compelled to click on it or trash it. It’s up to you to design the email in a way that motivates them to visit your website and, consequently, convert.

JAMstack Fundamentals: What, What And How

Vitaly Friedman

We love pushing the boundaries on the web, and so we’ve decided to try something new. You probably have heard of JAMstack — the new web stack based on JavaScript, APIs, and Markup — but what does it mean for your workflow and when does it make sense in your projects?

As a part of our Smashing Membership, we run Smashing TV, a series of live webinars, every week. No fluff — just practical, actionable webinars with a live Q&A, run by well-respected practitioners from the industry. Indeed, the Smashing TV schedule looks pretty dense already, and it’s free for Smashing Members, along with recordings — obviously.

We’ve kindly asked Phil Hawksworth to run a webinar explaining what JAMstack actually means and when it makes sense, as well as how it affects tooling and front-end architecture. The hour-long webinar is now available as well. We couldn’t be happier to welcome Phil to co-MC our upcoming SmashingConf Toronto (June 25-26) and run JAMstack_conf London, which we co-organize on July 9-10 this year as well. So, let’s get into it!

Phil Hawksworth: Excellent, okay, well let’s get into it then. Just by way of a very quick hello, I mean I’ve said hello already, Scott’s given me a nice introduction. But yes, I currently work at Netlify, I work in the developer experience team there. We are hopefully going to have plenty of time for Q&A but, as Scott mentioned, if you don’t get a chance to ask questions there, or if you’d just rather, you can ping them directly to me on Twitter — my Twitter handle is just my name, Phil Hawksworth — so any time you can certainly ask me questions about JAMstack or indeed anything on Twitter.

Phil Hawksworth: But I want to start today just by kind of going back in time a little bit to this quote which really resonates very, very strongly with me. This is a quote from the wonderful Aaron Swartz who, of course, contributed so much to Creative Commons and the open web, and he wrote this on his blog way back in 2002, and he said, “I care about not having to maintain cranky AOLserver, Postgres and Oracle installs.” AOLserver, I had to look up to remind myself, was an open source web server at the time.

Phil Hawksworth: But this chimes really strongly with me. I also don’t want to be maintaining infrastructure to keep a blog alive, and that’s what he was talking about. And it was in this blog post on his own blog and it was titled “Bake, Don’t Fry”. He was picking up on a term that someone who’d built a CMS had recently started to use, and he kind of popularized this term about baking (Bake, Don’t Fry); what he’s talking about there is pre-rendering rather than rendering on demand, so baking the content ahead of time, rather than frying it on demand when people come and ask for it — getting things away from request time and into kind of build time.

Phil Hawksworth: And when we’re talking about pre-rendering and rendering, what we mean by that is we’re talking about generating markup. I feel a bit self-conscious sometimes talking about kind of server rendering or isomorphic rendering or lots of these kind of buzzwordy terms; I got called out a few years ago at a conference, Frontiers Conference in Amsterdam, when I was talking about rendering on the server and someone said to me, “You mean generating HTML? Just something that outputs HTML?” And that’s, of course, what I mean.
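
To make “bake, don’t fry” concrete, here is a minimal sketch (not from the webinar) of the two approaches in Node.js; the page data, port, and file names are hypothetical:

    // "Fry": render the page at request time, on a server you have to keep running.
    const http = require('http');
    const renderPage = (data) => `<h1>${data.title}</h1><p>${data.body}</p>`;

    http.createServer((req, res) => {
      const data = { title: 'Hello', body: 'Rendered on demand.' }; // e.g. a database lookup
      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end(renderPage(data));
    }).listen(3000);

    // "Bake": render the same page once at build time, then serve the file statically.
    const fs = require('fs');
    const baked = renderPage({ title: 'Hello', body: 'Rendered at build time.' });
    fs.writeFileSync('index.html', baked); // deploy this file to a static host or CDN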

Phil Hawksworth: But all of this kind of goes a long way towards simplifying the stack. When we think about the stack that we serve websites from, I’m all about trying to simplify things, I’m super keen on trying to simplify the stack. And that’s kind of at the heart of this thing called “JAMstack” and I want to try and explain this a little bit. The “JAM” in JAMstack stands for JavaScript, APIs and Markup. But that’s not enough really to help us understand what it means — what on earth does that really mean?

Phil Hawksworth: Well, what I want to try and do in the next half hour or so, is I want to kind of expand on that definition and give more of a description of what JAMstack is. I want to talk a bit about the impact and the implications of JAMstack, and, you know, think about what that can give us and why we might choose it. Along the way, I’m going to try to mention some of the tools and services that will be useful, and hopefully, I’ll wrap up with some resources that you might want to dig into and perhaps mention some first steps to get you under way.

Phil Hawksworth: So, that’s the plan for the next half-hour. But, I want to, kind of, come back to this notion about simplifying the stack, because, hopefully, people who join this or have come to watch this video later on, maybe you’ve got a notion of what JAMstack is, or maybe it’s a completely new term, and you’re just curious. But, ultimately, there are a lot of stacks out there, already. There are lots of ways that you can deliver a website. It feels like we’ve been building different types of infrastructure for a really long time, whether that’s a LAMP stack or the MAMP stack, or the — I don’t know — the MEAN stack. There’s a bunch of them floating by on the screen here. There are lots and lots of them.

Phil Hawksworth: So, why on earth would we need another one? Well, JAMstack is, as I mentioned, JavaScript/APIs/Markup, but if we try and be a tiny bit more descriptive, JAMstack is intended to be a modern architecture, to help create fast and secure sites and dynamic apps with JavaScript, APIs, and pre-rendered markup, served without web servers — and it’s this last point which is, kind of, something that sets it apart and maybe makes it a little bit more interesting and unusual.

Phil Hawksworth: This notion of serving something without web servers sounds either magical or ridiculous, and hopefully, we’ll figure out which it is along the way. But to try and shed some light on this and describe it in a little bit more detail, it’s sometimes useful to compare it to what we might think of as a traditional stack, or a traditional way of serving things on the web. So, let’s do that just for a second. Let’s just walk through, perhaps, what a request might look like as it gets serviced in a traditional stack.

Phil Hawksworth: So, in this scenario, we got someone opening up a web browser and making a request to see a page. And maybe that request hits a CDN, but probably, more likely, it hits some other infrastructure that we are hosting — as the people who own this site. Maybe we tried to make sure that this is going to scale under lots of load because we, obviously, want a very popular and successful site. So, perhaps we got a load balancer, that has some logic in it, which will service that request to one of a number of web servers that we’ve provisioned and configured and deployed to. There might be a number of those servers.

Phil Hawksworth: And those servers will execute some logic to say, “Okay, here’s our template, we need to populate that with some data.” We might get our data from one of a number of database servers that will perform some logic to look up some data, return that to the web server, create a view that we then pass back through the load balancer. Perhaps, along the way, calling off to the CDN, stashing some assets in the CDN — and I should clarify, a CDN is a Content Delivery Network, so a network of machines distributed around the Internet to try and get requests serviced as close as possible to the user and add things, like caching.

Phil Hawksworth: So, we might stash some assets there, and ultimately, return a view into the browser, into the eyeballs of the user, who gets to then experience the site that we built. So, obviously, that’s, either, an oversimplification or a very general view of how we might service a request in a traditional stack. If we compare that to the JAMstack, which is servicing things in a slightly different way, this is how it might look.

Phil Hawksworth: So, again, same scenario, we’re starting in a web browser. We’re making a request for a view of the page, and that page is already in a CDN. It serves statically from there, so it’s returned to the user, into the browser, and we’re done. So, obviously, a very simplified view, but straight away, you can start to see the differences here in terms of complexity — in terms of places that we need to manage code, deploy code, all of those different things. So, for me, one of the core attributes of a JAMstack site is that you’re building a site that’s capable of being served directly from a CDN, or even from a static file server. A CDN is something that we might want to put in place to handle load, but ultimately, this could be served directly from any kind of static file server, any kind of static hosting infrastructure.

Phil Hawksworth: JAMstack, kind of, offers an opportunity to reduce complexity. Just comparing those two parts of the diagram that we’ll come back to a few times, over the course of the next half hour, you can see that it’s an opportunity to reduce complexity and reduce risk. And so, it means that we can start to enjoy some of the benefits of serving static assets, and I’m going to talk about what those are a little bit later on. But you might be looking at this and thinking, “Well, great, but isn’t this just the new name for static websites?” That’s a reasonable thing to level at me when I’m saying, “We’re going to serve things statically.”

Phil Hawksworth: But I want to come back to that. I want to talk about that a little bit more, but first of all, I want to, kind of, talk about this notion of stacks and what on earth is a stack, anyway? And I think of a stack as the layers of technology, which deliver your website or application. And we’re not talking about the build pipeline, or the development process, but certainly the way we serve sites can have a big impact on how we develop and how we deploy things, and so on.

Phil Hawksworth: But here, we’re talking about the technology stack, the layers of technology, that actually deliver your website and your application to the users. So, let’s do another little comparison. Let’s talk about the LAMP stack for a second.

Phil Hawksworth: The LAMP stack, you may remember, is made up of an Apache web server, for doing things like the HTTP routing and the serving of static assets; PHP for some pre-processing (PHP: Hypertext Preprocessor), applying the logic, maybe building the views for the templates and what have you; access to your data via MySQL; and then Linux is the operating system that sits beneath all of that and keeps it all breathing. We can wrap that up together notionally as this web server. And we may have many of these servers, kind of, sitting together to serve a website.

Phil Hawksworth: That’s a, kind of, traditional look at the LAMP stack, and if we compare that to the JAMstack, well, here, there’s a critical change. Here, we’re actually moving up a level, rather than thinking about the operating system and thinking about how we run the logic to deliver a website. Here we’re making an assumption that we’re going to be serving these things statically. So, we’re doing the HTTP routing, and the serving of assets, from a static server. That can be reasonably done — we’ve got very good at this, over the years, building ways to deliver static websites, or static assets.

Phil Hawksworth: This might be a CDN, and again, we’ll talk about that in a moment. But the area of interest for us, is happening more in the browser. So, here, this is where our markup is delivered and is parsed. This is where JavaScript can execute, and this is happening in the browser. In many ways, the browser has become the runtime for the modern web. Rather than having the runtime in the server infrastructure, itself, now we’ve moved that up a level, closer to the user, and into the browser.

Phil Hawksworth: When it comes to accessing data, well, that’s happening through, possibly, external APIs — making calls via JavaScript to these external APIs to get access to services — or we can think of APIs as the browser APIs, being able to interact via JavaScript with capabilities right there in your browser.

Phil Hawksworth: Either way, the key here about the JAMstack is that we’re talking about things that are pre-rendered: they’re served statically and then they may be progressively enhanced in the browser to make use of browser APIs, JavaScript, and what have you.

Phil Hawksworth: So, let’s just do this little side-by-side comparison here. Again, I just want to kind of reiterate that the JAMstack has moved up a level to the browser. And if we see the two sides of this diagram, with the LAMP stack on the left and, effectively, the JAMstack on the right, you might even think that, well, even when we were building things with the LAMP stack, we were still outputting markup. We were still outputting JavaScript. We might still have been accessing APIs. So, in many ways, the JAMstack is almost like a subset of what we were building before.

Phil Hawksworth: I used to sometimes talk about JAMstack as the assured stack, because it assures a set of tools and technologies that we need to deliver a site. But, either way, it’s a much simplified way of delivering a site that, kind of, does away with the need for things to execute and perform logic at the server at request time.

Phil Hawksworth: So, this can do a lot of things. This can, kind of, simplify deployments and again, I’m going to call back to this diagram from time-to-time. If we think about how we deploy our code and our site, for every deploy, from the very first one, through the whole development lifecycle, all the way through the life of the website. On the traditional stack, we might be having to change the logic and the content for every box on that diagram.

Phil Hawksworth: Whereas, in the JAMstack, when we’re talking about deploying, we’re talking about getting things to the CDN, getting things to the static server, and that’s what the deployment entails. The build — the kind of logic that runs the build — can run anywhere. That doesn’t need to run in the same environment that’s hosting the web server. In fact, it doesn’t! That’s the key to the JAMstack: we put the separation between what happens at request time, serving these static assets, and what happens at build time, which is whatever logic you run to build and then deploy.

Phil Hawksworth: So, this kind of decoupling is a key thing, and I’m going to come back to that later on. We’ve got very good at serving static files, and getting things to a CDN or getting things to the file server is somewhere that we’ve seen huge, kind of, advancement over the last few years. There are a lot of tools and processes, now, that can help us do this really well. Just to call out a few services that can serve static assets well and give workflows for getting your build to that environment, there are the usual suspects that you might imagine: the big cloud infrastructure providers, like Azure, AWS, and Google Cloud.

Phil Hawksworth: But then, there are others. So, the top one on the right is a service called Surge, which has been around for a few years. It allows you to run a command in your build environment and deploy your assets through to their hosting environment. Netlify, the next one down, is where I work, and we do very much the same thing, but we have different automation as well — I could go into it another time. And the one on the bottom is another static hosting service, called Now.

Phil Hawksworth: So, there’s a bunch of different options for doing this, and all of these services provide different tooling for getting to the CDN as quickly as possible, getting your sites deployed in the most seamless way that we can. And they all have something in common in that they’re building on the principle of running something locally. I often think of a static site generator as something that we might run in a build: when we run it, it takes things like content and templates and maybe data from different services, and it outputs something which can be served statically.
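
To give a rough, concrete picture of that build step, here is a toy static site generator sketch in Node.js — an illustration only, real generators like Jekyll, Hugo, or Gatsby do far more. The content/ folder, the “Title:” first line, and the dist/ output folder are all assumptions:

    // build.js — turn content files plus a template into static HTML in dist/
    const fs = require('fs');
    const path = require('path');

    const template = (title, body) =>
      `<!doctype html><html><head><title>${title}</title></head>` +
      `<body><h1>${title}</h1><pre>${body}</pre></body></html>`;

    fs.mkdirSync('dist', { recursive: true });

    for (const file of fs.readdirSync('content')) {
      if (!file.endsWith('.md')) continue;
      const [first, ...rest] = fs.readFileSync(path.join('content', file), 'utf8').split('\n');
      const title = first.replace(/^Title:\s*/, ''); // naive front matter
      const html = template(title, rest.join('\n')); // "bake" the page at build time
      fs.writeFileSync(path.join('dist', file.replace(/\.md$/, '.html')), html);
    }
    // dist/ can now be previewed with any local static server and deployed to a CDN.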

Phil Hawksworth: We can preview that locally in our static server — something that is kind of simple to do on any local development environment — and then the process of deploying is getting it to the hosting environment and, ideally, out to a CDN in order to get, kind of, scale. So, with that kind of foundation laid out, I want to address a bit of a common misconception when it comes to JAMstack sites. And I didn’t do myself any favors by opening this up by describing JAMstack sites as being JavaScript, APIs, and Markup. Because the common misconception is that every JAMstack site has to be JavaScript and APIs and Markup, but the thing that we’ve overlooked is that we don’t have to use all three — every one of these is, kind of, optional. We can use as much, or as little, of these as we like. In the same way that a LAMP stack site wouldn’t necessarily need to be hitting a database. Now, I’ve built things in the past that are served by an Apache server, on a Linux machine, and I’ve been using PHP, but I haven’t been hitting a database, and I wouldn’t start to rename the stack necessarily for that.

Phil Hawksworth: So, if we think about what a JAMstack site is, then it could be a bunch of different things. It might be something that’s built out with a static site generator, like Jekyll, pulling content from YAML files to build a site that has no JavaScript, doesn’t hit APIs at all, and we serve it on something like GitHub Pages. That would be a JAMstack site. Or maybe we’re using a static site generator like Gatsby which, rather than the Ruby environment of Jekyll, is a JavaScript static site generator built in the React ecosystem.

Phil Hawksworth: That might be pulling content again, organized in Markdown files. It might be enriching that with calls to APIs — GraphQL APIs. It might be doing things in the browser, like JavaScript hydration, populating templates right there in the browser. And it might be served on Amazon S3. Is that a JAMstack site? Yeah, absolutely!

Phil Hawksworth: Moving on to a different static site generator, Hugo, which is built with Go. Again, we might be organizing content in Markdown files, adding interactions in the browser using JavaScript, maybe not calling any external APIs, and maybe hosting that on Google Cloud. Is it JAMstack? Absolutely! You see, I’m building to a theme here. Using something like Nuxt, another static site generator, now built in the Vue ecosystem — maybe that’s pulling content from different JSON files. Again, we might be using JavaScript interactions in the browser, perhaps calling APIs to do things like e-commerce, and serving it on another static hosting infrastructure, like Netlify. Is it JAMstack? Yes, it is!

Phil Hawksworth: But we might even go, you know, off to the other end of the scale as well, and think about a handmade, progressive web app that we’ve built artisanally — hand-rolled JavaScript that we built ourselves. We’re packaging it up with webpack. We’re maybe using JSON Web Tokens and calling out to APIs to do authentication, interacting with different browser APIs, hosting it on Azure. Yes, that’s JAMstack as well!

Phil Hawksworth: And, you know, all of these things, and many more, can be considered JAMstack, because they all share one attribute in common and that is none of them are served with an origin server. None of them have to hit a server that performs logic at request time. These are being served as static assets, and then enriched with JavaScript and calls to APIs, afterwards.

Phil Hawksworth: So, again, I just want to reiterate that a JAMstack site is capable of being served directly from the CDN. So, I want to just call out some of the impacts and implications of this, because why would we want to do this? Well, the first notion is about security, and we’ve got a greatly reduced surface area for attack here. If we think about (coming back to this old diagram again) the places where we might have to deal with an attack, we have to secure things like the load balancer, the web servers, the database servers. All of these things, we have to make sure can’t be penetrated by any kind of an attack — and, indeed, the CDN.

Phil Hawksworth: The more pieces we can take out of this puzzle, the fewer places that can be attacked and the fewer places we have to secure. Having fewer moving parts to attack is really very valuable. At Netlify, we operate our own CDN, so we get the luxury of being able to monitor the traffic that comes across it, and even though none of the sites hosted on Netlify — all of the JAMstack sites that you might imagine — have a WordPress admin page on them, because it’s completely decoupled and there is no WordPress admin, we see a huge volume of traffic probing for things like wp-admin, looking for ways in, looking for attack vectors.

Phil Hawksworth: I really love some of the things that Kent C. Dodds has done. If you are familiar with the React community, you’ve probably encountered Kent C. Dodds in the past. He doesn’t use WordPress, but he still routes that wp-admin URL. I think he used to route it through to a Rick Roll video on YouTube. He’s certainly been trolling people who have gone probing for it. But, the point is, by decoupling things in that way and, kind of, separating what happens at build time from what happens at request time, we can just drastically reduce our exposure. We’ve got no moving parts at request time. The moving parts are all completely decoupled at build time — potentially, well, necessarily, on completely different infrastructure.

Phil Hawksworth: This, of course, also has an impact on performance as well. Going back to our old friend here, there are places we might want to try and improve performance across the stack, wherever there’s logic that needs to be executed. The way that this often happens in traditional stacks is that they start to add static layers in order to improve performance. So, in other words, try and find ways to start behaving as if it’s static, and not have to perform that logic at each level of the stack, in order to speed things up. So, we’re starting to introduce things like caching all over the infrastructure, and an obvious place we might do that is in the web server: rather than perform that logic, we want to serve something immediately without performing that logic.

Phil Hawksworth: So, it’s kind of like a step towards being pseudo-static. Likewise in database servers, we want to add caching layers to cache common queries. Even in the load balancer — and the whole CDN you can think of as a cache. But on the traditional stack, we need to figure out how to manage that cache, because not everything will be cached. So, there’s going to be some logic about what needs to be dynamically populated versus what can be cached. In the JAMstack model, everything is cached. Everything is cached from the point that the deployment is done, so we can think about it completely differently.
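
On a static host, the remaining caching decision is mostly about how long browsers hold on to what has already been deployed. As one assumed convention (a Netlify-style _headers file; other hosts have their own equivalents), fingerprinted assets can be cached aggressively while HTML stays free to pick up the next deploy:

    /*
      Cache-Control: public, max-age=0, must-revalidate

    /assets/*
      Cache-Control: public, max-age=31536000, immutable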

Phil Hawksworth: This, then, starts to, kind of, hint through to scaling as well. And by scale, I’m talking about how we handle large loads of traffic. Traditional stacks will start to add infrastructure in order to scale. So, yes, the caching we’re starting to put in place in our traditional stack will help — to a degree. What typically happens is, in order to handle large loads of traffic, we’ll start expanding the infrastructure and adding more servers, more pieces to handle these requests. Costing these things out and estimating the load is a big overhead, and it can be a headache for anyone doing technical architecture. It certainly was for me, which is why I was starting to lean much more towards the JAMstack approach, where I just know that everything is served from the CDN, which is designed by default to handle scale and to handle performance right out of the gate.

Phil Hawksworth: So, I also want to give a nod to developer experience, and the impact this can have there. Now, developer experience should never be seen as something which trumps user experience, but I believe that a good developer experience can reduce friction, can allow for developers to do a much better job of building up to great user experiences!

Phil Hawksworth: So, when we think about where the developer experience lives, and where the areas of concern for a developer are here: well, in a traditional stack, we might need to think about how we get the code to all of these different parts of the infrastructure, and how they all play together. In the JAMstack world, really, what we’re talking about is this box here at the bottom. You know, how do we run the build, and then, how do we automate a deployment to get something served in the first place? And the nice thing is that, in the JAMstack model, what you do in that build is completely up to you.

Phil Hawksworth: That’s a really well-defined problem space, because ultimately, you’re trying to build something you can serve directly from a CDN. You want to pre-render something, using whatever tools you like: whether it’s a static site generator built in Ruby or Python or JavaScript or Go or PHP, you have the freedom to make that choice. And so, that can create a much nicer environment for you to work in. And also, it creates an opportunity to have real developer confidence, because a real attribute of JAMstack sites is that they can be much more easily served as immutable and atomic deployments.

Phil Hawksworth: And I want to, kind of, jump away from the slides just for a moment, to describe what that means, because an immutable deployment and an atomic deployment can… (that can just sound a little bit like marketing speak) but what I’m going to do, is I’m going to jump into my browser. Now … actually, I’m going to go back for a second. Let me… just do this.

Phil Hawksworth: Here we are. This will be easier. Jumping right into the page. Now, Scott, you will tell me, Scott, if you can’t see my browser, I’m hoping. Now, assuming everyone can see my browser.

Scott: Everything looks good.

Phil Hawksworth: Excellent! Thank you very much!

Phil Hawksworth: So, what I’m doing here is I’m using Netlify as an example of the service. But this is an attribute which is common to sites that can be hosted statically. So, when we talk about an immutable deployment, what we mean is that, rather than each deployment of code having to touch lots of different parts of the infrastructure and change lots of things, here we’re not mutating the state of the site on the server. We’re creating an entirely new instance of the site for every deployment that happens. And we can do that because the site is a collection of static assets.

Phil Hawksworth: Here, I’m looking at the deployments that have happened for my own website. I’ll give you a treat. There you are, that’s what it looks like. It’s just a blog, it doesn’t look like anything particularly remarkable or exciting. It’s a statically generated blog, but what I have here is every deployment that’s ever happened — each lives on in perpetuity, because it’s a collection of static assets that are served from a CDN. Now, I could go back as far as my history can carry me and go and look at the site as it was back in… when was this? This was August, 2016. And by virtue of it being a set of static assets, we’re able to host this on its own URL that lives on in perpetuity, and if I wanted to, I could decide to go in and publish that deployment.

Phil Hawksworth: So, now, anyone who’s looking at my website — if I go back to my website here and refresh that page — that’s now being served directly from the CDN with the assets that were there before. Now, I can navigate around again. Here, you can see. Look, I was banging on about this, I was using these terrible terms like isomorphic rendering and talking about the JAMstack back in 2016. So, this is now what’s being served live on my site. Again, because these are immutable deployments that just live on forever. For my own, kind of, peace of mind, I’m going to — is this the first page? Yeah. I’m going to go back to my latest deployment and get back into the real world. Let me just make sure this is okay.

Phil Hawksworth: Okay! Great! So now, this is back to serving my previous version — or rather, my latest version — of the site. I’ll hop back to Keynote. So, this is possible because things are immutable and atomic. The atomic part of that means, again, that the deployment is completely contained. So, you never get to the point where some of the assets are available on the web server but some of them aren’t. Only when everything is there, complete, do we toggle the serving of the site to the new version. Again, this is the kind of thing you can do much more easily if you’re building things out as a JAMstack site that serves directly from the CDN as a bunch of assets.
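
As a back-of-the-envelope sketch of what “immutable and atomic” means in practice (not how Netlify is implemented), each build can land in its own release folder that is never modified again, and a single pointer is switched only once everything is in place:

    // deploy.js — sketch: copy the build into an immutable, timestamped release,
    // then atomically repoint a "current" symlink once the whole copy is complete.
    // Assumes Node 16.7+ for fs.cpSync.
    const fs = require('fs');
    const path = require('path');

    fs.mkdirSync('releases', { recursive: true });
    const releaseDir = path.join('releases', new Date().toISOString().replace(/[:.]/g, '-'));
    fs.cpSync('dist', releaseDir, { recursive: true }); // this folder is never edited later

    const tmpLink = 'current.tmp';
    fs.rmSync(tmpLink, { force: true });
    fs.symlinkSync(releaseDir, tmpLink);
    fs.renameSync(tmpLink, 'current'); // atomic swap: visitors see the old or new site, never half of each

    // "Publishing" an old deploy, as in the demo, is just pointing "current"
    // at an earlier release folder — nothing in releases/ ever changes.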

Phil Hawksworth: I noticed that my timer has reset, after going back and forward from keynote, so I think I have about six or seven minutes left. Tell me, Scott, if—

Scott: So, yeah, we’re still good for about ten minutes.

Phil Hawksworth: Ten minutes? Okay, wonderful!

Scott: There’s no rush.

Phil Hawksworth: Thank you, that should be good!

Phil Hawksworth: So, just switching gear a tiny bit and talking about APIs and services (since APIs are part of JAMstack): the kinds of services that we might then be able to use in a JAMstack site. You know, we might be using services that we built in-house, or we might be using bought services. There are lots of different providers who can do things for us, and that’s because that’s their expertise. Through APIs, we might be pulling in content from content management systems as a service, and there’s a bunch of different providers for this, who specialize in giving a great content management experience and then exposing the content through an API, so you’re able to pull it in.

Phil Hawksworth: Likewise, there are different ways that you can serve assets. People like Cloudinary are great at this, doing image optimization and serving assets directly to your sites, again, through APIs. Or what about e-commerce? You know, there are places like Stripe or Snipcart that can provide us APIs, so that we don’t have to build these services ourselves and get into the very complex issues that come with trying to build an e-commerce engine. Likewise, identity, from people like Auth0, who are using OAuth. There are lots of services that are available, and we can consume these things through APIs, either in the browser or at build time, or sometimes, a combination of both.

Phil Hawksworth: Now, some people might think, “Well, that’s fine, but I don’t want to give the keys to the kingdom away. I don’t want to risk giving these services over to external providers,” and to that, I say, well, you know, vendors who provide a single service really depend on its success. If a company’s core business, or entire business, is building out an e-commerce engine — an e-commerce service for you — they’re doing that for all of their clients, all of their customers, so they really depend on its success and they have the specialist skills to do that.

Phil Hawksworth: So, that kind of de-risks it for you a great deal, especially when you start to factor in the fact that you can have your technical and service-level contracts to give you that extra security. But it’s not all about bought services, it’s also about services you can build yourselves. Now, there’s a number of ways that this can happen, but sometimes, you absolutely need a little bit of logic on the server. And so far, I’ve just been talking about taking the server out of the equation. So, how do we do that?

Phil Hawksworth: Well, this is where serverless can really come to the rescue. Serverless and JAMstack just fit together really beautifully. And when I’m talking about serverless, I’m talking about functions as a service. I know that serverless can be a bit of a loaded term, but here, I’m talking about functions as a service, because functions as a service can start to enable a real micro-services architecture. Using things like AWS Lambda or Google Cloud Functions, or similar functions as a service, can allow you to build out server infrastructure without a server. Now, you can start deploying JavaScript logic to something that just runs on demand.

Phil Hawksworth: And that means you can start supplementing some of these other services with, maybe, very targeted small services you build yourselves that run as serverless functions. These kinds of smaller services are easier to reason about, understand, and build out, and they create much greater agility. I want to just mention a few examples and results from JAMstack sites. I’m not going to go down the serverless avenue too much right now — we can, maybe, come back to that in the questions. I really just kind of want to switch gear and, thinking about time a little bit, talk about some examples and results.

Phil Hawksworth: Because there are some types of sites that lend themselves in a very obvious way to a JAMstack approach. Things like the documentation for React or Vue.js — those are [inaudible 00:32:40], pre-rendered JAMstack sites. As are sites for large companies, such as Citrix; this is a great example with Citrix’s multi-language documentation. You can actually view the video from the JAMstack conference that happened in New York, where Beth Pollock, who worked on this project, talked about the change that went on in that project: from building on traditional enterprise infrastructure to a JAMstack approach, building with Jekyll — which is not necessarily the fastest static site generator — but still, they saw a huge improvement.

Phil Hawksworth: Beth talked about how the turnaround time for updates went from weeks to minutes. Maybe people are kind of frowning at the idea of weeks for updates to sites, but sometimes in big, complex infrastructure, with lots of stakeholders and lots of moving parts, this really is the situation we often find ourselves in. Beth also talked about the estimated annual cost savings from this move to a JAMstack site: to get the site running properly, they estimated savings of around 65%. That’s a huge saving. Changing gear to a slightly different type of site, something a little closer to home: Smashing Magazine itself is a JAMstack site, which might be a little bit surprising, because on one hand, yes, there are lots of articles, and it’s also content which is published regularly — but not every minute of the day, for sure.

Phil Hawksworth: So, that might lend itself, perhaps, to something that’s pre-generated, but of course, there’s also a membership area and an events section, and a job board, and e-commerce, and all of these things. This is all possible on the JAMstack because not only are we pre-rendering, but we’re also enriching things with JavaScript in the front end to call out to APIs, which let some of these things happen. I think I saw Vitaly arrive on the call, so that’s going to be good — we might be able to come back to this in a few minutes.

Phil Hawksworth: But the project that migrated Smashing Magazine onto a JAMstack approach, I believe, simplified the number of platforms from five, effectively, down to one. And I’m using Vitaly’s words directly here: Vitaly talked about having some caching issues, trying to make the site go quickly, using probably every single WordPress caching plug-in out there — and goodness knows, there are a few of them! So, Smashing Magazine saw an improvement in performance: time to first load went from 800 milliseconds to 80 milliseconds. Again, simplifying the infrastructure that served the site in the first place. So, it’s kind of interesting to see the performance gains that can happen there.

Phil Hawksworth: Another totally different type of site. This is from the Google Chrome team, who built this out to demonstrate at Google I/O this year. This very much feels like an application. This is Minesweeper in a browser. I apologize if you’re watching me play this. I’m not playing it while talking to you; I recorded this sometime ago and it’s agony to see how terrible I seem to be at Minesweeper while trying to record. That’s not a mine, that can’t be!

Phil Hawksworth: Anyway, we’ll move on.

Phil Hawksworth: The point of that is, this is something that feels very much more like an app, and it was built in a way to be very responsible about the way it used JavaScript. The payload to interactive was 25KB. This progressively would download and use other resources along the way, but it meant that the time to interact was under five seconds on a very slow 3G network. So, you can be very responsible with the way you use JavaScript and still package up JavaScript, as part of the experience for a JAMstack site.

Phil Hawksworth: So, I’m kind of mindful of time. We’re almost out of time, so what is the JAMstack? Well, it’s kind of where we started from. JAMstack sites are rendered in advance: they’re not dependent on an origin server (that’s kind of key), and they may be progressively enhanced in the browser with JavaScript. But as we’ve shown, you don’t have to use JavaScript at all. You might just be serving that statically, ready to go, without that. It’s an option available to you.

Phil Hawksworth: The key tenet, I think, of JAMstack sites is that they’re served without web servers. They’re pre-rendered, done in advance.

Phil Hawksworth: If you’re interested in more, it’s already been mentioned a little bit, there is a conference in London in July — July 9th and 10th. The speakers are going to be talking about all kinds of things to do with performance in the browser, things that you can do in the browser, results of building on the JAMstack and things to do with serverless.

Phil Hawksworth: There’s also a bunch of links in this deck that I will share after this presentation, including various bits and pieces to do with static site generation, things like headless CMSes, the jamstack.org site itself, and a great set of resources on a website called “The New Dynamic”, which is just always full of the latest information on JAMstack. We’re out of time, so I’ll wrap it up there and head back across to questions. So, thanks very much for joining, and I’d love to take questions.

Scott: Thanks, Phil. That was amazing, thank you. You made me feel quite old when you pulled up the Minesweeper reference, so—

Phil Hawksworth: (laughs) Yeah, I can’t take any credit for that, but it’s kind of fascinating to see that as well.

Scott: So, I do think Vitaly is here.

Vitaly: Yes, always in the back.

Phil Hawksworth: I see Vitaly’s smiling face.

Vitaly: Hello everyone!

Phil Hawksworth: So, I’m going to hand it over to Vitaly for the Q&A, because I seem to have a bit of a lag on my end, so I don’t want to hold you guys up. Vitaly, I’ll hand it over to you.

Scott: Okay. Thanks, Scott.

Vitaly: Thanks, Scott.

Vitaly: Hello—

Vitaly: Oh, no, I’m back. Hello everyone. Now Scott is back but Phil is gone.

Scott: I’m still here! Still waiting for everything.

Vitaly: Phil is talking. Aw, Phil, I’ve been missing you! I haven’t seen you, for what, for days, now? It’s like, “How unbelievable!” Phil, I have questions!

Vitaly: So, yeah. It’s been interesting for us, actually, to move from WordPress to JAMstack — it was quite a journey. It was quite a project, with all the remaining moving parts and all. So, it was actually quite an undertaking. So, I’m wondering, though, what would you say if we look at the state of things and at the complexity, itself, that applications have. Especially if you have to deal with a lot of legacy: imagine you have to deal with five platforms, maybe seven platforms here, and different things. Maybe you have an old legacy project in Ruby, you have something running on PHP, and it’s all kind of connected, but in a hacky way. Right? It might seem like an incredible effort to move to JAMstack. So, what would be the first step?

Phil Hawksworth: So … I mean, I think you’re absolutely right, first of all. Re-platforming any site is a big effort, for sure, particularly if there’s lots of legacy. Now, the thing that I think is kind of interesting is an approach that I’ve seen getting more popular: identifying attributes of the site, parts of the site, that might lend themselves more immediately to being pre-generated and served statically than others. You don’t necessarily have to do everything as a big bang. You don’t have to do the entire experience in one go. So, one of the examples I shared, kind of, briefly was the Citrix documentation site.

Phil Hawksworth: They didn’t migrate all of Citrix.com across to being JAMstack. They identified a particular part that made sense to be pre-rendered, and they built that part out. And then, what they did was they started to put routing in front of all the requests that would come into their infrastructure. So, it would say, “Okay, well, if it’s in this part of the domain — either in the sub-domain or maybe through to a path — route that through to something which is static; the rest of it, pass through to the rest of the infrastructure.”

Phil Hawksworth: And I’ve seen that happen, time and time again, with some intelligent redirects — which, thankfully, is something that you can do reasonably simply on the JAMstack. You can start to put fairly expressive redirect engines in front of the site. It means that you can pass things through to the existing infrastructure, except for just that section of the site that you’ve taken on as a JAMstack project. So, choosing something and focusing on that first, rather than trying to do a big bang where you do all of the legacy and migration in one — I think that’s key, because, yeah, trying to do everything at once is pretty tough.
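
For illustration, the routing Phil describes can be a handful of rules in front of the site. This sketch uses a Netlify-style _redirects file as one assumed convention, with a hypothetical legacy origin; other CDNs and hosts have comparable mechanisms:

    # Requests that match a pre-rendered file are served statically from the CDN.
    # Anything that doesn't match falls through to this rule and is proxied
    # to the existing infrastructure while the migration continues.
    /*    https://legacy.example.com/:splat    200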

Vitaly: It’s interesting, because just, I think, two days ago — maybe even today — Chris Coyier wrote an article renaming JAMstack to SHAMstack where, essentially, it’s all about JavaScript and how we need to think about static hosting, because JavaScript could be hosted statically as well. And it’s interesting, because he was saying exactly that. When we think about JAMstack, very often we kind of tend to stay in camps. It’s either fully rendered and it lives as a static thing — somewhere there, in a box, served from a CDN, and that’s it — or it’s fully expressive and reactive and everything you ever wanted. And actually, he was really right about a few things, like identifying some of the things that you can off-load to static-site-generated assets, so to say, to that area.

Vitaly: And JAMstackify, if you like, some of the fragments of your architecture. Well, that’s a new term I’m just going to coin, right there! JAMstackify.

Phil Hawksworth: I’m going to register the domain quickly, before anybody else.

Phil Hawksworth: And it’s a nice approach. I think, it kind of makes my eye twitch a little bit when I hear that Chris Coyier has been redubbing it the SHAMstack, because it makes it sound like he thinks it’s a shambles. But I know that he’s really talking about static-hosting and markup, which I—

Vitaly: Yes, that’s right.

Phil Hawksworth: I really like that, because the term JAMstack can be really misleading — it’s trying to cover so many different things, and the point I was trying to make (I probably hammered it many times in those slides) is that it can be all kinds of things. It’s so broad, but the key is pre-rendering and hosting the core of the site statically. It’s very easy for us to get into religious wars about whether it needs to be a React site — it has to be a React app in order to be a JAMstack site, or it’s a React app, so it can’t be JAMstack. But, really, the crux of it is, whether you use JavaScript or not, whether you’re calling APIs or not, if you pre-render and get things onto a static host that can be very performant, that’s the core of JAMstack.

Vitaly: Yes, absolutely.

Phil Hawksworth: We’re very fortunate that browsers are getting so much more capable, and the APIs that are there within browsers themselves can allow us to do much more as well. So, that kind of opens the doors even further, but it doesn’t mean that everything we build as a JAMstack site has to make use of everything. Depending on what we’re trying to deliver, that’s how we should start to choose the tools that we’re using to deploy those things.

Vitaly: Absolutely. We have Doran here. Doran, I think I know Doran. He’s asking, “Do you expect serverless to be gravitating towards seamless integration with JAMstack from [inaudible 00:44:36]?” — what is referred to as the A in JAM.

Phil Hawksworth: That’s a great question, because I think, serverless functions are — they just go so well with JAMstack sites, because in many ways, in fact, I think someone once asked me if JAMstack sites are serverless, and so I squirmed about that question, because serverless is such a loaded term. But, in many ways, it’s bang-on because I was talking, time and time again, about there’s no origin server. There’s no server infrastructure for you to manage. In fact, I once wrote a blog post called “Web Serverless,” because the world needs another buzz term, doesn’t it?

Phil Hawksworth: And really, the kind of point of that was, yes, we’re building things without servers. We don’t want to have to manage these servers, and serverless functions, or functions as a service, just fits into that perfectly. So, in the instances that you do need an API that you want to make a request to, where really it’s not sensible for you to make that request directly from the browser. So, for instance, if you’ve got secrets, or keys, in that request, you might not want those requests — that information — ever exposed in the client. But we can certainly proxy those things, and typically, traditionally, what we need to do then, is spin-up a server, have some infrastructure that was effectively doing little more than handling requests, adding security authentication to it and passing those requests on, proxying them back.

Phil Hawksworth: Serverless functions are perfect for that. They’re absolutely ideal for that. So, I sometimes think of serverless functions, or functions as a service, almost as like an escape hatch, where you just need some logic on a server, but you don’t want to have to create an entire infrastructure. And you can do more and more with that, and the development pipelines — the development workflows — for functions as a service are maturing. It’s getting more accessible for JavaScript developers to be able to build these things out. So, yeah, I really think those two things go together very nicely.
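
As a minimal sketch of that “escape hatch” — assuming an AWS Lambda / Netlify Functions-style handler, Node 18+ for the global fetch, and a hypothetical API and environment variable:

    // functions/proxy.js — keep the secret on the server side, out of the browser.
    exports.handler = async (event) => {
      const response = await fetch('https://api.example.com/v1/orders', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          Authorization: `Bearer ${process.env.API_SECRET}`, // never shipped to the client
        },
        body: event.body, // pass the browser's request body straight through
      });

      return {
        statusCode: response.status,
        body: await response.text(),
      };
    };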

Vitaly: All right, that’s a very comprehensive answer. Actually, I attended a talk just recently, where a front-end engineer from Amazon was speaking about serverless and the Lambda functions they’re using — I was almost lost. He was also speaking about Docker, and Kubernetes, and all those things, the DevOps world, and I was sitting there, thinking, “How did he end up there? I don’t understand what’s going on!” I had no idea what was going on.

Phil Hawksworth: Exactly, but the thing is, it used to be the… I was… I accepted that I didn’t understand any of that world, but I didn’t have any desire to, since that was for an entirely different discipline. And that discipline is still really important. You know, people who are designing infrastructure — that’s still really key. But, it just feels, now, that I’m tempted. As someone with a front-end development background, as a JavaScript developer, I’m much more tempted to want to play in that world, because the tools are coming, kind of, closer to me.

Phil Hawksworth: It’s much more likely that I might be able to use some of these things, and deliver things kind of safely, rather than just as an experiment of my own, which is where I used to be dabbling. So, it feels like we’re becoming more powerful as web developers, which is exciting to me.

Vitaly: Like Power Rangers, huh?

Vitaly: One thing I do want to ask you, though, and this is actually something we discussed already, maybe, a week ago, but I still wanted to bring it up, because one thing that you mentioned in the session was the notion of having a stand-alone instance of every single deploy, which is really cool. The question, though, is if you have a large site, with tens of thousands of pages, you really don’t want to redeploy everything, every single time — especially if you’re mostly using the static side of things. So, we had this idea for a while, and I know this is actually something that you brought up last time: the idea of atomic deployments.

Vitaly: Where you actually, literally, would deploy some sort of diff between two different versions, or snapshots, of the set-up. So, if you, say, change the header everywhere, then, of course, every single page has to be redeployed. But if you change, maybe, a component — let’s say a carousel — that affects only 1,000 pages, then it wouldn’t make sense to redeploy all 15,000 pages, but only those 1,000. So, can we get there? Is it a magical idea that’s out there, or is it something that’s quite tangible at this point?

Phil Hawksworth: I think that is, probably, the Holy Grail for static site generators and this kind of model, because, certainly, you’ve identified probably the biggest hurdle to overcome, or the biggest ceiling that you bump into. And that is websites that have many tens of thousands, or hundreds of thousands, or millions of URLs — the notion that the build can become very long. Being able to detect which URLs will change, based on a code change, is a challenge. It’s not insurmountable, but it’s a big challenge. Understanding what the dependency graph is across your entire site and then intelligently deploying that — that’s really tough to do.

Phil Hawksworth: Because, as you mentioned, a component change might have very far-reaching implications, and it’s difficult, always, to know how that’s going to work. So, there are a number of static site generators — projects that are putting some weight behind that challenge — trying to figure out how to do partial regeneration and incremental builds. I’m very excited at the prospect that that might get solved one day, but at the moment, it’s definitely a big challenge. You can start to do things like trying to logically shard your site, and think about it — again, kind of similar to the migration issue: well, this section I know is independent in terms of, kind of, some of the assets that it uses, or the type of content that lives there, so I can deploy them individually.

Phil Hawksworth: But that’s not ideal to me. That’s not really the perfect scenario. One of the approaches that I’ve explored a little bit, just as a proof of concept, is thinking about how you do things like making intelligent use of 404s. So, for instance, a big use case for very large sites, maybe news sites, is that when a breaking news story happens, they need to be first to get it deployed out there. They need to get a URL up there. On things like BBC News, you’ll see that the news story will arrive on the website, and then over time, they’ll add to it incrementally, but getting there first is key. So, having a build that takes 10 minutes, 20 minutes, whatever it’s going to be — that could be a problem.

Phil Hawksworth: But if their content is abstracted and can be called from an API — I mentioned content management systems that are abstracted, like Contentful, or Sanity, or a bunch of those; anything that has a content API — then changes to that content will trigger a build and we’ll go through the run. But the other thing that could happen is that, well, if you publish the URL for that, and then publicize that URL, even if the build hasn’t run, if someone hits that URL, the first stop on its 404 — instead of saying, “We haven’t got it” — is actually to hit that API directly. Then you can say, “Well, the build hasn’t finished populating that yet, but I can do it in the client.” I can go directly to the API, get that, and populate it in the client.

Vitaly: Hmm, interesting.

Phil Hawksworth: So, even while the build is still happening, you can start populating those things. And then, once the build’s finished, of course, it wouldn’t hit a 404. You would hit that statically rendered page. So, there are techniques and there are strategies to tackle it, but still — it’s a very long, rambling answer, I’m sorry — my conclusion is, yeah, that’s a challenge. Fingers crossed we’ll get better strategies.
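
As a sketch of that 404 fallback idea: the static 404 page can ship a small script that tries the content API directly, using the requested path as the lookup key (the API shape here is hypothetical):

    // Included on the static 404 page: if the build hasn't caught up yet,
    // fetch the article straight from the content API and render it client-side.
    (async () => {
      const slug = window.location.pathname.replace(/^\/+|\/+$/g, '');
      try {
        const res = await fetch(`https://content-api.example.com/articles/${slug}`);
        if (!res.ok) return; // genuinely missing: leave the normal 404 page in place
        const article = await res.json();
        document.title = article.title;
        document.body.innerHTML = `<h1>${article.title}</h1>${article.html}`;
      } catch (err) {
        // Network or API failure: fall back to the standard 404 content.
      }
    })();
    // Once the next build finishes, the URL is served statically and this never runs.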

Vitaly: Yeah, that’s great. So, at this point, we really are thinking not just about performance in terms of content delivery, but also performance in terms of build speed — build and deployment. This is also something that we’ve been looking into for quite a bit of time now, as well.

Vitaly: One more thing I wanted to ask you. This technique that you mentioned is interesting — how do you learn about this? Is this just something people tend to publish on their own blogs, or is there some medium or central repository where you can find case studies of how companies have moved, or have failed to move, to JAMstack?

Phil Hawksworth: So, this landscape is kind of maturing a little bit at the moment. I mean, some of these examples — I think I’m in a very fortunate position: I work somewhere where I’m in a role that has me playing with the toys, coming up with interesting ways to use them and experimenting with them. So, these proofs of concept are, kind of, things that I get to experiment with to try to address these challenges. But, as I kind of mentioned earlier, there was a case study shown at the JAMstack conference in New York, and certainly, at events like that, we’re starting to see best practices, or industry practices and approaches, being talked about. And certainly, I want to see more, and work on more case studies to get into places like Smashing Magazine, so that we can share this information much more readily.

Phil Hawksworth: I think large companies and the enterprise space are gradually adopting JAMstack, in different places, in different ways, but the word is still slowly getting out there, so I think each time a company adopts it and shares their experience, we all get to learn from that. But I really want to see more and more of these case studies get published, so that we can learn, in particular, how these kinds of challenges are overcome.

Vitaly: Alright, so, then, maybe just the last question from me, because I always like to read questions. So, in JAMstack land, if you could change something — maybe there is something that you desperately would love to see, beyond deployments. Anything else that would really make you very happy? That would make your day? What would that be? What’s on your wishlist for JAMstack?

Phil Hawksworth: What a question. I mean, had we not talked about incremental builds, that would be—

Vitaly: We did. That’s too late, now. This card has been passed, already. We need something else.

Phil Hawksworth: So—

Vitaly: What I mean is, if you look at the web platform, there are so many exciting things happening. We have Houdini, we have web components coming, and everything — this could be changing the entire landscape of how we write components. On the other side, we have all this magical, fancy world with static site generators, and, of course, obviously, we also have single-page applications and all. What are you most excited about?

Phil Hawksworth: I’m going to be obtuse here, because there is so much stuff that’s going on, it’s exciting, and there are so many new capabilities that you can make use of in the browser. The thing that I really get excited about is people showing restraint (laughs) and, as I said, it’s a dull answer, but I love seeing great executions that are done with restraint, in a thoughtful way, with the wider audience in mind. It’s really good fun, and gratifying, to build with the shiniest new JavaScript library or the new browser API that does, I don’t know, scratch-and-sniff capabilities in the browser, which we desperately need, any day now.

Phil Hawksworth: But I really like seeing things that I know are going to work in many, many places. They’re going to be really performant, are going to be sympathetic to the browsers that exist — not just on the desks of CEOs and CTOs who got the snazzy toys, but also people who have got much lower-powered devices, or they’ve got challenging network conditions and those kinds of things. I like seeing interesting experiences, and rich experiences, delivered in a way that are sympathetic to the platform, and kind of, compassionate for the wider audience, because I think the web reaches much further than us, the developers, who build things for it. And I get excited by seeing interesting things done, in ways that reach more people.

Phil Hawksworth: That’s probably not the answer you were necessarily—

Vitaly: Oh, that’s a nice ending. Thank you so much. No, that’s perfect, that really is. All right, I feel everything went well! Thank you so much for being with us! I’m handing over to Scott!

Phil Hawksworth: Great!

Vitaly: I’m just here to relay questions and answers. So, thank you so much, Phil! I’m still here, but Scott, the stage is yours now! Maybe you can share with us what’s coming up next on Smashing TV?

Scott: I will, but first, Phil, I can’t wait to see how the implementation of the scratch-and-sniff API works. Sounds very interesting. And, Vitaly, JAMstackify is already taken.

Vitaly: (dejected) Taken?! Can we buy it?

Scott: No, it exists!

Vitaly: Well, that’s too late. I’m always late.

Phil Hawksworth: That’s exciting in its own way.

Vitaly: That’s the story of my life. I’m always late.

Scott: Members, coming up next, I believe, Thursday, the 13th, we have my ol’ pal, Zach Leatherman, talking about what he talks about best, which is fonts. So, he’s talking about the Five Whys of Font Implementations. And then, I’m also very interested in the one we have coming up on the 19th, which is editing video with JavaScript and CSS, with Eva Ferreira. So, stay tuned for both of those.

Scott: So, that is again, next Thursday, with Zach Leatherman, and then on the 19th, with Eva, who will be talking about editing video in JavaScript and CSS. So, on that note, Phil, I can’t see you anymore, are you still there?

Phil Hawksworth: I’m here!

Scott: On that note, thank you very much everyone! Also, is anybody in the, kind of, close to Toronto area? Or anybody that’s ever wanted to visit Toronto? We have a conference coming up at the end of June, and there’s still a few tickets left. So, maybe we’ll see some of you there.

Vitaly: Thank you so much, everyone else!

Vitaly: Oh, by the way, just one more thing! Maybe Phil mentioned it, but we also have the JAMstack Conference in London, in July. So, that’s something to watch out for, as well. But I’m signing off and going to get my salad! Not sure what you—

Scott: Okay, goodbye, everybody!

Vitaly: All right, bye-bye, everyone.

That’s A Wrap!

We kindly thank Smashing Members from the very bottom of our hearts for their continuous and kind support — and we can’t wait to host more webinars in the future.

Also, Phil will be MCing at SmashingConf Toronto 2019 next week and at JAMstack_conf — we’d love to see you there as well!

Please do let us know if you find this series of interviews useful, and whom you’d love us to interview, or what topics you’d like us to cover and we’ll get right to it.

Smashing Editorial (ra, il)
Are Design Conferences Dead?

HubSpot estimates that a 400-person conference on the level of something like Hustle Con could realistically cost around $20,000 to host, not to mention all the time and coordination that goes into running an event of that size.

As you can imagine, conference organizers have to charge outlandish registration fees in order to make an event of that size profitable. What’s more, many of them gate off the most valuable parts of the event (namely, the workshops and other interactive activities), reserving them for those who pay more for access.

With the high cost to put on a conference and the high cost to attend one, are design conferences worth anyone’s time anymore?

Reasons Why Conferences May Be Declining

Here’s the thing: design and development conferences don’t seem to be in short supply these days. Many of the major players in this space — A List Apart, Smashing Magazine, IBM, Gartner, Adobe — continue to put on annual conferences that attract attendees in big numbers.

But does that mean that design conferences are a good investment of anyone’s time and money, be it the host, speakers, attendees, or sponsors?

There are certainly a number of valid objections people could make:

They’re Expensive

McCrindle and the Melbourne Convention Bureau set out to discover what’s happening with The Future of Business Meetings (i.e. conferences and other professional events). Their report revealed the most commonly reported challenges.

It should come as no surprise to anyone that the cost of these events and the destinations in which they are held take the top three spots.

While some employers may still be inclined to spend marketing dollars on sending employees to professional conferences, this likely isn’t something self-employed designers can afford to do for themselves. Unfortunately, this is a group of professionals that could greatly benefit from the educational and networking opportunities available at conferences.

What’s more, the stuff that would make attending an event like this worth it (e.g. workshops, one-on-one opportunities with guest speakers, etc.) is only available when you pay for upgraded VIP packages.

PowerPoint Sucks

One of the problems with speakers relying on a stack of notecards or a PowerPoint presentation is that their attention is committed to the words and graphics in front of them, not to engaging with the audience.

We’ve become so accustomed to ingesting content quickly on the web and having brief but often valuable discussions on social media with our peers, that this lecture-like format can feel tedious.

When conference sessions are formatted in such a manner, this can easily leave attendees feeling bored and disconnected. Worse, stack up enough of these sessions back-to-back over the course of an eight-hour day, for multiple days in a row, and they could have a bad case of burnout by the end of it.

They’re Shallow

Dave Thackeray, the Communications Director of Word and Mouth, was asked why he attends marketing conferences. This was his response:

I only ever go to marketing conferences to meet new people and listen to their experiences. Those people never include the speakers, who are generally wrapped up in their own world and rarely give in the altruistic sense.

Thackeray isn’t the only one who finds value in spending less time watching the stage. McCrindle survey respondents said they want to devote less time to keynote speeches and sessions, too.

Instead, it’s their hope that more time will be given to meaningful networking encounters.

It’s Repetitive Information

One of the problems with planning a large conference is that organizers need well-known guest speakers and a well-formulated schedule to lure in attendees. And if they want to sell a lot of tickets in order to turn a profit, that information has to be made available to the public early.

There are a couple of problems with that…

For starters, conferences end up with many of the same guest speakers who give the same ol’ messages to every crowd they stand before.

Then, there’s the fact that speakers have to plan their topics in advance. When this happens, conferences end up loaded with basic topics that designers could easily get from blogs, vlogs, and podcasts.

If conferences are planned too far ahead, they end up relying more on “celebrity” speakers than on creating a thought-provoking and future-focused schedule of discussions.

Should You Spend Your Money on a Conference This Year?

There are many reasons why you might feel as though attending a design conference isn’t worth it. Unpaid time away from work. Boring topics. Too many lectures and not enough questions answered.

But there are also many reasons why you should consider attending a design conference regardless of the objections.

Change of Pace – For many of us, solo work environments at home are the norm. Conferences give you a reason to break out of your routine and try learning (and maybe even doing) in a different location.

Networking – While you can certainly “meet” fellow designers and creatives online, nothing beats the experience of talking to your peers over a cup of coffee or a pint.

Build Your Brand – Whether you go to a conference as an attendee, a speaker, or a sponsor, it gives you a new opportunity to get your brand out there.

Exploration / Education – When choosing a conference, the topics covered are likely to be the biggest draw for you. But don’t just use this as an opportunity to re-examine what you know. Explore other fields, other techniques, and other tools.

New Formats – The traditional conference format — with keynote speakers, breakaway sessions, and the occasional break or after party — may not have much of a future. However, new conference formats have a good chance of surviving.

Many people envision formats that are more convenient, more interactive, and feel more personal. And because many of these won’t rely on an oversized convention space and celebrity speakers, the cost to host and the cost to attend should be much lower, too.

But, look, if you’re still feeling wary about the value of conferences, why not join a local professional group instead? They’re convenient, cost-effective, and still give you a chance to learn and mingle with your peers.

Featured image via Unsplash.


Optimizing Google Fonts Performance

Danny Cooper

It’s fair to say Google Fonts are popular. As of writing, they have been viewed over 29 trillion times across the web and it’s easy to understand why — the collection gives you access to over 900 beautiful fonts you can use on your website for free. Without Google Fonts you would be limited to the handful of “system fonts” installed on your user’s device.

System fonts or ‘Web Safe Fonts’ are the fonts most commonly pre-installed across operating systems. For example, Arial and Georgia are packaged with Windows, macOS and Linux distributions.

Like all good things, Google Fonts come at a cost. Each font is a file that the web browser needs to download before the text can be displayed. With the correct setup, the additional load time isn’t noticeable. However, get it wrong and your users could be waiting up to a few seconds before any text is displayed.

Google Fonts Are Already Optimized

The Google Fonts API does more than just provide the font files to the browser, it also performs a smart check to see how it can deliver the files in the most optimized format.

Let’s look at Roboto: GitHub tells us that the regular variant weighs 168kb.

Roboto Regular has a file size of 168kb
168kb for a single font variant. (Large preview)

However, if I request the same font variant from the API, I’m provided with this file, which is only 11kb. How can that be?

When the browser makes a request to the API, Google first checks which file types the browser supports. I’m using the latest version of Chrome, which like most browsers supports WOFF2, so the font is served to me in that highly compressed format.

If I change my user-agent to Internet Explorer 11, I’m served the font in the WOFF format instead.

Finally, if I change my user agent to IE8 then I get the font in the EOT (Embedded OpenType) format.

Google Fonts maintains 30+ optimized variants for each font and automatically detects and delivers the optimal variant for each platform and browser.

— Ilya Grigorik, Web Font Optimization

This is a great feature of Google Fonts: by checking the user-agent, the API can serve the most performant formats to browsers that support them, while still displaying the fonts consistently on older browsers.

Browser Caching

Another built-in optimization of Google Fonts is browser caching.

Due to the ubiquitous nature of Google Fonts, the browser doesn’t always need to download the full font files. Smashing Magazine, for example, uses a font called ‘Mija’. If this is the first time your browser has seen that font, it will need to download it completely before the text is displayed; the next time you visit a website using that font, the browser will use the cached version.

As the Google Fonts API becomes more widely used, it is likely visitors to your site or page will already have any Google fonts used in your design in their browser cache.

— FAQ, Google Fonts

The Google Fonts browser cache is set to expire after one year unless the cache is cleared sooner.

Note: Mija isn’t a Google Font, but the principles of caching aren’t vendor-specific.

Further Optimization Is Possible

While Google invests great effort in optimizing the delivery of the font files, there are still optimizations you can make in your implementation to reduce the impact on page load times.

1. Limit Font Families

The easiest optimization is to simply use fewer font families. Each font can add up to 400kb to your page weight; multiply that by a few different font families and suddenly your fonts weigh more than the rest of your page.

I recommend using no more than two fonts: one for headings and another for content is usually sufficient. With the right use of font-size, weight, and color, you can achieve a great look with even one font.

Example showing three weights of a single font family (Source Sans Pro)
Three weights of a single font family (Source Sans Pro). (Large preview)

2. Exclude Variants

Due to the high-quality standard of Google Fonts, many of the font families contain the full spectrum of available font-weights:

  • Thin (100)
  • Thin Italic (100i)
  • Light (300)
  • Light Italic (300i)
  • Regular (400)
  • Regular Italic (400i)
  • Medium (500)
  • Medium Italic (500i)
  • Bold (700)
  • Bold Italic (700i)
  • Black (900)
  • Black Italic (900i)

That’s great for advanced use-cases which might require all 12 variants, but for a regular website, it means downloading all 12 variants when you might only need 3 or 4.

For example, the Roboto font family weighs ~144kb. If, however, you only use the Regular, Regular Italic, and Bold variants, that number comes down to ~36kb. That’s a 75% saving!

The default code for implementing Google Fonts looks like this:

<link href="https://fonts.googleapis.com/css?family=Roboto" rel="stylesheet">

If you do that, it will load only the ‘Regular 400’ variant, which means all light, bold, and italic text will not be displayed correctly.

To instead load all the font variants, we need to specify the weights in the URL like this:

<link href="https://fonts.googleapis.com/css?family=Roboto:100,100i,300,300i,400,400i,500,500i,700,700i,900,900i" rel="stylesheet">

It’s rare that a website will use all variants of a font from Thin (100) to Black (900), the optimal strategy is to specify just the weights you plan to use:

<link href="https://fonts.googleapis.com/css?family=Roboto:400,400i,600" rel="stylesheet">

This is especially important when using multiple font families. For example, if you are using Lato for headings, it makes sense to only request the bold variant (and possibly bold italic):

<link href="https://fonts.googleapis.com/css?family=Lato:700,700i" rel="stylesheet">

3. Combine Requests

The code snippet we worked with above makes a call to Google’s servers (fonts.googleapis.com); that’s called an HTTP request. Generally speaking, the more HTTP requests your web page needs to make, the longer it will take to load.

If you wanted to load two fonts, you might do something like this:

<link href="https://fonts.googleapis.com/css?family=Open+Sans:400,400i,600" rel="stylesheet">
<link href="https://fonts.googleapis.com/css?family=Roboto" rel="stylesheet">

That would work, but it would result in the browser making two requests. We can optimize that by combining them into a single request like this:

<link href="https://fonts.googleapis.com/css?family=Roboto|Open+Sans:400,400i,600" rel="stylesheet">

There is no limit to how many fonts and variants a single request can hold.

4. Resource Hints

Resource hints are a feature supported by modern browsers which can boost website performance. We are going to take a look at two types of resource hint: ‘DNS Prefetching’ and ‘Preconnect’.

Note: If a browser doesn’t support a modern feature, it will simply ignore it. So your web page will still load normally.

DNS Prefetching

DNS prefetching allows the browser to start the connection to Google’s Fonts API (fonts.googleapis.com) as soon as the page begins to load. This means that by the time the browser is ready to make a request, some of the work is already done.

To implement DNS prefetching for Google Fonts, you simply add this one-liner to your web page’s <head>:

<link rel="dns-prefetch" href="//fonts.googleapis.com">
Preconnect

If you look at the Google Fonts embed code it appears to be a single HTTP request:

<link href="https://fonts.googleapis.com/css?family=Roboto:400,400i,700" rel="stylesheet">

However, if we visit that URL we can see there are three more requests to a different URL, https://fonts.gstatic.com. One additional request for each font variant.

Source code of a Google Fonts Request
(View source) (Large preview)

The problem with these additional requests is that the browser won’t begin making them until the first request to https://fonts.googleapis.com/css is complete. This is where Preconnect comes in.

Preconnect could be described as an enhanced version of DNS prefetch. It is set on the specific origin the browser is going to request from and, instead of just performing a DNS lookup, it also completes the TCP handshake and TLS negotiation.

Just like DNS Prefetching, it can be implemented with one line of code:

<link rel="preconnect" href="https://fonts.gstatic.com/" crossorigin>

Just adding this line of code can reduce your page load time by around 100ms, because the connection to fonts.gstatic.com is opened alongside the initial request rather than after it completes.
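Put together, the hints and the stylesheet request might sit in your page’s <head> like this (the font selection here is only an example; swap in whichever families and weights you actually use):

<head>
  <!-- Resolve the DNS for the stylesheet host early -->
  <link rel="dns-prefetch" href="//fonts.googleapis.com">
  <!-- Open the connection to the host that serves the font files -->
  <link rel="preconnect" href="https://fonts.gstatic.com/" crossorigin>
  <!-- Request only the weights you actually use -->
  <link href="https://fonts.googleapis.com/css?family=Roboto:400,400i,700&display=swap" rel="stylesheet">
</head>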

5. Host Fonts Locally

Google Fonts are licensed under a ‘Libre’ or ‘free software’ license, which gives you the freedom to use, change and distribute the fonts without requesting permission. That means you don’t need to use Google’s hosting if you don’t want to — you can self-host the fonts!

All of the font files are available on GitHub. A zip file containing all of the fonts is also available (387MB).

Lastly, there is a helper service that enables you to choose which fonts you want to use, then it provides the files and CSS needed to do so.
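As a rough sketch, a self-hosted setup boils down to an @font-face rule that points at your own copies of the files. The paths and file names below are assumptions; use whatever the helper service or the GitHub repository gives you:

@font-face {
  font-family: 'Roboto';
  font-style: normal;
  font-weight: 400;
  font-display: swap;
  src: local('Roboto'),
       url('/fonts/roboto-regular.woff2') format('woff2'),
       url('/fonts/roboto-regular.woff') format('woff');
}

body {
  font-family: 'Roboto', Arial, sans-serif; /* system font fallback */
}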

There is a downside to hosting fonts locally. When you download the fonts, you are saving them as they are at that moment. If they are improved or updated, you won’t receive those changes. In comparison, when requesting fonts from the Google Fonts API, you are always served the most up-to-date version.

Google Fonts API Request showing a last modified date
Google Fonts API Request. (Large preview)

Note the lastModified parameter in the API. The fonts are regularly modified and improved.

6. Font Display

We know that it takes time for the browser to download Google Fonts, but what happens to the text before they are ready? For a long time, the browser would show blank space where the text should be, also known as the “FOIT” (Flash of Invisible Text).

Recommended Reading: “FOUT, FOIT, FOFT” by Chris Coyier

Showing nothing at all can be a jarring experience for the end user. A better experience is to initially show a system font as a fallback and then “swap” in the web font once it is ready. This is possible using the CSS font-display property.

By adding font-display: swap; to the @font-face declaration, we tell the browser to show the fallback font until the Google Font is available.

@font-face {
  font-family: 'Roboto';
  src: local('Roboto Thin Italic'),
       url(https://fonts.gstatic.com/s/roboto/v19/KFOiCnqEu92Fr1Mu51QrEz0dL-vwnYh2eg.woff2) format('woff2');
  font-display: swap;
}

In 2019, Google announced support for font-display: swap in the Google Fonts API. You can begin implementing this right away by adding an extra parameter to the fonts URL:

https://fonts.googleapis.com/css?family=Roboto&display=swap

7. Use the Text Parameter

A little-known feature of the Google Fonts API is the text parameter. This rarely used parameter allows you to load only the characters you need.

For example, if you have a text-logo that needs to be a unique font, you could use the text parameter to only load the characters used in the logo.

It works like this:

https://fonts.googleapis.com/css?family=Roboto&text=CompanyName

Obviously, this technique is very specific and only has a few realistic applications. However, if you can use it, it can cut down the font weight by up to 90%.

Note: When using the text parameter, only the “normal” font-weight is loaded by default. To use another weight you must explicitly specify it in the URL.

https://fonts.googleapis.com/css?family=Roboto:700&text=CompanyName

Wrapping Up

With an estimated 53% of the top 1 million websites using Google Fonts, implementing these optimizations can have a huge impact.

How many of the above have you tried? Let me know in the comments section.

Smashing Editorial (dm, yk, il)
How To Create A PDF From Your Web Application

Rachel Andrew

Many web applications have the requirement of giving the user the ability to download something in PDF format. In the case of applications (such as e-commerce stores), those PDFs have to be created using dynamic data, and be available immediately to the user.

In this article, I’ll explore ways in which we can generate a PDF directly from a web application on the fly. It isn’t a comprehensive list of tools, but instead I am aiming to demonstrate the different approaches. If you have a favorite tool or any experiences of your own to share, please add them to the comments below.

Starting With HTML And CSS

Our web application is likely to be already creating an HTML document using the information that will be added to our PDF. In the case of an invoice, the user might be able to view the information online, then click to download a PDF for their records. You might be creating packing slips; once again, the information is already held within the system. You want to format that in a nice way for download and printing. Therefore, a good place to start would be to consider if it is possible to use that HTML and CSS to generate a PDF version.

CSS does have a specification which deals with CSS for print, and this is the Paged Media module. I have an overview of this specification in my article “Designing For Print With CSS”, and CSS is used by many book publishers for all of their print output. Therefore, as CSS itself has specifications for printed materials, surely we should be able to use it?

The simplest way a user can generate a PDF is via their browser. By choosing to print to PDF rather than a printer, a PDF will be generated. Sadly, this PDF is usually not altogether satisfactory! To start with, it will have the headers and footers which are automatically added when you print something from a webpage. It will also be formatted according to your print stylesheet — assuming you have one.

The problem we run into here is the poor support of the fragmentation specification in browsers; this may mean that the content of your pages breaks in unusual ways. Support for fragmentation is patchy, as I discovered when I researched my article, “Breaking Boxes With CSS Fragmentation”. This means that you may be unable to prevent suboptimal breaking of content, with headers being left as the last item on the page, and so on.

In addition, we have no ability to control the content in the page margin boxes, e.g. adding a header of our choosing to each page or page numbering to show how many pages a complex invoice has. These things are part of the Paged Media spec, but have not been implemented in any browser.
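For reference, this is the kind of CSS the Paged Media and Fragmentation specifications describe. Treat it as a sketch of the spec rather than something browsers will fully honor today; dedicated print user agents (discussed later in this article) do support it:

@page {
  size: A4;
  margin: 20mm 15mm;

  /* A running footer with page numbers, defined in a margin box */
  @bottom-center {
    content: "Page " counter(page) " of " counter(pages);
  }
}

/* Ask the formatter not to leave a heading stranded at the end of a page */
h2 {
  break-after: avoid;
}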

My article “A Guide To The State Of Print Stylesheets In 2018” is still accurate in terms of the type of support that browsers have for printing directly from the browser, using a print stylesheet.

Printing Using Browser Rendering Engines

There are ways to print to PDF using browser rendering engines, without going through the print menu in the browser, and ending up with headers and footers as if you had printed the document. The most popular options in response to my tweet were wkhtmltopdf, and printing using headless Chrome and Puppeteer.

wkhtmltopdf

A solution that was mentioned a number of times on Twitter is a command-line tool called wkhtmltopdf. This tool takes an HTML file or multiple files, along with a stylesheet, and turns them into a PDF. It does this by using the WebKit rendering engine.

Essentially, therefore, this tool does the same thing as printing from the browser; however, you will not get the automatically added headers and footers. On the positive side, if you have a working print stylesheet for your content then it should also output nicely to PDF using this tool, and so a simple layout may well print very nicely.

Unfortunately, however, you will still run into the same problems as when printing directly from the web browser in terms of lack of support for the Paged Media specification and fragmentation properties, as you are still printing using a browser rendering engine. There are some flags that you can pass into wkhtmltopdf in order to add back some of the missing features that you would have by default using the Paged Media specification. However, this does require some extra work on top of writing good HTML and CSS.

Headless Chrome

Another interesting possibility is that of using Headless Chrome and Puppeteer to print to PDF.

However, once again, you are limited by browser support for Paged Media and fragmentation. There are some options that can be passed into the page.pdf() function. As with wkhtmltopdf, these add back some of the functionality that would be possible from CSS should there be browser support.
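As a minimal sketch of that approach (the URL and output path here are placeholders, and the page.pdf() options shown are only a small subset of what Puppeteer accepts):

// Print a page to PDF with headless Chrome via Puppeteer.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // Wait until network activity settles so web fonts and images are in place.
  await page.goto('https://example.com/invoices/123', { waitUntil: 'networkidle0' });
  await page.pdf({
    path: 'invoice.pdf',
    format: 'A4',
    printBackground: true,
    margin: { top: '20mm', right: '15mm', bottom: '20mm', left: '15mm' },
  });
  await browser.close();
})();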

It may well be that one of these solutions will do all that you need, however, if you find that you are fighting something of a battle, it is probable that you are hitting the limits of what is possible with current browser rendering engines, and will need to look for a better solution.

JavaScript Polyfills For Paged Media

There are a few attempts to reproduce the Paged Media specification in the browser using JavaScript — essentially creating a Paged Media polyfill. This could give you Paged Media support when using Puppeteer. Take a look at paged.js and Vivliostyle.

Using A Print User Agent

If you want to stay with an HTML and CSS solution then you need to look to a User Agent (UA) designed for printing from HTML and CSS, which has an API for generating the PDF from your files. These User Agents implement the Paged Media specification and have far better support for the CSS Fragmentation properties; this will give you greater control over the output. Leading choices include Prince, Antenna House, and PDFReactor.

A print UA will format documents using CSS — just as a web browser does. As with browser support for CSS, you need to check the documentation of these UAs to find out what they support. For example, Prince (which I am most familiar with) supports Flexbox but not CSS Grid Layout at the time of writing. When sending your pages to the tool that you are using, typically this would be with a specific stylesheet for print. As with a regular print stylesheet, the CSS you use on your site will not all be appropriate for the PDF version.

Creating a stylesheet for these tools is very similar to creating a regular print stylesheet, making the kind of decisions in terms of what to display or hide, perhaps using a different font size or colors. You would then be able to take advantage of the features in the Paged Media specification, adding footnotes, page numbers, and so on.

In terms of using these tools from your web application, you would need to install them on your server (having bought a license to do so, of course). The main problem with these tools is that they are expensive. That said, given the ease with which you can then produce printed documents with them, they may well pay for themselves in developer time saved.

It is possible to use Prince via an API, on a pay-per-document basis, via a service called DocRaptor. This would certainly be a good place for many applications to start: if it looked as though it would become more cost-effective to host your own, the development cost of switching would be minimal.

A free alternative, which is not quite as comprehensive as the above tools but may well achieve the results you need, is WeasyPrint. It doesn’t fully implement all of Paged Media, however, it implements more than a browser engine does. Definitely, one to try!

Other tools which claim to support conversion from HTML and CSS include PDFCrowd, which boldly claims to support HTML5, CSS3 and JavaScript. I couldn’t, however, find any detail on exactly what was supported, and if any of the Paged Media specification was. Also receiving a mention in the responses to my tweet was mPDF.

Moving Away From HTML And CSS

There are a number of other solutions, which move away from using HTML and CSS and require you to create specific output for the tool, including a couple of JavaScript contenders.

Recommendations

Other than the JavaScript-based approaches, which would require you to create a completely different representation of your content for print, the beauty of many of these solutions is that they are interchangeable. If your solution is based on calling a commandline tool, and passing that tool your HTML, CSS, and possibly some JavaScript, it is fairly straightforward to switch between tools.

In the course of writing this article, I also discovered a Python wrapper which can run a number of different tools. (Note that you need to already have the tools themselves installed, however, this could be a good way to test out the various tools on a sample document.)

For support of Paged Media and fragmentation, Prince, Antenna House, and PDFReactor are going to come out top. As commercial products, they also come with support. If you have a budget, complex pages to print to PDF, and your limitation is developer time, then you would most likely find these to be the quickest route to have your PDF creation working well.

However, in many cases, the free tools will work well for you. If your requirements are very straightforward then wkhtmltopdf, or a basic headless Chrome and Puppeteer solution may do the trick. It certainly seemed to work for many of the people who replied to my original tweet.

If you find yourself struggling to get the output you want, however, be aware that it may be a limitation of browser printing, and not anything you are doing wrong. In the case that you would like more Paged Media support, but are not in a position to go for a commercial product, perhaps take a look at WeasyPrint.

I hope this is a useful roundup of the tools available for creating PDFs from your web application. If nothing else, it demonstrates that there are a wide variety of choices, if your initial choice isn’t working well.

Please add your own experiences and suggestions in the comments; this is one of those things that a lot of us end up dealing with, and personal experience shared can be incredibly helpful.

Smashing Editorial (il)
Unleash The Power Of Path Animations With SVGator

Mikołaj Dobrucki

(This is a sponsored article.) Last year, a comprehensive introduction to the basic use of SVGator was published here on Smashing Magazine. If you’d like to learn about the fundamentals of SVGator, setting up your first projects, and creating your first animations, we strongly recommend you read it before continuing with this article.

Today, we’ll take a second look to explore some of the new features that have been added to it over the last few months, including the brand new Path Animator.

Note: Path Animator is a premium feature of SVGator and it’s not available to trial users. During a seven-day trial, you can see how Path Animator works in the sample project you’ll find in the app, but you won’t be able to apply it to your own SVGs unless you’ve opted in to a paid plan. SVGator is a subscription-based service. Currently, you can choose between a monthly plan ($18 USD/month) and a yearly plan ($144 USD total, $12 USD/month). For longer projects, we recommend you consider the yearly option.

Path Animator is just the first of the premium features that SVGator plans to release in the upcoming months. All the new features will be available to all paid users, no matter when they subscribed.

The Charm Of Path Animations

SVG path animations are by no means a new thing. In the last few years, this way of enriching vector graphics has been heavily used all across the web:

Animation by Codrops
Animation by Codrops (Original demo) (Large preview)

Path animations gained popularity mostly because of their relative simplicity: even though they might look impressive and complex at first glance, the underlying rule is in fact very simple.

How Do Path Animations Work?

You might think that SVG path animations require some extremely complicated drawing and transform functions. But it’s much simpler than it looks. To achieve effects similar to the example above, you don’t need to generate, draw, or animate the actual paths — you just animate their strokes. This brilliant concept allows you to create seemingly complex animations by animating a single SVG attribute: stroke-dashoffset.

Animating this one little property is responsible for the entire effect. Once you have a dashed line, you can play with the position of dashes and gaps. Combine it with the right settings and it will give you the desired effect of a self-drawing SVG path.
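As a rough illustration of the technique in plain CSS (the path length of 300 used here is an assumption; in a real project you would measure it with getTotalLength() or let a tool calculate it for you):

/* Self-drawing effect: one dash and one gap, each as long as the path. */
.path {
  stroke-dasharray: 300;   /* dash length and gap length */
  stroke-dashoffset: 300;  /* the gap covers the path, so nothing is visible */
  animation: draw 2s ease-in-out forwards;
}

@keyframes draw {
  to {
    stroke-dashoffset: 0;  /* the dash slides in and "draws" the path */
  }
}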

If this still sounds rather mysterious or you’d just like to learn about how path animations are made in more detail, you will find some useful resources on this topic at the end of the article.

No matter how simple path animations are compared with what they look like, don’t think coding them is always straightforward. As your files get more complicated, so does animating them. And this is where SVGator comes to the rescue.

Furthermore, sometimes you might prefer not to touch raw SVG files. Or maybe you’re not really fond of writing code altogether. Then SVGator has got you covered. With the new Path Animator, you can create even the most complex SVG path animations without touching a line of code. You can also combine coding with using SVGator.

To better understand the possibilities that Path Animator gives us, we will cover three separate examples presenting different use cases of path animations.

Example #1: Animated Text

In the first example, we will animate text, creating the impression of self-writing letters.

Final result of the first example
Final result of the first example (Large preview)

Often used for lettering, this cute effect can also be applied to other elements, such as drawings and illustrations. There’s a catch, though: the animated element must be styled with strokes rather than fills, which means, for our text, that we can’t use any existing font.

Outlining fonts, no matter how thin, always results in closed shapes rather than open paths. There are no regular fonts based on lines and strokes.

Outlined fonts are not suitable for self-drawing effects with Path Animator
Outlined fonts are not suitable for self-drawing effects with Path Animator. (Large preview)
Path animations require strokes - these paths would work great with Path Animator
Path animations require strokes. These paths would work great with Path Animator. (Large preview)

Therefore, if we want to animate text using path animations we need to draw it ourselves (or find some ready-made vector letters suitable for this purpose). When drawing your letters, feel free to use some existing font or typography as a reference — don’t violate any copyright, though! Just keep in mind it’s not possible to use fonts out of the box.

Preparing The File

Rather than starting with an existing typeface, we’ll begin with a simple hand-drawn sketch:

A rough sketch for the animation
A rough sketch for the animation (pardon my calligraphy skills!) (Large preview)

Now it’s time to redraw the sketch in a design tool. I used Figma, but you can use any app that supports SVG exports, such as Sketch, Adobe XD, or Adobe Illustrator.

Usually, I start with the Pen tool and roughly follow the sketch imported as a layer underneath:

Once done, I remove the sketch from the background and refine the paths until I’m happy with the result. No matter what tools or techniques you use, the most important thing is to prepare the drawing as lines and to use just strokes, no fills.

These paths can be successfully animated with Path Animator as they are created with strokes
These paths can be successfully animated with Path Animator as they are created with strokes. (Large preview)

In this example, we have four such paths. The first is the letter “H”; the second is the three middle letters “ell”; and “o” is the third. The fourth path is the line of the exclamation mark.

The dot of “!” is an exception — it’s the only layer we will style with a fill, rather than a stroke. It will be animated in a different way than the other layers, without using Path Animator.

Note that all the paths we’re going to animate with Path Animator are open, except for the “o,” which is an ellipse. Although animating closed paths (such as ellipses or polygons) with Path Animator is utterly fine and doable, it’s worth making it an open path as well, because this is the easiest way to control exactly where the animation starts. For this example, I added a tiny gap in the ellipse just by the end of the letter “l” as that’s where you’d usually start writing “o” in handwriting.

A small gap in the letter ‘o’ controls the starting point of the animation
A small gap in the letter ‘o’ controls the starting point of the animation. (Large preview)

Before importing our layers to SVGator, it’s best to clean up the layers’ structure and rename them in a descriptive way. This will help you quickly find your way around your file once working in SVGator.

If you’d like to learn more about preparing your shapes for path animations, I would recommend you check out this tutorial by SVGator.

It’s worth preparing your layers carefully and thinking ahead as much as possible. At the time of writing, in SVGator you can’t reimport the file to an already existing animation. While animating, if you discover an issue that requires some change to the original file, you will have to import it into SVGator again as a new project and start working on your animation from scratch.

Creating An Animation

Once you’re happy with the structure and naming of your layers, import them to SVGator. Then add the first path to the timeline and apply Path Animator to it by choosing it from the Animators list or by pressing Shift + T.

To achieve a self-drawing effect, our goal is to turn the path’s stroke into a dashed line. The length of a dash and a gap should be equal to the length of the entire path. This allows us to cover the entire path with a gap to make it disappear. Once hidden, change stroke-dashoffset to the point where the entire path is covered by a dash.

SVGator makes it very convenient for us by automatically providing the length of the path. All we need to do is to copy it with a click, and paste it into the two parameters that SVGator requires: Dashes and Offset. Pasting the value in Dashes turns the stroke into a dashed line. You can’t see it straightaway as the first dash of the line covers the whole path. Setting the Offset will change stroke-dashoffset so the gap then covers the path.

Once done, let’s create an animation by adding a new keyframe further along the timeline. Bring Offset back to zero and… ta-da! You’ve just created a self-drawing letter animation.

Creating a self-writing text animation in SVGator: Part 1

There’s one little issue with our animation, though. The letter is animated — but back-to-front. That is, the animation starts at the wrong end of the path. There are, at least, a few ways to fix it. First, rather than animating the offset from a positive value to zero, we can start with a negative offset and bring it to zero. Unfortunately, this may not work as expected in some browsers (for example, Safari does not accept negative stroke offsets). While we wait for this bug to be fixed, let’s choose a different approach.

Let’s change the Dashes value so the path starts with a gap followed by a dash (by default, dashed lines always start with a dash). Then reverse the values of the Offset animation. This will animate the line in the opposite direction.

Reversing the direction of self-writing animation

Now that we’re done with “H” we can move on to animating all the other paths in the same way. Eventually, we finish by animating the dot of the exclamation mark. As it’s a circle with a fill, not an outline, we won’t use Path Animator. Instead, we use the Scale Animator to make the dot pop in at the end of the animation.

Creating a self-writing text animation in SVGator: Part 2

Always remember to check the position of an element’s transform origin when playing with scale animations. In SVG, all elements have their transform origin in the top-left corner of the canvas by default. This often makes coding transform functions a very hard and tedious task. Fortunately, SVGator saves us from all this hassle by calculating all the transforms in relation to the object, rather than the canvas. By default, SVGator sets the transform origin of each element in its own top-left corner. You can change its position from the timeline, using a button next to the layer’s name.

Transform origin control in SVGator’s Timeline panel
Transform origin control in SVGator’s Timeline panel (Large preview)

Let’s add the final touch to the animation and adjust the timing functions. Timing functions define the speed over time of objects being animated, allowing us to manipulate their dynamics and make the animation look more natural.

In this case, we want to give the impression of the text being written by a single continuous movement of a hand. Therefore, I applied an Ease-in function to the first letter and an Ease-out function to the last letter, leaving the middle letters with a default Linear function. In SVGator, timing functions can be applied from the timeline, next to the Animator’s parameters:

Timing function control in SVGator’s Timeline panel
Timing function control in SVGator’s Timeline panel (Large preview)

After applying the same logic to the exclamation mark, our animation is done and ready to be exported!

Final result of the first example

Example #2: Animated Icon

Now let’s analyze a more UI-focused example. Here, we’re going to use SVGator to replicate a popular icon animation: turning a hamburger menu into a close button.

Final result of the second example
Final result of the second example (Large preview)

The goal of the animation is to smoothly transform the icon so the middle bar of the hamburger becomes a circle, and the surrounding bars cross each other creating a close icon.

Preparing The File

To better understand what we’re building and how to prepare a file for such an animation, it’s useful to start with a rough sketch representing the key states of the animation.

It’s helpful to plan your animation ahead and start with a sketch
It’s helpful to plan your animation ahead and start with a sketch. (Large preview)

Once we have a general idea of what our animation consists of, we can draw the shapes that will allow us to create it. Let’s start with the circle. As we’re going to use path animation, we need to create a path that covers the whole journey of the line, starting as a straight bar in the middle of the hamburger menu, and finishing as a circle around it.

Complete path of the middle bar animation turning into a circle
Complete path of the middle bar animation turning into a circle. (Large preview)

The other two bars of the menu icon have an easier task — we’re just going to rotate them and align to the centre of the circle. Once we combine all the shapes together we’re ready to export the file as SVG and import it to SVGator.

Our icon, ready to be animated in SVGator
Our icon, ready to be animated in SVGator. (Large preview)

Creating An Animation

Let’s start by adding the first shape to the timeline and applying Path Animator to it. For the initial state, we want only the horizontal line in the middle to be visible, while the rest of the path stays hidden. To achieve it, set the length of the dash to be equal to the length of the hamburger’s lines. This will make our straight middle line of the menu icon. To find the correct value, you can use the length of one of the other lines of the hamburger. You can copy it from the timeline or from the Properties panel in the right sidebar of the app.

Then set the length of the following gap to a value greater than the remaining length of the path so it becomes transparent.

Creating an icon animation in SVGator: Part 1

The initial state of our animation is now ready. What happens next is that we turn this line into a circle. To do that, two things need to happen simultaneously. First, we use Offset to move the line along the path. Second, we change the width of the dash to make the line longer and cover the entire circle.

Creating an icon animation in SVGator: Part 2

With the circle ready, let’s take care of the close icon. Just as before, we need to add two animations at the same time. First, we want the top line to lean down (45 degrees) and the bottom line to move up (-45 degrees) until they cross each other symmetrically. Second, we need to move the lines slightly to the right so they stay aligned with the circle.

As you might remember from the previous example, in SVGator, transform origins are located in the top-left corner by default. That’s very convenient to us as, in this case, that is exactly where we want them to be. All we need to do is to apply the correct rotation angles.

When it comes to aligning the lines with the circle, note that we don’t have to move them separately. Rather than adding Animators to both of the lines, we can add a group containing both of them to the timeline, and animate them together with a single Position Animator. That’s one of those moments when a nice, clean file structure pays off.

Creating an icon animation in SVGator: Part 3

Next thing to do is add a reverse animation that turns the close button back into a hamburger menu. To achieve that, we can basically follow the previous steps in reverse order. To speed things up a bit, copy and paste the existing keyframes on the timeline — that’s yet another improvement SVGator introduced in the past few months.

Reversing icon animation: back to the hamburger menu.

Once done, don’t forget to adjust the timing functions. Here, I’ve decided to go with an Ease-in-out effect on all elements. Our icon is ready for action.

Final result of the second example

Implementation

Even though implementing microinteractions goes far beyond the scope of this article, let me take a moment to briefly describe how such animation can be brought to life in a real project.

Illustrations and decorative animation are usually more straightforward. Quite often, you can use SVG files generated by SVGator out of the box. We can’t say that about our icon, though. We want the first part of the animation to be triggered when users click the button to open the menu drawer, and the second part of the animation to play once they click it for the second time to close the menu.

To do that, we need to slice our animation into a few separate pieces. We won’t discuss here the technical details of implementing such animation, as it depends very much on the environment and tech stack you’re working with; but let’s at least inspect the generated SVG file to extract the crucial animation states.

We’ll start by hiding the background and adjusting the size of the canvas to match the dimensions of the icon. In SVGator, we can do this at any time, and there are no restrictions to the size of our canvas. We can also edit the styles of the icon, such as color and width of the stroke, and test what your graphic will look like on a dark background using a switch in the top-right corner.

Preparing icon animation for development

When we’re ready, we can export the icon to SVG and open it in a text editor.

Elements you see in the body of the document are the components of your graphic. You should also notice that the first line of code is exceptionally long. Straight after the opening <svg> tag, there’s a <style> element with plenty of minified CSS inside. That’s where all the animation happens.

<svg viewBox="0 0 600 450" fill="none" xmlns="http://www.w3.org/2000/svg" id="el_vNqlglrYK"><style>@-webkit-keyframes kf_el_VqluQuq4la_an_DAlSHvvzUV… </style> <!-- a very long line of code that contains all the animations -->
<g id="el_SZQ_No_bd6">
<g id="el_BVAiy-eRZ3_an_biAmTPyDq" data-animator-group="true" data-animator-type="0"><g id="el_BVAiy-eRZ3">
<g id="el_Cnv4q4_Zb-_an_6WWQiIK_0" data-animator-group="true" data-animator-type="1"><path id="el_Cnv4q4_Zb-" d="M244 263H356" stroke-linecap="round"/></g>
<g id="el_aGYDsRE4sf_an_xRd24ELq3" data-animator-group="true" data-animator-type="1"><path id="el_aGYDsRE4sf" d="M244 187H356" stroke-linecap="round"/></g>
</g></g>
<path id="el_VqluQuq4la" d="M244 225H355.5C369 225 387.5 216.4 387.5 192C387.5 161.5 352 137 300 137C251.399 137 212 176.399 212 225C212 273.601 251.399 313 300 313C348.601 313 388 273.601 388 225C388 176.399 349.601 137 301 137" stroke-linecap="round"/>
</g>
</svg>

It’s really nice of SVGator to minify the code for us. However, we’ll have to undo it. Once the CSS code is written out in full (you can do this in your browser’s development tools, or in one of many online code formatters), you’ll see that it’s a long list of @keyframes followed by a list of id rules using the @keyframes in their animation properties.

The code may look unreadable (even when nicely formatted), but it’s actually very repetitive. Once you understand the underlying rule, following it is no longer that hard. First, we’ve got the @keyframes. Each animated element has its own @keyframes @-rule. They’re sorted in the same order as elements in SVGator. Therefore, in our case, the first @-rule applies to the middle bar of the hamburger icon, the second one to the top bar, and so on. The keyframes inside also match the order of keyframes created in SVGator:

@keyframes kf_el_VqluQuq4la_an_DAlSHvvzUV { /* middle bar animation */
  0%   { stroke-dasharray: 112, 2000; } /* initial state */
  25%  { stroke-dasharray: 112, 2000; }
  50%  { stroke-dasharray: 600, 2000; } /* turns into a circle */
  75%  { stroke-dasharray: 600, 2000; }
  100% { stroke-dasharray: 112, 2000; } /* back at the initial state */
}

All you need to do now is use these values from the keyframes to code your interaction. It’s still a lot of work up ahead, but thanks to SVGator the crucial part is already done.

What happens next is another story. However, if you’re curious to see an example of how this animation could work in practice, here’s a little CodePen for you:

See the Pen [Hamburger icon path animation](https://codepen.io/smashingmag/pen/ewNdJo) by Mikołaj.

The example is built with React and uses states to switch CSS classes and trigger transitions between the respective CSS values. Therefore, there’s no need for animation properties and @keyframes @-rules.

You can use the set of CSS custom properties listed at the top of the SCSS code to control the styling of the icon as well as the duration of the transitions.

Example #3: Animated Illustration

For the third and final example of this article, we’re going to create an animated illustration of an atom with orbiting particles.

Final result of the third example
Final result of the third example (Large preview)

Dashed Lines And Dotted Lines

In the two previous examples, we’ve taken advantage of dashed SVG paths. Dashed lines are cool, but did you know that SVG also supports dotted lines? A dotted line in SVG is nothing more than a dashed line with round caps and a dash length of zero.

If we can have a path with lots of dots, who said we can’t have a path with a single dot? Animate the stroke’s offset and you’ve got an animation of a circle following any path you want. In this example, the path will be an ellipse, and a circle will represent an orbiting particle.
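In plain CSS, the trick might look something like this (the path length of 600 is an assumption for an example ellipse; measure your own path or let SVGator handle it):

/* The orbit itself: a plain, thin ellipse outline. */
.orbit {
  fill: none;
  stroke: #cccccc;
}

/* The particle: a single round dot travelling along the same ellipse. */
.particle {
  fill: none;
  stroke: #e91e63;
  stroke-width: 12;
  stroke-linecap: round;     /* round caps turn a zero-length dash into a dot */
  stroke-dasharray: 0 600;   /* one dot, then a gap as long as the whole path */
  stroke-dashoffset: 600;
  animation: orbit 3s linear infinite;
}

@keyframes orbit {
  to {
    stroke-dashoffset: 0;    /* slide the dot around the full path once per cycle */
  }
}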

Preparing The File

As no SVG element can have two strokes at the same time, for each of the particles we need two ellipses. The first of them will be an orbit, the second will be for the particle. Multiply it by three, combine with another circle in the middle for the nucleus and here it is: a simple atom illustration, ready to be animated.

Our illustration, ready to be imported to SVGator.
Our illustration, ready to be imported to SVGator. (Large preview)

Note: At the time of writing, creating dotted lines in Figma is a hard task. Not only can you not set a dash’s length to zero, you also can’t create a gap between the dashes long enough to cover the entire path. And when it comes to export, all your settings are gone anyway. Nonetheless, if you’re working with Figma, don’t get discouraged. We’ll fix all of these issues easily in SVGator. And if you’re working in Sketch, Illustrator, or similar, you shouldn’t experience these problems at all.

Creating An Animation

Once you have imported the SVG file into SVGator, we’ll start by fixing the dotted lines. As mentioned above, to achieve a perfect circular dot, we need a dash length set to zero. We also set the length of the gap equal to the length of the path (copied from above). This will make our dot the only one visible.

Creating an illustration animation in SVGator: Part 1

With all three particles ready, we can add new keyframes and animate the offsets by one full length of the path. Finally, we play a bit with the Offset values to make the dots’ positions feel a bit more random.

Creating an illustration animation in SVGator: Part 2.

Remember that if you find your animation too fast or too slow you can always change its duration in the settings. Right now, SVGator supports animations up to 30 seconds long.

As a final touch, I’ve added a bit of a bounce to the whole graphic.

Creating an illustration animation in SVGator: Part 3

Now the animation is ready and can be used, perhaps as a loader graphic.

Final result of the third example

A Quick Word On Accessibility

As you can see, there’s hardly a limit to what can be achieved with SVG. And path animations are a very important part of its tool kit. But as a wise man once said, with great power comes great responsibility. Please refrain from overusing them. Animation can add life to your product and delight users, but too many animations can ruin the whole experience as well.

Also, consider allowing users to disable animations. People suffering from motion sickness and other related conditions will find such an option very helpful.

Conclusion

That’s it for today. I hope you enjoyed this journey through the possibilities of path animations. To try them out yourself, just visit SVGator’s website where you can also learn about its other features and pricing. If you have any remarks or questions, please don’t hesitate to add them in the comments. And stay tuned for the next updates about SVGator — there are lots of other amazing new features already on the way!

Master Geolocation for Free with IP Geolocation API

Understanding where your users are can mean the difference between making a sale, attracting a new member, getting a push on social media, building a solid user experience, or bombing completely.

Because where a person is affects their schedule, their culture, their currency, their whole outlook on the world.

Sure, you might not be able to translate your entire site into their preferred language (who knows, maybe you can), but if they don’t speak the same language as you, you can at least acknowledge it. And okay, perhaps you can only accept payments in US dollars, but wouldn’t it be nice to let them know approximately how much you’re charging in yen, dong, or euros? How about shipping costs: do you ship to Africa? And then there’s GDPR: wouldn’t it be great to be legally compliant in the EU without having to be compliant globally? You can even use an IP address’s longitude and latitude to identify the nearest real-world store, if your company has more than one location.

You can do all of this and much, much more with geolocation. Geolocation ties the web to the real world by taking an IP address, finding it in a huge database that identifies IP address locations, and then returning key details like country, currency, and continent.

The downside with geolocation is that it’s often expensive to implement: providers typically charge by the number of queries submitted to the database, putting it beyond the budget of many small businesses.

The IP Geolocation API is different. By cutting down on the number of features and the amount of data returned, it can offer the same accurate location data, in a simple-to-use package, for free.

The Geolocation API returns data as a JSON object, and includes tons of information you can use to tailor your site.

Getting Started with IP Geolocation API

You can connect to the IP Geolocation API in any number of coding languages, the most common being JavaScript. However, because we want to customize the contents of the page, we don’t want to rely on a front-end technology. Instead, we’re going to write the page before it gets to the user, with PHP.

The first thing you want to do is create a new file, name it “country.php”, and save it to your dev environment, or open up your FTP client and upload it to a PHP-enabled web space. Next, we can edit the file’s code.

Open the PHP file:

<?php

Then we need to create a function to grab the location of the IP address we supply:

function grabIpInfo($ip)
{
}

Don’t worry if you’re not used to writing functions in PHP; all will become clear. (In simple terms, when we call the function by name and pass an IP address to it, the code inside the curly brackets will run.)

Inside those brackets, we’re going to use PHP’s built-in cURL to get the data from the API:

$curl = curl_init();

This code initiates the cURL session. Next, we need to set the options on the cURL session. The first is very important: it’s the URL to load, which will be https://api.ipgeolocationapi.com/geolocate/ with the actual IP address we want to look up appended to the end (remember, we’re passing this into the function, so we just need to use the $ip parameter):

curl_setopt($curl, CURLOPT_URL, "https://api.ipgeolocationapi.com/geolocate/" . $ip);

The next option tells cURL to return the data as a string, rather than just outputting it to the page:

curl_setopt($curl, CURLOPT_RETURNTRANSFER, TRUE);

Next we execute the cURL session, saving the data to a new variable that we can return from the function:

$returnData = curl_exec($curl);

Now we close the cURL session:

curl_close($curl);

Lastly we return the data from the function:

return $returnData;
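One optional refinement, which isn’t part of the original walkthrough and won’t appear in the full code below: curl_exec() returns FALSE when the request fails, so you could guard against that before returning. A minimal sketch of those last few steps with the check added:

// Optional variant of the final steps: curl_exec() returns FALSE on
// failure, so we can bail out early instead of returning bad data.
$returnData = curl_exec($curl);

if ($returnData === FALSE) {
    curl_close($curl);
    return NULL; // no data for this request
}

curl_close($curl);

return $returnData;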

Remember, all of that code lives inside the curly brackets of the function. Now we can call the function as often as we like from outside it and access the data it returns:

$ipInfo = grabIpInfo("91.213.103.0");

Our $ipInfo variable now holds a ton of great data about that IP address. If you’d like to see it, just output the variable using PHP’s echo:

echo $ipInfo;

Finally, don’t forget to close the PHP:

?>

Now, when you run your file in a browser, you’ll see something like this:

{"continent":"Europe","address_format":"{{recipient}}\n{{street}}\n{{postalcode}} {{city}}\n{{country}}","alpha2":"DE","alpha3":"DEU","country_code":"49","international_prefix":"00","ioc":"GER","gec":"GM","name":"Germany","national_destination_code_lengths":[2,3,4,5],"national_number_lengths":[6,7,8,9,10,11],"national_prefix":"0","number":"276","region":"Europe","subregion":"Western Europe","world_region":"EMEA","un_locode":"DE","nationality":"German","eu_member":true,"eea_member":true,"vat_rates":{"standard":19,"reduced":[7],"super_reduced":null,"parking":null},"postal_code":true,"unofficial_names":["Germany","Deutschland","Allemagne","Alemania","ドイツ","Duitsland"],"languages_official":["de"],"languages_spoken":["de"],"geo":{"latitude":51.165691,"latitude_dec":"51.20246505737305","longitude":10.451526,"longitude_dec":"10.382203102111816","max_latitude":55.0815,"max_longitude":15.0418962,"min_latitude":47.2701115,"min_longitude":5.8663425,"bounds":{"northeast":{"lat":55.0815,"lng":15.0418962},"southwest":{"lat":47.2701115,"lng":5.8663425}}},"currency_code":"EUR","start_of_week":"monday"}

It’s incredible that we can grab all of that data so easily, but it’s not actually all that useful yet. We need to be able to pick out individual parts of that data.

Go back and delete the line of code that begins with “echo”, and replace it with the following:

$ipJsonInfo = json_decode($ipInfo);

That line converts the JSON string into an object we can access.

Next, echo the part of the data you want. In this case, the name property gives us the country name:

echo $ipJsonInfo->name;

Run your file again, and you’ll see the country name “Germany” in your browser.
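The name property is just one of the fields from the JSON output we saw earlier. The same decoded object exposes the rest of them, so, assuming a response shaped like that sample, you could also read values such as the currency code, the subregion, or the nested coordinates:

// A few more fields from the same decoded object (names taken from the
// sample JSON output above):
echo $ipJsonInfo->currency_code;  // e.g. "EUR"
echo $ipJsonInfo->subregion;      // e.g. "Western Europe"

// Nested JSON objects become nested PHP objects:
echo $ipJsonInfo->geo->latitude;  // e.g. 51.165691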

That’s cool, but there’s one more thing we need to do before this code is really useful, and that’s find the IP address dynamically, so that we’re looking up the country of whoever is accessing the page. IP Geolocation API will do that for you if you access it with JavaScript (you simply omit the IP address altogether). Unfortunately, we’re using PHP; fortunately, PHP gives us a really simple way to access the current user’s IP address: $_SERVER["REMOTE_ADDR"].

Replace the IP address in the function call with the dynamic version:

$ipInfo = grabIpInfo($_SERVER["REMOTE_ADDR"]);

When you now run the code, you’ll see your own country output to the browser. (If you don’t see anything, you’re probably testing locally and so don’t have a detectable public IP address; upload the file to your web space to see it working correctly.)
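If you’d still like to see real data while developing locally, one possible workaround (an extra convenience, not part of the original walkthrough) is to fall back to the test IP address we used earlier whenever REMOTE_ADDR is a private or reserved address:

// Use the visitor's address, but fall back to the earlier test IP when
// running locally, where REMOTE_ADDR is usually a private or loopback address.
$visitorIp = $_SERVER["REMOTE_ADDR"];

if (!filter_var($visitorIp, FILTER_VALIDATE_IP, FILTER_FLAG_NO_PRIV_RANGE | FILTER_FLAG_NO_RES_RANGE)) {
    $visitorIp = "91.213.103.0"; // the sample IP used earlier in the article
}

$ipInfo = grabIpInfo($visitorIp);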

Here’s the full code:

<?php

function grabIpInfo($ip)
{
    $curl = curl_init();

    curl_setopt($curl, CURLOPT_URL, "https://api.ipgeolocationapi.com/geolocate/" . $ip);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, TRUE);

    $returnData = curl_exec($curl);
    curl_close($curl);

    return $returnData;
}

$ipInfo = grabIpInfo($_SERVER["REMOTE_ADDR"]);
$ipJsonInfo = json_decode($ipInfo);

echo $ipJsonInfo->name;

?>
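To tie this back to the use cases mentioned at the start (currencies, GDPR, and so on), here is one hedged sketch of how the decoded data could tailor the page. The field names come from the sample JSON output above, the messaging is purely illustrative, and the snippet would sit before the closing ?> of the full example:

// Illustrative only: adapt page content using fields from the decoded response.
$country  = $ipJsonInfo->name;           // e.g. "Germany"
$currency = $ipJsonInfo->currency_code;  // e.g. "EUR"

echo "<p>We ship to " . $country . ". Prices are charged in US dollars; your local currency is " . $currency . ".</p>";

if (!empty($ipJsonInfo->eu_member)) {
    // The visitor appears to be in the EU, so EU-specific (GDPR) handling could go here.
    echo "<p>You can review our GDPR consent options below.</p>";
}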

Simple Geolocation

As you can see, accessing geolocation data with the IP Geolocation API is fast and simple, and being able to access this kind of data for free is an incredible resource for any developer.

IP Geolocation API’s data is drawn from the MaxMind database, one of the most trusted sources, which means you’re getting reliable, accurate data from a premium provider at zero cost.

IP Geolocation API is the perfect place to start experimenting with geolocation. Once you get started, it will become an essential part of every user experience you design.

 

[– Featured image via DepositPhotos –]

[– This is a sponsored post on behalf of IP Geolocation API –]


