Spam Detection APIs

I was trying to research the landscape of these the other day — and by research, I mean light Googling and asking on Twitter. Weirdly, very little comes to mind when thinking about spam detection APIs. I mean some kind of URL endpoint, paid or not, where you can hit it with a block of text and whatever metadata it wants and it’ll tell you if it’s spam or not. Seems like something an absolute buttload of the internet could use and something companies of any size could monetize or offer free to show off their smart computer machines.

Akismet is the big kid on the block.

You might think of Akismet as a WordPress thing, and it is. It’s an Automattic product and is perhaps primarily used as a WordPress plugin. I run that here on CSS-Tricks and it’s blocked 1,989,326 spam comments so far.

It also has a generic API. There are libraries for Dart, JavaScript, PHP, Python, Ruby, Go, etc., as well as plugins for other CMSs. So if you use a different CMS or have your own custom app, you can still use Akismet for spam detection.

After you get an API key, you can POST to a URL endpoint with all the data it needs and it’ll respond true if it’s spam or false if it’s not.
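Here’s roughly what that looks like from JavaScript. The endpoint and parameter names follow Akismet’s comment-check API, while YOUR_KEY and the sample comment data are placeholders:

// Sketch of an Akismet comment-check request (form-encoded POST).
const params = new URLSearchParams({
  blog: 'https://example.com',           // your site URL
  user_ip: '192.0.2.1',                  // the commenter's IP address
  comment_type: 'comment',
  comment_author: 'Totally Real Person',
  comment_content: 'Cheap watches, click here!',
});

fetch('https://YOUR_KEY.rest.akismet.com/1.1/comment-check', {
  method: 'POST',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: params,
})
  .then((res) => res.text())
  .then((answer) => console.log(answer)); // "true" = spam, "false" = not spam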

To get better results over time, you can also submit content telling it if it’s spam or ham (ham is the opposite of spam… good content).
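Those corrections go to sibling endpoints that take the same form-encoded payload. Again, a sketch with placeholder data:

// Report a spam comment that slipped through.
const missedSpam = new URLSearchParams({
  blog: 'https://example.com',
  user_ip: '192.0.2.1',
  comment_content: 'Cheap watches, click here!',
});

fetch('https://YOUR_KEY.rest.akismet.com/1.1/submit-spam', {
  method: 'POST',
  body: missedSpam,
});
// The twin endpoint /1.1/submit-ham reports good content that was flagged.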

Plino

Several folks mentioned Plino to me.

There is a lot to like here, like the fact that it’s free and returns a JSON response like you might be used to in development. There is the fancy buzzword “Machine Learning” being used here, too. It makes me think that with lots of people using this, it’ll get smarter and smarter as it goes. But there is no way to submit ham/spam, so I’m not sure that’s really the case.

There is other stuff that makes me nervous. It’s clearly on Heroku, which is kinda expensive at scale, and with no pricing model it seems like it could go away anytime. Sorta feels like a fun-but-abandoned side project. The last commit was two years ago, as I write.

OOPSpam

OOPSpam looks super similar to Plino, but has a pricing model, which is nice. They publish their latency, which is over two seconds. I haven’t compared that to the others so I have no idea if they are all that slow. Two seconds seems like a lot for an API call to me, but maybe it’s not that big of a deal since it’s an async submit?

CleanTalk

CleanTalk has a clear pricing structure and appears to have plenty of customers, which is a plus to me. The website looks a little janky though, which makes me worry a little.

(Sorry if that’s a little rude, but it’s just mental math to me. Good design is one of the least expensive investments a company can make to increase trust, so companies that overlook it make me wonder.)

It looks like they have a variety of anti-spam solutions though, which is interesting. For example, you can ask an API to see if an IP, email, or domain is on a blacklist, which is a pretty raw way of blocking bad stuff, but useful for stuff like protecting against spam registrations (rather than just checking blocks of text). They also have a firewall solution, which is interesting for folks trying to block spam before it even touches their servers.

Email options…

There are a couple out there that seem rather specific to testing emails. As in, testing your own emails before you send them to make sure they aren’t considered spam by email services. Here are a couple I came across while looking around:


Popular Design News of the Week: June 17, 2019 – June 23, 2019

Every week users submit a lot of interesting stuff on our sister site Webdesigner News, highlighting great content from around the web that can be of interest to web designers. 

The best way to keep track of all the great stories and news being posted is simply to check out the Webdesigner News site. However, in case you missed some, here’s a quick and useful compilation of the most popular designer news that we curated from the past week.

Note that this is only a very small selection of the links that were posted, so don’t miss out and subscribe to our newsletter and follow the site daily for all the news.

  • Internet has Meltdown Over Facebook’s Libra Logo
  • How to Section your HTML
  • Clarity 2.0 Released – Open Source Design System
  • Facebook Libra
  • One Designer’s Struggle to Redesign his Website
  • Building a Web-Based Motion Graphics Editor
  • Styling in Modern Web Apps
  • 7 Ways to Design Better Forms
  • Why I Don’t Use Web Components
  • Birds’ Eye Logos
  • Introducing: Whimsical Mind Maps
  • How to Build an Animated CSS Thermometer Chart
  • Death of IE; its Aftermath on Cross Browser Compatibility
  • Vivaldi 2.6
  • Gitting your First Dev Job
  • Building a Component Library Using Figma
  • Sustainable Web Manifesto
  • What are Design Systems and Why do You Need One?
  • Do Animations Deliver a Better User Experience?
  • A Complete Guide to Iconography
  • Adobe has an Ambitious Plan to Help the Public Spot Fake Images
  • Drop Caps & Design Systems
  • Thinking Beyond Screens
  • Optimizing Google Fonts Performance
  • Baseline Grid
Want more? No problem! Keep track of top design news from around the web with Webdesigner News.


Reduced Motion Picture Technique, Take Two

Did you see that neat technique for using the <picture> element with <source media=""> to serve an animated image (or not) based on a prefers-reduced-motion media query?

After we shared that in our newsletter, we got an interesting reply from Michael Gale:

What about folks who love their animated GIFs, but just didn’t want the UI to be zooming all over the place? Are they now forced to make a choice between content and UI?

I thought that was a pretty interesting question.

Also, whenever I see <img src="gif.gif"> these days, my brain is triggered into WELL WHAT ABOUT MP4?! territory, as I’ve been properly convinced that videos are better-in-all-ways on the web than GIFs. Turns out, some browsers support videos right within the <img> element and, believe it or not, you can write fallbacks for that with — drumroll, please — the <picture> element as well!

Let’s take a crack at combining all this stuff.

Adding an MP4 source

The easy one is adding an additional <source> with the video. That means we’ll need three source media files:

  1. A fallback non-animated graphic when prefers-reduced-motion is reduce.
  2. An animated GIF as the default.
  3. An MP4 video to replace the GIF in browsers that support it.

For example:

<picture>
  <source srcset="static.png" media="(prefers-reduced-motion: reduce)">
  <source srcset="animated.mp4" type="video/mp4">
  <img srcset="animated.gif" alt="animated image" />
</picture>

Under default conditions in Chrome, only the GIF is downloaded and shown:

Chrome DevTools showing only gif downloaded

Under default conditions in Safari, only the MP4 is downloaded and shown:

Safari DevTools showing only mp4 downloaded

If you’ve activated prefers-reduced-motion: reduce in either Chrome or Safari (on my Mac, I go to System Preferences → Accessibility → Display → Reduce Motion), both browsers only download the static PNG file.

Chrome DevTools showing png downloaded

I tested Firefox and it doesn’t seem to work, instead continuing to download the GIF version. Firefox seems to support prefers-reduced-motion, but perhaps it’s just not supported on <source> elements yet? I’m not sure what’s up there.

Wouldn’t it be kinda cool to provide a single animated source and have a tool generate the others from it? I bet you could wire that up with something like Cloudinary.

Adding a toggle to show the animated version

Like Michael Gale mentioned, it seems a pity that you’re entirely locked out from seeing the animated version just because you’ve flipped on a reduced motion toggle.

It should be easy enough to have a <button> that would use JavaScript to yank out the media query and force the browser to show the animated version.

I’m fairly sure there is no practical way to do this declaratively in HTML. We also can’t put this button within the <picture> tag. Even though <picture> isn’t a replaced element, the browser still gets confused and doesn’t like it. Instead, it doesn’t render it at all. No big deal, we can use a wrapper.
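For reference, here’s the shape of the markup I’m assuming. The wrapper and button class names match the CSS and JavaScript that follow; the button label itself is an arbitrary placeholder:

<div class="picture-wrap">
  <picture>
    <source srcset="static.png" media="(prefers-reduced-motion: reduce)">
    <source srcset="animated.mp4" type="video/mp4">
    <img srcset="animated.gif" alt="animated image" />
  </picture>
  <!-- Shown only when prefers-reduced-motion matches; see the CSS below -->
  <button class="animate-button">Play animation</button>
</div>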

We can position the button on top of the image somewhere. This is just an arbitrary choice — you could put it wherever you want, or even have the entire image be tappable as long as you think you could explain that to users. Remember to only show the button when the same media query matches:

.picture-wrap .animate-button {
  display: none;
}

@media (prefers-reduced-motion: reduce) {
  .picture-wrap .animate-button {
    display: block;
  }
}

When the button is clicked (or tapped), we need to remove the media query to start the animation by downloading an animated source.

let button = document.querySelector(".animate-button");

button.addEventListener("click", () => {
  const parent = button.closest(".picture-wrap");
  const picture = parent.querySelector("picture");
  picture.querySelector("source[media]").remove();
});

Here’s that in action:

See the Pen Prefers Reduction Motion Technique PLUS! by Chris Coyier (@chriscoyier) on CodePen.

Maybe this is a good component?

We could automatically include the button, the button styling, and the button functionality with a web component. Hey, why not?

See the Pen Prefers Reduction Motion Technique as Web Component by Chris Coyier (@chriscoyier) on CodePen.


Weekly Platform News: Mozilla’s AV1 Encoder, Samsung One UI CSS, DOM Matches Method

Šime posts regular content for web developers on webplatform.news.

In this week’s roundup: Vimeo and Mozilla partner up on a video encoding format, how to bind instructions to form fields with aria-describedby, the DOM has a matching method, and Samsung is working on its own CSS library.


Vimeo partners with Mozilla to use their rav1e encoder

Vittorio Giovara: AV1 is a royalty-free video codec designed by the Alliance for Open Media and the most anticipated successor of H.264. Vimeo is contributing to the development of Mozilla’s AV1 encoder.

In order for AV1 to succeed, there is a need for an encoder like x264, a free and open source encoder, written by the community, for the community, and available to everyone: rav1e. Vimeo believes in what Mozilla is doing.

Use aria-describedby to bind instructions to form fields

Raghavendra Satish Peri: If you provide additional instructions for a form field, use the aria-describedby attribute to bind the instruction to the field. Otherwise, assistive technology users who use the Tab key might miss this information.

<label for="dob">Date of Birth</label>
<input type="text" aria-describedby="dob1" id="dob" />
<span id="dob1">Use DD/MM/YY</span>

Samsung Internet announces One UI CSS

Diego González: Samsung is experimentally developing a CSS library based on its new One UI design language. The library is called One UI CSS and includes styles for common form controls such as buttons, menus, and sliders, as well as other assets (web fonts, SVG icons, polyfills).

Some of the controls present in One UI CSS.

DOM elements have a matches method

Sam Thorogood: You can use the matches method to test if a DOM element has a specific CSS class, attribute or ID value. This method accepts a CSS selector and returns true if the element matches the given selector.

el.classList.contains('foo') /* becomes */ el.matches('.foo');
el.hasAttribute('hello') /* becomes */ el.matches('[hello]');
el.id === 'bar' /* becomes */ el.matches('#bar');


Managing WordPress Metadata in Gutenberg Using a Sidebar Plugin

WordPress released its anticipated overhaul of the post editor, nicknamed Gutenberg, which is also referred to as the block editor. It transforms a WordPress post into a collection of blocks that you can add, edit, remove and re-order in the layout. Before the official release, Gutenberg was available as a plugin and, during that time, I was interested in learning how to create custom blocks for the editor. I learned so much about Gutenberg that I decided to put together a course that discusses almost everything you need to know to develop blocks for it.

In this article, we will discuss metaboxes and metafields in WordPress. Specifically, we’ll cover how to replace the old PHP metaboxes in Gutenberg and extend Gutenberg’s sidebar to add a React component that will be used to manipulate the metadata using the global JavaScript Redux-like stores. Note that metadata in Gutenberg can also be manipulated using blocks. Both ways are discussed in my course; however, in this article I am going to focus on managing metadata in the sidebar, since I believe this method will be used more often.

This article assumes some familiarity with ReactJS and Redux. Gutenberg relies heavily on these technologies to render UI and manage state. You can also check out the CSS-Tricks guide to learning Gutenberg for an intro to some of the concepts we’ll cover here.

The block editor interface

Gutenberg is a React application

At its core, Gutenberg is a ReactJS application. Everything you see in the editor is rendered using a React component. The post title, the content area that contains the blocks, the toolbar at the top and the right sidebar are all React components. Data or application states in this React application are stored in centralized JavaScript objects, or “stores.” These stores are managed by WordPress’ data module. This module shares a lot of core principles with Redux. So, concepts like stores, reducers, actions, action creators, etc., also exist in this module. I will sometimes refer to these stores as “Redux-like” stores.

These stores do not only hold data about the current post, like the post content (the blocks), the post title, and the selected categories; they also hold global information about a WordPress website, like all the categories, tags, posts, attachments and so on. In addition to that, UI state information, like “is the sidebar opened or closed?”, is stored in these global stores as well. One of the jobs of the “data module” is to retrieve data from these stores and also to change data in them. Since these stores are global and can be used by multiple React components, changing data in any store will be reflected in every Gutenberg UI part (including blocks) that uses this piece of data.

Once a post is saved, the WordPress REST API will be used to update the post using the data stored in these global stores. So the post title, the content, categories etc., that are stored in these global stores will be sent as payload in the WP REST API endpoint that updates the post. And thus if we are able to manipulate data in these stores, once the user clicks save, the data that we manipulated will be stored in the database by the API without us having to do anything.

One of the things that is not managed by these global stores in Gutenberg is metadata. If you have some metafields that you used to manage using a metabox in the pre-Gutenberg “classic” editor, those will not be stored and manipulated using the global Redux-like stores by default. However, we can opt-in to manage metadata using JavaScript and the Redux-like stores. Although those old PHP metaboxes will still appear in Gutenberg, WordPress recommends porting these PHP metaboxes to another approach that uses the global stores and React components. And this will ensure a more unified and consistent experience. You can read more about problems that could occur by using PHP metaboxes in Gutenberg.

So before we start, let’s take a look at the Redux-like stores in Gutenberg and how to use them.

Retrieving and changing data in Gutenberg’s Redux-like stores

We now know that the Gutenberg page is managed using these Redux-like stores. We have some default “core” stores that are defined by WordPress. Additionally, we can also define our own stores if we have some data that we would like to share between multiple blocks or even between blocks and other UI elements in the Gutenberg page, like the sidebar. Creating your own stores is also discussed in my course and you can read about it in the official docs. However, in this article we will focus on how to use the existing stores. Using the existing stores lets us manipulate metadata; therefore we will not need to create any custom stores.

In order to access these stores, make sure you have the latest WordPress version with Gutenberg active and edit any post or page. Then, open your browser console and type the following statement:

wp.data.select('core/editor').getBlocks()

You should get something like this:

Let’s break this down. First, we access the wp.data module which (as we discussed) is responsible for managing the Redux-like stores. This module will be available inside the global wp variable if you have Gutenberg in your WordPress installation. Then, inside this module, we call a function called select. This function receives a store name as an argument and returns all the selectors for this store. A selector is a term used by the data module and it simply means a function that gets some data from the store. So, in our example, we accessed the core/editor store, and this will return a bunch of functions that can be used to get data from this store. One of these functions is getBlocks() which we called above. This function will return an array of objects where each object represents a block in your current post. So depending on how many blocks you have in your post, this array will change.

As we’ve seen, we accessed a store called core/editor. This store contains information about the current post that you are editing. We’ve also seen how to get the blocks in the current post but we can also get a lot of other stuff. We can get the title of the current post, the current post ID, the current post’s post type and pretty much everything else we might need.

But in the example above, we were only able to retrieve data. What if we want to change data? Let’s take a look at another selector in the ‘core/editor’ store. Let’s run this selector in our browser console:

wp.data.select('core/editor').getEditedPostAttribute('title')

This should return the title of the post currently being edited:

Great! Now what if we want to change the title using the data module? Instead of calling select(), we can call dispatch() which will also receive a store name and return some actions that you can dispatch. If you are familiar with Redux, terms like “actions” and “dispatch” will sound familiar to you. If this sounds new to you, all you need to know is that dispatching a certain action simply means changing some data in a store. In our case, we want to change the post title in the store, so we can call this function:

wp.data.dispatch('core/editor').editPost({title: 'My new title'})

Now take a look at the post title in the editor — it will be changed accordingly!

That’s how we can manipulate any piece of data in the Gutenberg interface: we retrieve data using selectors and change it using actions. Any change will be reflected in any part of the UI that uses this data.

There are, of course, other stores in Gutenberg that you can check out on this page. So, let’s take a quick look at a couple more stores before we move on.

The stores that you will use the most are the core/editor which we just looked at, and the core store. Unlike core/editor, the core store contains information, not only about the currently edited post, but also about the whole WordPress website in general. So, for instance, we can get all the authors on the website using:

wp.data.select('core').getAuthors()

We can also get some posts from the website like so:

wp.data.select('core').getEntityRecords('postType','post',{per_page: 5})

Make sure to run this twice if the first result was null. Some selectors like this one will send an API call first to get your posts. That means the returned value will initially be null until the API request is fulfilled:

Let’s look at one more store: edit-post. This store is responsible for the UI information in the actual editor. For example, we can have selectors that check if the sidebar is currently open:

wp.data.select('core/edit-post').isEditorSidebarOpened()

This will return true if the sidebar is opened. But try closing the sidebar, run this function again, and it should return false.

We can also open and close the sidebar by dispatching actions in this store. With the sidebar open, run this action in the browser console and the sidebar should close:

wp.data.dispatch('core/edit-post').closeGeneralSidebar()

You will likely never need to use this store, but it’s good to know that this is what Gutenberg does when you click on the sidebar icon to close it.

There are some more stores that you might need to take a look at. The core/notices store, for instance, could be useful. This can help you display error, warning and success messages in the Gutenberg page. You can also check all the other stores here.
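For example, popping a success message into the editor from the console looks like this (the status and message strings here are just sample values):

// Show a dismissible notice in the editor UI.
// 'success' can be swapped for 'error', 'warning' or 'info'.
wp.data.dispatch('core/notices').createNotice('success', 'Settings saved!');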

Try to play around with these stores in your browser until you feel comfortable using them. After that, we can see how to use them in real code outside the browser.

Let’s setup a WordPress plugin to add a Gutenberg sidebar

Now that we know how to use the Redux-like stores in Gutenberg, the next step is to add a React sidebar component in the editor. This React component will be connected to the core/editor store and it will have some input that, when changed, will dispatch some action that will manipulate metadata — like the way we manipulated the post title earlier. But to do that, we need to create a WordPress plugin that holds our code.

You can follow along by cloning or downloading the repository for this example on GitHub.

Let’s create a new folder inside wp-content/plugins directory of the WordPress installation. I am going to call it gutenberg-sidebar. Inside this folder, let’s create the entry point for our plugin. The entry point is the PHP file that will be run when activating your plugin. It can be called index.php or plugin.php. We’re going to use plugin.php for this example and put some information about the plugin at the top as well as add some code that avoids direct access:

<?php
/**
 * Plugin Name: gutenberg-sidebar
 * Plugin URI: https://alialaa.com/
 * Description: Sidebar for the block editor.
 * Author: Ali Alaa
 * Author URI: https://alialaa.com/
 */
if( ! defined( 'ABSPATH' ) ) {
    exit;
}

You should find your plugin on the Plugins screen in the WordPress admin. Click on “Activate” in order for the code to run.

As you might imagine, we will write a lot of JavaScript and React from this point forward. In order to code React components easily, we will need to use JSX. JSX is not valid JavaScript that can run in your browser; it needs to be converted into plain JavaScript. We might also need to use ESNext features and import statements for importing and exporting modules.

And these features will not work on all browsers, so it’s better to transform our code into old ES5 JavaScript. Thankfully, there are a lot of tools that can help us achieve that. A famous one is webpack. webpack, however, is a big topic in itself and it won’t fit the scope of this article. Therefore, we are going to use another tool that WordPress provides which is @wordpress/scripts. By installing this package, we will get a recommended webpack configuration without having to do anything in webpack ourselves. Personally, I recommend that you learn webpack and try to do the configuration yourself. This will help you understand what’s going on and give you more control. You can find a lot of resources online and it’s also discussed in my course. But for now, let’s install the WordPress webpack configuration tool.

Change to your plugin folder in Terminal:

cd path/to/your/plugin/folder

Next, we need to initialize npm in that folder in order to install @wordpress/scripts. This can be done by running this command:

npm init

This command will ask you some questions like the package name, version, license, etc. You can keep hitting Enter and leave the default values. You should have a package.json file in your folder and we can start installing npm packages. Let’s install @wordpress/scripts by running the following command:

npm install @wordpress/scripts --save-dev

This package will expose a CLI called wp-scripts which you can use in your npm scripts. There are different commands that you can run. We will focus on the build and start commands for now. The build script will transform your files so that they are minified and ready for production. Your source code’s entry point is configured in src/index.js and the transformed output will be at build/index.js. Similarly, the start script will transform your code in src/index.js to build/index.js, however, this time, the code will not be minified to save time and memory — the command will also watch for changes in your files and re-build your files every time something is changed. The start command is suitable to be used for development while the build command is for production. To use these commands, we will replace the scripts key in the package.json file which will look something like this if you used the default options when we initialized npm.

Change this:

"scripts": { "test": "echo "Error: no test specified" && exit 1"
},

…to this:

"scripts": { "start": "wp-scripts start", "build": "wp-scripts build"
},

Now we can run npm start and npm run build to start development or build files, respectively.

Let’s create a new folder in the plugin’s root called src and add an index.js file in it. We can see that things are working by sprinkling in a little JavaScript. We’ll try an alert.
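Any throwaway statement works here; the message below is arbitrary:

// src/index.js, temporary code just to verify the build pipeline runs
alert('Hello from gutenberg-sidebar!');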

Now run npm start in Terminal. You should find the build folder created with the compiled index.js and also sourcemap files. In addition to that, you will notice that the build/index.js file is not minified and webpack will be watching for changes. Try changing the src/index.js file and save again. The build/index.js file will be re-generated:

If you stop the watch (Ctrl + C ) in Terminal and run npm run build, the build/index.js file should now be minified.


Now that we have our JavaScript bundle, we need to enqueue this file in the Gutenberg editor. To do that, we can use the enqueue_block_editor_assets hook, which will ensure that the files are enqueued only in the Gutenberg page and not in other wp-admin pages where they aren’t needed.

We can enqueue our file like so in plugin.php:

// Note that it’s a best practice to prefix function names (e.g. myprefix)
function myprefix_enqueue_assets() {
    wp_enqueue_script(
        'myprefix-gutenberg-sidebar',
        plugins_url( 'build/index.js', __FILE__ )
    );
}
add_action( 'enqueue_block_editor_assets', 'myprefix_enqueue_assets' );

Visit the Gutenberg page. If all is well, you should get an alert, thanks to what we added to src/index.js earlier.

Fantastic! We’re ready to write some JavaScript code, so let’s get started!

Importing WordPress JavaScript packages

In order to add some content to the existing Gutenberg sidebar or create a new blank sidebar, we need to register a Gutenberg JavaScript plugin — and in order to do that, we need to use some functions and components from packages provided by WordPress: wp-plugins, wp-edit-post and wp-i18n. These packages will be available in the wp global variable in the browser as wp.plugins, wp.editPost and wp.i18n.

We can import the functions that we need into src/index.js. Specifically, those functions are: registerPlugin and PluginSidebar.

const { registerPlugin } = wp.plugins;
const { PluginSidebar } = wp.editPost;
const { __ } = wp.i18n;

It’s worth noting that we need to make sure that we have these files as dependencies when we enqueue our JavaScript file in order to make sure that our index.js file will be loaded after the wp-plugins, wp-edit-post and wp-i18n packages. Let’s add those to plugin.php:

function myprefix_enqueue_assets() {
    wp_enqueue_script(
        'myprefix-gutenberg-sidebar',
        plugins_url( 'build/index.js', __FILE__ ),
        array( 'wp-plugins', 'wp-edit-post', 'wp-i18n', 'wp-element' )
    );
}
add_action( 'enqueue_block_editor_assets', 'myprefix_enqueue_assets' );

Notice that I added wp-element in there as a dependency. I did that because we will write some React components using JSX. Typically, we’d import the entire React library when making React components. However, wp-element is an abstraction layer atop React so we never have to install or import React directly. Instead, we use wp-element as a global variable.

These packages are also available as npm packages. Instead of importing functions from the global wp variable (which is only available in the browser, and which your code editor knows nothing about), we can simply install these packages using npm and import them in our file. These WordPress packages are usually prefixed with @wordpress.

Let’s install the two packages that we need:

npm install @wordpress/edit-post @wordpress/plugins @wordpress/i18n --save

Now we can import our packages in index.js:

import { registerPlugin } from "@wordpress/plugins";
import { PluginSidebar } from "@wordpress/edit-post";
import { __ } from "@wordpress/i18n";

The advantage of importing the packages this way is that your text editor knows what @wordpress/edit-post and @wordpress/plugins are and it can autocomplete functions and components for you — unlike importing from wp.plugins and wp.editPost which will only be available in the browser while the text editor has no clue what wp is.

Your text editor can autocomplete component names for you.

You might also think that importing these packages in your bundle will increase your bundle size, but there’s no worries there. The webpack config file that comes with @wordpress/scripts is instructed to skip bundling these @wordpress packages and depend of the wp global variable instead. As a result, the final bundle will not actually contain the various packages, but reference them via the wp variable.
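Conceptually, the bundled configuration maps each package name to its browser global, something like this illustrative webpack externals snippet (not the actual file that ships with @wordpress/scripts):

// Bare imports compile to references to the wp global instead of bundled code.
module.exports = {
  externals: {
    '@wordpress/plugins': 'wp.plugins',
    '@wordpress/edit-post': 'wp.editPost',
    '@wordpress/i18n': 'wp.i18n',
    '@wordpress/data': 'wp.data',
    '@wordpress/components': 'wp.components',
  },
};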

Great! So I am going to stick to importing packages using npm in this article, but you are totally welcome to import from the global wp variable if you prefer. Let’s now use the functions that we imported!

Registering a Gutenberg Plugin

In order to add a new custom sidebar in Gutenberg, we first need to register a plugin — and that’s what the registerPlugin function that we imported will do. As a first argument, registerPlugin will receive a unique slug for this plugin. We can have an array of options as a second argument. Among these options, we can have an icon name (from the dashicons library) and a render function. This render function can return some components from the wp-edit-post package. In our case, we imported the PluginSidebar component from wp-edit-post and created a sidebar in the Gutenberg editor by returning this component in the render function. I also added PluginSidebar inside a React fragment, since we can add other components in the render function as well. Also, the __ function imported from wp-i18n will be used so we can translate any string that we output:

registerPlugin( 'myprefix-sidebar', {
  icon: 'smiley',
  render: () => {
    return (
      <>
        <PluginSidebar
          title={__('Meta Options', 'textdomain')}
        >
          Some Content
        </PluginSidebar>
      </>
    )
  }
})

You should now have a new icon beside the cog icon in the Gutenberg editor screen. This smiley icon will toggle our new sidebar which will have whatever content we have inside the PluginSidebar component:

If you were to click on that star icon beside the sidebar title, the sidebar smiley icon will be removed from the top toolbar. Therefore, we need to add another way to access our sidebar in case the user un-stars it from the top toolbar. To do that, we can import a new component from wp-edit-post called PluginSidebarMoreMenuItem. So, let’s modify our import statement:

import { PluginSidebar, PluginSidebarMoreMenuItem } from "@wordpress/edit-post";

The PluginSidebarMoreMenuItem will allow us to add an item in the Gutenberg menu that you can toggle using the three dots icon at the top-right of the page. We want to modify our plugin to include this component. We need to give PluginSidebar a name prop and give PluginSidebarMoreMenuItem a target prop with the same value:

registerPlugin( 'myprefix-sidebar', {
  icon: 'smiley',
  render: () => {
    return (
      <>
        <PluginSidebarMoreMenuItem
          target="myprefix-sidebar"
        >
          {__('Meta Options', 'textdomain')}
        </PluginSidebarMoreMenuItem>
        <PluginSidebar
          name="myprefix-sidebar"
          title={__('Meta Options', 'textdomain')}
        >
          Some Content
        </PluginSidebar>
      </>
    )
  }
})

In the menu now, we will have a “Meta Options” item with our smiley icon. This new item should toggle our custom sidebar since they are linked using the name and the target props:

Great! Now we have a new space in our Gutenberg page. We can replace the “some content” text in PluginSidebar and add some React components of our own!

Also, let’s make sure to check the edit-post package documentation. This package contains a lot of other components that you can add in your plugin. Some of these components allow you to extend the existing default sidebar and add your own components to it, and others let us add items to the Gutenberg top-right menu and to the blocks menu.

Handling metadata in the classic editor

Let’s take a quick look at how we used to manage metadata in the classic editor using metaboxes. First, install and activate the classic editor plugin in order to switch back to the classic editor. Then, add some code that will add a metabox in the editor page. This metabox will manage a custom field that we’ll call _myprefix_text_metafield. This metafield will just be a text field that accepts HTML markup. You can add this code in plugin.php or put it in a separate file and include it in plugin.php:

<?php
function myprefix_add_meta_box() {
    add_meta_box(
        'myprefix_post_options_metabox',
        'Post Options',
        'myprefix_post_options_metabox_html',
        'post',
        'normal',
        'default'
    );
}
add_action( 'add_meta_boxes', 'myprefix_add_meta_box' );

function myprefix_post_options_metabox_html( $post ) {
    $field_value = get_post_meta( $post->ID, '_myprefix_text_metafield', true );
    wp_nonce_field( 'myprefix_update_post_metabox', 'myprefix_update_post_nonce' );
    ?>
    <p>
        <label for="myprefix_text_metafield"><?php esc_html_e( 'Text Custom Field', 'textdomain' ); ?></label>
        <br />
        <input class="widefat" type="text" name="myprefix_text_metafield" id="myprefix_text_metafield" value="<?php echo esc_attr( $field_value ); ?>" />
    </p>
    <?php
}

function myprefix_save_post_metabox( $post_id, $post ) {
    $edit_cap = get_post_type_object( $post->post_type )->cap->edit_post;
    if ( ! current_user_can( $edit_cap, $post_id ) ) {
        return;
    }
    if ( ! isset( $_POST['myprefix_update_post_nonce'] ) || ! wp_verify_nonce( $_POST['myprefix_update_post_nonce'], 'myprefix_update_post_metabox' ) ) {
        return;
    }
    if ( array_key_exists( 'myprefix_text_metafield', $_POST ) ) {
        update_post_meta(
            $post_id,
            '_myprefix_text_metafield',
            sanitize_text_field( $_POST['myprefix_text_metafield'] )
        );
    }
}
add_action( 'save_post', 'myprefix_save_post_metabox', 10, 2 );

I am not going to go into details in this code since this is out of the scope of the article, but what it’s essentially doing is:

  • Making a metabox using the add_meta_box function
  • Rendering an HTML input using the myprefix_post_options_metabox_html function
  • Controlling the metafield, called _myprefix_text_metafield
  • Using the save_post action hook to get the HTML input value and update the field using update_post_meta.

If you have the classic editor plugin installed, then you should see the metafield in the post editor:

Note that the field is prefixed with an underscore (_myprefix_text_metafield) which prevents it from being edited using the custom fields metabox that comes standard in WordPress. We add this underscore because we intend to manage the field ourselves and because it allows us to hide it from the standard Custom Fields section of the editor.

Now that we have a way to manage the field in the classic editor, let’s go ahead and deactivate the classic editor plugin and switch back to Gutenberg. The metabox will still appear in Gutenberg. However, as we discussed earlier, WordPress recommends porting this PHP-based metabox using a JavaScript approach.

That’s what we will do in the rest of the article. Now that we know how to use the Redux-like stores to manipulate data and how to add some React content in the sidebar, we can finally create a React component that will manipulate our metafield and add it in the sidebar of the Gutenberg editor.

We don’t want to completely get rid of the PHP-based field because it’s still helpful in the event that we need to use the classic editor for some reason. So, we’re going to hide the field when Gutenberg is active and show it when the classic editor is active. We can do that by updating the myprefix_add_meta_box function to use the __back_compat_meta_box option:

function myprefix_add_meta_box() {
    add_meta_box(
        'myprefix_post_options_metabox',
        'Post Options',
        'myprefix_post_options_metabox_html',
        'post',
        'normal',
        'default',
        array( '__back_compat_meta_box' => true )
    );
}

Let’s move on to creating the React component that manages the metadata.

Getting and setting metadata using JavaScript

We have seen how to get the post title and how to change it using the wp-data module. Let’s take a look at how to do the same for custom fields. To get metafields, we can call the same selector, getEditedPostAttribute. But this time we will pass it a value of meta instead of title.

Once that’s done, test it out in the browser console:

wp.data.select('core/editor').getEditedPostAttribute('meta')

As you will see, this function will return an empty array, although we are sure that we have a custom field called _myprefix_text_metafield that we are managing using the classic editor. To make custom fields manageable using the data module, we first have to register the field in plugin.php:

function myprefix_register_meta() {
    register_meta('post', '_myprefix_text_metafield', array(
        'show_in_rest' => true,
        'type' => 'string',
        'single' => true,
    ));
}
add_action('init', 'myprefix_register_meta');

Make sure to set the show_in_rest option to true. WordPress fetches these fields using the WP REST API, which means we need to enable show_in_rest to expose them.

Run the console test again and we will have an object with all of our custom fields returned.

Amazing! We are able to get our custom field value, so now let’s take a look at how can we change the value in the store. We can dispatch the editPost action in the core/editor store and pass it an object with a meta key, which will be another object with the fields that we need to update:

wp.data.dispatch('core/editor').editPost({meta: {_myprefix_text_metafield: 'new value'}})

Now try running the getEditedPostAttribute selector again and the value should be updated to new value.

If you try saving a post after updating the field using Redux, you will get an error. And if you take a look at the Network tab in DevTools, you will find that the error is returned from the wp-json/wp/v2/posts/{id} REST endpoint that says that we are not allowed to update _myprefix_text_metafield.

This is because WordPress treats any field that is prefixed with an underscore as a private value that cannot be updated using the REST API. We can, however, specify an auth_callback option that allows the field to be updated through the REST API whenever it returns true, in this case whenever the current user is capable of editing posts. We can also add the sanitize_text_field function as a sanitize_callback to sanitize the value before saving to the database:

function myprefix_register_meta() {
    register_meta('post', '_myprefix_text_metafield', array(
        'show_in_rest' => true,
        'type' => 'string',
        'single' => true,
        'sanitize_callback' => 'sanitize_text_field',
        'auth_callback' => function() {
            return current_user_can('edit_posts');
        }
    ));
}
add_action('init', 'myprefix_register_meta');

Now try the following:

  • Open a new post in WordPress.
  • Run this in the DevTools console to see the current value of the field:
wp.data.select('core/editor').getEditedPostAttribute('meta')
  • Run this in DevTools to update the value:
wp.data.dispatch('core/editor').editPost({meta: {_myprefix_text_metafield: 'new value'}})
  • Save the post and make sure no errors are returned this time.
  • Refresh the page and run this in the DevTools console:
wp.data.select('core/editor').getEditedPostAttribute('meta')

Does the new value show up in the console? If so, great! Now we know how to get and set the meta field value using Redux, and we are ready to create a React component in the sidebar to do that.

Creating a React component to manage the custom fields

What we need to do next is create a React component that contains a text field controlled by the value of the metafield in the Redux store. It should have the value of the meta field…and hey, we already know how to get that! We can create the component in a separate file and then import it into index.js. However, I am simply going to create it directly in index.js, since we’re dealing with a very small example.

Again, we’re only working with a single text field, so let’s import a component provided by a WordPress package called @wordpress/components. This package contains a lot of reusable components that are Gutenberg-ready without us having to write them from scratch. It’s a good idea to use components from this package in order to be consistent with the rest of the Gutenberg UI.

First, let’s install this package:

npm install --save @wordpress/components

We’ll import TextControl and PanelBody at the top of index.js to fetch the two components we need from the package:

import { PanelBody, TextControl } from "@wordpress/components";

Now let’s create our component. I am going to create a React functional component and call it PluginMetaFields, but you can use a class component if you’d prefer that.

let PluginMetaFields = (props) => {
  return (
    <>
      <PanelBody
        title={__("Meta Fields Panel", "textdomain")}
        icon="admin-post"
        initialOpen={ true }
      >
        <TextControl
          value={wp.data.select('core/editor').getEditedPostAttribute('meta')['_myprefix_text_metafield']}
          label={__("Text Meta", "textdomain")}
        />
      </PanelBody>
    </>
  )
}

PanelBody takes title, icon and initialOpen props. Title and icon are pretty self-explanatory. initialOpen puts the panel in an open/expanded state by default. Inside the panel, we have TextControl, which receives a label and a value for the input. As you can see in the snippet above, we get the value from the global store by accessing the _myprefix_text_metafield field from the object returned by wp.data.select('core/editor').getEditedPostAttribute('meta').

Notice that we are now depending on @wordpress/components and use wp.data. We must add these packages as dependencies when we enqueue our file in plugin.php:

function myprefix_enqueue_assets() {
    wp_enqueue_script(
        'myprefix-gutenberg-sidebar',
        plugins_url( 'build/index.js', __FILE__ ),
        array( 'wp-plugins', 'wp-edit-post', 'wp-element', 'wp-components', 'wp-data' )
    );
}
add_action( 'enqueue_block_editor_assets', 'myprefix_enqueue_assets' );

Let’s officially add the component to the sidebar instead of the dummy text we put in earlier as a quick example:

registerPlugin( 'myprefix-sidebar', {
  icon: 'smiley',
  render: () => {
    return (
      <>
        <PluginSidebarMoreMenuItem
          target="myprefix-sidebar"
        >
          {__('Meta Options', 'textdomain')}
        </PluginSidebarMoreMenuItem>
        <PluginSidebar
          name="myprefix-sidebar"
          title={__('Meta Options', 'textdomain')}
        >
          <PluginMetaFields />
        </PluginSidebar>
      </>
    )
  }
})

This should give you a “Meta Options” panel that contains a “Meta Fields Panel” title, a pin icon, and a text input with a “Text Meta” label and a default value of “new value.”

Nothing will happen when you type into the text input because we are not yet handling updating the field. We’ll do that next, however, we first need to take care of another problem. Try to run editPost in the DevTools console again, but with a new value:

wp.data.dispatch('core/editor').editPost({meta: {_myprefix_text_metafield: 'a newer value'}})

You will notice that the value in the text field will not update to the new value. That’s the problem. We need the field to be controlled by the value in the Redux store, but we don’t see that reflected in the component. What’s up with that?

If you have used Redux with React before, then you probably know that we need to use a higher order component called connect in order to use Redux store values in a React component. The same goes for React components in Gutenberg — we have to use some higher order component to connect our component with the Redux-like store. Unfortunately, we are unable to simply call wp.data.select directly as we did before. This higher order component lives in the wp.data global variable, which is also available as an npm package called @wordpress/data. So let’s install it to help us solve the issue.

npm install --save @wordpress/data

The higher order component we need is called withSelect, so let’s import it in index.js.

import { withSelect } from "@wordpress/data";

Remember that we already added wp-data as a dependency in wp_enqueue_script, so we can just use it by wrapping our component with it, like so:

PluginMetaFields = withSelect(
  (select) => {
    return {
      text_metafield: select('core/editor').getEditedPostAttribute('meta')['_myprefix_text_metafield']
    }
  }
)(PluginMetaFields);

Here, we’re overriding our PluginMetaFields component and assigning it the same component, now wrapped with the withSelect higher order component. withSelect receives a function as an argument. This function receives the select function (which we used to access wp.data.select) and it should return an object. Each key in this object will be injected as a prop in the component (similar to connect in Redux). withSelect then returns a function, to which we pass the component (PluginMetaFields) again, as seen above. So, by having this higher order component, we now get text_metafield as a prop in the component, and whenever the meta value in the Redux store is updated, the prop will also be updated — thus, the component will update, since components update whenever a prop is changed.

let PluginMetaFields = (props) => {
  return (
    <>
      <PanelBody
        title={__("Meta Fields Panel", "textdomain")}
        icon="admin-post"
        initialOpen={ true }
      >
        <TextControl
          value={props.text_metafield}
          label={__("Text Meta", "textdomain")}
        />
      </PanelBody>
    </>
  )
}

If you now try and run editPost with a new meta value in your browser, the value of the text field in the sidebar should also be updated accordingly!

So far, so good. Now we know how to connect our React components with our Redux-like stores. We are now left with updating the meta value in the store whenever we type in the text field.

Dispatching actions in React components

We now need to dispatch the editPost action whenever we type into the text field. Similar to wp.data.select, we also should not call wp.data.dispatch directly in our component like so:

// Do not do this
<TextControl
  value={props.text_metafield}
  label={__("Text Meta", "textdomain")}
  onChange={(value) =>
    wp.data.dispatch('core/editor').editPost({meta: {_myprefix_text_metafield: value}})
  }
/>

We will instead wrap our component with another higher order component from the @wordpress/data package called withDispatch. We’ve gotta import that, again, in index.js:

import { withSelect, withDispatch } from "@wordpress/data";

In order to use it, we can wrap our component, which is already wrapped with withSelect, again with withDispatch, like so:

PluginMetaFields = withDispatch(
  (dispatch) => {
    return {
      onMetaFieldChange: (value) => {
        dispatch('core/editor').editPost({meta: {_myprefix_text_metafield: value}})
      }
    }
  }
)(PluginMetaFields);

You can check out yet another WordPress package called @wordpress/compose. It makes using multiple higher order components a bit cleaner in a single component. I will leave that to you to try out, but a rough sketch follows for the curious, while we keep the example itself simple.
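Here’s roughly what that would look like with the withSelect and withDispatch wrappers we’re building in this article:

import { compose } from "@wordpress/compose";

// compose() chains the higher order components so the component is wrapped once.
PluginMetaFields = compose(
  withSelect((select) => ({
    text_metafield: select('core/editor').getEditedPostAttribute('meta')['_myprefix_text_metafield']
  })),
  withDispatch((dispatch) => ({
    onMetaFieldChange: (value) => {
      dispatch('core/editor').editPost({meta: {_myprefix_text_metafield: value}})
    }
  }))
)(PluginMetaFields);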

withDispatch is similar to withSelect in that it will receive a function that has the dispatch function as an argument. That allows us to return an object from this function that contains functions that will be available inside the component’s props. I went about this by creating a function with an arbitrary name (onMetaFieldChange) that will receive a value, dispatch the editPost action, and set the meta value in the Redux store to the value received in the function’s argument. We can call this function in the component and pass it the value of the text field inside the onChange callback:

<TextControl
  value={props.text_metafield}
  label={__("Text Meta", "textdomain")}
  onChange={(value) => props.onMetaFieldChange(value)}
/>

Confirm everything is working fine by opening the custom sidebar in the WordPress post editor, updating the field, saving the post and then refreshing the page to make sure the value is saved in the database!

Let’s add a color picker

It should be clear now that we can update a meta field using JavaScript, but we’ve only looked at a simple text field so far. The @wordpress/components library provides a lot of very useful components, including dropdowns, checkboxes, radio buttons, and so on. Let’s level up and conclude this tutorial by taking a look at how we can use the color picker component that’s included in the library.

You probably know what to do. First, we import this component in index.js:

import { PanelBody, TextControl, ColorPicker } from "@wordpress/components";

Now, instead of registering a new custom field, let’s aim for simplicity and assume that this color picker will be controlled by the same _myprefix_text_metafield field we worked with earlier. We can use the ColorPicker component inside our PanelBody and it will be very similar to what we saw with TextControl, but the prop names will be slightly different. We have a color prop instead of value, and onChangeComplete instead of onChange. Also, onChangeComplete will receive a color object that contains some information about the chosen color. This object has a hex property we can use to store the color value in the _myprefix_text_metafield field.

Catch all that? It boils down to this:

<ColorPicker
  color={props.text_metafield}
  label={__("Colour Meta", "textdomain")}
  onChangeComplete={(color) => props.onMetaFieldChange(color.hex)}
/>

We should now have a color picker in our sidebar, and since it’s controlling the same meta field as the TextControl component, our old text field should update whenever we pick a new color.

That’s a wrap!

If you have reached this far in the article, then congratulations! I hope you enjoyed it. Make sure to check out my course if you want to learn more about Gutenberg and custom blocks. You can also find the final code for this article over at GitHub.


Are Design Conferences Dead?

Hubspot estimates that a 400-person conference on the level of something like Hustle Con could realistically cost around $20,000 to host, not to mention all the time and coordination that goes into running an event that size.

As you can imagine, conference organizers have to charge outlandish registration fees in order to make an event of that size profitable. What’s more, many of them gate off the most valuable parts of the events (namely, the workshops and other interactive activities) to those who pay more for access to them.

With the high cost to put on a conference and the high cost to attend one, are design conferences worth anyone’s time anymore?

Reasons Why Conferences May Be Declining

Here’s the thing: design and development conferences don’t seem to be in short supply these days. Many of the major players in this space — A List Apart, Smashing Magazine, IBM, Gartner, Adobe — continue to put on annual conferences that attract attendees in big numbers.

But does that mean that design conferences are a good investment of anyone’s time and money, be it the host, speakers, attendees, or sponsors?

There are certainly a number of valid objections people could make:

They’re Expensive

McCrindle and the Melbourne Convention Bureau set out to discover what’s happening with The Future of Business Meetings (i.e. conferences and other professional events). Their report revealed that these were the most commonly reported challenges:

It should come as no surprise to anyone that the cost of these events and the destinations in which they are held take the top three spots.

While some employers may still be inclined to spend marketing dollars on sending employees to professional conferences, this likely isn’t something self-employed designers can afford to do for themselves. Unfortunately, this is a group of professionals that could greatly benefit from the educational and networking opportunities available at conferences.

What’s more, the stuff that would make attending an event like this worth it (e.g. workshops, one-on-one opportunities with guest speakers, etc.) is only available when you pay for upgraded VIP packages.

PowerPoint Sucks

One of the problems with speakers relying on a stack of notecards or a PowerPoint presentation is that they’re committed to the words and graphics in front of them, not to engaging with the audience.

We’ve become so accustomed to ingesting content quickly on the web and having brief but often valuable discussions on social media with our peers, that this lecture-like format can feel tedious.

When conference sessions are formatted in such a manner, this can easily leave attendees feeling bored and disconnected. Worse, stack up enough of these sessions back-to-back over the course of an eight-hour day, for multiple days in a row, and they could have a bad case of burnout by the end of it.

They’re Shallow

Dave Thackeray, the Communications Director of Word and Mouth, was asked why he attends marketing conferences. This was his response:

I only ever go to marketing conferences to meet new people and listen to their experiences. Those people never include the speakers, who are generally wrapped up in their own world and rarely give in the altruistic sense.

Thackeray isn’t the only one who finds value in spending less time watching the stage. McCrindle survey respondents said they devote less time to keynote speeches and sessions, too:

Instead, it’s their hope that more time will be given to meaningful networking encounters.

It’s Repetitive Information

One of the problems with planning a large conference is that they need to have well-known guest speakers along with a well-formulated schedule to lure in attendees. And if they want to sell a lot of tickets in order to turn a profit, this information has to be made available to the public early.

There are a couple of problems with that…

For starters, conferences end up with many of the same guest speakers who give the same ol’ messages to every crowd they stand before.

Then, there’s the fact that speakers have to plan their topics in advance. When this happens, conferences end up loaded with basic topics that designers could easily get from blogs, vlogs, and podcasts.

If conferences are planned too far ahead, they end up relying more on “celebrity” speakers than on creating a thought-provoking and future-focused schedule of discussions.

Should You Spend Your Money on a Conference This Year?

There are many reasons why you might feel as though attending a design conference isn’t worth it. Unpaid time away from work. Boring topics. Too many lectures and not enough questions answered.

But there are also many reasons why you should consider attending a design conference regardless of the objections.

Change of Pace – For many of us, solo work environments at home are the norm. Conferences give you a reason to break out of your routine and try learning (and maybe even doing) in a different location.

Networking – While you can certainly “meet” fellow designers and creatives online, nothing beats the experience of talking to your peers over a cup of coffee or a pint.

Build Your Brand – Whether you go to a conference as an attendee, a speaker, or a sponsor, it gives you a new opportunity to get your brand out there.

Exploration / Education – When choosing a conference, the topics covered are likely to be the biggest draw for you. But don’t just use this as an opportunity to re-examine what you know. Explore other fields, other techniques, and other tools.

New Formats – The traditional conference format — with keynote speakers, breakaway sessions, and the occasional break or after party — may not have much of a future. However, new conference formats have a good chance of surviving:

Many people envision formats that are more convenient, more interactive, and more personal. And because many of these won’t rely on an oversized convention space and celebrity speakers, the cost to host and the cost to attend should be much lower, too.

But, look, if you’re still feeling wary about the value of conferences, why not join a local professional group instead? They’re convenient, cost-effective, and still give you a chance to learn and mingle with your peers.


 

Featured image via Unsplash.


How to Increase Your Page Size by 1,500% with webpack and Vue

Disclaimer: This article is mostly satire. I do not think that I am better than you because I once wrote some TypeScript nor do I think that it’s a good thing for us to make web pages bigger. Feel free to misrepresent these views to maximize clicks.

You know, there are a lot of articles out there telling you how to make your page smaller: optimize your images, remove extraneous CSS rules, re-write the whole thing in Dreamweaver using framesets. Look, Walmart just reduced their page size by some numbers, give or take.


What we don’t have are enough articles showing you how to increase your page size. In fact, the only article I could find was this one from the Geek Squad which ended up being about making the font size bigger. This is a good start, but I think we can do better.

Put on some weight

Now, why would you want to increase your page size? Isn’t that a not-very-nice thing for people on low bandwidth connections? Well, there are several excellent and in no-way-contrived reasons and here are three of them since things that come in threes are more satisfying.

  1. You have a gigabyte connection and you live in Tennessee so surely everyone else is in better shape than you are.
  2. Browsers do caching, silly. That means that you only have to download the site once. Stop complaining. First world problems.
  3. You don’t care whether or not people ever visit your site because you, “work to live, not live to work.”

If any of those completely relatable reasons resonates with you, I’d like to show you how I increased the size of my CSS by 1,500% — and you can too, with one simple webpack trick.

One weird trick

It all started when I decided to refactor my retirement plan project called The Urlist over to the Bulma CSS framework.

The original incarnation of the site was all hand-rolled and my Sass looked like an episode of Hoarders.

“Burke, you don’t need 13 different .button styles. Why don’t you pick one and we can get rid of these other 12 so you have somewhere to sleep?”

Bulma also includes things like modals, which I had been using third-party Vue components to create.

It also has a hamburger menu because it’s a well-known scientific fact that you cannot have a successful site without a hamburger.

Look, I don’t make the rules. This is just how business works.

I was quite happy with the result. Bulma styles are sharp, and the layout system is easy to learn. It’s almost as if someone somewhere understands CSS and also doesn’t hate me. That’s just a hard combination to find these days.

After a few weeks of refactoring (during which I would ask myself, “WHAT ARE YOU EVEN DOING MAN?!? THE SITE ALREADY WORKS!”), I finally finished. As a side note, the next time you think about refactoring something, don’t. Just leave it alone. If you don’t leave any technical debt for the next generation, they’re going to be extremely bored and that’s going to be on you.

When I built the project, I noticed something odd: the size of my CSS had gone up pretty significantly. My hand-crafted abomination was only 30KB gzipped and I was up to 260KB after the refactor.

And, to make matters worse, the Vue CLI was lecturing me about it…

Which, of course, I ignored. I don’t take instructions from robots.

What I did instead was deploy it. To production. On the internet. Because I did not spend all of this time refactoring to not deploy it. Yeah, sunk costs and all that, but excuse me if I’m more pragmatic than your poster of logical fallacies. All I’m saying is I came to party and I was not going home without a buzz.

Then I took to Twitter to announce my accomplishment to the ambivalent masses. As one does.

Shortly thereafter, Jeremy Thomas, who created Bulma (and clearly loves Dragon Ball) responded. It was quick, too. It’s like there is a bat signal that goes out whenever a moron tweets.

Duplicate styles? 13 times? What the heck is a namespace? Is that a π symbol or a custom Jeremy Thomas logo?

It’s at this moment that I realized that I have no idea what I’m doing.

Put the Sass down and back away slowly

I’ll be the first to admit that I don’t know a lot about CSS, and even Less about Sass. Get it? Less about Sass? Forget it. I don’t want your pity laugh.

When I set up my Vue CLI project to use Bulma, I created a src/styles folder and dropped in a bulma-but-not-all-of-bulma-only-some-of-it.scss file. They say naming things is hard, but I don’t see why.

That file imports the pieces of Bulma that I want to use. It’s Bulma, but not all of it. Only some of it.

@import "bulma/sass/utilities/_all.sass";
@import "bulma/sass/base/_all.sass"; @import "bulma/sass/form/shared.sass";
@import "bulma/sass/form/input-textarea.sass"; // etc...

Then I imported that file into a custom Sass file which I called… site.scss. I like to keep things simple.

@import "./bulma-but-not-all-of-bulma-only-some-of-it.scss"; html,
body { background-color: #f9fafc;
} // etc...

I wanted to import these files into Vue globally so that I could use them in every component. And I wanted to do it the right way; the canonical way. I think it’s clear from my willingness to deploy 2+ MB of CSS into production that I like to do things the “right way”.

I read this excellent blog post from Sarah Drasner called, “How to import a Sass file into every component in your Vue app.” She shows how to do it by modifying the webpack build process via the vue.config.js file.

module.exports = {
  css: {
    loaderOptions: {
      sass: {
        data: `@import "@/styles/site.scss";`
      }
    }
  }
};

What I did not understand is that this imports Sass into every component in a Vue app. You know, like the title of the blog post literally says. This is also how I ended up with a bunch of duplicate styles that had a data-v- attribute selector on them. I have scoped styles to thank for that.

How Vue handles `scoped`

Vue allows you to “scope” styles to a component. This means that a style only affects the component that it’s in, and not the rest of the page. There is no magic browser API that does this. Vue pulls it off by dynamically inserting a data- attribute in both the element and the selector. For example, this:

<template>
  <button class="submit">Submit</button>
</template>

<style lang="scss" scoped>
.submit {
  background-color: #20ae96;
}
</style>

…becomes this:

<button class="submit" data-v-2929>Submit</button> <style>
.submit[data-v-2929] { background-color: #20ae96;
}
</style>

That dynamic data tag gets added to every child element in the component as well. So every element and every style for this component will have a data-v-2929 on them at runtime.
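To picture it, here is a hypothetical component with a nested element; both the wrapper and the child get stamped with the same attribute at runtime:

<!-- hypothetical rendered output; the hash after data-v- differs per component -->
<div class="form-footer" data-v-2929>
  <button class="submit" data-v-2929>Submit</button>
  <span class="hint" data-v-2929>One click only, please</span>
</div>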

If you import a Sass file into your component that has actual styles in it, Vue (via webpack) will pull in those styles and “namespace” them with that dynamic data- attribute. The result is that you include Bulma in your app 13 damn times, with a bunch of data-v weirdness in front of it.

But this begs the question: if webpack renders the CSS in every single component, why would you ever want to use the vue.config.js approach? In a word: variables.

The variable sharing problem

You can’t define a Sass variable in one component and reference it from another. That would also be kind of hard to manage since you would be defining and using variables all over the place. Only I would write code like that.

You, on the other hand, would probably put all your variables in a variables.scss file. Each component would then reference that central store of variables. Importing a variables file into every single component is redundant. It’s also excessive. And unnecessary. And long-winded.
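To make the redundancy concrete, here’s a sketch of what every single component would need (assuming your variables live in src/styles/variables.scss):

<style lang="scss" scoped>
// every component repeats this import just to read one variable
@import "@/styles/variables.scss";

.submit {
  background-color: $primary;
}
</style>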

This is precisely the problem that Sarah’s article is solving: importing a Sass file into every component in your project.

It’s OK to import something like variables into every component because variables aren’t rendered. If you import 200 variables and only reference one of them, who cares? Those variables don’t exist in the rendered CSS anyway.

For example, this:

<style lang="scss" scoped>
$primary: #20ae96;
$secondary: #336699;

.submit {
  background-color: $primary;
}
</style>

…becomes this:

<style>
.submit[data-v-2929] {
  background-color: #20ae96;
}
</style>

So, there are really two problems here:

  1. Bulma needs to be global.
  2. Bulma’s variables should be accessible from the components.

What we need is a clever combination of Sarah’s technique, along with a little proprietary knowledge about how Bulma is structured.

Using Bulma with the Vue CLI

We’re going to accomplish this with the least amount of duplication by having three files in the src/styles directory:

variables.scss: This file will be where you pull in Bulma’s variables and override/define your own. Note that you have to include the following three files to get all of Bulma’s variables. And they have to be in this order…

// Your variable customizations go up here

// Include Bulma's variables
@import "bulma/sass/utilities/initial-variables.sass";
@import "bulma/sass/utilities/functions.sass";
@import "bulma/sass/utilities/derived-variables.sass";

bulma-custom.scss: This file is where you pull in the pieces of Bulma that you want. It should reference the variables.scss file.

@import "./variables.scss"; /* UTILTIES */
@import "bulma/sass/utilities/animations.sass";
@import "bulma/sass/utilities/controls.sass";
@import "bulma/sass/utilities/mixins.sass"; // etc...

site.scss: This pulls in the bulma-custom.scss file and is also where you define global styles that are used across the whole project.

@import url("https://use.fontawesome.com/releases/v5.6.3/css/all.css");
@import "./bulma-custom.scss"; html,
body { height: 100%; background-color: #f9fafc;
} // etc...

Import the site.scss file into your main.js file. Or in my case, main.ts. Does it make me better than you that I use TypeScript? Yes. Yes it does.

import Vue from "vue";
import App from "./App.vue";
import router from "./router"; // import styles
import "@/styles/site.scss";

This makes all of the Bulma pieces we are using available in every component. They are global, but only included once.

Per Sarah’s article, add the variables.scss file to the vue.config.js file.

module.exports = {
  css: {
    loaderOptions: {
      sass: {
        data: `@import "@/styles/variables.scss";`
      }
    }
  }
};

This makes it so that you can reference any of the Bulma variables or your own from any .vue component.

Now you have the best of both worlds: Bulma is available globally and you still have access to all Bulma variables in every component.
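As a quick sanity check, a hypothetical component can now use one of Bulma’s variables in a scoped style with no import in sight:

<template>
  <button class="save">Save</button>
</template>

<style lang="scss" scoped>
// $primary is one of Bulma's derived variables, made available
// everywhere by the vue.config.js snippet above
.save {
  background-color: $primary;
}
</style>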

Total size of CSS now? About 1,500% smaller…

Take that, Walmart.

Redemption via PR

In an effort to redeem myself, I’ve submitted a PR to the Bulma docs that walks through how to customize Bulma in a Vue CLI project. It’s an act of contrition for taking to Twitter and making Bulma seem like the problem when really Burke is the problem.

And you would think that by now I would have this figured out: Burke is always the problem.

The post How to Increase Your Page Size by 1,500% with webpack and Vue appeared first on CSS-Tricks.

Every Layout

Every Layout is a new work-in-progress website and book by Heydon Pickering and Andy Bell that explains how to make common layout patterns with CSS. They describe a lot of the issues when it comes to the design of these layouts, such as responsive problems and making sure we all write maintainable code, and then they’ve provided a handy generator at the end of each article to create our own little frameworks for dealing with these things.

They also have a complementary blog and one of the posts called “Algorithmic Design” caught my eye:

We make many of our biggest mistakes as visual designers for the web by insisting on hard coding designs. We break browsers’ layout algorithms by applying fixed positions and dimensions to our content.

Instead, we should be deferential to the underlying algorithms that power CSS, and we should think in terms of algorithms as we extrapolate layouts based on these foundations. We need to be leveraging selector logic, harnessing flow and wrapping behavior, and using calculations to adapt layout to context.

The tools for flexible, robust, and efficient web layout are there. We are just too busy churning out CSS to use them.

I’m looking forward to seeing where this project goes and how many more layouts these two end up documenting.


The post Every Layout appeared first on CSS-Tricks.

Master Geolocation for Free with IP Geolocation API

Understanding where your users are can mean the difference between making a sale, attracting a new member, getting a push on social media, building a solid user experience, or bombing completely.

Because where a person is affects their schedule, their culture, their currency, their whole outlook on the world.

Sure, you might not be able to translate your entire site into their preferred language—who knows, maybe you can—but if they don’t speak the same language as you, you can at least acknowledge it. And okay, perhaps you can only accept payments in US dollars, but wouldn’t it be nice to let them know approximately how much you’re charging in yen, dong, or euros? How about shipping costs: do you ship to Africa? And then there’s GDPR: wouldn’t it be great to be legally compliant in the EU without having to be compliant globally? You can even use an IP address’s longitude and latitude to identify the nearest real-world store, if your company has more than one location.

You can do all of this and much much more with geolocation. Geolocation ties the web to the real world by taking an IP address, finding it in a huge database that identifies IP address locations, and then returning key details like country, currency, and continent.

The downside with geolocation is that it’s often expensive to implement: providers typically charge per query to the database, which puts it beyond the budget of many small businesses.

The IP Geolocation API is different. By cutting down on the number of features and the amount of data returned, it can offer the same accurate location data, in a simple-to-use package, for free.

The Geolocation API returns data as a JSON object, and includes tons of information you can use to tailor your site.

Getting Started with IP Geolocation API

You can connect to the IP Geolocation API in any number of coding languages; the most common is JavaScript. However, because we want to customize the contents of the page, we don’t want to rely on a front-end technology. Instead, we’re going to write the page before it gets to the user, with PHP.

The first thing you want to do is create a new file, name it “country.php” and save it to your dev environment, or open up your FTP client and upload it to a PHP-enabled web space. Next, we can edit the file’s code.

Open the PHP file:

<?php

Then we need to create a function to grab the location of the IP address we supply:

function grabIpInfo($ip)
{
}

Don’t worry if you’re not used to writing functions in PHP, all will become clear. (In simple terms, when we repeat that function’s name, passing an IP address to it, the code inside the brackets will run.)

Inside those brackets, we’re going to use PHP’s built-in cURL to get the data from the API:

$curl = curl_init();

This code initiates the cURL session. Next, we need to set the options on the session. The first is very important: it’s the URL to load, which will be https://api.ipgeolocationapi.com/geolocate/ with the IP address we want to look up appended to the end (remember, we’re passing this into the function, so we just need to use the $ip parameter):

curl_setopt($curl, CURLOPT_URL, "https://api.ipgeolocationapi.com/geolocate/" . $ip);

The next option tells cURL to return the data as a string, rather than just outputting it to the page:

curl_setopt($curl, CURLOPT_RETURNTRANSFER, TRUE);

Next we execute the cURL session, saving the data to a new variable that we can return from the function:

$returnData = curl_exec($curl);

Now we close the cURL session:

curl_close($curl);

Lastly we return the data from the function:

return $returnData;

Remember all of that code was inside the curly brackets of the function. Now, we can call the function as often as we like and access the data, from outside the function:

$ipInfo = grabIpInfo("91.213.103.0");

Our $ipInfo variable now has a ton of great data returned about that IP address. If you’d like to see it, you just output the variable using PHP’s echo:

echo $ipInfo;

Finally, don’t forget to close the PHP:

?>

Now, when you run your file in a browser, you’ll see something like this:

{"continent":"Europe","address_format":"{{recipient}}\n{{street}}\n{{postalcode}} {{city}}\n{{country}}","alpha2":"DE","alpha3":"DEU","country_code":"49","international_prefix":"00","ioc":"GER","gec":"GM","name":"Germany","national_destination_code_lengths":[2,3,4,5],"national_number_lengths":[6,7,8,9,10,11],"national_prefix":"0","number":"276","region":"Europe","subregion":"Western Europe","world_region":"EMEA","un_locode":"DE","nationality":"German","eu_member":true,"eea_member":true,"vat_rates":{"standard":19,"reduced":[7],"super_reduced":null,"parking":null},"postal_code":true,"unofficial_names":["Germany","Deutschland","Allemagne","Alemania","ドイツ","Duitsland"],"languages_official":["de"],"languages_spoken":["de"],"geo":{"latitude":51.165691,"latitude_dec":"51.20246505737305","longitude":10.451526,"longitude_dec":"10.382203102111816","max_latitude":55.0815,"max_longitude":15.0418962,"min_latitude":47.2701115,"min_longitude":5.8663425,"bounds":{"northeast":{"lat":55.0815,"lng":15.0418962},"southwest":{"lat":47.2701115,"lng":5.8663425}}},"currency_code":"EUR","start_of_week":"monday"}

It’s incredible that we can grab all of that data so easily, but it’s not actually all that useful. We need to be able to pick out parts of that code.

Go back and delete the line of code that begins with “echo”, and replace it with the following:

$ipJsonInfo = json_decode($ipInfo);

That line converts the JSON string into an object we can access.

Next, echo the part of the data you want. In this case, the name property gives us the country name:

echo $ipJsonInfo->name;

Run your file again, and you’ll see the country name “Germany” in your browser.
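The same pattern works for any other field in the response. For example, pulling a few more values out of the JSON shown earlier (the comments show what our German test IP returns):

// other top-level properties of the decoded object
echo $ipJsonInfo->currency_code; // EUR
echo $ipJsonInfo->continent; // Europe

// nested data works the same way
echo $ipJsonInfo->geo->latitude; // 51.165691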

That’s cool, but there’s one more thing we need to do before this code is really useful, and that’s find the IP address dynamically, so that we’re looking up the country name of whoever is accessing the page. IP Geolocation API will do that for you if you access it with JavaScript (you simply omit the IP address altogether). Unfortunately, we’re using PHP. Fortunately, PHP gives us a really simple way to access the current user’s IP address: $_SERVER["REMOTE_ADDR"].
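For the curious, the JavaScript version might look something like this minimal sketch; it assumes the endpoint geolocates the requester when no IP is appended, as described above:

// hypothetical client-side version: omit the IP and the API
// looks up whoever made the request
fetch("https://api.ipgeolocationapi.com/geolocate/")
  .then(response => response.json())
  .then(data => console.log(data.name)); // e.g. "Germany"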

Back in our PHP file, replace the IP address in the function call with the dynamic version:

$ipInfo = grabIpInfo($_SERVER["REMOTE_ADDR"]);

When you now run the code, you’ll see your own country output to the browser. (If you don’t see anything, you’re probably testing locally and don’t have a detectable IP address; upload the file to your web space to see it working correctly.)

Here’s the full code:

<?php

function grabIpInfo($ip)
{
  $curl = curl_init();

  curl_setopt($curl, CURLOPT_URL, "https://api.ipgeolocationapi.com/geolocate/" . $ip);
  curl_setopt($curl, CURLOPT_RETURNTRANSFER, TRUE);

  $returnData = curl_exec($curl);

  curl_close($curl);

  return $returnData;
}

$ipInfo = grabIpInfo($_SERVER["REMOTE_ADDR"]);
$ipJsonInfo = json_decode($ipInfo);

echo $ipJsonInfo->name;

?>

Simple Geolocation

As you can see, accessing geolocation data with the IP Geolocation API is incredibly fast and simple. And being able to access this kind of data for free is an incredible resource for any developer.

IP Geolocation API’s data is all run from the MaxMind database—one of the most trusted sources—which means you’re getting reliable, accurate data from a premium provider, at zero cost.

IP Geolocation API is the perfect place to start experimenting with geolocation. Once you get started, it will become an essential part of every user experience you design.

 

[– Featured image via DepositPhotos –]

[– This is a sponsored post on behalf of IP Geolocation API –]

