The Rise Of Design Thinking As A Problem Solving Strategy

Having spent the last 20 years in the world of educational technology working on products for educators and students, I have learned to understand teachers and administrators as designers themselves, who use a wide set of tools and techniques to craft learning experiences for students. I have come to believe that by extending this model and framing all users as designers, we are able to mine our own experiences to gain a deeper empathy for their struggles. In doing so, we can develop strategies to set our user-designers up to successfully deal with change and uncertainty.

If you are a designer, or if you have worked with designers any time in the last decade, you are probably familiar with the term “design thinking.” Typically, design thinking is represented by a series of steps that looks something like this:

There are many variations of this diagram, reflective of the multitude of ways that the process can be implemented. It is typically a months-long undertaking that begins with empathy: we get to know a group of people by immersing ourselves in a specific context to understand their tasks, pain points, and motivations. From there, we take stock of our observations, looking for patterns, themes, and opportunities, solidifying the definition of the problem we wish to solve. Then, we iteratively ideate, prototype, and test solutions until we arrive at one we like (or until we run out of time).

Ultimately, the whole process boils down to a simple purpose: to solve a problem. This is not a new purpose, of course, and not unique to those of us with “Designer” in our job titles. In fact, while design thinking is not exactly the same as the scientific method we learned in school, it bears an uncanny resemblance:

By placing design thinking within this lineage, we equate the designer with the scientist, the one responsible for facilitating the discovery and delivery of the solution.

At its best, design thinking is highly collaborative. It brings together people from across the organization and often from outside of it, so that a diverse group, including those whose voices are not usually heard, can participate. It centers the needs and emotions of those we hope to serve. Hopefully, it pulls us out of our own experiences and biases, opening us up to new ways of thinking and shining a light on new perspectives. At its worst, when design thinking is dogmatically followed or cynically applied, it becomes a means of gatekeeping, imposing a rigid structure and set of rules that leave little room for approaches to design that do not conform to an exclusionary set of cultural standards.

Its relative merits, faults, and occasional high-profile critiques notwithstanding, design thinking has become orthodoxy in the world of software development, where not using it feels tantamount to malpractice. No UX Designer’s portfolio is complete without a well-lit photo capturing a group of eager problem solvers in the midst of the “Define” step, huddled together, gazing thoughtfully at a wall covered in colorful sticky notes. My colleagues and I use it frequently, sticky notes and all, as we work on products in EdTech.

Like “lean,” the design thinking methodology has quickly spread beyond the software industry into the wider world. Today you can find it in elementary schools, in nonprofits, and at the center of innovation labs housed in local governments.

Amidst all of the hoopla, it is easy to overlook a central assumption of design thinking, which seems almost too obvious to mention: the existence of a solution. The process rests on the premise that, once the steps have been carried out, the state of the problem changes from ‘unsolved’ to ‘solved.’ While this problem-solution framework is undeniably effective, it is also incomplete. If we zoom out, we can see the limits of our power as designers, and then we can consider what those limits mean for how we approach our work.

Chaos And The Limits Of Problem Solving

An unchecked belief in our ability to methodically solve big problems can lead to some pretty grandiose ideas. In his book, Chaos: Making a New Science, James Gleick describes a period in the 1950s and ’60s when, as computing and satellite technologies continued to advance, a large, international group of scientists embarked on a project that, in hindsight, sounds absurd. Their goal was not only to accurately predict, but also to control the weather:

“There was an idea that human society would free itself from weather’s turmoil and become its master instead of its victim. Geodesic domes would cover cornfields. Airplanes would seed the clouds. Scientists would learn how to make rain and how to stop it.”

— “Chaos: Making a New Science,” James Gleick

It is easy to scoff at their hubris now, but at the time it was a natural result of an ever-heightening faith that, with science, no problem is too big to solve. What those scientists did not account for is a phenomenon commonly known as the butterfly effect, which is now a central pillar of the field of chaos theory. The butterfly effect describes the inherent volatility that arises in complex and interconnected systems. It gets its name from a famous illustration of the principle: a butterfly flapping its wings and creating tiny disturbances in the air around it on one side of the globe today can cause a hurricane tomorrow on the other. Studies have shown that the butterfly effect impacts everything in society from politics and the economy to trends in fashion.

Our Chaotic Systems

If we accept that, like the climate, the social systems in which we design and build solutions are complex and unpredictable, a tension becomes apparent. Design thinking exists in a context that is chaotic and unpredictable by nature, and yet the act of predicting is central. By prototyping and testing, we are essentially gathering evidence about what the outcome of our design will be, and whether it will effectively solve the problem we have defined. The process ends when we feel confident in our prediction and happy with the result.

I want to take pains to point out again that this approach is not wrong! We should trust the process to confirm that our designs are useful and usable in the immediate sense. At the same time, whenever we deliver a solution, we are like the butterfly flapping its wings, contributing (along with countless others) to a constant stream of change. So while the short-term result is often predictable, the longer-term outlook for the system as a whole, and for how long our solution will hold as the system changes, is unknowable.

Impermanence

As we use design thinking to solve problems, how do we deal with the fact that our solutions are built to address conditions that will change in ways we can’t plan for?

One basic thing we can do is to maintain awareness of the impermanence of our work, recognizing that it was built to meet the needs of a specific moment in time. It is more akin to a tree fort constructed in the woods than to a castle fortress made from stone. While the castle may take years to build and last for centuries, impervious to the weather while protecting its inhabitants from all of the chaos that exists outside its walls, the tree fort, even if well-designed and constructed, is directly connected to and at the mercy of its environment. While a tree fort may shelter us from the rain, we do not build it with the expectation that it will last forever, only with the hope that it will serve us well while it’s here. Hopefully, through the experience of building it, we continue to learn and improve.

The fact that our work is impermanent does not diminish its importance, nor does it give us the license to be sloppy. It means that the ability to quickly and consistently adapt and evolve without sacrificing functional or aesthetic quality is core to the job, which is one reason why design systems, which provide consistent and high-quality reusable patterns and components, are crucial.

Designing For User-Designers

A more fundamental way to deal with the impermanence of our work is to rethink our self-image as designers. If we identify only as problem solvers, then our work becomes obsolete quickly and suddenly as conditions change, while in the meantime our users must wait helplessly to be rescued with the next solution. In reality, our users are compelled to adapt and design their own solutions, using whatever tools they have at their disposal. In effect, they are their own designers, and so our task shifts from delivering full, fixed solutions to providing our user-designers with useful and usable tools specific to their needs.

In thinking from this perspective, we can gain empathy for our users by understanding our place as equals on a continuum, each of us relying on others, just as others rely on us.

Key Principles To Center The Needs Of User-Designers

Below are some things to consider when designing for user-designers. In the spirit of the user-designer continuum and of finding the universal in the specific, the examples below draw on my experience from both sides of the relationship: first, from my work as a designer in the EdTech space, where educators rely on people like me to produce tools that enable them to design learning experiences for students; and second, from my experience as a user of design products that I rely on in my daily UX work.

1. Don’t Lock In The Value

It is crucial to have a clear understanding of why someone would use your product in the first place, and then make sure not to get in the way. While there is a temptation to keep that value contained so that users must remain in your product to reap all of the benefits, we should resist that mindset.

Remember that your product is likely just one tool in a larger set, and our users rely on their tools to be compatible with each other as they design their own coherent, holistic solutions. Whereas the designer-as-problem-solver is inclined to build a self-contained solution, jealously locking value within their product, the designer-for-designers facilitates the free flow of information and continuity of task completion between tools however our user-designers choose to use them. By sharing the value, not only do we elevate its source, we give our users full use of their toolbox.

An Example As A Designer Of EdTech Products:

In student assessment applications, like in many other types of applications, the core value is the data. In other words, the fundamental reason schools administer assessments is to learn about student achievement and growth. Once that data is captured, there are all sorts of ways we can then use it to make intelligent, research-based recommendations around tasks like setting student goals, creating instructional groups, and assigning practice. To be clear, we do try very hard to support all of it in our products, often by using design thinking. Ultimately, though, it all starts with the data.

In practice, teachers often have a number of options to choose from when completing their tasks, and they have their own valid reasons for their preferences. Anything from state requirements to school policy to personal working style may dictate their approach to, say, student goal setting. If — out of a desire to keep people in our product — we make it extra difficult for teachers to use data from our assessments to set goals outside of our product (say, in a spreadsheet), then instead of increasing our value, we have added inconvenience and frustration. The lesson, in this case, is not to lock up the data! Ironically, by hoarding it, we make it less valuable. By providing educators with easy and flexible ways to get it out, we unlock its power.

An Example As A User Of Design Tools:

I tend to switch between tools as I go through the design thinking process based on the core value each tool provides. All of these tools are equally essential to the process, and I count on them to work together as I move between phases so that I don’t have to build from scratch at every step. For example, the core value I get from Sketch is mostly in the “Ideation” phase, in that it allows me to brainstorm quickly and freely so that I can try out multiple ideas in a short amount of time. By making it easy for me to bring ideas from that product into a more heavy-duty prototyping application like Axure, instead of locking them inside, Sketch saves me time and frustration and increases my attachment to it. If, for competitive reasons, those tools ceased to cooperate, I would be much more likely to drop one or both.

2. Use Established Patterns

It is always important to remember Jakob’s Law, which states simply that users spend more time on other sites than they spend on yours. If they are accustomed to engaging with information or accomplishing a task a certain way and you ask them to do it differently, they will not view it as an exciting opportunity to learn something new. They will be resentful. Scaling the learning curve is usually painful and frustrating. While it is possible to improve or even replace established patterns, it’s a very tall order. In a world full of unpredictability, consistent and predictable patterns among tools create harmony between experiences.

An Example As A Designer Of EdTech Products:

By following conventions around data visualization in a given domain, we make it easy for users to switch and compare between sources. In the context of education, it is common to display student progress in a graph of test scores over time, with the score scale represented on the vertical axis and the timeline along the horizontal axis. In other words, a scatter plot or line graph, often with one or two more dimensions represented, maybe by color or dot size. Through repeated, consistent exposure, even the most data-phobic teachers can easily and immediately interpret this data visualization and craft a narrative around it.

You could hold a sketching activity during the “Ideate” phase of design thinking in which you brainstorm dozens of other ways to present the same information. Some of those ideas would undoubtedly be interesting and cool, and might even surface new and useful insights. This would be a worthwhile activity! In all likelihood, though, the best decision would not be to replace the accepted pattern. While it can be useful to explore other approaches, ultimately the most benefit is usually derived from using patterns that people already understand and are used to across a variety of products and contexts.

An Example As A User Of Design Tools:

In my role, I often need to quickly learn new UX software, either to facilitate collaboration with designers from outside of my organization or when my team decides to adopt something new. When that happens, I rely heavily on established patterns of visual language to quickly get from the learning phase to the productive phase. Where there is consistency, there is relief and understanding. Where there is a divergence for no clear reason, there is frustration. If a product team decided to rethink the standard alignment palette, for example, in the name of innovation, it would almost certainly make the product more difficult to adopt while failing to provide any benefit.

3. Build For Flexibility

As an expert in your given domain, you might have strong, research-based positions on how certain tasks should be done, and a healthy desire to build those best practices into your product. If you have built up trust with your users, then adding guidance and guardrails directly into the workflow can be powerful. Remember, though, that it is only guidance. The user-designer knows when those best practices apply and when they should be ignored. While we should generally avoid overwhelming our users with choices, we should strive for flexibility whenever possible.

An Example As A Designer Of EdTech Products

Many EdTech products provide mechanisms for setting student learning goals. Generally, teachers appreciate being given recommendations and smart defaults when completing this task, knowing that there is a rich body of research that can help determine a reasonable range of expectations for a given student based on their historical performance and the larger data set from their peers. Providing that guidance in a simple, understandable format is generally beneficial and appreciated. But, we as designers are removed from the individual students and circumstances, as well as the ever-changing needs and requirements driving educators’ goal-setting decisions. We can build recommendations into the happy path and make enacting them as painless as possible, but the user needs an easy way to edit our guidance or to reject it altogether.

An Example As A User Of Design Tools:

The ability to create a library of reusable objects in most UX applications has made them orders of magnitude more efficient. Knowing that I can pull in a pre-made, properly-branded UI element as needed, rather than creating one from scratch, is a major benefit. Often, in the “Ideate” phase of design thinking, I can use these pre-made components in their fully generic form simply to communicate the main idea and hierarchy of a layout. But, when it’s time to fill in the details for high-fidelity prototyping and testing, the ability to override the default text and styling, or even detach the object from its library and make more drastic changes, may become necessary. Having the flexibility to start quickly and then progressively customize lets me adapt rapidly as conditions change, and helps make moving between the design thinking steps quick and easy.

4. Help Your User-Designers Build Empathy For Their Users

When thinking about our users as designers, one key question is: who are they designing for? In many cases, they are designing solutions for themselves, and so their designer-selves naturally empathize with and understand the problems of their user-selves. In other cases, though, they are designing for another group of people altogether. In those situations, we can look for ways to help them think like designers and develop empathy for their users.

An Example As A Designer Of EdTech Products:

For educators, the users are the students. One way to help them center the needs of their audience when they design experiences is to follow the standards of Universal Design for Learning, equipping educators to provide instructional material with multiple means of engagement (i.e., use a variety of strategies to drive motivation for learning), multiple means of representation (i.e., accommodate students’ different learning styles and backgrounds), and multiple means of action and expression (i.e., support different ways for students to interact with instructional material and demonstrate learning). These guidelines open up approaches to learning and nudge users to remember that all of the ways their audience engages with practice and instruction must be supported.

An Example As A User Of Design Tools:

Anything a tool can do to encourage design decisions that center accessibility is hugely helpful, in that it reminds us to consider those who face the most barriers to using our products. While some commonly-used UX tools do include functionality for creating alt-text for images, setting a tab order for keyboard navigation, and enabling responsive layouts for devices of various sizes, there is an opportunity for these tools to do much more. I would love to see built-in accessibility checks that would help us identify potential issues as early in the process as possible.

Conclusion

Hopefully, by applying the core principles of unlocking value, leveraging established patterns, understanding the individual’s need for flexibility, and facilitating empathy in our product design, we can help set our users up to adapt to unforeseen changes. By treating our users as designers in their own right, not only do we recognize and account for the complexity and unpredictability of their environment, we also start to see them as equals.

While those of us with the word “Designer” in our official job title do have a specific and necessary role, we are not gods, handing down solutions from on high, but fellow strugglers trying to navigate a complex, dynamic, stormy world. Nobody can control the weather, but we can make great galoshes, raincoats, and umbrellas.

Further Reading

If you’re interested in diving into the fascinating world of chaos theory, James Gleick’s book Chaos: Making a New Science, which I quoted in this article, is a wonderful place to start.
Jon Kolko wrote a great piece in 2015 on the emergence of design thinking in business, in which he describes its main principles and benefits. In a subsequent article from 2017, he considers the growing backlash as organizations have stumbled and taken shortcuts when attempting to put theory into practice, and what the lasting impact may be. An important takeaway here is that, in treating everyone as a designer, we run the risk of downplaying the importance of the professional Designer’s specific skill set. We should recognize that, while it is useful to think of teachers (or any of our users) as designers, the day-to-day tools, methods, and goals are entirely different.
In the article Making Sense in the Data Economy, Hugh Dubberly and Paul Pangaro describe the emerging challenges and complexities of the designer’s role in moving from the manufacture of physical products to the big data frontier. With this change, the focus shifts from designing finished products (solutions) to maintaining complex and dynamic platforms, and the concept of “meta-design” — designing the systems in which others operate — emerges.
To keep exploring the ever-evolving strategies of designing for designers, search Smashing Magazine and your other favorite UX resources for ideas on interoperability, consistency, flexibility, and accessibility!

How To Run A UX Audit For A Major EdTech Platform (Case Study)

The business world today is obsessed with user experience (UX) design. And for good reason: Every dollar invested in UX brings $100 in return. So, having some free time in quarantine, I decided to check whether one of the fastest-evolving industries right now, education technology (EdTech), takes advantage of this potential of UX.

My plan was to choose one EdTech platform, audit its UX, and, if necessary, redesign it. I first looked at some major EdTech platforms (such as edX, Khan Academy, and Udemy), read user feedback about them, and then narrowed my scope to edX. Why did I choose edX? Simply because:

it’s non-profit,
it has more than 20 million users,
its UX has a lot of negative reviews.

Even from my quick UX check, I got an overview of the UX principles and UI solutions followed by global EdTech platforms right now (in my case, edX).

Overall, this UX audit and redesign concept would be of great use to UX designers, business owners, and marketing people because it presents a way to audit and fix a product’s most obvious usability issues. So, welcome to my edX audit.

Audit Structure

Part 1: Audit for user needs
Part 2: Audit for 10 usability heuristics

This audit consists of two parts. First, I surveyed edX users, learned their needs, and checked whether the platform meets them. In the second stage, I weighed edX’s website against the 10 usability heuristics identified by Jakob Nielsen. These heuristics are well-recognized UX guidelines — the bible, if you will, for any UX designer.

Ideally, a full-fledged UX audit would take weeks. I had a fixed scope, so I checked the platform’s home page, user profile, and search page. These are the most important pages for users. Just analyzing these few pages gave me more than enough insight for my redesign concept.

Part 1: Audit for User Needs

Good UX translates into satisfied users.

That’s where I started: identifying user needs. First, I analyzed statistical data about the platform. For this, you can use such well-known tools as Semrush and SimilarWeb and reviews from Trustpilot, Google Play, and Apple’s App Store.

Take SimilarWeb. The tool analyzes edX’s rank, traffic sources, advertising, and audience interests. “Computer Electronics” and “Technology” appear to be the most popular course categories among edX students.

For user feedback on edX, I went to Trustpilot (Google Play and the App Store are relevant only for analyzing mobile apps). I found that most users praise edX’s courses for their useful content, but complain about the platform’s UX — most often about the hard and time-consuming verification process and poor customer support.

Done with the analytical check, I moved on to user interviews. I went to design communities on Facebook and LinkedIn, looking for students of online courses and asking them to answer a few quick questions. To everyone who responded, I sent a simple Google Form to capture their basic needs and what they value most when choosing an education platform.

Having received the answers, I created two user profiles for edX: potential user and long-time user. Here’s a quick illustration of these two types:

I identified these two kinds of users based on my survey. According to my findings, there are two common scenarios for how users select an educational course.

Learner 1 is mainly focused on choosing between different education platforms. This user type doesn’t need a specific course. They are visiting various websites, looking for a course that grabs their attention.

The second kind of learner knows exactly what course they want to take. Supposing they’ve chosen edX, they would need an effective search function to help them locate the course they need, and they’d need a convenient profile page to keep track of their progress.

Based on the edX user profiles, their needs, and the statistical data I gathered, I have outlined the five most common problems that the platform’s customers might face.

Problem 1: “Can I Trust This Website?”

Numerous factors determine a website’s credibility and trustworthiness: the logo, reviews, feedback, displayed prices, etc. Nielsen Norman Group covers the theory of it. Let’s focus on the practice.

So, what do we have here? edX’s current home page displays the logos of its university partners, which are visible at first glance and add credibility to the platform.

At the same time, the home page doesn’t highlight benefits of the platform or user feedback. This is often a deciding factor for users in choosing a platform.

Other approaches

It’s good to learn from competitors. Another EdTech platform, Khan Academy, demonstrates quite a different approach to website design. Its home page introduces the platform, talks about its benefits, and shows user feedback:

Problem 2: “Do I Have All of the Information I Need to Choose a Course?”

Often, users just want to quickly scan the list of courses and then choose the best one based on its description.

edX’s course cards display the course name, institution, and certificate level. Yet, they could also provide essentials such as pricing, course rating, how many students are enrolled, start date, etc.

Proper description of elements is an essential part of UX, as mentioned in Jakob Nielsen’s sixth heuristic. The heuristic states that all information valuable to a user should always be available.

Other approaches

Looking at another EdTech platform, Udemy’s course cards display the course name, instructor, rating, number of reviews, and price.

Problem 3: “Can I Sign Up Easily?”

According to a study by Mirjam Seckler, completion time decreases significantly if a signup form follows basic usability guidelines. Users are almost twice as likely to sign up on their first try if there are no errors.

So, let’s have a deeper look at edX’s forms:

They do not let you type your country’s name or your date of birth. Instead, you have to scroll through all of the options. (I am in Ukraine, which is pretty far down the list.)
They do not display the password you’ve entered, even on request.
They do not send an email to verify the address you’ve entered.
They do not indicate with an asterisk which fields are required.

Speeding up the registration process is yet another crucial UX principle. To read more about it, look at Nielsen Norman Group’s usability guidelines for website forms.

Other approaches

Many websites let users enter data manually to speed up the application process. Another EdTech website, Udemy, has an option to show and hide the entered password on request:

Problem 4: “Is On-Site Search Helpful?”

Search is one of the most used website features. Thus, it should be helpful, simple to use, and fast. Numerous usability studies show the importance of helpful search for massive online open courses (MOOCs).

In this regard, I’ve analyzed edX’s search. I started from page loading. Below is a screenshot from Google PageSpeed, which shows that the platform’s search speed has a grade of 12 out of 100.

Let’s now move to searching in a specific category. In its current design, edX has no filtering. After choosing a category (for example, electronics courses), users need to scroll through the list to find what they want. And some categories have more than 100 items.

Other approaches

EdTech platform Coursera has visible filtering on its website, displaying all of the options to filter from in a category:

Problem 5: “Should I Finish This Course?”

Researchers keep stressing that EdTech platforms have, on average, lower retention rates than other websites. Therefore, tracking user progress and keeping users motivated is critical for online courses. The principles behind this are pretty simple yet effective.

That is what edX’s user profile looks like:

Other approaches

Khan Academy’s user profile displays various statistics, such as join date, points earned, and longest learning streak. It might motivate the user to continue learning and to track their success.

Part 2: Audit for 10 Usability Heuristics

We’ve finished analyzing the most common user needs on edX. It’s time to move to the 10 usability criteria identified by Nielsen Norman Group, a UX research and consulting firm trusted by leading organizations worldwide.

You can do a basic UX checkup of your website using the 10 heuristics even if you aren’t a UX designer. Nielsen Norman Group’s website gives a lot of examples, videos, and instructions for each heuristic. This Notion checklist makes it even more convenient. It includes vital usability criteria required for any website. It’s a tool used internally at Fulcrum (where I work), but I thought it would be good to share it with the Smashing Magazine audience. It includes over a hundred criteria, and because it’s in Notion, you can edit it and customize it however you want.

Heuristic 1: Visibility of System Status

The first heuristic is to always keep users informed. Simply put, a website should provide users with feedback whenever an action is completed. For example, you will often see a “Success” message when downloading a file on a website.

In this regard, edX’s current course cards could be enhanced. Right now, a card does not tell users whether the course is available. Users have to click on the card to find out.

Possible approach

If some courses aren’t available, indicate that from the start. You could use bright labels with “available”/“not available” messages.

Heuristic 2: Match Between System and the Real World

The system should speak the user’s language. It should use words, phrases, and symbols that are familiar to the average visitor. And the information should appear in a logical order.

This is Jakob Nielsen’s second heuristic. edX’s website pretty much follows it, using common language, generally accepted symbols, and familiar signs.

Possible approach

Another good practice would be to break down courses by sections, and add easy-to-understand icons.

Heuristic 3: User Control and Freedom

This heuristic stresses that users should always have a clear way out when they do something by mistake, such as an undo or return option.

edX makes it impossible to change your username once it’s been set up. Many websites limit the options for changing a username for security reasons. Still, it might be more user-friendly to make it changeable.

Possible approach

Some websites allow users to save data, a status, or a change whenever they want. A good practice would be to offer customers alternative options, such as adding or removing a course or saving and editing their profile.

Heuristic 4: Consistency and Standards

According to this fourth UX criterion, design elements should be consistent and predictable. For example, symbols and images should be unified across the UI design of a platform.

Broadly speaking, there are two types of consistencies: internal and external. Internal consistency refers to staying in sync with a product (or a family of products). External consistency refers to adhering to the standards within an industry (for example, shopping carts having the same logic across e-commerce websites).

edX sometimes breaks internal consistency. Case in point just below: The “Explore” button looks different. Two different-looking buttons (or any other elements) that perform the same function might add visual noise and worsen the user experience. This issue might not be critical, but it contributes to the overall UX of the website.

Heuristic 5: Error Prevention

Good design prevents user error. By helping users avoid errors, designers save them time and prevent frustration.

For instance, on edX, if you make a typo in your email address, it’s visible only after you try to verify it.

Possible approach

Granted, live validation is not always good for UX. Some designers consider it problematic, arguing that it distracts users and causes confusion. Others believe that live validation has a place in UX design.

In any case, whether you’re validating live or after the “Submit” button has been clicked, keep your users and their goals in mind. Your task is to make their experience as smooth as possible.
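As a rough illustration, inline email validation on a signup form could be as small as the sketch below. The element IDs and message text are hypothetical, not taken from edX; the point is simply to surface a malformed address before the user submits the form.

// Assumes markup like <input id="email" type="email"> and <p id="email-hint" hidden></p>
const emailInput = document.querySelector('#email')
const emailHint = document.querySelector('#email-hint')

// A deliberately loose format check; the server still does the real validation.
const looksLikeEmail = value => /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value)

emailInput.addEventListener('blur', () => {
  const valid = looksLikeEmail(emailInput.value.trim())

  // Only flag the field after the user leaves it, to avoid noise while typing.
  emailHint.hidden = valid
  emailHint.textContent = valid ? '' : 'This email address looks incomplete.'
})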

Heuristic 6: Recognition Rather Than Recall

Users should not have to memorize information you’ve shown them before. That’s another UX guideline from Nielsen Norman Group. Colors and icons (like arrows) help users process information better.

edX’s home page displays university logos but not the universities’ full names, leaving users to recall which institution each logo belongs to. Also, the user profile page doesn’t tell you which courses you’ve completed.

Possible approach

The platform’s UX could be improved by showing courses that users have already done and recommending similar ones.

Heuristic 7: Flexibility and Efficiency of Use

According to this UX principle, interactions should be sped up wherever possible by using elements called accelerators: basically, any options or actions that speed up the whole process.

edX doesn’t provide filtering when users search for a course. Its absence could increase the time and effort users take to find the course they need.

Possible approach

Search is one of the critical stages of user conversion. If users can find what they want, they will be much closer to becoming customers. So, use filters to help users find courses more quickly and easily.

Heuristic 8: Aesthetic and Minimalist Design

This heuristic tells us to “remove unnecessary elements from the user interface and to maximize the signal-to-noise ratio of the design” (the signal being information relevant to the user, and the noise being irrelevant information).

Simply put, every element should tell a story, like a mosaic. Designers communicate, not decorate.

Comparing the current design of edX’s home page to the previous one, we can see a huge improvement. The main photo is now much more relevant to the platform’s mission. edX also added insights into how many users and courses it has.

Heuristic 9: Help Users Recognize, Diagnose, and Recover From Errors

This heuristic states that errors should be expressed in simple, explanatory language to the user. It’s also good to clearly explain why an error occurred in the first place.

edX’s 404 page serves its purpose overall. First, it explains to the user the problem (“We can’t seem to find the page you’re looking for”) and suggests a solution (giving links to the home page, search function, and course list). It also recommends popular courses.

Heuristic 10: Help and Documentation

This last heuristic is about the necessity of support and documentation on any website. There are many forms of help and documentation, such as onboarding pages, walkthroughs, tooltips, chats, and chatbots.

edX has links to a help center hidden in the footer. It’s divided into sections, and users can use a search bar to find information. The search does a good job of auto-suggesting topics that might be useful.

Unfortunately, users can’t go back to the home page from the help center by clicking the logo; in fact, there is no direct way to get back to the home page from there at all.

Possible approach

Enable users to return to the home page wherever they want on the website.

edX Redesign Concept

Based on my UX findings, I redesigned the platform, focusing on the home page, user profiles, and search results page. You can see full images of the redesign in Figma.

Home Page

1. Signal-to-Noise Ratio

First things first: To meet usability heuristic 8, I’ve made the whole page more minimalist and added space between its elements.

edX has the grand mission of “education for everyone, everywhere”, so I decided to put this on the home page, plain and bold.

I also switched the images to better reflect the story presented in the text. I expressed the mission with these new illustrations:

2. Course Cards

The “New Courses” section below highlights the latest courses.

I also added some details that edX’s cards currently do not display. This made the cards more descriptive, showing essential information about each course.

I also used icons to show the most popular subjects.

3. Credibility and Trust

I added a fact sheet to show the platform’s credibility and authority:

In addition, I freshened up the footer, reshaping the languages bar to be more visible to users.

Helpful Search

1. Search Process

In edX’s current design, users don’t see the options available while searching. So, I designed a search function with auto-suggestion. Now, users just need to type a keyword and choose the most relevant option.

2. Search Filters

I added a left sidebar to make it easy to filter results. I also updated the UI and made the course cards more descriptive.

User Profile

As mentioned in the audit section, it’s essential to motivate users to continue studying. Inspired by Khan Academy, I added a progress bar to user profiles. Now, a profile shows how many lessons are left before the user completes a course.

I put the navigation above so that it can be easily seen. Also, I updated the user profile settings, leaving the functionality but modifying the colors.

Conclusion

A UX audit is a simple and efficient way to check whether design elements are performing their function. It’s also a good way to look at an existing design from a fresh perspective.

This case taught me several lessons. First, I saw that even websites in one of the most topical industries right now could use a UX update. Learning something new is hard, but without proper UX design, it’s even harder.

The audit also showed why it’s crucial to understand, analyze, and meet user needs. Happy users are devoted users.

Creating A Multi-Author Blog With Next.js

In this article, we are going to build a blog with Next.js that supports two or more authors. We will attribute each post to an author and show their name and picture with their posts. Each author also gets a profile page, which lists all posts they contributed. It will look something like this:

We are going to keep all information in files on the local filesystem. The two types of content, posts and authors, will use different types of files. The text-heavy posts will use Markdown, allowing for an easier editing process. Because the information on authors is lighter, we will keep that in JSON files. Helper functions will make reading different file types and combining their content easier.

Next.js lets us read data from different sources and of different types effortlessly. Thanks to its dynamic routing and next/link, we can quickly build and navigate to our site’s various pages. We also get image optimization for free with the next/image package.

By picking the “batteries included” Next.js, we can focus on our application itself. We don’t have to spend any time on the repetitive groundwork new projects often come with. Instead of building everything by hand, we can rely on the tested and proven framework. The large and active community behind Next.js makes it easy to get help if we run into issues along the way.

After reading this article, you will be able to add many kinds of content to a single Next.js project. You will also be able to create relationships between them. That allows you to link things like authors and posts, courses and lessons, or actors and movies.

This article assumes basic familiarity with Next.js. If you have not used it before, you might want to read up on how it handles pages and fetches data for them first.

We won’t cover styling in this article and focus on making it all work instead. You can get the result on GitHub. There is also a stylesheet you can drop into your project if you want to follow along with this article. To get the same frame, including the navigation, replace your pages/_app.js with this file.

Setup

We begin by setting up a new project using create-next-app and changing to its directory:

$ npx create-next-app multiauthor-blog
$ cd multiauthor-blog

We will need to read Markdown files later. To make this easier, we also add a few more dependencies before getting started.

multiauthor-blog$ yarn add gray-matter remark remark-html

Once the installation is complete, we can run the dev script to start our project:

multiauthor-blog$ yarn dev

We can now explore our site. In your browser, open http://localhost:3000. You should see the default page added by create-next-app.

In a bit, we’ll need some navigation to reach our pages. We can add the links in pages/_app.js even before the pages exist.

import Link from 'next/link'

import '../styles/globals.css'

export default function App({ Component, pageProps }) {
  return (
    <>
      <header>
        <nav>
          <ul>
            <li>
              <Link href="/">
                <a>Home</a>
              </Link>
            </li>

            <li>
              <Link href="/posts">
                <a>Posts</a>
              </Link>
            </li>

            <li>
              <Link href="/authors">
                <a>Authors</a>
              </Link>
            </li>
          </ul>
        </nav>
      </header>

      <main>
        <Component {...pageProps} />
      </main>
    </>
  )
}

Throughout this article, we’ll add these missing pages the navigation points to. Let’s first add some posts so we have something to work with on a blog overview page.

Creating Posts

To keep our content separate from the code, we’ll put our posts in a directory called _posts/. To make writing and editing easier, we’ll create each post as a Markdown file. Each post’s filename will serve as the slug in our routes later. The file _posts/hello-world.md will be accessible under /posts/hello-world, for example.

Some information, like the full title and a short excerpt, goes in the frontmatter at the beginning of the file.


---
title: "Hello World!"
excerpt: "This is my first blog post."
createdAt: "2021-05-03"
---

Hey, how are you doing? Welcome to my blog. In this post, …

Add a few more files like this so the blog doesn’t start out empty:

multi-author-blog/
├─ _posts/
│ ├─ hello-world.md
│ ├─ multi-author-blog-in-nextjs.md
│ ├─ styling-react-with-tailwind.md
│ └─ ten-hidden-gems-in-javascript.md
└─ pages/
└─ …

You can add your own or grab these sample posts from the GitHub repository.

Listing All Posts

Now that we have a few posts, we need a way to get them onto our blog. Let’s start by adding a page that lists them all, serving as the index of our blog.

In Next.js, a file created under pages/posts/index.js will be accessible as /posts on our site. The file must export a function that will serve as that page’s body. Its first version looks something like this:

export default function Posts() {
  return (
    <div className="posts">
      <h1>Posts</h1>

      {/* TODO: render posts */}
    </div>
  )
}

We don’t get very far because we don’t have a way to read the Markdown files yet. We can already navigate to http://localhost:3000/posts, but we only see the heading.

We now need a way to get our posts on there. Next.js uses a function called getStaticProps() to pass data to a page component: whatever that function returns under the props key is passed to the component as props.

From getStaticProps(), we are going to pass the posts to the component as a prop called posts. We’ll hardcode two placeholder posts in this first step. By starting this way, we define what format we later want to receive the real posts in. If a helper function returns them in this format, we can switch over to it without changing the component.

The post overview won’t show the full text of the posts. For this page, the title, excerpt, permalink, and date of each post are enough.

export default function Posts() { … }

+export function getStaticProps() {
+  return {
+    props: {
+      posts: [
+        {
+          title: "My first post",
+          createdAt: "2021-05-01",
+          excerpt: "A short excerpt summarizing the post.",
+          permalink: "/posts/my-first-post",
+          slug: "my-first-post",
+        }, {
+          title: "My second post",
+          createdAt: "2021-05-04",
+          excerpt: "Another summary that is short.",
+          permalink: "/posts/my-second-post",
+          slug: "my-second-post",
+        }
+      ]
+    }
+  }
+}

To check the connection, we can grab the posts from the props and show them in the Posts component. We’ll include the title, date of creation, excerpt, and a link to the post. For now, that link won’t lead anywhere yet.

+import Link from 'next/link'

-export default function Posts() {
+export default function Posts({ posts }) {
  return (
    <div className="posts">
      <h1>Posts</h1>

-      {/* TODO: render posts */}
+      {posts.map(post => {
+        const prettyDate = new Date(post.createdAt).toLocaleString('en-US', {
+          month: 'short',
+          day: '2-digit',
+          year: 'numeric',
+        })
+
+        return (
+          <article key={post.slug}>
+            <h2>
+              <Link href={post.permalink}>
+                <a>{post.title}</a>
+              </Link>
+            </h2>
+
+            <time dateTime={post.createdAt}>{prettyDate}</time>
+
+            <p>{post.excerpt}</p>
+
+            <Link href={post.permalink}>
+              <a>Read more →</a>
+            </Link>
+          </article>
+        )
+      })}
    </div>
  )
}

export function getStaticProps() { … }

After reloading the page in the browser, it now shows these two posts:

We don’t want to hardcode all our blog posts in getStaticProps() forever. After all, that is why we created all these files in the _posts/ directory earlier. We now need a way to read those files and pass their content to the page component.

There are a few ways we could do that. We could read the files right in getStaticProps(). Because this function runs on the server and not the client, we have access to native Node.js modules like fs in it. We could read, transform, and even manipulate local files in the same file we keep the page component.
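For illustration, a rough sketch of that inline approach could look something like the snippet below, with everything living directly in pages/posts/index.js. The parsing step is elided here; this is only meant to show the shape of the idea, not the version we will build.

import fs from 'fs'
import path from 'path'

export default function Posts({ posts }) { … }

export function getStaticProps() {
  // Read the _posts/ directory right here, on the server, at build time.
  const postsDirectory = path.join(process.cwd(), '_posts')
  const filenames = fs.readdirSync(postsDirectory)

  const posts = filenames.map(filename => {
    const file = fs.readFileSync(path.join(postsDirectory, filename), 'utf8')
    // ...parse the frontmatter and build the slug and permalink here...
    return { /* title, createdAt, excerpt, permalink, slug */ }
  })

  return { props: { posts } }
}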

To keep the file short and focused on one task, we’re going to move that functionality to a separate file instead. That way, the Posts component only needs to display the data, without also having to read that data itself. This adds some separation and organization to our project.

By convention, we are going to put functions reading data in a file called lib/api.js. That file will hold all functions that grab our content for the components that display it.

For the posts overview page, we want a function that reads, processes, and returns all posts. We’ll call it getAllPosts(). In it, we first use path.join() to build the path to the _posts/ directory. We then use fs.readdirSync() to read that directory, which gives us the names of all files in it. Mapping over these names, we then read each file in turn.

import fs from 'fs'
import path from 'path'

export function getAllPosts() {
  const postsDirectory = path.join(process.cwd(), '_posts')
  const filenames = fs.readdirSync(postsDirectory)

  return filenames.map(filename => {
    const file = fs.readFileSync(path.join(process.cwd(), '_posts', filename), 'utf8')

    // TODO: transform and return file
  })
}

After reading the file, we get its contents as a long string. To separate the frontmatter from the text of the post, we run that string through gray-matter. We’re also going to grab each post’s slug by removing the .md from the end of its filename. We need that slug to build the URL from which the post will be accessible later. Since we don’t need the Markdown body of the posts for this function, we can ignore the remaining content.

import fs from 'fs'
import path from 'path'
+import matter from 'gray-matter'

export function getAllPosts() {
  const postsDirectory = path.join(process.cwd(), '_posts')
  const filenames = fs.readdirSync(postsDirectory)

  return filenames.map(filename => {
    const file = fs.readFileSync(path.join(process.cwd(), '_posts', filename), 'utf8')

-    // TODO: transform and return file
+    // get frontmatter
+    const { data } = matter(file)
+
+    // get slug from filename
+    const slug = filename.replace(/\.md$/, '')
+
+    // return combined frontmatter and slug; build permalink
+    return {
+      ...data,
+      slug,
+      permalink: `/posts/${slug}`,
+    }
  })
}

Note how we spread ...data into the returned object here. That lets us access values from its frontmatter as {post.title} instead of {post.data.title} later.
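To make that concrete, with the frontmatter from the hello-world.md example above, the object this function returns for that file would look roughly like this:

{
  title: 'Hello World!',
  excerpt: 'This is my first blog post.',
  createdAt: '2021-05-03',
  slug: 'hello-world',
  permalink: '/posts/hello-world',
}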

Back in our posts overview page, we can now replace the placeholder posts with this new function.

+import { getAllPosts } from '../../lib/api'

export default function Posts({ posts }) { … }

export function getStaticProps() {
  return {
    props: {
-      posts: [
-        {
-          title: "My first post",
-          createdAt: "2021-05-01",
-          excerpt: "A short excerpt summarizing the post.",
-          permalink: "/posts/my-first-post",
-          slug: "my-first-post",
-        }, {
-          title: "My second post",
-          createdAt: "2021-05-04",
-          excerpt: "Another summary that is short.",
-          permalink: "/posts/my-second-post",
-          slug: "my-second-post",
-        }
-      ]
+      posts: getAllPosts(),
    }
  }
}

After reloading the browser, we now see our real posts instead of the placeholders we had before.

Adding Individual Post Pages

The links we added to each post don’t lead anywhere yet. There is no page that responds to URLs like /posts/hello-world yet. With dynamic routing, we can add a page that matches all paths like this.

A file created as pages/posts/[slug].js will match all URLs that look like /posts/abc. The value that appears instead of [slug] in the URL will be available to the page as a query parameter. We can use that in the corresponding page’s getStaticProps() as params.slug to call a helper function.

As a counterpart to getAllPosts(), we’ll call that helper function getPostBySlug(slug). Instead of all posts, it will return a single post that matches the slug we pass it. On a post’s page, we also need to show the underlying file’s Markdown content.

The page for individual posts looks like the one for the post overview. Instead of passing posts to the page in getStaticProps(), we only pass a single post. Let’s do the general setup first before we look at how to transform the post’s Markdown body to usable HTML. We’re going to skip the placeholder post here, using the helper function we’ll add in the next step immediately.

import { getPostBySlug } from '../../lib/api'

export default function Post({ post }) {
  const prettyDate = new Date(post.createdAt).toLocaleString('en-US', {
    month: 'short',
    day: '2-digit',
    year: 'numeric',
  })

  return (
    <div className="post">
      <h1>{post.title}</h1>

      <time dateTime={post.createdAt}>{prettyDate}</time>

      {/* TODO: render body */}
    </div>
  )
}

export function getStaticProps({ params }) {
  return {
    props: {
      post: getPostBySlug(params.slug),
    },
  }
}

We now have to add the function getPostBySlug(slug) to our helper file lib/api.js. It is like getAllPosts(), with a few notable differences. Because we can get the post’s filename from the slug, we don’t need to read the entire directory first. If the slug is ‘hello-world’, we are going to read a file called _posts/hello-world.md. If that file doesn’t exist, Next.js will show a 404 error page.

Another difference to getAllPosts() is that this time, we also need to read the post’s Markdown content. We can return it as render-ready HTML instead of raw Markdown by processing it with remark first.

import fs from 'fs'
import path from 'path'
import matter from 'gray-matter'
+import remark from 'remark'
+import html from 'remark-html'

export function getAllPosts() { … }

+export function getPostBySlug(slug) {
+  const file = fs.readFileSync(path.join(process.cwd(), '_posts', `${slug}.md`), 'utf8')
+
+  const {
+    content,
+    data,
+  } = matter(file)
+
+  const body = remark().use(html).processSync(content).toString()
+
+  return {
+    ...data,
+    body,
+  }
+}

In theory, we could use the function getAllPosts() inside getPostBySlug(slug). We’d first get all posts with it, which we could then search for one that matches the given slug. That would mean we would always need to read all posts before we could get a single one, which is unnecessary work. getAllPosts() also doesn’t return the posts’ Markdown content. We could update it to do that, in which case it would do more work than it currently needs to.
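For comparison, a quick sketch of that discouraged variant shows why: every call would read and parse the entire _posts/ directory just to keep a single result, and it still wouldn’t include the post’s body.

// Hypothetical shortcut we are not going to use:
// it reads every post on every call and still lacks the Markdown body.
export function getPostBySlug(slug) {
  return getAllPosts().find(post => post.slug === slug)
}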

Because the two helper functions do different things, we are going to keep them separate. That way, we can focus the functions on exactly and only the job we need each of them to do.

Pages that use dynamic routing can provide a getStaticPaths() next to their getStaticProps(). This function tells Next.js what values of the dynamic path segments to build pages for. We can provide those by using getAllPosts() and returning a list of objects that define each post’s slug.

-import { getPostBySlug } from '../../lib/api'
+import { getAllPosts, getPostBySlug } from '../../lib/api'

export default function Post({ post }) { … }

export function getStaticProps({ params }) { … }

+export function getStaticPaths() {
+  return {
+    fallback: false,
+    paths: getAllPosts().map(post => ({
+      params: {
+        slug: post.slug,
+      },
+    })),
+  }
+}

Since we parse the Markdown content in getPostBySlug(slug), we can render it on the page now. We need to use dangerouslySetInnerHTML for this step so Next.js can render the HTML behind post.body. Despite its name, it is safe to use the property in this scenario. Because we have full control over our posts, it is unlikely they are going to inject unsafe scripts.

import { getAllPosts, getPostBySlug } from '../../lib/api'

export default function Post({ post }) {
  const prettyDate = new Date(post.createdAt).toLocaleString('en-US', {
    month: 'short',
    day: '2-digit',
    year: 'numeric',
  })

  return (
    <div className="post">
      <h1>{post.title}</h1>

      <time dateTime={post.createdAt}>{prettyDate}</time>

-      {/* TODO: render body */}
+      <div dangerouslySetInnerHTML={{ __html: post.body }} />
    </div>
  )
}

export function getStaticProps({ params }) { … }

export function getStaticPaths() { … }

If we follow one of the links from the post overview, we now get to that post’s own page.

Adding Authors

Now that we have posts wired up, we need to repeat the same steps for our authors. This time, we’ll use JSON instead of Markdown to describe them. We can mix different types of files in the same project like this whenever it makes sense. The helper functions we use to read the files take care of any differences for us. Pages can use these functions without knowing what format we store our content in.

First, create a directory called _authors/ and add a few author files to it. As we did with posts, name the files by each author’s slug. We’ll use that to look up authors later. In each file, we specify an author’s full name in a JSON object.

{
  "name": "Adrian Webber"
}

For now, having two authors in our project is enough.

To give them some more personality, let’s also add a profile picture for each author. We’ll put those static files in the public/ directory. By naming the files by the same slug, we can connect them using the implied convention alone. We could add the path of the picture to each author’s JSON file to link the two. By naming all files by the slugs, we can manage this connection without having to write it out. The JSON objects only need to hold information we can’t build with code.

When you’re done, your project directory should look something like this.

multi-author-blog/
├─ _authors/
│ ├─ adrian-webber.json
│ └─ megan-carter.json
├─ _posts/
│ └─ …
├─ pages/
│ └─ …
└─ public/
├─ adrian-webber.jpg
└─ megan-carter.jpg

Same as with the posts, we now need helper functions to read all authors and get individual authors. The new functions getAllAuthors() and getAuthorBySlug(slug) also go in lib/api.js. They do almost exactly the same as their post counterparts. Because we use JSON to describe authors, we don’t need to parse any Markdown with remark here. We also don’t need gray-matter to parse frontmatter. Instead, we can use JavaScript’s built-in JSON.parse() to read the text contents of our files into objects.

const contents = fs.readFileSync(somePath, 'utf8')
// ⇒ looks like an object, but is a string
// e.g. '{ "name": "John Doe" }'

const json = JSON.parse(contents)
// ⇒ a real JavaScript object we can do things with
// e.g. { name: "John Doe" }

With that knowledge, our helper functions look like this:

export function getAllPosts() { … }

export function getPostBySlug(slug) { … }

+export function getAllAuthors() {
+  const authorsDirectory = path.join(process.cwd(), '_authors')
+  const filenames = fs.readdirSync(authorsDirectory)
+
+  return filenames.map(filename => {
+    const file = fs.readFileSync(path.join(process.cwd(), '_authors', filename), 'utf8')
+
+    // get data
+    const data = JSON.parse(file)
+
+    // get slug from filename
+    const slug = filename.replace(/\.json$/, '')
+
+    // return combined data and slug; build permalink and profile picture URL
+    return {
+      ...data,
+      slug,
+      permalink: `/authors/${slug}`,
+      profilePictureUrl: `/${slug}.jpg`,
+    }
+  })
+}
+
+export function getAuthorBySlug(slug) {
+  const file = fs.readFileSync(path.join(process.cwd(), '_authors', `${slug}.json`), 'utf8')
+
+  const data = JSON.parse(file)
+
+  return {
+    ...data,
+    permalink: `/authors/${slug}`,
+    profilePictureUrl: `/${slug}.jpg`,
+    slug,
+  }
+}

With a way to read authors into our application, we can now add a page that lists them all. Creating a new page under pages/authors/index.js gives us an /authors page on our site.

The helper functions take care of reading the files for us. This page component does not need to know that authors are JSON files on the filesystem. It can call getAllAuthors() without knowing where or how it gets its data. The format does not matter as long as our helper functions return their data in a shape we can work with. Abstractions like this let us mix different types of content across our application.

The index page for authors looks a lot like the one for posts. We get all authors in getStaticProps(), which passes them to the Authors component. That component maps over each author and lists some information about them. We don’t need to build any other links or URLs from the slug. The helper function already returns the authors in a usable format.

import Image from 'next/image'
import Link from 'next/link'

import { getAllAuthors } from '../../lib/api'

export default function Authors({ authors }) {
  return (
    <div className="authors">
      <h1>Authors</h1>

      {authors.map(author => (
        <div key={author.slug}>
          <h2>
            <Link href={author.permalink}>
              <a>{author.name}</a>
            </Link>
          </h2>

          <Image alt={author.name} src={author.profilePictureUrl} height="40" width="40" />

          <Link href={author.permalink}>
            <a>Go to profile →</a>
          </Link>
        </div>
      ))}
    </div>
  )
}

export function getStaticProps() {
  return {
    props: {
      authors: getAllAuthors(),
    },
  }
}

If we visit /authors on our site, we see a list of all authors with their names and pictures.

The links to the authors’ profiles don’t lead anywhere yet. To add the profile pages, we create a file under pages/authors/[slug].js. Because authors don’t have any text content, all we can add for now are their names and profile pictures. We also need another getStaticPaths() to tell Next.js what slugs to build pages for.

import Image from 'next/image'

import { getAllAuthors, getAuthorBySlug } from '../../lib/api'

export default function Author({ author }) {
  return (
    <div className="author">
      <h1>{author.name}</h1>

      <Image alt={author.name} src={author.profilePictureUrl} height="80" width="80" />
    </div>
  )
}

export function getStaticProps({ params }) {
  return {
    props: {
      author: getAuthorBySlug(params.slug),
    },
  }
}

export function getStaticPaths() {
  return {
    fallback: false,
    paths: getAllAuthors().map(author => ({
      params: {
        slug: author.slug,
      },
    })),
  }
}

With this, we now have a basic author profile page that is very light on information.

At this point, authors and posts are not connected yet. We’ll build that bridge next so we can add a list of each author’s posts to their profile pages.

Connecting Posts And Authors

To connect two pieces of content, we need to reference one in the other. Since we already identify posts and authors by their slugs, we’ll reference them with that. We could add authors to posts and posts to authors, but one direction is enough to link them. Since we want to attribute posts to authors, we are going to add the author’s slug to each post’s frontmatter.


---
title: "Hello World!"
excerpt: "This is my first blog post."
createdAt: "2021-05-03"
+author: adrian-webber
---

Hey, how are you doing? Welcome to my blog. In this post, …

If we keep it at that, running the post through gray-matter adds the author field to the post as a string:

const post = getPostBySlug("hello-world")
const author = post.author

console.log(author)
// "adrian-webber"

To get the object representing the author, we can use that slug and call getAuthorBySlug(slug) with it.

const post = getPostBySlug("hello-world")
-const author = post.author
+const author = getAuthorBySlug(post.author)

console.log(author)
// {
//   name: "Adrian Webber",
//   slug: "adrian-webber",
//   profilePictureUrl: "/adrian-webber.jpg",
//   permalink: "/authors/adrian-webber"
// }

To add the author to a single post’s page, we need to call getAuthorBySlug(slug) once in getStaticProps().

+import Image from 'next/image'
+import Link from 'next/link'

-import { getPostBySlug } from '../../lib/api'
+import { getAuthorBySlug, getPostBySlug } from '../../lib/api'

export default function Post({ post }) {
  const prettyDate = new Date(post.createdAt).toLocaleString('en-US', {
    month: 'short',
    day: '2-digit',
    year: 'numeric',
  })

  return (
    <div className="post">
      <h1>{post.title}</h1>

      <time dateTime={post.createdAt}>{prettyDate}</time>

+     <div>
+       <Image alt={post.author.name} src={post.author.profilePictureUrl} height="40" width="40" />
+
+       <Link href={post.author.permalink}>
+         <a>
+           {post.author.name}
+         </a>
+       </Link>
+     </div>

      <div dangerouslySetInnerHTML={{ __html: post.body }} />
    </div>
  )
}

export function getStaticProps({ params }) {
+  const post = getPostBySlug(params.slug)

  return {
    props: {
-     post: getPostBySlug(params.slug),
+     post: {
+       ...post,
+       author: getAuthorBySlug(post.author),
+     },
    },
  }
}

Note how we spread ...post into an object also called post in getStaticProps(). By placing author after that line, we replace the string version of the author with its full object. That lets us access an author’s properties through post.author.name in the Post component.

With that change, we now get a link to the author’s profile page, complete with their name and picture, on a post’s page.

Adding authors to the post overview page requires a similar change. Instead of calling getAuthorBySlug(slug) once, we need to map over all posts and call it for each one of them.

+import Image from 'next/image'
+import Link from 'next/link'

-import { getAllPosts } from '../../lib/api'
+import { getAllPosts, getAuthorBySlug } from '../../lib/api'

export default function Posts({ posts }) {
  return (
    <div className="posts">
      <h1>Posts</h1>

      {posts.map(post => {
        const prettyDate = new Date(post.createdAt).toLocaleString('en-US', {
          month: 'short',
          day: '2-digit',
          year: 'numeric',
        })

        return (
          <article key={post.slug}>
            <h2>
              <Link href={post.permalink}>
                <a>{post.title}</a>
              </Link>
            </h2>

            <time dateTime={post.createdAt}>{prettyDate}</time>

+           <div>
+             <Image alt={post.author.name} src={post.author.profilePictureUrl} height="40" width="40" />
+
+             <span>{post.author.name}</span>
+           </div>

            <p>{post.excerpt}</p>

            <Link href={post.permalink}>
              <a>Read more →</a>
            </Link>
          </article>
        )
      })}
    </div>
  )
}

export function getStaticProps() {
  return {
    props: {
-     posts: getAllPosts(),
+     posts: getAllPosts().map(post => ({
+       ...post,
+       author: getAuthorBySlug(post.author),
+     })),
    },
  }
}

That adds the authors to each post in the post overview:

We don’t need to add a list of an author’s posts to their JSON file. On their profile pages, we first get all posts with getAllPosts(). We can then filter the full list for the ones attributed to this author.

import Image from 'next/image'
+import Link from 'next/link'

-import { getAllAuthors, getAuthorBySlug } from '../../lib/api'
+import { getAllAuthors, getAllPosts, getAuthorBySlug } from '../../lib/api'

export default function Author({ author }) {
  return (
    <div className="author">
      <h1>{author.name}</h1>

      <Image alt={author.name} src={author.profilePictureUrl} height="40" width="40" />

+     <h2>Posts</h2>
+
+     <ul>
+       {author.posts.map(post => (
+         <li key={post.slug}>
+           <Link href={post.permalink}>
+             <a>
+               {post.title}
+             </a>
+           </Link>
+         </li>
+       ))}
+     </ul>
    </div>
  )
}

export function getStaticProps({ params }) {
  const author = getAuthorBySlug(params.slug)

  return {
    props: {
-     author: getAuthorBySlug(params.slug),
+     author: {
+       ...author,
+       posts: getAllPosts().filter(post => post.author === author.slug),
+     },
    },
  }
}

export function getStaticPaths() { … }

This gives us a list of articles on every author’s profile page.

On the author overview page, we’ll only show how many posts each author has written, to avoid cluttering the interface.

import Image from 'next/image'
import Link from 'next/link'

-import { getAllAuthors } from '../../lib/api'
+import { getAllAuthors, getAllPosts } from '../../lib/api'

export default function Authors({ authors }) {
  return (
    <div className="authors">
      <h1>Authors</h1>

      {authors.map(author => (
        <div key={author.slug}>
          <h2>
            <Link href={author.permalink}>
              <a>
                {author.name}
              </a>
            </Link>
          </h2>

          <Image alt={author.name} src={author.profilePictureUrl} height="40" width="40" />

+         <p>{author.posts.length} post(s)</p>

          <Link href={author.permalink}>
            <a>Go to profile →</a>
          </Link>
        </div>
      ))}
    </div>
  )
}

export function getStaticProps() {
  return {
    props: {
-     authors: getAllAuthors(),
+     authors: getAllAuthors().map(author => ({
+       ...author,
+       posts: getAllPosts().filter(post => post.author === author.slug),
+     })),
    },
  }
}

With that, the Authors overview page shows how many posts each author has contributed.

And that’s it! Posts and authors are completely linked up now. We can get from a post to an author’s profile page, and from there to their other posts.

Summary And Outlook

In this article, we connected two related types of content through their unique slugs. Defining the relationship from post to author enabled a variety of scenarios. We can now show the author on each post and list their posts on their profile pages.

With this technique, we can add many other kinds of relationships. Each post might have a reviewer on top of an author. We can set that up by adding a reviewer field to a post’s frontmatter.


---
title: "Hello World!"
excerpt: "This is my first blog post."
createdAt: "2021-05-03"
author: adrian-webber
+reviewer: megan-carter
---

Hey, how are you doing? Welcome to my blog. In this post, …

On the filesystem, the reviewer is another author from the _authors/ directory. We can use getAuthorBySlug(slug) to get their information as well.

export function getStaticProps({ params }) {
  const post = getPostBySlug(params.slug)

  return {
    props: {
      post: {
        ...post,
        author: getAuthorBySlug(post.author),
+       reviewer: getAuthorBySlug(post.reviewer),
      },
    },
  }
}

We could even support co-authors by naming two or more authors on a post instead of only a single person.


---
title: "Hello World!"
excerpt: "This is my first blog post."
createdAt: "2021-05-03"
-author: adrian-webber
+authors:
+  - adrian-webber
+  - megan-carter
---

Hey, how are you doing? Welcome to my blog. In this post, …

In this scenario, we could no longer look up a single author in a post’s getStaticProps(). Instead, we would map over this array of authors to get them all.

export function getStaticProps({ params }) {
  const post = getPostBySlug(params.slug)

  return {
    props: {
      post: {
        ...post,
-       author: getAuthorBySlug(post.author),
+       authors: post.authors.map(getAuthorBySlug),
      },
    },
  }
}

We can also produce other kinds of scenarios with this technique. It enables any kind of one-to-one, one-to-many, or even many-to-many relationship. If your project also features newsletters and case studies, you can add authors to each of them as well.

On a site all about the Marvel universe, we could connect characters and the movies they appear in. In sports, we could connect players and the teams they currently play for.
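To sketch how such a many-to-many lookup could follow the same slug-based pattern, imagine each movie file listing the slugs of the characters that appear in it. The getAllMovies() helper and the characters field below are hypothetical and not part of this starter.

export function getMoviesByCharacter(characterSlug) {
  // Hypothetical: assumes getAllMovies() reads movie files the same way
  // getAllPosts() reads posts, and each movie has a `characters` array of slugs
  return getAllMovies().filter(movie =>
    movie.characters.includes(characterSlug)
  )
}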

Because helper functions hide the data source, content could come from different systems. We could read articles from the filesystem, comments from an API, and merge them into our code. If some piece of content relates to another type of content, we can connect them with this pattern.
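As a rough sketch of that idea, a comments helper could fetch from an API but return the same kind of plain objects our filesystem helpers return. The endpoint and field names here are made up for illustration.

// lib/api.js — hypothetical helper backed by an API instead of the filesystem
export async function getCommentsForPost(slug) {
  const response = await fetch(`https://example.com/api/comments?post=${slug}`)
  const comments = await response.json()

  // Return plain objects so pages never need to know where the data came from
  return comments.map(comment => ({
    author: comment.author,
    body: comment.body,
    postSlug: slug,
  }))
}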

Further Resources

Next.js offers more background on the functions we used in their page on Data Fetching. It includes links to sample projects that fetch data from different types of sources.

If you want to take this starter project further, check out these articles:

Building a CSS Tricks Website Clone with Strapi and Next.js
Replace the files on the local filesystem with a Strapi-powered backend.
Comparing Styling Methods in Next.js
Explore different ways of writing custom CSS to change this starter’s styling.
Markdown/MDX with Next.js
Add MDX to your project so you can use JSX and React components in your Markdown.

Creating Custom Emmet Snippets In VS Code

Earlier this year, I shared the HTML boilerplate I like to use when starting new web projects with line-by-line explanations on my blog. It’s a collection of mostly <head> tags and attributes I usually use on every website I build. Until recently, I would just copy and paste the boilerplate whenever I needed it, but I’ve decided to improve my workflow by adding it as a snippet to VS Code — the editor of my choice.

Click “Add Item”, enter the path to the folder where you’ve saved the snippets.json file you created earlier, and press “OK”.

That’s it. Now we’re ready to create snippets by adding properties to the html and css objects, where the key is the name of the snippet and the value is an abbreviation or a string.
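As a minimal sketch of that structure (the two example abbreviations are placeholders, not snippets from this article), a fresh snippets.json could start out like this:

{
  "html": {
    "snippets": {
      "hello": "header>h1{Hello}"
    }
  },
  "css": {
    "snippets": {
      "bxs": "box-sizing: border-box;"
    }
  }
}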

Some Of My Custom HTML Snippets

Before we dive deep into snippet creation and I show you how I created a snippet for my HTML boilerplate, let’s warm up with some small but useful snippets I’ve created.

Lazy Loading

Out of the box, there’s an img abbreviation, but there’s none for lazily loaded images. We can use the default abbreviation and just add the additional attributes and attribute values we need in square brackets.

{
  "html": {
    "snippets": {
      "img:l": "img[width height loading='lazy']"
    }
  }
}

img:l + Enter/Tab now creates the following markup:

<img src="" alt="" width="" height="" loading="lazy">

Page

Most pages I create consist of <header>, <main> and <footer> landmarks and an <h1>. The custom page abbreviation lets me create that structure quickly.

"snippets": {
  "page": "header>h1^main+footer{${0:©}}"
}

page + Enter/Tab creates the following markup:

<header>
  <h1></h1>
</header>
<main></main>
<footer>©</footer>

That abbreviation is quite long, so let’s break it down into smaller bits.

Breakdown

Create a <header> element with a child <h1>.

header>h1

Move up, back to the level of the <header>, and create a <main> followed by a <footer>.

^main+footer

Set the final tab stop within the <footer> and set its default text to “©”.

{${0:©}}

Navigation

The abbreviation nav just creates a <nav> start and end tag by default, but what I usually need is a <nav> with a nested <ul>, <li> elements, and links (<a>). If there are multiple <nav> elements on a page, they should also be labeled, for example by using aria-label.

"nav": "nav[aria-label='${1:Main}']>ul>(li>a[aria-current='page']{${2:Current Page}})+(li*3>a{${0:Another Page}})"

That looks wild, so let’s break it down again.

Breakdown

We start with a <nav> element with an aria-label attribute and a nested <ul>. ${1:Main} populates the attribute with the text “Main” and creates a tab stop at the attribute value by moving the cursor to it and highlighting it upon creation.

nav[aria-label='${1:Main}']>ul

Then we create four list items with nested links. The first item is special because it marks the active page using aria-current=”page”. We create another tab stop and populate the link with the text “Current Page”.

(li>a[aria-current='page']{${2:Current Page}})

Finally, we add three more list items with links and the link text “Another page”.

(li*3>a>{${0:Another Page}})

Before our adaptations, we got this:

<!-- Before: nav + Enter/Tab -->

<nav></nav>

Now we get this:

<!-- After: nav + Enter/Tab -->

<nav aria-label="Main">
  <ul>
    <li><a href="" aria-current="page">Current Page</a></li>
    <li><a href="">Another Page</a></li>
    <li><a href="">Another Page</a></li>
    <li><a href="">Another Page</a></li>
  </ul>
</nav>

Style

The default style abbreviation only creates the <style> start and end tag, but when I use the <style> element, it’s usually because I want to quickly test or debug something.

Let’s add some default rules to the <style> tag:

"style": "style>{\* { box-sizing: border-box; \}}+{\n${1:*}:focus \{${2: outline: 2px solid red; }\} }+{\n${0}}"

Breakdown

Some characters (e.g. $, *, { or }) have to be escaped using \.

style>{\* { box-sizing: border-box; \}}

\n creates a linebreak and ${1:*} places the first tab stop at the selector *.

{\n${1:*}:focus \{${2: outline: 2px solid red; }\}}

Before: <style></style>

After:

<style>
  * { box-sizing: border-box; }

  *:focus { outline: 2px solid red; }

</style>

Alright, enough warming-up. Let’s create complex snippets. At first, I wanted to create a single snippet for my boilerplate, but I created three abbreviations that serve different needs.

Small
Medium
Full

Boilerplate Small

This is a boilerplate for quick demos; it creates the following:

Basic site structure,
viewport meta tag,
Page title,
<style> element,
A <h1>.

{
  "!": "{<!DOCTYPE html>}+html[lang=${1}${lang}]>(head>meta:utf+meta:vp+{}+title{${2:New document}}+{}+style)+body>(h1>{${3: New Document}})+{${0}}"
}

Breakdown

A string with the doctype:

{<!DOCTYPE html>}

The <html> element with a lang attribute. The value of the lang attribute is a variable you can change in the VS Code settings (Code → Preferences → Settings).

html[lang=${1}${lang}]

You can change the default natural language of the page by searching for “emmet variables” in VS Code settings and changing the lang variable. You can add your custom variables here, too.
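In settings.json, that could look something like this — the German lang value is only an example:

{
  "emmet.variables": {
    "lang": "de",
    "charset": "UTF-8"
  }
}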

The <head> includes the charset meta tag, viewport meta tag, <title>, and <style> tag. {} creates a new line.

(head>meta:utf+meta:vp+{}+title{${2:New document}}+{}+style)

Let’s have a first quick look at what this gives us.

<!DOCTYPE html>
<html lang="en">
<head>
  <meta http-equiv="Content-Type" content="text/html;charset=UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">

  <title>New document</title>
</head>
</html>

Looks okay, but the meta:utf abbreviation creates the old-fashioned way of defining the charset in HTML, and meta:vp creates two tab stops I don’t need because I never use a different setting for the viewport.

Let’s overwrite these snippets before we move on.

{
  "meta:vp": "meta[name=viewport content='width=device-width, initial-scale=1']",
  "meta:utf": "meta[charset=${charset}]"
}

Last but not least, the <body> element and an <h1> with default text, followed by the final tab stop.

body>(h1>{${3: New Document}})+{${0}}

The final boilerplate:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">

  <title>New document</title>

  <style>
    * { box-sizing: border-box; }

    *:focus { outline: 2px solid red; }

  </style>
</head>
<body>
  <h1> New Document</h1>

</body>
</html>

For me, that’s the perfect minimal debugging setup.

Boilerplate Medium

While I use the first boilerplate only for quick demos, the second boilerplate can be used for complex pages. The snippet creates the following:

Basic site structure,
viewport meta tag,
Page title,
.no-js/.js classes,
External screen and print stylesheets,
description and theme-color meta tag,
Page structure.

{
  "!!": "{<!DOCTYPE html>}+html[lang=${1}${lang}].no-js>{<!-- TODO: Check lang attribute --> }+(head>meta:utf+meta:vp+{}+title{${1:🛑 Change me}}+{}+(script[type='module']>{document.documentElement.classList.replace('no-js', 'js');})+{}+link:css+link:print+{}+meta[name='description'][content='${2:🛑 Change me (up to ~155 characters)}']+{<!-- TODO: Change page description --> }+meta[name='theme-color'][content='${2:#FF00FF}'])+body>page"
}

Yeaaah, I know, that looks like gibberish. Let’s dissect it.

Breakdown

The doctype and the root element are like in the first example, but with an additional no-js class and a comment that reminds me to change the lang attribute, if necessary.

{<!DOCTYPE html>}+html[lang=${1}${lang}].no-js>{<!-- TODO: Check lang attribute --> }

The TODO Highlight extension makes the comment really pop.

The <head> includes the charset meta tag, viewport meta tag, <title>. {} creates a new line.

(head>meta:utf+meta:vp+{}+title{${1:🛑 Change me}}+{}

A script with a line of JavaScript. I’m cutting the mustard at JS module support: if a browser supports JavaScript modules, it’s a browser that supports modern JavaScript (e.g. modules, ES6 syntax, fetch, and so on). I ship most JS only to these browsers, and I use the js class in CSS when a component is styled differently while JavaScript is active.

(script[type='module']>{document.documentElement.classList.replace('no-js', 'js');})+{}

Two <link> elements; the first links to the main stylesheet and the second to a print stylesheet.

link:css+link:print+{}

The page description:

meta[name='description'][content='${2:🛑 Change me (up to ~155 characters)}']+{<!-- TODO: Change page description --> }

The theme-color meta tag:

meta[name='theme-color'][content='${2:#FF00FF}'])

The body element and the basic page structure:

body>page

The final boilerplate looks like this:

<!DOCTYPE html>
<html lang="en" class="no-js">
<!-- TODO: Check lang attribute -->
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">

  <title>🛑 Change me</title>

  <script type="module">
    document.documentElement.classList.replace('no-js', 'js');
  </script>

  <link rel="stylesheet" href="style.css">
  <link rel="stylesheet" href="print.css" media="print">

  <meta name="description" content="🛑 Change me (up to ~155 characters)">
  <!-- TODO: Change page description -->
  <meta name="theme-color" content="#FF00FF">
</head>
<body>
  <header>
    <h1></h1>
  </header>
  <main></main>
  <footer>©</footer>
</body>
</html>

Full Boilerplate

The full boilerplate is similar to the second boilerplate; the differences are additional meta tags and a script tag.

The snippet creates the following:

Basic site structure,
viewport meta tag,
Page title,
js/no-js classes,
External screen and print stylesheets,
description and open graph meta tags,
theme-color meta tag,
canonical <link> tag,
Favicon tags,
Page structure,
<script> tag.

{
  "!!!": "{<!DOCTYPE html>}+html[lang=${1}${lang}].no-js>{<!-- TODO: Check lang attribute --> }+(head>meta:utf+meta:vp+{}+title{${1:🛑 Change me}}+{}+(script[type='module']>{document.documentElement.classList.replace('no-js', 'js');})+{}+link:css+link:print+{}+meta[property='og:title'][content='${1:🛑 Change me}']+meta[name='description'][content='${2:🛑 Change me (up to ~155 characters)}']+meta[property='og:description'][content='${2:🛑 Change me (up to ~155 characters)}']+meta[property='og:image'][content='${1:https://}']+meta[property='og:locale'][content='${1:en_GB}']+meta[property='og:type'][content='${1:website}']+meta[name='twitter:card'][content='${1:summary_large_image}']+meta[property='og:url'][content='${1:https://}']+{<!-- TODO: Change social media stuff --> }+{}+link[rel='canonical'][href='${1:https://}']+{<!-- TODO: Change canonical link --> }+{}+link[rel='icon'][href='${1:/favicon.ico}']+link[rel='icon'][href='${1:/favicon.svg}'][type='image/svg+xml']+link[rel='apple-touch-icon'][href='${1:/apple-touch-icon.png}']+link[rel='manifest'][href='${1:/my.webmanifest}']+{}+meta[name='theme-color'][content='${2:#FF00FF}'])+body>page+{}+script:src[type='module']"
}

This incredibly long snippet creates this:

<!DOCTYPE html>
<html lang="en" class="no-js">
<!-- TODO: Check lang attribute -->
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">

  <title>🛑 Change me</title>

  <script type="module">
    document.documentElement.classList.replace('no-js', 'js');
  </script>

  <link rel="stylesheet" href="style.css">
  <link rel="stylesheet" href="print.css" media="print">

  <meta property="og:title" content="🛑 Change me">
  <meta name="description" content="🛑 Change me (up to ~155 characters)">
  <meta property="og:description" content="🛑 Change me (up to ~155 characters)">
  <meta property="og:image" content="https://">
  <meta property="og:locale" content="en_GB">
  <meta property="og:type" content="website">
  <meta name="twitter:card" content="summary_large_image">
  <meta property="og:url" content="https://">
  <!-- TODO: Change social media stuff -->

  <link rel="canonical" href="https://">
  <!-- TODO: Change canonical link -->

  <link rel="icon" href="/favicon.ico">
  <link rel="icon" href="/favicon.svg" type="image/svg+xml">
  <link rel="apple-touch-icon" href="/apple-touch-icon.png">
  <link rel="manifest" href="/my.webmanifest">

  <meta name="theme-color" content="#FF00FF">
</head>
<body>
  <header>
    <h1></h1>
  </header>
  <main></main>
  <footer>©</footer>

  <script src="" type="module"></script>
</body>
</html>

Custom CSS Snippets

For the sake of completeness, here are some of the CSS snippets I’m using.

Debugging

This snippet creates a 5px red outline with a custom offset.

"debug": "outline: 5px solid red;\noutline-offset: -5px;"

Centering

A snippet that sets display to flex, and centers its child items.

"center": "display: flex;\njustify-content: center;\nalign-items: center;"

Sticky

Sets the position property to sticky, with two tab stops at the top and left properties.

"sticky": "position: sticky;\ntop: ${1:0};\nleft: ${2:0};"

User Snippets

At the beginning of this article, I mentioned that VS Code also provides custom snippets. The difference from Emmet snippets is that you can’t use abbreviations, but you can still define tab stops and make use of internal variables.

How to get the best out of user snippets could be a topic for another article, but here’s an example of a custom CSS snippet I’ve defined:

"Visually hidden": {
  "prefix": "vh",
  "body": [
    ".u-vh {",
    "  position: absolute;\n  white-space: nowrap;\n  width: 1px;\n  height: 1px;\n  overflow: hidden;\n  border: 0;\n  padding: 0;\n  clip: rect(0 0 0 0);\n  clip-path: inset(50%);\n  margin: -1px;",
    "}"
  ],
  "description": "A utility class for screen reader accessible hiding."
}

This snippet doesn’t just create CSS rules, but a whole declaration block when we type vh and press Enter or Tab.

.u-vh {
position: absolute;
white-space: nowrap;
width: 1px;
height: 1px;
overflow: hidden;
border: 0;
padding: 0;
clip: rect(0 0 0 0);
clip-path: inset(50%);
margin: -1px;
}

Final Words

It takes some time to create these snippets, but it’s worth the effort because you can customize Emmet to your personal preferences, automate repetitive tasks and save time in the long run.

I’d love to see which snippets you use, so please share them with us in the comments. If you want to use my settings, you can find my final snippets.json on GitHub.

Resources

Default CSS Emmet snippets
Default HTML Emmet snippets
Emmet cheat sheet
Emmet in VS Code docs

Smashing Podcast Episode 40 With Mike Cavaliere: What Is Chakra UI For React?

In this episode, we’re talking about Chakra UI. What is it and how can it help with your React projects? Drew McLellan talks to expert Mike Cavaliere to find out.

Show Notes

Chakra UI
Mike on Twitter
Mike’s personal website
Cut Into The Jamstack book

Weekly Update

Designing With Code: A Modern Approach To Design
written by Mikołaj Dobrucki
Automating Screen Reader Testing On macOS Using Auto VO
written by Cameron Cundiff
The Rise Of Design Thinking As A Problem Solving Strategy
written by Josh Singer
How To Run A UX Audit For A Major EdTech Platform
written by Mark Lankmilier
Creating A Multi-Author Blog With Next.js
written by Dom Habersack

Transcript

Drew McLellan: He’s a Senior Software Engineer for an agency called Echobind. He’s been writing code for two decades, and using JavaScript the whole time. He loves the Jamstack, and his new book, Cut Into The Jamstack, teaches the reader how to build a software as a service app from scratch. We know he knows his way around the Jamstack, but did you know he once got lost in the peanut butter aisle? My smashing friends, please welcome Mike Cavaliere. Hi, Mike. How are you?

Mike Cavaliere: I am absolutely smashing today.

Drew: That’s good to hear. I wanted to talk to you today about a project that I’d really not heard of, somehow, until I came across it in your Jamstack book. I’m not sure how I’d missed it because it seems to be maturing and well documented and a real… Just a great project. I’m hoping that today we can talk about it, and I can catch up to find out what I should’ve known all along. I’m talking about Chakra UI, of course. Tell me, what is Chakra UI? What space is it in, and what problem is it solving for us?

Mike: Chakra UI is a UI framework for React or UI toolkit, I guess they phrase it as. In any application stack, nowadays you don’t want to invent a UI from scratch. You want to grab some toolkit. That’s been the case for a while.

Mike: Chakra UI is a great approach on a React UI toolkit. There’s a number of perks to it, but one is that it’s… For one, it’s robust. That means it’s got just every UI element that you could imagine. It’s got switches. It’s got wrappers around grids. It’s got all types of things form elements.

Mike: It’s made to be very composable, so that everything uses style props. The components are great right out of the box. You can drop them in and use them as is. But if you want to make a tweak, it’s very easy to pass in some style properties. They’re fully accessible. The accessibility, which everybody talks about but always forgets to implement or takes a little effort to implement, is built in for you.

Mike: It’s not uncommon for me to put together something with Chakra UI and get a very good Lighthouse score. Actually, I was just checking the Cut Into The Jamstack website today, and the accessibility score is very high. It’s also very fully themeable. You can set theme configuration from beginning. There’s just a long list of perks to it.

Mike: It makes it very fast to develop, which was what originally attracted me to it. Echobind, we use it internally. But for me, I don’t have design sense. A little bit, but I’m not a designer by any means. I can grab components from Chakra and alter things ever so slightly to make it consistent and things just look good out of the box. You’re able to develop fast. Developer experience is great. It’s just awesome on so many levels.

Mike: Last thing I’ll say before I keep rambling about it. But it also has a lot of React Hooks that are helpers for very common functionality things that come along with these elements that you’re using. For example, dark mode. There are built-in hooks for using light or dark mode that just very unobtrusively let you toggle colors in your theme.

Mike: There’s another one, useDisclosure, which is for toggling things like modals, which always need an on/off state. But the Hook just simplifies that even more so you can focus on the things that the framework can’t infer automatically. I’ll cut it off there, because that was a lot.

Drew: That’s really good. Just so I’ve got my understanding right, first of all it’s Shakra not Chakra? Shakra?

Mike: I wouldn’t be the expert on that. I’ve been saying Shakra just because of yoga. But we’ll have to ask the founders to double-check.

Drew: It’s an off the shelf design system that you can drop in to build the UI for your project.

Mike: Yeah.

Drew: It’s specifically for React projects.

Mike: Yeah. There is a Chakra Vue project out there. I’m not a Vue person very much but I know that it does exist. There may be for other frames as well, but I’m very, very React focused so I’ve been using the Chakra default React one.

Drew: Yes. I’ve been familiar with React in the past. I’ve used React when I worked at Netlify. Now I do everything in Vue. That was one of the first things I looked at. Oh, is there a Vue? This looks good. Is there a Vue version of it? I found a Vue version of it and it seems to be quite a way behind. I think it’s on 0.9 or something, rather than 1.6 or whatever the current React version is. I’m not sure how current that is.

Drew: We’ve got fairly outdated frameworks out there. Things like Foundation UI, Bootstrap, Bulma. They’ve been around for a long time and they’re a previous generation of framework, it would seem. Then we’ve got some more modern approaches. I think a lot of listeners will be familiar with Tailwind and the Tailwind UI project. Where does Chakra UI fall amongst that landscape? It’s closer to something that Tailwind might… An approach that tailwind might take. Is that correct?

Mike: I think so. Admittedly, I’ve been meaning to really dig into Tailwind a lot more just because it’s so popular right now. But I can’t speak intelligently on the ins and outs of Tailwind itself and how… My sense is that Chakra and Tailwind are alternative approaches. You grab for one, not both at the same time, obviously.

Mike: I don’t yet know what the pros and cons are for both. I’ve just been so enamored with Chakra that I just keep using it by default. I’m like, “Okay, I know this really well now. I love it. I’ll get to learn the other one later.” But Tailwind obviously, extremely popular. I think Tailwind has their base framework in a UI toolkit. Is that fair?

Drew: Right. Yeah.

Mike: Okay. This would probably be more on par with the UI toolkit of Tailwind. On the Chakra homepage, they do have a comparison on why you might want to reach for one or the other, but I don’t have it internalized.

Drew: Yeah. That’s good. As we mentioned, for React projects and the way that manifests itself rather than some of these more traditional design systems which give you a whole load of class names to put on your HTML and you have to use some HTML structure, put the right classes on it. That’s the way you get the UI manifesting in your project. With Chakra, because it’s based on React, it’s giving you a whole load of components for each of those elements. You can just import into your project. Those components encapsulate their own markup and styling, do they?

Mike: Yeah. You won’t actually have to write a class using Chakra. I haven’t. I don’t even know if it’s possible. The whole React paradigm is a component composition and properties. Encapsulation of components means you pass certain properties into the component. In Chakra, you have this notion of a theme which is a global paradigm. There’s a default theme and it’s got values for color and spacing and certain units for all common things.

Mike: You can customize that theme. It customizes it globally. You can augment it however you need to. When you call the component itself, for example, a text input. An input component. That’s going to have default colors and border radius and padding and margin as defined by the theme. When you want to style it further, if you don’t want to do it on a global basis, for example, when I specify bottom margins, I do it on a case by case basis. I don’t do it at a global level because that can lead to catastrophe. You just pass it as a prop.

Mike: There are shortcut props. If I have an input component I just say, MB equals, and then a value and it’ll apply the margin bottom. Or they have MX and MY for vertical and horizontal. Or you could just specify M and pass in the string as you would the margin CSS property. There are no class names. It does the class names all dynamically and obfuscates that away from the user.

Drew: Yes. I think that’s where the comparison with Tailwind must come in. Because the way Tailwind works, is it gives you a whole load of classes. If you want to increase the margin, there’s a class that you can put on to increase the margin. It sounds actually you’re taking that same… It’s a different implementation, but the same approach to how its architected. We’re actually using props and you’re passing a prop in to adjust those things.

Drew: How easy is it to customize a design? Is it a case of just being able to tweak colors and margins and padding and make it look a little bit different? Or can you actually really brand up a theme with Chakra?

Mike: Oh, you can do whatever the heck you want. It’s great. You could style at the component level or the theme level. It just depends on how creative you want to be with it. I’ve managed to take some components and do some wild things with them. Part of what makes it really styleable is that these components are pretty atomic.

Mike: Using the text box example again, if you want a text box, your component is just that. You can style everything around it or you can style the text box itself. Or you can change the theme. Setting the colors to rebrand everything globally.

Mike: I actually tweeted the creator of Chakra UI, Segun, saying they should put a gallery on the site because it’s really great. You can create some beautiful designs with it. They’re very varied and you might not know on the surface there. I don’t know if Chakra UI has any tells that make it obvious that you’re using Chakra UI for your site.

Mike: I’ve seen some pretty nice stuff with it. But you can do anything with it. I’ve done static websites. The Cut Into The Jamstack homepage is done with it. Just as one example. We’ve used it at Echobind plenty. I can’t remember if we’ve used that for echobind.com. But certainly many of our clients sites. Then the app that I’ve been building, JamShots, it’s an app. It doesn’t have marketing pages yet. But it’s all just UI and all that UI is built using Chakra.

Mike: One other thing just while I’m praising Chakra is that, there’s another website that I’ve been using a lot lately, and I use in… I’m going to introduce into the book as well. Chakratemplates.net. Chakra-templates.net. It’s a common design patterns that whoever’s contributing is finding a hero unit or a pricing unit. They just have to copy and pastable Chakra code.

Mike: I use that entirely for the book homepage because it just saved me so much time in developing it. It’s like, oh, you have a pricing model. Let me copy and paste that. Let me just adjust the style props a little bit so that everything’s consistent on my site. That’s it. It’s just another thing that is separate from Chakra itself, but it just, it’s such a time saver because you need these things on so many websites and who wants to reinvent the wheel every time.

Drew: It sounds it can be a real time-saver, not only for personal projects where you want to roll something out quickly, but in an agency context.

Mike: Oh, yes. Absolutely.

Drew: Does that apply equally to app interfaces as well as marketing sites? Does it skew one way or the other or is it just generally useful whatever you’re building?

Mike: I’d say it’s both. It definitely is. I’ve used it for both. Our company has used it for both. We build, I’d say we lean heavily towards building full stack applications and mobile applications. We definitely have a lot more need for UI than marketing stuff. Although we sometimes build that as well. It’s useful for both.

Mike: There is something on the site that they do mention, like when would you not want to use Chakra? They do say that because of the way it simplifies this interface CSS. There might be challenges when you have a lot of data on screen. If you’re creating tons and tons of DOM elements and doing a lot of real-time updates, you might or might not run into performance challenges.

Mike: I haven’t seen a performance issue ever. But I also haven’t built something that was so data intensive in real-time. It’s a concern. If I was going to build an app like that, I’d probably want to spike up two different approaches anyway, just to see how they perform with a whole lot. But yeah. It’s universally useful for both of those cases.

Drew: I guess there’s always a trade-off, isn’t there with technology choices? Something that makes it really, really simple. Really quick to implement. The trade-off might be once you’re creating a 1,000 data points or whatever on a page, that method of working is not going to perform well and slows you down.

Drew: Yes. I think that’s fair. I tend to find in technology choices, the most important thing is just to know. Just to know what the trade-offs are and what the limitations are. None of them are good or bad. You just need to find an appropriate balance for your own situation.

Drew: As you’d expect to find with a design system of this kind, it comes with components for typography. For layout. Then down to the nitty-gritty of buttons and form elements, and there’s an icon library. There’s pretty much everything that you’d expect to see on a design system’s kitchen sink page. You’ve got everything there. It all seems pretty modern to me. I noted that the layout grid component actually uses CSS grid, which is always nice to see. It’s not just giving you some flexbox.

Mike: Oh, yeah. Totally.

Drew: Is it generally very flexible to work with? Do you find that the layout elements you’re able to build any type of UI that you need to?

Mike: Yeah. Yeah. Absolutely. What’s great about it is they, in some cases provide more than one level of abstraction. In the case of CSS grid, they have a simple grid which is like, okay. You want to drop it in and here’s your grid. You just put stuff inside of it and you specify, I think the number of columns or something like that. Then you’ve got a grid.

Mike: But if you need to have a bit more flexibility over in the behavior of the grid, then you’ve got a generic grid component, which is probably… The simple grid component probably wraps the other grid component. It’s just another facade on top of itself.

Mike: That approach towards composition of components, it’s a valuable paradigm in the React world because of the same thing. If you have a component that is very versatile and has a lot of props to it, well then, there might be a set of use cases that you want to use the component one way for fairly commonly. You just wrap it with another component with static or pre-specified props for the more robust components.

Mike: They use that approach really well in Chakra. I haven’t run into anything that I can’t do with it yet. I’m sure it’s out there somewhere. Or something that’s just a little more of hassle to do. But it generally hasn’t happened yet. Not that I can think of at least.

Drew: Well, one of the things I was really pleased to see and something that you mentioned earlier as well, is there seems to be this quite strong focus on accessibility.

Mike: Yes.

Drew: Certainly in the promotional information. Is that born out in the code itself? Do they practice what they preach? Is it actually got good accessibility built in?

Mike: I think so. The closest I’ve done to putting it to the test is running Lighthouse against it. It consistently provides high scores for accessibility. I typically will use Chakra with Next.js. Next.js is performant right out of the box. It’s quite often that you’ll see high scores and everything. I just tweeted today about how the book’s homepage has three out of the four Lighthouse scores. There’s accessibility, best practices, performance, and a fourth one I’m not thinking of right now.

Mike: Everything but performance came out close to 100%. The performance part is on me just because I put a lot on the page and I haven’t optimized it yet. It tends to do that. The accessibility scores in Lighthouse are great whenever I use Chakra UI.

Drew: That’s great. You mentioned they’re using server-side rendering and what have you. Things like Next and Gatsby and what have you, are absolutely no problem, are they? There aren’t any hurdles to be aware of using Chakra with those?

Mike: Oh, no. Not at all. I haven’t used it. I tend to focus on Next.js. I haven’t plugged into Gatsby or any of the other SSR tools. But as long as the framework, it doesn’t have anything that would block it from using it as such, then it should be fine.

Mike: For React, Chakra provides a context API provider. A theme provider so that when you… In my Next.js apps for example, you have a… Next.js has an underscore app JS or TS file that just wraps every page in the application. You just plug the theme provider in there and Chakra does the rest of the work and it just becomes available everywhere. There are no hurdles to adding it into Next.js, certainly. But I imagine not to Chakra either.

Drew: Does Chakra use TypeScript? I believe it does, doesn’t it?

Mike: It supports it. Yep.

Drew: It supports it. That’s a big plus for people who use TypeScript already in their projects. Is there any downsides to that if people aren’t already using TypeScript?

Mike: I don’t think so. I use TypeScript by default in all my projects, and so does Echobind. But when I do things on a personal level, I use… I like to say sprinkle of TypeScript. Typescript is extremely valuable in reducing errors by creating static types. There’s a carrier for it though, where depending on your knowledge of it, TypeScript can be a real hurdle.

Mike: My minimum threshold for… The strictness of TypeScript that I use is fairly low simply because you can get a lot of value out of TypeScript with basic typing. It will prevent a lot of common mishaps. When you go into the more advanced typing, if you’re not super comfortable with that stuff, it can really slow you down and frustrate you.

Mike: That’s just to say same thing with Chakra and TypeScript. I tend to use a light amount of TypeScript, at least in the beginning until I’m really fleshing out and stabilizing a project. But it presents no challenges in using Chakra, either with or without TypeScript. It’s great with. I love it with, but you can certainly use it without as well.

Drew: Yeah. I find with TypeScript that you get 80% of the benefits, as you say, with just with a few types. If you get too far down the rabbit hole, you end up with a script that’s mostly TypeScript. Then a bit of JavaScript to the bottom.

Mike: Or you spend so much time trying to figure out the right way to type something and your brain blows up. That’s how you just put any or unknown. You shortcut it. Which I advocate for in cases like that. If it’s taking too much time for you to get something done, then there is a lever you can pull.

Drew: The Chakra documentation seems to be really well pitched, I thought, with… It has an overview of each component. Then it really usefully includes any technical notes about the design considerations that were made when implementing that component. Which, as a front end engineer, I think that’s great. They’re talking my language. I understand. I know what the component is doing slightly under the hood.

Drew: That’s just from my perspective, browsing the documentation without a real project that I’m working on. When you’re actually working on a project and deep in the weeds of it, just the documentation hold up? Is it as useful as it seems?

Mike: Oh yeah. Absolutely. My perspective is a little different. I don’t always need to know what’s going on under the hood, but I feel I can infer usually. If I’m looking at a box component, I’m just looking at the docs now while we’re talking for refresher. If I look at a box component, I’m like, “Okay. That’s probably a div by default. I see it passing in the gradient properties, whatever.”

Mike: I can get some sense of what’s going on in the hood without fully understanding their magic to translate CSS. Translate the props to CSS. But the documentation is great in that it’s very linear. It’s very consistent. It lists everything with examples. A little copy and paste.

Mike: It just uses really good white space so looking at the page doesn’t seem overwhelming. You can find what you need easily. Their search is great too. Their search is helpful. 90% of the time, I think that’s what I’m going in there for. May be going in there and seeing if a component exists to do something. It usually does. And stumbling across something else that was useful that I didn’t know about. Or just refreshing myself on some of the principles. I can always pretty much find what I need here.

Drew: The only thing that I didn’t like about the docs from glancing around was the number of ads on it. On every page for their commercial offering of Chakra UI Pro.

Mike: I hadn’t seen them. Interesting. I’ve seen it. I’ve definitely seen it. But I’m not seeing it right now. Oh yeah. Okay. There’s Chakra UI Pro. I guess I filtered it out mentally. I hear you. At least it’s not too big and in your face.

Drew: It’s not too big. It’s just in the wrong place. It’s just where you’re looking for the information. Which I guess is why they’ve done it. That’s worth mentioning when considering the ecosystem and everything around the project: there’s a pro set of components that is… I guess it’s equivalent to some of the stuff that’s in Tailwind UI. Marketing pages and hero components and more of these composed sections of pages and entire pages and layouts and things. That is available from the makers of Chakra, but as a commercial offering.

Mike: Yeah. Just taking a quick glance at it now. Some of these are actually available. Or versions of them are available for free like Chakra templates. It’s Chakra templates, I guess, is the open source solution to Chakra Pro or the open-source competitor. I’m sure you’re going to get a ton by paying for this. It looks Chakra Pro is extremely robust and reasonably priced if you have a paying professional need for these. There’s a couple of options for your project, it looks like.

Drew: Yeah. It sounds there’s quite an ecosystem built up around it. Do you know how long the project’s been going and what following there is? Is it in widespread use in the React community?

Mike: I want to say yes. I don’t know to what degree. I’d be curious to just see what’s the, I guess, market share of Tailwind versus Chakra nowadays. I do know Chakra got an award relatively recently. GitNation React Award for the most impactful project to the community. I’d say it’s pretty big and pretty well embraced. With good reason, which is great. People are definitely enjoying it. I’m not the only one.

Drew: One thing that’s always worth thinking about when bringing a dependency into your project is what happens when you need to update that dependency.

Mike: Yeah.

Drew: Chakra is being improved all the time, I imagine. Is it a case of once you’ve imported it and built with it, you leave it locked on a certain version? Or is it generally safe to keep updated? Is it relatively stable in terms of the design and things of your site not changing as Chakra updates?

Mike: It has been so far. Yeah. Mainly, I’d say that’s because of the progress of development. They’re on version 1.6.3 right now. A number of months ago, they went from zero to one. That was the only time they had breaking changes. Since then, they’ve just been constantly doing feature releases and bug fixes.

Mike: For the last at least couple of months, everything’s been just additions. Additions and fixes. There’s no breaking changes involved. I don’t know what the roadmap looks like, but I imagine it’ll continue to be so. Every time I’ve upgraded it, one of these minor versions, it’s been fine. I’ve never seen something break from it. But when they came out with 1.0, there were some breaking changes. I don’t remember it being catastrophic though.

Drew: Do you know what the situation is with bundle sizes and the ability to tree shake Chakra? Does it add a lot of weight to your project or things are only imported as you use them?

Mike: I don’t recall off hand, honestly. I probably should know that. I haven’t noticed it adding a lot of weight. Mainly because you are importing the components individually. Not importing all of Chakra or anything like that. I’d say it’s in line having support for tree shaking, but I haven’t put it to the test. So far, I haven’t had things that had enormous weight coming specifically from it, though.

Drew: Yeah. That’s always an important consideration, isn’t it?

Mike: Yep.

Drew: Is there anything else we should know about Chakra UI before we dive right in and use it on a project?

Mike: No. It’s great. There’s pretty active community too. I see updates often. I’m looking at the documentation now and seeing components I hadn’t seen before. I see there’s a lot of feature addition going on. That’s great.

Drew: Yeah. That is great. You’ve got a book out called Cut Into The Jamstack, which is a preview release. A beta release at the moment. You’re self-publishing that. Is that right?

Mike: Yeah. Yeah. I am. It was my first attempt at a technical book. I just want to get it out there without committing to something like, it’s formal, I guess. I’m also somebody who likes informality, especially when creating things. It gives me the ability to do it my way by doing it like that.

Drew: The book literally walks the reader through building a software as a service app.

Mike: Yep.

Drew: All on the Jamstack. Why was it you decided to write this now and to take this approach with the book?

Mike: Good question. I’ve been coding for some 20 years now and I think I attempted to write a book a while back and it just didn’t quite take shape. I’m at a point in my career where I really want to share more knowledge. I’ve been using it for so many years and I feel the itch to really put more of it out there and help others.

Mike: Around October of last year, I had this… I wanted to put something out there that was product. An ebook felt like a really good way to start. I’m really passionate about Next.js and the things you can do with it. I use the term Jamstack and I consider Next.js as part of the Jamstack because it has a static site generation as a default.

Mike: But I think it’s one thing that doesn’t get talked about enough, in my opinion, or could use some more explanation is building software as a service applications with it. Because the Jamstack isn’t just for websites. It works really well for content driven websites because of being static and snappy and SEO friendly.

Mike: But there’s so much rich functionality there, especially in Next.js where Vercel had their Next.js Conference yesterday and they’re releasing more and more amazing features in there. I’m passionate about building software as a service. Software websites are great, but software is meant to do things.

Mike: This stack to me is very much the future of software as a service development. It reminds me of what Ruby on Rails was when it came out. It was an evolution, in a manner of speaking. It automated and simplified a lot of things that you used to have to do manually. It sped up the pace of development, and it increased the quality of it.

Mike: Next.js and the Jamstack and Vercel and Chakra UI, they’re all producing things that simplify a lot of things for you. Next.js, it simplifies a lot of speed-related issues and accessibility-related issues. Internationalization. Those are all… routing is done for you. You don’t have to worry about client-side or server-side routing. It’s automatic. Chakra UI does that with accessibility and theming. These tools put together, they just… The developer experience gets really great and everything just… It gives you freedom to really create software.

Mike: To answer your question. The reason I put out a book now is because of the right time of me really wanting to put something out there and with the Jamstack ecosystem starting to come to fruition and growing. It also gave me a chance to write more code into Jamstack, which, I just love it.

Drew: I think, as you say, it’s easy to get on board with the idea of Jamstack when you’re thinking about websites and typically lightweight websites. But taking that next step into thinking about how you can use the approach to build a full web application, it’s much harder. It’s a bigger hurdle, I think, to get over if you’re used to thinking in the server side mindset.

Mike: Yeah.

Drew: It’s a much bigger jump to see, okay. I can put my authentication out to a service-

Mike: Yes.

Drew: … and I can… I guess for the readers, from the reader’s point of view of your book, just by going through and building this example, following along with you, it’s probably a great way to get over that hurdle to just help gently shift your mindset into, okay. This is how I could do all these things, but on the Jamstack. Would you agree with that?

Mike: Yeah. That’s what I’m hoping. I do think it does. That’s really what it’s intended for. I was preparing a talk recently, a conference talk that… Part of my motivation for the topic and the way I decided to teach in this book is that I could teach you one programming language or a framework, but it feels better to introduce you to the stack in a hands-on manner because every developer who’s got a lot of experience is good at going to documentation and Googling and using Stack Overflow. Why would I waste your time teaching that to you?

Mike: I want to give you a quick, deep dive into the stack and what you can do with it. You’re going to pick up what’s great about each of the individual pieces. NextAuth and Prisma. Next.js and Chakra. I’ll link you to documentation just to save you a couple of clicks. But you’re going to see, through an interactive example, how these pieces connect together. You’re also going to get an understanding of the hard parts.

Mike: One thing I’m going into depth on, for example, is this feature that I’m building for asynchronous multi-file upload. Next.js has a front end and a backend to it. There’s the front of the front end and the back of the front end, if you use that analogy: you’ve got the React layer, then you’ve got the Node layer. There’s these API routes.

Mike: If you want to do multi-file upload with that and use a service, I use Cloudinary in the book. But if you use an API service for your image and media uploads, which you should, there’s a lot of moving pieces there. There is the client side, which the user interacts with. There’s the API requests to Cloudinary or the other provider. But then you have to make multiple API requests to make it efficient. You have to do some signing against Cloudinary, which you need an API call for.

Mike: You need to take that signature and you need to do the upload, which goes from the browser, circumvents your API, and goes directly to Cloudinary. Then you need to save that in your database, which uses the back end of your front end. There’s many pieces and Next.js… In the Next.js community, there isn’t an open source plugin for that yet. Which I may extract out of the app now that I have built it and put it into one, because other people are going to have this need.

Mike: Anyway, all that’s just to say that I think that’s something really valuable to teach people. Even if you’re a senior engineer, for a few dollars, you get all this wrapped up for you with a bow on it, to be like, okay. This is a series of tools that work really well together for building SaaS apps with this stack. That hurdle of having to figure out a solution or write something custom is gone. Here’s an approach that works.

Mike: I just, I take a lot of joy in trying to prevent people from having to reinvent the wheel. Even though it’s fun to reinvent the wheel, if you want to just ship something, the more you can reduce that, the better.

Drew: That sounds very, very helpful. The book is in beta now. If people buy it now, do they get updates as it improves?

Mike: Yep. Yeah. It’s immensely discounted. It’s $10 now. When I finish it, it will be $30. Whoever gets it now will just get updates for the life of the book.

Drew: Fantastic.

Mike: I’ve got another one coming up in probably a couple of weeks. Yeah. Yeah. It’s already 107 pages and it’s got a source-code repo that will be shipped with it. That comes along with it now. In the first 107 pages, it goes from setup, to building your first full stack page, to building a CRUD for photo galleries (Create, Read, Update, Delete), so the front end and backend components. Then shipping a deployment to Railway and Vercel. It’s pretty practical right away. Then the further couple of hundred pages are going to be more in depth with the coding topics.

Drew: Great. That’s available now at cutintothejamstack.com.

Mike: Yep. That’s it.

Drew: I’ve been learning all about Chakra UI. What have you been learning about lately, Mike?

Mike: I’ve been digging deeper on the stack. It constantly teaches me new things. One example is just with the Vercel conference yesterday, the Next.js Conf. Next.js 11 is now out and it’s just got a ton of great things with it. There’s a real-time collaboration tool built in, so when you ship a preview deploy, you can have people commenting on it and even moving their mouse around the screen, it looks like.

Mike: In addition, their performance is getting better and better. Next.js’ image component, which I use heavily now, is going to have automatic placeholders. It’s going to be even more streamlined. I’m constantly learning better and better ways to do things in this stack. There always seems to be more.

Drew: Always. Always more to learn. If you, dear listener, would like to hear more from Mike, you can follow him on Twitter where he’s @mcavaliere, and his personal website is mikecavaliere.com. The book Cut Into The Jamstack, which amongst other things shows a practical implementation of Chakra UI, is again at cutintothejamstack.com. Thanks for joining us today, Mike. Did you have any parting words?

Mike: Nope. Thanks so much for having me, Drew, and keep smashing out there. Maybe I should rephrase that.

Breaking Down Bulky Builds With Netlify And Next.js

One of the biggest pains of working with statically generated websites is the incrementally slower builds as your app grows. This is an inevitable problem any stack faces at some point and it can strike from different points depending on what kind of product you are working with.

For example, if your app has multiple pages (views, routes) when generating the deployment artifact, each of those routes becomes a file. Then, once you’ve reached thousands, you start wondering when you can deploy without needing to plan ahead. This scenario is common on e-commerce platforms or blogs, which are already a big portion of the web but not all of it. Routes are not the only possible bottleneck, though.

A resource-heavy app will also eventually reach this turning point. Many static generators carry out asset optimization to ensure the best user experience. Without build optimizations (incremental builds, caching, we will get to those soon) this will eventually become unmanageable as well — think about going through all images in a website: resizing, deleting, and/or creating new files over and over again. And once all that is done: remember Jamstack serves our apps from the edges of the Content Delivery Network. So we still need to move things from the server they were compiled at to the edges of the network.

On top of all that, there is also another fact: data is often dynamic, meaning that when we build our app and deploy it, it may take a few seconds, a few minutes, or even an hour. Meanwhile, the world keeps spinning, and if we are fetching data from elsewhere, our app is bound to get outdated. Unacceptable! Build again to update!

Build Once, Update When Needed

Solving Bulky Builds has been top of mind for basically every Jamstack platform, framework, or service for a while. Many solutions revolve around incremental builds. In practice, this means that builds will be as bulky as the differences they carry against the current deployment.

Defining a diff algorithm is no easy task though. For the end-user to actually benefit from this improvement there are cache invalidation strategies that must be considered. Long story short: we do not want to invalidate cache for a page or an asset that has not changed.

Next.js came up with Incremental Static Regeneration (ISR). In essence, it is a way to declare for each route how often we want it to rebuild. Under the hood, it simplifies a lot of the work to the server-side. Because every route (dynamic or not) will rebuild itself given a specific time-frame, and it just fits perfectly in the Jamstack axiom of invalidating cache on every build. Think of it as the max-age header but for routes in your Next.js app.

To get your application started, ISR is just a configuration property away. On your route component (inside the /pages directory), go to your getStaticProps method and add the revalidate key to the return object:

export async function getStaticProps() {
  const { limit, count, pokemons } = await fetchPokemonList()

  return {
    props: {
      limit,
      count,
      pokemons,
    },
    revalidate: 3600 // seconds
  }
}

The above snippet will make sure my page rebuilds every hour and fetches more Pokémon to display.

We still get the bulk builds every now and then (when issuing a new deployment). But this allows us to decouple content from code: by moving content to a Content Management System (CMS), we can update information in a few seconds, regardless of how big our application is. Goodbye to webhooks for updating typos!

On-Demand Builders

Netlify recently launched On-Demand Builders, which is their approach to supporting ISR for Next.js, but it also works across frameworks including Eleventy and Nuxt. In the previous section, we established that ISR was a great step toward shorter build times and addressed a significant portion of the use cases. Nevertheless, the caveats were there:

Full builds upon continuous deployment. The incremental stage happens only after the deployment and only for the data; it is not possible to ship code incrementally.
Incremental builds are a product of time. The cache is invalidated on a time basis, so unnecessary builds may occur, or needed updates may take longer depending on the revalidation period set in the code.

Netlify’s new deployment infrastructure allows developers to create logic to determine what pieces of their app will build on deployment and what pieces will be deferred (and how they will be deferred).

Critical: No action is needed. Everything you deploy will be built upon push.
Deferred: A specific piece of the app will not be built upon deploy; it will be deferred to be built on-demand whenever the first request occurs, then it will be cached like any other resource of its type.

Creating An On-Demand Builder

First of all, add the @netlify/functions package as a devDependency to your project:

yarn add -D @netlify/functions

Once that is done, it is just the same as creating a new Netlify Function. If you have not set a specific directory for them, head on over to netlify/functions/ and create a file with any name for your builder.

import type { Handler } from '@netlify/functions'
import { builder } from '@netlify/functions'

const myHandler: Handler = async (event, context) => {
  return {
    statusCode: 200,
    body: JSON.stringify({ message: 'Built on-demand! 🎉' }),
  }
}

export const handler = builder(myHandler)

As you can see from the snippet above, the On-Demand Builder differs from a regular Netlify Function in that it wraps its handler inside a builder() method. This method connects our function to the build tasks. And that is all you need to have a piece of your application deferred for building only when necessary. Small incremental builds from the get-go!

Next.js On Netlify

To build a Next.js app on Netlify, there are 2 important plugins that one should add to have a better experience in general: Netlify Plugin Cache Next.js and Essential Next-on-Netlify. The former caches your Next.js build more efficiently, and you need to add it yourself, while the latter makes a few slight adjustments to how the Next.js architecture is built so it better fits Netlify’s, and it is available by default to every new project that Netlify identifies as using Next.js.

On-Demand Builders With Next.js

Build performance, deploy performance, caching, developer experience. These are all very important topics, but it is a lot, and it takes time to set up properly. Then we get to that old discussion about focusing on Developer Experience instead of User Experience, which is when things go to a hidden spot in a backlog to be forgotten. Not really.

Netlify has got your back. In just a few steps, we can leverage the full power of the Jamstack in our Next.js app. It’s time to roll up our sleeves and put it all together now.

Defining Pre-Rendered Paths

If you have worked with static generation inside Next.js before, you have probably heard of the getStaticPaths method. This method is intended for dynamic routes (page templates that will render a wide range of pages).
Without dwelling too much on the intricacies of this method, it is important to note that the return type is an object with 2 keys; in our Proof-of-Concept, this will be the [pokemon] dynamic route file:

export async function getStaticPaths() {
  return {
    paths: [],
    fallback: 'blocking',
  }
}

paths is an array carrying all paths matching this route which will be pre-rendered;
fallback has 3 possible values: blocking, true, or false.

In our case, our getStaticPaths is determining:

No paths will be pre-rendered;
Whenever this route is called, we will not serve a fallback template, we will render the page on-demand and keep the user waiting, blocking the app from doing anything else.

When using On-Demand Builders, make sure your fallback strategy meets your app’s goals; the official Next.js fallback docs are very useful here.
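If you opt for fallback: true instead, the page component is expected to render a temporary state while Next.js generates the real page in the background. Below is a minimal sketch of that pattern; the component name and loading markup are placeholders of my own, not code from this project:

import { useRouter } from 'next/router'

export default function PokemonPage({ pokemon }) {
  const router = useRouter()

  // With `fallback: true`, Next.js serves this placeholder on the very
  // first request while the page is being generated in the background.
  if (router.isFallback) {
    return <p>Loading…</p>
  }

  return <h1>{pokemon.name}</h1>
}

With 'blocking', as used in this article, no placeholder is needed because the request simply waits until the page has been generated.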

Before On-Demand Builders, our getStaticPaths was slightly different:

export async function getStaticPaths() {
  const { pokemons } = await fetchPkmList()

  return {
    paths: pokemons.map(({ name }) => ({ params: { pokemon: name } })),
    fallback: false,
  }
}

We were gathering a list of all pokémon pages we intended to have, mapping each pokémon object to just a string with the pokémon’s name, and returning the { params } object carrying it to getStaticProps. Our fallback was set to false because if a route was not a match, we wanted Next.js to throw a 404: Not Found page.

You can check both versions deployed to Netlify:

With On-Demand Builder: code, live
Fully static generated: code, live

The code is also open-sourced on GitHub and you can easily deploy it yourself to check the build times. And with this cue, we slide onto our next topic.

Build Times

As mentioned above, the previous demo is actually a Proof-of-Concept; nothing is really good or bad if we cannot measure it. For our little study, I went over to the PokéAPI and decided to catch all the pokémon.

For reproducibility purposes, I capped our request to 1000 pokémon. That is not really all of them within the API, but it ensures that the number of pages will be the same for all builds, regardless of whether things get updated at any point in time.

export const fetchPkmList = async () => {
  const resp = await fetch(`${API}pokemon?limit=${LIMIT}`)
  const {
    count,
    results,
  }: {
    count: number
    results: {
      name: string
      url: string
    }[]
  } = await resp.json()

  return {
    count,
    pokemons: results,
    limit: LIMIT,
  }
}

I then pushed both versions in separate branches to Netlify; thanks to preview deploys, they can coexist in basically the same environment. To really evaluate the difference between both methods, the ODB approach was extreme: no pages were pre-rendered for that dynamic route. Though not recommended for real-world scenarios (you will want to pre-render your traffic-heavy routes), it clearly marks the range of build-time performance improvement we can achieve with this approach.

Strategy | Number of Pages | Number of Assets | Build time | Total deploy time
Fully Static Generated | 1002 | 1005 | 2 minutes 32 seconds | 4 minutes 15 seconds
On-Demand Builders | 2 | 0 | 52 seconds | 52 seconds

The pages in our little PokéDex app are pretty small and the image assets are very lean, but the gains in deploy time are very significant. If an app has a medium to large number of routes, it is definitely worth considering the ODB strategy.

It makes your deploys faster and thus more reliable. The performance hit only happens on the very first request; from then onward, the rendered page will be cached right on the Edge, making the performance exactly the same as the Fully Static Generated version.

The Future: Distributed Persistent Rendering

On the very same day that On-Demand Builders were announced and put into early access, Netlify also published their Request for Comments on Distributed Persistent Rendering (DPR).

DPR is the next step for On-Demand Builders. It capitalizes on faster builds by making use of such asynchronous building steps and then caching the assets until they are actually updated. No more full builds for a 10,000-page website. DPR gives developers full control over the build and deploy systems through solid caching and the use of On-Demand Builders.

Picture this scenario: an e-commerce website has 10k product pages; this means it would take something around 2 hours to build the entire application for deployment. We do not need to argue about how painful this is.

With DPR, we can set the top 500 pages to build on every deploy. Our heaviest-traffic pages are always ready for our users. But we are a shop, i.e. every second counts. So for the other 9,500 pages, we can set a post-build hook to trigger their builders, deploying the remainder of our pages asynchronously and caching them immediately. No users were hurt, our website was updated with the fastest build possible, and everything else that did not exist in the cache was then stored.

Conclusion

Although many of the discussion points in this article were conceptual and the implementation is yet to be defined, I am excited about the future of the Jamstack. The advances we are making as a community revolve around the end-user experience.

What is your take on Distributed Persistent Rendering? Have you tried out On-Demand Builders in your application? Let me know more in the comments or call me out on Twitter. I am really curious!

References

“A Complete Guide To Incremental Static Regeneration (ISR) With Next.js,” Lee Robinson
“Faster Builds For Large Sites On Netlify With On-Demand Builders,” Asavari Tayal, Netlify Blog
“Distributed Persistent Rendering: A New Jamstack Approach For Faster Builds,” Matt Biilmann, Netlify Blog
“Distributed Persistent Rendering (DPR),” Cassidy Williams, GitHub

The Many Shades Of July (2021 Desktop Wallpapers Edition)

Often, it’s the little things that inspire us and that we treasure most. The sky shining in the most beautiful colors as a seemingly endless summer day comes to an end, riding your bike through a light rain shower on a hot July afternoon, or maybe it’s a scoop of your favorite ice cream that refuels your batteries? No matter what big and small adventures July will have in store for you this year, our new batch of wallpapers is bound to cater for some inspiration along the way.

More than ten years ago, we started this wallpapers series to bring you a new selection of beautiful, unique, and inspiring wallpapers every month. It’s a community effort, made possible by artists and designers from all across the globe who challenge their creative skills to cater for some good vibes on your screens. And, well, it wasn’t any different this time around.

In this post, you’ll find their wallpapers for July 2021. All of them come in versions with and without a calendar and can be downloaded for free. A huge thank-you to everyone who submitted their artworks — we sincerely appreciate it! As a little bonus goodie, we also compiled some favorites from past July editions at the end of this post. Maybe you’ll discover one of your almost-forgotten favorites in there, too? Happy July!

You can click on every image to see a larger preview.
We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t in any way influenced by us but rather designed from scratch by the artists themselves.

Submit a wallpaper!

Did you know that you could get featured in our next wallpapers post, too? We are always looking for creative talent! Join in! →

Against The Current

“‘I don’t care that they stole my idea. I care that they don’t have any of their own.’ July 10th marks 165 years since the birth of Nikola Tesla, inventor, engineer, and futurist who helped shape the world as we know it today. Tesla’s inventions brought electricity to all corners of the world, paved the way for wireless communication, and revolutionized energy production. But underneath all the discoveries and the good they brought, Tesla’s life is shrouded in mystery. His dream of free wireless energy, research on understanding the aether, and the enigmatic disappearance of his records leaves many questions unanswered.” — Designed by PopArt Studio from Serbia.

preview
with calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1440, 1980×1200, 2560×1440
without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1440, 1980×1200, 2560×1440

Less Busy Work, More Fun!

Designed by ActiveCollab from the United States.

preview
with calendar: 1080×1920, 1400×1050, 1440×900, 1600×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160
without calendar: 1080×1920, 1400×1050, 1440×900, 1600×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160

Summer Season

“I’m an avid runner, and I have some beautiful natural views surrounding my city. The Smoky Mountains are a bit further east, so I took some liberties, but Tennessee’s nature is nothing short of beautiful and inspiring.” — Designed by Cam Elliott from Memphis, TN.

preview
with calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160
without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160

Flamingos

“Well, if I think about the month July, the first thing what comes up is summer. So I wanted a theme about summer and I went looking on the internet for summer things. And after a few ideas, I saw a flamingo and I just started to draw some things. And I loved the idea that if you put two flamingos together like I did here, a heart shape will grow.” — Designed by Froukje from the Netherlands.

preview
with calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Nightly Carnival

“July brings the yearly carnival near my hometown, so I decided to design a summer evening carnival.” — Designed by Bregje Damen from the Netherlands.

preview
with calendar: 320×480, 1024×1024, 1920×1080, 1920×1200, 1920×1440, 2560×1440
without calendar: 320×480, 1024×1024, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Surfer Cat

Designed by Ricardo Gimenes from Sweden.

preview
with calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160
without calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160

Space

“In July, the earth is farthest from the sun. That’s what inspired me to make a space-themed calendar for the month July.” — Designed by Rosalie Toorians from the Netherlands.

preview
with calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

The Ancient Device

Designed by Ricardo Gimenes from Sweden.

preview
with calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160
without calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160

Sweet As A Peach

“One of my favorite summer fruits for a few years now is the peach. It always reminds me of vacation and that brings me joy. Hope you all enjoy this month, too!” — Designed by Melissa Bogemans from Belgium.

preview
with calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Independence Day

“When you hear the word ‘July’, what comes to your mind? For us, it’s the ‘Fourth of July’, so our design team made a stunning wallpaper in commemoration of the Declaration of Independence of the United States.” — Designed by Ever Increasing Circles from the United Kingdom.

preview
with calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Sun At Dawn

“I created a wallpaper that reminded me of late summer nights when the sun is slowly setting.” — Designed by Bibi Goelema from the Netherlands.

preview
with calendar: 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
without calendar: 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Ocean Full Of Stars

“I wanted to make something that reminded me of summer but I wanted to keep that dark and cozy vibe, so I decided to go with a whale swimming through space. Summer for me is a time to be free from responsibilities like school and to just hang around, which is what I wanted to make.” — Designed by Ilse van Dinther from the Netherlands.

preview
with calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Artemis And Athena

“Artemis has the sign Cancer as a zodiac sign and this is also a zodiac sign in July. Athena was always celebrated during the month of July, in ancient Athens. Since I knew this, I decided to put those two together in a wallpaper. The way Athena and Artemis look are inspired by Lore Olympus.” — Designed by Gigi Kim from the Netherlands.

preview
with calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

4th Of July And Apollo 11

“I’m a huge space nerd and wanted to celebrate the Apollo 11 moon landing in tandem with Independence Day.” — Designed by Jelle Guit from the Netherlands.

preview
with calendar: 1280×720, 1920×1080, 2560×1440
without calendar: 1280×720, 1920×1080, 2560×1440

Roman Emperor

“The month of July was named after the famous Roman emperor Julius Caesar, and we like to think that if he lived in our time, he would use his vacation in the month named after him.” — Designed by LibraFire from Serbia.

preview
with calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Colorful Summer

“Two weeks ago, I got the school assignment to make a wallpaper and send this to you. I have made this picture at Kronenburgerpark in Nijmegen, the Netherlands. Since the coronavirus is on its way back, we can go back to the ‘old normal’ and do things we haven’t been able to do in over a year. So it will be a colorful summer. And that was my inspiration to take this picture and send it as a wallpaper.” — Designed by Jorn Meijs from the Netherlands.

preview
with calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

4th Of July

“Our teacher told us about the Smashing Magazine wallpapers. So we participated with our class to make a wallpaper.” — Designed by Jarno van der Linden from the Netherlands.

preview
with calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160
without calendar: 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 3840×2160

Bentley Bentayga Speed

“If everything goes according to plan, the Bentley Bentayga Speed will drop on the first of July. I made this Bentley because I wanted to make something that I’m interested in myself. In my spare time I like to make car illustrations so I knew immediately the style that I wanted to use. I’m personally very excited for the drop of the Bentley.” — Designed by Nick Geurds from the Netherlands.

preview
with calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440
without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Oldies But Goodies

To make your July even more colorful, we compiled some wallpaper goodies from past years’ editions below. Please note that these designs don’t come with a calendar.

Birdie July

Designed by Lívi Lénárt from Hungary.

preview
without calendar: 800×600, 1024×1024, 1152×864, 1280×960, 1280×1024, 1600×1200, 1920×1080, 2560×1440

Eternal Summer

“And once you let your imagination go, you find yourself surrounded by eternal summer, unexplored worlds and all-pervading warmth, where there are no rules of physics and colors tint the sky under your feet.” — Designed by Ana Masnikosa from Belgrade, Serbia.

preview

without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Riding In The Drizzle

“Rain has come, showering the existence with new seeds of life. Everywhere life is blooming, as if they were asleep and the falling music of raindrops have awakened them. Feel the drops of rain. Feel this beautiful mystery of life. Listen to its music, melt into it.” — Designed by DMS Software from India.

preview
without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Summer Cannonball

“Summer is coming in the northern hemisphere and what better way to enjoy it than with watermelons and cannonballs.” — Designed by Maria Keller from Mexico.

preview
without calendar: 320×480, 640×480, 640×1136, 750×1334, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1242×2208, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 2880×1800

Strive For Progress, Not Perfection

“I created this wallpaper as a daily reminder that it’s better to take one small step towards my goal every day than to do nothing at all out of fear that it won’t be perfect. I hope you enjoy it and it helps you keep motivated every day!” — Designed by Andrew from the United States.

preview
without calendar: 320×480, 1024×1024, 1280×720, 1680×1200, 1920×1080, 2560×1440

Taste Like Summer!

“In times of clean eating and the world of superfoods there is one vegetable missing. An old, forgotten one. A flower actually. Rare and special. Once it had a royal reputation (I cheated a bit with the blue). The artichocke — this is my superhero in the garden! I am a food lover — you too? Enjoy it — dip it!” — Designed by Alexandra Tamgnoué from Germany.

preview
without calendar: 320×480, 640×480, 800×600, 1024×768, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1440×900, 1440×1050, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Island River

“Make sure you have a refreshing source of ideas, plans and hopes this July. Especially if you are to escape from urban life for a while.” — Designed by Igor Izhik from Canada.

preview
without calendar: 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Day Turns To Night

Designed by Xenia Latii from Germany.

preview
without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Captain Amphicar

“My son and I are obsessed with the Amphicar right now, so why not have a little fun with it?” — Designed by 3 Bicycles Creative from the United States.

preview
without calendar: 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Tropical Lilies

“I enjoy creating tropical designs, they fuel my wanderlust and passion for the exotic. Instantaneously transporting me to a tropical destination.” — Designed by Tamsin Raslan from the United States.

preview
without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1440×900, 1440×1050, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

July Rocks!

Designed by Joana Moreira from Portugal.

preview
without calendar: 320×480, 1024×768, 1280×1024, 1920×1080

Alentejo Plain

“Based in the Alentejo region, in the south of Portugal, where there are large plains used for growing wheat. It thus represents the extensions of the fields of cultivation and their simplicity. Contrast of the plain with the few trees in the fields. Storks that at this time of year predominate in this region, being part of the Alentejo landscape and mentioned in the singing of Alentejo.” — Designed by José Guerra from Portugal.

preview
without calendar: 1125×2436, 1280×800, 1536×2048, 1680×1050, 1920×1200, 2880×1800

Fire Camp

“What’s better than a starry summer night with an (unexpected) friend around a fire camp with some marshmallows? Happy July!” — Designed by Etienne Mansard from the UK.

preview

without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1440×900, 1440×1050, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2048×1536, 2560×1440

Summertime

“Since I’m a big fan of Mid-Century Modern design, I changed George Nelson’s Starburst clock with 12 points — a fitting schematic of the sun with a spot-on summertime color palette — into a ‘sundial calendar’ with 31 points, one for each day of July. Illustrator’s Polar Grid tool helped get the spacing just right, and I retained the construction lines because the extra layer appealed to me. The warm yellow background is borrowed from his Kangaroo chair, and the typographic choice references the cover of Pentagram’s monograph, ‘George Nelson On Design.’” — Designed by Brian Frolo from Cleveland, Ohio, USA.

preview
without calendar: 1024×768, 1024×1024, 1152×864, 1280×800, 1280×960, 1280×1024, 1366×768, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440, 2048×2048

Hot Air Balloon

Designed by Studcréa from France.

preview
without calendar: 1280×720, 1280×800, 1280×960, 1280×1024, 1440×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Ice Cream vs. Hot Dog

“It’s both ‘National Ice Cream Month’ and ‘National Hot Dog Month’ over in the US, which got me thinking — which is better? With this as your wallpaper, you can ponder the question all month!” — Designed by James Mitchell from the UK.

preview
without calendar: 1280×720, 1280×800, 1366×768, 1440×900, 1680×1050, 1920×1080, 1920×1200, 2560×1440, 2880×1800

Night Sky Magic

Designed by Ricardo Gimenes from Sweden.

preview
without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

My July

Designed by Cátia Pereira from Portugal.

preview
without calendar: 320×480, 1024×768, 1280×1024, 2560×1440

Floral Thing

“The wallpaper which I created consists of my personal sketches of Polish herbs and flowers. I wanted it to be light and simple with a hint of romantic feeling. I hope you’ll enjoy it!” — Designed by Beata Kurek from Poland.

preview

without calendar: 1024×1024, 1280×800, 1440×900, 1680×1050, 2560×1440

Sunset

“I decided to create a wallpaper to bring this summer feeling to the desktop.” — Designed by Mladen Milinovic from Germany.

preview
without calendar: 1280×800, 1280×960, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Spectate

Designed by Tekstografika from Russia.

preview
without calendar: 320×480, 1024×768, 1024×1024, 1280×800, 1280×1024, 1440×900, 1680×1050, 1920×1080, 2560×1440

An Intrusion Of Cockroaches

“Ever watched Joe’s Apartment when you were a kid? Well, that movie left a soft spot in my heart for the little critters. Don’t get me wrong: I won’t invite them over for dinner, but I won’t grab my flip flop and bring the wrath upon them when I see one running in the house. So there you have it… three roaches… bringing the smack down on that pesky human… ZZZZZZZAP!!” — Designed by Wonderland Collective from South Africa.

preview
without calendar: 320×480, 800×600, 1024×768, 1280×960, 1680×1050, 1920×1200, 2560×1440

Summer Never Ends!

“July is a very special month to me — it’s the month of my birthday and of the best cherries.” — Designed by Igor Izhik from Canada.

preview
without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

Melting July

“July often brings summer heat and we all wish for something cold to take it away… If you take a closer look, you will see an ice cream melting from the sunset. Bon appetite!” — Designed by PopArt Studio from Serbia.

preview
without calendar: 320×480, 640×480, 800×480, 800×600, 1024×768, 1024×1024, 1152×864, 1280×720, 1280×800, 1280×960, 1280×1024, 1366×768, 1400×1050, 1440×900, 1600×1200, 1680×1050, 1680×1200, 1920×1080, 1920×1200, 1920×1440, 2560×1440

The State Of Web Workers In 2021

I’m weary of always comparing the web to so-called “native” platforms like Android and iOS. The web is streaming, meaning it has none of the resources locally available when you open an app for the first time. This is such a fundamental difference, that many architectural choices from native platforms don’t easily apply to the web — if at all.

But regardless of where you look, multithreading is used everywhere. iOS empowers developers to easily parallelize code using Grand Central Dispatch, Android does this via their new, unified task scheduler WorkManager, and game engines like Unity have job systems. The reason for any of these platforms to not only support multithreading but also make it as easy as possible is always the same: ensure your app feels great.

In this article, I’ll outline my mental model of why multithreading is important on the web, I’ll give you an introduction to the primitives that we as developers have at our disposal, and I’ll talk a bit about architectures that make it easy to adopt multithreading, even incrementally.

The Problem Of Unpredictable Performance

The goal is to keep your app smooth and responsive. Smooth means having a steady and sufficiently high frame rate. Responsive means that the UI responds to user interactions with minimal delay. Both of these are key factors in making your app feel polished and high-quality.

According to RAIL, being responsive means reacting to a user’s action in under 100ms, and being smooth means shipping a stable 60 frames per second (fps) when anything on the screen is moving. Consequently, we as developers have 1000ms/60 = 16.6ms to produce each frame, which is also called the “frame budget”.

I say “we”, but it’s really the browser that has 16.6ms to do everything required to render a frame. We developers are only directly responsible for one part of the workload that the browser has to deal with. That work consists of (but is not limited to):

Detecting which element the user may or may not have tapped;
firing the corresponding events;
running associated JavaScript event handlers;
calculating styles;
doing layout;
painting layers;
and compositing those layers into the final image the user sees on screen;

(and more …)

Quite a lot of work.

At the same time, we have a widening performance gap. The top-tier flagship phones are getting faster with every new generation that’s released. Low-end phones, on the other hand, are getting cheaper, making the mobile internet accessible to demographics that previously maybe couldn’t afford it. In terms of performance, these phones have plateaued at the performance of a 2012 iPhone.

Applications built for the Web are expected to run on devices that fall anywhere on this broad performance spectrum. How long your piece of JavaScript takes to finish depends on how fast the device is that your code is running on. Not only that, but the duration of the other browser tasks like layout and paint are also affected by the device’s performance characteristics. What takes 0.5ms on a modern iPhone might take 10ms on a Nokia 2. The performance of the user’s device is completely unpredictable.

Note: RAIL has been a guiding framework for 6 years now. It’s important to note that 60fps is really a placeholder value for whatever the native refresh rate of the user’s display is. For example, some of the newer Pixel phones have a 90Hz screen and the iPad Pro has a 120Hz screen, reducing the frame budget to 11.1ms and 8.3ms respectively.

To complicate things further, there is no good way to determine the refresh rate of the device that your app is running on, apart from measuring the amount of time that elapses between requestAnimationFrame() callbacks.
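To make that concrete, here is a minimal sketch of such a measurement; the helper name and sample count are my own choices, not an established API:

// Rough sketch: estimate the refresh rate by averaging the time
// between a handful of requestAnimationFrame() callbacks.
function estimateRefreshRate(sampleCount = 60) {
  return new Promise((resolve) => {
    const timestamps = [];
    function onFrame(now) {
      timestamps.push(now);
      if (timestamps.length < sampleCount) {
        requestAnimationFrame(onFrame);
      } else {
        const deltas = timestamps.slice(1).map((t, i) => t - timestamps[i]);
        const avgFrameTime = deltas.reduce((a, b) => a + b, 0) / deltas.length;
        resolve(1000 / avgFrameTime); // frames per second
      }
    }
    requestAnimationFrame(onFrame);
  });
}

estimateRefreshRate().then((fps) => console.log(`~${Math.round(fps)}Hz`));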

JavaScript

JavaScript was designed to run in lock-step with the browser’s main rendering loop. Pretty much every web app out there relies on this model. The drawback of that design is that a small amount of slow JavaScript code can prevent the browser’s rendering loop from continuing. They are in lockstep: if one doesn’t finish, the other can’t continue. To allow longer-running tasks to be integrated into JavaScript, an asynchronicity model was established on the basis of callbacks and later promises.

To keep your app smooth, you need to make sure that your JavaScript code combined with the other tasks the browser has to do (styles, layout, paint,…) doesn’t add up to a duration longer than the device’s frame budget. To keep your app responsive, you need to make sure that any given event handler doesn’t take longer than 100ms in order for it to show a change on the device’s screen. Achieving this on your own device during development can be hard, but achieving this on every device your app could possibly run on can seem impossible.

The usual advice here is to “chunk your code” or its sibling phrasing “yield to the browser”. The underlying principle is the same: To give the browser a chance to ship the next frame you break up the work your code is doing into smaller chunks, and pass control back to the browser to allow it to do work in-between those chunks.

There are multiple ways to yield to the browser, and none of them are great. A recently-proposed task scheduler API aims to expose this functionality directly. However, even if we had an API for yielding like await yieldToBrowser() (or something of the sort), the technique itself is flawed: To make sure you don’t blow through your frame budget, you need to do work in small enough chunks that your code yields at least once every frame.
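To illustrate what chunking looks like in practice, here is a minimal sketch; yieldToBrowser() is a hypothetical helper built on setTimeout(), not the proposed scheduler API:

// Hypothetical helper: give the browser a chance to render by
// scheduling the continuation as a new task.
function yieldToBrowser() {
  return new Promise((resolve) => setTimeout(resolve, 0));
}

async function processInChunks(items, processItem, chunkSize = 100) {
  for (let i = 0; i < items.length; i += chunkSize) {
    // Do a small slice of the work synchronously…
    items.slice(i, i + chunkSize).forEach(processItem);
    // …then hand control back so the browser can ship the next frame.
    await yieldToBrowser();
  }
}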

At the same time, code that yields too often can cause the overhead of scheduling tasks to become a net-negative influence on your app’s overall performance. Now combine that with the unpredictable performance of devices, and we have to arrive at the conclusion that there is no correct chunk size that fits all devices. This is especially problematic when trying to “chunk” UI work, since yielding to the browser can render partially complete interfaces that increase the total cost of layout and paint.

Web Workers

There is a way to break from running in lock-step with the browser’s rendering thread. We can move some of our code to a different thread. Once in a different thread, we can block at our heart’s desire with long-running JavaScript, without the complexity and cost of chunking and yielding, and the rendering thread won’t even be aware of it. The primitive to do that on the web is called a web worker. A web worker can be constructed by passing in the path to a separate JavaScript file that will be loaded and run in this newly created thread:

const worker = new Worker("./worker.js");

Before we get more into that, it’s important to note that Web Workers, Service Workers and Worklets are similar, but ultimately different things for different purposes:

In this article, I am exclusively talking about WebWorkers (often just “Worker” for short). A worker is an isolated JavaScript scope running in a separate thread. It is spawned (and owned) by a page.
A ServiceWorker is a short-lived, isolated JavaScript scope running in a separate thread, functioning as a proxy for every network request originating from pages of the same origin. First and foremost, this allows you to implement arbitrarily complex caching behavior, but it has also been extended to let you tap into long-running background fetches, push notifications, and other functionality that requires code to run without an associated page. It is a lot like a Web Worker, but with a specific purpose and additional constraints.
A Worklet is an isolated JavaScript scope with a severely limited API that may or may not run on a separate thread. The point of worklets is that browsers can move worklets around between threads. AudioWorklet, CSS Painting API and Animation Worklet are examples of Worklets.
A SharedWorker is a special Web Worker, in that multiple tabs or windows of the same origin can reference the same SharedWorker. The API is pretty much impossible to polyfill and has only ever been implemented in Blink, so I won’t be paying any attention to it in this article.

As JavaScript was designed to run in lock-step with the browser, many of the APIs exposed to JavaScript are not thread-safe, as there was no concurrency to deal with. For a data structure to be thread-safe means that it can be accessed and manipulated by multiple threads in parallel without its state being corrupted.

This is usually achieved by mutexes which lock out other threads while one thread is doing manipulations. Not having to deal with locks allows browsers and JavaScript engines to make a lot of optimizations to run your code faster. On the other hand, it forces a worker to run in a completely isolated JavaScript scope, since any form of data sharing would result in problems due to the lack of thread-safety.

While Workers are the “thread” primitive of the web, they are very different from the threads you might be used to from C++, Java & co. The biggest difference is that the required isolation means workers don’t have access to any variables or code from the page that created them or vice versa. The only way to exchange data is through message-passing via an API called postMessage, which will copy the message payload and trigger a message event on the receiving end. This also means that Workers don’t have access to the DOM, making UI updates from a worker impossible — at least without significant effort (like AMP’s worker-dom).
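To make the message-passing model concrete, here is a minimal sketch of a page and a worker exchanging data via postMessage; the file names and payload are just examples:

// main.js (runs on the page)
const worker = new Worker("./worker.js");

worker.addEventListener("message", (event) => {
  console.log("Result from worker:", event.data);
});

// The payload is copied (structured clone), not shared.
worker.postMessage({ numbers: [1, 2, 3, 4] });

// worker.js (runs in the worker thread)
addEventListener("message", (event) => {
  const sum = event.data.numbers.reduce((a, b) => a + b, 0);
  postMessage({ sum });
});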

Support for Web Workers is nearly universal, considering that even IE10 supported them. Their usage, on the other hand, is still relatively low, and I think to a large extent that is due to the unusual ergonomics of Workers.

JavaScript’s Concurrency Models

Any app that wants to make use of Workers has to adapt its architecture to accommodate the requirements of Workers. JavaScript actually supports two very different concurrency models, often grouped under the term “Off-Main-Thread Architecture”. Both use Workers, but in very different ways, each bringing its own set of tradeoffs. Any given app usually ends up somewhere in between these two extremes.

Concurrency Model #1: Actors

My personal preference is to think of Workers like Actors, as they are described in the Actor Model. The Actor Model’s most popular incarnation is probably in the programming language Erlang. Each actor may or may not run on a separate thread and fully owns the data it is operating on. No other thread can access it, rendering synchronization mechanisms like mutexes unnecessary. Actors can only send messages to each other and react to the messages they receive.

As an example, I often think of the main thread as the actor that owns the DOM and consequently all the UI. It is responsible for updating the UI and capturing input events. Another actor could be in charge of the app’s state. The DOM actor converts low-level input events into app-level semantic events and sends them to the state actor. The state actor changes the state object according to the event it has received, potentially using a state machine or even involving other actors. Once the state object is updated, it sends a copy of the updated state object to the DOM actor. The DOM actor now updates the DOM according to the new state object. Paul Lewis and I once explored actor-centric app architecture at Chrome Dev Summit 2018.
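A small sketch of that division of labor might look like this; the event names and state shape are invented for illustration:

// main.js: the "DOM actor" owns the UI and turns input into semantic events.
const stateActor = new Worker("./state-worker.js");

document.querySelector("#buy").addEventListener("click", () => {
  stateActor.postMessage({ type: "ADD_TO_CART", productId: 42 });
});

stateActor.addEventListener("message", ({ data: state }) => {
  // Re-render from the freshly copied state object.
  document.querySelector("#cart-count").textContent = state.cart.length;
});

// state-worker.js: the "state actor" owns the state and reacts to events.
let state = { cart: [] };

addEventListener("message", ({ data: event }) => {
  if (event.type === "ADD_TO_CART") {
    state = { ...state, cart: [...state.cart, event.productId] };
  }
  postMessage(state);
});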

Of course, this model doesn’t come without its problems. For example, every message you send needs to be copied. How long that takes not only depends on the size of the message but also on the device the app is running on. In my experience, postMessage is usually “fast enough”, but there are certain scenarios where it isn’t. Another problem is to strike the balance between moving code to a worker to free up the main thread, while at the same time having to pay the cost of communication overhead and the worker being busy with running other code before it can respond to your message. If done without care, workers can negatively affect UI responsiveness.

The messages you can send via postMessage are quite complex. The underlying algorithm (called “structured clone”) can handle circular data structures and even Map and Set. It cannot handle functions or classes, however, as code can’t be shared across scopes in JavaScript. Somewhat irritatingly, trying to postMessage a function will throw an error, while an instance of a class will just be silently converted to a plain JavaScript object, losing its methods in the process (the details behind this make sense but would blow the scope of this article).
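A quick sketch of those edge cases, assuming a worker created as before:

const worker = new Worker("./worker.js");

class Point {
  constructor(x, y) { this.x = x; this.y = y; }
  length() { return Math.hypot(this.x, this.y); }
}

// Maps, Sets and circular structures survive the structured clone.
const data = { set: new Set([1, 2, 3]) };
data.self = data;
worker.postMessage(data); // works

// Class instances arrive as plain objects: { x: 3, y: 4 }, no length() method.
worker.postMessage(new Point(3, 4));

// Functions cannot be cloned and throw.
try {
  worker.postMessage(() => {});
} catch (err) {
  console.error(err.name); // "DataCloneError"
}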

Additionally, postMessage is a fire-and-forget messaging mechanism with no built-in understanding of request and response. If you want to employ a request/response mechanism (and in my experience most app architectures inevitably lead you there), you’ll have to build it yourself. That’s why I wrote Comlink, which is a library that uses an RPC protocol under the hood to make it seem like the objects from a worker are accessible from the main thread and vice versa. When using Comlink, you don’t have to deal with postMessage at all. The only artifact is that due to the asynchronous nature of postMessage, functions don’t return their result, but a promise for it instead. In my opinion, this gives you the best of the Actor Model and Shared Memory Concurrency.
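For illustration, here is roughly what Comlink usage looks like, a minimal sketch based on its wrap/expose API; the add() function is just an example:

// worker.js (loaded as a module worker)
import * as Comlink from "comlink";

const api = {
  add(a, b) {
    return a + b;
  },
};

Comlink.expose(api);

// main.js (assumed to be loaded as a module so top-level await works)
import * as Comlink from "comlink";

const worker = new Worker("./worker.js", { type: "module" });
const api = Comlink.wrap(worker);

// Looks like a normal call, but returns a promise because it crosses
// the thread boundary via postMessage under the hood.
const sum = await api.add(1, 2); // 3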

Comlink is not magic, it still has to use postMessage for the RPC protocol. If your app ends up being one of the rarer cases where postMessage is a bottleneck, it’s useful to know that ArrayBuffers can be transferred. Transferring an ArrayBuffer is near-instant and involves a proper transferral of ownership: The sending JavaScript scope loses access to the data in the process. I used this trick when I was experimenting with running the physics simulations of a WebVR app off the main thread.
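Transferring is done by listing the buffer in the transfer list, the second argument to postMessage; a minimal sketch:

const worker = new Worker("./worker.js");
const buffer = new ArrayBuffer(1024 * 1024); // 1 MB of data

// Listing the buffer in the transfer list moves ownership to the worker
// instead of copying the bytes.
worker.postMessage({ buffer }, [buffer]);

console.log(buffer.byteLength); // 0, this scope has lost access to the data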

Concurrency Model #2: Shared Memory

As I mentioned above, the traditional approach to threading is based on shared memory. This approach isn’t viable in JavaScript as pretty much all APIs have been built with the assumption that there is no concurrent access to objects. Changing that now would either break the web or incur a significant performance cost because of the synchronization that is now necessary. Instead, the concept of shared memory has been limited to one dedicated type: SharedArrayBuffer (or SAB for short).

A SAB, like an ArrayBuffer, is a linear chunk of memory that can be manipulated using Typed Arrays or DataViews. If a SAB is sent via postMessage, the other end does not receive a copy of the data, but a handle to the exact same memory chunk. Every change done by one thread is visible to all other threads. To allow you to build your own mutexes and other concurrent data structures, Atomics provide all sorts of utilities for atomic operations or thread-safe waiting mechanisms.
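A small sketch of sharing memory between the page and a worker; note that current browsers require the page to be cross-origin isolated before SharedArrayBuffer is available:

// main.js
const worker = new Worker("./worker.js");
const sab = new SharedArrayBuffer(4); // room for a single Int32 counter
const counter = new Int32Array(sab);

worker.postMessage(sab); // sends a handle to the same memory, not a copy

setInterval(() => {
  // Atomics.load performs a tear-free read of the shared value.
  console.log("count:", Atomics.load(counter, 0));
}, 1000);

// worker.js
addEventListener("message", ({ data: sab }) => {
  const counter = new Int32Array(sab);
  setInterval(() => {
    // Atomically increment the shared counter; safe under concurrent access.
    Atomics.add(counter, 0, 1);
  }, 100);
});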

The drawbacks of this approach come in multiple flavors. First and foremost, it is just a chunk of memory. It is a very low-level primitive, giving you lots of flexibility and power at the cost of increased engineering efforts and maintenance. You also have no direct way of working on your familiar JavaScript objects and arrays. It’s just a series of bytes.

As an experimental way to improve ergonomics here, I wrote a library called buffer-backed-object that synthesizes JavaScript objects that persist their values to an underlying buffer. Alternatively, WebAssembly makes use of Workers and SharedArrayBuffers to support the threading model of C++ and other languages. I’d say WebAssembly currently offers the best experience for shared-memory concurrency, but it also requires you to leave a lot of the benefits (and comfort) of JavaScript behind, buy into another language, and usually accept bigger binaries.

Case-Study: PROXX

In 2019, my team and I published PROXX, a web-based Minesweeper clone that was specifically targeting feature phones. Feature phones have a small resolution, usually no touch interface, an underpowered CPU, and no proper GPU to speak of. Despite all these limitations, they are increasingly popular as they are sold for an incredibly low price and they include a full-fledged web browser. This opens up the mobile web to demographics that previously couldn’t afford it.

To make sure that the game was responsive and smooth even on these phones, we embraced an Actor-like architecture. The main thread is responsible for rendering the DOM (via preact and, if available, WebGL) and capturing UI events. The entire app state and game logic runs in a worker, which determines whether you just stepped on a mine (a black hole, in PROXX’s theme) and, if not, how much of the game board to reveal. The game logic even sends intermediate results to the UI thread to give the user a continuous visual update.

The Future

I like the Actor Model. But the ergonomics of concurrent JavaScript are not great overall. A lot of tooling has been built and a lot of library code written to make it better, but in the end JavaScript The Language needs to do better here. Some engineers at TC39 have taken a liking to this topic and are trying to figure out how JavaScript can support both concurrency models better. Multiple proposals are being evaluated, from allowing code to be postMessage’d, to having objects shared across threads, to higher-level, scheduler-like APIs like those common on native platforms.

None of them have reached a significant stage in the standardization process just yet, so I won’t spend time on them here. If you are curious, keep an eye on the TC39 proposals and see what the next generation of JavaScript holds.

Summary

Workers are a key tool for keeping the main thread responsive and smooth, by preventing accidentally long-running code from blocking the browser from rendering. Due to the inherently asynchronous nature of communicating with a worker, adopting workers requires some architectural adjustments in your web app, but as a payoff you will have an easier time supporting the massive spectrum of devices from which the web is accessed.

You should make sure to adopt an architecture that lets you move code around easily so you can measure the performance impact of off-main-thread architecture. The ergonomics of web workers have a bit of a learning curve but the most complicated parts can be abstracted away with libraries like Comlink.

Further Resources

“The Main Thread Is Overworked And Underpaid,” Surma, Chrome Dev Summit 2019 (Video)
“Green Energy Efficient Progressive Web Apps,” David, Microsoft DevBlogs
“Case Study: Moving A Three.js-Based WebXR App Off-Main-Thread,” Surma
“When Should You Be Using Workers?,” Surma
“Is postMessage Slow?,” Surma
“Comlink,” GoogleChromeLabs
“web-worker,” npm

FAQ

There are some questions and thoughts that are raised quite often, so I wanted to preempt them and record my answer here.

Isn’t postMessage slow?

My core advice in all matters of performance is: Measure first! Nothing is slow (or fast) until you measure. In my experience, however, postMessage is usually “fast enough”. As a rule of thumb: If JSON.stringify(messagePayload) is under 10KB, you are at virtually no risk of creating a long frame, even on the slowest of phones. If postMessage does indeed end up being a bottleneck in your app, consider the following techniques:

Breaking your work into smaller pieces so that you can send smaller messages.
If the message is a state object of which only small parts have changed, send patches (diffs) instead of the whole object.
If you send a lot of messages, it can also be beneficial to batch multiple messages into one (a small sketch of this follows below).
As a last resort, you can try switching to a numerical representation of your message and transferring an ArrayBuffer instead of sending an object-based message.

Which of these techniques is the right one depends on the context and can only be answered by measuring and isolating the bottleneck.
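To make the batching idea from the list above a bit more concrete, here is a minimal sketch that queues messages and flushes them once per frame (the function names are made up):

const queue = [];

function send(message) {
  queue.push(message);
  // Only the first message of a frame schedules a flush.
  if (queue.length === 1) {
    requestAnimationFrame(() => {
      worker.postMessage(queue.splice(0)); // one postMessage for the whole batch
    });
  }
}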

I want DOM access from the Worker.

This one I get a lot. In most scenarios, however, that just moves the problem. You are at risk of effectively creating a 2nd main thread, with all the same problems, just in a different thread. Making the DOM safe to access from multiple threads would require adding locks which would introduce a slowdown to DOM operations. This would probably hurt a lot of existing web apps.

Additionally, the lock-step model has benefits. It gives the browser a clear signal at what time the DOM is in a valid state that can be rendered to the screen. In a multi-threaded DOM world, that signal would be lost and we’d have to deal with partial renders or other artifacts.

I really dislike having to put code in a separate file for Workers.

I agree. There are proposals being evaluated in TC39 to inline a module into another module without all the trip-wires that Data URLs and Blob URLs have. These proposals would also allow you to create a worker without the need for a separate file. So while I don’t have a satisfying solution right now, a future iteration of JavaScript will most certainly remove this requirement.
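For completeness, this is roughly what the Blob URL workaround looks like today. Keep the trip-wires mentioned above in mind (for example, relative paths and imports inside the worker no longer resolve against your script’s URL):

const code = `
  onmessage = ({ data }) => postMessage(data * 2);
`;
const blob = new Blob([code], { type: 'application/javascript' });
const worker = new Worker(URL.createObjectURL(blob));

worker.onmessage = ({ data }) => console.log(data); // logs 42
worker.postMessage(21);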

It’s A (Front-End Testing) Trap! Six Common Testing Pitfalls And How To Solve Them

As I was rewatching a movie I loved as a child, one quote in particular stood out. It’s from the 1983 Star Wars film “Return of the Jedi”. The line is said during the Battle of Endor, where the Alliance mobilizes its forces in a concentrated effort to destroy the Death Star. There, Admiral Ackbar, leader of the Mon Calamari rebels, says his memorable line:

“It’s a trap!” This line alerts us to an unexpected ambush, an imminent danger. All right, but what does this have to do with testing? Well, it’s simply an apt allegory when it comes to dealing with tests in a code base. These traps might feel like an unexpected ambush when you’re working on a code base, especially when doing so for a long time.

In this article, I’ll tell you about the pitfalls I’ve run into in my career — some of which were my fault. In this context, I need to give a bit of a disclaimer: My daily business is heavily influenced by my use of the Jest framework for unit testing and the Cypress framework for end-to-end testing. I’ll try my best to keep my analysis abstract, so that you can use the advice with other frameworks as well. If you find that’s not possible, please comment below so that we can talk about it! Some examples might even be applicable to all test types, whether unit, integration, or end-to-end testing.

Front-End Testing Traps

Testing, whatever the kind, has a lot of benefits. Front-end testing is a set of practices for testing the UI of a web application. We test its functionality by putting its UI under permanent stress. Depending on the type of testing, we can achieve this in various ways and at various levels:

Unit tests look at the smallest units in your application. These units might be classes, interfaces, or methods. The tests check whether they give the expected output for predefined inputs — thus testing units separately and in isolation.
Integration tests have a broader scope. They test units of code together, looking at their interaction.
End-to-end tests exercise the application as an actual user would. Thus, they resemble system testing, if we look at quality assurance in theory.

Together, doing all of these can give us a lot of confidence in shipping our application — front-end testing makes sure that people will interact with the UI as we intend. From another perspective, these practices let us ensure error-free releases of an application without a lot of manual testing, which eats up resources and energy.

This value can be overshadowed, though, by pain points with various causes, many of which could be considered “traps”. Imagine doing something with the best of intentions, only for it to end up painful and exhausting: this is the worst kind of technical debt.

Why Should We Bother With Testing Traps?

When I think about the causes and effects of the front-end testing traps that I’ve fallen into, certain problems come to mind. Three causes in particular come back to me again and again, arising from legacy code I had written years ago.

Slow tests, or at least slow execution of tests.
When developing locally, developers tend to get impatient with tests, especially if someone in your team needs to merge corresponding pull requests. Long waiting times feel overwhelmingly annoying in any case. This trap can arise from a lot of small causes — for example, not paying much attention to suitable waiting times or to the scope of a test.
Tests that are difficult to maintain.
This second pain point is even more critical and a more significant cause of abandoned tests. For example, you might come back to a test months later and not understand its contents or intent at all. Or team members might ask you what you wanted to achieve with an old test that you wrote. In general, too many classes or abstractions littered across walls of text or code can swiftly kill the motivation of a developer and lead to plain chaos. Traps in this area can be caused by following best practices that are not suitable for tests.
Tests that give you no consistent value at all.
You may call these Heisenfails or Heisentests, like the famous Heisenbug, which only occurs if you look away, don’t measure it, or, in our case, don’t debug it. The worst case is a flaky test: a non-deterministic test that fails to deliver the same result between builds without any changes. This can occur for various reasons, but it usually happens when you try to take an easy, seemingly convenient shortcut, disregarding testing best practices.

But don’t worry too much about my own experiences. Testing and handling tests can be fun! We just need to keep an eye on some things to avoid a painful outcome. Of course, the best thing is to avoid traps in our test designs in the first place. But if the damage is already done, refactoring a test base is the next best thing.

The Golden Rule

Let’s suppose you are working on an exciting yet demanding job. You are focused on it entirely. Your brain is full of production code, with no headspace left for any additional complexity — especially not for testing. Taking up much headspace is entirely against the purpose of testing. In the worst case, tests that feel like a burden are a reason that many teams abandon them.

In his guide “JavaScript Testing Best Practices,” Yoni Goldberg articulates the golden rule for preventing tests from feeling like a burden: A test should feel like a friendly assistant, there to help you, and should never feel like a hindrance.

I agree. This is the most crucial thing in testing. But how do we achieve this, exactly? Slight spoiler alert: Most of my examples will illustrate this. The KISS principle (keep it simple, stupid) is key. Any test, no matter the type, should be designed plain and simple.

So, what is a plain and simple test? How will you know whether your test is simple enough? Not complicating your tests is of utmost importance. The main goal is perfectly summarized by Yoni Goldberg:

“One should look at a test and get the intent instantly.”

So, a test’s design should be flat. Minimalist describes it best. A test should have little logic and few to no abstractions. This also means you need to be cautious with page objects and custom commands, and you need to name and document them meaningfully. If you intend to use them, pay attention to indicative command, function, and class names. This way, a test will remain delightful to developers and testers alike.

My favorite testing principle relates to duplication and the DRY principle (don’t repeat yourself): if an abstraction would hamper the comprehensibility of your test, prefer a little duplication over that abstraction.

This code snippet is an example:

// Cypress
beforeEach(() => {
  // It’s difficult to see at first glance what these
  // commands really do
  cy.setInitialState().then(() => {
    return cy.login();
  });
});

You might find that meaningfully naming commands alone is not enough to make the test understandable. In that case, you could also consider documenting the commands in comments, like so:

// Cypress
/**
 * Logs in silently using the API
 * @memberOf Cypress.Chainable#
 * @name loginViaApi
 * @function
 */
Cypress.Commands.add('loginViaApi', () => {
  return cy.authenticate().then((result) => {
    return cy.window().then(() => {
      cy.setCookie('bearerAuth', result);
    }).then(() => {
      cy.log('Fixtures are created.');
    });
  });
});

Such documentation might be essential in this case because it will help your future self and your team understand the test better. You see, some best practices for production code are not suitable for test code. Tests are simply not production code, and we should never treat them as such. Of course, we should treat test code with the same care as production code. However, some conventions and best practices might conflict with comprehensibility. In such cases, remember the golden rule, and put the developer experience first.

Traps In Test Design

In the first few examples in this section, I’ll talk about how to avoid falling into testing traps in the first place. After that, I’ll talk about test design. If you’re already working on a longstanding project, this should still be useful.

The Rule Of Three

Let’s start with the example below. Pay attention to its title. The test’s content itself is secondary.

// Jest
describe('deprecated.plugin', () => {
  it('should throw error', () => {
    // Actual test, shortened: the component throws an error
    const component = createComponent();

    expect(global.console.error).toBeCalled();
  });
});

Looking at this test, can you tell at first sight what it is intended to accomplish? Particularly, imagine looking at this title in your testing results (for example, you might be looking at the log entries in your pipelines in continuous integration). Well, it should throw an error, obviously. But what error is that? Under what circumstances should it be thrown? You see, understanding at first sight what this test is meant to accomplish is not easy because the title is not very meaningful.

Remember our golden rule, that we should instantly know what the test is meant to do. So, we need to change this part of it. Fortunately, there’s a solution that is easy to comprehend. We’ll title this test with the rule of three.

This rule, introduced by Roy Osherove, will help you clarify what a test is supposed to accomplish. It’s a well-known practice in unit testing, but it is helpful in end-to-end testing as well. According to the rule, a test’s title should consist of three parts:

What is being tested?
Under what circumstances would it be tested?
What is the expected result?

OK, what would our test look like if we followed this rule? Let’s see:

// Jest
describe('deprecated.plugin', () => {
  it('Property: Should throw an error if the deprecated prop is used', () => {
    // Actual test, shortened: the component throws an error
    const component = createComponent();

    expect(global.console.error).toBeCalled();
  });
});

Yes, the title is long, but you’ll find all three parts in it:

What is being tested? In this case, it’s the property.
Under what circumstances? We want to test a deprecated property.
What do we expect? The application should throw an error.

By following this rule, we will be able to see the result of the test at first sight, with no need to read through logs. So, we’re able to follow our golden rule in this case.

“Arrange, Act, Assert” vs. “Given, When, Then”

Another trap, another code example. Do you understand the following test on first reading?

// Jest
describe('Context menu', () => {
  it('should open the context menu on click', async () => {
    const contextButtonSelector = 'sw-context-button';
    const contextButton = wrapper.find(contextButtonSelector);
    await contextButton.trigger('click');
    const contextMenuSelector = '.sw-context-menu';
    let contextMenu = wrapper.find(contextMenuSelector);
    expect(contextMenu.isVisible()).toBe(false);
    contextMenu = wrapper.find(contextMenuSelector);
    expect(contextMenu.isVisible()).toBe(true);
  });
});

If you do, then congratulations! You’re remarkably fast at processing information. If you don’t, then don’t worry; this is quite normal, because the test’s structure could be greatly improved. For example, declarations and assertions are written and mixed up without any attention to structure. How can we improve this test?

There is one pattern that might come in handy, the AAA pattern. AAA is short for “arrange, act, assert”, which tells you what to do in order to structure a test clearly. Divide the test into three significant parts. Being suitable for relatively short tests, this pattern is mostly encountered in unit testing. In short, these are the three parts:

Arrange
Here, you would set up the system being tested to reach the scenario that the test aims to simulate. This could involve anything from setting up variables to working with mocks and stubs.
Act
In this part, you would run the unit under test. So, you would do all of the steps and whatever needs to be done in order to get to the test’s result state.
Assert
This part is relatively self-explanatory. You would simply make your assertions and checks in this last part.

This is another way of designing a test in a lean, comprehensible way. With this rule in mind, we could change our poorly written test to the following:

// Jest
describe('Context menu', () => {
  it('should open the context menu on click', async () => {
    // Arrange
    const contextButtonSelector = 'sw-context-button';
    const contextMenuSelector = '.sw-context-menu';

    // Assert state before test
    let contextMenu = wrapper.find(contextMenuSelector);
    expect(contextMenu.isVisible()).toBe(false);

    // Act
    const contextButton = wrapper.find(contextButtonSelector);
    await contextButton.trigger('click');

    // Assert
    contextMenu = wrapper.find(contextMenuSelector);
    expect(contextMenu.isVisible()).toBe(true);
  });
});

But wait! What is this part about asserting before acting? And while we’re at it, don’t you think this test has a bit too much context for a unit test? Correct. We’re dealing with an integration test here. If we’re testing the DOM, as we’re doing here, we need to check the before and after states. Thus, while the AAA pattern is well suited to unit and API tests, it is not well suited to this case.

Let’s look at the AAA pattern from the following perspective. As Claudio Lassala states in one of his blog posts, instead of thinking of how I’m going to…

“…arrange my test, I think what I’m given.”
This is the scenario with all of the preconditions of the test.
“…act in my test, I think when something happens.”
Here, we see the actions of the test.
“…assert the results, I think if that something happens then this is what I expect as the outcome.”
Here, we find the things we want to assert, being the intent of the test.

The keywords in those points (given, when, then) hint at another pattern from behavior-driven development (BDD). It’s the given-when-then pattern, developed by Daniel Terhorst-North and Chris Matts. You might be familiar with this one if you’ve written tests in the Gherkin language:

Feature: Context menu
  Scenario:
    Given I have a selector for the context menu
    And I have a selector for the context button

    When the context menu can be found
    And this menu is visible
    And this context button can be found
    And is clicked

    Then I should be able to find the context menu in the DOM
    And this context menu is visible

However, you can use it in all kinds of tests — for example, by structuring blocks. Using the idea from the bullet points above, rewriting our example test is fairly easy:

// Jest
describe('Context menu', () => {
  it('should open the context menu on click', async () => {
    // Given
    const contextButtonSelector = 'sw-context-button';
    const contextMenuSelector = '.sw-context-menu';

    // When
    let contextMenu = wrapper.find(contextMenuSelector);
    expect(contextMenu.isVisible()).toBe(false);
    const contextButton = wrapper.find(contextButtonSelector);
    await contextButton.trigger('click');

    // Then
    contextMenu = wrapper.find(contextMenuSelector);
    expect(contextMenu.isVisible()).toBe(true);
  });
});

Data We Used to Share

We’ve reached the next trap. Picture a peaceful, happy scene: two people sharing a piece of paper.

However, they might be in for a rude awakening. Apply this image to a test, with the two people representing tests and the paper representing test data. Let’s name these two tests, test A and test B. Very creative, right? The point is that test A and test B share the same test data or, worse, rely on a previous test.

This is problematic because it leads to flaky tests. For example, if the previous test fails or if the shared test data gets corrupted, the tests themselves cannot run successfully. Another scenario would be your tests being executed in random order. When this happens, you cannot predict whether the previous test will stay in that order or will be completed after the others, in which case tests A and B would lose their basis. This is not limited to end-to-end tests either; a typical case in unit testing is two tests mutating the same seed data.
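As an illustration of that unit-testing variant, here is a small Jest sketch in which two tests mutate the same seed object; whether the second one passes depends entirely on execution order:

// Jest
const customer = { name: 'Ackbar', orders: [] };

it('adds an order to the customer', () => {
  customer.orders.push({ id: 1 });
  expect(customer.orders).toHaveLength(1);
});

it('starts with an empty order list', () => {
  // Passes only if this test happens to run before the one above.
  expect(customer.orders).toHaveLength(0);
});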

All right, let’s look at a code example from an end-to-end test from my daily business. The following test covers the log-in functionality of an online shop.

// Cypress
describe('Customer login', () => {

  // Executed before every test
  beforeEach(() => {
    // Step 1: Set application to clean state
    cy.setInitialState()
      .then(() => {
        // Step 2: Create test data
        return cy.setFixture('customer');
      });
    // … use cy.request to create the customer
  });

  // … tests will start below
});

To avoid the issues mentioned above, this test uses a beforeEach hook, which is executed before each test in its file. In there, the first and most crucial step is to reset our application to its factory settings, without any custom data. Our aim is to ensure that all our tests start from the same basis. It also protects this test from side effects outside of the test. Basically, we’re isolating it, keeping away any influence from outside.

The second step is to create all of the data needed to run the test. In our example, we need to create a customer who can log into our shop. I want to create all of the data that the test needs, tailored specifically to the test itself. This way, the test will be independent, and the order of execution can be random. To sum it up, both steps are essential to ensuring that the tests are isolated from any other test or side effect, maintaining stability as a result.

Implementation Traps

All right, we’ve spoken about test design. Talking about good test design is not enough, though, because the devil is in the details. So let’s inspect our tests and challenge our test’s actual implementation.

Foo Bar What?

For this first trap in test implementation, we’ve got a guest! It’s BB-8, and he’s found something in one of our tests:

He’s found a name that might be familiar to us but not to him: Foo Bar. Of course, we developers know that Foo Bar is often used as a placeholder name. But if you see it in a test, will you immediately know what it represents? Again, the test might be more challenging to understand at first sight.

Fortunately, this trap is easy to fix. Let’s look at the Cypress test below. It’s an end-to-end test, but the advice is not limited to this type.

// Cypress
it('should create and read product', () => {
  // Open module to add product
  cy.get('a[href="#/sw/product/create"]').click();

  // Add basic data to product
  cy.get('.sw-field--product-name').type('T-Shirt Ackbar');
  cy.get('.sw-select-product__select_manufacturer')
    .type('Space Company');

  // … test continues …
});

This test is supposed to check whether a product can be created and read. In this test, I simply want to use names and placeholders connected to a real product:

For the name of a t-shirt product, I want to use “T-Shirt Ackbar”.
For the manufacturer’s name, “Space Company” is one idea.

You don’t need to invent all of the product names yourself, though. You could auto-generate data or, even better, import it from your production state. Anyway, I want to stick to the golden rule, even when it comes to naming.

Look at Selectors, You Must

New trap, same test. Look at it again, do you notice something?

// Cypress
it('should create and read product', () => {
  // Open module to add product
  cy.get('a[href="#/sw/product/create"]').click();

  // Add basic data to product
  cy.get('.sw-field--product-name').type('T-Shirt Ackbar');
  cy.get('.sw-select-product__select_manufacturer')
    .type('Space Company');

  // … Test continues …
});

Did you notice those selectors? They’re CSS selectors. Well, you might be wondering, “Why are they problematic? They are unique, they are easy to handle and maintain, and I can use them flawlessly!” However, are you sure that’s always the case?

The truth is that CSS selectors are prone to change. If you refactor and, for example, change classes, the test might fail, even if you haven’t introduced a bug. Such refactoring is common, so those failures can be annoying and exhausting for developers to fix. So, please keep in mind that a test failing without a bug is a false positive, giving no reliable report for your application.

This trap refers mainly to end-to-end testing in this case. In other circumstances, it could apply to unit testing as well — for example, if you use selectors in component testing. As Kent C. Dodds states in his article on the topic:

“You shouldn’t test implementation details.”

In my opinion, there are better alternatives to using implementation details for testing. Instead, test things that a user would notice. Better yet, choose selectors less prone to change. My favorite type of selector is the data attribute. A developer is less likely to change data attributes while refactoring, making them perfect for locating elements in tests. I recommend naming them in a meaningful way to clearly convey their purpose to any developers working on the source code. It could look like this:

// Cypress
cy.get('[data-test=sw-field--product-name]')
  .type('T-Shirt Ackbar');
cy.get('[data-test=sw-select-product__select_manufacturer]')
  .type('Space Company');

False positives are just one kind of trouble we get into when testing implementation details. The opposite, a false negative, can happen as well: the test passes even though the application has a bug. In both cases, testing eats up headspace again, contradicting our golden rule. So, we need to avoid this as much as possible.

Note: This topic is huge, so it would be better dealt with in another article. Until then, I’d suggest heading over to Dodds’ article on “Testing Implementation Details” to learn more on the topic.

Wait For It!

Last but not least, this is a topic I cannot stress enough. I know this will be annoying, but I still see many people do it, so I need to mention it here as a trap.

It’s the fixed waiting time issue that I talked about in my article on flaky tests. Take a look at this test:

// Cypress
Cypress.Commands.add(‘typeSingleSelect’, {
prevSubject: ‘element’,
},
(subject, value, selector) => {
cy.wrap(subject).should(‘be.visible’);
cy.wrap(subject).click();

cy.wait(500);
cy.get(`${selector} input`)
.type(value);
});

The little line with cy.wait(500) is a fixed waiting time that pauses the test’s execution for half a second. Making this mistake more severe, it sits in a custom command, so the test will incur this wait every time the command is used. Those half seconds add up, slowing down the test far more than necessary. And that’s not even the worst part. The worst part is that half a second might sometimes be too short: our test would then run ahead of what our website can react to. This causes flakiness, because the test will fail sometimes. Fortunately, we can do plenty of things to avoid fixed waiting times.

All paths lead to waiting dynamically. I’d suggest favoring the more deterministic methods that most testing platforms provide. Let’s take a closer look at my favorite two methods.

Wait for changes in the UI.
My first method of choice is to wait for changes in the UI of the application that a human user would notice or even react to. Examples might include a change in the UI (like a disappearing loading spinner), waiting for an animation to stop, and the like. If you use Cypress, this could look as follows:
// Cypress
cy.get('[data-cy="submit"]').should('be.visible');

Almost every testing framework provides such waiting possibilities.

Waiting on API requests.
Another possibility I’ve grown to love is waiting on API requests and their responses. Cypress, to name one example, provides neat features for that. First, you define a route that Cypress should wait for:
// Cypress
cy.intercept({
  url: '/widgets/checkout/info',
  method: 'GET'
}).as('checkoutAvailable');

Afterwards, you can assert it in your test, like this:
// Cypress
cy.wait('@checkoutAvailable').its('response.statusCode')
  .should('equal', 200);

This way, your test will remain stable and reliable, while managing time efficiently. In addition, the test might be even faster because it’s only waiting as long as it needs to.

Major Takeaways

Coming back to Admiral Ackbar and Star Wars in general, the Battle of Endor turned out to be a success, even if a lot of work had to be done to achieve that victory. With teamwork and a couple of countermeasures, it was possible and ultimately became a reality.

Apply that to testing. It might take a lot of effort to avoid falling into a testing trap or to fix an issue if the damage is already done, especially with legacy code. Very often, you and your team will need a change in mindset with test design or even a lot of refactoring. But it will be worth it in the end, and you will see the rewards eventually.

The most important thing to remember is the golden rule we talked about earlier. Most of my examples follow it. All pain points arise from ignoring it. A test should be a friendly assistant, not a hindrance! This is the most critical thing to keep in mind. A test should feel like you’re going through a routine, not solving a complex mathematical formula. Let’s do our best to achieve that.

I hope I was able to help you by giving some ideas on the most common pitfalls I’ve encountered. However, I’m sure there will be a lot more traps to find and learn from. I’d be so glad if you shared the pitfalls you’ve encountered most in the comments below, so that we all can learn from you as well. See you there!

Further Resources

“JavaScript and Node.js Testing Best Practices,” Yoni Goldberg
“Testing Implementation Details,” Kent C. Dodds
“Naming Standards for Unit Tests,” Roy Osherove

Email Testing Flow As It Should Be

We spend a lot of time and effort crafting emails with a specific purpose: to make their recipients read them and take the desired actions. The three known bottlenecks of every email sequence include:

Deliverability
Emails are going to spam folders and are never read.
Display issues
Email content is broken or not properly rendered; as a result, such emails are read but don’t prompt the reader to take action.
Engagement
This covers a whole set of reasons, such as a vague subject line or unclear email copy, which can lead to the email not being read or not prompting any action.

How can we address these challenges? It is recommended to follow rules and best practices of building and sending emails.

But how do we know that they work?

By testing every single email aspect! Unfortunately, email testing is often underestimated, which leads to mistakes that kill all the effort spent on creating a great email sequence.

Let’s talk email testing! In this article, we’ll explain how a proper email testing workflow can help you improve email sending efficiency. We’ll describe common testing approaches and mistakes, and demystify the seamless email testing flow.

With this article, you will enhance your testing workflows by covering all the important aspects, and saving time and stress with suitable email testing methods and tools.

Are You Keeping On Top Of Your Email Metrics?

While there’s always something to be improved, it’s important to understand when you underperform and need to take action right away.

With any type of email you send, you need to track at least these metrics:

Open rate
How many messages were opened vs. how many you sent.

Bounce rate
How many emails were returned to you.

Click-through rate
For those emails that contain links, how many links were clicked.

Unsubscribe rate
For marketing email campaigns, where the unsubscribe option is required, how many recipients opted out.

When sending marketing messages (such as newsletters, special offers, abandoned cart emails, and so on), you can compare your rates to the industry standards. However, they will also change over time and in different circumstances.

According to Hubspot, the average email rates are:

Open rate varies from 19% to 26% depending on the industry.
Click-through rate is naturally lower, from 6.82% to 9.31%.
Hard bounce rate should be as low as 0.3% to 0.9%.
Unsubscribe rate is fine when it’s between 0.3% and 0.6%.

The research conducted by Constant Contact shows the following numbers:

The average open rate across all industries is 17.13%, with the highest value of 28.84% for religious organizations and the lowest of 10.25% for automotive services.
Click-through rate is 10.25% on average — the highest is 17.35% for publishing and the lowest is 5.54% for real estate.
Both hard and soft bounce rates are about 10.28% — as low as 6.47% for civic/social membership and as high as 15.47% for legal services.

With transactional emails, it’s a bit trickier, because the open rate you should strive for depends on the type of email you send. For example, reset password emails should be opened by the majority of your recipients, let’s say up to 90%. Order confirmation emails won’t have such a high open rate but will receive much more interest than a marketing campaign, even one with an exclusive offer.

The other side of the coin for transactional emails is that they should be opened by the right people at the right time. Otherwise, such emails are useless or even harmful. Imagine that John has just signed up for a financial service to create a tax report. To proceed, he needs to click a button in the confirmation email. After three hours, when John’s working day has already finished, he receives an email welcoming someone named Jack. Will he move on with that service? It’s doubtful.

We are pretty sure that you test your emails before sending them to avoid such ridiculous situations. But are you certain that your email testing flow is comprehensive and smooth enough?

What Should You Test For Different Sorts Of Emails?

We have already mentioned the difference between marketing and transactional emails. Their purpose, sending method, and performance vary, and so should the testing flow.

However, the goals of testing are common for all types of email sequences — ensure deliverability, content excellence, and engagement. That’s why there is a list of aspects that you should test for any kind of email.

Universal Email Testing Aspects

Email sending infrastructure.
Even when you use a dedicated email marketing service, you need to check whether all integrations work perfectly fine, especially when the service is first set up.

Make sure you use the proper domain for sending emails. A mistake could be made when you work with several projects/websites.
Check whether the necessary authentication methods are set up — SPF and DKIM are required, while DMARC is highly recommended but still optional.
Test your SMTP connection (there are tools both for developers and marketers — we will take a look at them later in this article).
Examine all other additional settings, such as using dedicated or shared IP, feedback loops, and so on.

Email template.
No matter what the message’s purpose is, it should be correct and visually appealing for every recipient.

Every company email, from a small notification to a detailed tutorial or newsletter, should be built with an HTML template. It is important to make sure that your message looks as designed for all your recipients. The trick is that different email clients use different rendering engines — this means that there is no standard for processing email templates. Even if you included a small PNG picture, there’s no guarantee that it will be properly displayed across all email clients and devices — let alone more complex elements, such as video or animation.
Needless to say, the email message must not contain any mistakes or typos. Email copy also needs testing, to be clear, concise, and correct.
All the links and buttons should be valid and lead to the right destinations. Pay special attention to automatically generated personal links — for example, account confirmation, password reset, personal offer, etc.
Personalization and/or dynamic content.
Today almost all messages contain at least a tiny bit of personalization. When using merge mechanisms, make sure that emails are sent to the right addresses and that dynamic variables are generated correctly (e.g. username, location, behavior in the app, and so on); a small test sketch follows after this list. Otherwise, you risk not only offending your customers by calling them the wrong name (Hello %FirstName%!), but also disclosing personal information. All in all, it may result in a low conversion rate for your campaigns.

Email headers and subject.
The sender’s name and the email subject line are the first two things the recipient sees and considers when clicking your message to open it or to route it to a spam folder. Add more focus here!

From, To, and Cc are three well-known headers. Make sure they are technically correct, and also don’t be afraid to experiment — changing the “From” address can also impact the email open rate.
The subject line is also a header, and much has been said about it — there are special subject line checkers and various lists of “100 words that you shouldn’t use in subject lines”. Besides, the email subject is the first thing used in email A/B testing, so we won’t focus on it here. Let’s talk instead about the pre-header, the third thing you see in your email inbox after the sender’s name and subject line. The pre-header is often neglected, but it’s a preview text that should convey the quintessence of your message and nudge a recipient to open it. If you don’t set it, the first characters of your email will be used by default. Why waste those precious 50-100 characters on “Hey #name something”?
Tip: Use the &nbsp; trick to set a pre-header if your email sending tool doesn’t offer such functionality, and then test how it will look for your recipients.

Technical headers or metadata.
There is message metadata that can help you debug messages or track them. For example, some providers allow adding categories to the X-SMTPAPI header of the emails, which is useful for email performance tracking. Checking the raw message data can provide you with helpful information if you are an advanced SMTP email user.

Spam checks.
Would you like to have a list of precise criteria that cause emails to be marked as spam? Everyone would, especially spammers. That’s why you can only follow some general rules and use dedicated services that analyze the reputation of your sending IP, the content of your email headers, the correctness of your HTML, whether you use certain phrases, and so on. Transactional emails can get classified as spam, too.
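To make the personalization point from the list above more concrete, here is a small, hedged sketch of an automated check; renderTemplate is a hypothetical helper standing in for whatever merge mechanism you use:

// Jest
const { renderTemplate } = require('./templates'); // hypothetical helper

it('renders the welcome email without leaking merge placeholders', () => {
  const html = renderTemplate('welcome', { firstName: 'John' });

  expect(html).toContain('Hello John');
  // No raw %FirstName%- or {{firstName}}-style placeholders left behind.
  expect(html).not.toMatch(/%\w+%|\{\{\s*\w+\s*\}\}/);
});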

Peculiarities Of Transactional Email Testing

When you send triggered emails, especially from your own application or online service, you should take care of a few more aspects.

Test your sending script. Here are two important aspects:

Test whether your code works and sends emails in the right way.
Check whether it works seamlessly with triggers: when a user completes a specific action, they should receive the appropriate email notification. It’s efficient to cover this part of your app’s functionality with automated tests — perform user acceptance testing. A minimal sketch follows below.
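Here is that minimal sketch of such a test. sendWelcomeEmail is a hypothetical function from your own code base, and Nodemailer’s jsonTransport is used so that the message is built but never actually delivered:

// Jest
const nodemailer = require('nodemailer');
const { sendWelcomeEmail } = require('./mailer'); // hypothetical trigger handler

it('sends a welcome email when a user signs up', async () => {
  const transport = nodemailer.createTransport({ jsonTransport: true });

  const info = await sendWelcomeEmail(transport, {
    email: 'john@example.com',
    name: 'John',
  });

  // jsonTransport returns the built message as a JSON string in info.message.
  expect(info.message).toContain('john@example.com');
  expect(JSON.parse(info.message).subject).toMatch(/welcome/i);
});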

If you are launching a large-scale system, it’s important to perform email app load testing. What if 2,000 users simultaneously click “reset password”? What happens next? How soon will they receive an email confirmation?

Marketing Emails Testing Focus

When you use a bulk email service, in most cases, you have built-in automation and basic testing functionality.

However, there are a couple of moments that require manual checking. They may seem obvious, but are still often overlooked.

Make sure that the right database is selected (or the right list of contacts in your email marketing service). There are many cases when test emails are sent to customers, or a customer segment receives an offer that is not valid for them.
Our favorite real story is about the US embassy in Australia accidentally sending out a test email known as a “Cookie Monster cat email”. It had “Meeting” in the subject line and a picture of a cat in blue pajamas holding a plate of cookies in the content. They were testing some new email sequences and selected their real email database instead of a testing list.

This email failure quickly became popular on Twitter. Dozens of cat memes on social media are obviously funny, but this is not always the type of PR activity you expected to be part of.
Take care of the proper naming for your clients’ lists — in some tools, they can be visible for your customers when they double opt-in or unsubscribe.

Check Bcc and Cc so as not to reveal any email addresses. It is not recommended, but sometimes Bcc is still used for sending small batches of emails. Just a couple of years ago, one online publication for software developers sent a digest to its subscribers by listing them in the Cc field. Everyone on that list could therefore see the email addresses of the other recipients — obviously, the Cc and Bcc fields were simply mixed up in that message.

Common Email Testing Mistakes

Everyone has received an email sent by mistake at least once. Wrong links sent in promo campaigns don’t cause much harm, but what if your broken email script can lead to a data breach or a system crash? With strict data privacy guidelines, such as GDPR and CCPA, unintentional emailing to unsubscribed users or using email addresses of people who have never subscribed to your communication can lead to conflicts or penalties.

There are two main types of email testing mistakes:

Important email aspects being ignored or
The email testing infrastructure being set up improperly.

In many teams, the email testing process is limited to sending a few test emails to your own or your colleagues’ inboxes. Alternatively, public test inboxes are used so as not to flood personal inboxes.

What can such an approach inform you about? Well, you can learn that your email sending service is functioning and that your email content is displayed correctly in your recipients’ email clients (or in a web browser). You can also ask your colleagues to click links and read the email copy. But what about other email clients and, even more importantly, deliverability?

Another point is testing 5 emails out of 5,000. When you have a robust email-related system, you have to make sure that it functions correctly. We have mentioned the importance of merging mechanisms, so here’s the case — can you be certain that the billing information of Mr. Bluesky & Co. doesn’t go to Bluebird & Bluebird because of a tiny mistake in your database? The most efficient way is to run a set of automated tests (for example, with Selenium) and check the content of each email.

Correct Email Testing Process

A smooth email testing process is based on simple principles:

Use a checklist to cover all important email aspects.
Automate everything you are able to in order to minimize human error.
Limit access to your sending servers and deployment processes.
Run all tests in a staging environment and use a fake SMTP to avoid sending test emails to real users. Some developers tend to use /dev/null fake SMTP server for email testing, but this is not efficient as it doesn’t imitate production. If you run tests in a production environment, be ready to reveal test data to real users.
Use server monitoring tools. They can help you detect abnormal activity and quickly take action if something goes wrong.

Email Testing Toolkit

There is no ubiquitous email testing flow because the process of testing depends on the type of email campaign you send, the method you use (whether it’s an email marketing service or your custom email functionality in your application), and the team you work with.

Development and QA teams who build transactional email sequences usually run automated tests by integrating testing frameworks and tools, such as Selenium.

For marketing campaigns, it’s more common to create an email testing checklist. I’d recommend creating a custom email testing list based on the aspects explained in the “What should you test for different sorts of emails?” section and completing it with the tools of your choice.

When you select an email tool, especially a paid one, pay attention to those services that offer wide functionality and can be used by both marketing and development teams.

We have talked about the general rules of successful and painless email testing. Now let’s go into detail about each type of test, with examples of tools.

How To Test Sending Infrastructure

This type of testing is usually used by developers, but there are tools that can come in handy for marketers as well.

In general, you should test your mail server when you establish a new integration, change your setup, or maintain your own email sending infrastructure.

Telnet is a computer protocol that provides communication with a remote server. It’s also a command-line utility available in most common computer systems, including Windows, Mac, Linux, and more. It can help you test server connection, ports open for email communication, supported SMTP commands, relaying specific email addresses or domains.

To use this utility for testing, you will need to run a set of SMTP commands in a telnet client (which is pre-installed on the majority of systems).

Wormly is an uptime monitoring service that also has an SMTP/Mail Server Test. It can help you test whether your SMTP server is configured correctly by sending a test message to your email server. Wormly will log the SMTP conversation for you so you can check and debug errors or exceptions if there are any to be found.

It’s super simple to use: you just need the address of your sending server, a recipient email address, and a port. You can also share the results with your team via a shareable link. Here is how a marketer can use it — run a test in Wormly, proceed if no errors were thrown, and, if any were found, send the link to your dev team for troubleshooting help.

Wormly can be also useful for SMTP server monitoring, which I’ve mentioned in the Correct Email Testing Process section.

Gmass is an email marketing service inside Gmail. It also has a mail merge functionality and cold email sending options. The tool is a paid service, but it has a few helpful free tests:

SMTP tester,
Email tester (SPF, DKIM, blacklisting),
Inbox, Spam, or Promotions (inbox placement),
Email verifier (contact list),
Email deliverability wizard (stats for a list of campaigns via any Google account within the last 24 hours that had at least 1,000 recipients).

If you expect high loads on your email server, it will also be good to perform load testing with a tool like Apache JMeter™ or an email sandbox service.

How To Test Email Content

In this category, you have the widest choice of tools. We will name a few to provide you with a starting kit.

Email Template Checkers

When building an HTML email template with a special template builder or using a drag-and-drop editor in your email sending tool, you will have a preview option. It will display how your template is rendered and, in most cases, how it looks on different devices.

If you need a separate solution for HTML preview, you can use a tool like PilotMail. It provides an email builder, layout viewer, and a test email sending to 10 addresses. There is a free plan, but the paid one costs just $3/month.

With no rendering standards for email clients, it’s important to perform email client testing to make sure that your email will look perfect in any of the recipient’s email clients. The most popular tools in this category are Litmus and Email on Acid. They generate previews for a list of email clients and allow you to manually compare them and look for issues. You can experiment with both of them on a 7-day free trial, while further usage starts at $73/month.

Another approach for email client testing is used in HTML Email Check and Mailtrap (As a friendly disclaimer, I actually work there.) These tools analyze your email template for HTML and CSS validity and provide you with an actionable list of errors for each email client. Free plans are available for both tools, and their paid subscription plans start at $10/month.

Email Content Checks

Spell checkers and content testing tools are helpful for emails as well. Grammarly or ProWritingAid can help you check your email copy for errors and typos (both have free plans). Hemingway Editor can give you tips on improving readability, and if you’re struggling with writing, Conversion.ai can actually write your email copy for you.

The Subject Line And Preheader

Email marketing services usually provide options to preview the subject line and preheader. You can also play with TESTSUBJECT by Zurb that displays email subject and preheader previews for a few types of mobile devices. Preheader Testing Tool also provides similar functionality.

Subject Line Tester by CoSchedule and Send Check It can help you optimize your subject line for driving more opens.

How To Test Email Deliverability

Finally, we’ve made it to our favorite, but the most complicated, point — email deliverability. A whole set of factors impact your spam score: sender reputation, authentication, and email content.

There are a few tools that test your email against all the criteria.

You can start with Mail Tester. It’s free, but it provides a number of important tests. You can view your message and its source code after sending your email to a provided test address, get a SpamAssassin score, authentication check, message body analysis (it also shows HTML size), blacklisting report, link validation, and more.

Free Spamcheck by Postmark offers similar functionality. You can use it without sending a test message — just paste your HTML code with all headers and get an instant result. You can also integrate Spamcheck with your app via their JSON API and run automatic spam checks for all emails you send.

If you need a more robust solution, there is GlockApps. It has a 14-day free trial and then offers subscription plans starting at $79/month. By the way, their website states that:

“On average 51% of emails never reach the inbox! So where do they go? 26% go to the spam or junk folder and 25% are never delivered.”

GlockApps is known for its powerful reports available for registered users. It provides inbox placement, reputation check, DMARC analytics, bounce analytics, automatic tests, monitoring and alerts, and template editor. In addition, there are a few useful separate free tools: domain checker, inbox email tester, inbox insight, DMARC analytics, and uptime monitor.

Deliverability tests are also included in many full-featured email sending and testing tools.

Summary And Main Takeaways

The main purpose of this article was to prompt you to take email testing seriously, run it as a separate project, and pay attention to every important aspect. Email is not just another line in someone’s inbox — it’s a part of the user experience of your app, website or blog. It’s ideal to be able to have a team of marketers, developers, product managers, and QAs who can work together on spotless email sequences. But even if you are running a small project or have limited time and resources, you have a great list of handy tools that can do the job for you.

For any type and scale of the email-related project, it is worth allocating time to establish an email testing workflow and to experiment with tools. This will save you time and improve the quality of your campaigns at every iteration. And of course, you are very welcome to share your email testing stories and approaches in the comments!

Related Reading on SmashingMag:

A Complete Guide To HTML Email
Design Your Mobile Emails To Increase On-Site Conversion
Level-Up Email Campaigns With Customer Journey Mapping
Everything You Need To Know About Transactional Email But Didn’t Know To Ask
HTML Email with Rémi Parmentier (webinar/video)
