How to Manage Unknowns as a Web Engineer

Working across multiple projects and platforms, web engineers are expected to know a lot of different things.

However, it’s important to stress that none of us needs to know every single technology or concept we’re likely to come across. It’s how you face the unknown that tests your mettle as an engineer, and there are ways to get better at facing unknown problems. I like to think of this as a problem of managing unknowns – everything you don’t know is a risk that could cause obstacles or delays, but there are ways to manage that risk consciously and guard against getting stuck.

We work with legacy code touching arcane tools like Backbone.js, various opinionated page builders or meta field management plugins, or browser compatibility shims. We also work with greenfield codebases where we get to push the limits of webpack and React, and work with Altis, Elasticsearch, or what have you. Sometimes we inherit code that is optimised or structured in a way we’ve never seen before. And sometimes a project will face a genuinely new problem or experiment with novel ways of organising code or tackling a common problem, so you can’t simply Google the answer. The point is: there’s no way any engineer can be expected to know everything going into a project.

I originally started writing this as a post about “tips for asking for help with technical problems”, but as I was writing, I realised that asking for help successfully is part of a bigger mental shift. I’m trying to share some tactics for managing unknowns that have worked for me, in the hopes that they might prove useful to other engineers. I’d love to hear other tips and suggestions for tackling work you know nothing about going in.

If requirements aren’t clear, clarify them before anything else

One of the most frustrating situations for a developer is to spend time building a piece of functionality, only to find out it wasn’t what the client needed. This happens all the time when a ticket is specified in terms that have a specific technical meaning, but the ticket writer was using them loosely. For example, a ticket might talk about a “block”, which to us means a block editor component, but maybe in this case it was a designer’s shorthand for a bit of markup that could be achieved by a template part, or a shortcode, or a widget, or who knows what else. (The reverse is sometimes true as well – a specific word in a ticket might be critical to the requirements, but you might dismiss it as loose phrasing and assume they’re really looking for something else.)

If a ticket isn’t clear to you, make sure you understand what’s needed before starting work. Writing pseudo-code should help clarify what the steps in a problem are, and this is a first product you can share with the rest of the project team to make sure you’re understanding the problem correctly. And as a bonus, once you have the problem written out in pseudo-code, it should be much easier to see which parts of the problem you already know how to solve, which parts you have a vague idea about, and which parts are still complete unknowns.
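
For instance, pseudo-code for a purely hypothetical ticket might look like the sketch below, with each step annotated by how well you understand it:

// Hypothetical ticket: "Show related posts under each article."
// 1. Query posts sharing a taxonomy term with the current post.  (know how)
// 2. Exclude the current post and limit to three results.        (know how)
// 3. Render the results beneath the post content.                (vague idea: template part? block?)
// 4. Let editors pin or reorder specific related posts.          (complete unknown)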

Don’t just pad estimates to avoid thinking through a problem

If you’re asked to estimate a task that you have no idea how to tackle, it’s natural to take what you think is a high estimate, and maybe double it to be safe. This might seem prudent, but if the unknowns are left vague at this point, even the most conservative estimate might not be enough. We use consensus estimates (like planning poker) as a way of sharing knowledge about unknowns and validating concerns. Use these tools to your benefit! Communicating what you know and don’t know at this step will help you get pointers early on and prevent wasting time chasing down wrong assumptions or unlucky guesses.

You don’t have to have a strict breakdown of how you’ll tackle an issue going into estimations, but you should be able to justify your estimate in some way. For example, “I don’t know how to do this, so I’ll give it a week” isn’t a useful estimate, and you’re just as likely to miss the estimate as you are to hit it if you pick up a ticket with that vague an understanding. But on the other hand, “this ticket involves creating an admin field, outputting that value on the front end, and somehow adding it to a REST endpoint so that I can add a block control” makes it really clear what you know and what you don’t. In that case you could easily say “I know how to do the first two things, but I don’t know exactly how it’s done in this project, so that might take me a day. I don’t know X about the third requirement, so I’ll have to do some research into API permissions. That might add a whole day.”

Isolate the problem you’re stuck on and ask for help there

Sometimes, even after you’ve broken a task down into small chunks and tackled them in a logical order, you’ll still get stuck at a problem that feels too big to track down or explain, and this is the point where it’s hardest to ask for help. It’s a small thing to be able to ask “how can I register a REST field for this post meta field”—as a matter of fact, if your problem is that clear, you can probably answer it yourself with a search. But if you’re stuck at a more vague point in the problem, like “why doesn’t this work?”, it might not be as easy to ask for help where you need it.

The tactic of progressively narrowing down the unknowns is the same when you’re trying to ask for help as it is when you’re trying to debug a problem on your own. Use any of the tools available to you—set debug breakpoints, insert var dumps, write unit tests—to pinpoint just where the expected behaviour is breaking down. This might be enough to find the problem! And if not, you should at least have enough context to ask a clearly defined question. Work with the people trying to help you.

Finally, make sure you’re giving people enough information to answer your question. It’s often tough to guess from a description of a problem what’s actually going on – if you can share links to your code or a gist, that helps. If an approach doesn’t work for you, explain what the problem or concern is with it – just saying “I tried that, didn’t work” to any suggestions people offer is a sure way to frustrate them and yourself.

Documenting the solutions you find after getting help is also a great way to foster an environment where supporting each other is encouraged. Answering questions on Slack, pair programming, and support all take time, and that time feels much better spent to the person putting it in if the knowledge you arrive at is shared publicly at the end of it. Any bug that one person faces and figures out should contribute to the overall pool of knowledge accessible to the rest of the team.

Practice brings comfort

Learning how to manage unknowns is a skill developed over time. In a recent conversation we had internally, the team talked about how a principal engineer is expected to be able to “fight fires” by coming into any project and figuring out what needs to be done to get it finished, no matter what technology or tools it involves. This may or may not be a reasonable expectation, but my point is that the ability to work around what you don’t know is something that can be learned, starting with managing unknowns at the feature level and working up until you’re comfortable looking at a codebase where everything is new to you.

Part of this is just developing instincts and learning to trust your gut. It may never feel comfortable to estimate a task you have no idea how to do. I know from the times when I did sales engineering work that I’ll always feel a moment of panic after committing myself to a huge abstraction like “we can do this project in 130 developer days”. I get the same feeling when I start on a ticket and have to spend a couple of hours investigating an approach before I know if it’ll even be possible or not.

But over time, through paying attention to the shape of what you know and don’t know, you can get to a point where your estimates are close most of the time, your instincts of where to look for a bug are usually good, and you can often give enough context to get a decent answer to your questions.

A Primer on Acceptance Testing – written by Thorsten

Foreword

Last week, I took the exam for the ISTQB® Acceptance Testing certification. This is a Foundation Level certification, specializing in acceptance testing, including all related activities.

In this post, I would like to share what I consider the essential information included in the official learning material, plus one or two things I have learned over the last few years. While the certification, as well as acceptance testing itself, targets quite a broad range of roles, the learning material focuses on the two roles of Tester and Business Analyst (BA). At the moment, we don’t have either of these roles inside Human Made, and I don’t know how many projects (or rather clients) actively involve testers and/or BAs in the project work we are involved in. However, following Agile methodologies, engineers (just like project managers) should ideally be involved in some of the activities that testers or BAs would be doing. In Scrum, this is also true for the Product Owner (PO) role.

This post is not an extensive document on all the various aspects of acceptance testing, for example, roles, perspectives, activities, and objectives. It should rather be seen as an introduction, going deeper into some aspects while only scratching the surface of others.

Acceptance Testing

Acceptance testing is the name of a test level. It is performed to assess a system’s readiness for deployment and use by the end user.

Acceptance testing is also the shared responsibility of end users, business users, testers, product owners, administrative staff, and any other stakeholders.

Where Acceptance Testing Fits

Using the “common” V-model, as illustrated in the following diagram, acceptance testing is situated at the end of the development process (i.e., the top right side).

The “common” V-model, including Acceptance Testing (top right).

However, this does not mean that acceptance testing is the last thing you would do. Even less so in Agile development. While acceptance testing, in general, targets the complete system, it is both possible and common to also perform acceptance testing for a specific feature, or user interface (UI) element, or business process. Acceptance testing is the last test level simply because testing the (user) acceptance of something that is not yet complete is almost useless.

The reason I like the above model is that it provides an easy-to-understand and yet complete overview of all the different (typical) test levels, the different development phases and activities (which exist in both traditional and Agile development, in some shape or form), and, most importantly, the relationships between them. Acceptance testing assesses whether or not the implementation meets the Customer Level requirements—in Agile development, requirements are often represented by user stories, or features.

Objectives of Acceptance Testing

Acceptance testing focuses on the behavior and capabilities of a whole system or product. Its main objectives are:

  • establish confidence in the quality of the system (feature);
  • validate that the system (feature) is complete and works as expected;
  • verify that both functional and non-functional behaviors are as specified.

Forms of Acceptance Testing

The most common (and for us relevant) forms of acceptance testing are:

  • User Acceptance Testing (UAT)
  • Operational Acceptance Testing (OAT)
  • Beta Testing

User acceptance testing aims at building confidence that end users can use the system, fulfill requirements and perform business processes with minimum difficulty, cost, and risk. Operational acceptance testing, on the other hand, focuses on building confidence that administration staff can keep the system working properly for end users, even under exceptional or difficult conditions. OAT includes things like upgrading, user management, data load and migration, and performance testing.

Beta testing is conducted to obtain feedback from the market after development and in-house testing. Since it is performed in various realistic configurations by potential or existing users at their own location, beta testing may discover defects that slipped through development and the previous test levels.

Depending on the project context, there might be more, for example, regulatory acceptance testing, which is testing that the system or product adheres to specific government, legal or safety regulations.

Acceptance Criteria

When working on a product or other project, you usually have a set of requirements. These might cover both the business and the product side, and, typically, at least BAs, testers and developers are involved in the requirements engineering process. If you intend to perform acceptance testing, this is the ideal time to also develop acceptance criteria, and, based on these, acceptance tests. Doing this as a joint effort of this multi-disciplinary team ensures a mutual understanding of what acceptable means from the business, development, and testing perspectives, right from the start.

Requirements and Acceptance Criteria

Each acceptance criterion relates directly to a specific requirement or user story, and it is either part of the detailed description or an attribute of the related requirement. When using user stories, acceptance criteria are part of the user story’s definition.

If requirements or user stories are vague or ambiguous, people will likely make assumptions—assumptions which may be incorrect. As a consequence, resulting acceptance tests might be flawed or invalid, and thus create unnecessary costs, as well as risks and uncertainty about the overall quality. Therefore, it is critical that developers, testers and BAs work closely to make sure that requirements are clear and well understood by all stakeholders.

Acceptance Criteria Best Practices

Acceptance criteria refine requirements or user stories, and they provide the basis for acceptance tests. They are formulated as one or more statements, each of which can be either true or false. Acceptance criteria also represent the test conditions used to check whether or not a requirement or user story has been implemented as expected.

In order to create acceptance criteria, one needs to think about both functional and non-functional requirements, from a stakeholder and user perspective. This provides a high chance of detecting inconsistencies, contradictions, or missing information, and it supports early verification and validation of the related requirement or user story.

Well-written acceptance criteria are precise, measurable, and concise. For each criterion, testers need to be able to tell whether a given test result complies with it or not. Acceptance criteria do not contain technical information such as the technologies used, or implementation details. Acceptance criteria do not refer to UI elements.

Sample Acceptance Criteria

Good examples of acceptance criteria:

  • A registration form is displayed on the screen.
  • I cannot register if the username I choose already exists.
  • After successful registration, I am informed about it via email.

Bad examples would be:

  • The registration form is realized with Gravity Forms.
  • The cursor is initially set on the login field.
  • Pressing the TAB key, the cursor switches to the password, repeat password, email, and repeat email input fields.

Reviewing Acceptance Criteria

As with requirements and user stories, acceptance criteria should be reviewed regularly to ensure they are clear, consistent and comprehensive, and that they also cover non-functional characteristics. These quality assurance activities can be performed during sprint planning or refinement meetings, or formal technical reviews.

Acceptance Tests

Acceptance tests are derived from acceptance criteria, or other requirements documents. While the latter determine what to test, acceptance tests specify how to test, including detailed test procedures.

Designing Acceptance Tests

Acceptance tests represent scenarios of usage of the system or product. When performing requirements-based acceptance testing, there are several work products that can be used as a test basis. The obvious ones are user or business requirements, or user stories. But system requirements, documentation, regulations, contracts, and standards also oftentimes form a good basis for certain acceptance test cases.

Acceptance tests are typically Black-box Tests. This means that the test does not care about the implementation details, but treats the subject under test as a black box, only interacting with its public interface. Acceptance testing is about behavior, functionality, or usage, not about implementation. Relevant acceptance testing techniques include Equivalence Partitioning and Boundary-value Analysis. (For an input accepting values from 1 to 100, for example, equivalence partitioning suggests testing one representative value from each valid and invalid partition, while boundary-value analysis suggests testing 0, 1, 100, and 101.)

Other test techniques or approaches often used for acceptance testing are:

  • business process-based testing, validating business processes and rules;
  • experience-based testing, leveraging the tester’s experience, knowledge and intuition;
  • risk-based testing, using previously identified product or business risks;
  • model-based testing, using graphical (or textual) models to obtain test cases.

Using Acceptance Tests to Drive the Development

There are several approaches to acceptance testing where both test analysis and test design are formally part of the requirements engineering process. The most prominent ones are Acceptance Test-driven Development (ATDD), and Behavior-driven Development (BDD). The names already indicate that the whole development process is driven by either acceptance testing aspects, or the system behavior (which is to be validated and verified).

ATDD relies on different forms of textual or graphical acceptance test design, for example, representations of application workflows. BDD, on the other hand, uses a domain-specific scripting language, Gherkin, that is based on structured, natural-language statements. Requirements are defined in a “Given—When—Then” format, and they are also used as acceptance test cases, as well as for test automation. This means that acceptance tests become living documentation: easy to understand by all stakeholders, and important for the complete development process.

Using the Gherkin language, all requirements or acceptance test cases follow a standardized pattern:

  • Given [a situation]
  • When [an action]
  • Then [the expected result]

The “Given” block specifies the initial state of the test object, the “When” block indicates what actions to perform, and the “Then” block includes the expected consequences. This Given—When—Then triple is no different from the Arrange—Act—Assert triple, which you might know from unit or integration testing (i.e., white-box testing techniques).

As with acceptance criteria, acceptance tests using the Gherkin language do not refer to user interface elements, but rather to user actions on the system.

Here are two sample requirements/tests:

Given I have specified a post title
When I navigate to the Social tab in the Multi-Titles block
Then I see the social headline reflect the post title

Given I am creating/editing a post
And the content has triggered legal warnings
When I publish the post
Then I see a failing Legal Warnings item in the publication checklist

Non-functional Acceptance Tests

Meeting the expectations for non-functional quality characteristics strongly influences user acceptance. In terms of acceptance testing, the most relevant product quality properties are Performance, Usability, and Security. That is not to say that the others are not important. However, these three characteristics directly affect both the business and the end-user perspective.

In addition to that, performance testing, usability testing and security testing oftentimes require specific approaches to obtain a desired level of coverage, especially if there are (legal) regulations to meet.

What Now?

I think there is some low-hanging fruit. And, to be fair, there’s some rather tedious initial setup to do, too. But let’s focus on the easy wins first, and start writing clear and standardized requirements. Requirements that can be used for manual testing. Which means they can also be used for automated testing!

Let’s not do that for an entire codebase all at once, but gradually. And when the time is right, we can build up more. I’m positive that sooner or later, one or the other person on your project team—maybe you—will benefit from that.

I also think that we would benefit greatly from creating and maintaining business process/rule models for select workflows or processes. And this is true for requirements engineering, for implementation, for testing, and also for communication with stakeholders and/or third parties. Using a standardized approach allows these models to be understood by a variety of roles, and to be used for a variety of activities. If there’s interest, I might write a follow-up post specifically on this subject…

Functional acceptance testing is always specific to a project, and it initially requires a lot of manual work. For non-functional acceptance testing, however, there are existing tools, services, and plans to use. Verifying that a certain workflow or a whole product complies with select security regulations, accessibility criteria, or performance metrics is comparatively easy, and at the same time rather important.

What do you think about all this? 🙂

Altis 2: A/B testing, Publication Checklist, enhanced DX

We’re excited to announce the release of a new iteration of our next-generation digital experience platform: Altis 2 features A/B testing, a new Publication Checklist workflow, enhanced privacy and GDPR compliance, cloud improvements, and a fine-tuned developer experience.

Business logic to enhance workflows

Altis 2 introduces exciting new features to power improved experiences for developers and marketers with a focus on experimentation and workflows.

The new Analytics Layer boasts a web experimentation framework built on top, providing real-time statistics on website activity to give you a full view of user behaviour as it’s happening. This also enables new features to programmatically create A/B tests for data points – putting experimentation tools directly in the hands of users.

The new Publication Checklist introduces a full framework for building custom publishing checks both on the backend and the frontend, creating a risk-free editorial experience and empowering publishers.

Altis’ developer experience is enhanced with developer tools now enabled out of the box on non-production environments.

We’ve also improved our Custom Documentation toolkit, enabling you to add your own documentation as part of your internal knowledge base for developers.

Altis 2 highlights

Native Analytics and experiments:
A/B testing, real-time stats, enhanced privacy and GDPR compliance

Custom Documentation:
Strengthen your internal knowledge base with documentation tailored for your development team

Cloud improvements:
An improved Altis Dashboard mobile experience

Publication Checklist:
Custom editorial task lists to prevent content going live before it’s ready

Google Site Verification:
Verify your site without codebase changes

Developer experience and APIs:
Simplified autoloading; developer tools enabled out of the box on non-production environments; local server and configuration improvements

Analytics and experimentation

Built on the Altis Cloud infrastructure, the native analytics layer introduces a framework to power data-driven decisions, including programmatically creating A/B tests for headlines.

Altis’ new native analytics module puts experimentation tools directly in the hands of editorial and marketing teams, enabling powerful capabilities including real-time stats, A/B testing, and personalisation. It also supports enhanced GDPR and privacy compliance, ensuring you own your data and limit transfer to third parties.

At a glance:

  • Create your own custom metrics to feed into external reporting
  • Real-time statistics and overview into effects of marketing activity
  • Extensible API to enable custom experimentation
  • Audience segmentation and campaign features

Publication Checklist

The new Publication Checklist feature provides a framework for building pre-publish checks, with flexibility to fit your workflows.

Altis Publication Checklist in the block editor

Altis now includes a Publication Checklist feature built natively for the Block Editor, allowing you to ensure specific conditions are met before publishing. This enhances business processes, supporting legal compliance and helping to automate quality control. 

It contains a full framework for building pre-publish checklists both on the frontend and backend, including deep integration into the Block Editor.

For editors, the Publication Checklist supports an entirely customisable set of conditions, helping them move faster and with more confidence, using a workflow tailored to them.

Cloud improvements

Enjoy an improved mobile experience, comprehensive database logging, and more detailed status information for your deploy process on the Altis Dashboard.

Altis Cloud Dashboard

We’ve been hard at work improving Altis’ cloud infrastructure and tooling. Our deployment system now provides much more detailed information so you can see exactly how your deploy is going, and pinpoint any errors immediately.

The Altis Dashboard now works much better on mobile with improvements to our responsive design, allowing you to check logs and site status no matter what device you’re using.

We’ve also added more detailed database logging into the Altis Dashboard, allowing you to see database errors and slow queries at a glance.

At a glance:

  • Improved Altis Dashboard on small screens
  • Comprehensive database logging
  • Detailed deploy status information

Custom documentation

Build an internal knowledge base for your development team in one place, on top of Altis’ documentation.

Altis documentation

Altis is designed not just to enable great publishing experiences, but also to provide a framework to ensure projects remain sustainable in the long-term.

As part of these efforts, we’ve added documentation to allow you to use Altis’ Documentation module for your own, project-specific documentation. This allows codifying best practices and documenting custom modules alongside the built-in ones, providing a unified knowledge base for your development team.

At a glance:

  • New capability to add custom documentation
  • Extended support to add, remove, or manipulate existing documentation

Request a demo

To request a demo, head over to the Altis website and get in touch with our Sales team:

Contact Sales

Let us know what you think

Altis is open source, and its code (as long as it doesn’t depend on cloud features) is available to the developer community. If you’re ready to give Altis 2 a glance on your local development environment, we’re keen to hear your feedback – drop us a line via email, LinkedIn, or Twitter!

A technical introduction to Altis, our enterprise-augmented WordPress platform

In May 2019, we launched Altis, our next-generation digital experience platform. Altis is the evolution of how we work with WordPress, and we believe it’s a fundamental step forward for the WordPress ecosystem.

So what is Altis, and what benefits does it bring for your development team and process?

Digital experience platform: a fundamental shift

The biggest change that Altis represents is a fundamental transformation in how we think about the role of our software in enterprise.

WordPress has humble beginnings as a blogging platform that evolved into a content management system (CMS). But the web hasn’t stood still in the meantime, and the scope of what we can achieve with web software has grown. Increasingly, enterprises are now seeking solutions for much broader problems: moving beyond simply managing content, to platforms that can support the creation, distribution, and management of content at scale.

As enterprises undergo digital transformation, our software needs to adapt and push forward. A website is no longer something maintained by the marketing team: it now forms the core of all digital experiences across the whole organisation. Functionality like open APIs and aggregation is key, allowing data to be available wherever it is needed.

The rise of the digital experience platform (DXP) has been driven by these increased needs. Traditional vendors have begun to adapt to this changing marketplace, offering proprietary solutions built on antiquated technology. Taking the market-leading, open source WordPress to enterprise allows us to provide engaging digital experiences, empowering users to achieve more. It allows us to push open source further into enterprise, offering not just an open solution, but a best-in-class solution.

Enterprise-augmented WordPress

As we embrace new challenges, we’re faced with the reality of enterprise needs.

WordPress is built and designed for the consumer, which leads to fantastic user experiences, but can be technically challenging. Often we find ourselves rebuilding the same functionality due to the lack of a foundational framework for solving these problems – functionality that can be critical for large organisations. Features like single sign-on, multilingual content, and advanced workflows are baseline functionality for any digital experience platform.

Altis augments the WordPress user experience with tailored modules to push beyond a simple CMS to tackle these challenges. These modules provide proven solutions, pushing the baseline for all projects forward. This enables us to focus on meeting the unique challenges of each organisation, and rather than rebuild the basics, provide more opportunities for digital innovation and growth.

The modular design of Altis allows us to provide not just a batteries-included platform, but a batteries-replaceable one. Modules can be disabled and replaced as needs dictate, allowing projects to override functionality or bring their own.

Empowering developers

With Altis’ core modules, we’re not only able to bundle functionality, we can also provide a better developer experience. By providing developers with better tooling and documentation, we can empower them to build the best possible results for end-users.

Screenshot of Altis' developer documentation

Documentation is a core part of our developer experience. By providing full, unified documentation for all modules in Altis, we’re able to cut down the discovery and onboarding period, and reduce the time-to-market: helping organisations test, iterate, and innovate on their digital strategy.

Our documentation provides detailed guides, sharing our experience engineering complex projects to allow new projects to jump straight to implementation.

Altis also bundles developer tools, codifying best practices and removing pain points. While documentation is useful, often it’s the more convenient tools that get the most use. Our tooling provides an efficient, ergonomic way to follow best practices and standards.

In addition, we’re able to accelerate development by offering a unified development environment, from virtual machines and containers, through live development stacks, to in-browser developer tools. Our integration ensures a unified approach through the development lifecycle.

An integrated experience

WordPress is renowned for its vibrant ecosystem of plugins, offering much more functionality than could ever be built by a single team. This has been foundational to the success of WordPress as a major player across the web, allowing it to act beyond a CMS and as a true platform for development.

The power of the third-party ecosystem comes with limitations, as each plugin builds its own unique experience. This can lead to an inconsistent user experience, and duplication of functionality.

With Altis, a carefully curated set of core functionality provides for an integrated experience, using best-in-class solutions. Through better cohesion of this functionality, we’re able to create experiences which get out of the way of users, empowering them to work better. Our common APIs allow developers to build custom functionality without reinventing UI paradigms or backend implementation.

In addition, we are working with our technology partners to build a vibrant ecosystem. These integrations allow our clients to solve their unique challenges by working with trusted industry leaders, while providing a cohesive user experience.

And of course, WordPress plugins are always within reach to extend and enhance your platform capabilities. Our code review and quality assurance processes ensure that even third-party plugins are thoroughly vetted and tested before moving to production.

Our commitment to open source

Open source is at the core of everything we do at Human Made. Altis would not be possible without the open source community, and we are committed to contributing back.

As we shift away from bespoke solutions and towards an integrated product, we are able to contribute back to the community in a much better way. Rather than code languishing in a project’s closed source repository, it can form the basis of new open source solutions. New modules can be developed and validated by enterprise clients, proving out new approaches. These solutions can be maintained continuously as part of our platform, and every project built on them can benefit from improvements.

As a core part of our platform, we’re able to ensure our open source plugins can be actively maintained. This allows our projects to be assets rather than liabilities, and makes the business case for maintenance much clearer.

Altis allows us to bring the benefits of open source to enterprise, and the stability of enterprise to open source.

Beyond WordPress

With Altis, we’re moving beyond WordPress to something far bigger. Version 1 of Altis represented just the start of our journey; with Altis 2, we’ve iterated on the platform’s core feature set as well as on developer experience.

Talk to us about your digital transformation today

We’re technology partners to market leaders worldwide

Extend and Create Screen Options in the WordPress Admin

* In this post, when I use the term Screen Options (uppercase), I am referring to the Screen Options API, the framework, the class etc. When I use the term screen options (lowercase), I am referring to the options available on the admin screen.

Customising the display of pages in the WordPress admin

The Screen Options tab is most frequently dismissed and ignored — it’s more visual noise on the page that, since you don’t frequently interact with it, you forget exists. But it serves a valuable function in WordPress: the ability to customise the display of any page in the WordPress administration. This is a powerful yet rarely used customisation feature built into WordPress.

If you’re unfamiliar, the Screen Options tab appears at the top right of most pages in the WordPress admin. What it does varies depending on the page you are on, but generally, it allows you to customise the way information appears in the admin. For example, on the All Posts page, you can change what columns appear in the post list and how many posts per page to display. On the Edit Post screen, you can toggle the display of the meta boxes. It sits next to the Help tab, but unlike the Help tab (which many plugin pages lack), most pages include a Screen Options tab by default.

While working on a client project recently, I built a plugin that was designed to track data from disparate sources. The plugin needed to:

  • Add an admin page to display data about various sites maintained by the client;
  • Collect the data from API endpoints on those sites, and store it in a single place so the values could be compared across many sites in one location;
  • Display the information in an admin screen via “widgets” styled to match the dashboard widgets native to WordPress (using the same classes and styles);
  • Allow the user to show and hide these widgets in order to give them more control over the experience and select those widgets that are most relevant to them.

Since this last piece is exactly what Screen Options are for, I set out on a path to implement custom screen options in my plugin.

Here’s the problem: besides some very basic documentation on what Screen Options are in the user handbook, and cursory descriptions of the add_screen_option() function in the Developer Code Reference and WordPress Codex, there isn’t much information available.

There have been a few very good blog posts written about hooking into the Screen Options tab by Pippin Williamson and Joe Dolson. Both of those cases use the per_page option type — which is usually used to determine how many posts to display on a particular admin page. I wanted to toggle options ON or OFF with a checkbox, and I knew it was possible: it’s literally what you see when you open the Screen Options tab on any edit page in WordPress. Yet, even here, no documentation was readily available.

I decided to experiment and built a class (a variation of which I’ve put on GitHub for you to fork, download, or use) to handle all functionality for my custom Screen Options.

Working with Screen Options

The first thing you need to do when working with Screen Options is to register your screen options, like anything else in WordPress. Almost all the documentation (including the documentation in the Codex) will give you an example where you pass per_page as the first value to add_screen_option. The Developer Reference page on add_screen_option says that this value is “an option name” but gives no further guidance.

The truth is that this value can be anything you want. It’s just an identifier for the option you want to store and it’s used when you interact with the WP_Screen object to retrieve your options.

The second parameter of add_screen_option is the data you want to save. Again, this can be literally anything. The only limitations are the needs of your project. This gives you an incredible amount of power and control over how you want your screen options to work.

In my case, I created a custom ID that was unique to each option, and the arguments I passed were the option name and a default value. I looped through each option individually, and added it with something like this:

add_screen_option( "wordpress_screen_options_demo_$option_name", [
	'option'  => $option_name,
	'value'   => true,
] );

All of this gets hooked to the load-$admin_page action, where $admin_page is the unique page slug of whatever admin page you’re using. When it comes time to display the screen options, I check to see if it’s been saved to user meta first, and if not, I use the default values via the get_option method of the WP_Screen class.
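
As a rough sketch of how those pieces might fit together (the page slug, function names, and user meta key below are all hypothetical; the class linked above wraps this up more formally):

add_action( 'load-toplevel_page_screen-options-demo', 'demo_register_screen_options' );

function demo_register_screen_options() {
	foreach ( [ 'option_one', 'option_two', 'option_three' ] as $option_name ) {
		add_screen_option( "wordpress_screen_options_demo_$option_name", [
			'option' => $option_name,
			'value'  => true, // Default used when nothing has been saved yet.
		] );
	}
}

// When rendering a checkbox, prefer the saved user meta value, and fall back
// to the registered default via WP_Screen::get_option().
function demo_get_screen_option( $option_name ) {
	$option_id = "wordpress_screen_options_demo_$option_name";
	$saved     = get_user_meta( get_current_user_id(), $option_id, true );

	if ( '' !== $saved ) {
		return (bool) $saved;
	}

	return (bool) get_current_screen()->get_option( $option_id, 'value' );
}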

My helper class has a before and after method to handle the HTML markup before and after looping through each option, and the whole thing is hooked to the screen_settings filter which will add it to the screen options tab.
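
The screen_settings hookup itself can stay fairly small. Here is a minimal sketch under the same assumptions, reusing the hypothetical helper from the previous snippet:

add_filter( 'screen_settings', 'demo_screen_settings', 10, 2 );

function demo_screen_settings( $settings, WP_Screen $screen ) {
	// Only render our checkboxes on our own admin page.
	if ( 'toplevel_page_screen-options-demo' !== $screen->id ) {
		return $settings;
	}

	$settings .= '<fieldset><legend>Demo options</legend>';

	foreach ( [ 'option_one', 'option_two', 'option_three' ] as $option_name ) {
		$settings .= sprintf(
			'<label><input type="checkbox" name="%1$s" value="1" %2$s /> %1$s</label>',
			esc_attr( $option_name ),
			checked( demo_get_screen_option( $option_name ), true, false )
		);
	}

	$settings .= '</fieldset>';

	// The demo renders its own Apply button; clicking it submits the
	// surrounding screen options form.
	$settings .= get_submit_button( __( 'Apply' ), 'primary', 'screen-options-apply', false );

	return $settings;
}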

After doing all this, I’ve got something that looks like this:

Screenshot of the Screen Options tab added by the demo plugin: three sample checkboxes and an Apply button

So at this point I have the options, they display in the Screen Options tab, and they are unique to my admin page. But they don’t do anything yet.

In order to make them functional, you need to use the set-screen-option filter to save the screen option to user meta when the Apply button is clicked. Several posts about working with screen options will give some example code like this for working with the set-screen-option filter:

add_filter( 'set-screen-option', 'set_option', 10, 3 );

function set_option( $status, $option, $value ) {
	if ( 'my_option' === $option ) {
		return $value;
	}
}

But as Joe Dolson rightly points out in his post, this won’t work if you are passing more than one option to screen options. If you want to validate the data you’re saving, you can still make that check, but default to the original value.

In my case, after handling the security checks, I run the same check but my return value is outside of the conditional: I’m not just returning if the option matches what I’m looking for. This means that other options will still get saved normally.
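
A sketch of that corrected filter might look like this (the option prefix is again hypothetical):

add_filter( 'set-screen-option', 'demo_set_screen_option', 10, 3 );

function demo_set_screen_option( $status, $option, $value ) {
	if ( 0 === strpos( $option, 'wordpress_screen_options_demo_' ) ) {
		// One of ours: sanitise and save it.
		$status = (bool) $value;
	}

	// Returning outside the conditional means options belonging to other
	// screens or plugins pass through with their original value intact.
	return $status;
}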

Using Screen Options

That’s great, but how do you actually use the stuff you’re saving to screen options?

Once screen options are saved, they’re stored in user meta. This means that one person’s settings won’t match another person’s settings. This can be a good thing, because how a page displays in the admin can be customised per user. However, if screen options haven’t been saved, the user meta won’t exist, so you need to handle the default display as well.

In the WordPress Screen Options Framework repository I built, I created a plugin to demonstrate how the options work and how you might interface with those options. You can also see an example of what working with saved options versus the defaults might look like in the examples above.

The plugin adds an admin page for the screen options to be tied to, then simply loops through and displays the value of those options. If they haven’t been saved, the plugin displays a message explaining that it will use the default values, and displays those values; as soon as they are saved, it pulls from the user meta and displays the custom user values.

The admin page added by the Screen Options Framework demo plugin, displaying the value of each demo option

I was just using checkboxes to toggle things ON or OFF, but you could put any kind of input field in Screen Options that you want. You could even use a JavaScript library like Select2 or Chosen to render better select boxes (if you cared enough to load an extra JavaScript library just for the Screen Options tab). Once you have your screen options, it might be helpful to let your users and clients know it’s there, either in your documentation, or with an announcement pointing out the Screen Options tab.

A powerful new tool

The Screen Options Framework demo plugin in action: a user clicks the Screen Options tab, changes options, and sees those values reflected on the admin page.

One thing to note about the Screen Options tab is that it is only visible on larger screens (wider than 782 pixels). This means it will be hidden on older tablets and phones. This seems like an oversight to me, but it can be added back easily with custom admin styles.

Due to the limited documentation, it’s hard to really dig into the guts of the Screen Options API. But hopefully with the information and examples in this post and in the Screen Options Framework, you will take another look at Screen Options and use them to add customisation controls for your next project.

Hot Module Replacement for Gutenberg Blocks

In tandem with the community’s work on Gutenberg, we have explored how to make it a first-class citizen within our development process and developer tooling.

At Human Made we’re proud to have been at the forefront of our community’s transition towards tools like React, and we have previously released several projects that make it easier to use React within WordPress.

Many of those same techniques we use for React can benefit our Gutenberg development, and applying these tools to WordPress makes all the difference on a project when some developers have a stronger background in React than in WordPress, or vice versa. The quality of our developer experience becomes a key performance indicator, influencing our processes in a range of areas: from hiring all the way to delivering code that will stand the test of time in a rapidly changing industry.

In this post, I will share some of the techniques we are using in our projects to bring familiar React-ecosystem developer conveniences to our Gutenberg work.

Reduce Block Registration Boilerplate

The most straightforward way to register a custom editor block is to call the registerBlockType method within a JavaScript module defining our custom block:

// src/blocks/book.js
import { registerBlockType } from '@wordpress/blocks';

registerBlockType( 'my-plugin/book', { /* type definition */ } );

This approach is unambiguous, but by including the registerBlockType call inline within this module we duplicate the block registration boilerplate in every block file we create, and we lose the opportunity to test any of this module’s behavior independently of the core WordPress block registry. Because of this, we now follow a pattern where a block module exports the block’s name and settings object, and we then call registerBlockType from the parent level:

// src/blocks/book.js
export const name = 'my-plugin/book';
export const settings = { /* type definition */ };

// src/blocks.js
import { registerBlockType } from '@wordpress/blocks';
import * as bookBlock from './blocks/book';
import * as albumBlock from './blocks/album';

[
    bookBlock,
    albumBlock,
].forEach( ( { name, settings } ) => registerBlockType( name, settings ) );

Any custom logic or components in the block modules can now be imported and tested without side effects, and our block modules are free of boilerplate. However, we still need to manually enumerate each block we wish to include in our bundle—this is sometimes useful, but it can also lead to bugs or build errors if a developer forgets to include the block module.

If our project conforms to a consistent directory structure such as defining each block in an index file within the blocks folder (e.g. src/blocks/{ block name }/index.js), we can further simplify this process by using Webpack’s require.context to automatically detect and load each block. The require.context function lets us provide a directory to search and a regular expression pattern to govern which files are included, so we can auto-load all index.js files within the blocks/ directory:

// src/blocks.js
import { registerBlockType } from '@wordpress/blocks';

// Create a require context containing all matched block files.
const context = require.context(
    './blocks',  // Search within the src/blocks directory
    true,        // Search recursively
    /index\.js$/ // Match any file named index.js
);

context.keys().forEach( modulePath => {
    // context() is a function which includes and returns the module.
    const block = context( modulePath );
    registerBlockType( block.name, block.settings );
} );

This is less obvious than import { name, settings } from './blocks/block-name', but now we can add or remove blocks at will without writing a single import or registerBlockType statement. Cutting out that type of overhead makes a big difference on a large project.

Introduce Hot Module Replacement

Of all the benefits we gain by building applications with React and Webpack, Hot Module Replacement (HMR for short) is one of the most impressive. When properly configured HMR allows developers to make changes to a component file, save, and see that component update instantly on our webpage without disrupting any other application state, dramatically speeding up prototyping work and code iteration.

There’s a host of tools to support HMR in traditional React or Vue applications, but it’s not as obvious how to apply those concepts to Gutenberg blocks. What would it take to make changes to our components’ edit() and render() methods take effect in front of our eyes, without a page load or editor refresh?

The process we want to follow looks something like this:

  1. Detect any changes to a block’s JavaScript file(s).
  2. Unload the previous version of the block (Gutenberg displays an error if you try to register two blocks with the same name).
  3. Load the updated block module.
  4. Register the updated block.

HMR is normally configured by detecting the module.hot object (only present in development builds) and calling module.hot.accept() within the module to be swapped. This can add a lot of boilerplate to our block files, though, and as discussed above that’s a result we want to avoid; additionally, our use of require.context makes the use of module.hot.accept a little less intuitive.

From Webpack’s point of view, the tree of all auto-loaded blocks is considered to be one entity. If we want to intelligently reload only a part of that module tree, we need to apply our own logic to identify what in that tree has changed. This, therefore, is the process we need to follow to add HMR to our block files:

  1. When loading blocks with require.context, save a reference to the block module in a local cache object.
  2. Tell Webpack to module.hot.accept the entire require.context tree.
  3. When an update is detected within that tree, check each module against the cache to see which specific blocks have changed.
  4. Only unregister & re-register the individual blocks which have been updated.

Putting that together, we get this:

// registerBlockType and unregisterBlockType come from the @wordpress/blocks package.
import { registerBlockType, unregisterBlockType } from '@wordpress/blocks';

// Use a simple object as our module cache.
const cache = {};

// Define the logic for loading and swapping modules.
const loadModules = () => {
    // Create a new require.context on each HMR update; they cannot be re-used.
    const context = require.context( './blocks', true, /index\.js$/ );

    // Contextually load, reload or skip each block.
    context.keys().forEach( key => {
        const module = context( key );
        if ( module === cache[ key ] ) {
            // require.context helpfully returns the same object reference for
            // unchanged modules. Comparing modules with strict equality lets us
            // pick out only the edited blocks which require re-registration.
            return;
        }
        if ( cache[ key ] ) {
            // Module changed, and prior copy detected: unregister old module.
            const oldModule = cache[ key ];
            unregisterBlockType( oldModule.name );
        }
        // Register new module and update cache.
        registerBlockType( module.name, module.settings );
        cache[ key ] = module;
    } );

    // Return the context so we can access its ID later.
    return context;
};

// Trigger the initial module load and store a reference to the context
// so we can access the context's ID property.
const context = loadModules();

if ( module.hot ) {
    // In a hot-reloading environment, accept hot updates for that context ID.
    // Reload and compare the full tree on any child module change, but only
    // swap out changed modules (using the logic above).
    module.hot.accept( context.id, loadModules );
}

Seeing Your Changes

With the above code, our custom editor blocks will be swapped out in the background whenever we make changes. But those changes won’t be reflected in the editor until we select the updated blocks, and worse, if we update code for a block which is selected in the editor we may see that “The editor has encountered an unexpected error.” There are a couple final things we need to do to make this process seamless.

First, we will dispatch a core editor action to deselect the current editor block before we swap out the modules. We don’t want to lose state either, though, so we will save the clientId of the currently-selected editor block. We add this code immediately prior to the require.context call in the above example:

// select() and dispatch() are imported from the @wordpress/data package.
// Save the currently selected block's clientId.
const selectedBlockId = select( 'core/editor' ).getSelectedBlockClientId();
// Clear selection before swapping out updated modules.
dispatch( 'core/editor' ).clearSelectedBlock();

Then, immediately following the .forEach() loop we restore that selection once our updates are complete:

// Restore the initial block selection.
if ( selectedBlockId ) {
    dispatch( 'core/editor' ).selectBlock( selectedBlockId );
}

That takes care of the “unexpected error” described above.

Second, ideally we would be able to see our changes take effect in real time. However, Gutenberg has no inherent knowledge of our HMR updates, and re-registering a block is not sufficient to force a UI update. We can make all the changes we like, but we won’t see any updates in the browser until we next select the block and force a re-render.

To work around this we can loop through all the editor’s current blocks and select each one, prompting Gutenberg to re-render each block in turn. We add this logic right between the .forEach loop and the snippet above that restores the prior selection:

select( 'core/editor' ).getBlocks().forEach( ( { clientId } ) => {
    dispatch( 'core/editor' ).selectBlock( clientId );
} );

(This approach will also select blocks which have not changed, but it is the most reliable way we have found to guarantee updated blocks get re-rendered. The potential performance gains are insufficient to justify the added complexity of more intelligently targeting our block selection actions.)

Our full update flow now looks like this:

  1. When loading blocks with require.context, save a reference to the block module in a local cache object.
  2. Tell Webpack to module.hot.accept the entire require.context tree.
  3. When an update is detected within that tree,
    1. Save a reference to the currently-selected block within the editor.
    2. Deselect all blocks to avoid issues while swapping block code.
    3. Loop through each module and check against the cache to see which specific blocks have changed.
    4. Unregister & re-register the individual blocks which have been updated.
  4. After all blocks are updated,
    1. Loop through each block in the editor and trigger a select action on each to refresh rendered content.
    2. Reselect whichever block was selected at the start of the update.

This flow registers all blocks, intelligently re-registers and hot-swaps updated blocks, and ensures that any updates get reflected in the editor as they are made! We can now rapidly iterate on our block code and observe the changes right in the block editor, achieving all the benefits of hot module reloading in a WordPress-specific context.

Going Further

This same approach can be used to load and reload Gutenberg editor plugins, and we anticipate releasing an internal tool in the near future which abstracts this logic into a reusable module. Until then, a complete example of the technique described above can be found on GitHub.

We should note that all of the techniques discussed in this post depend upon the Webpack development server, which deserves a blog post in and of itself! The linked repository contains a sample bare-bones Webpack configuration adapted from the concepts behind react-wp-scripts, but stay tuned for further posts here on our developer tooling over the coming months.

In closing, I would like to thank my colleagues for their support and creativity over the past year of learning and growth with Gutenberg, especially Dzikri Aziz, Than Taintor, and Joe McGill, without whom this post would not exist. It is a true privilege to work with a team so dedicated to improving the state of our art.

Headless WordPress: The Future DXP

In 2016, we released Talking to 25% of the web: an in-depth report and analysis on the REST API, following the merge of the REST API into WordPress core. Today, we’re talking to over 30% of the web with our re-released and extensively researched white paper, Headless WordPress: The Future DXP.

Download Headless WordPress: The Future DXP

This release introduces our latest projects using the REST API, including case studies on the TechCrunch and Fairfax Media projects. You can also expect an update on the challenges the REST API is facing two years on, and the changes that have happened since the merge occurred, one of the most monumental being Gutenberg’s adoption of the REST API to communicate data between the server and a JavaScript-powered frontend.

With the REST API, WordPress stops being a web development tool used in isolation. It becomes one module available in a web developer’s toolkit: a building block to be used in many kinds of applications.

Human Made and the REST API

We’re deeply involved in the WordPress REST API project. We’ve hosted leading events to teach agencies, publishers, and engineers how to use and build with the REST API: A Day of REST in London and Boston, and a week-long developer bootcamp, A Week of REST, in 2017.

Our engineers have been at the forefront of the technology, and have led the teams working to build, improve, and advance the REST API.

  • Joe Hoyle, CTO — member of the REST API team
  • K. Adam White, Senior Engineer — member of the REST API team
  • Ryan McCue, Director of Engineering — co-lead of the REST API

Headless WordPress: The REST API in action

Our experience with the REST API has led us to work with some of the world’s best known publishers. The re-released Headless WordPress: The Future DXP includes one of the most valuable additions to the updated report: the REST API being used in action.

Here is a taster of the case studies you can expect to see in our re-released report.

TechCrunch and the REST API

TechCrunch adopted WordPress and the REST API to help them decentralise their publishing experience: keeping the editorial simplicity inherent in a WordPress backend, whilst using the REST API to create a user-friendly frontend.

Fairfax Media and the REST API

Fairfax Media wanted a technology partner to support them through their latest digital evolution: building a custom CMS based on headless WordPress, with a modern publishing workflow, and an audience facing React.js based frontend. The REST API was instrumental in this process; enabling us to update, streamline, and improve their editor screen, and helping us build a modern newsroom experience.

Big WP London, A WordPress Event for Enterprise: 13th September

Big WP London is a WordPress event for developers, publishers, product managers, and technical leads working on large-scale, high-traffic websites. We welcome speakers from all over the world to share challenges and success stories from some of the most exciting and complex WordPress projects.

Join us for conversations on WordPress in Enterprise

This month, on 13th September, WordPress.com VIP and Human Made will host the first Big WP London of 2018 in the beautiful News UK offices, with speakers from renowned agencies including Inpsyde, 10up, and Big Bite Creative, as well as a guest speaker from News UK’s technology team.

Subscribe to join us at Big WP London, 13th September

Big WP London Schedule

The event will take place between 17:30 and 21:00 on Thursday, 13th September. Doors open at 17:30 to allow plenty of time to get through security, and we encourage you to arrive early to ensure you don’t miss the start of the presentations.

Presentations typically last 20 minutes, with an open Q&A at the end of every session.

Giuseppe Mazzapica, Inpsyde – WordPress Multisite for large and high-traffic multilingual websites
Starting from a comparative overview of the various approaches to multilingual WordPress websites, the presentation will focus on the use of Multisite for this purpose, documenting Inpsyde’s experience with the multilingual-via-multisite approach and the benefits it brings for large, high-traffic websites in terms of performance, flexibility, and avoiding vendor lock-in.

Gabe Karp, 10up – Rebuilding NobelPrize.org
Built in 1994, NobelPrize.org was one of the very first websites powered by a MySQL database. More than 20 years later, VIP agency partner 10up led a project to modernise the site and bring a modern editorial workflow to the Nobel Digital team, whilst preserving the flexibility of that original build. 10up’s Gabe Karp will describe the creative and technical transformation behind the new site.

Joel Davis, News UK – How we won the World Cup
With the first major football tournament since the Sun website launched on WordPress, this was the ideal excuse to create a new, innovative destination page – from proof of concept to delivery in a few months.

Jason Agnew, Big Bite Creative – Using Gutenberg in Production
Jason is going to discuss his recent experiences of using Gutenberg on two enterprise projects – one for the largest bank in Europe and the other for the world-impacting Amnesty International.

The Gutenberg journey hasn’t been without its issues, so Jason is going to explore the impact of the new editor, including how Big Bite are using Gutenberg, how the team tackled training, and how they price new projects which require blocks.

Put it in your calendar

RSVP to the event on meetup.com and please make sure you register with your full name. We use this information for the security guest list, and you must be registered to attend.

We’ll also be serving drinks and small bites! And we generally migrate to the nearest pub once the event is over. 

Word on the Future: the inside track on enterprise WordPress

We’re delighted to launch the first edition of ‘Word on the Future’, an industry newsletter for Enterprise WordPress.


A newsletter to inform decisions 

Covering news relevant to you – from machine learning to personalisation and AI – as well as opinions and insights from some of the most respected technologists across the ecosystem.

Plus, get the latest white papers, documentation, and brochures published by Human Made.

Don’t get lost in the noise.

Subscribe to Word on the Future


Members-only Professional Discussion

Did you have a question about what you read? Is there a topic you’re burning to discuss? If so, join us in our members-only LinkedIn group, ‘Word on the Future’, for professional discussion and opinions on Enterprise WordPress topics and news.


Restsplain: Document your WordPress REST API

At Human Made, we often work on WordPress REST API-based projects internally and for clients.

One of the difficulties of any project is having up-to-date documentation available for engineers on the project, especially when onboarding new people. With traditional PHP code, this can be solved by writing inline documentation for our internal APIs. However, when using the REST API, our API is external, and users may not have access to the codebase. This requires us to document in a completely different way.

Today we’re introducing Restsplain, an automated documentation site for your WordPress REST API. Restsplain is a WordPress plugin you can install on any site and use immediately to browse your API. Customise it to your heart’s content and build native API documentation for your site.

Documentation in the Age of the API


We’re big fans of documentation. We create all sorts of documentation, from inline documentation in PHP, to project readmes with the key information everyone needs, to internal project wikis.

Documentation can be problematic for one key reason: it can get out-of-date. This has been a problem since the early days of software, and continues to this day. To combat this problem, many developers have switched to using inline documentation, where the documentation lives with the code itself. With strong review policies, this means that docs are updated in lock-step with the code itself.

Inline documentation is generally consumed by static analysis tools, such as documentation generators or IDEs. However, it can’t be easily used without access to the codebase, and can’t be used for more dynamic pieces. In particular, there’s no way of statically analysing the WordPress REST API, as plugins and themes can add additional data to the output. Each site has a unique set of plugins, giving each site a unique REST API that a single documentation site (like WordPress.org) can’t possibly cover.

Thankfully, this isn’t an unsolvable problem. The WordPress REST API is built around self-documentation through object schemas. This documentation comes directly from the codebase, and is functional: the docs are the code. This is all available in a machine-readable standard format called JSON Schema, and can be retrieved for any API endpoint with an OPTIONS request.
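
As a quick illustration, you can inspect this self-documentation for any endpoint from the browser console; the site URL below is a placeholder:

// Fetch the self-documenting JSON Schema for the posts endpoint.
fetch( 'https://example.com/wp-json/wp/v2/posts', { method: 'OPTIONS' } )
    .then( ( response ) => response.json() )
    .then( ( data ) => console.log( data.schema ) );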

Say Hello to Restsplain

Restsplain takes the WordPress REST API’s documentation and generates a beautiful interface, integrating into your site. This allows your site to have always-current documentation for your unique API, with styling to match your site.


One other advantage of reading documentation from the API directly, rather than via static analysis, is that we can also make live requests. That means we can have interactive documentation, allowing you to read about an endpoint and immediately try it out.


Restsplain is not just built for automatic documentation. Every good documentation site includes both automated and manual content. Thankfully, we already have a great CMS at our fingertips for working on manual content. Restsplain adds an API documentation content type to your site, allowing you to write and edit manual documentation using a familiar workflow.

Using Restsplain


Using Restsplain is as easy as installing the plugin. Simply download the plugin from GitHub, upload it to your site, and activate it. You’ll now have your own unique WordPress REST API documentation at /api-docs/.

Restsplain integrates directly with your theme (via wp_head() and wp_footer()), and includes all the CSS you normally enqueue. This allows you to write custom styles to suit your site, and tweak Restsplain’s design to be exactly what you want.

You can also configure the logo, documentation URL, code highlighting, and more with filters.

Challenges


Restsplain is built as a React single-page app, integrating into the frontend of your site. Using React allows us to provide a fast and interactive experience, giving users an API reference similar to an offline app like Dash. The interactive requests allow for live example data straight from the site.

Using React in a distributable WordPress plugin is still in its early days, and some workflows leave much to be desired. …

(We’re working on internal tooling and documentation to improve React-based plugins and themes, and hope to release them soon!)

Roadmap


We’d love everyone to install Restsplain and give it a shot. Download the plugin and try it today!


Interested in working on amazing tools like Restsplain?

We are always hiring, and we’d love to hear from you. Our flexible working policy means you can fit your work around your other commitments, whether that’s your family life, your other life as a musician, or your love of travel. All employees enjoy benefits such as a 28 day minimum holiday policy, regular new equipment, and our annual company retreat. You’ll get to work with and learn from a supportive team of colleagues.

We encourage people of all backgrounds and locations to apply, and are committed to creating a diverse environment that every team member feels proud to be a part of.

Human Made team retreat

Adopting Open Principles for Planet 4: A Greenpeace Story

Greenpeace wanted to consolidate their technology platforms, and move towards open and collaborative working practices. They chose WordPress to help them centralise their data and systems, and adopt better open principles in their workflows and processes.


Project Planet 4 

In July 2016 Greenpeace launched a bold new project to reinvigorate their global web presence, and create a better platform for engagement that works to empower audiences. Planet 4, the codename for the complete redesign, aims to rebuild greenpeace.org for the modern web, and transform the way it exists in digital channels. The entire project is being run with open principles, and the concept, timelines, goals, and challenges are being made accessible and transparent to everyone through their Medium publication.

As advocates of open technology and principles, Greenpeace invited us to write about our experience working with them on the project and we were excited to share some of our experience, and contribute towards their story.

Our work with Planet 4 spanned both strategy and technology consultancy. One of Greenpeace’s major goals through our involvement was to ensure their in-house technical team could benefit from working closely with WordPress specialists… Ensuring the team had a thorough working knowledge of WordPress was paramount to the success of the project long-term, and one of the most valuable aspects we were able to contribute towards Planet 4.

John Bevan, Client Services Director 
Human Made

Read the full post here

Automated Accessibility Testing During Development

Authored by Rian Rietveld

tl;dr

For code base testing, there are some good tools for JavaScript and React. But for a good overview of the errors, we must test against a generated DOM; aXe and pa11y perform well here.

Currently, manual testing in the browser is still necessary, particularly for keyboard navigation and screen reader feedback on dynamic changes. And unfortunately, there is no single big button to catch all errors in one report.

Scope

This research is about integrating automated accessibility testing during development using npm modules, command line, and other tools.

Out of scope

Testing in the browser is already pretty well covered by browser addons like aXe, HTML_CodeSniffer, and the Accessibility Inspector in the Firefox Developer Tools and Chrome Dev Tools.

aXe adds a tab to the inspector, and HTML CodeSniffer adds a bookmarklet that displays a popup with the errors and warnings.

The W3C has online developer tools like the Markup Validation Service and the CSS Validation Service to validate the HTML and CSS of the frontend of your work.

Expectations

As I mentioned previously, there is no single script that catches all accessibility errors in a workable report for a whole project in one go. That would be nice!

Spoiler alert: that’s currently impossible. But we can do a lot of automated testing already.

The Government Digital Service did an Accessibility tool audit, which includes information on what can be tested and the performance of the tools available for testing. Not all of the listed tools are open source or browser-based.

Tools and workflow

The recommended workflow for accessibility testing is currently like this:

  • check the code base
  • check the DOM
  • check the keyboard navigation.

The first can be automated, the second only partly, and the third still has to be done manually during development.

The difference between testing for PHP and JavaScript errors is that accessibility needs to be tested on a generated DOM, including the different “responsive” views. This applies to heading structure, colour contrast between text and background, content generated by JavaScript, and screen reader feedback on dynamic changes.

Tools to test the code base

Several JavaScript checking modules can be included in your test routine to check the code base.

Here are a couple I recommend:

  • eslint-plugin-jsx-a11y by Ethan Cohen. Static AST checker for a11y rules on JSX elements.
  • react-a11y by ReactJs. Warns about potential accessibility issues with your React elements.
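
As an example of wiring in the first of these, enabling the plugin’s recommended ruleset is a small addition to your ESLint configuration:

// .eslintrc.js
module.exports = {
    plugins: [ 'jsx-a11y' ],
    extends: [ 'plugin:jsx-a11y/recommended' ],
};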

For a project within WordPress, it’s impossible to check the HTML for semantic errors from the code base alone, because most of the HTML is generated by the PHP at runtime. This makes it hard to get an overview of what is eventually generated and how this relates to other functionality.

Tools to test the DOM

At the moment there are two prominent CLI modules that can create a DOM from a URL and perform accessibility tests on it.

pa11y runs HTML CodeSniffer from the command line for programmatic accessibility reporting.

Instructions for use

Install pa11y for CLI:

npm install -g pa11y

Then run pa11y in the command line:

pa11y your-url

aXe-cli runs axe-core from the command line. By default it runs headless Chrome to generate an instance of the DOM.

Install axe-core for CLI:

npm install axe-cli -g
npm install chromedriver -g

Then run axe in the command line:

axe your-url

Both modules are highly configurable to meet your test requirements. Personally I think that aXe generates better error warnings and pa11y is easier to configure.
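
For example, recent versions of pa11y also expose a JavaScript API whose options include the viewport size, which helps cover the “responsive” views mentioned earlier. A minimal sketch (the URL is a placeholder):

const pa11y = require( 'pa11y' );

// Run the checks against a mobile-sized viewport.
pa11y( 'http://your-site.local/', {
    viewport: { width: 375, height: 667 },
} ).then( ( results ) => console.log( results.issues ) );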

On the pa11y GitHub repository, Rowan Manning has proposed researching the possibility of replacing HTML CodeSniffer with aXe-core, so that’s good news.

The typical way to work with CLI tools like this is to run them for one URL at a time. That way you get a readable report of all the errors on that page.

More than one URL at a time?

You can run axe-cli on more than one URL in a single command, but it isn’t built to run against a large number of URLs or a complete site; axe-cli is not a crawler. Deque Labs recommends using axe-webdriverjs, a chainable aXe API for Selenium’s WebDriverJS, to test a large number of URLs.
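
For reference, a basic axe-webdriverjs run looks roughly like this (adapted from the project’s documentation; the URL is a placeholder):

const AxeBuilder = require( 'axe-webdriverjs' );
const WebDriver = require( 'selenium-webdriver' );

const driver = new WebDriver.Builder()
    .forBrowser( 'chrome' )
    .build();

driver.get( 'http://your-site.local/sample-page/' ).then( () => {
    // Analyse the loaded page and log any violations found.
    AxeBuilder( driver ).analyze( ( results ) => {
        console.log( results.violations );
    } );
} );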

But do you want to test every URL in a project during development? For one, you will get many duplicate errors: if there is an error in the header, for example, it will be reported for every page. Secondly, running a test on all the pages in your project will take a long time to generate a report. And if you work with a team, all the errors in your team members’ work will also be reported.

If you include every possible URL in your project, the result will be slow, unreadable reports. So adding this to your Grunt/Gulp routine may not be advisable.

As an alternative, you may want to generate a list of representative templates and define some mapping, instead of running everything at once. You can start by testing a page, a post, a custom post type page, an archive, a contact page, or a page with a custom template. That way you minimise duplication and report-generation time, and you’re likely to receive a far more usable report.

When you use Pattern Lab, it’s easy to include the pages containing the different components.

For example, like this in your Grunt file:

shell: {
    axe: { command: () => 'axe http://your-site.local/patternlab/public/patterns/01-molecules/index.html, http://your-site.local/patternlab/public/patterns/02-organisms/index.html -b c' },
    phpcs: { ... },
    phpunit: { ... }
},

In my experience, running axe in the command line before a commit is the easiest, fastest, and most accurate method of achieving this.

WordPress trunk

What about WordPress trunk?

Can we include automated a11y testing alongside the code standard checks?

PHPCS looks at the raw code (it doesn’t parse it as PHP, only as individual tokens), so it isn’t generally suitable for creating sniffs for accessibility tests.

Automated accessibility testing needs to be on a fully working server-driven site (even if it’s local), and not just a collection of files.

A bash script could be written that does the following:

  • Pulls down WordPress, or uses an existing local install,
  • Sets up a database with some demo content (much like the Unit Tests structure does),
  • Optionally takes a diff and applies it to the WordPress codebase,
  • Runs aXe over the resulting site.

This bash script could then be hooked into a SVN/Git pre-commit hook, so that a failure in aXe would halt the commit.

Halting a commit on an aXe failure is kind of tricky. aXe gives a return status, but does not distinguish between errors and warnings. Also, aXe (like any other a11y check tool) gives false positives. Manually checking the results remains a necessary part of the process.

Note: keyboard testing and screen reader feedback on dynamic changes still needs to be performed manually.

Kudos

Many thanks to Juliette Reinders Folmer, Gary Jones, Joe McGill, Alain Schlesser, Anton Timmermans, and Sam Miller for generously sharing their time and expertise.

Rebuilding the WordPress Edit Screen

Enterprise WordPress projects differ substantially from a typical WordPress project, which might simply be a custom theme with some posts, pages, and a widgetised sidebar or two. Human Made’s recent partnership with Fairfax Media was no exception: there were no pages, no widgets, and there certainly wasn’t a theme. The built-in media library was completely removed, the ability to modify terms was disabled, and many of WordPress’s default roles were replaced or disabled.

What did remain were posts (renamed articles in the UI) and a heavily modified edit screen. Working with Fairfax’s internal CMS development team, we essentially rebuilt the WordPress edit screen from the ground up while maintaining many of the technical features of WordPress.

The rebuilt editor screen for Fairfax

Why rebuild the edit screen?

The need to improve the WordPress default editing experience is perhaps best illustrated by the current major project to do just that for WordPress straight out of the box, codenamed Gutenberg.

While a traditional WordPress project shares some requirements with those of enterprise clients, many others are quite different and require rethinking the editing experience and related workflow. This was particularly the case for Fairfax, a national publisher moving from a proprietary to an open source CMS, managing three large mastheads through a single instance of WordPress.

Expanded editor article options, including masthead and multi-author selection

Many of the changes we made to the edit screen modified default features to make them more efficient and usable at scale. Major interface changes were made too: a tabbed interface was introduced to separate the metadata required for commissioning an article – such as the deadline, the writing team, and the brief – from the fields required for authoring an article – such as the introduction, the content, and the byline.

Some of the default features replaced with custom-built interfaces were the publishing and taxonomy metaboxes, and the post slug interface. The author field was also expanded to allow multiple authors and to provide a way to specify collaborators and editors of the story.

Many of the custom fields were added to the screen using CMB2, our preferred library for adding data to the edit screen. CMB2 was also used to modify the interface with custom display and Ajax callbacks.

Technical details

The Fairfax CMS includes a custom hierarchical taxonomy containing around 22,000 terms. By default, WordPress will attempt to create a metabox containing a checkbox list of each of these terms:

> document.querySelectorAll( '#custom_taxochecklist li' ).length
22092

The default metabox in WordPress is not designed for this scale (nor should it be, it’s an extreme edge case): aside from becoming impractical for an author to use, it adds inefficient database calls to the page load (in this case returning 80,000 rows across two queries) and adds 3.5MB to the HTML weight.

To work around this, we replaced the metabox with a custom-built select2 metabox for CMB2. This uses a custom Ajax call to query for terms, minimising the impact to the page. We further tweaked the query to allow users to search against parent terms (searching location to return all locations, company for all companies) and we paginated the results to avoid slowing down the request with large result sets.

Custom taxonomy and author selectors, using select2 with our custom APIs
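
For illustration, the client-side half of such a selector looks roughly like the select2 configuration below. This is a sketch only: the element ID, Ajax action, and response shape are hypothetical.

jQuery( '#ffx-taxonomy-select' ).select2( {
    ajax: {
        url: window.ajaxurl,
        dataType: 'json',
        delay: 250, // Debounce keystrokes before querying.
        data: ( params ) => ( {
            action: 'ffx_search_terms',
            search: params.term,
            page: params.page || 1,
        } ),
        processResults: ( data ) => ( {
            results: data.terms,
            pagination: { more: data.more }, // Paginated result sets.
        } ),
    },
    minimumInputLength: 2, // Avoid querying on every single character.
} );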

Custom publishing and workflow

We found it easier to replace the existing publishing metabox in WordPress outright than to attempt to modify it via hooks and actions. The metabox was replaced with what we dubbed the Publish Box of the Future (PBotF). Despite the grandiose name, this began as a simple prototype that evolved over time into a real replacement for the existing metabox. This allowed us to develop the partially-functional prototype while keeping the legacy metabox around until it reached feature parity.

Using a custom metabox allowed us to implement custom workflow requirements, such as legal status, scheduling the time for unpublishing an article, and allowing for drafts of after-publication edits. It also allowed us to save the post and associated data via the WordPress REST API to avoid full page refreshes.

The PBotF was built using existing libraries within WordPress, including Backbone, jQuery and the REST API Backbone JavaScript client, allowing us to take advantage of the existing functionality without having to build it all from scratch.
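
To give a flavour of this, saving a post through the Backbone REST API client looks roughly like the sketch below (simplified; the post ID and ffx_ field are illustrative):

// Load an article, update it, and save it back over the REST API,
// all without a full page refresh.
const post = new wp.api.models.Post( { id: 123 } );

post.fetch().done( () => {
    post.set( 'title', 'Updated headline' );
    post.set( 'ffx_custom_field', true );
    post.save().done( () => console.log( 'Saved via the REST API.' ) );
} );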

Modelling data with the WordPress REST API

The ability to save via the REST API required adding custom fields and taxonomies to the built-in post endpoint. In a CMS of the scale of Fairfax’s, these quickly add up: we needed to add 61 custom fields to the endpoint, plus several taxonomies. In all instances, these were prefixed with ffx_ to ensure we didn’t clash with another plugin’s data, and to reduce the chance of a WordPress update introducing a field of the same name.

register_rest_field(
    'post',
    'ffx_custom_field',
    [
        'get_callback'    => __NAMESPACE__ . '\\get_custom_field',
        'update_callback' => __NAMESPACE__ . '\\set_custom_field',
        'schema'          => [
        'description' => __( 'Fairfax custom field', 'ffx' ),
            'type'        => 'boolean',
        ],
    ]
);

By registering the fields with the REST API and including a schema, we were able to take advantage of several WordPress core features. By specifying the field’s type (boolean in the example above), the REST API automatically performs validation and sanitization of the data when saving.

The REST API Backbone client uses this same schema to provide the model when saving to the endpoint, giving developers a significant head start there too (thus our decision to stick with the arguably dated libraries).

While much of the PBotF’s state was stored in the custom Backbone model, we needed to access many of the regular post fields on the page. Rather than rebuild the entire edit screen (a task the Gutenberg team have taken on with React), we decided to simply break out of the box using jQuery to access the fields.

const getFormData = function () {
    const postData = {
        title:               $( '#title' ).val() || '',
        content:             $( '#content' ).val() || '',
        excerpt:             $( '#excerpt' ).val() || '',
        ffx_custom_field:    $( '#cmb2-id--ffx-custom-field input' ).is( ':checked' ),
        ffx_custom_field_ii: $( '#_ffx_custom_field_ii' ).val() || '',
        // snipped
    };

    return postData;
};

Saving with revisions

As part of adding custom properties to the edit screen, we needed revisions to include a bunch of additional content. For this we used Adam Silverstein’s post meta revisions plugin.

Once we switched to the Publish Box of the Future, we started experiencing an off-by-one error when saving revisions: updates to meta were being stored against the next revision. Investigation revealed that this was due to internal behaviour in the WordPress REST API.

When saving with the standard publish box, wp_insert_post is used to update the post’s content, the taxonomies and meta. At the end of wp_insert_post a new revision is generated. However, the REST API only uses wp_insert_post to save the post content and other data stored in the post table. Taxonomy and meta are then updated separately, changing the order in which things happen. This means that by default in the REST API, revisions are created before the post data has finished updating; this becomes a problem once revisions include meta.

To solve this, on REST API requests we remove the revision callback from running at the end of wp_insert_post, and instead generate the revision at the conclusion of the REST API request:

/** Move the 'wp_save_post_revision' callback for REST requests. */
function move_revision_callback( $result, $unused, $request ) {
    /* SNIP: Check for post update request */

    // Move the default `wp_save_post_revision` callback.
    remove_action( 'post_updated', 'wp_save_post_revision', 10 );
    add_action( 'rest_request_after_callbacks', 'wp_save_post_revision' );

    return $result; // Support other filters.
}

Tabbed layout of the edit screen

While replacing the publishing box allowed for technical control of the publishing process, we also needed to modify the edit screen to allow for the human side of the process. Large publishers with multiple mastheads have naturally complex processes, with many people involved in publishing a single article. Using the magical power of JavaScript, we switched the edit screen to a tabbed layout, separating the editorial interface from the reporting experience.

Since the post screen doesn’t natively support a tabbed layout, we had to create our own through some creative use of the WordPress hooks API. We used the generic all_admin_notices action to add the tab markup in the notifications area, scoped to the edit screen using the load-post.php and load-post-new.php hooks. For switching between tabs, we applied a helper class to the body element indicating which tab we were on. When registering the CMB2 fields, we included helper classes indicating which tab (or tabs) the field ought to be displayed on:

$field = [
	'id'      => '_ffx_custom_field',
	'classes' => [
		'ffx-show-on-listing',
	],
];

This made managing the display a case of some simple CSS:

.ffx-show-on-listing,
.ffx-show-on-article {
    display: none;
}

body.ffx-tab-listing .ffx-show-on-listing,
body.ffx-tab-article .ffx-show-on-article {
    display: inherit;
}
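
For completeness, switching tabs is then just a matter of swapping that body class when a tab is clicked. A minimal sketch (the tab markup and data attribute are hypothetical):

// Toggle the body class so the CSS above reveals the right fields.
document.querySelectorAll( '.ffx-tab' ).forEach( ( tab ) => {
    tab.addEventListener( 'click', () => {
        document.body.classList.remove( 'ffx-tab-listing', 'ffx-tab-article' );
        document.body.classList.add( 'ffx-tab-' + tab.dataset.tab );
    } );
} );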

While the tabs were one of the simpler challenges technically (barring a few race conditions in JavaScript), they ended up providing the most value for the user through the clarified interface.

Conclusion

Building a custom editing interface allowed us to take WordPress well beyond its blogging and small business stereotype, with a brand new, modern experience via the WordPress REST API. This allowed us to scale to tens of thousands of terms and hundreds of thousands of post objects while keeping a usable editorial workflow. Combined with other custom features – including enhanced media management, access to wire services, and Slack integration – we were able to produce a true enterprise product.

While the quality of the code undoubtedly contributed to creating an enterprise product, the process around writing the code was of much greater significance. Code review and a strict adherence to scrum (including all the meetings developers love to hate) allowed us to scale the process and work effectively, taking open source well beyond the stereotype, with more than 11 thousand commits and over 30 committers.

We worked with Fairfax for a little under 18 months, with our friends at XWP joining us for a couple of months to work on additional features. Most of our time was spent on the edit screen and backend, building out the advanced features using our enterprise experience and open source tooling (including Cavalcade). Working together with Fairfax’s internal development team, we were able to build a functional, scalable CMS that their team continues to iterate on, and that will continue to be flexible and powerful well into the future.


Transforming WordPress for the modern newsroom


Download the full project white paper

Our React Tools for WordPress

We build internal projects to test out the latest and greatest technologies and tools, then use what we’ve learnt to build out new and better experiences for our clients.

We’ve been big proponents of React since we started experimenting with it internally a few years ago. Many of our internal tools are built with React, including our server management interface, cross-company communication blogs, and other tiny, single-purpose utilities such as our tool to help employees book time off. We’ve also been using it increasingly on client projects, both to build internal tools to help editorial teams and to go completely headless.

From working with React internally and on client projects, we’ve been able to see the common pain points when developing. We found ourselves often writing the same boilerplate and connectors repeatedly, as well as constantly struggling with tooling. We’ve been working on solving these problems internally, and we wanted to share our solutions to these common problems.

We’re officially releasing three tools today: react-wp-scripts for development tooling, Repress for smart Redux stores, and react-oembed-container to simplify oEmbed rendering. Each of these tools is built for a need we’ve had, and they can be used together or as standalone projects.

Easy Development with react-wp-scripts

If you’ve worked with React for a while, you’ll remember how painful setting up a project used to be. Thankfully, create-react-app (CRA) came along and revolutionised this by making it as easy as running a single command. We wanted to do the same for WordPress-based projects, and make it super easy to build amazing apps.

react-wp-scripts is our tool for handling this. It extends create-react-app (and the underlying tool, react-scripts) with WordPress-specific helpers. This allows all the features of CRA to be used inside a WordPress project, including live reloading (and hot-loading), error reporting, and easy switching between development and production builds.

Installation

Starting a new project with react-wp-scripts is super easy. Simply run:

npx create-react-app --scripts-version react-wp-scripts your-directory/

(Don’t have npx? Upgrade your copy of Node, or follow the manual installation instructions instead.)

You can also easily add react-wp-scripts to your existing project; simply follow the installation instructions in the project’s documentation.

You’ll also need to add the PHP to load your scripts. The bootstrap command will copy the loader file to your project for you, so all you have to do is hook it into WordPress:

require __DIR__ . '/react-wp-scripts.php';

add_action( 'wp_enqueue_scripts', function () {
	// In a theme, pass in the stylesheet directory:
	\ReactWPScripts\enqueue_assets( get_stylesheet_directory() );

	// In a plugin, pass the plugin dir path:
	\ReactWPScripts\enqueue_assets( plugin_dir_path( __FILE__ ) );
} );

Once your project is set up with react-wp-scripts, you can simply run npm start to use the development, live-/hot-reloading React app. When you’re ready to build your project, use npm run build just like a regular CRA project; the PHP loader will automatically use the built version whenever you aren’t running the development builder.

Help Us Out

react-wp-scripts is still in its early days, although we’re beginning to use it on all our new projects. We want your help to make it easier to use and to reach full feature parity with regular CRA apps. There are still a few features that don’t work yet (such as jumping directly to your editor) that we’d love to see fixed.

We’re also going to add standalone commands so that starting a new project will be as easy as:

npx create-react-wp-plugin my-plugin
npx create-react-wp-theme my-theme

We need your help testing and developing this to make it into the best tool for the modern WordPress ecosystem. Help us out!

Power-Up your Redux Store with Repress

When building out React apps, we found ourselves repeatedly writing the same boilerplate code to get data from WordPress. This involved sending off requests to the REST API, storing that data somewhere, and pulling it back out to render. Doing this repeatedly was a pain, and making sure we didn’t miss anything was tough.

To make this easier, we created Repress, a Redux library for the WordPress REST API. Repress natively understands the WordPress REST API, and makes caching super simple. Unlike many other Redux libraries for WordPress, Repress can be dropped into an existing store, allowing you to progressively adopt it. You can even combine Repress with other methods of retrieving API data in the same store if you’d like.

Repress is built around two fundamental pieces: API resources (like a post), and queries (called “archives” in Repress). Internally, Repress shares these resources between queries, allowing efficient reuse of resources, just like WordPress’ object cache.

Installation

Repress is published as an npm package, so you can add it just like any other package:

npm install --save @humanmade/repress

(You’ll need to already have a Redux store set up, as well as Redux Thunk and React.)

Once added, you need to establish your type instances. Usually, you’ll have a types.js file that handles this in a central place:

// types.js
import { handler } from '@humanmade/repress';

export const posts = new handler( {
	type: 'posts',
	url:   window.wpApiSettings.url + 'wp/v2/posts',
	nonce: window.wpApiSettings.nonce,
} );

You then just need to connect this to your reducer wherever you’d like it to live in your store:

// reducer.js
import { combineReducers } from 'redux';

import { posts } from './types';

export default combineReducers( {
	// Any regular reducers you have go in here just like normal.

	// Then, create a "substate" for your handlers.
	posts: posts.reducer,
} );

Using the data is super simple, as Repress provides higher-order components (HOC), including one called withSingle. This works just like Redux’s connect HOC, and provides props to your component:

// SinglePost.js
import { withSingle } from '@humanmade/repress';
import React from 'react';

import { posts } from './types';

const SinglePost = props => <article>
	<h1>{ props.post.title.rendered }</h1>
	<div
		dangerouslySetInnerHTML={ { __html: props.post.content.rendered } }
	/>
</article>;

export default withSingle(
	// Pass the handler:
	posts,

	// And a getSubstate() function so Repress can find the data:
	state => state.posts,

	// And a mapPropsToId() function so Repress knows what post to get:
	props => props.id
)( SinglePost );

Archives

In order to facilitate predictability and cacheability, Repress introduces a concept called “archives”. Archives act as a filtered view into the list of resources, which allows you to easily reuse resources; for example, if you go from the homepage to a post, this can reuse the existing data for instant rendering.

Archives have to be registered with an ID before use, which allows Repress to cache the result and simplify pagination. They can either be static or dynamic:

posts.registerArchive( 'stickied', { sticky: '1' } );
posts.registerArchive( 'today', () => {
	return {
		after:  moment().startOf( 'day' ).toISOString(),
		before: moment().endOf( 'day' ).toISOString(),
	}
} );

Using archives is super simple, as Repress provides another HOC called withArchive, which works just like withSingle:

// TodayArchive.js
import { withArchive } from '@humanmade/repress';
import React from 'react';

import { posts } from './types';

const TodayArchive = props => <ul>
	{ props.posts.map( post =>
		<li key={ post.id }>
			{ post.title.rendered }
		</li>
	) }
</ul>;

export default withArchive(
	// Handler object:
	posts,

	// getSubstate() - returns the substate
	state => state.posts,

	// Archive ID
	'today'
)( TodayArchive );

withArchive also provides helper props for pagination, loading, and more. This allows you to forget about the process and just worry about making fantastic apps.

Try it Out

We’re already using Repress in our internal projects, but we haven’t begun using it in production client projects just yet. We want your help to make Repress into a solid library anyone can use. Try it out on your projects today, read the documentation, and let us know what we could improve!

Simple Embedding with react-oembed-container

The web is all about interactive, multimedia-rich experiences. WordPress includes powerful tools to enable using media from across the web using the open oEmbed protocol. This allows writers and editors to add a URL to their post and have the HTML generated by WordPress.

While most embeds generate pretty vanilla HTML, some of the more complex embeds, including Twitter’s, require JavaScript for full interactivity.

For React-powered frontends, you typically receive the post content as a single HTML string. This requires you to use one of React’s escape hatches, dangerouslySetInnerHTML. This is a safe operation (as WordPress has already sanitised the content), but it suffers from the limitations of innerHTML; most importantly, <script> elements added this way are never executed. There are other solutions to this problem, including preparsing the HTML into structured data on the server (Scott Taylor of the New York Times has a great post on this), but these don’t automatically solve the embed problem either.
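
To illustrate the usual workaround, scripts inserted via innerHTML have to be recreated before the browser will run them. The sketch below shows the general technique, not react-oembed-container’s actual implementation; the container selector is hypothetical:

// Recreate each <script> node so the browser actually executes it.
const container = document.querySelector( '.post-content' );
container.querySelectorAll( 'script' ).forEach( ( oldScript ) => {
    const script = document.createElement( 'script' );
    // Copy attributes (src, async, etc.) onto the fresh element.
    Array.from( oldScript.attributes ).forEach( ( { name, value } ) => {
        script.setAttribute( name, value );
    } );
    script.text = oldScript.text;
    oldScript.replaceWith( script );
} );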

To solve these problems, we created react-oembed-container, a component which handles all of the complexity for you. This component is directly derived from what we’ve learnt building out React-powered sites, and is battle-tested. Plus, it works on any HTML and doesn’t require WordPress.

Install it right now from npm:

npm install react-oembed-container

Using the container is simple: wrap your normal rendering code with it:

import EmbedContainer from 'react-oembed-container';

const MyPost = ( { post } ) => {
    return <EmbedContainer
        markup={ post.content.rendered }
    >

        {/* for example, */}
        <article id={`post-${post.id}`}>
            <h2>{ post.title.rendered }</h2>
            <div dangerouslySetInnerHTML={{ __html: post.content.rendered }} />
        </article>

    </EmbedContainer>;
}

react-oembed-container supports all oEmbed scripts, but contains special support for Facebook, Instagram, and Twitter embeds to improve the embedding process. If you hit any other special cases, we’d be happy to add support for those too.

While react-oembed-container is production-ready and in use on high-traffic sites already, we’d love feedback and contribution. If you can think of improvements, let us know!

And Even More!

Apart from these projects we’re announcing today, we have a few projects we’ve already released, plus more in the pipeline. Look out for more information on these in the coming weeks and months!

Gutenberg Blocks with hm-gutenberg-tools

As Matt announced recently, we switched humanmade.com to use Gutenberg. Along the way, we created some reusable tools and blocks, which we packaged up into hm-gutenberg-tools. This includes a Post Select button, more sidebar controls, and a handy wrapper for editable HTML.

Read more about our use of Gutenberg on the original post.

Restsplain: Document Your WordPress REST API

Building frontend-heavy applications with React often involves a lot of work with the WordPress REST API. While the REST API is designed to be self-documenting, that documentation is only available in a machine-readable format, which isn’t the greatest for humans like me and you. The official human-readable documentation is great, but doesn’t cover custom APIs that we’ve built ourselves.

To better visualise and understand the REST API, we built Restsplain, a documentation interface for the REST API. We’ll be posting more about Restsplain in the upcoming weeks, but if you can’t wait, you can get it from GitHub right now.

Coming Soon: Server-Side React Toolkit

React is a great tool for the frontend, but at the end of the day, WordPress is still fundamentally a server-based project. We’re working on tools to make server-side rendering easier, as well as helpers to make preloading data into React easier. These are still a little too experimental to officially release, but you might find hints of them on our GitHub profile.

We Love Feedback!

We’d love to hear if you use any of these tools in your projects. While we’ve built them to satisfy our own needs, we want them to be ecosystem-wide tools and libraries that can help everyone move faster and create amazing things. Feel free to leave feedback in GitHub issues, or tweet us with your thoughts!

Join us and friends for Guten Tag: ‘Think Outside the Block’ March 5th

A few weeks ago, Jenny & Matt arranged an informal conversation and demo on Matt’s experience implementing Gutenberg on the humanmade.com website. (You can watch the conversation and read a post about it). This led to a series of other discussions both internally at Human Made, and externally with friends, about what could be done to aid and facilitate this discovery as a wider community.


After some deliberation and brainstorming, a community initiative was launched to organise an online event series exploring Project Gutenberg: now known as ‘Guten Tag‘.

You can see the beginning of this plan here, as well as ideas of sessions that would be interesting to cover in future.

Think outside the block

The first event in the series, ‘Think Outside the Block‘, will air on March 5th from 9:30 UTC, followed by three further sessions throughout the day. You can see the full schedule here, and to join us for the event, just sign in with your email address to be taken directly to the session.

About Guten Tag

Guten Tag is an open source community initiative. We aim to use the series to air discussions on various aspects of Project Gutenberg, and how Gutenberg is affecting different parts of the WordPress ecosystem.

The event is curated by the community, for the community, and we strongly encourage anyone to get involved and create the next event in the series. 

You can find current contributors in the UK WP Community Slack team, in the #gutentag-events channel. To join the channel, please follow the instructions here. Our event hashtag is #gutentagWP, so please do use it if you’re tweeting about the event!

All participants are asked to adhere to the Code of Conduct.

Contributors

Guten Tag has been made possible by contributions from the following people and organisations:

The Small Print

  • The event will be streamed online using the Crowdcast platform. Every event in the series must be free. The event itself and the resources associated with it are under a CC-BY license, with all content and resources being open and available to the community before, during, and after the event.
  • All events in the series will be covered by a standard Code of Conduct, created for this event using the Open Source Bridge Code of Conduct.

Transforming WordPress for the Modern Newsroom, with Fairfax Media

Last year, we joined Fairfax Media on a project to overhaul their existing CMS platform, delivering a fast and efficient custom WordPress CMS and a tailored publishing workflow for their editors and journalists.

Today, we’re delighted to be able to share the story of that project and the journey to improve the experience of journalists at one of the most influential media companies in the Asia Pacific region. 

Human Made’s expertise in WordPress was key to a successful outcome, and they were instrumental in developing our Editorial experience. 

Damian Cronan, Chief Technology Officer

Read Fairfax’s story


Download the full project white paper