hckr.fyi // thoughts

Solving Remote Work Issues with Documentation

by Michael Szul

Lastly, if you're a fan of cyberpunk, I'll be speaking at the Cyberpunk Culture Conference (virtually) this July, pulling material from the original Max Headroom article and Codepunk podcast episode. Virtual participation is literally the cost of a cup of coffee.


I worked remotely for roughly 6 years before taking my current position. After a few years of being fully in office, I started spending one day a week working from home—cutting out 2.5 hours worth of driving time. Eventually, the plan was to start spending more time working remotely to eliminate the wear-and-tear on both my car and my body, but the current COVID-19 pandemic accelerated that decision, and my entire team and I are currently burning cycles remotely for the near future.

Working remotely often presents its own unique set of problems. These problems can be difficult for remote workers to tackle, but even more difficult for remote managers to resolve. When team members are in different time zones and these issues have to be resolved asynchronously, the difficulties compound.

A lot of these problems stem from trying to duplicate in-person processes remotely, which is a natural inclination when teams first shift to remote; however, this is only sustainable for a short period of time, as teams grow tired of frequent video calls, lengthy email chains, and having to explain themselves when they run out to lunch but forget to set their chat status to away.

The long-term resolution to this remote work stressor lies in appropriate documentation. We've talked about this before, and even gone into detail with it previously, but what I want to do here is deep-dive into two different aspects of documentation that can help change your overall team dynamic.

The first aspect of documentation is the traditional one: Writing stuff down. Many quality systems have documentation: A manual on how things ought to operate. 20 years ago, these manuals came with major software releases. Today, open source projects are plagued by a lack of documentation, and small-to-medium development and application shops within companies often suffer from a lack of good administrative screens and a lack of solid documentation. Missing documentation like this increases misuse of the application or system, increases the number of support requests, and bogs down the development team with requests for help instead of allowing them to improve the software. It becomes a vicious cycle.

When teams move to a remote work environment, creating system and application documentation becomes even more important. Nobody can walk to your desk to ask for help, and I don't think I've received a phone call (rather than an email) since the early 2000s. To survive in a remote environment, you need a strong push toward writing the system down and creating an effective manual.

It doesn't stop with a single application manual, however, as the fast pace of software development means regular updates at a speed much faster than PDF output; PDF feels slow and low-tech next to the pace of DevOps deployments. Software applications and process changes need to be consistently written down to capture the regular changes, and notes need to be more than passing comments on Zoom calls and/or chat windows. I've written at length about preferring chat over email, and email over video calls, and with good reason: effective remote work requires asynchronous communication. Interruptions and context-switching cause a loss of productivity, so video calls and in-person meetings are a time sink. Emails are great for formalizing a statement or reporting up, but for communicating with an asynchronous team, the temporal nature of email makes it easy for messages to slip through the cracks. I've seen executives with upwards of 10,000 unread messages in their inbox.

When I refer to chats, I'm referring to group channels. One-on-one chats are great for personal conversations, but when we're trying to keep channels of communication open for knowledge transfer, we need things like Microsoft Teams, Slack, or even IRC (the grand-daddy). These open channels allow anyone to ask questions and anyone to answer them, and more closely resemble a pull-based communication system.

The problem with this proliferation of chat channels is that once a decision is made, what do you do with it? Sure, the chats are searchable, but that leads us to the same temporal issues that email presents. Instead, someone needs to formalize those decisions into a constructive document. That might require consolidating the discussion and sending an email for approval. It might require typing it into a requirements document. This is a better way, but it changes the way you do business.

Although system documentation is just as important today as it was 20 years ago, the speed of development puts you in a position where large manuals get out-of-date quickly, so we need to shift our perspective on how to publish these manuals. The first step is to put all documentation in source control—not a document repository. Look, I know that document repositories (including SharePoint) can offer version control, but that's different from a full source control repository that treats documentation like code.

You can start by putting your source files in source control. This probably means Word documents. It's not ideal, but it's a good start.

The reality is that just like a web application with continuous updates to avoid entropy, documentation is most likely a living document undergoing constant change. If your documentation process is undergoing the same changes and processes as your development process, shouldn't they mirror one another?

I'm a fan of using the tools you're familiar with, and then relying on build processes to get you where you need to go. For documentation, there are two ways to go about this. The first is to have your documentation writers simply write in markdown.

Why?

Markdown has become the de facto HTML shorthand for most online documentation. The most popular source control repository in the world (GitHub) will automatically render markdown as HTML, and will default your repository's landing page to the README.md, if one is present. GitLab, Azure DevOps, etc. all render markdown automatically and use the README.md file in the same way. Markdown is purposefully limited, avoiding formatting overload and restricting the writer to a core set of elements that are easily handled by CSS when the document goes out for publication.
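For the unfamiliar, that core element set is tiny. A minimal sketch of a documentation page (file contents hypothetical) covers most of what a manual needs:

```markdown
# Deployment Guide

Deploys run through the **CI/CD** pipeline.

## Steps

- Merge the pull request
- Watch the build

[Build status](https://example.com/status)
```

Headings, emphasis, lists, and links: GitHub and its peers render all of these automatically, and CSS handles the rest at publication time.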

What if your documentation people don't want to write in markdown? This is where appropriate tooling comes into play. You could easily have your writers continue writing in Word, but create a utility or process that uses software like Pandoc to convert the Word document to HTML. You can even associate specific styles in Word with HTML styles to allow Pandoc to generate the appropriately styled and formatted text. In fact, Pandoc is a great publishing utility that can be used to output different formats for different audiences, even if you start in markdown. Ultimately, you can convert almost any format to markdown, HTML, eBook (which is just HTML), and even LaTeX (and PDF through LaTeX). You could build an entire publication process around tooling such as Pandoc, and integrate that process into the same DevOps process that you put your software through.
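As a rough sketch of that conversion step (directory names are hypothetical, and this assumes Pandoc is installed on the build machine), a batch script might look like:

```shell
#!/bin/sh
# Sketch: convert every Word document under docs/ to HTML under site/.
# Assumes pandoc is on the PATH; the docs/ and site/ paths are hypothetical.
set -eu

SRC_DIR="docs"
OUT_DIR="site"
mkdir -p "$OUT_DIR"

for doc in "$SRC_DIR"/*.docx; do
  [ -e "$doc" ] || continue              # skip cleanly if no .docx files exist
  name=$(basename "$doc" .docx)
  # Word styles mapped in the document come through as classes in the HTML,
  # which your site's CSS can then target.
  pandoc "$doc" --standalone -o "$OUT_DIR/$name.html"
  echo "converted $doc -> $OUT_DIR/$name.html"
done
```

Drop a script like this into your build pipeline and writers never have to leave Word.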

Why markdown at all? Who cares whether or not source control repositories render it? Because even though you can generate documentation as Word or PDF, documentation today lives on the web. You want documentation that is easily accessible, easily editable, and ultimately easily searchable. This is what the web was built for in the first place. All documentation should be available via a web browser, even if that documentation sits behind a firewall and requires security. People are used to the web browser; they're likely in one most of the day.

The other reason for using markdown (or converting to markdown), and using the full power of a web site or web application, is that it brings more people on board beyond the technical writer or documentation specialist. When it comes to software, developers need to be writing their own documentation, if possible. Markdown is simply easier for them.

But it's more than that. We talked about documentation residing in source control. This means that documentation can follow the same process as any other application. Does new documentation need to be added? Is there an error in the documentation? Don't email the technical writer. Fork the repository, make the correction yourself, and then issue a pull request. With the technical writer as the reviewer or the code owner of the repository, he or she will automatically get flagged to review the change, and can act as the gatekeeper for documentation changes, requesting changes to the pull request along the way.
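The contributor side of that flow is just ordinary git. A sketch of the local steps (repository and file names are hypothetical; the fork and the pull request itself are created on the hosting platform):

```shell
#!/bin/sh
# Sketch of the contributor-side flow for fixing documentation.
# In practice the first step is: git clone https://github.com/you/docs-fork.git
# For this sketch, create a stand-in local repository instead.
set -eu

mkdir -p docs-fork && cd docs-fork
git init -q -b main
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

# Branch, correct the documentation, and commit.
git checkout -q -b fix/typo-in-setup-guide
echo "Run the installer before configuring the service." > setup.md
git add setup.md
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "Fix setup instructions"

# In practice: git push origin fix/typo-in-setup-guide, then open a pull
# request; the technical writer, as code owner, is flagged to review it.
git log --oneline
```

No email to the technical writer, no lost chat thread: the correction and its review live next to the documentation itself.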

We've focused on source control so much because modern day source control repositories offer easy hooks into DevOps processes. Some, like GitHub and GitLab, have their own continuous integration (CI) and continuous deployment (CD) functionality. Others offer webhooks that different services can integrate with. Chances are, if you're a modern technology team (or striving to be one), then you're working with a DevOps process that deploys your applications through these CI/CD means. Your documentation shouldn't be any different.

There are several static site generators and publication engines in the open source community (e.g., Jekyll, Hugo). Plug into one. Better yet: Build one yourself. You can build a static site generator in less than 100 lines of code.
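To show how little is actually required, here's a deliberately minimal sketch of a generator in plain shell: it handles only headings and paragraphs (a real generator, or Pandoc, covers the rest), and the content/ and public/ directory names are hypothetical.

```shell
#!/bin/sh
# Minimal static site generator sketch: converts each markdown file in
# content/ into an HTML page in public/. Only headings and paragraphs
# are handled; everything else passes through untouched.
set -eu

mkdir -p content public
# Hypothetical sample page so the sketch has something to build.
printf '# Hello\n\nThis is the manual.\n' > content/index.md

for md in content/*.md; do
  name=$(basename "$md" .md)
  {
    echo "<html><body>"
    # sed: "## Text" becomes <h2>, "# Text" becomes <h1>; lines already
    # converted (starting with "<") and blank lines are skipped; any
    # remaining line is wrapped in <p> tags.
    sed -e 's|^## \(.*\)|<h2>\1</h2>|' \
        -e 's|^# \(.*\)|<h1>\1</h1>|' \
        -e '/^</b' -e '/^$/b' \
        -e 's|.*|<p>&</p>|' "$md"
    echo "</body></html>"
  } > "public/$name.html"
  echo "built public/$name.html"
done
```

That's the whole idea: markdown in, HTML out, one page per file. Everything a production-grade generator adds (templates, navigation, syntax highlighting) is layered on top of this loop.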


Interested in building your own static site generator? We'll be covering this topic for how we're rebuilding Codepunk here on the web site and over at our YouTube channel in the very near future.


With a static site generator implementation, you can tie the publishing of your documentation site directly into your CI/CD pipeline via a source control webhook. An editor makes a change to markdown, checks in the change, makes a pull request, gets the pull request merged, and then a build and release process kicks off to convert the markdown to HTML and publish the static site: An automatic documentation site builder. This follows the same patterns as a standard DevOps deployment. If you're not keen on markdown as the documentation origin, you can add Pandoc into the build steps to run conversions as well.
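If you're on GitHub, a pipeline like that is a short workflow file. This is only a sketch in GitHub Actions syntax; the branch name, script names, and deploy step are all hypothetical stand-ins for your own setup:

```yaml
# .github/workflows/publish-docs.yml (sketch; script names are hypothetical)
name: publish-docs
on:
  push:
    branches: [main]          # fires when a docs pull request is merged
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Convert markdown to HTML
        run: ./build-site.sh  # your static site generator, or a Pandoc step
      - name: Publish static site
        run: ./deploy.sh public/   # push the output to your hosting target
```

Merge a documentation pull request, and the site rebuilds itself; nobody has to remember to publish.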

Remember that gatekeeper comment from earlier? Chances are you don't have an army of technical writers on your team, but you likely have a significant number of ever-changing applications. We never want one person or one job to become the bottleneck. In our current industry, there is a consistent drive to "shift left," which is a generic term for moving an engagement or responsibility closer to the beginning or the source. Shifting left on security means bringing security into the conversation at an earlier stage. Shifting left on support means providing tools necessary for tier-1 or self-service support—enabling the end users to find the information they need at their fingertips.

It's no different with documentation. With limited resources, programmers should be expected to document their own processes, code, and application functionality, contributing to the documentation guides and manuals as they go. This shifts the responsibility to the person writing the code, who is also the person most likely to know how the code actually functions. If the documentation is kept in source control, the programmer can fork the repository, edit the documentation, and then make a pull request. The technical writer or documentation specialist then acts as a gatekeeper and editor, requesting changes, asking additional questions in the pull request comment thread, and ensuring the overall integrity and understanding of the documentation before merging the changes into the origin repository (which will kick off a CI/CD build/release to push the documentation to production).

There is one final usage of documentation to enrich asynchronous, remote work. This use, however, does require a shift in process that takes it outside of the boundaries of normal communication. This is documentation at the source of the issue.

If we use GitHub as an example, both GitHub issues and GitHub pull requests allow for comments. A good rule of thumb is to put everything into the issues backlog for triage and use the labels for tagging and filtering. This does require you to come up with a system for labeling. But if there's one thing humans are good at, it's classification. If an end user has a question, rather than emailing it, maybe they should add it as a GitHub issue using a "question template" and the person triaging the issues can label it with a question label. Ideas, bugs, enhancements, and all manner of discussions related to the repository can exist inside of a system that sits close to the code. When a pull request is initiated, it can reference the issue and automatically close it once the pull request is merged. This offers traceability from issue to code to assigned programmer, and even to build and release version, keeping your entire software development, documentation, and user feedback process in the same place.
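The issue-to-code link costs almost nothing to create. GitHub recognizes closing keywords such as "Closes," "Fixes," and "Resolves" in a pull request description, so a sketch of a description (the issue number here is hypothetical) is all it takes:

```markdown
## What changed

Corrects the retry behavior described in the deployment guide.

Closes #42
```

When the pull request merges into the default branch, GitHub closes issue #42 automatically and leaves a cross-reference on both sides, which is exactly the traceability described above.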

This isn't just for issues, either; comments in pull requests are perhaps not used enough with internal applications and projects. Pull requests can be made without any code changes, but instead with a proposal for change. GitLab is famous for using their version of the pull request (called a merge request) to initiate documentation changes, code changes, and virtually everything else. To them, a process starts with a merge request to document the requested change. This is why they consider themselves a documentation-first company. We've spoken at length in the past about creating a team manual and being manual-first, much like GitLab is handbook-first. Comments in pull requests allow code changes to be debated in a format that is easily accessible and searchable, rather than relying on email and chat. If your DevOps setup is mature, you'll likely have build checks on pull requests that might point to coding standard issues, testing issues, or other items that need to be addressed or debated before merging the code. In an even more mature process, you might have every pull request deployed to a testing environment in the cloud so that the change can be tested. Vercel has an excellent platform for this.

Ultimately, not all documentation will occur where it needs to be, but we need to be diligent in ensuring that it makes its way to where it needs to live. Doing so enables asynchronous work (there's a reason why some are called pull-based teams), and makes remote teams markedly more proficient and efficient.