Recently, someone in a Slack group asked about metrics for technical writing. They were looking for metrics like tolerable errors per 1,000 words.

To me, that is absolutely the wrong metric to track. I could write a 1,000-word article of absolute nonsense without a single grammatical error or typo. By that metric, I'd be the best writer you've ever seen.

The first step is to start with why. What is the reason for collecting metrics? Is there something actionable you can do after collecting them?

If you’re looking for something measurable and useful, you have to look at things like the 4Cs: clarity, conciseness, completeness, and correctness. Here the answer gets a bit complex, because some of these attributes are hard to quantify into a number directly.

One way of collecting metrics is to go not by the number of edits, but by the type of edit. Take up an article and mark up the changes to be made, categorizing each edit as, say, a typo, wordiness, or incorrect technical information.

Now, if there were a few typos, you would mark them up and categorize them as avoidable errors. If there were issues of wordiness, you would categorize those as a language improvement area. If there was incorrect technical information, you would flag it as a severe issue.

At the end of the review, you would have a clear picture of the actual items to take care of. By now, you can quantify the edits by type as a proto-metric.
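As a rough sketch, that proto-metric could be as simple as a tally of edits by category. Here's one way it might look in Python (the category names and counts below are made up for illustration):

```python
from collections import Counter

# Hypothetical edits flagged during one review, each tagged by category
edits = [
    "typo", "wordiness", "typo",
    "incorrect-technical-info", "wordiness", "wordiness",
]

# The proto-metric: a tally of edits by type, not a raw error count
tally = Counter(edits)

print(tally["wordiness"])  # language improvement area
print(tally["typo"])       # avoidable errors
```

The point of the tally is that "three wordiness edits" tells you something actionable, while "six edits total" tells you almost nothing.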

Repeat this exercise for several content pieces by the same author, and you get an idea of what that author needs to improve.

Repeat it for other writers, and you have an idea of what the entire team needs to improve on.

The most important objective of this exercise is to improve the writing, not rate the writer on some arbitrary scale of your choice.