You can also try the Readability Grader at Jellymetrics for a more modern take on this kind of analysis.
Of the multiple analyses provided, the Gunning Fog index is the easiest result to interpret – it approximates the number of years of formal schooling required to read something.
What makes this interesting is that you can apply the same algorithm – ((avg # of words per sentence) + (% of words with 3 or more syllables)) * 0.4 – to any document. For example, Tyner Blain’s blog (prior to this post) averaged 11.17 words per sentence with 17.27% “hard” words, yielding a Gunning Fog index of 11.376. There are other indices as well, designed to provide different insights into the writing.
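If you want to run the numbers yourself, here is a minimal Python sketch of that formula. The syllable counter is an assumed vowel-run heuristic (the full Gunning Fog definition also excludes proper nouns and some common suffixes from the “hard” word count), so treat the output as approximate:

```python
import re

def count_syllables(word):
    """Crude heuristic: count runs of vowels. Not a dictionary lookup."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def gunning_fog(text):
    """((avg words per sentence) + (% of words with 3+ syllables)) * 0.4"""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    avg_words_per_sentence = len(words) / len(sentences)
    hard = [w for w in words if count_syllables(w) >= 3]
    percent_hard = 100 * len(hard) / len(words)
    return 0.4 * (avg_words_per_sentence + percent_hard)

# Plugging in the blog's own averages reproduces the figure above:
print(0.4 * (11.17 + 17.27))  # about 11.376
```

Calling gunning_fog() on a draft gives a quick, repeatable number to track across revisions, which is all the test is really good for, as discussed next.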
It’s important to note that these are purely mathematical analyses, and provide no insight into comprehensibility. If you follow the link to everything you ever wanted to know about readability tests, you will find that this sort of analysis is generally discouraged today. These formulaic studies are superficial measures of the text: they provide no insight into the difficulty of the vocabulary, the ease of interpretation by non-native speakers of the language, or comprehensibility in general.
If this is a bad test, why are you telling me about it?
Good question. The statistics can flag when a draft is wordy or dense – raising a red flag that content should be considered for revision. While we can’t use this test to say “the writing is good”, we can use it to say “the writing might be bad”. When we receive feedback that our writing is too hard to read, this type of analysis can tell us how effective our editing has been.
The goal of writing a requirement is not pedantic accuracy, it’s effective communication. In addition to crossing domain boundaries with the different audiences that consume our requirements, we are often crossing language barriers and varying educational levels. It’s hard enough to convey concepts that presume contextual knowledge; our readers shouldn’t have to parse the text repeatedly.