The Comparative Copyfitting Factor

I’m often asked how my fonts compare to Times New Roman or other common system fonts in terms of how many words they fit per page.

Triplicate, for instance, fits as many words per page as Courier because they’re both monospaced fonts, and thus both conform to the typewriter convention that each character is 6/10 of an em wide. (Recall that the em is the height of the notional bounding box of the font. It’s always scaled to the current point size, so at 12 point, 6/10 of an em = 7.2 points.)

But that’s not true of proportional fonts like Equity and Times New Roman, because their character widths vary. I know from experience that Equity and Times New Roman fit roughly the same number of words per page. But since point size alone doesn’t capture their comparative copyfitting characteristics, could we quantify this another way? Let’s call it the Comparative Copyfitting Factor (CCF).

A simple answer might be to type out every character of each font and measure the length of these two character sets. Then the CCF would be the ratio of these two measurements. The problem is that most of these characters are seldom used in body text. We’d end up measuring a lot of characters that don’t have any bearing on copyfitting.

We might then observe that most body text is made of lowercase letters. What if we just measured the lowercase alphabets? A better idea. But still flawed, because every letter would appear once in our measure, and thus be weighted equally. In the real world, letter frequency varies. Ideally, we would use a sample string that was correctly weighted for statistical letter frequency.

The issue of letter frequency arises repeatedly in the history of typography. Letters that are more common need special treatment. For instance, when type was cast in metal, it wouldn’t have made sense for type founders to furnish an equal number of every letter. Instead, fonts were shipped with more copies of the common letters and fewer of the uncommon ones. This is also why the left two columns of a Linotype keyboard were ETAOIN and SHRDLU: grouping the common letters made it possible to operate the machine faster.

Measuring letter frequency with a computer, of course, is much easier than by hand. The largest such effort is probably the one conducted by Peter Norvig, who searched through a massive trove of digitized books, counting 3,563,505,777,820 letter occurrences. For instance, he found 445.2 billion occurrences of the most common letter (“e”, naturally) and at the other end, 3.2 billion occurrences of “z”.

E   445.2bn   12.49%
T   330.5bn    9.28%
A   286.5bn    8.04%
O   272.3bn    7.64%
I   269.7bn    7.57%
N   257.8bn    7.23%
S   232.1bn    6.51%
R   223.8bn    6.28%
H   180.1bn    5.05%
L   145.0bn    4.07%
D   136.0bn    3.82%
C   119.2bn    3.34%
U    97.3bn    2.73%
M    89.5bn    2.51%
F    85.6bn    2.40%
P    76.1bn    2.14%
G    66.6bn    1.87%
W    59.7bn    1.68%
Y    59.3bn    1.66%
B    52.9bn    1.48%
V    37.5bn    1.05%
K    19.3bn    0.54%
X     8.4bn    0.23%
J     5.7bn    0.16%
Q     4.3bn    0.12%
Z     3.2bn    0.09%

How can we make a single string that captures this frequency information? We don’t need to type out trillions of letters. Instead, let’s normalize the occurrence counts by dividing all of them by 3.2 billion. That means “z” will appear once in our sample string, and the other letters will appear proportionately more based on their comparatively greater frequency. If I’m doing this right, we get this string of 1187 characters:

eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeettttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaoooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiinnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnsssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhlllllllllllllllllllllllllllllllllllllllllllllllldddddddddddddddddddddddddddddddddddddddddddddccccccccccccccccccccccccccccccccccccccccuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuummmmmmmmmmmmmmmmmmmmmmmmmmmmmmfffffffffffffffffffffffffffffpppppppppppppppppppppppppggggggggggggggggggggggwwwwwwwwwwwwwwwwwwwwyyyyyyyyyyyyyyyyyyyybbbbbbbbbbbbbbbbbbvvvvvvvvvvvvvkkkkkkxxxjjqz
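
(If you want to reproduce this, here’s a minimal Python sketch of the normalization, using the counts from the table above. The exact length of the result depends on the rounding rule, so it may come out slightly different from the 1187-character string shown.)

    # Norvig's letter counts, in billions of occurrences (from the table above)
    counts_bn = {
        "e": 445.2, "t": 330.5, "a": 286.5, "o": 272.3, "i": 269.7, "n": 257.8,
        "s": 232.1, "r": 223.8, "h": 180.1, "l": 145.0, "d": 136.0, "c": 119.2,
        "u": 97.3, "m": 89.5, "f": 85.6, "p": 76.1, "g": 66.6, "w": 59.7,
        "y": 59.3, "b": 52.9, "v": 37.5, "k": 19.3, "x": 8.4, "j": 5.7,
        "q": 4.3, "z": 3.2,
    }

    # Normalize by the rarest letter ("z") so it appears exactly once, and
    # repeat every other letter in proportion to its frequency.
    sample = "".join(ch * round(n / counts_bn["z"]) for ch, n in counts_bn.items())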

In body text, however, we also need to consider word spaces, which occur frequently and take up non-negligible space. According to Norvig, the average word length is 4.79 letters. So in our sample of 1187 letters, we can expect to see 1187 / 4.79 ≈ 248 word spaces. We add those to the sample string above, for a total of 1435 characters. Measuring this relatively short string should give us a very good estimate of the copyfitting characteristics of any font.
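
(Continuing the sketch above. Because we’ll only be summing advance widths, it doesn’t matter where the spaces go, so we simply append them to the end.)

    # One word space for every 4.79 letters (Norvig's average word length).
    sample += " " * round(len(sample) / 4.79)
    print(len(sample))   # for the 1187-letter string: 1187 + 248 spaces = 1435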

(By the way, we’re deliberately ignoring capital letters, punctuation, etc. on the idea that these characters occur so infrequently in body text that they won’t meaningfully affect CCF.)

Here’s a selection of MB fonts and common system fonts, along with the length of our sample string in each font (denominated in ems), sorted from longest to shortest. (A sketch of one way to measure these lengths follows the table.)

Bookman              701.640
Century Schoolbook   661.401
Book Antiqua         632.806
Arial                629.010
Valkyrie             628.063
Charter              625.379
Century Supra        613.393
Bell                 581.680
Calibri              579.634
Times New Roman      570.381
Equity               565.521
Goudy Oldstyle       562.483
Garamond             555.012
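
(One plausible way to make such measurements, sketched in Python with the fontTools library: sum the advance widths of the sample string’s characters and divide by the font’s units per em. This ignores kerning, which seems fair, since the letter order in our sample string is arbitrary anyway. The font file name below is hypothetical, and results will vary with the font version.)

    from fontTools.ttLib import TTFont

    def length_in_ems(font_path, text):
        """Sum the advance widths of text's characters, in ems."""
        font = TTFont(font_path)
        units_per_em = font["head"].unitsPerEm   # typically 1000 or 2048
        cmap = font["cmap"].getBestCmap()        # code point -> glyph name
        hmtx = font["hmtx"]                      # glyph name -> (advance, lsb)
        return sum(hmtx[cmap[ord(ch)]][0] for ch in text) / units_per_em

    print(length_in_ems("TimesNewRoman.ttf", sample))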

This reveals something that every college student and lawyer has probably figured out on their own: Bookman is a hog; Garamond is the most svelte. But we wanted a Comparative Copyfitting Factor. So let’s deem Times New Roman to be 1.0 and express the others relative to this benchmark:

Bookman              1.2301
Century Schoolbook   1.1595
Book Antiqua         1.1094
Arial                1.1027
Valkyrie             1.1011
Charter              1.096
Century Supra        1.075
Bell                 1.019
Calibri              1.016
Times New Roman      1.0
Equity               0.9914
Goudy Oldstyle       0.9861
Garamond             0.9730
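
(In code, the conversion from em-lengths to CCFs is a single division per font. The em-lengths here are transcribed from the first table; last-digit differences from the CCF table above come down to rounding versus truncation.)

    lengths = {
        "Bookman": 701.640, "Century Schoolbook": 661.401,
        "Book Antiqua": 632.806, "Arial": 629.010, "Valkyrie": 628.063,
        "Charter": 625.379, "Century Supra": 613.393, "Bell": 581.680,
        "Calibri": 579.634, "Times New Roman": 570.381, "Equity": 565.521,
        "Goudy Oldstyle": 562.483, "Garamond": 555.012,
    }
    benchmark = lengths["Times New Roman"]
    for name, ems in lengths.items():
        print(f"{name:<20} {ems / benchmark:.4f}")   # e.g. Bookman -> 1.2301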

Truth in advertising: Equity’s CCF of 0.9914 means it fits essentially the same number of words per page as Times New Roman. Meanwhile, many lawyers remain enamored of Century Schoolbook, even though it has a rather bad CCF of 1.1595.

Another way of reading this chart is to think of the CCF as the point-size multiplier needed for two fonts to match. For instance, a page of 12-point Bookman will contain roughly the same number of words as Times New Roman at 14.76 point (because Bookman’s CCF is 1.23, and 12 × 1.23 = 14.76).

I resisted putting a chart like this in Typography for Lawyers because of the potential for abuse. My advice has always been to interpret court rules in good faith, which means noticing that these rules are meant to promote legibility and fairness. As for lawyers who undermine these goals by deliberately picking the lowest-CCF font, time after time: are you new here?

Still, point-size shenanigans could be eliminated entirely by denominating document length in word count rather than page count. Measuring documents by the page is a technique best left behind with the rest of the typewriter era.

PS to supernerds: you might wonder whether it’s possible to construct a shorter string that produces statistically equivalent CCF results. (Yes—I think it can be at least 90% shorter, based on my experiments.) Also, is there a reasonable computational technique for converting one of these sample strings into a readable list of words (that is, an anagram)?