One concern for letter writers is potential inadvertent bias in letters: focusing on personality for some people and research products for others, for example, or writing letters of very different lengths. This comes up especially in the context of women in science but likely applies to other groups. I made a simple tool so that letter writers can check this themselves; there are similar tools elsewhere as well (see here for a good overview, and thanks to this tweet for pointing out that resource). This uses terms from Schmader et al. (2007); the main code is forked from Ethan Arterberry's code, which is based on Kevin Roth's Sentimental code.

Paste in two letters to compare them (click somewhere on the page outside the box to start parsing). Each letter is scored for length, sentiment (words are scored for how positive or negative they are), words relating to teaching, words relating to research, words relating to being a standout ("best"), words relating to ability, words relating to "grit", and titles used. In most areas, higher scores are better (the exceptions are a letter that is too long, or a field that favors first names over titles). The words from your letter that match each category are returned as well, so you can see exactly what drove each score.
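The general approach is straightforward: tokenize the letter, match tokens against category word lists, and sum per-word sentiment scores. Here is a minimal sketch of that idea; the word lists and sentiment values below are illustrative placeholders I made up for this example, not the actual Schmader et al. (2007) term lists or the lexicon Sentimental uses.

```javascript
// Placeholder category word lists (NOT the real Schmader et al. lists).
const categories = {
  research: ["research", "data", "publication", "findings"],
  teaching: ["teaching", "mentor", "classroom", "course"],
  standout: ["best", "outstanding", "exceptional"],
  grit: ["hardworking", "persistent", "dedicated"]
};

// Tiny stand-in sentiment lexicon (positive > 0, negative < 0).
const lexicon = { outstanding: 3, best: 3, dedicated: 2, adequate: -1, poor: -2 };

function analyzeLetter(text) {
  // Lowercase and split on non-word characters to get tokens.
  const tokens = text.toLowerCase().split(/\W+/).filter(Boolean);
  const matches = {};
  for (const [name, words] of Object.entries(categories)) {
    // Keep the matched words themselves, so the writer can see
    // which terms triggered each category's score.
    matches[name] = tokens.filter(t => words.includes(t));
  }
  // Sentiment is the sum of lexicon scores over all tokens.
  const sentiment = tokens.reduce((sum, t) => sum + (lexicon[t] || 0), 0);
  return { length: tokens.length, sentiment, matches };
}

const report = analyzeLetter("She is the best researcher; her teaching is outstanding.");
console.log(report.matches.standout); // ["best", "outstanding"]
console.log(report.sentiment); // 6
```

The real tool works the same way in spirit but with much longer, research-backed word lists, which is why returning the matched words (not just counts) matters: it lets the writer judge whether a flagged word is actually problematic in context.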

I've set this up so that everything is done with JavaScript on the client side (that is, in your browser) rather than on a server, and nothing is transferred (so I can't see your letters). You can see the code at https://github.com/bomeara/sentimood for more info, or to make a better version.