Great experimental marketing sites

June 3, 2015

I’ve written before about strategies for avoiding A/B tests. The most important one is just to do your homework, and take advantage of the huge number of documented usability and conversion rate optimization studies that are already out there.

I’ll update this post with more sites that I find matching this description.

  • Nielsen Norman Group. A lot of their content is paid, but much of it is free, too. Their blog, which is under the “Articles” section of the site, is great weekly reading, and of course, extremely easy to understand.
  • Marketing Experiments. Huge number of videos and articles, and a research archive with extremely detailed testing results.
  • Which Test Won?. Huge library of A/B tests, specifically. I can’t resist taking their “Free Test of the Week”, where you have to guess which of two variations performed better. Great check on your abilities as a marketer, frankly.


Twitter's Got It Right?

June 1, 2015

I think marketing should be far more data-driven than it currently is.

But I’m still really surprised that Twitter appointed its CFO, Anthony Noto, to run marketing. (And, by the way, that “CMO” hasn’t been officially added to his title). I think that a lot of the business world, especially in tech, suffers from the idea that just anyone can do marketing, that it isn’t a separate discipline and craft of its own, requiring expertise to do well.

This AdAge article by Moshe Vaknin is a great example of this attitude. Here’s the headline:

Twitter's Got It Right: Why CFOs Can Oversee Marketing

It took me some time to understand Vaknin’s exact point here. Twitter’s got what right? Some possibilities:

  • A CFO can oversee marketing. Sure. Was anyone debating this? CMOs can also oversee finance. CTOs can oversee HR. It’s totally fine and these sorts of things happen all the time. But it doesn’t mean it’s necessarily a good idea.
  • Some CFOs make good CMOs. This seems like a much more sensible conclusion. In fact, I can think of industries like consumer packaged goods where marketing is so integral to the business that this might often be the case. But reading the subhead suggests to me that it’s not what Vaknin means.

But the subhead says Four Reasons Why CFOs Make Good Marketing Chiefs. I think you can only read that to mean:

  • All CFOs make good CMOs.

This statement rests on a misunderstanding of what it means for marketing to have become more data-driven over the past ten years. Just because it’s become more data-driven doesn’t mean it’s been de-skilled. It doesn’t mean that all it takes is quantitative ability.

If anything, the prerequisites for great marketing are steeper than ever: it now requires a combination of creativity, communication ability, and empathy, along with the ability to understand a spreadsheet.

Why does Vaknin say this?

1. It's all about the data.

The "Mad Men" of yesteryear have been replaced by "math men" and "math women," data scientists, quantitative analysts and other number crunchers who analyze the data for measuring, analyzing and optimizing every marketing campaign.

Well, no, they haven’t been. Modern marketing isn’t solely about data, though data is playing an increasingly large role. And that’s good!

But what’s so interesting about modern marketing is that the quantitative side still needs great creative work to be successful. Marketing is still about telling stories, and connecting with your prospects on both an emotional level (how will this product make me feel?) and on a numbers level (what specific benefits does this bring me? what value can I expect to realize?). See AirBNB and Facebook for starters.

Data is important for targeting, obviously, too. And it’s important for refining your pitch: you can do A/B testing on the channels you use, the words you use, and some of your creative assets. But putting together the vision for a brand requires that analytical ability plus everything else: the creativity, communication, and empathy above.

And that vision is key to getting people to use the product. Partly because it guides the product’s development, and partly because it helps your customers understand how your product fits into their lives. See MakerBot’s entire history for a great example of this.

Vaknin goes on to point out that:

Few have more experience in overseeing data than a former Wall Street analyst, particularly one who was voted top analyst for research on the internet industry.

I didn’t understand this. Analysts don’t really “oversee data”. They build models and try to understand the fundamentals of an industry, and the prospects of individual companies. They use a combination of strong financial and quantitative skills, and strategic and social understanding.

Wouldn’t a data scientist be the logical choice for a CMO, if “overseeing data” is the main qualification for success?

2. Twitter is focused on performance marketing in 2015.

I'd venture a guess that Twitter's marketing is less focused on brand building, for which it has done a great job, than on performance-based marketing tactics to grow its user base and active Twitter usage.

I wish Vaknin had focused here. This is a fair point.

Performance-based marketing is new. Starting with banner ads on the early web (and maybe even before that, I don’t know), marketers became able, at least in theory, to track their advertising efforts all the way from the first time a prospect saw the message, through to sale. In turn, they could avoid Wanamaker’s predicament: “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.”

But then again, Vaknin says he thinks Twitter has done a great job brand-building. What does he base this on? Isn’t user growth, for example, a huge problem for Twitter right now? Isn’t that a failure of marketing, that people don’t understand how Twitter fits into their lives?

I don’t think Twitter’s growth is going to come from better pay-per-click advertising. It’s probably going to come from a better product roadmap, better onboarding and retention, and better brand positioning.

3. It is the age of mar-tech.

Marketing executives want to understand the ROI of every dollar spent. They want to cultivate a unique relationship with each individual customer. That includes buying media programmatically; optimizing creative, placement and audience; retargeting likely customers; and measuring results in near real-time.

Marketers should view these not as campaign tactics but as elements of a constantly evolving strategy.

This is a great summary of what martech is fundamentally about. (And really, what good relationship marketing is about, which is why martech is so exciting.)

But what does this have to do with a CFO running marketing? If anything, this suggests even more strongly that you want somebody with experience. Marketing technology is complicated and hiring someone who knows how to implement it and run it well is key. And, of course, the ability to be technical is not the same as quantitative ability.

4. Breaking down the silos (and taboos) is good for organizations.

Diversity is important for organizations and brings a different set of experiences and skills to the table, providing an important opportunity to solve problems with an alternative perspective.

This is a truism, so I won’t get too deeply into it.

But if breaking down the silos is good, why not hire a great CMO from outside Twitter, and have them run finance while Noto focuses on fixing marketing?


Marquess, a bash script that makes it easy to use PrinceXML for templated collateral generation

May 21, 2015

Typically, marketing assets like whitepapers, datasheets, and case studies are maintained by the graphic design team.

This can be frustrating for everyone who’s not on that team, because when you want to make a fix or update, even if it’s just fixing a typo, you have to get graphic design involved. And it can be frustrating for graphic design, too, because who wants to spend all day fixing typos?

There are other problems as well:

  • If you decide to change the template for your documents, because, for example, your branding has changed, it has to be changed manually everywhere
  • It’s hard to maintain documents in multiple formats (for example, RGB vs CMYK, or HTML vs PDF), because, again, changes made in one place have to be manually replicated elsewhere
  • Making even minor changes to documents requires the involvement of graphic design, and then an extra review cycle by the requester to make sure the change is made correctly
  • Tables of contents are a pain to manage generally, and have to be updated and manually cross-checked if the document structure or headings change

PrinceXML was recommended to me as a solution to these problems. As input, PrinceXML takes standard CSS and HTML, just as they’re used on the web. As output, it produces PDFs that are indistinguishable from something created using software like InDesign.

However, PrinceXML by itself doesn’t solve any of the problems above.

Why? Its interface is the command line. And the command has to be written to correctly import all of the necessary HTML and CSS for multiple versions, together with all images, boilerplate text, fonts and graphics, and so on.

This is not straightforward for users. Imagine giving everyone a car instead of forcing them to ride the bus. Much easier for them to get to where they want to be, but only if they understand how to use the turn signals, the gear shift, how to add fuel, etc. If they don’t know how to do these things, they’re worse off than they were before.

So, we needed an interface for PrinceXML. This interface had to make it easy to:

  • Send a document to PrinceXML for parsing
  • Include all the necessary files (images, fonts, multiple CSS versions, etc.)

As a bonus, the script adds logic to parse Markdown, which is helpful because Markdown is far easier to work with than HTML, while still allowing inline HTML if necessary.

I wrote a script that provides this interface, and the code (not including, of course, PrinceXML, which has to be purchased separately) is included in this GitHub repository: https://github.com/riboflavin/marquess.

The script is fairly straightforward bash, and it takes one parameter, which is the folder containing the document you want to use.

You can actually just drag the script, and then drag the target folder, straight to your command line if you’re using OS X. Or run the script by itself and it will prompt you for the folder.
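
For example, assuming you’ve saved the script as marquess.sh and your document lives in a folder called whitepaper (both names are hypothetical):

./marquess.sh ~/Documents/whitepaper

The folder you pass in should contain an Input directory with your content.md inside, as you’ll see below.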

Quick guide to the script

Here’s a quick guide to how the script works. The first thing it’ll do is create some output folders for your document.


#create the output folders for the generated PDFs and images
mkdir -p $docfolder/Output/PDF;
mkdir -p $docfolder/Output/PDF/img;

Next, it’ll look for three lines at the beginning of Input/content.md that give the title, subtitle, and date of the document. These are marked with #, ##, and ###, respectively. The script will then strip those lines from the document and echo the remaining contents to a temporary working copy.


#get the title out of content.md
TITLE=$(head -n 1 $docfolder/Input/content.md)
#bash substring replacement syntax
TITLE=${TITLE#\# }
TITLE=${TITLE#\#}

#remove three lines from the top of the content.md file 
#(the title, subtitle, date)
sed '1,3d' $docfolder/Input/content.md >> $docfolder/temp/content.md
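
Something similar presumably happens for the subtitle and date that show up on the cover page below; here’s a sketch of how it could look (the actual lines in the repo may differ):

#sketch: subtitle and date from lines 2 and 3 of content.md
SUBTITLE=$(sed -n '2p' $docfolder/Input/content.md)
SUBTITLE=${SUBTITLE#\#\# }
TITLEDATE=$(sed -n '3p' $docfolder/Input/content.md)
TITLEDATE=${TITLEDATE#\#\#\# }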

Next, the script converts any existing Markdown to HTML, using John Gruber’s Markdown.pl:


#convert each markdown file in temp/ to an .html file of the same name
for i in $docfolder/temp/*.md; do perl $DIR/template/Markdown.pl --html4tags $i >> ${i%.*}.html; done;

After that, the script looks for a table of contents file, toc.md. If it doesn’t exist, the script looks for h1, h2, and h3 headings, parses them, and creates a new table of contents:


if [ ! -f "$docfolder/Input/toc.md" ]
then
    grep -e '^<h[123]' $docfolder/temp/content.html | sed 's/<\/h[123]>/<span><\/span>& /g' >> $docfolder/Input/toc.md
fi

Work then continues on the final output document. Marquess produces it through a series of concatenations, inside a giant loop that runs once for each format you want; the format names are embedded both in the template file names Marquess looks for and in the output file names it generates.

The front page, for example, is generated like this:


for fmt in cmyk rgb; do

#preface. on the cmyk loop, for example, use example_cmyk_front.html
cat $DIR/template/doc/example_${fmt}_front.html >> $docfolder/Output/PDF/$fmt.html
cat $DIR/template/doc/example_common_front.html >> $docfolder/Output/PDF/$fmt.html

#cover
echo "<div id=\"cover\">" >> $docfolder/Output/PDF/$fmt.html
echo "<h1>$TITLE" >> $docfolder/Output/PDF/$fmt.html
echo "<h2>$SUBTITLE" >> $docfolder/Output/PDF/$fmt.html
echo "<h3>$TITLEDATE" >> $docfolder/Output/PDF/$fmt.html
echo "<div id=\"frontlogo\"></div>" >> $docfolder/Output/PDF/$fmt.html
echo "</div>" >> $docfolder/Output/PDF/$fmt.html

After adding lots of other stuff to the final output, at the very end, the script runs Prince on the HTML. You must update this line with the path to your PrinceXML binary.


#prince
#replace with path to your PrinceXML binary
/path_to_prince_binary/prince/prince $docfolder/Output/PDF/$fmt.html -o $docfolder/Output/PDF/${docname}_${fmt}.pdf -v

That’s it! The next step would be to make this an online app, with configurable (and saveable) options to generate documents on the fly. Though I’ve found the command line really useful since we often want to make lots of little iterations on our documents, then see what they look like.


Content doesn't have to expire, but if it does, try to match the user's intent

May 15, 2015

It’s easy to think that content is a one-time thing. You write a blog post, and it’s done. Or you create a whitepaper or give a webinar, and you’re done. But it’s much more accurate to think of your content as something you’re saying in an ongoing conversation.

That’s great, because it means that you can:

  • Promote old content, including linking to it from your new content.
  • Consider using old content as a template for your new content.
  • Update what you have instead of creating something new.

It also means that you’re regularly auditing your old content to see if it’s still correct, relevant, and on-brand. And you’re updating calls to action on your old content so that they point to the most relevant current offers.

Of course, that’s what you should be doing with as much content as you can, starting with your best. But what about that page you created for an event that’s already passed, or a webinar that covers an old version of your software?

As usual, it’s all about intent. What was your visitor trying to achieve when they visited this page? How can you best help them achieve that, given the content you have, and given the resources you have if you can’t update everything?

1) It’s a blog post or presentation relating to an earlier version of your product. Your visitor might be looking for something related to that version. So don’t just take it down. Leave it up, but add a banner pointing to new content.

2) It’s a listing for an event that has already passed. Maybe your visitor is looking for the content from that event; can you put a link to it on that page (or even better, add it directly to that page)? If not, they have some interest in attending an event, presumably. Can you 404 the page but tell the visitor where your upcoming events are?

3) It’s information that is not outdated, but is off-brand, or uses old messaging. It’s surprising that this page still comes up on search. Does it make sense to revise this page? If not, can you pick the page on your site that’s closest to that content, and forward the visitor there?

If all else fails, 404 it, but provide site search on your 404 page. 404s don’t hurt you in search. And track which URLs are 404ing, since that can be a useful way of seeing what content visitors want that you don’t have.
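
On that last point, here’s a minimal sketch of mining a server access log for 404s (this assumes a combined-format log at a hypothetical path; adjust the field numbers to your server’s log format):

#most-requested URLs that returned 404, from a combined-format access log
awk '$9 == 404 {print $7}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -20

Your analytics package can probably give you the same report, but the one-liner is handy for a quick look.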


Responsive diagrams and photos

April 6, 2015

Responsive design is critical for a high-quality, well-designed site. Many layouts are fairly straightforward; there are decisions to be made, to be sure, but you can accomplish a lot by stacking things up in one long column, especially on informational sites.

But if you have complex images, such as diagrams, or even if you have photos, it’s a lot less clear what to do. Simply shrinking what you have down to a mobile device width might make it unintelligible.

What are some solutions to this problem?

Pan and zoom widget

You could write some JavaScript that allows users on mobile devices to pan and zoom your content. The major advantage is that once the code is written, you don’t have to make any other changes. The implementation is perfectly uniform across your site, and easy to test. It’s also probably the appropriate solution for online stores and the like, where visitors want to inspect your images in detail.

Responsive photos via icons.
Responsive photos via pinch-to-zoom. Note the icon in the middle.

But I’m not sure users use these widgets.

Why? Users don’t even read on the web. And given that visitors are barely paying attention to your text, why would they take the time to notice, learn, and then use a widget on your site? Especially since images are often meant to enhance the text, rather than carry its main message.

Compounding the problem is that there’s no standard widget for image pan and zoom (unlike say a dropdown or radio button). So a user will be learning from scratch on every single site they visit.


Different images for different screen sizes

This solution is the most respectful of varying screen sizes. For all the diagrams on your site, you provide a much simpler version that hits the important parts, and for all photos, you do some intelligent cropping so that the most relevant parts of the image (for what you’re illustrating) are always visible.

Cropping for a smaller screen size. What you crop out really depends on context.

In some cases, this approach will dramatically increase the amount of time it takes to produce image assets. For photos, this might not be such a big deal.

For diagrams, though, you essentially have to produce two assets (the simple and the complex), whereas before you would only have had to produce one. This puts an additional burden on your graphic design team, as well as on all the people requesting the images. “What are the most salient points of this diagram, so we can convert those into a simplified version of what we already have?” Not to mention that in some cases this might not be possible.
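
For photos, at least, some of that production work can be scripted. Here’s a minimal sketch with ImageMagick (the file names are made up, and the crop geometry is purely illustrative; what to crop really does depend on the image):

#generate a medium variant by simple downscaling
convert lautrec.jpg -resize 800x lautrec_medium.jpg
#for the smallest screens, crop toward the center first, then shrink
convert lautrec.jpg -gravity center -crop 60%x80% +repage -resize 400x lautrec_small.jpg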

Technical implementation is also not straightforward. David Walsh does a fantastic job of going through all the options here. There are many to choose from. I’m not covering responsive data tables in this article, but some of the approaches there are also helpful; CSS-tricks has a great roundup of those.

Changing your approach to images

One question to consider is whether the image actually adds anything. If it doesn’t, consider hiding it for small screen sizes (though ideally you should probably omit images altogether if they don’t add anything). As Jakob Nielsen points out,

Users pay close attention to photos and other images that contain relevant information but ignore fluffy pictures used to “jazz up” Web pages.

So, simply reducing the amount of imagery on your site might be one approach, which makes producing targeted assets more practical.

Another idea might be to make your diagrams more vertically-oriented for all cases. This isn’t as nice for laptop and desktop displays, but looks much better on mobile.

If the images are truly valuable and also truly massive, you could also implement one of the simple approaches above, but offer to email the asset to the visitor or get it to them on some other channel that’s convenient for full-screen viewing. This could be a helpful way of collecting email addresses, too.


Why does enterprise software look so bad?

April 2, 2015

I’ve become a big user of enterprise software in the past couple years. The design of most enterprise software ranges from unattractive to hideous. Color schemes don’t make a lot of sense. There’s no whitespace. Interfaces are busy and unintuitive. There isn’t any sense of fun, either.

I’ve seen a few explanations for this. All of them point to conditions that seem likely to go away pretty soon.

“Enterprise vendors have no taste”

A recent post on Hacker News speculated that enterprise software looks bad because enterprise software vendors have no taste. That’s probably true. Certainly it’s easy to point to lots of examples of bad-looking and difficult-to-use enterprise software.

But if the vendors have no taste, or at least don’t have the resources to be tasteful, why is that? There are plenty of great designers and UX people out there; why don’t enterprise software vendors hire and listen to them?

Vendors are motivated to create software that sells. So why doesn’t design sell?

Enterprise software has to do a lot

This argument was also advanced in a Hacker News thread. The idea is that because enterprise software is essentially a construction kit, and because it has to work across so many types of businesses and use cases, it’s harder to make it look nice.

This would make sense as an argument if it were purely about usability. Building software that exposes many powerful features in an easily comprehensible way is indeed difficult. And enterprise software tends to have more features than other types of software.

However, it doesn’t really explain the more design-oriented aspects of this problem, especially when non-administrative users are dealing with the software. It also doesn’t explain some of the truly egregious design decisions that are made. Check out the front page of Marketo’s software, for example. It presents almost no functionality, is badly designed, and isn’t customizable.

Marketo front page

Also, there are lots of enterprise-y software packages that are intended for less sophisticated users, that are still designed reasonably well. Compare Microsoft Office, for example, to SuccessFactors.

The users aren’t the buyers

Jason Fried’s explanation seems like a plausible one.

The people who buy enterprise software aren’t the people who use enterprise software. That’s where the disconnect begins. And it pulls and pulls and pulls until the user experience is split from the buying experience so severely that the software vendors are building for the buyers, not the users. The experience takes a back seat to the feature list, future promises, and buzz words.

In enterprise software, there’s a disconnect between people who will use your software, and the people who buy it. That there are some great-looking enterprise software products, like Slack, actually reinforces this idea: those products are almost invariably purchased by the end-user, or are at a relatively low price point. Having an end-user that is also the buyer doesn’t guarantee good design, but it allows for it.

This doesn’t make sense over the long-term

This doesn’t seem like a stable equilibrium, though. There are a bunch of hidden, but very quantifiable, benefits to good design (and good usability):

  • Reduced reliance on help desks. Well-designed software makes it easy to remember how to do things, and makes errors more difficult to make and easier to recover from.
  • Increased productivity. Software that “gets out of the way” and doesn’t present the user with obviously bad design allows tasks to be completed faster, and is less cognitively taxing for users.
  • Increased feature finding. If features are laid out in a sensible way, and thought is given to presenting options and related tasks carefully within each view, more of the software can actually be used. According to this infographic, about a quarter of SaaS churn is caused by poor onboarding.
  • Reduced training costs. More intuitive software is easier to learn.
  • Decreased abandonment rate. Avoiding user frustration means that there are more advocates within the organization for your software.

It seems as if these factors, and others, would eventually influence economic buyers. Users who are getting more out of software, and are more productive with it, mean that the software is more valuable. My guess is that over the long term, enterprise software won’t have the luxury of looking bad anymore.


Accepting error to make less error

April 1, 2015

If you accept that you will make some errors, you’ll probably make fewer errors overall.

In this post, I wrote down some ideas about why people don’t trust algorithms (by which I mean sets of decision-making rules). I speculated that people don’t trust algorithms in part because of a desire to maintain control over their lives; we want our decision-making to matter.

But the research pointed to the idea that people don’t trust algorithms because they hope for perfection in their decision-making. If you accept a set of rules, it’s likely that they’ll be wrong at least in some cases, and the whole point of accepting rules is that you don’t change them in order to compensate for their defects. So almost any algorithm will inevitably be wrong, at least sometimes.

Here’s a paper by Hillel Einhorn, “Accepting Error to Make Less Error”, that talks more about this. Einhorn breaks decision-making into two approaches, the clinical, and the statistical.

  • The clinical aims for perfect prediction, and seeks to develop a causal model of what is going on in order to predict perfectly. Imagine using data about a car to tune its engine, based on a detailed understanding of exactly how an engine works.
  • The statistical model doesn’t aim for perfect prediction, and doesn’t try to develop a model of why things happen the way they do. But in many cases it will predict better, because a reasonable causal model may not exist. Imagine trading stocks. It’s impossible to explain (or to predict) many moves in the market. But a simple algorithm, such as investing in an index fund, will work well over the long term.

Einhorn says that both approaches have their merits, and that the right choice depends on your model of reality.

  • If nature is a system, and we can know that system, it’s better to make predictions, based on developing and refining a systematic understanding (clinical).
  • If nature is random, or unknowable, it’s better to pick an algorithm in advance (statistical).

Gerd Gigerenzer puts this a different way.

If risks are known, good decisions require logic and statistical thinking. If some risks are unknown, good decisions also require intuition and smart rules of thumb.


Why don't people trust algorithms?

March 28, 2015

Here’s an interesting article: “Why People Don’t Trust Algorithms To Be Right”. (The actual title is “Why People Don’t Trust Machines To Be Right”, but algorithms don’t always run on machines, and the article also conflates algorithms with data).

Anyway, it’s an interesting problem. A good algorithm can be an extremely efficient way of making decisions.

Gerd Gigerenzer, a German psychologist, talks more about this. For example, in his book Risk Savvy, he spends several pages talking about the 1/n stock market portfolio, which basically just means that you allocate your money equally to each of n places.

This performs really well as an investment strategy. It beats a bunch of alternative strategies in most measures of investment performance in this paper by DeMiguel, Garlappi, and Uppal. The “buy an index fund and hold it” investment strategy boils down to this strategy.
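
Concretely, here’s a toy sketch of a 1/n allocation (the holding names are placeholders):

#1/n: split a budget equally across n holdings
budget=10000
holdings=(fund_a fund_b fund_c fund_d)
n=${#holdings[@]}
for h in "${holdings[@]}"; do
    echo "$h gets \$$((budget / n))"
done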

But there are whole industries based on making continuous, non-algorithmic decisions about how to invest. Decisions based on consistently re-evaluated human judgment. This doesn’t really appear to work, but people do it anyway.

Why?

No algorithm’s perfect [and] that little error seems to be a real problem for an algorithm to overcome…

Once people have seen an algorithm err they assume that it’s going to keep making mistakes in the future, which is probably true to some degree.

The bad assumption is that the human won’t keep making errors and the human could even improve, which probably in a lot of contexts isn’t true. The human will keep making worse mistakes than the algorithm ever would.

When an algorithm has made an error, you know that it has made an error. There’s no illusion of perfection. So you know you can expect it to continue making errors, even small ones, whereas with more deliberate decision-making, you have the hope of reducing the number of errors.

In the stock market, if your index fund strategy has a bad year, you’re tempted to sell out of it altogether, under the illusion that you can stop that from happening next time.

The somewhat strange solution that the article suggests to this problem is to let people meddle with the algorithm anyway, but in ways that don’t affect the outcome dramatically.

So, for example, the algorithm puts out a number and you can adjust it up or down by five. And we found that people like that much, much more than not being able to give any of their input.

And actually, when that method errs and people learn about it, they don’t necessarily lose confidence in it. So, as long as they had some part of the decision and they got to use their judgment, that might actually help them use the algorithm.

The interview doesn’t say what the results of these changes are. Presumably, the human second-guessing of the algorithm doesn’t actually improve it. But if it makes people more accepting of the algorithm, that should still improve overall decision-making.

To me, this solution points to another reason why people are biased against algorithms: fear. If a set of rules can replace your human judgment, then that decision isn’t yours anymore. People like having control over their environment and over their lives; an algorithm replaces that. And the success of an algorithm also means that our judgment doesn’t matter as much as we’d like.


Styles of translating Ancient Greek

March 23, 2015

One thing that I reliably get huffy about is modern translations of Greek and Latin classics. (Yes, really.) I can’t remember exactly why, but I was recently reminded of the Fagles translation of the Odyssey, which I’ve never really liked, and I wanted to compare it to my favorite, Richmond Lattimore’s.

Reading Lattimore is about as close as you can get to reading Ancient Greek, without actually reading Ancient Greek. It helps that each of Lattimore’s lines is exactly 14 syllables, approximating the meter that the Greek poem is written in. But mostly it’s his extremely faithful, and yet still poetic, word choices that make reading his translation so close to the experience of reading the original text.

Some examples from the first five lines of the Odyssey:

Line 1

Greek:           ἄνδρα μοι ἔννεπε, μοῦσα, πολύτροπον, ὃς  μάλα πολλὰ
                 andra moi ennepe, mousa, polutropon, hos mala polla

Lattimore:       Tell me muse, of the man of many ways, who was driven
Fagles:          Sing to me of the man, Muse, the man of twists and turns

“andra moi ennepe” is, quite literally, “tell me, Muse, of the man”, which is what Lattimore renders.

Compare Fagles’ “sing to me of the man”, which isn’t really true to the Greek since “sing” isn’t actually present there.

“Polutropon” is a tough word to translate. Literally “much-turned” or “much-turning” (poly-trope-ic), it could mean “versatile” or “wandering” or perhaps “tricky”, or lots of other things. Lattimore picks “the man of many ways”, and Fagles picks “the man of twists and turns”, both of which seem OK.

Line 2

Greek:           πλάγχθη,  ἐπεὶ Τροίης ἱερὸν  πτολίεθρον  ἔπερσεν:
                 plangthe, epei Troies hieron ptoliethron epersen:

Lattimore:       far journeys, after he had sacked Troy's sacred citadel.
Fagles:          driven time and again off course, once he had plundered / the hallowed heights of Troy.

“Ptoliethron” is a poetic version of the word “polis”, which just means “city”, and “hieron” means “holy” or “sacred” (like hieroglyphics, which are sacred writing). So really what you would want here is “sacred city”.

Lattimore gives us “Troy’s sacred citadel”. I’m not sure how much closer you could get to the Greek, since “Troy’s sacred city” doesn’t make complete sense in English.

Fagles gives us “hallowed heights”, which sounds nice but gives you a sense of how much license he is taking with the language.

Line 3

Greek:           πολλῶν δ᾽ ἀνθρώπων  ἴδεν ἄστεα καὶ νόον ἔγνω,
                 pollon d' anthropon iden astea kai noon egno,

Lattimore:       Many were they whose cities he saw, whose minds he learned of,
Fagles:          Many cities of men he saw and learned their minds,

This line is another great contrast between Lattimore’s and Fagles’ styles.

Fagles’ rendering is technically incorrect, since πολλῶν, “many”, modifies “men” and not “cities”. So it has to be “he saw the cities of many men”, not “many cities of men”.

It’s interesting that both Lattimore and Fagles render νόον “noon”, which is “mind” in the singular, as “minds”. (“noon” can mean lots of other things as well.)

Line 4

Greek:           πολλὰ δ᾽ ὅ γ᾽ ἐν πόντῳ πάθεν  ἄλγεα ὃν  κατὰ θυμόν,  
                 polla d'ho g' en ponto pathen algea hon kata thumon

Lattimore:       many the pains he suffered in his spirit on the wide sea,
Fagles:          many pains he suffered, heartsick on the open sea,

Again, while “heartsick” has an emotional charge to it, κατὰ θυμόν definitely means “in his heart”. Lattimore has to add an extra “wide” here, I assume for metrical purposes, but neither “wide” nor “open” is actually in the Greek.

Line 5

Greek:           ἀρνύμενος ἥν  τε  ψυχὴν   καὶ νόστον  ἑταίρων.
                 arnumenos hen te  psuchen kai noston hetairon.

Lattimore:       struggling for his own life and the homecoming of his companions.
Fagles:          fighting to save his life and bring his comrades home.

Lattimore renders νόστον, “noston”, as “homecoming”. This is the standard meaning of the word as it shows up in lots of Greek literature and related scholarship; Odysseus is fighting for the homecoming of his friends.

Here again, Fagles gives us the much less literal “to bring his comrades home”.
