I can’t remember when I first noticed TL;DR appearing on web pages. On the face of it, a TL;DR is a kind of “executive summary”, intended to save the valuable time of the reader.
I wonder whether the rise of the TL;DR is a property of the medium - it’s harder to read extended passages on a screen than on paper - or of the reader. Demands on our attention are so intense and frequent, especially while online, that spending longer than a few seconds reading something feels like a major commitment.
I reckon there’s a kind of inverse laziness effect: a TL;DR can be a get-out-of-jail card for an author. Start with a TL;DR, then write/rant for as long as you like. It allows the online equivalent of Pascal’s apology:
I would have written a shorter letter, but I did not have the time.
Yahoo’s recent acquisition of Summly demonstrates how much interest there is in the idea that summarisation can be automated. I’m not entirely happy with the concept, no matter how good the technology might get. Gary Marcus, in a recent article in The New Yorker, asks a question about self-driving cars:
Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.
An extreme example, perhaps - but don’t dismiss it with “it’ll never happen to me”. I’d argue that for any person it’s easy to come up with a situation where they simultaneously feel “yeah, I can imagine that happening, that’s not too far-fetched”; can see that a computer might have to take a decision for them; and can imagine easily disagreeing with the default setting. Should the car swerve for a rabbit that hops out into the road? Probably not. A mother pushing a pram? I might be getting a bit dystopian here, but it’s not inconceivable that wealthy people could purchase higher levels of “impact defence insurance” and be given priority in the split-second decision the computer takes.
Others (Clive Thompson, Wired) have pointed out, more eloquently than I can, that the more we rely on “machines” to take decisions for us, the fewer opportunities we have to be reflective. I’m not arguing that I would make a better decision; I’m simply pointing out that so many decisions are already being taken for us that we don’t even realise it. Deciding which we agree with and which we don’t is a subjective matter, but one that’s painfully relevant as we start to cede more control. We live in a filter bubble, and even if Summly doesn’t come close to the current state of the art in AI, who knows what decisions it might make in summarising news for me?
Some see an existential risk from artificial intelligence (Prof. Huw Price, CAM magazine issue 68, p22: “Man vs Robot”) - or at least a threat sufficiently important that it’s worth establishing a Centre for the Study of Existential Risk here in Cambridge. I’m following with interest.