To put the pan in the demic

A 15th-century manuscript of Ammianus Marcellinus’ Histories (Vatican, BAV, Arch.Cap.S.Pietro.E.27)

I’ve noticed that the words ‘pandemic’ and ‘epidemic’ have recently been used interchangeably in the media and in everyday speech. Any extensive outbreak of a contagious disease is a pandemic or an epidemic, depending on which prefix you prefer or comes to mind first. Pandemic sounds a bit more catastrophic, while epidemic has a scientific, even clinical, ring to it.

To do justice to the roots of the two words, it should be said that in an epidemic, the disease is in the process of spreading among the people, from the Greek epi- (upon, among) and demos (people). A pandemic is when the disease has affected and infected everyone (from pan– meaning all). The myth of herd immunity can only work in a pandemic, when everyone’s infected, and then fingers crossed.

So a pandemic is not only far more catastrophic than an epidemic, but it’s also a point of no return – since everyone’s got the disease, there’s not much one can do in terms of prevention.

In the 4th century AD, a different classification was in use. The Roman historian Ammianus Marcellinus describes three types of outbreak, which he calls endemic, epidemic and loemodes, or pestilential. The pandemic is absent from his triad, because Ammianus focused on the severity of a disease, not its contagiousness (what we now call its R0, or basic reproduction number).

Now the first kind of plague is called endemic, and causes those who live in places that are too dry to be cut off by frequent fevers. The second is epidemic, which breaks out at certain seasons of the year, dimming the sight of the eyes and causing a dangerous flow of moisture. The third is loemodes, which is also periodic, but deadly from its winged speed. (History, I, 489 (Rolfe edition))

By the 14th century, all plagues were referred to as pestilences. The Black Death was a pestilentia, as were countless other outbreaks of smallpox, malaria and Yersinia pestis-related plagues.

The only prefix which doesn’t come to mind yet is hypo- as in hypodemic, which describes a disease with a reproductive rate of less than 1. The day will come when Covid-19 goes into hypo-drive.
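The intuition behind a reproduction number below 1 can be sketched in a few lines of Python. This is a toy geometric model, not a real epidemiological simulation: assume each infected person infects R others on average, so each generation of cases is R times the previous one, and when R drops below 1 the outbreak dwindles away on its own.

```python
def generations(initial_cases: float, r: float, n: int) -> list[float]:
    """Expected new cases per generation under a fixed reproduction number r.

    Toy model: generation k has roughly initial_cases * r**k new cases.
    """
    cases = [float(initial_cases)]
    for _ in range(n):
        cases.append(cases[-1] * r)
    return cases

# With R = 0.8 ("hypo-drive"), 1000 initial cases fade out;
# with R = 1.5, the same seed grows into an epidemic.
declining = generations(1000, 0.8, 10)
growing = generations(1000, 1.5, 10)
print(round(declining[-1]))  # roughly 107 cases in the tenth generation
print(round(growing[-1]))    # roughly 57665 cases in the tenth generation
```

The names and numbers here are illustrative only; the single point the sketch makes is that the boundary between growth and decline sits exactly at R = 1.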

A different kind of tourism

It is often said that the origins of modern tourism go back to the medieval European pilgrimage. Many people travelled in the Middle Ages, but out of all the eligible categories, the pilgrim is often singled out as the ancestor of the tourist and holidaymaker. Not the warrior, not the merchant, not the itinerant student, nor the preaching friar, though they often travelled long distances and for long periods of time. The merchants and friars, not pilgrims, ended up in Asia. Marco Polo was no pilgrim, and neither was William of Rubruck, the Franciscan friar who travelled to Karakorum to see the Great Khan. Perhaps no other Europeans toured the known world more than these two men, and yet they, and other members of their group, are never counted among the early tourists.

Travelling was not among the essential features of a merchant or friar, even though many did travel. And travelling is not everything to a tourist, even though tourism as a concept comes from the activity of touring or travelling.

Just like a merchant, a soldier or a friar on a diplomatic mission like William, the medieval pilgrim travelled to get somewhere. The point of a pilgrimage was and still is devotional: the desire to reach a sacred place, to accomplish a vow, to do penance, or to immerse oneself in the reality of a transcendental past.

The first and last motivations still characterise many tourist projects today. When upper-class northern European men and women embarked on what was to become the Grand Tour of the 17th and 18th centuries, they also wanted to partake of the sacrality of Florence and Rome, even if for them the illustrious ancient past was enough grounds for revering those places with a devotion similar to that of a medieval pilgrim on her way to Rome or Jerusalem.

Maps, sights to see, places to stay, things to know were common features of pilgrim guides and early-modern grand-tour travel books, just as they are permanent fixtures of our Michelin and Lonely Planet travel guides. The fridge magnet, tacky as it may be, is a distant relative of the pilgrim badge, perhaps as tacky then, which pilgrims brought back from the places they visited.

The medieval pilgrim in you or me may not be too hard to find next time we go to an exotic location and marvel before a ruin, a statue, a building or even a landscape. Unlike the medieval pilgrim, however, we wouldn’t think it’s OK to die on the way and complete the journey in the afterlife. After all, we need that selfie, don’t we?

Dea Febris

Every age and every culture has its own way of responding to viral diseases, plagues and epidemics. While we have medicine, technology and social services, the ancient Romans had Dea Febris, the goddess of fever, the protector against malaria and other infectious diseases, though by no means the only one. According to one theory, Febris evolved from the god Februus, after whom the month of February is named. If feverish conditions must happen, then February is the right time. The irony of this wild speculation won’t be lost on anyone.

If Febris was the tutelary deity against the destructive effects of fever, Februus was the god of purification, possibly connected with the salutary (this time) effect of fever, since fever led to sweating, which, then as now, is considered the body’s way of ejecting the disease and purifying itself. If this is correct, then Februus and Febris are an ideal pair, the reflection of an insightful theory of pyrexia, the human body’s ability to increase its temperature – both as a symptom of an infection and as a protection against it.

So I’m submitting Dea Febris to anyone who needs a patron saint for these feverish times, anyone who is not happy with the historical performance of Saints Adrian of Nicomedia (patron saint of plagues), Bernardino of Siena (same), Edmund of East Anglia (of pandemics), the Fourteen Holy Helpers (same), Hugh of Cluny or even latter-day Saint Corona.

Pseudonyms and fake news

Despite speculation and hard-headed scrutiny, nobody knows who the person behind the hyper-famous name ‘Elena Ferrante’ is. We might never find out. The French writer Romain Gary received the Goncourt Prize twice, once for a book written under his own name, the other for a book written under the pseudonym ‘Emile Ajar’. Nobody suspected anything and perhaps he would have taken the secret to his grave, if he hadn’t revealed it himself just before committing suicide.

We don’t seem to mind pseudonyms, even though the word actually means ‘fake name’ in Greek. Names may be fake, but stories have to be true. No tolerance for ‘pseudangelia’, the Greek word for fake news. The Ancient Greeks didn’t know fake news in the sense we do. I coined pseudangelia for the occasion. It’s all fake.

There’s a great deal of ‘fakery’, or pseudepigrapha, in European literary history. These are writings claiming to be something other than they are. Authors purported to be someone else; stories alleged to have been written in a different century, a different context, a different place. For nearly every writer from classical antiquity there is a host of pseudo-writers adopting false names or taken to be the very writers they claimed to be. The 6th-century Christian philosopher known today as Pseudo-Dionysius the Areopagite was long held to be the same Dionysius mentioned in the Acts of the Apostles in the 1st century AD, who may have never written anything. Some medieval writers of erotic poetry attempted to pass as the Roman poet Ovid. There was always someone usurping identity and more than one person ready to support the identity theft. The positive reception of pseudonymity is always more visible than the forging agent. In other words, those who fake it get less publicity than those who welcome the fake.

From our age of fake news, what history will remember won’t be so much those who faked it (besides, do we know now who they are?), but those who fell for them, those who clicked and passed on, who spread the pseudangelia like wonky half-angels.

The interesting thing about medieval pseudonyms is that the true authors are only rarely identified. Most pseudonyms stay pseudonymous. Even when historians and philologists call the pseudonymous author’s bluff, the real person behind the fake name remains anonymous, that is, nameless. There is a huge body of authors’ names from the medieval period, but an even longer list of nameless names: Pseudo-X, Pseudo-Y, faute de mieux. After all, what are we to call those who wrote books under a pseudonym? And what are we to do when we have a Pseudo-Horace writing in the 9th century and another one trying to pass as Horace in the 12th century?


The art of writing letters

Amélie Nothomb once said that while novels and poems are texts where others are allowed to enter or not, letters and epistles cannot exist without the other person. You may write a letter to an imaginary friend, to a deceased addressee, even to a fictional figure – a letter is always a social gesture, involving two or more people.

As a fundamentally social literary genre, epistles have always been regulated by strict rules, the dos and don’ts of letter-writing style.

Until recently, at least. As the epistolary genre has ceased to play a role on the literary scene, an important social artefact has been left behind. Letters are now permanent fixtures of museums, libraries and works of historical biography.

Who will ever seek to turn their Outlook archive into an edited collection of emails for posterity to see and admire? Who will ever allow themselves to be seen and described through and by the electronic missives they’ve sent to family and acquaintances, friends and foes of a lifetime?

Cicero wrote letters that changed the course of European history. St Paul sent letters that built up an entire faith. Abelard wrote letters that altered our perception of an age. We write emails, encrypted and ephemeral, that no one will ever read again, even if they survive in perpetual storage.

In his Heroides, Ovid had Penelope address a grief-stricken letter to Ulysses, her husband. Petrarch directed his letter to Homer across 2000 years of history and cyberspace. Antiquity had its strict rules about how a letter should be written. The medieval West had stricter rules still. Writing letters was an ars, the art of an artisan, a skill acquired, a practice applied. Learning to read and write, pupils used to learn how to write letters as well. We no longer abide by letter-writing rules, and that learning process has been taken from us.

It may be fair to say that emails are not letters after all. The metaphor of their name, the electronic mail, has eaten its own tale and dissolved into bits of written orality, if you’ll pardon the oxymoron. Emails and letters are both communication tools, but that’s where the similarity ends.

Of individuals and crowds

News travelled very slowly in the past. Just as it took months for individuals to reach distant lands, so did news reports reach different individuals and communities with significant delay.

In premodern Europe, oral information circulated more speedily than written reports. In the medieval period, necrologues travelled quickly from one monastic community to another. Before monks found out what was happening in other regions, they learned about who died and how they died.

Necrologues were important sources for monastic chronicles everywhere in the West. Historians often didn’t have anything to report for certain years except that an abbot or a prominent brother or sister died. This was newsworthy information and was often incorporated into annals and chronicles. The business of the dead was as important as the affairs of the living, especially in a culture where death was a stop on the way to eternal life.

That is why even during periods of political upheaval and social disruptions, many chronicles don’t have much to add other than some people died. And, at least before the 14th century, the record of their death did not push historians to find out more about the departed. The general impression one gets from perusing many medieval chronicles is one of strict objectivity and cold indifference.

As biography rises in importance during the last centuries of the medieval period, chroniclers begin to record more information about people who had previously received only a passing remark. That is because these writers start asking questions. That is because the person rises above the community to become an object of discussion. In the visual arts, the individual leaves the amorphous crowd to become a full-featured object of representation. And then we enter the Renaissance.

Omo sanza lettere

A leaf from a manuscript containing works by Manuel Chrysoloras, who contributed to the rebirth of Greek letters in the West, Vatican, BAV,

At the turn of the 16th century, Leonardo da Vinci described himself as an ‘omo sanza lettere’, a man without letters. He most likely meant that he had, to paraphrase Ben Jonson, little Latin and even less Greek.

Leonardo was one of the last men of the Middle Ages and one of the first of the Renaissance. During the medieval period, the inability to read and write Latin was a barrier to career progression, but not knowing Greek was absolutely fine. Most rulers couldn’t read Latin anyway. Few aristocrats invested time and money in the kind of education that included the learning of Latin. To be ‘without letters’ during most of the medieval period meant to lack access to Latin. Then something changed in the 14th and 15th centuries. Greek, like Ulysses, came back home after a long exile abroad.

The memory of ancient Greek never left the medieval West. Men and women of letters were well aware that Greek was as authoritative a language as Latin, that the New Testament had been written in Greek, that many Church Fathers wrote in Greek, that Cicero, the great master of rhetoric, therefore of language, had mastered Greek. But with no one to teach Greek or to give Western culture intellectual challenges requiring knowledge of Greek, the West happily did without.

Things began to change in the late 14th century when a first wave of Byzantine scholars arrived in the West as a result of diplomacy and political crises. Manuel Chrysoloras was one of the first who brought the West out of Hellenic amnesia. Many followed in his footsteps, to the point that by the 16th century, Western humanists were trained in both Latin and Greek. The return of the Greek language to Western shores meant also the return of Greek literature to Western readers. One after the other, the works of Greek antiquity recovered their old markets and conquered new ones.



Language acquisition

It has often been said that childhood is a creation of the modern period. Indeed, ancient and medieval sources largely pass over the first age of the human individual, the baby and toddler years. Premodern representations of children emphasise size, rather than any other features, to designate youngsters. After the baby in the cradle or in the mother’s arms, the child appears standing on her own two feet like a miniature adult. Artistic sensitivity to early youth is very low in this period. That’s because the child, the argument goes, doesn’t really exist.

And yet, there are occasional medieval reflections on the childhood years of human existence. And they are instructive. Instead of launching into a survey of literary sources for ancient and medieval childhood (there are numerous books on the topic, anyway), I’ll focus on language acquisition.

Plato was one of the earliest thinkers ever to discuss humans’ innate ability to acquire and use language. While his ideas were extremely influential through the ages, his observations were purely theoretical. He didn’t invite Socrates or any of his other thoughtful dialogists to talk about their or any other people’s children’s ability to acquire speech. We’d have to wait for St Augustine to give us an early glimpse of what happens when children are in the presence of speech and how they might acquire it. And not just any children, but himself as a child. Introspective autobiography meets empirical observation.

In the first book of his Confessions, Augustine describes his infancy and the moment he started to acquire human speech. The account is remarkable:

I remember this, and I afterwards observed how I first learned to speak, for my elders did not teach me words in any set method, as they did letters afterwards; but myself, when I was unable to say all I wished and to whomsoever I desired, by means of the whimperings and broken utterances and various motions of my limbs, which I used to enforce my wishes, repeated the sounds in my memory by the mind, O my God, which Thou gavest me. When they called anything by name, and moved the body towards it while they spoke, I saw and gathered that the thing they wished to point out was called by the name they then uttered; and that they did mean this was made plain by the motion of the body, even by the natural language of all nations expressed by the countenance, glance of the eye, movement of other members, and by the sound of the voice indicating the affections of the mind, as it seeks, possesses, rejects, or avoids. So it was that by frequently hearing words, in duly placed sentences, I gradually gathered what things they were the signs of; and having formed my mouth to the utterance of these signs, I thereby expressed my will. Thus I exchanged with those about me the signs by which we express our wishes, and advanced deeper into the stormy fellowship of human life, depending the while on the authority of parents, and the beck of elders. (Confessions, Book I, chapter 8)

Augustine’s realism is unsurpassed. He understood that non-verbal behaviour and imitation are at the heart of the child’s ability to acquire speech and understanding. Nor was he ignorant of the fact that expressing one’s will is ultimately a speech act, something we might call, following John Langshaw Austin, an illocutionary act. First-language acquisition is a matter of identifying signs and making them your own.

Almost a millennium later, Dante picked up Augustine’s ideas and incorporated them into his theory of natural language, which he laid out in his essay De Vulgari Eloquentia (On the eloquence of the vernacular language), the world’s first treatise on historical linguistics. Dante was the first to point out that the mother tongue has qualities that an artificial language such as Latin (check my previous posts if you’re offended or confused by my use of the word artificial), or other acquired languages such as French or Provençal, do not. One’s mother tongue feels closer to home, is a language of the heart, acquired from close interaction with one’s mamma and babbo, the homely Italian words Dante uses for mother and father. While it may lack the precision and efficiency of a conventional language such as Latin (at least in Dante’s time, when the vernaculars had not achieved literary status), the mother tongue is part of one’s identity and constitutes one’s cultural legacy to future generations.

Medieval multilingualism

I’ve never been fond of big narratives, linear descriptions, watertight explanations. For every book claiming that history moves from X to Y according to principle Z, there are a dozen others which show that reductionism is not the way to do history. From Hegel to latter-day brief histories of humankind, the temptation to explain big cultural change through simple principles has been irresistible. And yet, as any good course in historiography and historical criticism will show, the big is almost never reducible to the simple. And the simple is hard to see, anyway. Unless it’s in our head, in which case it seems clear and compelling.

One way historians methodologically challenge the reductionist urge of many authors and cultural pundits is by throwing exceptions, so to speak, into their wheels. Nothing is more irritating to one’s explanation than to realise that it doesn’t really apply to what it was meant to apply to or that there are enough exceptions for it to lose its explanatory edge.

An example is multilingualism. While many still believe that the West moved from the ignorance and darkness of the ancient and medieval periods to the knowledge and enlightenment of the modern age, it can be shown that in the area of multilingualism, the upward trend actually runs downwards. Few would argue that multilingualism is a culturally insignificant phenomenon. Access to other languages enlarges one’s mental horizon, reduces prejudice by exposing one to the linguistically different, and contributes to the development of novel ideas.

In the 21st century, there are more people capable of reading, if not speaking, more than one language in Europe than there were in the 1st century BC or in the 12th century AD. No one’s going to argue with that. The historical progress in literacy and education was such that more of us today can understand people outside our closest circles than our distant ancestors did. Yet, when we shift the focus from the whole population to that of intellectuals, scientists, and other cultural movers and shakers, things begin to look different.

Take intellectuals, the thinkers and professionals of the written word.

The medieval West enjoyed a very peculiar linguistic arrangement, which I’ve had occasion to rant about in previous posts. While most people spoke their mother tongue then as they do now, Latin was the language of the elites, the language of culture, government, administration, both secular and ecclesiastical. Latin wasn’t the luxury of academic oddballs, but the key equipment of every intellectual. Despite the rise of vernacular languages as idioms of culture in the 12th century, Latin reigned supreme for the whole of the medieval period. To be an intellectual, a man (or sometimes a woman) of ideas and letters, one had to speak Latin, that is, speak a second language, since Latin wasn’t anyone’s mother tongue. This was true to the extent that Latin illiteracy was an instant disqualifying criterion for anyone aspiring to be taken seriously in the world of thought.

Now, fast forward to the 21st century. How many academics and intellectuals in highly developed countries can only speak one language? I haven’t seen any stats on this, but just based on my personal experience of countries such as the UK, US, France and Italy, I can say there are quite a lot, and what urges me on in this thought is that academic activity and intellectual distinction aren’t hampered by monolingualism. After all, there are enough tools out there for monolingual intellectuals to overcome their weakness.

This state of affairs is not surprising. Nor is the fact that monolingual intellectuals may more likely exist in countries where monolingualism is generally the norm, like the UK, the United States, France, countries sitting on the linguistic laurels of the past and present.

So let’s go back to the 12th century and to those intellectuals who could converse with Cicero as well as with the local market vendor in different languages. Those individuals were products of their own culture, one that had inherited the burden of Rome, to which it superimposed its various ethnic layers. What should we say of medieval England with its documents issued in English, French and Latin by the same scribes, or Emperor Frederick II’s Sicilian chancery switching between Latin, Greek and Arabic? And what shall we say of modern-day US and UK with their overwhelmingly monolingual populations, proud that all you need is English nowadays in order to converse with Cicero, haggle with the market vendor or write silly blogs?


Guilty pleasures

The Ashburnham Pentateuch, stolen by Guglielmo Libri from Tours in 1842 and sold to the Earl of Ashburnham in England, was returned to France in 1888. 
Since the 14th century, many book lovers have been born in Florence or its vicinity. The Renaissance was, since its early days, a book rush, especially one for rare, lost, unread, unknown, neglected volumes. The humanists of the Renaissance were avid book finders and collectors.

The Florentine scholar Poggio Bracciolini (1380-1459) made some incredible finds during the first decades of the 15th century. His love for old texts in old manuscripts was undeniable. His pleasure in collecting volumes forgotten on centuries-old shelves and in boxes met with everyone’s approval. It was not a guilty pleasure. Poggio loved putting his arms (his name Bracciolini ultimately comes from the Italian word braccio for arm) around the books he loved. There was nothing wrong with that.

Some 420 years later, Florence popped out another book lover. This time it was someone whose affection for books preceded his birth. His name was Guglielmo Libri, his last name Libri meaning ‘books’ in Italian. You can’t get more predestined than that. But sometimes life takes its cues from novelists, and Libri’s life was going to be an essay in cosmic irony. There is no question that when he was appointed Chief Inspector of French Libraries in 1841, he considered himself the luckiest bibliophile on earth. Unlike Poggio the Book-Hugger before him, Libri used his position and a general state of national gullibility to remove rare manuscripts from French libraries. Unlike most heist fictions, his didn’t go wrong and he was never caught. Although condemned in France, he lived as a free man in England and Italy, where he died in 1869.

Libri’s guilty pleasure left traces all around Europe. Many of the stolen manuscripts were sold in London, and from there they were resold elsewhere. Volumes were still being returned to their rightful repositories as late as 2010.

To simply dismiss Libri as a book thief is to miss an important slice of the picture. Libri was in love, and like every lover, he sought possession of, and intimacy with, the object of his love, the manuscripts of his affection. And as any lover would do, he took them home.