The Problem with Memetic Literacy

Immediately after they wake up, a large percentage of people check their phones to see the latest notifications from their social media or to respond to the influx of emails they have received. Similarly, a large percentage of people stay up late doing the same thing, checking their feeds, scrolling, nothing in particular on their minds, nothing to look out for—just scrolling, as if something will magically appear. Every day, millions of pairs of eyes flicker over bright screens on Instagram, Snapchat, or iFunny, looking at hundreds of memes: short, humorous images or clips shared from person to person. A meme starts with a single viewer, then spreads exponentially until, like the game of telephone, it evolves with every share, becoming something new and different yet derivative, building off the original with a fresh touch of interpretation from whoever appropriates it. It can be said that memes are one of the greatest products of 21st-century technology, since they can be universally understood, shared, and laughed at. Language barriers are no more: someone in the U.S. can share a meme with someone in China, and both will get it. How cool is that, to be able to communicate cross-culturally and get a laugh out of it? Memes allow for shared knowledge and entertainment among people of all ages and backgrounds, connecting them through a single medium. While I like a good meme as much as anyone, and while memes can be hilarious, I think their popularity today, despite its benefits, also brings deficits, problems that need to, and should, be addressed. The spread of a “memetic literacy,” as I like to call it, has supplanted a much more fundamental, more necessary cultural literacy, and so will, I believe, impoverish both today’s and tomorrow’s youths.

When we think of literacy, we think of reading and writing. To be literate is to be able to read and write; to be illiterate, to be able to do neither. Defined this way, our generation has the highest literacy rate ever, according to the graph to the left. Over time, as education has been opened to more people and improved, literacy has gone up, and it will continue to. We are living in an Enlightened age, the most Enlightened age, with information stored in computers and more brains than there have ever been. However, there is a difference between being able to read and write and being able to read and write well. E. D. Hirsch defines literacy as “the ability to communicate effectively with strangers.”[1] What this means is that literacy is a common, shared knowledge. If I am literate, then I should be able to engage anyone on the street in conversation, one in which I am able to understand them, and they me. Despite our different backgrounds, we both know what the other is talking about. During the 19th century, when the world was industrializing, education was universalized. Schools were implemented worldwide to teach a shared culture. National languages were codified in place of regional dialects so that people could understand one another, and thus, as in the Renaissance, reading was made available to everyone, not just the learned elite, who were usually members of religious orders. Because language was made singular and common, the koine, the vulgar tongue, the common folk could learn on a mass level to read and write in school. Some argue that it is a language and a culture that create a nation, for what is spoken and what is spoken about constitute a common people.
There is a sort of egalitarian principle behind this, a principle of making everyone equal, of giving everyone, no matter their makeup, no matter their abilities, no matter their social position, the right to an education, the right to be a part of a culture. There are no distinctions between the advantaged and the disadvantaged, the educated and the uneducated.

Hirsch relates how the literate usually like to keep the illiterate illiterate by withholding the specific requirements for becoming literate. It is subtle: there is no single, agreed-upon list of things one must know in order to be literate, for the selection is just so vast. The Western Canon, for example, is but a sampling of the world’s greatest literature. So while some may call you literate for having read the whole Canon, others may not consider that criterion enough. As such, to be truly literate, to be well read, is to be part of an elite, as opposed to the merely literate, comprised of those educated enough to read and write. I like to think that I am pretty literate in memes, but I was disabused of this notion while hanging out with a friend one time, when every phrase out of his mouth was one I could not relate to. I thought I had a pretty solid grasp of memes, yet here was my friend, clearly more literate in them than I, referencing jokes whereof I knew not. It was as if he were having an inside joke with himself that I could not understand; I lacked the shared background knowledge he assumed I had. On YouTube, there are famous playlists 300 videos long, lasting several hours, full of memes. If one can sit through all of them, then one, I guess, can be called “literate” in memes. However, one will still be lacking in other memes, which means it is hard to specify which memes one should know in order to be literate in them. In my case, how am I to know which memes are in vogue? Moving past this, the better one can read, the better one does in other subjects. From experience, I can attest that reading a variety of texts leads to a bigger vocabulary, and thence to a larger store of knowledge and comprehension, resulting, ultimately, in easier learning through association. Such is Hirsch’s outline of literacy.
Someone who is well-rounded in their reading, who reads not just fiction but non-fiction, who looks up words they do not know so they can improve, who not only specializes but generalizes their knowledge, who associates what they do not know with what they do know—they are literate, and they are successful in reading and writing.

E. D. Hirsch writes of a study he once conducted at a community college in Richmond, Virginia. There, he interviewed students and asked them to write responses to his prompts. Eventually, he asked them to write an essay comparing the Civil War generals Ulysses S. Grant and Robert E. Lee, the latter of whom was himself a Virginian. Although they were in the capital of Virginia, what was once the capital of the South, the students were unable to write a response because they did not know who either man was. Hirsch was flabbergasted, to say the least. The point he was trying to prove was this: cultural literacy is integral to society. A universal background is always presupposed. We require tacit knowledge to understand things that are implicit, both in a text and in the world around us. The culture is greater than the sum of its parts. Culture must be understood generally, in relation to all its parts, rather like a hermeneutic circle, in which the whole and its parts must be continually interpreted in light of each other. In this sense, cultural literacy comprises political, historical, social, literary, and scientific literacy, all in one, according to Hirsch. In other words, cultural literacy is the totality of all its subjects. One must be well-rounded and not too specialized to be culturally literate, lest one neglect one subject for another. For instance, a writer writing a non-fiction book assumes his audience knows what he knows, or at least brings some kind of background information to it; the last thing he expects is for them to come in blind, without any preconceptions or context whatsoever. There should be an interplay between specialization and generalization, because a reader should have a grasp of the subject overall, but also of the details within it. The things that are assumed are connotations, norms, standards, and values, among other things—in short, shared knowledge.
To have this shared knowledge, this basic understanding of one’s culture, such that one is able to engage with it, “to communicate effectively with strangers,” is to be culturally literate.

Durkheim spoke of a “collective consciousness,” a totality of implicit, pre-existent notions within a society. Everyone in a given culture is under this collective consciousness, is part of it. It is collective because it is common to everyone; consciousness because everyone knows it, even without acknowledging it. Being an American, I have the idea of freedom as part of my collective consciousness, just as over 300 million other people do. Were I to stop a stranger and ask them about freedom, I am sure they would have the same background knowledge as I, such as the Fourth of July, which signifies independence for the U.S. This example illustrates an interaction in cultural literacy. Things are part of our collective consciousness only because they are meaningful and valuable; if they are not, then they do not deserve to be presupposed by all. If something did not mean anything, why should it survive in all of us? Hirsch writes, “[T]he lifespan of many things in our collective memory is very short. What seems monumental today often becomes trivial tomorrow.”[2] It is hard to become part of the collective memory. What makes good literature good is its longevity. Homer has long been considered one of the greatest ancient writers because he has remained read for millennia. Compare this to pop singers today, whose meteoric rises soon meet an impasse, only to decline, impermanent, impertinent. The same can be said of memes. They explode in popularity, reach their apex, and then either fade into obscurity or are replaced by others. A meme can be overhyped. It loses its importance, and although it seems “funny” or “important” one day, it may not the next. Memes are volatile things. On a whim, they come and go. Even though some have a longer life than others, they all eventually go.
The classic Vine “9+10=21” was once extremely popular and was quoted daily in school; now it hardly exists in our collective memory; it is a ghost, a fragment from oblivion. Hirsch comments that about 80 percent of what makes up the collective memory has been in place for at least 100 years. The Western Canon, again, is a good example: its core works have been fixed since antiquity, and as civilization progressed, more works were added to keep it current, all the way to the 20th century. In 100 years, it is incredibly unlikely, albeit still possible, that we will remember, much less care about, people chucking things while yelling, “YEET!” Memes, while communicating entertainment, do not express values. The Western Canon, by contrast, is what it is because it has been formative in our world; its works have been studied for so long and by so many people that they have left an indelible influence, an influence that persists today.

Given all this, I can now address the main problem of this essay, namely the conflict between cultural literacy and “memetic literacy.” I have not yet said much about memes save in small bits, but I shall discuss them presently. For now, I wish to direct your attention to the issue at hand: the decline of cultural literacy. A teacher created a quiz of famous, influential persons and gave it to his class to gauge their historical, artistic, literary, and philosophical literacy. He was disappointed when one of his students compared the test to a game of Trivial Pursuit, because it prompted the question: what counts as important, and what as trivial, today? This is a vital question that everyone needs to ask themselves. Are famous leaders like Napoleon now trivial, compared to the importance of Viners and YouTubers like Logan Paul? If both names were put on a test, would students cry, “Why do we have to know this Napoleon guy? Logan Paul obviously has a bigger influence today”? Is knowing who Napoleon is just trivia? Furthermore, the teacher found that his students had no knowledge of current events, specifically of their own country and its involvement in foreign affairs. The teacher, Jaime M. O’Neill, states, “Communication depends, to an extent, upon the ability to make (and catch) allusions, to share a common understanding and a common heritage.”[3] Allusions are thought by many to be pretentious; those who make them are called name-droppers and are disparaged. I, along with many others, would argue the contrary, for allusion connects to Hirsch’s idea of cultural literacy. Allusions are an example of shared knowledge. To be well-read, and therefore to know of many ideas and people, is to be involved in your culture. If I were to call something Kafkaesque, I would be engaging with my culture, expressing a background in the literature that the situation calls for.
In short, we are losing the ability to make references to the collective consciousness, the ability to commune with strangers on the same basis. There is a paucity of literacy in literature and history. All teenagers know these days is what they need to know. No one goes out of their way to study history or literature; they are content and complacent with what they know. O’Neill records, plaintively, that some of his students thought Pablo Picasso was a 12th-century painter and William Faulkner an English scientist during the Scientific Revolution.

Throughout my day, I hear my friends and classmates complaining about the impractical, specialized knowledge they have to memorize for their tests. Although I sympathize with them, and although I often agree that these tests are absurd, I also think they are wrong to say these things. Jeff Jacoby, a journalist for the Boston Globe, has written about the same subject. He observes that it is actually easier to memorize what is on standardized tests than to meet our peers’ standards. Put another way, we memorize so much information and trivia on a daily basis about sports, music, and TV in order to keep up with our peers that memorizing the facts on a test is easy by comparison. Unlike peer culture, whose facts are prone to change and in constant flux, the facts on tests are fixed and unchanging. Whereas 1789 will always be the date of the start of the French Revolution, the fact that Steph Curry is the point guard for the Golden State Warriors is bound to change in years to come. Memorizing the Pythagorean Theorem is applicable; memorizing the names of all the members of One Direction is impressive, but not applicable. The biggest complaints I hear, and which Jacoby also cites, are “I could spend my time more meaningfully” and “Why should we have to memorize facts?” Both points have merit, I concede, especially the latter. Please do not interpret me as siding with the school against the students; I have many a problem with education today, one of which is standardized testing, because the memorization of lifeless facts is indeed a problem. My point is this: we youths memorize countless trivial facts about pop culture and regurgitate them just as readily as we do scientific facts, like mitochondria being the powerhouse of the cell. I am forced to ask: if you claim you could be spending your time better, what, then, would that look like?
Simply put, teenagers, myself included, are being hypocritical; and while I am not saying we should not complain at all, I think we should complain less, unless we truly have grounds for doing so.

Kids set truly high performance learning standards for each other…. If students don’t know the details of the latest clothing fashions or the hot computer games or the to-die-for movie stars, they’re liable to be mocked, shunned, and generally ‘flunked’ by others their age. That’s why so many spend hours each day absorbing the facts and names of popular culture.[4]

This is a particularly interesting insight. Writing for the Concord Review, Will Fitzhugh observes that teens memorize popular-culture information to fit in with their peers, to pass the “informal tests” they create for each other, to be cool. Just as school is standardized, so peer performance has standards which, if not met, result in getting “flunked.” Students complain about testing in schools when life itself is one big test! One must struggle to stay afloat in the advancing rapids of entertainment that speed by. One must be “cool,” lest one be ostracized for not being part of the peer culture. A student should be studying hard for a test later that week, yet there they are, up late at night, stressing over whether they are literate enough in pop culture, cramming in seven-second videos to fit in, obsessive, anxious. Memetic literacy is slowly overtaking cultural literacy. Jacoby concludes, “The question on the table is whether the subjects to be memorized will include English, math, science, and history—or whether the only mandatory subjects will be music, television, movies, and fashion.”[5]

So what actually is a meme? The following excerpt comes from the originator of the term, the scientist Richard Dawkins:

We need a name for the new replicator, a noun that conveys the idea of a unit of cultural transmission, or a unit of imitation…. [M]emes propagate themselves in the meme pool by leaping from brain to brain via a process which, in the broad sense, can be called imitation.[6]

A meme is analogous to a gene, a strand of code that is inherited. But unlike biological genes, memes are units of cultural transmission: they carry not biological but cultural information, leaping, as Dawkins says, from brain to brain on a mass level. Think viral. A “viral video” is so called because, like a virus, it spreads exponentially among its hosts, not through the air but digitally. The video goes “viral” as it is passed from person to person, computer to computer. Dawkins says a meme is a form of “imitation,” by which he means that the meme is copied and then replicated. Copies are made of it, either faithful ones or mutations. Memes are reproducible and copyable; in fact, there is a meta-meme, a meme about a meme, about stealing memes: creators take an already existing meme, put their own twist on it, then put their name on it to claim it, ad infinitum. The meme is a favorable vehicle of cultural transmission, as Dawkins puts it, because it is easily reproduced. The basic meme consists of a background picture with text above and below it that makes some kind of predictable joke along a patterned outline. The picture stays the same, but the text can be changed to allow for different jokes among people. Memes are simple and easy to understand. Punchlines are short and witty, and memes are so widely recognized that anyone, regardless of ethnicity or language, can get a laugh out of them. Unlike cultural literacy, which differs transculturally, memes are universal. Any high schooler, I can guarantee, will recognize a meme from across the world if presented with one. Memes have become the source of new allusions. This means, after all, that memes briefly become part of the collective consciousness. Seen by millions daily, memes are a worldwide shared knowledge. But of course, memes, for all their virtues, come with problems, too.
What is most important in the definition of a meme, I feel, is the word “idea.” An idea can be many things: a song, a joke, a theory, an emotion, a fashion, a show, a video, and a dozen others. This said, memes have great potential because they are good at spreading ideas that matter. The problem is that memes mostly spread ideas that do not matter. Viral videos are for entertainment and nothing else. One laughs at a sneezing panda for enjoyment, not for education or enlightenment. Memes are usually trivial, frivolous, meaningless, and humorous. Not all are, but most are. Despite their potential, memes are largely vapid and disruptive. I get a good laugh out of memes, and sometimes they can even be intellectual in their content, like historical memes. But the majority of them are useless, fatuous entertainment. We need, in this age of ours, to find a balance between being literate in memes and being literate in our world.

To summarize, the problem at hand is that we are seeing a decline in cultural literacy, the ability to communicate with strangers through a shared, underlying knowledge, and a rise in memetic literacy, the ability to make allusions to videos, celebrities, sports, fashion, and other popular culture. This is not to say that memes should not be used at all; after all, Nietzsche said, “Without music life would be a mistake.”[7] A musician like Michael Jackson, being part of popular culture, ought to be discussed just as much as Louis XVI, because he is part of our collective memory. Popular culture is, of course, a subdivision of cultural literacy; without it, we would have little shared knowledge! I fear the day we no longer know classical literacy, when we can quote Lil Pump’s “Esketit” but not Shakespeare’s “To be or not to be.” We should be able to discuss music and fashion and sports, but they should not be the priority; they are entertainment. Memes do a lot of good, but they can also do a lot of harm. They spread universal joy. They can get an idea seen by millions. What we need to do is ask ourselves questions. We need to consider what is trivial and what is important today. We need to decide what is worth studying, what ideas are worth spreading. Entertainment is essential, but spreading ideas, good ideas, is more important. We are undergoing a fundamental change in our world, and we need to be present to address it. This is a proposal to look inward instead of outward, to examine our values, to find out what we care about.


[1] Hirsch, The Dictionary of Cultural Literacy, p. xv
[2] Id., p. x
[3] O’Neill, “No Allusions in the Classroom” (1985), in Writing Arguments by John D. Ramage, pp. 400-1
[4] Will Fitzhugh, qtd. in Jacoby, “The MCAs Teens Give Each Other” (2000), in Elements of Argument by Annette T. Rottenberg, p. 99
[5] Id., p. 100
[6] Dawkins, The Selfish Gene, p. 192
[7] Nietzsche, The Twilight of the Idols, §33, p. 5


For further reading:
Elements of Argument: A Text and Reader, 7th ed., by Annette T. Rottenberg (2003)
Writing Arguments: A Rhetoric with Readings by John D. Ramage (1989)
The Dictionary of Cultural Literacy by E. D. Hirsch (1988)
Challenges to the Humanities by Chester E. Finn (1985)
An Incomplete Education by Judy Jones (2006)


Do Babies Exist?

My friends and I were sitting on the deck one summer afternoon, sipping Cokes by the pool while discussing different philosophical matters. It was a hot day, and I was introducing Descartes’ philosophy to them—as any normal person in an everyday conversation does—and explaining why it was important and what it meant for us. I set it up like this: Descartes asked whether his whole life were an illusion, a dream, and whether there were an Evil Demon deceiving him, causing his senses to mislead him. It is impossible, I explained, to distinguish between waking reality and a dream, according to Descartes. However, searching for a first principle, a single starting point of knowledge from which to build, he realized he had been thinking this whole time. The very process of questioning whether he was in a dream presupposed a questioner doing the questioning. This led him to remark, “Cogito, ergo sum,” or “I think, therefore I am.” By doubting all his senses, he was led to the conviction that he could not doubt that he was doubting in the first place; for otherwise, he would not be able to doubt: he would have to exist first before he could be deluded.

After hearing this, my friends seemed pretty convinced, and pondered it a bit. Out of nowhere, one of them said, “Well, babies aren’t self-conscious.” A pause. “So do babies exist?” Taken aback, unprepared for such a response, I readily dismissed the notion, called it absurd, and tried to think of an answer. We began debating whether or not babies know they exist, or whether they can even think about thinking. Of course, the question itself—do babies exist if they are not self-conscious?—is actually grounded in a misunderstanding: Descartes was not trying to prove his existence; rather, he was trying to prove he had certainty, something undoubtedly true. But for the sake of argument, we entertained the idea. Common sense shouts till it is red in the face, “Obviously, yes, babies exist! Only a madman would doubt their existence. I mean, we see them right in front of us—they’re right there, they exist!”[1]

This prompts the question: if we are conscious of a baby existing, yet babies are not conscious of themselves existing, do they exist? Babies are fascinating creatures. They are copies of us, miniature humans who must learn, through trial and error, to cope with and understand the world in which they live. Seeing as they are capable of such amazing cognitive feats as grasping cause and effect and acquiring language, investigating their conscious abilities seemed intriguing. A delve into developmental psychology, the study of how humans develop through life, yields interesting insights into this psycho-philosophical problem.

Jean Piaget was a developmental psychologist who studied the development of children throughout the 20th century. His influence is still felt in the psychological literature today and continues to shape thought on childhood development. For years he observed, tested, and took notes on children, from birth to early adulthood, using the data to devise his famous theory of cognitive development, which takes place in four stages: sensorimotor, preoperational, concrete operational, and formal operational. The first stage, the sensorimotor, lasts from birth to the age of two. During this period, the baby’s life is geared toward adjusting to the world. Babies are “thrown” into this world, to use a Heideggerian term. They are born immediately into life amidst chaos, with all kinds of new stimuli to react to. Confused, unable to make sense of things, exposed to strange sights and sounds, the baby cries and thrashes about, trying to find some sense of security. It is bombarded all at once by sensations and experiences. It is disoriented. This is a brave new world, full of data that needs to be interpreted and sorted out in the baby’s mind. In order to navigate the world, the newborn uses its motor skills and physical senses to experience things. The baby interacts with its environment, including people: grabbing with its hands, sucking with its mouth, hearing with its ears, and smelling with its nose. Imagine being in a cave for years, devoid of all sensory information, when one day you are let out. Having forgotten what it was like to experience the world, you are overcome by the magnitude of the environment, so you try to relearn as much as possible, greedily taking in everything you can. For the baby, being in the womb is rather like being in that cave, and it is doing the same thing: getting a grasp of reality by engaging its senses in any way it possibly can.
The baby is an empiricist who delights in its senses as though life were a buffet. Oh, there is something I can touch! Ah, that smells nice, let me smell it! While it cannot yet fully register these sensations, the infant uses its senses to obtain a primitive understanding. Babies actively map out the world according to their perceptions, simple though they are. According to Piaget, babies eventually learn to pair coordination, knowledge of their body and its movement, with determination. Once they are able to use their body parts effectively, in a way conducive to their survival, they develop a sense of where their limbs are in relation to each other, called proprioception. This allows them to exercise determination over their newly acquired coordination. Babies can now direct themselves with autonomy and do something. However, this is a simple form of determination; it is not as if the baby has free will and can decide or choose to do this or that. Whereas the baby can move toward a particular object, it cannot decide mentally, “I am going to crawl over to that thing”; it just does it out of pure, unthinking volition.

At three months, a baby can sense emotions and, amazingly, recreate them. Seeing its parents sad, an infant can react with a fitting response, such as being sad itself. By being able to tell what someone is feeling, the baby can imitate them, showing that it has at least a simple form of empathy. Around this time, too, the baby actively listens to its social scene, picking up on spoken language. It is incredible (in both senses of the word) because it is now that the infant unobtrusively and quietly internalizes and processes everything it hears like a sponge, learning speech cues, such as when to talk and when to pause; the rhythms of speech, including cadence; vocabulary; and nonverbal communication, which makes up the majority of social interaction. Here is a tiny little human just crawling around the house on all fours, crying, eating, and going to the bathroom, all the while actually learning how to speak—who could possibly fathom what is going on in that small, undeveloped mind! A little earlier, around two months usually, the baby already shows signs of early speech when it babbles: nonsense sounds uttered in an attempt to imitate speech it is not yet developed enough to reproduce. Four to five months into development, the baby can understand itself as a self-to-Others, or a self-as-viewed-by-Others. I have my own image of myself, but I understand that I am perceived by other people, who form their own images of me. One study shows that, from four to nine months, the infant has changing patterns of involvement in play. In the earliest stage, the baby will play peekaboo if it is approached by the parent. Because they have not yet learned that things exist independently of them in time, babies think that the parent disappears when covered, and are surprised to find them still there.
A few months later, nine months, the baby is able to take on the role of the initiator who wants to play peekaboo, instead of the responder who will play peekaboo if asked. This proves that babies learn to combine determination with intention (Bruner, 1983).

Just three months later, when the infant is officially one year old, it achieves a self-image. Looking in a mirror, it can recognize itself and form an early identity. Like chimps, babies can now respond to themselves as an actual self in the mirror, noticing, for example, a mark on their forehead and realizing that it is not on the mirror but on themselves. Between 14 and 18 months, an infant is able to differentiate an Other’s intentions from its own (Repacholi & Gopnik, 1997). Children like to think in terms of their own desires: if a kid wants a cookie, they act on their desire. Thus, at 14 to 18 months, they can distinguish Others’ desires as different from their own. Within this period, the baby can also tell when it is being imitated by someone else. If a parent mimics something the infant is doing, the infant knows its own behavior is being shown to it. Finally, the 18-month marker designates when the baby begins its sentences with the first-person “I.” With a sense of self, the infant is able to roleplay, taking on new identities, or roles, and playing “as them.” Second-order emotions, also known as self-conscious emotions, like shame and embarrassment, arise in the child at this time, too. Children now possess some semblance of self-consciousness.

After the sensorimotor stage is what Piaget called the preoperational stage, which takes place between the ages of two and seven. It is at this stage that the infant constructs their own world. Through the process of assimilation, the toddler creates mental schemas, mini blueprints conceived in their minds, frameworks by which reality is processed and then made sense of, allowing them to structure reality in a way that is useful to them. When a new experience is undergone, it is made to fit the pre-existing schema. Because these schemas are very simple and basic, they are obviously inaccurate, although that is not the point of them; they are not supposed to be innate categories of the mind, as Kant would have thought of them, but early hypotheses made from the little experience gathered by a child. One time, my cousins came over to play video games; we were playing a level in Lego Indiana Jones where we had to drive around on a motorcycle chasing cars. My cousin’s little brother pointed excitedly at the cars zooming down the streets, exclaiming, “Doo-doo!” I hopped on a motorcycle and chased after them, only for him to look at the motorcycle and, again, shout, “Doo-doo!” My cousin and I tried to tell him that a car and a motorcycle were two separate things. In his mind, he saw a moving vehicle with wheels, so he created a mental schema. Anything that fit under that description—a moving vehicle with wheels—would be considered by him a “Doo-doo”—in this case, both the car and the motorcycle, despite their being different things. This illustrates that schemas are not always accurate; they are for classifying and categorizing things. Of course, this leads to a new process observed by Piaget: Accommodation. We come to an age where we discover that our schemas are inadequate because they do not fully represent reality. 
As such, we have a kind of “schematic crisis”: we are met with an anomaly, something which sticks out, something which does not fit with our prevailing theory. Hence, we must remodel our thinking. Consequently, we are forced to find a way to reconcile the already-existing category with this new piece of data, either by broadening the schema, or by creating a new one altogether. Babies thus learn to make more accurate classifications as they learn new things and create new schemas with which to interpret reality. Once these schemas are built up, the infant is able to engage in organization, through which they order their schemas. Some are judged to be more inclusive or exclusive than others, and so are co-ordinated based thereon. In the case of my cousin’s little brother, he would have to organize his schemas like this: Broadly, there are vehicles, under which we might find cars and motorcycles as types, which can themselves be expanded upon, for each comes in different kinds. This way, reality is structured in levels, or hierarchies, not necessarily of importance, but of generality and specificity. Organization is a synthesis of assimilation and accommodation. All this schematizing segues into the next point, namely that in making sense of the world, we give sense to it.

The preoperational period is characterized by symbolic representation in toddlers. In philosophy, the study of meaning and symbolism is called semiotics, and it is closely related to what babies do, interestingly. Life is separated into two concepts: Signs and symbols. Signs are fixed things—concrete objects. Symbols are relative meanings—abstract values—usually assigned to signs. While every car I see is always a car, its meaning is not always the same and is liable to change. For some, it can represent, can be symbolic of, freedom, if you are a teen just getting your license; transportation, if it is how you get around; dread, if you hate road trips or have to wait hours during commute. The point is, everyone sees the same sign, but for everyone the symbol has different meanings. Preoperational toddlers are able, then, to understand objects not just in their literal, concrete sense, but as standing for something, as abstract and meaningful. Babies are not passive, as I have said, but on the contrary, very much, if not entirely, active. By interacting with the world around them, they experiment, learn, and conceptualize. Around three years, the baby is fully capable of speaking, feeling, having motives, and knowing the relation of cause-and-effect.

One of the consequences of Descartes’ Cogito is its resulting solipsism: The thinker, the Cogito, is only able to prove his own existence, whereas Others’ existences are uncertain. Is this a requisite for existence? Is self-certainty a necessity? If so, the case is a difficult one for babies. Controversially, Piaget proposed that babies are egocentric; his theory is widely contested today in psychological circles. The meaning of egocentrism can be guessed by looking carefully at the word’s roots: It means self-centered; however, it is not self-centeredness in the sense of being prideful, selfish, and concerned with oneself, no—it is more closely related to anthropocentric, in the sense that the self is the central point from which all other points are judged or perceived. For this reason, Piaget suggested that infants can only see things through their own perspectives, not through Others’. You may be wondering why I sometimes have been capitalizing “Other.” Philosophically, the problem of egocentrism is closely related to solipsism, resulting in what is called “the problem of Other Minds,” the attempt to prove the existence of selves outside of our own; because their existence is uncertain, they are called “Others,” which gives them a kind of external, foreign connotation. I digress. Babies, so thought Piaget, are unable to take Others’ perspectives, so they must rely on their own. To do this, they reason from self to Other. Infants’ egocentric tendencies, when combined with their inability to acknowledge objects as existing permanently outside of them, lead to a subject-object dualism, a subjective idealism, in which the self is distinguished and utterly separated from the physical world. It becomes “my” viewpoint, or “your” viewpoint, subjective, relative. As long as I look at an object, a toddler thinks, it exists. 
And yet, the toddler also has a social self, which it develops through its interactions with other children. Many psychologists have claimed that, by playing, children are able to acknowledge the existence of not just Others, but Others’ emotions. It is evident in roleplaying, where the children pretend they are someone they are not, and act accordingly, placing themselves within a new self, which they adopt as their own, and interact with the other children, whom they see as someone else, whom they acknowledge and actively engage with, responding to how they are treated, and sensing emotions.

A dominant, popular theory that attempts to refute Piaget’s egocentrism is “Theory of Mind” ([ToM] Wellman, 1990). Wellman found that babies develop an awareness of Others at the age of three, when they operate on belief-desire reasoning. Motivation for kids consists of a belief, what they know, and a desire, what they want. A child might be motivated to have a cookie because they know where the cookie jar is, and they are hungry for one. Using this kind of reasoning, the kid attributes their own intentions to another. Looking at his playmate, the toddler assumes, “Well, I want a cookie, and I know where they are, so this kid, like me, because he has the same beliefs and desires as I, must want a cookie, too.” Is it faulty and inaccurate? Wildly. Does it make sense, realistically? Yes. The Theory of Mind is a primitive form of empathy, a kind of empathetic stepping stone. It is simple and selfish, because it assumes that other children have the same beliefs and desires. One often sees this in children trying to console one another: An infant sees another crying, and, because he takes comfort in eating ice cream, believes the other will take comfort in it, too. Critics like Vasudevi Reddy object to Theory of Mind because it is too detached from actual interaction and ends up attributing one’s own self-certitude to another, resulting in what she calls a “Neo-Cartesianism” of sorts. It promotes solipsistic thinking by denying the existence of an independent thinker with emotions, instead attributing to them one’s own ideas, thereby increasing a toddler’s dualistic thinking.

According to Reddy, a baby’s communication with Others already presupposes intersubjectivity, or being involved with people on a personal level. Babies are self-aware to an extent at birth because, the argument goes, the baby is able to distinguish itself from the world around it. To act is to know both the self and the object. It is similar to Fichte’s philosophy, in which the Ego becomes aware of itself by recognizing everything that is not the Ego, creating the Non-ego; in other words, it is through the Non-ego—the world—that the Ego knows itself. The world, or Non-ego, is created purely with the intent of being a moral playground for the Ego. Following from this is the idea that the baby, coming into contact with the world, immediately knows it as not-itself, and so uses it as its playground, activating all its senses to learn about reality. If we could not tell the environment apart from ourselves, and we thought ourselves a part of it, how could we act independently of it, with our senses? This is an argument against Freud and Piaget, who both said newborns cannot tell themselves apart from the world. As a solution to egocentrism, psychologists found that parents play an important role early on: Parents should teach their children early to differentiate self from Other. Too much similarity between the baby and parent means more egocentrism later in life, which is harder to unlearn. Reddy’s solution is to avoid Cartesianism and Theory of Mind and instead pursue a second-person perspective, one between I-and-Thou, You-and-I. This way, there is direct access to another’s intentions. Babies, through play, function on this second-person level by directly interacting with their peers. For Piaget, babies achieve consciousness when symbolism and schematism come together as one to create meaningful representations. An understanding of how things fit together and how they function is what Piaget considers consciousness. 
On the other hand, metacognition, the ability to think about thinking, does not arise until the age of 11, Piaget’s formal operational stage.

The following are milestones in the evolution of a baby’s cognitive abilities, summarized in eight chronological key events:

  1. Coordination
  2. Self vs. non-self
  3. Know special/loved people
  4. Know + respond to name
  5. Self-image
  6. Pointing to objects (symbol)
  7. Use “I” in sentences
  8. Know Other Minds

So, to answer my friend: The question of whether or not babies exist[1] is actually not so straightforward as one might think. It could be argued that babies exist when they are one, when they establish their self-image for the first time, and thus are, in one way or another, conscious of themselves. Or it may be that babies exist once they turn 18 months, when they can use “I,” roleplay, and experience reflexive emotions. Here, babies are aware of themselves as actors, are willing to play with others and take new perspectives, and are able to perceive how they are themselves perceived by others. Yet then again, it is possible that it is only when metacognition is possible, when we are able to doubt that we are doubting, when we are able to posit a hypothetical Evil Demon trying to deceive us all, that we exist—in which case… babies do not exist at all! Do only children and preadolescents and onwards exist? Maybe when we are born, we do not exist, we are in a state of utter nonexistence and non-being, and it is only when we reach 11 that—POOF!—we magically pop into existence.


[1] This is obviously a satirical question. Babies do exist. It is more of a thought-experiment, or armchair philosopher problem. I find the comment to be so outrageous that it is funny, and I thought it made for a perfect reason to research if babies are conscious. 


For further reading: How Infants Know Minds by Vasudevi Reddy (2008)
Developmental Psychology 8th ed. by David R. Shaffer (2010)
The Secret Language of the Mind by David Cohen (1996)
The Science of the Mind by Owen J. Flanagan, Jr. (1984)

Philosopher Clerihews

Invented by Edmund Clerihew Bentley, the clerihew is a poem composed of two rhyming couplets with the scheme AABB, wherein a famous person is mentioned in the first line, and the last three lines describe an accomplishment, failure, biography, anecdote, rumor, or joke about them. Contrived, silly, and fun to read, these humorous poems can actually be quite educational while still being entertaining. I was inspired to write my own after reading some of Jacques Barzun’s clerihews on philosophers. Following are 16 clerihews on different philosophers. I have tried my best to make them concise summaries of their philosophies!






Henry David Thoreau
Was a very thorough
Observer of nature
Who used botanical nomenclature


Martin Heidegger
Conceived upon his ledger,
That what was once concealed
Would in a new beginning be revealed


Michel Henry
Did French phenomenology
And he into life inquired
Whence he from interiority acquired


Friedrich Wilhelm Nietzsche
Tried to preach the
Death of God, and of the slave morality
Favoring instead: Übermensch mentality


Arthur Schopenhauer
Believed in the instinctive power
Of the blind Will-to-Life,
So his pessimism was rife


Epictetus
Had to accede this:
Some things are outside our control
So with the punches we must roll


Edmund Husserl
Made unfurl
In his phenomenological prolegomena
The bracketing of experienced phenomena


Plato, or Aristocles,
Had found the keys
To the fundamental reality,
Which was actually ideality


Socrates
Did not like Apologies
So he rushed out of the cave
And made dialectic all the rave


John Stuart Mill
Had had his fill
Of individual liberty:
He used it as a Utility


Thomas Kuhn—
Why’d you have to ruin
All of scientific history
By reducing it to anomalistic mystery?


Søren Kierkegaard
Was the first of Existential regard
Whose melancholy made him weep
And whose faith made him take a Leap


Thomas Hobbes
Was moved to sobs
When he found life was short
And served the Leviathan’s royal court


Blaise Pascal
Was a real ras-cal
Who liked to gamble
In his theological preamble


John Locke
Pictured a rock
And said it was qualities, primarily
Conceived on a blank slate, summarily


George Berkeley
Said, “Esse est percipi,”
Meaning he couldn’t find
Anything outside his mind

Should I write more philosophical clerihews? Maybe in other subjects as well, like history, literature, and psychology? Make sure to leave your own in the comments, and I’ll be sure to read them!


Kafka’s “The Trial” in a Poem

Suddenly one morning, Joseph K is arrested at his home
Apartment to apartment, from lawyer to lawyer, whither he roams,
He discovers everything is beneath the Court’s unassailable dome.

The trial wraps itself around K’s neck like a noose;
It looms overhead, ambiguous, following like a cloud,
So that K, argumentative, confident, innocent, cannot hang loose.

On consulting the painter, K decides to drop his domineering lawyer,
With whom he’s dissatisfied, despite the daunting danger,
And of all the women he’s been with, he harangues her (Leni).

Reposed and ready for his final trial, K’s once more ripped from his room;
And dragged through the streets, as if “guilty” of a crime, he finds he can’t fight time,
For “the Law” has spoken, has driven into his heart a knife—yes, the clouds still loom.

Ycleped by a priest, a “door-keeper” of the Court, K is told a story:
A man is kept from the Law by a door-keeper, who closes it off for him.
K cries, “The door-keeper’s deceptions do himself no harm but do infinite harm to the man” (242)

Happiness as Eudæmonia

Happiness, according to psychologist James R. Averill, a Eudaemonist, is a means-to-an-end, contrary to what his predecessor Aristotle thought. After taking into account both survey reports and behavioral observations, he devised a table of happiness (see below). It is a 2×2 table, one axis being “Activation,” the other “Objectivity.” The four types of happiness he identified were joy, equanimity, eudaemonia, and contentment. He narrowed it down to the objective standard of high immersion known as “eudaemonia,” a term for overall well-being that finds its roots in Aristotle’s Nicomachean Ethics. Aristotle wrote that eudaemonia was achieved through activity, as when we are so engaged in doing something that we forget we are doing it and lose a sense of time—time flies when you’re having fun. As such, happiness for Aristotle is not a typical emotion in that it occurs only for periods of time. You cannot always be in a state of eudaemonia. Rather, it can be actively pursued when you immerse yourself in meaningful work. To be happy is not to be happy about or for anything, because it is essentially an object-less emotion, a pure feeling. Eudaemonia is distinguished from equanimity by the fact that the latter is the absence of conflict, the former the resolution thereof. Equanimity has been valued by philosophers as a state of total inner peace; eudaemonia, on the other hand, is the result of achieving a goal, which necessarily entails conflict, viz. desire vs. intention. When you are confident in your abilities and set realistic goals, when you are able to complete those goals, having overcome conflict, you can achieve happiness. Too many short-term goals means not experiencing enough of what life has to offer, while too many long-term goals means not being accomplished or confident in yourself. The measure of happiness, then, is relative, not absolute, and differs from person to person. 
What remains absolute, however, is that this sense of achievement can be had privately, by yourself, and publicly, when it is done for your community, family, or close friends. Inherent to eudaemonia, Averill asserts, is purpose: Behind happiness are direction, intention, and devotion. This led him to claim that “Pleasure without purpose is no prescription for happiness,” meaning you should not resort to hedonism to be happy, but must seek pleasure in meaningful activities into which you can immerse yourself.

Averill’s Table of Happiness:

                 Subjective     Objective
High activation: Joy            Eudaemonia
Low activation:  Contentment    Equanimity
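Averill’s 2×2 classification can also be captured as a small lookup table. A minimal sketch—the dictionary layout and the function name `classify` are my own, not Averill’s:

```python
# Averill's 2x2 classification of happiness, keyed by
# (activation, objectivity) as laid out in the table above.
HAPPINESS = {
    ("high", "subjective"): "joy",
    ("high", "objective"): "eudaemonia",
    ("low", "subjective"): "contentment",
    ("low", "objective"): "equanimity",
}

def classify(activation: str, objectivity: str) -> str:
    """Return Averill's label for an activation/objectivity pair."""
    return HAPPINESS[(activation.lower(), objectivity.lower())]
```

So, for instance, high activation judged by an objective standard maps to eudaemonia, the cell Averill singles out.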


For further reading: Handbook of Emotions 2nd ed. by Michael Lewis (2000)

A Very Short History of the Dream Argument

Dreaming is an integral part of our lives, occurring every night when we are asleep. While the body relaxes, the brain stays active, creating a stream of thought, a stream that comes from the unconscious. Recent research into a method called “lucid dreaming” allows people to control their dreams, to place themselves within their illusory world, letting them make their dreams a reality; however, lucid dreaming, as cool as it is, presents a troubling problem, one that has intrigued humans for millennia: How do we know for certain we are not lucid dreaming right now? How do we distinguish our consciousness, our awareness, from the unconscious, the unaware? Are we actually asleep at this moment, life but a mere string of thoughts and sensations?

Defining dreaming and consciousness will help, as both concepts, simple though they may seem, are highly complex, each with their own requirements, psychologically and philosophically. Consciousness refers to “the quality or state of being aware especially of something within oneself”; in other words, consciousness refers to the realization or acknowledgement of the mind and its inner workings.[1] If you acknowledge that you are reading right now, you are conscious of yourself as reading, so consciousness is always consciousness of something, be it an activity or a mental state. The American psychologist William James thought consciousness was not an existent thing, likening it instead to a stream, a series of experiences, one after the other, each distinct from the rest. Neurological studies later identified consciousness, the awareness of the brain, as a process within the brain itself, centered in the thalamus. Dreams, on the other hand, are defined as “a succession of images, thoughts, or emotions passing through the mind during sleep.”[2] Dreams are specific from person to person, which makes it difficult to verify a “remembered” dream, considering it cannot be proven true or false. It is therefore difficult to differentiate the waking state from the dream state, insofar as both are collections of experiences.

Many philosophers, dating from the 5th century B.C. to the modern day, have attempted to tackle the “Dream Argument,” trying to prove that we are in fact living consciously. For example, Plato mentions it in a dialogue: “How can you determine whether at this moment we are sleeping, and all our thoughts are a dream; or whether we are awake, and talking to one another in the waking state?”[3] Socrates was interested in finding out if our senses were reliable, if what we see, hear, taste, feel, and smell is real or a figment of our active minds. Perhaps when we fall asleep, when our brains switch to R.E.M., when we dream, there is a dreamer dreaming this dream. Another philosopher, René Descartes of the 17th century, in refuting the Dream Argument, famously proposed, “I think, therefore I am.” Descartes considered that his whole life might be an illusion, a trick played on him by a divine being, and that he was misled into believing reality. He started to doubt everything, including his senses; but one thing he could not possibly doubt was his existence, his self, because in order for him to doubt, there had to be a him to doubt in the first place!

Even though some of the greatest thinkers could not deny the Dream Argument irrefutably, at least we know from science that we exist, that dreams are just processes happening in the brain, and that reality is as real as it gets, dreams being a product of our imagination… unless we actually are dreaming, just waiting to be woken.



[1] “Consciousness.” (January 19th, 2017)
[2] “Dreaming.” (January 19th, 2017)
[3] Plato, Theætetus, 158d


If you have a lot of free time:

The Breath and Mindfulness

How many times have you gone for a run, and, a mile in, you reach your prime, and you feel unstoppable, your legs pumping like automatic machines, arms swinging by your sides, only to feel a pain in your chest, a heavy feeling in your lungs, sharp, managing just short breaths? Or what about getting ready to present in front of an audience, all their eyes on you, expectations hanging above you like the sword of Damocles, your reputation on the line, and you find yourself pacing nervously, breathing in and out shallowly? Or when you try to hold your breath for as long as you can underwater, cheeks puffed out, pressure building up, rising, inside your mouth and lungs, till it is enough to make you burst, so that you pop up to the surface fighting for air, gasping, thankful for each time you get to swallow? Through each of these common, everyday instances there runs a common theme: The importance of the breath. Just as these occasions are ordinary, so breathing is something we do daily, although we rarely give it attention. Constant, unchanging, it remains with us throughout the day, even if we do not heed it, dependable, vital. Despite being something we do around 20,000 times a day, breathing is, for the most part, subconscious, an effort produced by the brain because it has to be done, rather than because we will it. It is only after a workout, for example, when we push ourselves, that we find we have power over it, and really feel a need for it. However, the breath is much more important than we believe. For thousands of years, the breath has remained an essential part of our cultures, West and East, ranging from Vedic writings from India to Ancient Greek philosophy to modern-day Buddhism and mindfulness practices, which have tried to bring back an ancient appreciation of the breath. 
In this blog, I will discuss the physiology of breathing, its philosophical and meditative significance, and how it can help in daily life.

Beginning with the physiology is essential because one often appreciates something more when one knows how it works; and also because, once one understands how something operates, one is more aware of how to improve it. The process of breathing, although covered in school, is not always covered in detail. Respiration, or ventilation, is the act of inhaling fresh air and exhaling stale air. It is an exchange. The purpose of respiration is to exchange carbon dioxide (CO2) for oxygen (O2), the former being a waste product, the latter essential for us, hence the need to get rid of CO2 and get more O2 into the body. While you can go weeks without food and days without water or sleep, you cannot go more than a few minutes without air—that is how vital it is. Beneath the surface, the process of inhalation goes like this: Together, the diaphragm, located between the abdomen and thorax, or chest, and the intercostals, which are muscles between the ribs on either side of the lungs, contract, allowing the lungs to expand. A dome-shaped muscle, the diaphragm flattens out, and the intercostals move up and outward, expanding the total area in the chest. Near the neck and shoulders, the sternocleidomastoid (a real mouthful!) moves the clavicle—the collarbone—and sternum, in harmony with the scalenes, all of which contract upward, opening up the chest farther. Put together, both actions make room for the lungs to expand. The chest expands, as do the lungs, whose inner pressure is exceeded by the external pressure, causing a suction effect so that air is drawn in. Exhalation is the opposite: The diaphragm relaxes, and the internal intercostals move down and in with the abdominals and obliques, shrinking the chest and thereby decreasing the volume of the lungs, causing a reverse suction, whereby the higher pressure of air within the lungs pushes it outside, toward the lower pressure. 
Like a rubber band, the lungs remain passive throughout respiration. Instead of thinking of the lungs as actively sucking in air, it is better to think of them as passive bands that are either stretched or released. The lungs are big pink sponges, colored so because they are full of blood vessels, inflated so because they are full of pneumatic branches ending in alveoli, where air is stored. Extending from the collarbone to the diaphragm, they are both divided into lobes. The right has three lobes, the left only two, since it leaves room for the heart. Pleural membranes surround the exterior of the lungs, coating them with a fluid to help them contract effortlessly and smoothly, accounting for friction during inhalation and exhalation. How does the air get from your mouth and nose to your lungs? Air passes from the nasal cavity and mouth to the pharynx, which is pretty much the throat, whereupon it goes down the larynx, better known as the voicebox—where your voice is produced—before moving down the trachea. Here, it comes to a fork, two bronchi, left and right, each extending into secondary bronchi, then tertiary bronchi, and finally into bronchioles, at the ends of which are small sacs called alveoli. This branching takes place in the lungs, and because it physically extends downward, resembling an upside-down tree, it is referred to as the “bronchial tree.” A flap of cartilage lies between the pharynx and larynx: the epiglottis. When relaxed, it lies up against the throat, opening up the passage of air; however, when it contracts, such as when swallowing, it acts like a drawbridge, moving down over the larynx, blocking anything unwanted. The job of the epiglottis is to let only air pass. All of these muscles are involved in subconscious breathing. More muscles are activated during exercise, as extra help is needed to speed up the process. 
At the bottom of the brain, the respiratory center stimulates the diaphragm and intercostals based on CO2, O2, and muscle stretch receptors. Chemoreceptors test the blood, and if oxygen runs low (or carbon dioxide builds up), they alert the medulla oblongata, which tells the body to breathe faster. As we know, much of breathing is subconsciously controlled, its rate and depth preset by the brain and altered when necessary, but we also have voluntary control over it. At rest, we breathe about 12-15 times per minute, and twice or more that amount during exercise. About 17 fl. oz. (0.5L) of air are displaced by the diaphragm per quiet breath; with forced breathing, another 70 fl. oz. (2L), totaling about 150 fl. oz. (4.5L). The air we breathe is 78.6% nitrogen, 20.9% oxygen, 0.4% water, 0.04% carbon dioxide, and 0.06% other elements. Accordingly, a lot of nitrogen is taken in, more than is needed, yet most of it is harmless, only posing a threat when we are underwater, because then it can remain in bubble form and get into our blood. Luckily, our system is made to take in the right amount of oxygen we need. Of our total lung capacity, only about 10% is used in quiet, subconscious breathing. We always have at least 35 fl. oz. (1.0L) of air left over despite having a total capacity of 204 fl. oz. (5.8L), meaning we never exhale all the air in our lungs, even if we try our hardest. The average breath moves about 17 fl. oz. (0.5L) of air, but we have a reserve capacity of extra air in case we need it.
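The figures above can be sanity-checked with a little arithmetic. A minimal sketch, using only the rounded values quoted in the text (0.5 L per quiet breath, a resting rate within the 12-15 breaths-per-minute range):

```python
# Rough respiratory arithmetic from the rounded figures in the text.
TIDAL_VOLUME_L = 0.5    # litres of air moved per quiet breath
BREATHS_PER_MIN = 14    # resting rate, within the quoted 12-15 range

# Minute ventilation: litres of air moved per minute at rest.
minute_ventilation = TIDAL_VOLUME_L * BREATHS_PER_MIN  # 7.0 L/min

# Breaths per day at the resting rate; consistent with the
# "around 20,000 times a day" figure mentioned earlier.
breaths_per_day = BREATHS_PER_MIN * 60 * 24            # 20,160
```

Fourteen breaths a minute works out to just over twenty thousand a day, which is where that earlier figure comes from.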

Meditation and running are a great combination because the two complement each other. Both value the breath and call for relaxation, which in turn strengthens oneself. To practice the two together, it is advised that you run at “conversational pace,” a pace at which you can comfortably sustain a conversation with someone else and not feel out of breath. When breathing during this, you should breathe from the bottom up, not the top down as we instinctively do, for there are few alveoli in the upper lungs. Shallow breaths from the chest deprive you of oxygen, since not enough gas exchange is involved. Slow breaths from the diaphragm, at the bottom of the chest, near the stomach, will help you stay energized, prevent cramps, and focus you. Another important tip is to make your exhale longer than your inhale. Inhalation leaves residual air in the lungs, and every now and then that leftover air can interfere with your breathing rhythm, resulting in a cramp. By exhaling longer than you inhale, you not only reduce the chance of getting a cramp, but you also get a deeper, more rhythmic breathing cycle. In the traditional philosophy of Yoga—not modern-day Yoga, with the stretches—the regulation of breath is called prāna vritti. Central to its teachings is prānāyāma, or expansion of the vital force, prāna being Sanskrit for breath or vital force, āyama for vertical or horizontal expansion. Yoga training in prānāyāma requires that you first master āsana, posture, before moving on to breathing, to the extent that proper breathing is only enacted after achieving proper posture. 
Āsana involves straightening the spine so you are erect, a straight line able to be drawn from head to hips; opening up the chest, allowing the lungs to expand naturally; pulling the shoulders back between the scapulae, or shoulder blades, thus enlarging the chest cavity; and relaxing the whole body, releasing all tension from the muscles. The spine represents Earth, the empty space in the torso Ether, respiration Air, and Water and Fire, being diametrically opposed, represent the life force (prāna). Therefore, all of nature is manifest in the body as a sacred unity, a gathering of the Elements. Once āsana is practiced sufficiently, one can move on to prānāyāma, where one is instructed to apply attention to the breath. Sahita prānāyāma is one specific technique that involves inhaling (pūruka), retaining (kumbhaka), then exhaling (recaka), each of which is equally prolonged. Each stage should last as long as the others, usually held for a few seconds and lengthened by a second with practice. You should sit either on a chair or on the ground in a comfortable position, get into āsana, properly aligned, erect, and breathe in for a few seconds, retain the breath for the same length, then exhale for the same time, and repeat. It is similar to “box breathing,” a technique used by Navy SEALs, who inhale for four seconds, hold for four, exhale for four, and wait for four before inhaling again—perhaps it was based on the ancient practice of sahita prānāyāma. By thus controlling the breath, you give it a regular rhythm. According to Yogic texts, there are five breaths: 1.) Prāna, which extends from the toe to the heart to the nose; 2.) Apāna, which extends from the throat to the ribs; 3.) Samāna, which extends from the digestive system to the joints to the navel; 4.) Udāna, which is in the skull and eyebrows; and 5.) Vyāna, which occupies the circulation of the breath, distributing the life force throughout the body. 
The aim is to slow the breath as though you were asleep, when the mind drifts and wavers and you can see into the absolute state of consciousness, "continued consciousness." Just as we instinctively, subconsciously take shallow breaths out of habit, so we must learn to make controlled, rhythmic breathing a subconscious, instinctive habit, until, throughout the day, we notice ourselves breathing deeply and steadily without having to will it.
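The equal-phase timing of sahita prānāyāma can be sketched as a simple pacing timer. This is only an illustration: the function and phase labels are my own naming, and the starting length and one-second lengthening per cycle are just the example values mentioned above.

```python
import time

# A sketch (names are mine) of a pacing timer for sahita pranayama:
# inhale, retain, exhale, each phase equally long, with the whole
# cycle lengthening by one second each round, as described above.

def breathing_pacer(start_seconds=4, cycles=3, tick=time.sleep):
    """Return a log of phases; `tick` waits out each phase (injectable for testing)."""
    phases = ["inhale (puruka)", "retain (kumbhaka)", "exhale (recaka)"]
    log = []
    for i in range(cycles):
        duration = start_seconds + i  # lengthen every full cycle by a second
        for phase in phases:
            log.append(f"{phase}: {duration}s")
            tick(duration)  # wait out the phase
    return log

# Example (instant, using a no-op tick):
# breathing_pacer(tick=lambda s: None)[:3]
#   -> ['inhale (puruka): 4s', 'retain (kumbhaka): 4s', 'exhale (recaka): 4s']
```

Passing a no-op `tick` lets you inspect the schedule without waiting; with the default `time.sleep`, the same function paces a real session.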

Other traditions outside of Indian philosophy also practice extension of the breath. Chinese Taoism has a practice called "embryonic respiration" (t'ai-hsi), whereby the breath is sustained for the goal of a longer life, ch'ang sheng. It was thought that the breath held the power of immortality: if one could hold one's breath for the time of a thousand respirations, one would become immortal. The breath was taken very seriously, and it was trained rigorously. Other powers attributed to breath mastery were the ability to walk on fire, to not drown, and to cure sickness by expelling bad humors and airs. Islam, too, has breathing practices, as does Hesychasm in the Christian East. Sufis perform dhikr, a devotional prayer, often private and solitary, that always involves the breath. Ancient Greek philosophy held air to be vital as well. One of the first philosophers, the pre-Socratic Anaximenes, held that the arche (ἀρχή) of the world, the single element from which the Cosmos and everything in it was made, was Air. A monist, he, like Thales and Anaximander, believed a single element was the basis of reality. Air, he taught, was concentrated in the breath, which functioned as man's psyche (ψυχή), or soul/spirit, whence came "psychology." Although its origin is widely debated, the saying "Bless you" has been proposed to come from an Anaximenes-influenced Ancient Greece: a sneeze was thought to expel the breath, synonymous with the soul, so people would say "Bless you" to keep the soul inside the body. A couple of centuries later, the Stoics posited two principles in Nature, one passive, the other active. Pneuma (πνεῦμα), translated as breath, was the active principle, a sort of fiery air pervading reality. From it we get words like "pneumatic" and "pneumonia," all relating to the breath.

Today, the breath is again becoming a center of attention in modern mindfulness practices. It is well known that deep breathing has numerous health benefits, such as lowering stress, improving clarity and mood, easing negative thoughts, and grounding oneself in the present.[1] Buddhist writers often identify the breath as an "anchor": something to return to when distracted, to shift to in order to be present, to consult when invaded by thoughts. Some of the thinking goes: if you can notice, appreciate, and love something as small, precious, and minute as the breath, then you can surely extend that attention and love to everything else in life, big or small. In other words, if you can appreciate the simplicity of the breath, then you can also appreciate, for example, the simplicity of a tree, or the smell of the coffee you make every morning, adding a depth, an added layer of meaning, to everyday life. The central teaching of both Buddhism and Zen regarding the breath is to notice. You just have to acknowledge at any moment, "I am breathing," nothing else. To stop in the middle of the day, halting whatever you are doing, and notice the breath, to just know and be conscious of it, is to appreciate it, considering that we move through our days like automatons without ever giving notice to our unsung breaths, without which we could not live. During mindfulness meditation, the goal is to feel the breath passively, observantly, unobtrusively. The feeling of the breath as you inhale and exhale, as it comes in through your nose, down your throat, down the bronchial tree, and out the mouth: this is what we must pay attention to. A particular Zen practice calls for beginning practitioners to count the breath, counting the in's and out's, only the in's, or only the out's.
Whichever you choose, it is advised that you count up to a number like 10 before restarting; eventually, once the count is ingrained, having been trained many times, you will not have to say it aloud or mentally voice it: your breath will naturally fall into rhythm. In sum, while both Yoga and Buddhism attribute great importance to the breath, they differ in their approaches to it. Yoga's is to control the breath, to apply rhythm, to attune the breath voluntarily; Buddhism's is to notice the breath, to watch it, to be fully and intentionally present with it; one is active, the other passive in its method. Nature is the perfect place to be mindful of the breath. Simply stand, the sun shining down on you, leaves blowing around, and be mindful of the fact that as you exchange CO2 and O2, you are engaged with the trees around you in a mutual, symbiotic exchange, each giving life to the other. You, the trees, and the wildlife are all interconnected, sharing the eternal breath.
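The Zen counting drill described above can be modeled in a few lines. The names here are my own, and the rule of resetting to one after a stray thought, while a common convention of the practice, is an assumption added for illustration.

```python
# A toy model (names are mine) of the Zen breath-counting drill:
# count each breath from 1 up to a target such as 10, then start over.
# Resetting after a stray thought is a common convention of the
# practice, assumed here for illustration.

def count_breaths(events, target=10):
    """events: iterable of 'breath' or 'thought'; returns the counts 'spoken'."""
    counts, n = [], 0
    for event in events:
        if event == "thought":
            n = 0                # distracted: begin again at one
            continue
        n += 1
        counts.append(n)
        if n == target:          # reached the target: restart the cycle
            n = 0
    return counts

# Example: three breaths, a distraction, then two more breaths:
# count_breaths(["breath"] * 3 + ["thought"] + ["breath"] * 2)
#   -> [1, 2, 3, 1, 2]
```

The point of the model is only that the count never runs past the target: the practice is circular, always returning to one.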

Personally, when I do mindfulness meditation, despite having read about the importance of the breath, I never feel anything special. I never get what they mean by "appreciating the breath," no matter how hard I try: I try to "feel" the breath as I inhale, lose it as it moves past the nasal cavity, wonder where it went, then exhale through my mouth, monotonous, uninteresting, without any particular feeling. Hence, I usually focus on my senses rather than on the breath. Recently, however, I discovered that an appreciation of the breath through mindfulness can be achieved in another way, one more suited to my subjective tastes, in which I can truly be alone with it and feel its benefits:

It was 78ºF on a Saturday morning, unbearably hot for a weekend in January, and I was with my fellow runners at track practice. We were all exhausted. We had only just warmed up, yet we were already sweating, all of us taking off our jackets and sweats and putting them on the turf. Our coach gathered us, his back to the sun, and announced fatalistically, "You will be doing 5×300's, Varsity at a 48-second pace. This is going to be the hardest workout all season, and they will only get easier after this." As soon as he said 5×300's, my heart sank, my eyes widened, my jaw nearly dropped, and I could feel my teammates collectively doing the same. Anyone who is a short-distance sprinter specializing in the 100m knows how dreadful 300's are, how they strike fear into your soul: unforgiving, excruciating, unfeeling, merciless. Only 100 meters less than the 400m and 100 meters more than the 200m, they are a terrible, formidable middle state, a Purgatory between two Hells. Senior and freshman runners alike were mortally terrified. Having no choice in the matter, though, we approached the track with heads down and a shuffling gait, unwilling (or was it unable?) to face the track, to look it head on. We were divided into groups of about six to 10 runners, and I was placed in the first heat, with the seniors and juniors, who had to run at a 48-second pace. That cheered me up a bit, seeing as it was the time one might run a full 400m in, but it also meant I had to run 48 seconds, too. Staggered on the track, we got into our lanes, bent our legs, got low, surveyed the track, took in the great distance we had to traverse, contemplated the suffering we would endure, and hoped for the best, forcing out a final breath of repose. Coach said "Go," stopwatch in hand, and we were off.
I followed closely behind the juniors, like a dog its owner, careful not to lose them, not to fall back with the others behind, as I wanted to push myself. The sun beat down on us, and my body pushed to keep up as we turned the bend and straightened out, until it was me and three other runners leading the pack, a few others behind us. When we finished our first rep, I was relieved. It was not too bad; we were running at a pace I likened to a fast jog, the kind of pace at which you go for a casual mile, but with more haste. Those who came up the rear were breathing hard. That morning, before practice, I had completed a 20-minute meditation in which I tried to focus on my breath and my breath alone. As I confessed, it did not work so well, and I could not for the life of me stay with my breath. There and then, though, standing arms akimbo on the grass, sweat across my forehead, legs heavy, I found solace in my breath. In contrast to the rapid, shallow breathing of my teammates, I walked around calmly, breathing slowly and intentionally, in and out, not from the top of my lungs but from the bottom, from the diaphragm, and it made all the difference: I was much more collected. With this in mind, I headed over to the starting line again, ready for rep two, eager to try a new strategy: when I ran, I would focus only on the breath, as I was supposed to during meditation. This next run, I told myself, was not a run at all but another meditation session, a practice of mindfulness: mindful sprinting. The thought instilled in me a kind of vitalization, a readiness for pain, whereas the other runners came up sluggishly, dreading the next rep. Instead of viewing the track as a stumbling block, I viewed it as a hurdle (no pun intended), something to overcome, to jump over, and thus to grow from. The sprint was an opportunity, not a punishment.
We lined up again after the last heat finished. Once more staggered, we heard "Go," and we went. Familiar with the pacing, I set myself behind the juniors and kept close to them, careful not to speed up at the bend but to relax. I breathed as though I were not running but sitting still, meditating, still breathing from the diaphragm and exhaling through my mouth. The first 100m was not hard, nor was the second. It was always the third that was hardest. My friend, who had until then been running at my hip, had fallen behind on the second leg, his legs too tired, his breath too short, to keep up. This was the final straightaway. Lactic acid had built up in my legs, making them heavy, so that just raising a leg took most of my effort. I thought of what my coach had told me, that I needed to keep my knees high, especially at the end; and I turned my attention to my breath. Unlike pain, unlike tiredness, the breath is not transitory but permanent, constant, unchanging, eternal, a dependable cycle of air, of vitality, coursing through my body. It entered the foreground while everything else faded into the background: the track, my periphery, the pain in my legs, the pressure in my chest, the sweat dripping as I ran. It all went away, impermanent, mere sensations, perceptions, which could easily have been illusory, as opposed to the breath, of which I was most certain at that moment. Respiro, ergo sum: the only certainty, the only object of which I was conscious, to which I was willing to devote my attention. It felt as if my mind and breath were alone, two objects painted onto an empty canvas, both transcendent and immortal, real, unlike pain, which felt unreal; the track was the dependent variable, my breath the independent, time passing away into seconds as my legs carried me forward, knees high, arms pumping cheek to cheek, my breath still constant, till I was nearing the end, feeling great, triumphant. Suddenly all the sensations dawned on me again, but they did not matter, not the pain, not the feeling in my lungs as I watched my running shadow on the track; I no longer felt alone with my breath. I saw the finish line and, pushing one last time, crossed it. As I peeled off to the side to make room for the others, I interlaced my fingers and put my arms over my head, opening my chest to make my breathing easier, more controlled, while the others stood out of breath.

[1] A simple search will turn up hundreds of results if you want to read more. Here are two: 18 Benefits and 21 Benefits


For further reading: 
Running with the Mind of Meditation by Sakyong Mipham (2012)
Light on the Yoga Sūtras of Patañjali by B.K.S. Iyengar (1996)
Mindfulness & the Natural World by Claire Thompson (2013)
Encyclopedia of the Human Body by Richard Walker (2002)
Wherever You Go, There You Are by Jon Kabat-Zinn (2005)
Yoga: Immortality and Freedom by Mircea Eliade (1958)
The Complete Human Body by Dr. Alice Roberts (2010)
The Greek Thinkers, Vol. 1 by Theodor Gomperz (1964)
Philosophies of India by Heinrich Zimmer (1951)
Coming to Our Senses by Jon Kabat-Zinn (2005)
The Human Body Book by Steve Parker (2007)
by Joseph Goldstein (2016)
Zen Training by Katsuki Sekida (1985)
Chi Running by Danny Dreyer (2004)