Roads to IELTS

TEST 1.1 – MAKING TIME FOR SCIENCE

Chronobiology might sound a little futuristic – like something from a science fiction novel, perhaps – but it’s actually a field of study that concerns one of the oldest processes life on this planet has ever known: short-term rhythms of time and their effect on flora and fauna.

This can take many forms. Marine life, for example, is influenced by tidal patterns. Animals tend to be active or inactive depending on the position of the sun or moon. Numerous creatures, humans included, are largely diurnal – that is, they like to come out during the hours of sunlight. Nocturnal animals, such as bats and possums, prefer to forage by night. A third group is known as crepuscular: they thrive in the low light of dawn and dusk and remain inactive at other hours.

When it comes to humans, chronobiologists are interested in what is known as the circadian rhythm. This is the complete cycle our bodies are naturally geared to undergo within the passage of a twenty-four hour day. Aside from sleeping at night and waking during the day, each cycle involves many other factors such as changes in blood pressure and body temperature. Not everyone has an identical circadian rhythm. ‘Night people’, for example, often describe how they find it very hard to operate during the morning, but become alert and focused by evening. This is a benign variation within circadian rhythms known as a chronotype.

Scientists have only a limited ability to make durable modifications to these chronobiological demands. Recent therapeutic developments for humans, such as artificial light machines and melatonin administration, can reset our circadian rhythms, for example, but our bodies can tell the difference and health suffers when we breach these natural rhythms for extended periods of time. Plants appear no more malleable in this respect; studies demonstrate that vegetables grown in season and ripened on the tree are far higher in essential nutrients than those grown in greenhouses and ripened by laser.

Knowledge of chronobiological patterns can have many pragmatic implications for our day-to-day lives. While contemporary living can sometimes appear to subjugate biology – after all, who needs circadian rhythms when we have caffeine pills, energy drinks, shift work and cities that never sleep? – keeping in sync with our body clock is important.

The average urban resident, for example, rouses at the eye-blearing time of 6.04 a.m., which researchers believe to be far too early. One study found that even rising at 7.00 a.m. has deleterious effects on health unless exercise is performed for 30 minutes afterward. The optimum moment has been whittled down to 7.22 a.m.; muscle aches, headaches and moodiness were reported to be lowest by participants in the study who awoke then.

Once you’re up and ready to go, what then? If you’re trying to shed some extra pounds, dieticians are adamant: never skip breakfast. This disorients your circadian rhythm and puts your body in starvation mode. The recommended course of action is to follow an intense workout with a carbohydrate-rich breakfast; done the other way round, weight loss results are not as pronounced.

Morning is also great for breaking out the vitamins. Supplement absorption by the body is not time-dependent, but naturopath Pam Stone notes that the extra boost at breakfast helps us get energised for the day ahead. For improved absorption, Stone suggests pairing supplements with a food in which they are soluble and steering clear of caffeinated beverages. Finally, Stone warns to take care with storage: high potency is best for absorption, and warmth and humidity are known to deplete a supplement’s potency.

After-dinner espressos are becoming more of a tradition – we have the Italians to thank for that – but to prepare for a good night’s sleep we are better off putting the brakes on caffeine consumption as early as 3 p.m. With a seven-hour half-life, a cup of coffee containing 90 mg of caffeine taken at this hour could still leave 45 mg of caffeine in your nervous system at ten o’clock that evening. It is essential that, by the time you are ready to sleep, your body is rid of all traces.
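To make the caffeine arithmetic above concrete, here is a minimal sketch of the standard half-life decay calculation; the 90 mg dose, seven-hour half-life and seven-hour gap between 3 p.m. and 10 p.m. are simply the figures quoted in the passage, and the function name is invented for illustration.

def caffeine_remaining(dose_mg, half_life_hours, elapsed_hours):
    # exponential decay: each full half-life halves whatever remains
    return dose_mg * 0.5 ** (elapsed_hours / half_life_hours)

# 90 mg taken at 3 p.m., checked at 10 p.m. (seven hours later) -> 45.0 mg
print(caffeine_remaining(90, 7, 7))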

Evenings are important for winding down before sleep; however, dietician Geraldine Georgeou warns that an after-five carbohydrate-fast is more cultural myth than chronobiological demand. Such a fast will simply deprive your body of the energy it needs. Overloading your gut could lead to indigestion, though. Our digestive tracts do not shut down for the night entirely, but their work slows to a crawl as our bodies prepare for sleep. Consuming a modest snack should be entirely sufficient.

TEST 1.2 – THE TRIUNE BRAIN

The first of our three brains to evolve is what scientists call the reptilian cortex. This brain sustains the elementary activities of animal survival such as respiration, adequate rest and a beating heart. We are not required to consciously “think” about these activities. The reptilian cortex also houses the “startle centre”, a mechanism that facilitates swift reactions to unexpected occurrences in our surroundings. That panicked lurch you experience when a door slams shut somewhere in the house, or the heightened awareness you feel when a twig cracks in a nearby bush while out on an evening stroll are both examples of the reptilian cortex at work. When it comes to our interaction with others, the reptilian brain offers up only the most basic impulses: aggression, mating, and territorial defence. There is no great difference, in this sense, between a crocodile defending its spot along the river and a turf war between two urban gangs.

Although the lizard may stake a claim to its habitat, it exerts total indifference toward the well-being of its young. Listen to the anguished squeal of a dolphin separated from its pod or witness the sight of elephants mourning their dead, however, and it is clear that a new development is at play. Scientists have identified this as the limbic cortex. Unique to mammals, the limbic cortex impels creatures to nurture their offspring by delivering feelings of tenderness and warmth to the parent when children are nearby. These same sensations also cause mammals to develop various types of social relations and kinship networks. When we are with others of “our kind” – be it at soccer practice, church, school or a nightclub – we experience positive sensations of togetherness, solidarity and comfort. If we spend too long away from these networks, then loneliness sets in and encourages us to seek companionship.

Human capabilities, however, extend far beyond the scope of these two cortexes. Humans eat, sleep and play, but we also speak, plot, rationalise and debate finer points of morality. Our unique abilities are the result of an expansive third brain – the neocortex – which engages with logic, reason and ideas. The power of the neocortex comes from its ability to think beyond the present, concrete moment. While other mammals are mainly restricted to impulsive actions (although some, such as apes, can learn and remember simple lessons), humans can think about the “big picture”. We can string together simple lessons (for example, an apple drops downwards from a tree; hurting others causes unhappiness) to develop complex theories of physical or social phenomena (such as the laws of gravity and a concern for human rights).

The neocortex is also responsible for the process by which we decide on and commit to particular courses of action. Strung together over time, these choices can accumulate into feats of progress unknown to other animals. Anticipating a better grade on the following morning’s exam, a student can ignore the limbic urge to socialise and go to sleep early instead. Over three years, this ongoing sacrifice translates into a first class degree and a scholarship to graduate school; over a lifetime, it can mean ground-breaking contributions to human knowledge and development. The ability to sacrifice our drive for immediate satisfaction in order to benefit later is a product of the neocortex.

Understanding the triune brain can help us appreciate the different natures of brain damage and psychological disorders. The most devastating form of brain damage, for example, is a condition in which someone is understood to be brain dead. In this state a person appears merely unconscious – sleeping, perhaps – but this is illusory. Here, the reptilian brain is functioning on autopilot despite the permanent loss of other cortexes.

Disturbances to the limbic cortex are registered in a different manner. Pups with limbic damage can move around and feed themselves well enough but do not register the presence of their littermates. Scientists have observed how, after a limbic lobotomy, “one impaired monkey stepped on his outraged peers as if treading on a log or a rock”. In our own species, limbic damage is closely related to sociopathic behaviour. Sociopaths in possession of fully-functioning neocortexes are often shrewd and emotionally intelligent people but lack any ability to relate to, empathise with or express concern for others.

One of the neurological wonders of history occurred when a railway worker named Phineas Gage survived an incident during which a metal rod skewered his skull, taking a considerable amount of his neocortex with it. Though Gage continued to live and work as before, his fellow employees observed a shift in the equilibrium of his personality. Gage’s animal propensities were now sharply pronounced while his intellectual abilities suffered; garrulous or obscene jokes replaced his once quick wit. New findings suggest, however, that Gage managed to soften these abrupt changes over time and rediscover an appropriate social manner. This would indicate that reparative therapy has the potential to help patients with advanced brain trauma to gain an improved quality of life.

TEST 1.3 – HELIUM’S FUTURE UP IN THE AIR

In recent years we have all been exposed to dire media reports concerning the impending demise of global coal and oil reserves, but the depletion of another key non-renewable resource continues without receiving much press at all. Helium – an inert, odourless, monatomic element known to lay people as the substance that makes balloons float and voices squeak when inhaled – could be gone from this planet within a generation.

Helium itself is not rare; there is actually a plentiful supply of it in the cosmos. In fact, 24 per cent of our galaxy’s elemental mass consists of helium, which makes it the second most abundant element in our universe. Because of its lightness, however, most helium vanished from our own planet many years ago. Consequently, only a minuscule proportion – 0.00052%, to be exact – remains in Earth’s atmosphere. Helium is the by-product of millennia of radioactive decay from the elements thorium and uranium. The helium is mostly trapped in subterranean natural gas bunkers and commercially extracted through a method known as fractional distillation.

The loss of helium on Earth would affect society greatly. Defying the perception of it as a novelty substance for parties and gimmicks, the element actually has many vital applications in society. Probably the most well known commercial usage is in airships and blimps (non-flammable helium replaced hydrogen as the lifting gas du jour after the Hindenburg catastrophe in 1937, during which an airship burst into flames and crashed to the ground killing some passengers and crew). But helium is also instrumental in deep-sea diving, where it is blended with nitrogen to mitigate the dangers of inhaling ordinary air under high pressure; as a cleaning agent for rocket engines; and, in its most prevalent use, as a coolant for superconducting magnets in hospital MRI (magnetic resonance imaging) scanners.

The possibility of losing helium forever poses the threat of a real crisis because its unique qualities are extraordinarily difficult, if not impossible, to duplicate (certainly, no biosynthetic ersatz product is close to approaching the point of feasibility for helium, even as similar developments continue apace for oil and coal). Helium is even cheerfully derided as a “loner” element since it does not adhere to other molecules like its cousin, hydrogen. According to Dr. Lee Sobotka, helium is the “most noble of gases, meaning it’s very stable and non-reactive for the most part … it has a closed electronic configuration, a very tightly bound atom. It is this coveting of its own electrons that prevents combination with other elements”. Another important attribute is helium’s unique boiling point, which is lower than that of any other element. The worsening global shortage could render millions of dollars of high-value, life-saving equipment totally useless. The dwindling supplies have already resulted in the postponement of research and development projects in physics laboratories and manufacturing plants around the world. There is an enormous supply and demand imbalance, partly brought about by the expansion of high-tech manufacturing in Asia.

The source of the problem is the Helium Privatisation Act (HPA), an American law passed in 1996 that requires the U.S. National Helium Reserve to liquidate its helium assets by 2015 regardless of the market price. Although the law was intended by a U.S. Congress ignorant of its ramifications simply to recoup the original cost of the reserve, the result of this fire sale is that global helium prices are so artificially deflated that few can be bothered recycling the substance or using it judiciously. Deflated values also mean that natural gas extractors see no reason to capture helium. Much is lost in the process of extraction. As Sobotka notes: “[t]he government had the good vision to store helium, and the question now is: Will the corporations have the vision to capture it when extracting natural gas, and consumers the wisdom to recycle? This takes long-term vision because present market forces are not sufficient to compel prudent practice”. For Nobel Prize laureate Robert Richardson, the U.S. government must be prevailed upon to repeal its privatisation policy as the country supplies over 80 per cent of global helium, mostly from the National Helium Reserve. For Richardson, a twenty- to fifty-fold increase in prices would provide incentives to recycle.

A number of steps need to be taken in order to avert a costly predicament in the coming decades. Firstly, all existing supplies of helium ought to be conserved and released only by permit, with medical uses receiving precedence over other commercial or recreational demands. Secondly, conservation should be obligatory and enforced by a regulatory agency. At the moment some users, such as hospitals, tend to recycle diligently while others, such as NASA, squander massive amounts of helium. Lastly, research into alternatives to helium must begin in earnest.

TEST 2.1 – THE MAGIC OF KEFIR

The shepherds of the North Caucasus region of Europe were only trying to transport milk the best way they knew how – in leather pouches strapped to the side of donkeys – when they made a significant discovery. A fermentation process would sometimes inadvertently occur en route, and when the pouches were opened up on arrival they would no longer contain milk but rather a pungent, effervescent, low-alcohol substance instead. This unexpected development was a blessing in disguise. The new drink – which acquired the name kefir – turned out to be a health tonic, a naturally-preserved dairy product and a tasty addition to our culinary repertoire.

Although their exact origin remains a mystery, we do know that yeast-based kefir grains have always been at the root of the kefir phenomenon. These grains are capable of a remarkable feat: in contradistinction to most other items you might find in a grocery store, they actually expand and propagate with use. This is because the grains, which are granular to the touch and bear a slight resemblance to cauliflower rosettes, house active cultures that feed on lactose when added to milk. Consequently, a bigger problem for most kefir drinkers is not where to source new kefir grains, but what to do with the ones they already have!

The great thing about kefir is that it does not require a manufacturing line in order to be produced. Grains can be simply thrown in with a batch of milk for ripening to begin. The mixture then requires a cool, dark place to live and grow, with periodic unsettling to prevent clumping (Caucasus inhabitants began storing the concoction in animal-skin satchels on the back of doors – every time someone entered the room the mixture would get lightly shaken). After about 24 hours the yeast cultures in the grains have multiplied and devoured most of the milk sugars, and the final product is then ready for human consumption.

Nothing compares to a person’s first encounter with kefir. The smooth, uniform consistency rolls over the tongue in a manner akin to liquefied yogurt. The sharp, tart pungency of unsweetened yogurt is there too, but there is also a slight hint of effervescence, something most users will have previously associated only with mineral waters, soda or beer. Kefir also comes with a subtle aroma of yeast, and depending on the type of milk and ripening conditions, ethanol content can reach up to two or three per cent – about on par with a decent lager – although you can expect around 0.8 to one per cent for a typical day-old preparation. This can bring out a tiny edge of alcohol in the kefir’s flavour.

Although it has prevailed largely as a fermented milk drink, over the years kefir has acquired a number of other uses. Many bakers use it instead of starter yeast in the preparation of sourdough, and the tangy flavour also makes kefir an ideal buttermilk substitute in pancakes. Kefir also accompanies sour cream as one of the main ingredients in cold beetroot soup and can be used in lieu of regular cow’s milk on granola or cereal. As a way to keep their digestive systems fine-tuned, athletes sometimes combine kefir with yoghurt in protein shakes.

Associated for centuries with pictures of Slavic babushkas clutching a shawl in one hand and a cup of kefir in the other, the unassuming beverage has become a minor celebrity of the nascent health food movement in the contemporary West. Every day, more studies pour out supporting the benefits of a diet high in probiotics. This trend toward consuming probiotics has engulfed the leisure classes in these countries to the point that it is poised to become, according to some commentators, “the next multivitamin”. These days the word kefir is consequently more likely to bring to mind glamorous, yoga mat-toting women from Los Angeles than austere visions of blustery Eastern Europe.

Kefir’s rise in popularity has encouraged producers to take short cuts or alter the production process. Some home users have omitted the ripening and culturing process while commercial dealers often add thickeners, stabilisers and sweeteners. But the beauty of kefir is that, at its healthiest and tastiest, making it is a remarkably affordable, uncluttered process, as any accidental invention is bound to be. All that is necessary are some grains, milk and a little bit of patience. A return to the unadulterated kefir-making of old is in everyone’s interest.

TEST 2.2 – FOOD FOR THOUGHT

Why not eat insects? So asked British entomologist Vincent M. Holt in the title of his 1885 treatise on the benefits of what he named entomophagy – the consumption of insects (and similar creatures) as a food source. The prospect of eating dishes such as “wireworm sauce” and “slug soup” failed to garner favour amongst those in the stuffy, proper, Victorian social milieu of his time, however, and Holt’s visionary ideas were considered at best eccentric, at worst an offence to every refined palate. Anticipating such a reaction, Holt acknowledged the difficulty in unseating deep-rooted prejudices against insect cuisine, but quietly asserted his confidence that “we shall some day quite gladly cook and eat them”.

It has taken nearly 130 years, but an eclectic Western-driven movement has finally mounted around the entomophagic cause. In Los Angeles and other cosmopolitan Western cities, insects have been caught up in the endless pursuit of novel and authentic delicacies. “Eating grasshoppers is a thing you do here”, bug-supplier Bricia Lopez has explained. “There’s more of a ‘cool’ factor involved.” Meanwhile, the Food and Agriculture Organization has considered a policy paper on the subject, initiated farming projects in Laos, and set down plans for a world congress on insect farming in 2013.

Eating insects is not a new phenomenon. In fact, insects and other such creatures are already eaten in 80 per cent of the world’s countries, prepared in customary dishes ranging from deep-fried tarantula in Cambodia to bowls of baby bees in China. With the specialist knowledge that Western companies and organisations can bring to the table, however, these hand-prepared delicacies have the potential to be produced on a scale large enough to lower costs and open up mass markets. A new American company, for example, is attempting to develop pressurisation machines that would de-shell insects and make them available in the form of cutlets. According to the entrepreneur behind the company, Matthew Krisiloff, this will be the key to pleasing the uninitiated palate.

Insects certainly possess some key advantages over traditional Western meat sources. According to research findings from Professor Arnold van Huis, a Dutch entomologist, breeding insects results in far fewer noxious by-products. Insects produce less ammonia than pig and poultry farming, ten times less methane than livestock, and 300 times less nitrous oxide. Huis also notes that insects – being cold-blooded creatures – can convert food to protein at a rate far superior to that of cows, since the latter exhaust much of their energy just keeping themselves warm.

Although insects are sometimes perceived by Westerners as unhygienic or disease-ridden, they are a reliable option in light of recent global epidemics (as Holt pointed out many years ago, insects are “decidedly more particular in their feeding than ourselves”). Because bugs are genetically distant from humans, species-hopping diseases such as swine flu or mad cow disease are much less likely to start or spread amongst grasshoppers or slugs than in poultry and cattle. Furthermore, the squalid, cramped quarters that encourage diseases to propagate among many animal populations are actually the residence of choice for insects, which thrive in such conditions.

Then, of course, there are the commercial gains. As FAO Forestry Manager Patrick Durst notes, in developing countries many rural people and traditional forest dwellers have remarkable knowledge about managing insect populations to produce food. Until now, they have only used this knowledge to meet their own subsistence needs, but Durst believes that, with the adoption of modern technology and improved promotional methods, opportunities to expand the market to new consumers will flourish. This could provide a crucial step into the global economic arena for those primarily rural, impoverished populations who have been excluded from the rise of manufacturing and large-scale agriculture.

Nevertheless, much stands in the way of the entomophagic movement. One problem is the damage that has been caused, and continues to be caused, by Western organisations prepared to kill off grasshoppers and locusts – complete food proteins – in favour of preserving the incomplete protein crops of millet, wheat, barley and maize. Entomologist Florence Dunkel has described the consequences of such interventions. While examining children’s diets as a part of her field work in Mali, Dunkel discovered that a protein deficiency syndrome called kwashiorkor was increasing in incidence. Children in the area were once protected against kwashiorkor by a diet high in grasshoppers, but these had become unsafe to eat after pesticide use in the area increased.

A further issue is the persistent fear many Westerners still have about eating insects. “The problem is the ick factor – the eyes, the wings, the legs,” Krisiloff has said. “It’s not as simple as hiding it in a bug nugget. People won’t accept it beyond the novelty. When you think of a chicken, you think of a chicken breast, not the eyes, wings, and beak.” For Marcel Dicke, the key lies in camouflaging the fact that people are eating insects at all. Insect flour is one of his propositions, as is changing the language of insect cuisine. “If you say it’s mealworms, it makes people think of ringworm”, he notes. “So, stop saying ‘worm’. If we use Latin names, say it’s a Tenebrio quiche, it sounds much more fancy.” For Krisiloff, Dicke and others, keeping quiet about the gritty reality of our food is often the best approach.

It is yet to be seen if history will truly redeem Vincent Holt and his suggestion that British families should gather around their dining tables for a breakfast of “moths on toast”. It is clear, however, that entomophagy, far from being a kooky sideshow to the real business of food production, has much to offer in meeting the challenges that global societies in the 21st century will face.

TEST 2.3 – LOVE STORIES

Love stories are often associated – at least in the popular imagination – with fairy tales, adolescent daydreams, Disney movies and other frivolous pastimes. For psychologists developing taxonomies of affection and attachment, however, this is an area of rigorous academic pursuit. Beginning in the early 1970s with the groundbreaking contributions of John Alan Lee, researchers have developed classifications that they believe better characterise our romantic predispositions. This involves examining not a single, universal emotional expression (love), but rather a series of divergent behaviours and narratives, each with its own purpose, desired outcome and state of mind. Lee’s painstaking methodology involved participants matching 170 typical romantic encounters (e.g., “The night after I met X…”) with nearly 1,500 possible reactions (“I could hardly get to sleep” or “I wrote X a letter”). The patterns unknowingly expressed by respondents culminated in a taxonomy of six distinct love styles that continue to inform research in the area forty years later.

The first of these styles – eros – is closely tied in with images of romantic love that are promulgated in Western popular culture. Characteristic of this style is a passionate emotional intensity, a strong physical magnetism – as if the two partners were literally being pulled together – and a sense of inevitability about the relationship. A related but more frantic style of love called mania involves an obsessive, compulsive attitude toward one’s partner. Vast swings in mood from ecstasy to agony – dependent on the level of attention a person is receiving from his or her partner – are typical of manic love.

Two styles were much more subdued, however. Storge is a quiet, companionate type of loving – love by evolution rather than love by revolution, according to some theorists. Relationships built on a foundation of platonic affection and caring are archetypal of storge. When care is extended to a sacrificial level of doting, however, it becomes another style – agape. In an agape relationship one partner becomes a caretaker, exalting the welfare of the other above his or her own needs.

The final two styles of love seem to lack aspects of emotion and reciprocity altogether. The ludus style envisions relationships primarily as a game in which it is best to play the field or experience a diverse set of partners over time. Mutually-gratifying outcomes in relationships are not considered necessary, and deception of a partner and lack of disclosure about one’s activities are also typical. While Lee found that college students in his study overwhelmingly disagreed with the tenets of this style, substantial numbers of them acted in a typically ludic style while dating, a finding that bears out the deceit inherent in ludus. Pragma lovers also downplayed emotive aspects of relationships but favoured practical, sensible connections. Successful arranged marriages are a great example of pragma, in that the couple decide to make the relationship work; but anyone who seeks an ideal partner with a shopping list of necessary attributes (high salary, same religion, etc.) fits the classification.

Robert J. Sternberg’s contemporary research on love stories has elaborated on how these narratives determine the shape of our relationships and our lives. Sternberg and others have proposed and tested the theory of love as a story, whereby the interaction of our personal attributes with the environment – which we in part create – leads to the development of stories about love that we then seek to fulfil, to the extent possible, in our lives. Sternberg’s taxonomy of love stories numbers far more, at twenty-six, than Lee’s taxonomy of love styles, but as Sternberg himself admits there is plenty of overlap. The seventh story, Game, coincides with ludus, for example, while the nineteenth story, Sacrifice, fits neatly on top of agape.

Sternberg’s research demonstrates that we may have predilections toward multiple love stories, each represented in a mental hierarchy and varying in weight in terms of their personal significance. This explains the frustration many of us experience when comparing potential partners. One person often fulfils some expected narratives – such as a need for mystery and fantasy – while lacking the ability to meet the demands of others (which may lie in direct contradiction). It is also the case that stories have varying abilities to adapt to a given cultural milieu and its respective demands. Love stories are, therefore, interactive and adaptive phenomena in our lives rather than rigid prescriptions.

Sternberg also explores how our love stories interact with the love stories of our partners. What happens when someone who sees love as art collides with someone who sees love as business? Can a Sewing story (love is what you make it) co-exist with a Theatre story (love is a script with predictable acts, scenes and lines)? Certainly, it is clear that we look for partners with love stories that complement and are compatible with our own narratives. But they do not have to be an identical match. Someone who sees love as mystery and art, for example, might locate that mystery better in a partner who views love through a lens of business and humour. Not all love stories, however, are equally well predisposed to relationship longevity; stories that view love as a game, as a kind of surveillance or as an addiction are all unlikely to prove durable.

Research on love stories continues apace. Defying the myth that rigorous science and the romantic persuasions of ordinary people are incompatible, this research demonstrates that good psychology can clarify and comment on the way we give affection and form attachments.

TEST 3.1 – ELECTRORECEPTION

Open your eyes in sea water and it is difficult to see much more than a murky, bleary green colour. Sounds, too, are garbled and difficult to comprehend. Without specialised equipment humans would be lost in these deep sea habitats, so how do fish make it seem so easy? Much of this is due to a biological phenomenon known as electroreception – the ability to perceive and act upon electrical stimuli as part of the overall senses. This ability is only found in aquatic or amphibious species because water is an efficient conductor of electricity.

Electroreception comes in two variants. All animals (including humans) generate electric signals, which are emitted by the nervous system; some animals, however, have the ability – known as passive electroreception – to receive and decode the electric signals generated by other animals in order to sense their location.

Other creatures can go further still, however. Animals with active electroreception possess bodily organs that generate special electric signals on cue. These can be used for mating signals and territorial displays as well as locating objects in the water. Active electroreceptors can differentiate between the various resistances that their electrical currents encounter. This can help them identify whether another creature is prey, predator or something that is best left alone. Active electroreception has a range of about one body length – usually just enough to give its host time to get out of the way or go in for the kill.

One fascinating use of active electroreception – known as the Jamming Avoidance Response mechanism – has been observed between members of some species known as the weakly electric fish. When two such electric fish meet in the ocean while using the same frequency, each shifts the frequency of its discharge so that the two are transmitting on different frequencies. Doing so prevents their electroreception faculties from becoming jammed. Long before citizens’ band radio users first had to yell “Get off my frequency!” at hapless novices cluttering the airwaves, at least one species had found a way to peacefully and quickly resolve this type of dispute.
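As a rough illustration of the avoidance behaviour just described, the sketch below is an assumption made for illustration only and is not taken from the passage: the 5 Hz step and 50 Hz jamming band are invented values, and the function simply shifts a fish’s discharge frequency away from its neighbour’s whenever the two are close enough to jam.

def jamming_avoidance(own_hz, neighbour_hz, step=5.0, jam_band=50.0):
    # if the two discharges are already far enough apart, no adjustment is needed
    if abs(own_hz - neighbour_hz) >= jam_band:
        return own_hz
    # otherwise shift away from the neighbour: the higher fish moves up, the lower moves down
    return own_hz + step if own_hz >= neighbour_hz else own_hz - step

# two fish meeting at 400 Hz and 402 Hz drift apart in opposite directions
print(jamming_avoidance(400.0, 402.0))  # 395.0
print(jamming_avoidance(402.0, 400.0))  # 407.0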

Electroreception can also play an important role in animal defences. Rays are one such example. Young ray embryos develop inside egg cases that are attached to the sea bed. The embryos keep their tails in constant motion so as to pump water and allow them to breathe through the egg’s casing. If the embryo’s electroreceptors detect the presence of a predatory fish in the vicinity, however, the embryo stops moving (and in so doing ceases transmitting electric currents) until the fish has moved on. Because marine life of various types is often travelling past, the embryo has evolved only to react to signals that are characteristic of the respiratory movements of potential predators such as sharks.

Many people fear swimming in the ocean because of sharks. In some respects, this concern is well grounded – humans are poorly equipped when it comes to electroreceptive defence mechanisms. Sharks, meanwhile, hunt with extraordinary precision. They initially lock onto their prey through a keen sense of smell (two thirds of a shark’s brain is devoted entirely to its olfactory organs). As the shark closes in on its prey, it tunes into electric signals that ensure a precise strike on its target; this sense is so strong that the shark even attacks blind by letting its eyes recede for protection.

Normally, when humans are attacked it is purely by accident. Since sharks cannot detect from electroreception whether or not something will satisfy their tastes, they tend to try before they buy, taking one or two bites and then assessing the results (our sinewy muscle does not compare well with plumper, softer prey such as seals). Repeat attacks are highly likely once a human is bleeding, however; the force of the electric field is heightened by salt in the blood which creates the perfect setting for a feeding frenzy. In areas where shark attacks on humans are likely to occur, scientists are exploring ways to create artificial electroreceptors that would disorient the sharks and repel them from swimming beaches.

There is much that we do not yet know concerning how electroreception functions. Although researchers have documented how electroreception alters hunting, defence and communication systems through observation, the exact neurological processes that encode and decode this information are unclear. Scientists are also exploring the role electroreception plays in navigation. Some have proposed that salt water and magnetic fields from the Earth’s core may interact to form electrical currents that sharks use for migratory purposes.

TEST 3.2 – FAIR GAMES?

For seventeen days every four years the world is briefly arrested by the captivating, dizzying spectacle of athleticism, ambition, pride and celebration on display at the Summer Olympic Games. After the last weary spectators and competitors have returned home, however, host cities are often left awash in debt and saddled with costly infrastructure maintenance. The staggering expenses involved in a successful Olympic bid are often assumed to be easily mitigated by tourist revenues and an increase in local employment, but more often than not host cities are short-changed and their taxpayers for generations to come are left settling the debt.

Olympic extravagances begin with the application process. Bidding alone will set most cities back about $20 million, and while officially bidding only takes two years (for cities that make the shortlist), most cities can expect to exhaust a decade working on their bid from the moment it is initiated to the announcement of voting results from International Olympic Committee members. Aside from the financial costs of the bid alone, the process ties up real estate in prized urban locations until the outcome is known. This can cost local economies millions of dollars of lost revenue from private developers who could have made use of the land, and can also mean that particular urban quarters lose their vitality due to the vacant lots. All of this can be for nothing if a bidding city does not appease the whims of IOC members – private connections and opinions on government conduct often hold sway (Chicago’s 2016 bid is thought to have been undercut by tensions over U.S. foreign policy).

Bidding costs do not compare, however, to the exorbitant bills that come with hosting the Olympic Games themselves. As is typical with large-scale, one-off projects, budgeting for the Olympics is a notoriously formidable task. Angelenos have only recently finished paying off their budget-breaking 1984 Olympics; Montreal is still in debt for its 1976 Games (to add insult to injury, Canada is the only host country to have failed to win a single gold medal during its own Olympics). The tradition of runaway expenses has persisted in recent years. London Olympics managers have admitted that their 2012 costs may increase ten times over their initial projections, leaving taxpayers 20 billion pounds in the red.

Hosting the Olympics is often understood to be an excellent way to update a city’s sporting infrastructure. The extensive demands of Olympic sports include aquatic complexes, equestrian circuits, shooting ranges, beach volleyball courts, and, of course, an 80,000 seat athletic stadium. Yet these demands are typically only necessary to accommodate a brief influx of athletes from around the world. Despite the enthusiasm many populations initially have for the development of world-class sporting complexes in their home towns, these complexes typically fall into disuse after the Olympic fervour has waned. Even Australia, home to one of the world’s most sportive populations, has left its taxpayers footing a $32 million-a-year bill for the maintenance of vacant facilities.

Another major concern is that when civic infrastructure developments are undertaken in preparation for hosting the Olympics, these benefits accrue to a single metropolitan centre (with the exception of some outlying areas that may get some revamped sports facilities). In countries with an expansive land mass, this means vast swathes of the population miss out entirely. Furthermore, since the International Olympic Committee favours prosperous global centres (the United Kingdom was told, after three failed bids from its provincial cities, that only London stood any real chance at winning), the improvement of public transport, roads and communication links tends to concentrate in places already well-equipped with world-class infrastructures. Perpetually by-passing minor cities creates a cycle of disenfranchisement: these cities never get an injection of capital, they fail to become first-rate candidates, and they are constantly passed over in favour of more secure choices.

Finally, there is no guarantee that an Olympics will be a popular success. The feel-good factor that most proponents of Olympic bids extol (and that was no doubt driving the 90 to 100 per cent approval rates of Parisians and Londoners for their cities’ respective 2012 bids) can be an elusive phenomenon, and one that is tied to the host nation’s standing on the medal tables. This ephemeral thrill cannot compare to the years of disruptive construction projects and security fears that go into preparing for an Olympic Games, nor the decades of debt repayment that follow (Greece’s preparation for Athens 2004 famously deterred tourists from visiting the country due to widespread unease about congestion and disruption).

There are feasible alternatives to the bloat, extravagance and wasteful spending that comes with a modern Olympic Games. One option is to designate a permanent host city that would be re-designed or built from scratch especially for the task. Another is to extend the duration of the Olympics so that it becomes a festival of several months. Local businesses would enjoy the extra spending and congestion would ease substantially as competitors and spectators come and go according to their specific interests. Neither the Olympic City nor the extended length options really get to the heart of the issue, however. Stripping away ritual and decorum in favour of concentrating on athletic rivalry would be preferable.

Failing that, the Olympics could simply be scrapped altogether. International competition could still be maintained through world championships in each discipline. Most of these events are already held on non-Olympic years anyway – the International Association of Athletics Federations, for example, has run a biennial World Athletics Championship since 1983 after members decided that using the Olympics for their championship was no longer sufficient. Events of this nature keep world-class competition alive without requiring Olympic-sized expenses.

TEST 3.3 – TIME TRAVEL

Time travel took a small step away from science fiction and toward science recently when physicists discovered that sub-atomic particles known as neutrinos – progeny of the sun’s radioactive debris – can exceed the speed of light. The unassuming particle – it is electrically neutral, small but with a non-zero mass and able to penetrate the human form undetected – is on its way to becoming a rock star of the scientific world.

Researchers from the European Organisation for Nuclear Research (CERN) in Geneva sent the neutrinos hurtling through an underground corridor toward colleagues on the Oscillation Project with Emulsion-Tracking Apparatus (OPERA) team 730 kilometres away in Gran Sasso, Italy. The neutrinos arrived promptly – so promptly, in fact, that they triggered what scientists are calling the unthinkable – that everything they have learnt, known or taught stemming from the last one hundred years of the physics discipline may need to be reconsidered.

The issue at stake is a tiny segment of time – precisely sixty nanoseconds (which is sixty billionths of a second). This is how much faster than the speed of light the neutrinos managed to go in their underground travels and at a consistent rate (15,000 neutrinos were sent over three years). Even allowing for a margin of error of ten billionths of a second, this stands as proof that it is possible to race against light and win. The duration of the experiment also accounted for and ruled out any possible lunar effects or tidal bulges in the earth’s crust.

Nevertheless, there’s plenty of reason to remain sceptical. According to Harvard University science historian Peter Galison, Einstein’s relativity theory has been “pushed harder than any theory in the history of the physical sciences”. Yet every prior challenge has come to nothing, and relativity has so far refused to buckle.

So is time travel just around the corner? The prospect has certainly been wrenched much closer to the realm of possibility now that a major physical hurdle – the speed of light – has been cleared. If particles can travel faster than light, in theory travelling back in time is possible. How anyone harnesses that to some kind of helpful end is far beyond the scope of any modern technologies, however, and will be left to future generations to explore.

Certainly, any prospective time travellers may have to overcome more physical and logical hurdles than merely overtaking the speed of light. One such problem, posited by René Barjavel in his 1943 text Le Voyageur Imprudent, is the so-called grandfather paradox. Barjavel theorised that, if it were possible to go back in time, a time traveller could potentially kill his own grandfather. If this were to happen, however, the time traveller himself would never have been born – yet his existence is already an established fact. In other words, there is a paradox in circumventing an already known future; time travel would facilitate past actions that mean time travel itself could never occur.

Other possible routes have been offered, though. For Igor Novikov, astrophysicist behind the 1980s’ theorem known as the self-consistency principle, time travel is possible within certain boundaries. Novikov argued that any event causing a paradox would have zero probability. It would be possible, however, to affect rather than change historical outcomes if travellers avoided all inconsistencies. Averting the sinking of the Titanic, for example, would revoke any future imperative to stop it from sinking – it would be impossible. Saving selected passengers from the water and replacing them with realistic corpses would not be impossible, however, as the historical record would not be altered in any way.

A further possibility is that of parallel universes. Popularised by Bryce Seligman DeWitt in the 1960s (from the seminal formulation of Hugh Everett), the many-worlds interpretation holds that an alternative pathway for every conceivable occurrence actually exists. If we were to send someone back in time, we might therefore expect never to see him again – any alterations would divert that person down a new historical trajectory.

A final hypothesis, one of unidentified provenance, reroutes itself quite efficiently around the grandfather paradox. Non-existence theory suggests exactly that – a person would quite simply never exist if they altered their ancestry in ways that obstructed their own birth. They would still exist in person upon returning to the present, but any chain reactions associated with their actions would not be registered. Their historical identity would be gone.

So, will humans one day step across the same boundary that the neutrinos have? World-renowned astrophysicist Stephen Hawking believes that once spaceships can exceed the speed of light, humans could feasibly travel millions of years into the future in order to repopulate earth in the event of a forthcoming apocalypse. This is because, as the spaceships accelerate into the future, time would slow down around them (Hawking concedes that bygone eras are off limits – this would violate the fundamental rule that cause comes before effect).

Hawking is therefore reserved yet optimistic: “Time travel was once considered scientific heresy, and I used to avoid talking about it for fear of being labelled a crank. These days I’m not so cautious.”

TEST 4.1 – FOR THE STRENGTH OF THE PACK IS THE WOLF, AND THE STRENGTH OF THE WOLF IS THE PACK. 

 – Rudyard Kipling, The Law for the Wolves

A wolf pack is an extremely well-organised family group with a well-defined social structure and a clear-cut code of conduct. Every wolf has a certain place and function within the pack and every member has to do its fair share of the work. The supreme leader is a very experienced wolf – the alpha – who has dominance over the whole pack. It is the protector and decision-maker and directs the others as to where, when and what to hunt. However, it does not lead the pack into the hunt, for it is far too valuable to risk being injured or killed. That is the responsibility of the beta wolf, who assumes second place in the hierarchy of the pack. The beta takes on the role of enforcer – fighter or ‘tough guy’– big, strong and very aggressive. It is both the disciplinarian of the pack and the alpha’s bodyguard.

The tester, a watchful and distrustful character, will alert the alpha if it encounters anything suspicious while it is scouting around looking for signs of trouble. It is also the quality controller, ensuring that the others are deserving of their place in the pack. It does this by creating a situation that tests their bravery and courage, by starting a fight, for instance. At the bottom of the social ladder is the omega wolf, subordinate and submissive to all the others, but often playing the role of peacemaker by intervening in an intra-pack squabble and defusing the situation by clowning around. Whereas the tester may create conflict, the omega is more likely to resolve it.

The rest of the pack is made up of mid- to low-ranking non-breeding adults and the immature offspring of the alpha and its mate. The size of the group varies from around six to ten members or more, depending on the abundance of food and numbers of the wolf population in general.

Wolves have earned themselves an undeserved reputation for being ruthless predators and a danger to humans and livestock. The wolf has been portrayed in fairy tales and folklore as a very bad creature, killing any people and other animals it encounters. However, the truth is that wolves only kill to eat, never kill more than they need, and rarely attack humans unless their safety is threatened in some way. It has been suggested that hybrid wolf-dogs or wolves suffering from rabies are actually responsible for many of the historical offences as well as more recent incidents.

Wolves hunt mainly at night. They usually seek out large herbivores, such as deer, although they also eat smaller animals, such as beavers, hares and rodents, if these are obtainable. Some wolves in western Canada are known to fish for salmon. The alpha wolf picks out a specific animal in a large herd by the scent it leaves behind. The prey is often a very young, old or injured animal in poor condition. The alpha signals to its hunters which animal to take down and when to strike by using tail movements and the scent from a gland at the tip of its spine above the tail.

Wolves kill to survive. Obviously, they need to eat to maintain strength and health but the way they feast on the prey also reinforces social order. Every member of the family has a designated spot at the carcass and the alpha directs them to their places through various ear postures: moving an ear forward, flattening it back against the head or swivelling it around. The alpha wolf eats the prized internal organs while the beta is entitled to the muscle-meat of the rump and thigh, and the omega and other low ranks are assigned the intestinal contents and less desirable parts such as the backbone and ribs.

The rigid class structure in a wolf pack entails frequent displays of supremacy and respect. When a higher-ranking wolf approaches, a lesser-ranking wolf must slow down, lower itself, and pass to the side with head averted to show deference; or, in an extreme act of passive submission, it may roll onto its back, exposing its throat and belly. The dominant wolf stands over it, stiff-legged and tall, asserting its superiority and its authority in the pack.

TEST 4.2 – ENVIRONMENTAL MEDICINE

– also called conservation medicine, ecological medicine, or medical geology –

In simple terms, environmental medicine deals with the interaction between human and animal health and the environment. It concerns the adverse reactions that people have on contact with or exposure to an environmental excitant. Ecological health is its primary concern, especially emerging infectious diseases and pathogens from insects, plants and vertebrate animals.

Practitioners of environmental medicine work in teams involving many other specialists. As well as doctors, clinicians and medical researchers, there may be marine and climate biologists, toxicologists, veterinarians, geospatial and landscape analysts, even political scientists and economists. This is a very broad approach to the rather simple concept that there are causes for all illnesses, and that what we eat and drink or encounter in our surroundings has a direct impact on our health.

Central to environmental medicine is the total load theory developed by the clinical ecologist Theron Randolph, who postulated that illness occurs when the body’s ability to detoxify environmental excitants has reached its capacity. His wide-ranging perception of what makes up those stimuli includes chemical, physical, biological and psychosocial factors. If a person with numerous and/or chronic exposures to environmental chemicals suffers a psychological upset, for example, this could overburden his immune system and result in actual physical illness. In other words, disease is the product of multiple factors.

Another Randolph concept is that of individual susceptibility or the variability in the response of individuals to toxic agents. Individuals may be susceptible to any number of excitants but those exposed to the same risk factors do not necessarily develop the same disease, due in large part to genetic predisposition; however, age, gender, nutrition, emotional or physical stress, as well as the particular infectious agents or chemicals and intensity of exposure, all contribute.

Adaptation is defined as the ability of an organism to adjust to gradually changing circumstances of its existence, to survive and be successful in a particular environment. Dr Randolph suggested that our bodies, designed for the Stone Age, have not quite caught up with the modern age and consequently, many people suffer diseases from maladaptation, or an inability to deal with some of the new substances that are now part of our environment. He asserted that this could cause exhaustion, irritability, depression, confusion and behavioural problems in children. Numerous traditional medical practitioners, however, are very sceptical of these assertions.

Looking at the environment and health together is a way of making distant and nebulous notions, such as global warming, more immediate and important. Even a slight rise in temperature, which the world is already experiencing, has immediate effects. Mosquitoes can expand their range and feed on different migratory birds than usual, resulting in these birds transferring a disease into other countries. Suburban sprawl is seen as more than a socioeconomic problem, for it brings an immediate imbalance to the rural ecosystem, increasing population density so that people come into closer contact with disease-carrying rodents or other animals. Deforestation also displaces feral animals that may then infect domesticated animals, which enter the food chain and transmit the disease to people. These kinds of connections are fundamental to environmental medicine, and they explain why the threat of zoonotic disease looms ever larger.

Zoonoses, diseases of animals transmissible to humans, are a huge concern. Different types of pathogens, including bacteria, viruses, fungi and parasites, cause zoonoses. Every year, millions of people worldwide get sick because of foodborne bacteria such as salmonella and campylobacter, which cause fever, diarrhoea and abdominal pain. Tens of thousands of people die from the rabies virus after being bitten by rabid animals like dogs and bats. Viral zoonoses like avian influenza (bird flu), swine flu (H1N1 virus) and Ebola are on the increase with more frequent, often uncontainable, outbreaks. Some animals (particularly domestic pets) pass on fungal infections to humans. Parasitic infection usually occurs when people come into contact with food or water contaminated by animals that are infected with parasites like cryptosporidium, trichinella, or worms.

As the human population of the planet increases, encroaching further on animal domains and causing ecological change, inter-professional cooperation is crucial to meet the challenges of dealing with the effects of climate change, emergent cross-species pathogens, rising toxicity in air, water and soil, and uncontrolled development and urbanisation. This can only happen if additional government funds are channelled into the study and practice of environmental medicine.

TEST 4.3 – TELEVISION AND SPORT

When the medium becomes the stadium

The relationship between television and sports is not widely thought of as problematic. For many people, television is a simple medium through which sports can be played, replayed, slowed down, and of course conveniently transmitted live to homes across the planet. What is often overlooked, however, is how television networks have reshaped the very foundations of an industry that they claim only to document. Major television stations immediately seized the revenue-generating prospects of televising sports and this has changed everything, from how they are played to who has a chance to watch them.

Before television, for example, live matches could only be viewed in person. For the majority of fans, who were unable to afford tickets to the top-flight matches, or to travel the long distances required to see them, the only option was to attend a local game instead, where the stakes were much lower. As a result, thriving social networks and sporting communities formed around the efforts of teams in the third and fourth divisions and below. With the advent of live TV, however, premier matches suddenly became affordable and accessible to hundreds of millions of new viewers. This shift in viewing patterns vacuumed out the support base of local clubs, many of which ultimately folded.

For those on the more prosperous side of this shift in viewing behaviour, however, the financial rewards are substantial. Television assisted in derailing long-held concerns in many sports about whether athletes should remain amateurs or ‘go pro’, and replaced this system with a new paradigm where nearly all athletes are free to pursue stardom and to make money from their sporting prowess. For the last few decades, top-level sports men and women have signed lucrative endorsement deals and sponsorship contracts, turning many into multi-millionaires and also allowing them to focus full-time on what really drives them. That they can do all this without harming their prospects at the Olympic Games and other major competitions is a significant benefit for these athletes.

The effects of television extend further, however, and in many instances have led to changes in sporting codes themselves. Prior to televised coverage of the Winter Olympics, for example, figure skating involved a component in which skaters drew ‘figures’ in the ice, which were later evaluated for the precision of their shapes. This component translated poorly to the small screen, as viewers found the whole procedure, including the judging of minute scratches on ice, to be monotonous and dull. Ultimately, figures were scrapped in favour of a short programme featuring more telegenic twists and jumps. Other sports are awash with similar regulatory shifts – passing the ball back to the goalkeeper was banned in football after gameplay at the 1990 World Cup was deemed overly defensive by television viewers.

In addition to insinuating changes into sporting regulation, television also tends to favour some individual sports over others. Some events, such as the Tour de France, appear to benefit: on television it can be viewed in its entirety, whereas on-site enthusiasts will only witness a tiny part of the spectacle. Wrestling, perhaps due to an image problem that repelled younger (and highly prized) television viewers, was scheduled for removal from the 2020 Olympic Games despite being a founding sport and a fixture of the Olympics since 708 BC. Only after a fervent outcry from supporters was that decision overturned.

Another change in the sporting landscape that television has triggered is the framing of sports not merely in terms of the level of skill and athleticism involved, but as personal narratives of triumph, shame and redemption on the part of individual competitors. This is made easier and more convincing through the power of close-up camera shots, profiles and commentary shown during extended build-ups to live events. It also attracts television audiences – particularly women – who may be less interested in the intricacies of the sport than they are in broader ‘human interest’ stories. As a result, many viewers are now more familiar with the private agonies of famous athletes than with their record scores or match-day tactics.

And what about the effects of male television viewership? Certainly, men have always been willing to watch male athletes at the top of their game, but female athletes participating in the same sports have typically attracted far less interest and, as a result, have suffered greatly reduced exposure on television. Those sports where women can draw the crowds – beach volleyball, for example – are often those where female participants are encouraged to dress and behave in ways oriented specifically toward a male demographic.

Does all this suggest the influence of television on sports has been overwhelmingly negative? The answer will almost certainly depend on who among the various stakeholders is asked. For all those who have lost out – lower-league teams, athletes whose sports lack a certain visual appeal – there are numerous others who have benefitted enormously from the partnership between television and sports, and whose livelihoods now depend on it.

TEST 5.1 – THE DISCOVERY OF PENICILLIN

The Scottish bacteriologist Dr Alexander Fleming (1881-1955) is credited with the discovery of penicillin in London in 1928. He had been working at St Mary’s Hospital on the bacteriology of septic wounds. As a medic during World War I, he had witnessed the deaths of many wounded soldiers from infection and he had observed that the use of harsh antiseptics, rather than healing the body, actually harmed the blood corpuscles that destroy bacteria.

In his search for effective antimicrobial agents, Fleming was cultivating staphylococcus bacteria in Petri dishes containing agar. Before going on holiday in the summer of 1928, he piled up the agar plates to make room for someone else to use his workbench in his absence and left the windows open. When he returned to work two weeks later, Fleming noticed mould growing on those culture plates that had not been fully immersed in sterilising agent. This was not an unusual phenomenon, except in this case the particular mould seemed to have killed the staphylococcus aureus immediately surrounding it. He realised that this mould had potential.

Fleming consulted a mycologist called C J La Touche, who occupied a laboratory downstairs containing many mould specimens (possibly the source of the original contamination), and they concluded it was the Penicillium genus of ascomycetous fungi. Fleming continued to experiment with the mould on other pathogenic bacteria, finding that it successfully killed a large number of them. Importantly, it was also non-toxic, so here was a bacteria-destroying agent that could be used as an antiseptic in wounds without damaging the human body. However, he was unsuccessful in his attempts to isolate the active antibacterial element, which he called penicillin. In 1929, he wrote a paper on his findings, published in the British Journal of Experimental Pathology, but it failed to kindle any interest at the time.

In 1938, Dr Howard Florey, a professor of pathology at Oxford University, came across Fleming’s paper. In collaboration with his colleague Dr Ernst Chain, and other skilled chemists, he worked on producing a usable drug. They experimented on mice infected with streptococcus. Those untreated died, while those injected with penicillin survived. It was time to test the drug on humans but they could not produce enough – it took 2,000 litres of mould culture fluid to acquire enough penicillin to treat a single patient. Their first case in 1940, an Oxford police officer who was near death as a result of infection by both staphylococci and streptococci, rallied after five days of treatment but, when the supply of penicillin ran out, he eventually died.

In 1941, Florey and biochemist Dr Norman Heatley went to the United States to team up with American scientists with a view to finding a way of making large quantities of the drug. It became obvious that Penicillium notatum would never generate enough penicillin for effective treatments so they began to look for a more productive species. One day a laboratory assistant turned up with a melon covered in mould. This fungus was Penicillium chrysogenum, which produced 200 times more penicillin than Fleming’s original species but, with further enhancement and filtration, it was induced to yield 1,000 times as much as Penicillium notatum. Manufacture could begin in earnest.

The standardisation and large-scale production of the penicillin drug during World War II and its availability for treating wounded soldiers undoubtedly saved many lives. Penicillin proved to be very effective in the treatment of pneumococcal pneumonia – the death rate in WWII was 1% compared to 18% in WWI. It has since proved its worth in the treatment of many life-threatening infections such as tuberculosis, meningitis, diphtheria and several sexually-transmitted diseases.

Fleming has always been acknowledged as the discoverer of penicillin. However, the development of a commercial penicillin drug was due to the skill of chemical scientists Florey, Chain and others who overcame the difficulties of converting it into a usable form. Fleming and Florey received knighthoods in 1944 and they, together with Chain, were awarded the Nobel Prize in Physiology or Medicine in 1945. Heatley’s contribution seems to have been overlooked until, in 1990, he was awarded an honorary doctorate of medicine by Oxford University – the first in its 800-year history.

Fleming was mindful of the dangers of antibiotic resistance early on, and he expressly warned on many occasions that overuse of the drug would lead to bacterial resistance. Ironically, the occurrence of resistance is pushing the drive today to find new, more powerful antibiotics.

TEST 5.2 – DAYLIGHT SAVING TIME 

Each year in many countries around the world, clocks are set forward in spring and then back again in autumn in an effort to ‘save’ daylight hours. Like many modern practices, Daylight Saving Time (DST) dates back to ancient civilisations. The Romans would adjust their routines to the sun’s schedule by using different scales in their water clocks for different months of the year.

This practice fell out of favour, however, and the concept was renewed only when, in 1784, the American inventor Benjamin Franklin wrote a jocular article for the Journal de Paris exhorting the city’s residents to make more use of daylight hours in order to reduce candle use. In 1895, in a more serious effort, New Zealand entomologist George Vernon Hudson proposed a biannual two-hour shift closely resembling current forms of DST. His cause was not taken up, however, until Germany first pushed its clocks forward in April 1916 as part of a drive to save fuel in World War I.

Over the next several decades, global use of DST was sporadic and inconsistent. Countries such as the UK and USA adopted DST in World Wars I and II, but reverted to standard time after the wars ended. In the USA, the decision to use DST was determined by states and municipalities between 1945 and 1966, causing widespread confusion for transport and broadcasting schedules until Congress implemented the Uniform Time Act in 1966.

Today, DST is used in some form by over 70 countries worldwide, affecting around one sixth of the world’s population. There is still no uniform standard, however. Countries such as Egypt and Russia have adjusted their policies on multiple occasions in recent years, in some instances leading to considerable turmoil. Muslim countries often suspend DST for the month of Ramadan. The European Union finally standardised DST in 2000, while the USA’s most recent adjustments were introduced with the Energy Policy Act of 2005. 

In general, the benefits of DST are considerable and well documented. Perhaps the most significant factor in terms of popular support is the chance to make better use of daylight in the evening. With extended daylight hours, office workers coming off a 9 to 5 shift can often take part in outdoor recreational activities for an hour or two. This has other positive effects, such as reducing domestic electricity consumption as more opportunities become available to use sunlight instead of artificial lighting. A further benefit is a reduction in the overall rate of automobile accidents, as DST ensures that streets are well lit at peak hours.

Many industries are supportive of DST due to the opportunities it provides for increased revenue. Extended daylight hours mean people are more likely to stay out later in the evening and spend more money in bars and restaurants, for example, so tourism and hospitality are two sectors that stand to gain a lot from more daylight. In Queensland, Australia, which elected not to implement DST due to complaints from dairy farmers over disruption to milking schedules, the annual drain on the state’s economy is estimated to be as high as $4 billion. 

Some research casts doubt on the advantages of DST, however. Although the overall incidence of traffic accidents is lower, for pedestrians the risk of being hit by a car in the evening increases by as much as 186 per cent in the weeks after clocks are set back in autumn, possibly because drivers have not yet adjusted to earlier sunsets. Although this shift does in turn make streets safer in the early morning, the risk to pedestrians is not offset, simply because fewer pedestrians use the streets at that time of day.

A further health concern involves the disruption of our body clock. Setting clocks one hour forward at night can cause many people to lose sleep, resulting in tiredness and all its well-documented effects, such as mood swings, reduced productivity and problems with overall physical well-being. In 2008, a Swedish study found that heart attack rates spike in the few days following the switch to DST for summer. Tiredness may also be a factor behind the increase in road accidents in the week after DST begins.

Finally, safety issues have arisen in parts of Latin America relating to a suspected relationship between DST and higher incidences of street crime. In 2008, Guatemala chose not to use DST because it forced office workers to leave their homes while it was still dark outside in the morning. This natural cover for criminals was thought to increase incidents of crime at this hour.

TEST 5.3 – WILLPOWER

Although willpower does not shape our decisions, it determines whether and how long we can follow through on them. It almost single-handedly determines life outcomes. Interestingly, research suggests the general population is indeed aware of how essential willpower is to their wellbeing; survey participants routinely identify a ‘lack of willpower’ as the major impediment to making beneficial life changes. There are, however, misunderstandings surrounding the nature of willpower and how we can acquire more of it. There is a widespread misperception, for example, that increased leisure time would lead to subsequent increases in willpower.

Although the concept of willpower is often explained through single-word terms, such as ‘resolve’ or ‘drive’, it refers in fact to a variety of behaviours and situations. There is a common perception that willpower entails resisting some kind of a ‘treat’, such as a sugary drink or a lazy morning in bed, in favour of decisions that we know are better for us, such as drinking water or going to the gym. Of course this is a familiar phenomenon for all. Yet willpower also involves elements such as overriding negative thought processes, biting your tongue in social situations, or persevering through a difficult activity. At the heart of any exercise of willpower, however, is the notion of ‘delayed gratification’, which involves resisting immediate satisfaction for a course that will yield greater or more permanent satisfaction in the long run.

Scientists are making general investigations into why some individuals are better able than others to delay gratification and thus employ their willpower, but the genetic or environmental origins of this ability remain a mystery for now. Some groups who are particularly vulnerable to reduced willpower capacity, such as those with addictive personalities, may claim a biological origin for their problems. What is clear is that levels of willpower typically remain consistent over time (studies tracking individuals from early childhood to their adult years demonstrate a remarkable consistency in willpower abilities). In the short term, however, our ability to draw on willpower can fluctuate dramatically due to factors such as fatigue, diet and stress. Indeed, research by Matthew Gailliot suggests that willpower, even in the absence of physical activity, both requires and drains blood glucose levels, suggesting that willpower operates more or less like a ‘muscle’, and, like a muscle, requires fuel for optimum functioning.

These observations lead to an important question: if the strength of our willpower at the age of thirty-five is somehow pegged to our ability at the age of four, are all efforts to improve our willpower certain to prove futile? According to newer research, this is not necessarily the case. Gregory M. Walton, for example, found that a single verbal cue – telling research participants how strenuous mental tasks could ‘energise’ them for further challenging activities – made a profound difference in terms of how much willpower participants could draw upon to complete the activity. Just as our willpower is easily drained by negative influences, it appears that willpower can also be boosted by other prompts, such as encouragement or optimistic self-talk.

Strengthening willpower thus relies on a two-pronged approach: reducing negative influences and improving positive ones. One of the most popular and effective methods simply involves avoiding willpower depletion triggers, and is based on the old adage, ‘out of sight, out of mind’. In one study, workers who kept a bowl of enticing candy on their desks were far more likely to indulge than those who placed it in a desk drawer. It also appears that finding sources of motivation from within us may be important. In another study, Mark Muraven found that those who felt compelled by an external authority to exert self-control experienced far greater rates of willpower depletion than those who identified their own reasons for taking a particular course of action. This idea that our mental convictions can influence willpower was borne out by Veronika Job. Her research indicates that those who think that willpower is a finite resource exhaust their supplies of this commodity long before those who do not hold this opinion.

Willpower is clearly fundamental to our ability to follow through on our decisions but, as psychologist Roy Baumeister has discovered, a lack of willpower may not be the sole impediment every time our good intentions fail to manifest themselves. A critical precursor, he suggests, is motivation – if we are only mildly invested in the change we are trying to make, our efforts are bound to fall short. This may be why so many of us abandon our New Year’s Resolutions – if these were actions we really wanted to take, rather than things we felt we ought to be doing, we would probably be doing them already. In addition, Muraven emphasises the value of monitoring progress towards a desired result, such as by using a fitness journal, or keeping a record of savings toward a new purchase. The importance of motivation and monitoring cannot be overstated. Indeed, it appears that, even when our willpower reserves are entirely depleted, motivation alone may be sufficient to keep us on the course we originally chose.

TEST 6.1 – SENDING MONEY HOME

The economics of migrant remittances

Every year millions of migrants travel vast distances using borrowed money for their airfares and taking little or no cash with them. They seek a decent job so that they can support themselves and still have money left over to send home to their families in developing countries. These remittances exceeded $400 billion last year. It is true that the actual amount per person is only about $200 per month, but collectively it adds up to about triple the amount officially spent on development aid.

In some of the poorer, unstable or conflict-torn countries, these sums of money are a lifeline – the only salvation for those left behind. The decision to send money home is often inspired by altruism – an unselfish desire to help others. Then again, the cash might simply be an exchange for earlier services rendered by the recipients or it could be intended for investment by the recipients. Often it will be repayment of a loan used to finance the migrant’s travel and resettlement.

At the first sign of trouble, political or financial upheaval, these personal sources of support do not suddenly dry up like official investment monies. Actually, they increase in order to ease the hardship and suffering of the migrants’ families and, unlike development aid, which is channelled through government or other official agencies, remittances go straight to those in need. Thus, they serve an insurance role, responding in a countercyclical way to political and economic crises.

This flow of migrant money has a huge economic and social impact on the receiving countries. It provides cash for food, housing and necessities. It funds education and healthcare and contributes towards the upkeep of the elderly. Extra money is sent for special events such as weddings, funerals or urgent medical procedures and other emergencies. Occasionally it becomes the capital for starting up a small enterprise.

Unfortunately, recipients hardly ever receive the full value of the money sent back home because of exorbitant transfer fees. Many money transfer companies and banks operate on a fixed fee, which is unduly harsh for those sending small sums at a time. Others charge a percentage, which varies from around 8% to 20% or more depending on the recipient country. There are some countries where there is a low fixed charge per transaction; however, these cheaper fees are not applied internationally because of widespread concern over money laundering. Whether this is a genuine fear or just an excuse is hard to say. If the recipients live in a small village somewhere, usually the only option is to obtain their money through the local post office. Regrettably, many governments allow post offices to have an exclusive affiliation with one particular money transfer operator, so there is no alternative but to pay the extortionate charge.

The sums of money being discussed here might seem negligible on an individual basis but they are substantial in totality. If the transfer cost could be reduced to no more than one per cent, that would release another $30 billion annually – approximately the total aid budget of the USA, the largest donor worldwide – directly into the hands of the world’s poorest. If this is not practicable, governments could at least acknowledge that small remittances do not come from organised crime networks, and ease regulations accordingly. They should put an end to restrictive alliances between post offices and money transfer operators or at least open up the system to competition. Alternatively, a non-government humanitarian organisation, which would have the expertise to navigate the elaborate red tape, could set up a non-profit remittance platform for migrants to send money home for little or no cost.

Whilst contemplating the best system for transmission of migrant earnings to the home country, one should consider the fact that migrants often manage to save reasonable amounts of money in their adopted country. More often than not, that money is in the form of bank deposits earning a tiny percentage of interest, none at all or even a negative rate of interest.

If a developing country or a large charitable society could sell bonds with a guaranteed return of three or four per cent on the premise that the invested money would be used to build infrastructure in that country, there would be a twofold benefit. Migrants would make a financial gain and see their savings put to work in the development of their country of origin. The ideal point of sale for these bonds would be the channel used for money transfers so that, when migrants show up to make their monthly remittance, they could buy bonds as well. Advancing the idea one step further, why not make this transmission hub the conduit for affluent migrants to donate to worthy causes in their homeland so they may share their prosperity with their compatriots on a larger scale?

TEST 6.2 – ANGELO MOSSO’S PIONEERING WORK IN THE STUDY OF HUMAN PHYSIOLOGY

Scientists in the late nineteenth century were beginning to investigate the functions of blood circulation, trying to tease out the reasons for variations in pulse and pressure, and to understand the delivery of energy to the functioning parts of our bodies. Angelo Mosso (1846–1910) was one such pioneer, an Italian physiologist who progressed to become a professor of both pharmacology and physiology at the University of Turin. As was true of many of his enlightened, well-educated contemporaries, Mosso was concerned about the effect of the industrial revolution on the poorer working classes. Hard physical labour and an excessively long working day shortened lives, created conditions conducive to accidents, and crippled the children who were forced into such work at a very early age. One of his most influential contributions to society came from his work and writings on fatigue.

Early experimenters in any field find themselves having to construct previously unknown equipment to investigate fields of study as yet unexplored. Mosso had reviewed the work of fellow scientists who had worked on isolated muscles, such as those extracted from frogs, and who had observed movement and fatigue when these were stimulated electrically. He found two major issues with their methodology: there was a lack of evidence both that the findings would be relevant to the human body, and that the dynamometers used to measure the strength of movement could give accurate results. He therefore became determined to construct an instrument to measure human muscular effort and record the effects of fatigue with greater precision.

His device was named an ergograph, meaning work recorder. To modern eyes it seems remarkably simple, but such is true of many inventions when viewed with hindsight. It allowed the measurement of the work done by a finger as it was repetitively curled up and straightened. There were basically two parts. One held the hand in position, palm up, by strapping down the arm to a wooden base; this was important to prevent any unintentional movement of the hand while the experiment was taking place. The other part was a recording device that drew the movements of the finger vertically on a paper cylinder which revolved by tiny increments as the experiment proceeded. The index and ring fingers of the hand were each inserted into a brass tube to hold them still. The middle finger was encircled with a leather ring tied to a wire which was connected to a weight after passing through a pulley. The finger had to raise and lower the weight, with the length and speed of these flexions recorded on the paper by a stylus. In this way, he not only learned the fatigue profiles of his subjects but could observe a relationship between performance, tiredness and the emotional state of his subjects.

Mosso’s interest in the interaction between psychology and physiology led to another machine and further groundbreaking research. He was intrigued to observe the pulsing of circulating blood in patients who had suffered traumatic damage to the skull, or cranium. In these patients, a lack of bone covering the brain allowed the strength of the heart’s pumping to be seen beneath the skin. He carried out experiments to see whether certain intellectual activities, such as reading or solving a problem, or emotional responses, such as to a sudden noise, would affect the supply of blood to the brain. He detected some changes in blood supply, and then wanted to find out if the same would be true of individuals with no cranial damage.

His solution was to design another instrument to measure brain activity in uninjured subjects. He designed a wooden table-top for the human subject to lie on, which was placed over another table, balanced on a fulcrum (rather like a seesaw) that would allow the subject to tilt, with head a little higher than feet, or vice versa. Heavy weights beneath the table maintained the stability of the whole unit as the intention was to measure very tiny variations in the balance of the person. Once the upper table was adjusted to be perfectly horizontal, only the breathing created a slight regular oscillation. This breathing and pulses measured in the hands and feet were also recorded.

Once all was in equilibrium, Mosso would ring a bell, while out of sight of the subject. His hypothesis was that this aural stimulus would have to be interpreted by the brain, and that an increased blood flow would result in a slight head-down tilt of the table. Mosso followed the bell-ringing with a wide range of intellectual stimuli, such as reading from a newspaper, a novel, or a university text. He was no doubt well satisfied to observe that the tilting of the table increased proportionately to the difficulty of the subject matter and the intellectual requirements of the task. Mosso’s experiments indicated a direct link between mental effort and an increased volume of blood in the brain. This research was one of the first attempts to ‘image’ the brain, which is now performed by technology such as MRI (magnetic resonance imaging), commonly used in making medical diagnoses today.

TEST 6.3 – WHO WROTE SHAKESPEARE?

William Shakespeare is the Western world’s most famous playwright – but did he really write the plays and poems that are attributed to him? 

There has been controversy over the authorship of the works of Shakespeare since the nineteenth century. The initial impetus for this debate came from the fact that nineteenth-century critics, poets and readers were puzzled and displeased when they were presented with the few remaining scraps of evidence about the life of Shakspere, as his name was most commonly spelled. The author they admired and loved must have been scholarly and intellectual, linguistically gifted, knowledgeable about the lifestyle of those who lived in royal courts, and he appeared to have travelled in Europe.

These critics felt that the son of a Stratford glove maker, whose only definite recorded dealings concerned buying property, some minor legal action over a debt, tax records, and the usual entries for birth, marriage and death, could not possibly have written poetry based on Classical models. Nor could he have been responsible for the wide ranging intellectually and emotionally challenging plays for which he is so famous, because, in the nineteenth century world view, writers inevitably called upon their own experiences for the content of their work. 

By compiling the various bits and pieces of surviving evidence, most Shakespearian scholars have satisfied themselves that the man from Stratford is indeed the legitimate author of all the works published under his name. A man called William Shakespeare did become a member of the Lord Chamberlain’s Men, the dramatic company that owned the Globe and Blackfriars Theatres, and he enjoyed exclusive rights to the publication and performance of the dramatic works. There are 23 extant contemporary documents that indicate that he was a well known poet or playwright. Publication and even production of plays had to be approved by government officials, who are recorded as having met with Shakespeare to discuss authorship and licensing of some of the plays, for example, ‘King Lear’. 

However, two Elizabethans who are still strongly defended as the true Shakespeare are Christopher Marlowe and Edward de Vere, both of whom would have benefited from writing under the secrecy of an assumed name. 

Marlowe’s writing is acknowledged by all as the precursor of Shakespeare’s dramatic verse style: declamatory blank verse that lifted and ennobled the content of the plays. The records indicate that he was accused of being an atheist: denying the existence of God would have been punishable by the death penalty. He is recorded as having ‘died’ in a street fight before Shakespeare’s greatest works were written, and therefore it is suggested that he may have continued producing literary works while in hiding from the authorities. 

De Vere was Earl of Oxford and an outstanding Classical scholar as a child. He was a strong supporter of the arts, including literature, music and acting. He is also recorded as being a playwright, although no works bearing his name still exist. However, in sixteenth-century England it was not acceptable for an aristocrat to publish verse for ordinary people, nor to have any personal dealings with the low-class denizens of popular theatre.

To strengthen the case for their respective alternatives, literary detectives have looked for relationships between the biographies of their chosen authors and the published works of Shakespeare. However, during the sixteenth and seventeenth centuries, there was no tradition of basing plays on the author’s own life experiences, and therefore the focus of this part of the debate has shifted to the sonnets. These individual poems of fourteen lines are sincerely felt reactions to emotionally charged situations such as love and death, a goldmine for the biographically inclined researcher.

The largest group of these poems express love and admiration and, interestingly, they are written to a Mr. W.H. This person is clearly a nobleman, yet he is sometimes given forthright advice by the poet, suggesting that the writing comes from a mature father figure. How can de Vere or Marlowe be established as the author of the sonnets? 

As the son of a tradesman, Marlowe had no aristocratic status; unlike Shakespeare, however, he did attend and excel at Cambridge University, where he mingled with the wealthy. Any low-born artist needed a rich patron, and such is the argument for his authorship of the sonnets. The possible recipient of these sonnets is Will Hatfield, a minor noble who was wealthy and could afford to contribute to the arts; this young man’s friendship would have assisted a budding poet and playwright. Marlowe’s defenders contend that expressions of love between men were common at this time and had none of the homosexual connotations that Westerners of the twenty-first century may ascribe to them.

The Earl of Oxford had no need of a wealthy patron. The object of De Vere’s sonnets, it is suggested, is Henry Wriothesley, Earl of Southampton, whose name only fits the situation if one accepts that it is not uncommon to reverse the first name and surname on formal occasions. De Vere was a rash and careless man and, because of his foolish behaviour, he fell out of favour with Queen Elizabeth herself. He needed, not an artistic patron, but someone like Henry to put in a good word for him in the complex world of the royal court. This, coupled with a genuine affection for the young man, may have inspired the continuing creation of poems addressed to him. Some even postulate that the mix of love and stern advice may stem from the fact that Henry was de Vere’s illegitimate son, though there is no convincing evidence for this.
