
Factfulness by Hans Rosling (A Synopsis)

The following is a synopsis of Factfulness by Hans Rosling. It’s a great read on the ten reasons we’re wrong about the world and why things are not as bad as we think.

Introduction: Why I Love the Circus

Hans Rosling was a physician, academic and public speaker. Together with his son, Ola Rosling, and his daughter-in-law, Anna Rosling Rönnlund, he founded the Gapminder Foundation in 2005 to fight ignorance and encourage what he calls a more factful approach to life. Although this book, like his TED Talks, is written in his voice, it is a collaboration between the three of them.

Although he pursued a career in medicine and became a leading academic, Rosling’s true passion as a child was the circus. He loved everything about it and was convinced he would one day live his dream and run off to become a performer. His parents had other ideas; they wanted him to enjoy the first-rate education they didn’t have and so he studied medicine instead.

While studying medicine, he discovered he could push his hand further down his throat than anyone else. For a short time, he dreamed once again of joining the circus, this time as a sword swallower. Because swords were in short supply, he decided to start with a fishing rod, but found it impossible. It was only later, when treating an actual sword swallower, that he learned why he had failed.

The throat, he was told, is flat and can only take flat objects. To his delight he discovered he could actually do it and, later in life when he began giving talks, he often used it as a finale to his act.

He talks about sword swallowing for a reason. It’s one of those ideas which inspire people to think differently, to question perceptions and, as such, to accomplish the seemingly impossible. This is a key feature of the book.

Another is the continual need to test yourself and question your assumptions. To illustrate he lists 13 questions:

  1. How many girls finish primary school in low income countries around the world?
  2. Where does the majority of the world’s population live: low income, middle income or high-income countries?
  3. In the last 20 years, has the number of people living in extreme poverty increased or halved?
  4. What is the average life expectancy of the world today?
  5. There are 2bn children aged 0 to 15 years old. How many will there be in the year 2100 according to the UN?
  6. The UN predicts that, by 2100, the world’s population will have increased by 2bn. Why is this: more younger people or more older people?
  7. How has the number of deaths from natural disasters changed over the last 100 years?
  8. There are 7bn people in the world. Where do they live: mostly in Europe, Africa, Asia or America?
  9. How many of the world’s one-year-old children today have been vaccinated?
  10. 30-year-old men have spent ten years in school on average. How many years have women of the same age spent?
  11. In the 1990s tigers, giant pandas and black rhinos were all listed as endangered. How many are listed as more critically endangered than they were then?
  12. How many people in the world have access to some electricity?
  13. Over the next 100 years will the average temperature get warmer, stay the same or get colder?

He has put these questions to people around the world, and the majority get them wrong. Our perception of the world is far removed from the reality. His aim in the book is to give people the tools to think in a factful manner, to challenge perceptions and to understand the world more completely.

Chapter 1: The Gap Instinct

Our world view is often distorted thanks to the tendency to divide everything into two extremes with a space or a gap in between. For example, we often view the world through the perspectives of distinct groups such as ‘rich versus poor’, ‘us versus them’ or ‘developed and developing countries’.

This is simple and makes it easy to develop a view of the world, but he believes it’s wrong.

To demonstrate his point, he looks at our tendency to divide the global population into developing and developed countries. Those in the developed world are assumed to have better access to healthcare, longer lives and better access to electricity, while those in the developing world are not. However, the data shows that most people in the world now have access to all of these things.

In reality, most countries fall into a gap between the perceived developed and developing worlds, with many moving towards the group of developed countries. As such, this divisive world view makes no sense.

Instead, he introduces four categories which can provide us with a better view of the world. These are:

Level 1: Earning less than $2 per day.

Level 2: Earning between $2 and $8 per day.

Level 3: Living off$8 to $32 per day.

Level 4: Bringing in more than $32 each day.
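The four thresholds above lend themselves to a simple illustration. The following is a minimal sketch (my own, not from the book) that maps a daily income to Rosling’s level:

```python
def income_level(dollars_per_day):
    """Map a daily income to Rosling's four levels (1-4)."""
    if dollars_per_day < 2:        # Level 1: extreme poverty
        return 1
    elif dollars_per_day < 8:      # Level 2
        return 2
    elif dollars_per_day < 32:     # Level 3
        return 3
    else:                          # Level 4
        return 4
```

On this sketch, someone earning $1 a day sits at Level 1, $10 a day at Level 3 and $50 a day at Level 4.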

It doesn’t matter where you live in the world; people in each group tend to have a similar quality of life. Those in Level 1 suffer from poor nutrition, live hand to mouth and rely on walking to get where they need to go. In Level 2, people are better off; they can feed themselves and have better access to electricity and education, but they are not secure. At any time, an emergency could see them slip back to Level 1. In Level 3, people have running water and more stable work, and can begin to save, though a serious setback can still undo that progress. At the top is Level 4, in which people can afford a car and an annual holiday.

This produces a more accurate view of the world, one which divides people by quality of life rather than simply where they are living. Even so, the traditional gap-based view of the world prevails.

To combat this, he cites a number of warning signs that we are slipping into the gap-based world view:

  • Comparing averages: Averages between two groups hide the overlap between them, creating the illusion of a gap.
  • Comparing extremes: For example, thinking of the poorest versus the richest is wrong because most people are in neither extreme.
  • View from above: People in higher income levels look down on lower levels without any idea of the conditions in those levels. If you’re in Level 4, you might think of people in levels 2 and 3 as being poor when in fact they have a much better quality of life than you might think.

The reality of life is that there is no gap. Most people exist in the middle where the gap is supposed to be. Thinking about people in two groups distorts our view of the world. To form a fact-based world view, we have to recognise that many of our perceptions are filtered through mass media which loves to focus on examples which are extraordinary or extreme.

“There is no gap between the West and the rest, between developed and developing, between rich and poor,” Rosling writes. “And we should all stop using the simple pairs of categories that suggest there is.”

Chapter 2: The Negativity Instinct

“The world is getting worse”. It’s a view that we hear often and which, according to polls, most people share. However, it is also wrong. In truth, the world is getting better. We simply don’t notice when it does.

Most humans pay attention to the bad rather than the good. As such, they believe the world is only getting worse.

There is some truth in this. The environment is deteriorating and terrorism is higher than it was 30 years ago. Even so, the state of the world is generally improving. However, according to Rosling, these improvements go unnoticed because they aren’t reported and because we look back at the past through rose-tinted spectacles.

Instead, minor setbacks receive greater coverage. If you looked only at the news, you’d be forgiven for assuming that the world is set on a downward trend. However, the facts tell another story.

In the 1800s, most people in the world were at income Level 1. Extreme poverty was the norm. Today, only 9% of the world is still at Level 1. Life expectancy has improved from 31 years in the 1800s to over 70 years today. Slavery has been abolished; child mortality is down; plane crashes, hunger and deaths in battle have decreased. Access to electricity, water and healthcare has improved.

Even so, the negative world view persists.

This view is caused by three things: misremembering the past, selective reporting, and the feeling that it would be insensitive to say things are getting better while they are still bad for many people.

When people start living better lives, they can forget how bad things were. They romanticise their youth.

Most reporting is negative. Good news is not news and neither are gradual improvements. A successful flight receives no coverage, but a crash will be splashed across all media outlets.

Rosling suggests three ways to control this negativity instinct.

  • Remember that ‘bad’ and ‘better’ are not mutually exclusive. Things can be bad but they can also be better than they were before. Saying things are better should not be confused with believing everything is fine.
  • Expect bad news. If we recognise that news is likely to disproportionately highlight negatives, we can be better prepared for it. Factfulness is remembering that most of the news which reaches us is bad news and there are plenty of positive developments in the world which do not make headlines. More news does not necessarily mean things are getting worse. It could equally mean that we are getting better at monitoring this suffering, which in turn will make it easier to alleviate it.
  • Avoid romanticising the past. Life was not as good as we remember it; if we see history as it really was, we can recognise that life, in general, is getting better.

Chapter 3: The Straight Line Instinct

Life does not always work in a straight line, but our thinking does. In this chapter Rosling examines our tendency to assume a certain trend will continue along a straight line in perpetuity. Reality is very different but our straight-line instinct stops us seeing life as it truly is.

He talks about an Ebola outbreak in Liberia. Like most people, he assumed the number of cases would increase in a straight line, with each person infecting, on average, one other person. As such, it would be relatively easy to predict and control, as most other outbreaks of disease are. However, he came across a WHO report which showed the number of infections was doubling: each infected person, on average, infected two more people before dying.

This spurred him into action. He discusses an old Indian legend. Krishna is challenged by the King to a game of chess and asked to name his prize if he wins. Krishna asks for one grain of rice to be placed on the first square of the board, with the number doubling on each square after that.

The King agrees assuming it will increase in a straight line. However, it takes him a little while to realise that, by the time it gets to the 60th square, he would have to find more rice than the entire country could produce.
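The chessboard legend is doubling, not a straight line. A quick sketch (an illustration of the arithmetic, not from the book) makes the difference concrete:

```python
def grains_on_square(n):
    """Grains on the nth square when the count doubles each square: 2**(n-1)."""
    return 2 ** (n - 1)

def total_grains(up_to):
    """Total grains on squares 1..up_to; the geometric series sums to 2**up_to - 1."""
    return 2 ** up_to - 1

# If the pile grew in a straight line, square 60 would hold just 60 grains.
# Doubling instead puts 2**59 grains (roughly 5.8 * 10**17) on that one square.
```

This is why the King only realises his mistake far too late: the early squares look harmless, and the explosion happens at the end.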

Many people assume the world’s population will keep increasing and, if nothing is done, will reach unsustainable levels, meaning something drastic must happen to stop the trend getting any worse. However, UN data shows the rate of population increase is slowing. As living conditions improve, the number of children per family falls. Rather than drastic measures, growth can be controlled by combatting extreme poverty.

Rosling uses the example of a child. In their first few years, babies and toddlers grow rapidly. If you were to extrapolate that growth into the future, ten-year-olds would be much taller than they are. Of course, we know that doesn’t happen because we are all familiar with how the rate of growth slows over time.

When faced with unfamiliar situations, we assume a pattern will continue in a straight line. Instead, he suggests we remember that graphs come in many strange shapes, and straight lines are rare. For example, the relationship between primary education and vaccination is an S-curve; the relationship between a country’s income level and traffic deaths is a hump; and the relationship between income levels and the number of babies per woman is a slide.

We can only understand the progression of a phenomenon by understanding the shape of its curve. Assuming we know what will happen leads to erroneous assumptions and false conclusions which will in turn lead to ineffective solutions.

To control the straight-line instinct, we must remember that curves come in many forms and that we can only predict a trend once we understand the shape of its curve.

Chapter 4: The Fear Instinct

“Critical thinking is always difficult,” writes Rosling, “but it’s almost impossible when we are scared. There’s no room for facts when our minds are occupied by fear.”

This is why the ‘fear instinct’ can be so destructive. When people are afraid, their ability to tell fact from fiction falls off dramatically. Factfulness demands that we control our fear.

Never before has the image of a dangerous world been broadcast more widely and more effectively than it is today. We are frightened of almost everything, but the truth of the matter is that the world has never been safer or less violent.

Rosling starts the chapter by looking back to an old story from his days as a junior doctor. It was 1975 and news came in of a plane crash. The survivors were being rushed to his hospital; senior staff were at lunch, which left just him and a nurse to handle the situation.

It would be his first emergency and, in his panic, he mistook one of the survivors for a Russian pilot and became convinced Russia was attacking Sweden. He mistook a colour cartridge for blood and was narrowly stopped from cutting through a G-suit worth thousands of dollars.

Fear stopped him from seeing the situation for what it was. Our minds have an attention filter which decides what reaches them. This is useful: the world contains vast amounts of information and we have to filter it to avoid overload. However, what gets through tends to be the unusual or the scary.

This is why newspapers are full of events which are frightening. However, the more of the unusual we see, the more we become convinced the unusual is actually the norm. Our fear instinct has been baked into our minds by millennia of evolution. Fear kept our ancestors alive, but even though many of these dangers have gone, the perception remains.

The dangers are more real for people in the Level 1 and 2 income categories because they are more likely to face actual threats. For example, they might be more likely to be bitten by a snake, which might make them jump if they see a funny-shaped stick. For people in higher income levels, however, being bitten by a snake is much less likely, and even if they were bitten, they have access to good healthcare.

For them, the fear of the snake does more harm than good. Newspapers know these fears are hardwired into our brains so they use it to grab our attention. The same fears which kept our ancestors alive are keeping journalists employed today.


The number of deaths from natural disasters has fallen as countries develop better healthcare and infrastructure. Organisations such as the WHO and UN help victims in these situations, but people in Level 4 aren’t aware of their success because the media reports each disaster as the most serious in history.

It is important to look at things with a fact-based approach to make better use of resources. For example, the 2015 earthquake in Nepal, which killed 9,000 people, attracted global attention, while diarrhoea from contaminated water kills 9,000 children each year but receives very little attention in comparison.

2015 was the safest year in aviation history but this fact was not reported. The number of deaths from battle has fallen and so has the threat of nuclear war. All these good pieces of news slip under the radar.  

Following the tsunami in 2011, 1,600 people died escaping Fukushima, while nobody died from the radiation they were running away from. Fear of chemicals such as DDT leads to deaths from diseases which DDT could have prevented. The fear of an invisible substance does more harm than the substance itself.

Terrorism causes fewer deaths than alcohol but receives far more publicity.

Fear is a terrible guide for understanding the world. We pay attention to things we are afraid of but ignore things which can do us harm. Factfulness is knowing how to tell the difference between actual risks and perceived risks.

Chapter 5: The Size Instinct

From immigration to the number of deaths in hospitals, we consistently overestimate size. In this chapter, Rosling shows how this leads to a distorted view of reality and warps decision making.

He starts by going back to his time as a young doctor in Mozambique in the 80s when it was the world’s poorest country. One in 20 children died. He argued with a friend about the standard of treatment at the hospital. He felt they needed to provide better care outside of the hospital, but his friend believed he should concentrate on improving care within the hospital.

He decided to look at the number of children who died in the hospital compared to those who died outside. To his surprise he found that, while deaths were comparatively low inside the hospital, over 3,900 people died in the community. He therefore decided to go out into the community to provide better care to people who couldn’t get to the hospital.

When looking at a single number in isolation, it is easy to give it too much importance. For example, when he saw he was saving 95% of the children who came to the hospital it was easy to assume he was doing a great job. However, when he compared it to those who were outside the hospital, he realised he had to do more.

Journalists constantly give us single numbers and exaggerate their importance, which leads to solutions which do not help. At the hospital, people might have assumed that increasing the number of beds would reduce deaths but, with more information at their disposal, they realised the best path was to improve the levels of care and education within the community.

To avoid the trap of the size instinct, he recommends using the tools of comparison and division.

For example, two million children died before the age of one in 2016. This seems high until you compare it with the 1950s figure of 14.4 million. Infant deaths are falling, but you wouldn’t know this if you only looked at the first figure.

A Swedish hunter killed by a bear received more coverage than a woman killed by her husband. The first incident was a freak event, yet it received far more coverage than the second, which represents a far more common and serious risk. Rather than being concerned by domestic violence, the media became obsessed with an event which is unlikely to happen again for many years.

Another example comes from the swine flu epidemic, which killed 31 people in two weeks. In the same period, 63,000 people died from TB, yet it received almost no coverage.

The lesson is that we tend to overstate the unusual and ignore issues which are far more common. This distorts our world view and leads us to make poor decisions. Factfulness is understanding that just because something is more widely reported doesn’t make it more common.

Chapter 6: The Generalisation Instinct

As humans, we love to categorise and generalise everything. While this can help us to simplify our view of the world, it can also lead to distortions. Attributing one characteristic to an entire group based on one unusual example leads to serious misconceptions and can have quite serious consequences.

As he writes, the generalisation instinct “can make us mistakenly group together things, or people, or countries that are actually very different. It can make us assume everything or everyone in one category is similar. And, maybe most unfortunate of all, it can make us jump to conclusions about a whole category based on a few, or even just one, unusual example.”

For example, he begins the chapter by talking about his experiences working in the Congo, when he was presented with a less than appetising dessert made from larvae. To avoid eating it, he tried to convince his hosts that it was against Swedish custom to eat larvae.

Generalisations, he says, are mind blockers. They create a distorted view of reality and often lead people to miss important opportunities. For example, after polling financial experts, he found that they assumed most children in the world were not vaccinated before the age of one. In fact, the vast majority are. For that to happen, countries need a lot of infrastructure, the same kind of infrastructure required for factories and other forms of enterprise.


The belief persisted because these financial experts believed the images of extreme poverty presented about some countries in the media. As such, these experts were potentially missing out on investment and business opportunities because they believed these countries were more deprived than they were.

To combat the generalisation instinct, he suggests travelling. This helps you to get out into the world and gain first-hand experience of cultures as they actually exist, rather than the way they are portrayed in the press.

As with other chapters, questioning is vital. You should always question the different categories you are given. Look for similarities across and differences within groups; be suspicious of generalisations, and be aware when you are generalising from one group to another.

Many people generalise African countries but they are not all at the same level of development. This has enormous consequences. The Ebola epidemic in Liberia affected tourism in Kenya even though the two countries are thousands of miles apart.

Majority is an extremely blunt concept. It can mean anything between 51% and 99%, which does not produce a realistic picture of any situation.

Examples give a poor picture. Many people around the world suffer from chemophobia, the fear of chemicals, when in reality most are beneficial.


Assuming you are normal can lead to you generalising others and failing to understand the reasons behind their actions. What is normal to you is not necessarily normal to other people.

Factfulness is recognising what categories are used and keeping in mind that these categories can be misleading. While humans categorise and generalise everything, this can lead to stereotypes which can lead to poor solutions. By travelling and questioning assumptions, you can combat the problems caused by the generalisation instinct.  

He also focuses on what he calls the 80/20 rule: when looking at a long list, focus first on the few items which together account for more than 80% of the total. Looking at the world’s energy sources, for example, you might feel they are all equally important, but just three of them generate more than 80% of the world’s energy.

Divide by a total to get a clearer idea of the situation. If you look at the total emissions produced by each country, it might seem that China and India produce more CO2 emissions than Germany and the USA, but if you divide by population, you’ll see that the USA and Germany produce more emissions per head than either China or India. A single number does not provide a clear picture of any situation.
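The divide-by-population step can be sketched in a couple of lines. The figures here are made up purely for illustration, not real emissions data:

```python
# Hypothetical countries: one with a large total and a large population,
# one with a smaller total but a much smaller population.
emissions = {                 # name -> (total emissions, population), arbitrary units
    "Bigland": (10_000, 1_000),
    "Smallland": (4_000, 100),
}

# Dividing each total by its population gives emissions per head.
per_head = {name: total / pop for name, (total, pop) in emissions.items()}
# Bigland's total is larger, but Smallland emits four times as much per person:
# per_head == {"Bigland": 10.0, "Smallland": 40.0}
```

The single total points one way; the per-head figure points the other, which is exactly the distortion division is meant to expose.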

Chapter 7: The Destiny Instinct

When Rosling gave a presentation to a group of capitalists and wealthy individuals about the opportunities of emerging markets in Asia and Africa, he was surprised by their reaction.

At the end of the talk he was approached by what he describes as a ‘grey-haired’ man who told him, in no uncertain terms, that there was no chance African countries would ever make it. Despite all the positive data about economic progress he’d seen in the presentation, this man believed there was something about African society which meant it was destined to be poor.

This is an example of the destiny instinct, and its roots go back to the earliest history of man. As humans, we often assume that a nation, group or culture’s destiny is determined by the general characteristics we believe it shares. For example, white Europeans are destined to be developed and wealthy while black Africans will always be poor.

This attitude persists even in the face of contradictory data. Not only do people believe it to be true, but they assume there is nothing that individuals in this culture can do to change things. Destiny, as they say, is all.

The instinct stems from evolution, when people lived in small groups and didn’t travel very far. It was safer to assume things would stay the same, as this spared them the need to constantly re-evaluate their surroundings. It’s also a great way to unite a group.

However, today’s societies are constantly changing. These changes occur gradually which gives the perception that things are staying the same. As such, gender equality is perceived to have remained unchanged around the world, but in reality, things have largely improved.

There is also an idea that Africa is destined to remain poor. However, most African countries have reduced their infant mortality rates faster than Sweden did. Asian countries have moved into the category of developed nations and many countries have escaped extreme poverty.

It was also assumed that the number of babies a woman has depends on her religion, with religious women more likely to have large families than those with no religion. However, careful analysis of the data shows that the number of babies depends not on religion but on income levels.

Other examples include the rise of support for women’s rights and liberal ideas in Sweden. Concepts which might have been far from the mainstream in the past are now commonly accepted.

To control the destiny instinct, he suggests that you:

  • Remember that slow change does not mean no change at all.
  • Be ready to update your knowledge; knowledge is never constant in the social sciences.
  • Collect examples of cultural change.

Factfulness is the art of recognising that many things appear to be static just because change is happening more gradually. Groups are not defined by their innate characteristics and, just because something has been true once, it doesn’t mean it is set in stone for the future. Like many of the other attitudes discussed in this book, the destiny instinct is hard coded into our evolution, but while it might once have been helpful for our ancestors, it is holding us back today.

Chapter 8: The Single Perspective Instinct

We live in a world in which many people have strong opinions. However, when these opinions are set into a single world view, we can become blind to any information which contradicts us. This is the problem of the single perspective instinct and it can make it very difficult to understand reality.

“Being always in favour of or always against any particular idea makes you blind to information that doesn’t fit your perspective,” writes Rosling. “This is usually a bad approach if you like to understand reality.” 

The single perspective instinct can be alluring. It is the preference for simple explanations and simple solutions to the world’s problems, but life is somewhat more complicated. This single cause and solution view creates a completely warped view of the world.

There are two main reasons why we do this: professional and political bias. Professionals will always see the world from the perspective of their own expertise. To someone with a hammer, everything looks like a nail. Even experts in their own field can be wrong.

A poll of women’s rights activists found that only 8% realised that 30-year-old women have spent only one year less in school on average than men. They were so intent on seeing the situation as bad that they ignored the progress their own efforts have brought about.

He warns against relying on the media to form a world view. It’s like forming an impression of a person just from his or her feet. The foot is far from the most attractive part of the body, and it doesn’t give you a fair representation of the rest of the person.

In the same way, the media tends to present the worst of the world; the disasters, the catastrophes and the crime. If you form your world view based on media reports, you’ll imagine the situation is much worse than it actually is.

Campaigners often paint the world as getting worse, seemingly unaware that progress is happening. If they ditched the attitude that things are only getting worse, they could garner more support for their cause.

People love the idea of being able to point to a single cause and single solution. For example, many use numbers to illustrate all sorts of issues, but numbers do not always point to the best solution, nor do they help you understand the reality behind them.

He recalls a conversation with the Prime Minister of Mozambique who said he believed the economy was making progress. Rosling argued that the data did not show this, but the Prime Minister replied that he did not solely rely on numbers to measure progress. He’d look at the shoes people wore and construction projects taking place. If the shoes were old, it meant that people didn’t have money to replace them. If construction projects were overgrown with grass, it suggested there wasn’t enough money being invested.

There is never a single explanation to any situation. It limits your imagination. Factfulness is realising the limitations of this perspective and finding ways to view situations from a wider range of viewpoints. This will help people to develop a more accurate understanding of the world they live in and to challenge their own world views. 

Chapter 9: The Blame Instinct

We live in a culture which loves to apportion blame. This instinct assigns a clear cause to an event and finds someone who was at fault.

It’s a comforting approach. When things go wrong it is nice to think it’s because of bad people with bad intentions, but that seldom tells the entire story. We attach a lot of importance to individual groups or people, but it also stops us from understanding the world.

Once we find someone to blame, we stop looking for the actual cause of the problem and focus on punishing the person we think is at fault. The result is that we’re unable to prevent it from happening again.

For example, we might want to blame a plane crash on a pilot who falls asleep, but this does not stop another pilot from falling asleep in the future. Instead we should be looking at why the pilot fell asleep in order to stop it from happening again.

Hand in hand with the blame instinct comes the tendency to credit someone for an achievement even if the reality is more complicated. Someone has to take the blame or be given the credit. We love to point fingers if it confirms our beliefs.

Even Rosling himself is not immune. When UNICEF hired him to investigate a company given a contract for malaria drugs, he became convinced that the company was acting improperly. Even before he finished his investigation, he started pointing fingers. In reality, the company was honest but simply had an innovative business model.

The same principle is at work when looking at the number of migrants killed trying to cross the sea into Europe. It is common to blame the smugglers who traffic these people across the sea in small craft which are routinely and dangerously overloaded. In reality, though, the problem is Europe’s immigration policies, under which an airline that brings illegal immigrants into a country must pay for their repatriation.

Airlines cannot tell whether someone is truly an illegal immigrant in the few minutes before boarding, so they ban anyone who doesn’t have a visa. This means that refugees who have a right to enter Europe under the Geneva Convention cannot do so by any legitimate means. They are forced into the hands of the smugglers by official European policy.

Any boat which brings refugees by sea is confiscated by the authorities, which is why smugglers use cheap dinghies: they cannot afford to lose a larger boat.

On the other hand, we can also be in a rush to give credit to a single person or law. China’s low birth rate is often credited to the one-child policy, but rates had started to fall before the law came into force. The decline was instead down to institutions and technology that were already in place.

Factfulness is the ability to recognise when someone is being scapegoated and to understand that this stops people from creating viable solutions for the future. It is easy to look for a simple culprit when something bad happens, but doing so stops us developing a fact-based view of the world and coming up with a solution that actually works.

Chapter 10: The Urgency Instinct

Rosling’s tenth and final instinct is one which can bring all the others to the fore: the tendency to take urgent action to solve a problem. While this might have served us well in the past, it can cause us to make rash decisions based on incomplete information.


The urgency instinct is embedded in our evolution, and in times gone by it served us very well. If you think there is a lion in the grass, you don’t want to spend time analysing your options; you simply start running. Running is always the safest option.

It can also be useful today. If you’re driving and someone slams on the brakes, you’ll have to take drastic action to avoid a crash. However, in today’s modern world, it can often create problems.

While Rosling was working as a doctor in Mozambique, a disease broke out that paralysed patients within minutes and sometimes caused blindness. He wasn’t certain it was infectious, but the mayor didn’t wait to find out. He ordered the military to set up roadblocks to prevent buses from reaching the city. To get around the roadblocks, women asked fishermen to take them to the city by sea. It was a dangerous journey; many of the boats were overloaded, and some capsized, causing the deaths of women and children.

After some research, they discovered the root cause was eating improperly processed cassava. So while the mayor believed his prompt action was the safest approach, it actually caused a number of deaths that could have been avoided.


The principle of ‘now or never’ causes people to conjure a worst-case scenario. It kills the ability to think things through and encourages bad decision making. This is why salespeople come up with limited-time offers: by giving you a deadline, they introduce a sense of urgency into your decision making in the hope you’ll be rushed into a purchase.

To compel people to take action, activists often try to trigger the urgency instinct. They stress how urgent a problem is and paint a worst-case scenario of what will happen if you don’t act now. However, Rosling believes this is counterproductive.

Fear and exaggerated data can numb people to the risks campaigners are warning about which can lead to complacency and inaction.

For example, most countries say they are committed to fighting climate change but aren’t tracking their progress. It raises the question: how can they truly fight climate change if they don’t measure their progress?

Rosling tells us that the world faces five serious risks: global pandemic, financial collapse, world war, climate change and extreme poverty. The first two have happened before, while the last two are happening now.

They need to be approached with cool heads and data analysis rather than fear and urgency. Exaggeration is like crying wolf: it can lead to these risks being ignored despite their dreadful consequences. We must worry about the right things.

The idea of factfulness is to remember that things might not be as urgent as they seem. If you’re afraid and under pressure to act quickly, you are likely to make bad decisions. We must take a breath, take action based on data and be very wary of fortune tellers who insist they know what’s going to happen. Although the world’s problems need to be solved it is not always a good idea to take urgent or drastic measures.

Chapter 11: Factfulness in Practice

The final chapter brings everything together and demonstrates how all the lessons explained so far can be put to practical use. We see each of the ten instincts on show and how factfulness can lead to truly positive real-world solutions.

To demonstrate, Rosling takes us to a remote village in the DRC. He had travelled there to investigate a disease caused by eating unprocessed cassava. The villagers believed he was collecting their blood to sell, and they were angry.

All the instincts discussed in this book were on display. The sharp needles and blood triggered the fear instinct. The generalisation instinct made the villagers categorise him as a plundering white man. The blame instinct caused them to assume he had malicious intent, and the urgency instinct convinced them they had to act immediately, in this case by threatening him with machetes.

Things might have worked out very badly for both him and his translator if it hadn’t been for an old woman who successfully calmed the crowd down and explained that he was trying to help them. Although she was illiterate, she was bringing all the core principles of factfulness to bear in a very dangerous situation. As such, she managed to save both their lives.

We can bring factfulness into our daily lives in the same way, in business, education, journalism and our communities. Children should be taught a fact-based approach to life, which will help them develop a better view of the world and create better solutions. They should be taught to be curious and to hold two different ideas at the same time, and they should be willing to alter their opinions in the light of new facts. This will protect them from ignorance.

A typo in your CV can keep you from getting a top job, yet policymakers routinely place a billion people on the wrong continent. Businesses have distorted world views and fail to understand that markets are growing in Africa and Asia. Being an American company is no longer a privilege that automatically attracts employees and customers. If investors relinquish their preconceptions about Africa, they may realise that it contains some of the best business opportunities in the world.

With a more factful attitude, journalists may become aware of their dramatic world views and present something more accurate and useful. However, it is a bit of a stretch to expect them to truly embrace all the principles of factfulness and start reporting the mundane alongside the unusual. The onus is on people to learn how to consume news in a more factful manner.

If you are ignorant at the global level, there’s a good chance you’ll also be ignorant at the local level, within your own community, company or organisation, managing your business and preparing for disasters with erroneous data.

Leaders in companies, cities, countries and organisations should carry out fact-based surveys to uncover ignorance within their organisations. Only then can they develop a more accurate and realistic world view. Factfulness can be put into practice throughout our daily lives: at home, at work and in our communities.

Key takeaways from Steve Jobs’ life based on Walter Isaacson’s biography

This is an analysis based on Steve Jobs by Walter Isaacson and other sources of research. Enjoy.

Location Really Does Matter For Entrepreneurs:

You need to be in the right place at the right time. Being exposed to many ideas, variables and potential inputs for accidental discoveries is better than living in a risk-averse environment. In high school, Jobs took an electronics class, which would have been unlikely in most other cities in the US or Canada. Steve Jobs was fortunate to be raised in Silicon Valley, and given that location it is less of a mystery why Jobs became who he was. Defense contracts during the 1950s shaped the history of the valley; military investment was used, for example, to build cameras that flew over the USSR. Military companies were on the cutting edge and made living in Silicon Valley interesting. In the 1930s, Dave Packard moved to the area, and his garage became the birthplace of Hewlett-Packard. By the 1960s, HP had 9,000 employees, and it was where all engineers wanted to work. Jobs was ambitious enough at a young age to phone HP’s co-founder and ask for some parts; that’s how he got a summer job there. Moore’s Law emerged in Silicon Valley, and Intel was able to develop the first microprocessor. Financial backing was also easier to acquire in a place the wealthy retired to. With chip technology whose costs could be projected forward, Jobs and Gates would use this metric to revolutionize the technological world.

 

Childhood Shapes Your Thinking:

Jobs was never interested in cars, but he wanted to hang out with his dad, who emphasized the importance of building quality products and loved souping up cars. For Paul Jobs (Steve’s father), the interior of a product was as important as the exterior. Eichler homes, with their simple, elegant designs, were common in Silicon Valley. Paul also taught that you should know more than the person you bargain with. Paul Jobs could not succeed in real estate because he was unwilling to sell and be likeable. By his teens, Jobs realized he was smarter than his parents. Steve Jobs was willful, and his parents would go to great lengths to feed his every whim by deferring to his needs. Steve Jobs got into a fight with his dad over smoking marijuana, and by his senior year Jobs was experimenting with sleep deprivation, LSD and other drugs.

Jobs was fascinated by the need for perfection in technology. Later, in the 1980s, he argued that even the parts you can’t see should be done well. Jobs wanted the Macintosh motherboard to be beautiful, so he had members of his team sign the circuit board. Steve Jobs became more interested in electronics than in car engineering, in particular the laser technology his father was working on at Spectra-Physics.

 

Go Get What You Want, If You Have The Courage:

The 9100A, the first desktop computer, was a huge machine that Jobs saw at the Explorers Club he participated in. Jobs built a frequency counter as part of the club, but he needed a special part, so he phoned the home of the CEO of HP and spoke with Hewlett directly for over 20 minutes. This conversation got Jobs a summer position at HP; he had pushed his way into the factory. Steve Jobs mostly hung out with the engineers, but he worked in the electronic components section of HP.

Steve Jobs walked into the lobby of Atari in sandals and demanded a job as one of Atari’s first 50 employees, at $5 an hour. Jobs was very intelligent and excited about technology. Nolan Bushnell had used the power of his personality to build Atari, and Steve Jobs learned that skill in part from Bushnell. Steve was a prickly person with horrible body odor. He was brash and, at Atari, told many of his co-workers they were “dumb shits.” Atari didn’t mind his horrible BO because Jobs was aggressive, smart and worked hard. However, Jobs was put on the night shift so that no one had to deal with him during regular work hours.

 

Education Is For Conformists:

Steve Jobs was not interested in memorizing information but in being stimulated. He was sent home repeatedly. Jobs began to excel when he was incentivized by a game-changing teacher, Imogene Hill (“Teddy”), who bribed him into doing math problems in exchange for lollipops. She further invested in Jobs with cameras and other toys. Steve Jobs was able to convince another kid to give him her Hawaii shirt for a school photo; he knew how to get others to do things for him early on. Jobs was moved up a grade for his brilliance. He was not a straight-edged student, however.

Assume That You Will Die Young:

Jobs believed that he was going to die young. He worked extremely hard because he was certain that he would be dead at an early age.

The Cream Soda Computer:

In 1973, while drinking copious amounts of cream soda, Wozniak built a machine that displayed binary code. Wozniak was emotionally and socially inexperienced, a high-school geek who cared more about computers than people. Wozniak knew more electronics than Steve Jobs, while Jobs was more mature, so they met in the middle. Wozniak and Jobs both listened to Bob Dylan, whose words struck chords of creative thinking for them both, and they bootlegged many Bob Dylan concerts. They even worked as entertainers in Silicon Valley, dressing up as clowns to perform for kids.

 

Go To India:

Steve Jobs went to India to deepen his meditation practice. Jobs sought spiritual calm but could not find his inner calm in Silicon Valley. He spent seven months in India being mentored in meditation. Back home, Jobs found a spiritual leader in Los Altos, and through meditation he learned how to tune out distractions. His friends noticed that Jobs became self-important. Steve Jobs also engaged in primal screaming, which helped him work through his childhood pain. Jobs appreciated intuitive spirituality and wanted to grow in that way. You need to avoid getting stuck in thought patterns that are really just chemical patterns in your brain; by age 30, many people cannot escape their own grooves. You need to be able to throw yourself out of them, according to Jobs. Artists go and hibernate somewhere. To be truly innovative over time, you need to think outside the box and escape yourself.

 

Pranking People Requires Creative Thinking:

During a high-school pep rally, Steve Jobs and Wozniak unfurled a banner with a huge hand flipping the middle finger at the seniors as the graduating class marched past. This got Steve Jobs suspended. Jobs was interested in pranking his classmates and even put a small explosive under one of his teachers’ desks. Their most effective prank was scrambling TV frequencies with a remote control. Wozniak and Jobs would hide in the bushes while university students were watching television.

On cue, the TV would be scrambled with a small device Woz had built, and one of the students would get up to fix it. Wozniak played around so that the student would be compelled to hold an awkward position in order to keep the signal clear. Wozniak’s device was highly effective at manipulating people.

 

Starting A Company Is Very Difficult:

If you’re not passionate about what you are doing, you will give up; to succeed you need to be passionate and hardworking. It turns out that Woz and Jobs were not trying to build a company at first but were simply trying to build a computer they wanted. They had not gone to business school and didn’t even know what the Wall Street Journal was. They just wanted to build a computer that they and their friends could use.

Meet A Brilliant & Noble Engineer:

Jobs was fortunate to meet Steve Wozniak, who believed in engineering as the highest and most noble activity. ‘Woz’ did not believe in marketing and did not aspire to the limelight. Their meeting was truly fortunate. Wozniak’s father taught his son how to build circuits at an early age; he also taught ‘Woz’ never to lie, except in the service of a good practical joke. Wozniak had an easier time making eye contact with a circuit than a girl. He built an intercom that allowed six kids to communicate with each other, read about new computers in his spare time, and focused on designing circuits. He was socially shut out in high school. Wozniak worked on designing computers with half the number of chips companies had used in their blueprints. Jobs had inferior tech skills but had other advantages, like charisma and persuasiveness.

 

Meetups Bring Insanely Great Ideas Together:

The Homebrew Computer Club did not conform to the Hewlett-Packard mold, or to the hierarchical business structures of the UK, Japan or Germany. In Silicon Valley there were study groups building computers at creative meetups. These were essentially self-fulfillment movements in which everyone shared ideas and everybody gained from the exchange. To most people, computers were ominous government machines that would destroy life’s values; by the mid-1970s, however, computing was no longer seen as a mechanism of bureaucratic control but rather as a liberating one.

The Altair computer became available from MITS in 1975, and Bill Gates started building a BASIC interpreter for it, which would become the first software product from his company, Microsoft. Jobs and Wozniak bought the Altair as well, in order to learn how it worked.

Borrowing ideas was how Wozniak developed the Apple I, which he sketched out between 1975 and 1976. Since the Intel 8080 was so expensive, Wozniak bought microchips that were not Intel-compatible. This incompatibility would later prevent Apple computers from working with other software products without modifications. Wozniak built on the shoulders of previous processor chips, and he wrote the code by hand. When he had built the prototype and the letters displayed correctly on the screen, there was great excitement. It could not have happened in New York, London or a small city in France. Innovation is geographically situated: you need to meet the right people and be in the right place for this kind of success.

 

Knowing What You Want To Do Early On Is Not Great For Entrepreneurs:

Steve Jobs wanted to go to Reed College because Stanford students already knew what they wanted to do. Reed had a high dropout rate; its students tuned in, turned on and dropped out. At Reed, Jobs did a lot of drugs, and he continued to swear by the importance of taking LSD. Jobs refused to attend the classes he was assigned, focusing instead on the classes he was interested in and on breaking the rules. Jobs decided that spending the college fund his parents had saved was unfair, so he dropped out, but he didn’t want to leave Reed. Remarkably, Reed allowed him to stay and audit classes. Steve Jobs learned about typography and found it fascinating. Jobs rejected the lack of idealistic vision in the 1980s and believed in the importance of the counter-culture movements of the 1970s.

 


Steve Jobs Excluded Relevant Information Where Necessary: 

Wozniak was at HP but would come by to play the new Atari games because Jobs was working at Atari. In 1975, Bushnell asked Jobs to design a single-player game in which the player knocks out a wall of bricks by bouncing a ball off a paddle, instead of competing against a computer or another player. The head of Atari knew that Jobs could not build such a program himself, but he also knew that Jobs would likely enlist the help of Steve Wozniak. A bonus was offered for every chip under 50 that the design saved. Jobs told Wozniak that the project needed to be completed within four days and said they would split the payment. Wozniak was so enthusiastic that he worked hard to get it done on time. The deadline was a false one: Jobs wanted to go apple picking that weekend.

Jobs did split the base payment for the project, but he failed to mention the bonus for using fewer than 50 chips. The design used 45 chips, so Jobs received 100% of a bonus that Wozniak did not know about. Ten years later, a history of Atari revealed that Jobs had been given the bonus, and Wozniak was shocked. The program was the basis of the final product, which was wildly successful as an arcade game. Wozniak said, “I’m not going to judge Steve’s morality. Apple wouldn’t be where it was without Jobs’ manipulative nature.”

 

Have Discipline Over Body & Mind:

Steve Jobs took up disciplined fasting, sometimes eating nothing but apples. He believed that minimalism led to great rewards when encountering complexity, and that experience is relative. Vegetarianism, acid, rock music and the enlightenment-seeking campus culture at Reed College formed a laboratory for Steve Jobs’ development. Jobs had extremely terrible BO in college because he did not believe in using deodorants or other chemicals. At Reed, Robert Friedland mesmerized him; Jobs learnt from Friedland about charisma and the art of persuasion. Friedland was an LSD dealer and was sentenced to two years in prison in 1972. When he was released, Friedland ran for student president at Reed. Friedland had met the Maharaji in India, and through him Jobs learned how a state of enlightenment could be attained through practiced meditation. Steve Jobs developed the ability to stare deep into people’s eyes. Friedland showed Jobs how to create the reality distortion field by bending a situation to his will; he was dictatorial, wanted to be the centre of attention, and was a real salesman. Jobs said LSD helped him understand his connection with human history and the absence of any need for profit. Steve Jobs was hardly interested in presenting himself in a proper way throughout the early years of Apple Computer Inc.

Picking A Name Is As Simple As Picking Apples:

Steve Jobs was on a fruitarian diet and picked apples at the All One Farm, a hippy commune. Apple Computer was a smart choice of name because it was friendly and simple. It was counter-culture, yet nothing could be more American. Apples and computers don’t go together, so it got people thinking.

Crime Does Pay!!!???: 

If you own an Apple product then you are complicit in supporting crime, kinda but not really. However, we forget that sometimes rules have to be broken in order to innovate. Read the following and see if you agree that we might never have heard of Apple Computer without an illegal gadget called the Blue Box.

Steve Jobs and Wozniak Created Through Illegal Activity



Crime Does Pay? Paul Jobs (Steve’s adoptive father) made extra money by souping up cars without telling the IRS, and this was duly noted by Steve. When it came to borrowing, Steve Jobs didn’t mind using his high school’s money to buy parts from a major company; after all, to Steve, his school had a lot of money. By 1971, Steve Wozniak had read in Esquire about hackers and ‘phone phreaks’ who had invented a way to cheat phone companies. Woz read the article to Jobs over the phone from college. The so-called Blue Box was invented by a guy named Captain Crunch. It was interesting because the device mimicked the dial tones used to connect long-distance calls, thereby allowing calls to be made for free. Jobs and Wozniak went to work reading AT&T’s Bell System Technical Journal in order to reproduce the tones Captain Crunch’s device generated. Of course, this was all illegal.

After much research and work, the two Steves created their own Blue Box, which allowed them to call the Pope, Australia and elsewhere free of charge. Jobs always felt that stealing long-distance calls was fair when a company like Bell was involved. Although it was illegal, Jobs believed they could sell these devices, and they did manage to sell over 100 of them, with Jobs doing all the pitching to random people in the Palo Alto area. It was their first real entrepreneurial endeavour. In an illegal market like telephone hacking, however, there were risks: in one encounter, Jobs and Wozniak were robbed of a device by a crazed man who held them up at gunpoint. By doing something illegal, Steve Jobs and Wozniak gained confidence that they could put a product into production. The Blue Box gave them a taste of combining engineering and vision. The lesson, it turns out, is that crime does pay when the work is the forerunner of something like Apple.

Sharing Ideas Is Fine Up To A Point:

The Homebrew Computer Club (a collection of computer enthusiasts) believed that ideas should be shared, exchanged and disseminated. It was coordinated by people who believed that like-minded nerds should share information for free, and that there should be no commerce at the club. Wozniak supported that view; he wanted the Apple I design to be shared for free with other people at the Homebrew meetings. Others disagreed. Bill Gates wrote a letter to the Homebrew Computer Club saying the opposite: that they should stop stealing the software he and his partners had created.

The letter argued, “Who can afford to do their professional work if everyone is stealing it?” Steve Jobs agreed with Bill Gates. Jobs convinced Wozniak to follow a closed approach and to sell computers rather than give them away, asking Woz to stop sharing the Apple I schematics with others. Jobs decided to sell the computers by having 50 circuit boards made. The closed system had major benefits later in his career: starting in 1999, Apple created iMovie, Final Cut Pro, iDVD, iPhoto, GarageBand and, most importantly, iTunes and the iTunes Store. The personal computer was morphing into a lifestyle hub, and only Apple was positioned to create a fully closed experience in which the product was simple and enjoyable. Sharing, therefore, is great only up to a point.

 

Most Good Ideas Have To Be Forced Down People’s Throats:

Wozniak did not want to go into business, but Jobs convinced Woz to join Apple. First, though, Wozniak decided he would do the ethical thing and tell Hewlett-Packard about the Apple I, which he had built based on his experience and training at HP. Wozniak presented the Apple I to executives at HP, but they did not think a personal computer made any sense. During one Homebrew Computer Club gathering, Jobs showed the Apple I, and after his presentation he asked how much people would pay for it. The room was silent; no one was interested in buying the Apple I. No one, that is, but Paul Terrell, who owned an electronics store called The Byte Shop. Jobs even pitched Atari, but they thought he was a clown.

Apple’s first order was from Terrell: a total of 50 computers at $500 each. It took until 1981 for IBM, which had dominated the mainframe computer industry, to enter the personal computer market. By then Apple was the fastest-growing company in history to that point and was already developing both the Lisa and the Macintosh.

Another example: researchers at Xerox PARC had invented the graphical user interface (GUI), a visual point-and-click system that would replace the black-screen coding previously required to operate a computer. The only problem was that Xerox management did not want to explore this personal computer technology. They did not understand the vision of the PARC researchers and could not see a P&L statement that justified the time and energy to make the leap from photocopying to personal computers. Steve Jobs would later call Xerox management “copy-heads.”

Adele Goldberg showed Jobs the Xerox GUI, though she was angry that Xerox was allowing Jobs to see ‘everything.’ She understood that Xerox had “grabbed defeat from the jaws of success,” as Jobs put it, by giving him access to their R&D work in exchange for shares in Apple. Without Jobs’ visit to Xerox PARC, the Macintosh and Lisa would not have had the GUI, and Bill Gates might not have subsequently revolutionized computers with Windows.

Run Your Company Out Of Your Parents’ House In Order To Appear Like A Real Company:

In order to fulfill their first order from Mr. Terrell’s electronics store, Jobs ran Apple out of his parents’ house. This was complicated by the fact that Jobs’ father would frequently insist on watching the end of Sunday football instead of letting Steve test computer chips on the family TV screen. Things were awkward; they even had a company phone number that diverted calls to Steve’s mother, who acted as secretary.

A curious brand marker has been the much-vaunted Apple logo. Interestingly, the original logo was a ridiculous picture of Newton with a quote from Wordsworth. For the Terrell batch, Jobs and Wozniak marked up the price from production cost to $666.66 for every Apple I sold. Steve Jobs claimed he was a private consultant at Atari in order to improve his start-up’s credibility. The original Apple I was displayed at a computer fair. Wozniak was the best circuit engineer, but the Sol-20 was better looking; the Apple I looked like it was not created by serious people. That is when Jobs realized he needed to build a fully packaged computer, aiming no longer at hobbyists but at people who wanted a computer ready to run out of the box.

Jobs and Wozniak agreed to start their own computer company with $3,000, though Apple actually started with $1,300 of working capital. Wozniak was excited to start a company with Steve Jobs. Wozniak wanted to use his Apple work at HP, but Jobs insisted the work be controlled within Apple, not given to HP. Jobs’ idea was to have control over the computer, and he created tools so that no one but Apple employees could open their machines. Wozniak refused to leave HP at first, so Jobs pressured him by calling Woz’s family and friends, even crying over the phone, in order to convince Wozniak to quit his day job. The only way to get Wozniak on board was to let him stay at the bottom of the organizational chart at Apple from 1977 onwards. That was not a problem for Jobs.

 

Mike Markkula’s Marketing Theory Is Built Around Three Ideas:

First, you need to connect with your customers and understand their needs and aspirations better than any other company. Second, you need to focus and eliminate any activities that do not help you achieve your goal. Third is to impute: you need to make sure your brand is respected, because people form their opinion of you based on the signals you convey. You might have the best product, but if you present it in a slipshod manner you will not get what you want. Steve Jobs took imputing to heart: he cared about the packaging and about setting the tone for how customers perceived the product.

McKenna’s Advertising Style Worked: The Apple logo was developed as a multi-colour symbol. The brochure read “Simplicity Is the Ultimate Sophistication.” Apple’s display area at computer fairs was always very impressive. Only three Apple IIs were finished for the 1977 computer fair, but they stacked up Apple II boxes to suggest they had more. Steve Jobs and Wozniak were made to dress up and were coached on how to act by Markkula.

 

Don’t Worry About A Business Plan Until You Need Investment In A Serious Venture: 

Mike Markkula entered Apple because Jobs needed money to get the Apple II built: they needed to build inventory, develop a marketing strategy and arrange distribution, and for that they needed a business plan. Markkula had worked in computer chips, was excellent at finance and pricing, and was already very successful; when he showed up, he drove a convertible. He wrote a business plan centred on guesstimates of how many people would own a computer in their home. Markkula also wanted Apple to balance its checkbooks and keep receipts. The spirit of Markkula’s prediction came true.

Markkula co-signed a bank loan of $250,000, took roughly a quarter of the stock, and Apple was incorporated in January 1977. He believed Apple was at the start of an industry, and the company grew at an incredibly fast rate. The numbers were mind-blowing: 2,500 Apple IIs sold in 1977, 8,000 in 1978, and 35,000 in 1979. Remember, there had been no market for personal computers before! The company earned $47 million in revenue in fiscal year 1979, making Steve Jobs a millionaire on paper (he owned $7 million worth of private stock). Markkula believed Apple would go public within two years; it went public on December 12, 1980 at $22 per share, making over 300 people millionaires. Several VCs cashed out, reaping enormous long-term capital gains. Through Markkula, Jobs learnt about marketing and sales. Importantly, Markkula did not want to start a company just to get rich.

 

Your Product Needs To Be A Full Simple Package:

Jobs pitched Atari for support for the Apple II, which had colour, a power supply, and a keyboard. He was rejected, partly because he went to the meeting without shoes. Commodore decided it would be cheaper to build its own machine; the Commodore PET came out nine months later and, according to Jobs, it sucked. Jobs had been willing to sell to Commodore, but Wozniak felt that was a bad move. They designed a simple case for the Apple II to set it apart from other machines; Jobs wanted light molded plastic and offered a consultant $1,500 to produce the design. The Apple II had the advantage of not requiring a fan or multiple jacks, and VisiCalc, the spreadsheet, later allowed the Apple II to break into the business market. Jobs wanted a closed system, a computer that was difficult to pry open. Wozniak, conversely, wanted to give hackers the chance to plug in, but Jobs did not want that option.


Steve Jobs endorsed the view that less is more and that God is in the details. He embraced the Bauhaus style, rejecting Sony's gun-metal and black look in favour of hi-tech products packaged in beautiful, simple, white casings. Apple understood the value of presentation straight out of the box: you design a ritual of unpacking the product. Jobs also felt that computers needed to work the way people intuitively think.

Jobs' Management Style Was "Shit" From 1977 Until His 1985 Firing:

Steve Jobs loved to tell people that their work was shit, and he would force co-workers to pull all-nighters to finish applications. When Apple started to take off in 1978-1979, he would come into the office and tell Wozniak's engineering team that they were all shit. This further distanced the two, as Wozniak felt that Jobs had become abusive and had changed. Jobs cried easily, and he would put his feet in the toilet bowl in the middle of the day to wash them. For more stability, Michael Scott was brought into Apple Computer, Inc. as president; Scott was fat, had tics, and was tightly wound.

Scott was argumentative, and Jobs clashed with him. Jobs produced conflict; he was only 22 years old, but Apple was his company and he did not want to relinquish control. Jobs and Scott fought over employee numbers: Scott assigned Wozniak badge number 1 and Jobs number 2, so Jobs, wanting to be first, demanded badge number 0, though on the payroll he remained number 2. Scott was a pragmatist while Jobs was not; Jobs once started crying over the one-year warranty for the Apple II. At 26, Jobs had a successful company and the Apple II. In 1981, he was kicked off the Lisa project and took over the Macintosh so that he could make a contribution comparable to Wozniak's.

Once at Macintosh, Jobs was considered a dreadful manager. Jef Raskin (who had headed the Macintosh team and disagreed with Jobs on most issues) said the following about Jobs:

  • a) Jobs missed most appointments;
  • b) Jobs acted without thinking and with bad judgment;
  • c) Jobs attacked any suggestion without thinking, calling it stupid and a waste of time, only to turn around, if the idea was good, and propose it as his own a week later;
  • d) Jobs would never give credit where it was due;
  • e) Jobs would cry when conflicts erupted in boardroom meetings.

Michael Scott was fired as he became more and more erratic, giving Jobs more power. In retrospect, the New York Times wrote: "By the early '80s, Mr. Jobs was widely hated at Apple. Senior management had to endure his temper tantrums. He created resentment among employees by turning some into stars and insulting others, often reducing them to tears. Mr. Jobs himself would frequently cry after fights with fellow executives."

A Startup Will Become Impersonal With Success:

Wozniak wanted Apple to be a family, while Jobs wanted the company to grow quickly. Jobs felt that Wozniak had failed him: Woz appeared unfocused and never finished a floating-point BASIC for the Apple II. The Apple II launched the personal computer industry; Wozniak had created the machine, and Jobs designed the exterior and marketed it effectively. Jobs wanted to spur a great advance in computing, which meant hiring more and more people, and he became increasingly disrespectful towards slackers and B players within Apple. The point is that you can't really keep a family environment in a startup that scales, and you need to scale in most competitive industries because the big players will try to destroy you at every turn. If you want a family-like atmosphere, good luck to you, but expect to fail.

Apple III Was A Bastard Child Idea: 

The Apple III was a failed project, and it was not marketed well. The design Jobs insisted on left too little room for the circuit boards, and because the whole Apple team collectively made contributions to the device, it was a gigantic mess. Jobs insisted there be no fan in the computer; as a result the design did not allow it to cool properly, and it frequently overheated. The only way to reseat chips that had worked loose from the motherboard was to drop the computer onto a desk, which customers were instructed to do whenever they phoned Apple: "Okay, just pick the computer up and drop it on the desk; that should knock the chips back into place." The IBM PC crushed the Apple III in sales. It was a disaster.

Being Abandoned = Ignoring Reality & Discrediting That Reality:

Steve Jobs had an illegitimate daughter whom he initially refused to recognize as his. How did that happen? In the mid-1970s, Jobs lived in a four-bedroom house and rented rooms out to strippers. Chrisann Brennan lived with Jobs in a separate room; apparently they lived as weirdos and did acid. When Chrisann became pregnant with Steve's child, he disconnected from the situation and did not deal with the pregnancy; he could be engaged and disengaged within minutes. Instead of taking responsibility, he engaged in character assassination against Brennan and tried to prevent a paternity test, to avoid dealing with the possibility of bringing a child into the world. He decided to believe his own lies, according to Isaacson. Steve and Chrisann were 23 when they had their child, the same age Jandali (Steve's biological father) had been when Jobs was born. Jobs did not try to help Chrisann; instead he would ridicule her.

Walter Isaacson speculates that being abandoned by his biological parents led to this heartless, irrational behaviour, though the argument is not entirely convincing. Another classic example of ignoring reality: when Jobs was diagnosed with cancer, he waited nine months before pursuing surgery. Ignoring reality was how Jobs got through tough times.

Robert Friedland helped Chrisann Brennan have her baby girl, but Steve Jobs helped name the child, insisting on the name Lisa Nicole Brennan. Finally, a year later, Jobs agreed to a paternity test, which found him 94.1% likely to be the father, and a California court ordered him to pay monthly child support of $385. Despite the test, he claimed at Apple that there was a large probability he wasn't the father, using statistics improperly: Jobs claimed that 28% of the male population of the US could have been the father. When Chrisann heard what he said, she interpreted it as Jobs claiming she had slept with 28% of the US male population.

 

Good Artists Borrow, Great Artists Steal:

'The best way to predict the future is to invent it' was one of Steve Jobs' favourite sayings. Jobs was granted access to Xerox PARC, established in the 1970s as Xerox's digital R&D spawning ground in Silicon Valley. One of its products was the Xerox Alto, whose interface went beyond text-based command lines (a black screen plus typed commands) and gave the computer a desktop called the Graphical User Interface (GUI): everything on the screen was visually represented by icons. Meanwhile at Apple, Jef Raskin brought Bill Atkinson on board in the Macintosh division to develop a cheaper version of the Lisa, but of course Jobs wanted to get on the front of the wave and "make a dent in the universe". Jobs began to exert more influence on the Macintosh project, which was Raskin's brainchild. Jobs hated Raskin because he was a professor and abstract thinker, and Raskin controlled the Macintosh project, which Jobs saw as his own way forward.

In 1979, Jobs let Xerox buy 100,000 Apple shares at $10 per share in exchange for access to Xerox PARC. When Jobs saw the demo of the GUI, he was amazed that Xerox had not commercialized its innovations: 1) networking, 2) object-oriented programming, and 3) the mouse and GUI. With this one visit, Steve Jobs had found the way to connect users to the future through the GUI, and a way to leapfrog over Raskin's plans for the Macintosh. Jobs was proud of stealing great ideas from Xerox, but what transpired was less a heist by Apple than a fumble by Xerox: Xerox was too focused on photocopiers and selling more machines. Ideas are important, but execution and positioning are also crucial. Microsoft would subsequently 'steal' the GUI concept from Apple, but in reality Bill Gates had also visited Xerox PARC.

 

Surround Yourself With "A Players":

In the early 1980s, Jobs recruited people by dramatically unveiling the Macintosh and watching how interviewees responded to the design. He even unplugged engineer Andy Hertzfeld's computer while he was coding, forcing him over to the Macintosh from the Apple II team, because Jobs recognized Hertzfeld's A-player status. You need to build your company with a collaborative hiring process, where a candidate tours the company meeting everyone relevant to the hire. Why? Because Jobs did not always have A-player ideas himself. For example, he wanted to call the Macintosh the 'Bicycle', because like an actual bike it would help humans achieve objectives not possible on their own. The idea of the Apple Bicycle was shot down by wiser marketing minds. A players hold you in check.

When Wozniak crashed his airplane in February 1981, he stepped away from Apple. After the launch of the Macintosh in 1984, Sculley merged the Macintosh and Lisa teams with Jobs as their head. Jobs told the Lisa team that he was firing 25% of them because they were B and C players, while the management of the Macintosh team gained all the top positions in the amalgamation. It was unfair, but Jobs latched on to a key management lesson: you had to be ruthless to produce an A-player lineup.

For Jobs, hiring a B player causes a 'bozo explosion': B players want to hire people who are inferior to them, and C players hire D players. So keep the best people on your team, and make sure you keep the right people in your organisation. He believed that if you let any B players into your organization, they would attract other B players. A players love to work with other A players; by definition, they want to grow and be the best. That is what makes A players valuable.

 

Reality Distortion Field:

The reality distortion field was empowering. Bud Tribble said in the early 1980s: "Steve has a reality distortion field. In his presence, reality is malleable. He can convince anyone of practically anything. […] The reality distortion field was a confounding melange of a charismatic rhetorical style, an indomitable will, and an eagerness to bend any fact to fit the purpose at hand." It was self-fulfilling: you did the impossible because you believed you could. Jobs could deceive even himself, which allowed him to con other people; the tactic helped make irrational goals real. The rules didn't apply to him. He was a liar, and the Reality Distortion Field is a creative way of saying so. As a child Jobs had been rebellious, and this plays into his sense of being special, abandoned, and unique. If you trusted Jobs, he would make it happen.

That is the great part about the reality distortion field: if you pretend to be completely in control, people will believe you are, and they will be empowered. Jobs was so passionate about Apple and NeXT devices that his force of personality let him change people's minds like a salesman, using charismatic rhetoric and bent facts. The reality distortion field was never acutely apparent, but Jobs lied quite a lot during team meetings.

As a result, it was difficult to keep realistic deadlines, since bending facts has its downsides (think wasteful factory decorations and missed product dates at NeXT). Jobs did not like manuals, and told Gates in 1984 that Macs should not have any; Gates did not bother mentioning that Microsoft had an entire team working on manuals for the Mac. Gates was completely immune to Steve Jobs' reality distortion field. When reality hit, Jobs had a difficult time dealing with it.

 

Be At The Nexus of Humanities and Technology:

Connecting the arts with technology is powerful. Jobs practiced Buddhism and meditation, and it is evident that Buddhism was instrumental in his development of Apple: simplicity is important for a company, and keeping things simple is essential to producing a user-friendly product that even the parents of baby boomers can use. Jobs straddled both worlds early. He took an electronics class in high school with John McCollum, and in his senior year he took AP English and loved King Lear, Plato, and Dylan Thomas; he worked in electronics while learning about literature.

At Reed, Jobs audited a typography class, which he later argued was responsible for the Mac having beautiful typefaces and proportionally spaced fonts. Jobs understood that creative people are disciplined even though technology people think they are lazy, while technology people do not know how to communicate intuitively and have created a secret language that excludes ordinary people. Jobs bridged that gap through his life's work. Producing something artistic takes real discipline.

 

The Belief In A Closed System & Product Control:

The architecture and software had to be tightly linked, according to Jobs; functionality would be sacrificed if one allowed for multiple software producers. While Microsoft's software could run on anyone's hardware, Jobs refused to have Apple computers fragmented by the work of partners who did not follow Apple's rules. At the customer level, Jobs refused to allow users to alter the product, pitching the idea that Apple products were more user-friendly (which they were); he did not want to give users control.

The closed system worked well in the iPhone era, but not from 1981 until the mid-to-late 1990s, with IBM (Big Blue) and Microsoft working across platforms; Apple's competitive advantage in the PC market eroded dramatically in the early-to-mid 1980s. By scaling across multiple hardware platforms such as IBM PCs, Dell, and Compaq, software developers had an open alternative to the closed Apple system. Bill Gates realised this closed-system problem in the early '80s and exploited it. Jobs wanted end-to-end control so that software developers had to buy into the Apple system, but critical mass was essential for that to work. In 1982, Jobs wanted Apple software plus Apple hardware to be the industry standard; he did not want the sales cannibalization that comes with letting other computers run the Apple operating system. For developers, though, the labour required to work within Apple's ecosystem was prohibitive compared to the gains from working in the open PC world. In 1997, Jobs admitted that Apple had been overly proprietary and had failed to see how that hurt its marketability from 1984 to 1997. In the 2000s, the closed system regained the advantage as Apple became a premium, closed brand that worked carefully with third parties.

 

Market Research Is For Idiots:

For Steve Jobs, Apple was about producing what people did not know they wanted yet. Being innovative meant producing what he believed was needed; he was not interested in focus-group testing his products. He once asked, "Did Alexander Graham Bell create a focus group before inventing the telephone?" Customers will simply ask for a better, cheaper version of what they already know; focus groups do not tell you what customers actually need, because customers do not know what they want.

 

Macintosh As The 3rd Industry Standard:

Bill Gates' Microsoft appeared in Hawaii for the software 'dating game'. The Macintosh was a product Gates felt was revolutionary. The ideal relationship for Apple would have been Gates working exclusively with Apple, but that was not Gates' strategy: he wanted to be a competitor and wrote software for the IBM PC as well. In 1982, 279,000 Apple IIs were sold compared to 240,000 IBM PCs, but in 1983 it was 420,000 Apple IIs versus 1.3 million IBMs and clones. IBM had taken 26% of the market, and IBM plus compatible clones would soon take over half of it.

Motivate With The Big Picture:

Steve Jobs was not interested in out-profiting competitors but in producing a better, more beautiful computer. The Macintosh team was burned out, riven by conflict, and demoralized, but Jobs had moments of brilliance. To counteract the negatives of his management style, he would invoke the big picture. In one meeting, the issue was the boot time of a new computer, which was over a minute long. Jobs explained that if you added up the boot times of a million users, the waste came to many, many cumulative hours; in dramatic terms, he argued that shaving a few seconds off the boot time could save about 50 lifetimes in total.
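The arithmetic behind Jobs' claim can be sketched in a few lines. The per-boot saving, the years of use, and the 70-year human lifetime below are illustrative assumptions, not figures from the book, chosen to show how "a few seconds" can plausibly add up to roughly 50 lifetimes:

```python
# Hypothetical numbers illustrating Jobs' boot-time argument.
users = 1_000_000            # installed base cited in the synopsis
seconds_saved_per_boot = 10  # assumed saving per boot, one boot per day
days_per_year = 365
years_of_use = 30            # assumed lifespan of the installed base

# One human lifetime, assumed to be 70 years, expressed in seconds.
lifetime_seconds = 70 * days_per_year * 24 * 3600

total_saved = users * seconds_saved_per_boot * days_per_year * years_of_use
lifetimes = total_saved / lifetime_seconds
print(round(lifetimes))  # → 50
```

Whatever the exact inputs, the rhetorical move is the same: multiply a tiny per-user cost by millions of users and years of use until it becomes a human-scale number.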

'Making a dent in the universe' was the overarching idea behind Apple. In 1981, IBM released its own personal computer, and Apple was confident about its market position. The problem was that IBM was a more powerful company, with real strength in the corporate establishment and brand recognition. Big Blue's vision was to crush Apple, and IBM was the perfect foil for Apple's spiritual struggle. Jobs felt that once IBM gained control of a market sector, it almost always stopped innovating. For Jobs, IBM was a force of evil; later the enemy was Microsoft, and subsequently Google.

 

Unhealthy Competition Within A Company Can Be Corrosive:

Entrepreneurs do not always transition into effective managers. Steve Jobs had a pirate flag waving over his Macintosh office at Apple. The Lisa team, jealous of this renegade team, stole the pirate flag as a prank; the Macintosh members then found the secretary who was hiding the flag under her desk and wrestled it from her. This bizarre corporate behaviour had a negative effect: it said that Jobs' team was better than the others, and it was divisive within the company.

Jobs would not allow Apple II employees to visit the Macintosh office. He wanted people to know about the Macintosh, but he also wanted everyone else at Apple to know that they sucked, even though the Apple II was generating the company's revenue. Jobs' Macintosh team seemed to be trying to destroy Lisa because Jobs had been kicked off that project.

The Lisa team did feel that the Macintosh was undercutting Lisa: once it was announced in 1983 that the Macintosh was on the way, people were going to wait for it before buying their next Apple product. In the PR campaigns, Jobs admitted that the Macintosh would be better than Lisa, and that within two years Lisa would be too expensive and obsolete. Within months of Lisa's launch, Apple had to pin the company's hopes on the Macintosh.

 

The Best & Most Innovative Products Don’t Always Win:

During their close partnership with Apple in 1983, Microsoft's team members wanted to know everything about the Macintosh operating system. Gates believed that the GUI was the future, and he claimed that since the Xerox Alto was the foundation of all personal computers, Jobs was stealing the idea anyway. By November 1983, Gates admitted that Microsoft planned to launch its own graphical operating system on all IBMs and clones.

The product was called Windows, and Steve Jobs was furious. Part of their partnership from 1982 onwards was that Microsoft would not develop such software for the IBM until a year after the Macintosh's planned January 1983 launch. Unfortunately, Apple did not launch the Macintosh until January 1984, so Gates was within his rights to proceed with licensing to IBM. Gates came down to Apple, and Jobs assailed him: "You're ripping us off! We trusted you." Gates put it well: "We both had this rich neighbour named Xerox, and I broke into his house to steal the TV set and found out that you had already stolen it." When Gates showed Jobs what he had developed for Windows, Jobs did not complain that it was stealing; he told Gates right to his face that Windows was a "piece of shit." Jobs was almost crying about it, and went on a long walk in November 1983. Apple and Microsoft were now in serious conflict. Windows was not launched until 1985 because it was not very good, but Microsoft improved it over time, and by 1995 it was dominant. Until Jobs' return in 1997, there was a dark period of Microsoft dominance in the computer industry, according to Jobs. The open approach Microsoft adopted, working with multiple hardware partners, proved better because it put Microsoft on multiple platforms for scalability. Meanwhile, other Apple developers began working with clones as well.

 

Eras Are Defined By Partnerships & Rivalry – Gates Versus Jobs Round 1:

Two high-energy college dropouts ended up shaping the commercial PC market. Bill Gates had created a class-scheduling program and a car-counting program while in high school. Gates was logical, practical, and analytical, while Jobs was design-driven and less disciplined. Gates was methodical in his business style, humane in person, but could not make eye contact. Gates was fascinated by Jobs' mesmerizing persona but saw him as rude and cruel; Jobs always maintained that Gates should have dropped acid to open his mind to creativity. The only thing Gates was open to was licensing Microsoft software to Apple, but not on an exclusive basis. Jobs long believed that Gates was not a creative person and that Gates ripped off other people's ideas, or at least had no original ones. Meanwhile, Gates derisively called the Macintosh "S.A.N.D.", i.e. "Steve's Amazing New Device". Gates also mentioned that he did not like Jobs' management style, as Steve had a tendency to call his own co-workers idiots on a regular basis.

The rivalry went beyond the personal. In 1982, Apple's sales were $1 billion, while Microsoft's were $23 million. Jobs treated Gates as though he should be honoured to work with Jobs, which was insulting; from Jobs' perspective, Gates did not understand the elegance of the Macintosh. There were about 14 people working on the Macintosh itself, while Microsoft had some 20 programmers writing applications for it. The rivalry was deep, and probably spurred innovation forward for that reason.

 

Genius Versus Shit-Head:

For Steve Jobs, you were either a genius or a shit-head/bozo. He sought absolute perfection and loved to sort people according to this rubric. Jobs tended to be high-voltage: he might say that an idea you proposed was a 'piece of shit', then turn around and propose the same idea as his own a week later. Sometimes he would take your position in an argument and agree with you, just to mess with you. Jobs could not avoid impulsive opinions, so his Macintosh team learned to moderate them rather than react to the extremes of 'piece of shit' or 'genius'. In the 1981-1985 period, Atkinson taught the Macintosh team to interpret "this is shit" as the question "tell me why this is the best way to do it." Steve had a charismatic personality and knew how to crush people psychologically, and his huge expectations created a fear factor on the Macintosh team. But if you demonstrated that you knew what you were talking about, Jobs would respect you. From 1981 onwards, an award was given annually to the employee who best stood up to Jobs. One marketing specialist won it after standing up to Jobs twice over unrealistic marketing projections in 1981, at one point threatening to stab him in the heart.

The Boardroom Showdown & Emotionality:

In May 1985, the boardroom meeting to demote Jobs from the Macintosh division was nasty. Jobs presented his case first, saying that Sculley did not care about computers, but a manager retorted that Jobs had been behaving foolishly for over a year. Sculley then presented his case for demoting Jobs, stating that he would either get his way or the board would need a new CEO; Jobs, he said, should be transitioned slowly out of the management role at Macintosh. Jobs felt betrayed by Sculley. He was emotionally unstable, even believing he could repair the friendship, while at the same time spending much of his time plotting against Sculley amid his career crisis.


Advertising Does Matter:

The 1984 Ridley Scott advertisement, entitled "1984", affirmed a desired renegade style, attaching Apple to rebels and hackers. Ironically, Apple was a controlled system: Jobs believed in total control. Initially the ad was not popular with Apple's board; Markkula and Sculley thought it was the worst commercial ever and that it should not run during the Super Bowl. They were proven wrong by the ad's timelessness. The next advertisement, shown at the Super Bowl in January 1985, insulted business people by depicting them walking off a cliff, as if to suggest they were blindly following the IBM brand. There was little reaction, and in truth it was a blunder, since it insulted the very market it was trying to reach. Apple performed poorly in 1985; the ad was not the cause of that outcome but a symptom of Apple's situation, as IBM was expanding immensely.

Frame Your Business Around War – Big Blue Versus Apple:

During the 1984 Apple shareholder meeting, Jobs set the stage for the epic conflict between IBM and Apple, asking: "Will Big Blue dominate the entire computer market? Will they control the entire information age? Was George Orwell right?" These rhetorical questions helped inspire his company. After all, IBM had not had the vision to buy Xerox's copier technology in the 1950s, and computer dealers feared IBM dominance on pricing. For Jobs, it was Apple versus evil; Apple was the only hope against Big Blue. With that frame of mind, Apple could do anything. The Macintosh was finally launched on "time" in January 1984.

 

A Messy Company Can Still Work:

When Sculley joined Apple, he was surprised at the disorder and at the bickering between Jobs and the Lisa team over a) why Lisa had failed, and b) why the Macintosh had not been launched in 1983. Sculley felt that Apple was 'like a household where everyone was running to the beach when there was an earthquake, only to discover a tsunami approaching that forced them back into the house' (Isaacson). Things weren't great on the numbers side in Sculley's first year as CEO either: he had to announce at the 1984 shareholders meeting that 1983 had been a bad year for Apple. It was. Competitors were entering the market with cheaper products that were not as user-friendly as Apple's but were still semi-useful machines. The Apple balance sheet still showed major growth, but IBM had launched the PC, and from 1981 onward the many lower-priced clones on the market were eroding Apple's competitive advantage.

But the Macintosh was marketed as "the computer for the rest of us" and would refocus Apple's efforts away from its core Apple II and Lisa product offerings. Apple's future looked bright: there were 25 million information workers in offices across America whose work had not changed much since the industrial revolution, and until the personal computer the only desktop product people used was the phone. Apple hoped its market share would expand with the unveiling of the Macintosh; 1984 would prove pivotal for Apple's future (to be continued). Below is the balance sheet from the January 24, 1984 Apple shareholders meeting. Apple was a chaotic startup turned revolutionary full-fledged company; it was messy from the standpoint of senior management, but generally Apple worked.

The Apple Computer, Inc. Balance Sheet, Fiscal Year 1983:

Cash and Investments          $143,000,000
Receivables (Net)             $136,000,000
Inventory                     $143,000,000
Other Current Assets           $47,000,000
Total Current Assets          $469,000,000
Net Fixed Assets               $67,000,000
Other Assets                   $21,000,000
Total Assets                  $557,000,000

Current Liabilities           $129,000,000
Long-Term Liabilities          $50,000,000
Shareholders' Equity          $378,000,000
Total Liabilities & Equity    $557,000,000

 

A Clean Factory Is Insanely Great But The Product Has To Sell:

Fremont, California was the location of Apple's new automated factory, overlooking the Ford manufacturing facility; Apple had been more profitable in its early years of existence than Ford, a miraculous company indeed. Jobs spent time going over the machines in the new factory in 1984, at one point demanding that the team repaint them for aesthetic reasons. The repainting actually damaged the machines, however, and corrections proved costly. The factory had white walls and beautiful machines, and Jobs believed it was a way to instil a passion for Apple among employees. He was influenced by Japanese manufacturing, with its sense of team and discipline. Debi Coleman, a Stanford MBA, was the operations manager. But by the end of 1984 Macintosh sales were very low: Apple had an expensive factory and a failed product.

 

Being Right Isn't As Important As Winning:

Renegades weren't such a problem for Steve Jobs; in fact, he respected those on the Macintosh team who stood up to him, provided they knew what they were talking about. Often, when they disagreed with Jobs, they realized they could simply ignore his commandments, and in doing so spare him the embarrassment of a mistake or bad judgement. One such incident involved the Twiggy disk drive, which was defective in the Lisa. The alternative was a 3½-inch disk drive designed by Sony. Sony's disk drive factory in Tokyo struck Jobs as dirty, so he wanted to go with Alps, a competing manufacturer that had made a clone of the Sony product. Jobs decided to do a deal with Alps, but Bob Belleville, behind Jobs' back, hired Sony in secret without his approval.

Belleville hired Komoto, who was tasked with building a disk drive for the Macintosh in 1982–83, but Belleville did not want Jobs to know about this backup plan while the official collaboration with Alps, the Japanese manufacturer, was under way. Whenever Jobs came through the Macintosh office, Komoto was quickly escorted into a closet or under a desk, where he would hide for a few minutes at a time. In May 1983, the Alps team in Japan failed to deliver their disk drive and asked for an additional 18 months to work out the problems. It was a disaster: Mike Markkula grilled Jobs about what he was going to do, with the Macintosh launch potentially being pushed back to 1985. Belleville saved Jobs by interjecting that he had a disk drive ready, thanks to the secret work done without Jobs’ approval. Jobs swallowed his pride and appreciated the renegade behaviour. The inference: in management, winning is more important than being right.

Bringing In An Outside Expert Can Be Costly:

Steve Jobs was too rough-edged to be Apple’s CEO, so Markkula and Jobs went shopping for an alternative. They looked away from the tech sector to find a marketing genius. John Sculley was an outsider with corporate polish: an expert in management and a consumer marketer. He had invented the Pepsi Challenge campaign at Pepsi and was good at marketing and advertising. Sculley was struck by how poorly marketed computers were in the early 1980s. He did not actually like computers, which seemed to him too much trouble, but he was enthusiastic about selling something more interesting than Pepsi.

Sculley decided that Apple should work on the idea of “enriching their users’ lives.” Sculley had been good at generating PR and excitement around Pepsi, and that ability to generate buzz would later be replicated by Steve Jobs in the unveiling of new Apple products. Initially the two hit it off very well in their meetings about Sculley joining Apple; both admitted to being smitten with each other over big ideas about computer technology. Jobs knew how to manipulate Sculley’s insecurities to his advantage. The two seemed to understand each other, and they became friends and emotional confidants. The problem, according to a former Apple manager, was that most marketing people are paid posers: Sculley did not actually care about computers, only about marketing and selling an idea to the public.

When Jobs showed Sculley the Macintosh, Sculley was more interested in Jobs’ presentation skills than in the computer itself. Sculley claimed to share Jobs’ goals but was not fully enamored with the product. Jobs believed Sculley was the person who could teach him the most, and Sculley successfully sold Jobs on the idea that he was right for Apple. Jobs famously asked him: “Do you want to spend the rest of your life selling sugar water, or do you want a chance to change the world?” Sculley received a $1,000,000 salary and a $1,000,000 signing bonus as the new CEO of Apple in April 1983.

 

The Original Macintosh Had Bad Sales:

During the planning for the release of the Macintosh, marketing costs needed to be factored into the price, according to then-CEO John Sculley. Sculley said the $1,999 price was too low because the marketing budget required more spending to sell Apple to the masses, so they set the Macintosh’s price at $2,499. Steve Jobs argued that this price was the reason the Macintosh did not sell well in 1984. After the second quarter of 1984, Macintosh sales started to slump. The machine was dazzling but slow and not powerful enough, and with only two applications available there was a major software gap. It was beautiful, but the Macintosh was starved for memory: the Lisa ran on 1,000K of RAM, while the Macintosh had only 128K. There was also no internal hard-disk drive.

Jobs wanted a floppy disk drive instead. The Macintosh also had no fan, so it overheated easily. When people became aware of the flaws, reality hit. By the end of 1984, Jobs made a strange decision: he took unsold Lisas, grafted on a Macintosh-emulation program, and sold them as a new product. He was selling something that wasn’t real; it sold well, and then it had to be discontinued once the extra Lisas were gone.

The distribution system did not respond to demand effectively, and there was an unintended inventory backlog. The Macintosh simply did not sell well enough to justify a production level of one computer every 23 seconds. This would later help Jobs realise that a just-in-time inventory strategy would be better suited; such a strategy was Dell’s competitive advantage.

On balance, Jobs’ marketing from 1977 to 1985 was brilliant, but there were patchy points; not everything Apple did at the marketing level in that era was genius. We always talk about the 1984 commercial, but consider the worst Apple ad ever, from 1985, whose message amounted to: “you corporate hacks are buying IBM computers without really thinking.”

 

Fall From Grace Through Management Incompetence:

Sculley thought that Jobs was a perfectionist, while Sculley himself, by Jobs’ recollection, didn’t care about products at all; he did not learn quickly in his new role, focusing instead on marketing and management. Sculley also seemed clueless that Jobs was manipulating him with flattery, preoccupied as he was with keeping people happy and tending relationships. Outside of Apple, the market responded negatively to the Macintosh, and by mid-1984 into 1985 a crisis was growing. By early 1985, managers had told Sculley that he was supposed to run the company and be less eager to please Jobs; Jobs, meanwhile, was told to stop criticizing other departments at Apple, behaviour that was becoming difficult to stomach. Sales in the first quarter of 1985 were only 10% of projections. Management changes were on the horizon.

Steve Jobs’ abuse of others increased, through character assassinations and intense, direct criticism, and it was coupled with a quickly declining market share. Many middle managers rose up against him. Noting the tension, Jobs asked Sculley whether he could create a Macintosh in a book-like format while also heading an “Apple Labs” project, a new R&D offshoot of Apple. From Sculley’s perspective, if Jobs agreed to leave the Macintosh division, this solution would solve the management issues and remove Jobs’ presence from Apple’s head office. Jean-Louis Gassée would move in to take over the Macintosh, but only if he could avoid working under Jobs. The problem was that Jobs did not want to quit the Macintosh; he wanted more responsibility, running both the Macintosh and the new R&D project.

By mid-1985, Apple executives had started to blame Jobs for the miscalculated Mac sales forecasts, and resentment built up over his management style. Mike Murray, Jobs’ lieutenant in marketing, wrote a memo for Sculley summarizing Apple’s problems and laid much of the blame on Jobs, a coup considering Murray’s closeness to him. Murray pointed out that Jobs had built a controlled power base within the company through a strategic alliance of high-value employees. When Sculley confronted Jobs, he said that Jobs’ approach at the Macintosh division wasn’t going to work. Jobs countered that Sculley had not spent enough time teaching him, an excuse offered against the demotion Sculley was proposing, i.e. starting an R&D division outside of Apple. Jobs was erratic: he would reach out to Sculley, then lash out at him behind his back. He would phone one manager at 9pm to discuss Sculley’s poor performance, then phone Sculley at 11pm to say he loved working with him. The end of the line for Jobs was approaching quickly.

 

Being Vindictive Is Part Of Leadership:

In 1985, Jobs refused to pay a $50,000 bonus to Macintosh engineers who went on vacation during the bonus-awarding period. Andy Hertzfeld quit because he no longer liked the Macintosh team, or Jobs. Woz and Jobs were no longer friends either: Jobs shot down Wozniak’s universal remote control company, Cloud 9, by arguing that the design agency should not be allowed to work with third-party companies such as Woz’s. Steve Wozniak left Apple, saying the company had not been run properly for the past five years. Jobs was vindictive, and convinced himself that Woz’s remote-control designs were a problem because they resembled other Frog designs used on Apple products. Similarly, in 1999 Adobe refused to write programs for the iMac, so when the iPhone was released, Jobs refused to allow Flash on Apple’s products, arguing that it ate too much battery power, when the core problem was really that Adobe had screwed Apple in the past. In other words, as far as Steve Jobs was concerned, being vindictive is part of business leadership.

Rolling Stone PR Stunt:

Apple wanted to build a relationship with Rolling Stone magazine, and in the early 1980s Steve Jobs pitched to get Apple on the cover, but the magazine rejected the idea. In response, Jobs told a Rolling Stone journalist that the magazine was a piece of shit and needed to find a new audience of people who cared about technology.

 

Finding Similarities Between Yourself & Your Business Partners May Not Be Good:

John Sculley and Steve Jobs were both supposed perfectionists, and they were self-deluded about each other. In truth they had different values, and Sculley did not learn quickly. Jobs managed to manipulate Sculley into believing he was exceptional, while being secretly astounded at Sculley’s deference. Sculley would never yell at employees or treat them horribly as Jobs did. Jobs tried to find similarities between himself and Sculley in order to justify choosing Sculley as Apple’s CEO. Thinking in this way is a mistake.

Eras Are Defined By Partnerships & Rivalry – Gates Versus Jobs Round 2:

As Jobs stepped into the limelight again at MacWorld 1997, he announced a partnership with Bill Gates’ Microsoft, stating that a zero-sum game between Apple and Microsoft was not the way forward. Gates had taken the graphical user interface idea from the Macintosh (itself borrowed from Xerox PARC), but had struck a deal with Sculley not to release a GUI until after 1988. When Windows 2.0 was released, Apple sued Microsoft unsuccessfully for IP theft. By 1997, Gates was refusing to help Amelio with a word processor for the Mac. When the Clinton administration began building an antitrust case against Microsoft over its near monopoly (particularly its destruction of Netscape) and other unethical business practices, Jobs told a Justice Department official to press on, if only to give Apple room to develop an alternative.

Steve Jobs closed a simple deal with Gates: Apple would stop suing Microsoft over stolen IP, while Microsoft would take a $150 million stake in Apple in non-voting shares and produce Microsoft Office and Internet Explorer for the Mac. At MacWorld 1997, the decision to work with Microsoft was very controversial, and there was a public-relations gaffe Jobs would later regret. When introducing Gates, Jobs had him beamed into the auditorium via satellite, and Gates appeared on a giant projector screen overlooking the audience like a powerful overlord, or Big Brother.

Force An Ultimatum To Get Control Of A Company:

The Friday executive meeting in May 1985 was where Sculley confronted Jobs about the attempted coup. Jobs told him: “You are bad for Apple, and the wrong guy; you don’t know how to develop products. I wanted you to help me grow, and you have been ineffective in helping me.” Jobs said he would run Apple better, so Sculley polled the room, “It’s me or Steve. Vote,” with each person explaining who would be better for Apple. Everyone supported Sculley, and Jobs started to cry. Jobs left Apple with his core Macintosh staff, and Sculley was very upset about what had happened. Sculley’s wife confronted Jobs in a parking lot and told him he had nothing behind his eyes other than a bottomless pit.

Targeting The Education Market Is Not Lucrative:

In September 1985, Steve Jobs announced to the Apple board that he would be founding a new company of his own focused on a computer for the higher-education market. This was a strange decision, since education is not as lucrative as other markets, but he saw a niche for himself. Apple dominated the education market, and Jobs took with him key people who would be useful to his goal, people who carried proprietary information about Apple’s future plans in that sector. He raided key employees in a somewhat vindictive manner; even Markkula was offended at how ungentlemanly he was behaving. So Apple sued Steve Jobs for (a) secretly taking advantage of Apple’s plans for a product, (b) secretly undermining Apple by recruiting its people, and (c) secretly being disloyal to Apple by using its information.

Never Tell The Allies Of Your Opposition That You’re Planning A Coup:

As the summer of 1985 approached and Jobs was being moved out of his leadership role as head of the Macintosh division, he begged Sculley to reverse the boardroom decision. Sculley refused, arguing that Jobs had failed to get another Macintosh out to market. On Tuesday, May 14, 1985, with the board present, Jobs was defiant, arguing that it was fine for the Apple II and Macintosh teams to develop two different disk drives. Jobs begged Sculley again, in front of the board, not to move him out of the role, and Sculley said no. The die was cast. Sculley was planning a trip to China to launch Apple’s opening to the Chinese computer market, so Jobs planned his coup around the Memorial Day weekend when Sculley would be away, canvassing for the support needed to swing the board against him.

The board was largely with Sculley. Jobs revealed his plans to Jean-Louis Gassée, the very man Sculley intended to replace Jobs with. Naturally, Gassée told Sculley, who immediately cancelled his trip to China. Jobs refused to accept the reorganization of Apple that recast him as a product visionary; he did not want to play ball, and he was excluded from management reports. It was a personal and career disaster for Jobs.

 

How To Save A Dying Tech Company – Return To Your Successful Roots:

Jobs believed that killing the Macintosh clones was the way forward in 1997. He felt that licensing the Mac OS to third-party hardware producers had been a mistake and that software licensing was Apple’s biggest battle. The trade-off was that by keeping a closed system, Apple had to manage its own software development, while Microsoft dominated by producing software that was cross-platform. The clones cannibalized Apple’s own computer sales even though clone makers paid Apple an $80 software fee per machine sold. Jobs believed hardware and software should be integrated; he wanted to control the user experience from end to end. With this return to Apple’s roots, Jobs set a course for a closed, highly controlled user experience that had both pros and cons.

 

Avoid The Problem Of Focusing On The Small Battles & Not Seeing The Big Picture:

In October 1988, the NeXT launch was an amazing event. After three years of consulting with universities across the country, Jobs was betting the company on new technology. Every minor detail was analysed and reworked as release windows passed for the NeXT computer. In seeking out the best technology, Jobs built a highly advanced product, though the NeXT had no floppy disk drive, which was rare for the era. NeXT was bankrolled by the lavish use of Jobs’ own money, and he targeted the higher-education market. The problem was that while the features were great, the price was $6,500. At the launch, applause was scattered when Jobs announced the price tag; the academics in the audience were extremely disappointed because the machine was too expensive, and the education-sector representatives were shocked at the cost given the feedback NeXT had no doubt received. A price has to be low enough to scale the product into universities; otherwise the sales pitch has to be extremely aggressive. The price shock was reflected in the sales.

Instead of focusing on price, Jobs’ team focused on features and other details, and universities didn’t buy the product. Pricing a product is essential; most of the NeXT’s features were trivial to buyers. In addition, too few people were interested in building software for the NeXT, which, combined with the price, was a massive deterrent: the NeXT was effectively incompatible because few developers were writing the software needed to use it. Jobs’ strategy was to target the workstation industry, where Sun was dominant. It failed, and in 1991 NeXT stopped making hardware, much as Jobs had given up hardware at Pixar. By the mid-1990s, NeXT was working exclusively in the operating-system market.

 

Gain Financial Control Against Your Business Partners:

Pixar needed to challenge Disney’s dominance in animation. Toy Story’s success was heavily associated with Disney, which frustrated Jobs because Pixar had created the film: Pixar ran the production, and Disney was merely the distribution channel, yet Disney was taking all the credit. With Toy Story the top-grossing film of 1995, there was a need to take Pixar public.

When Pixar was in trouble in 1988, Jobs needed to fire people, which he did with a complete lack of empathy: the company was failing partly because its mass-market animation hardware did not sell well, and he gave the redundant employees two weeks’ notice, retroactive from two weeks before the date of termination. Fast forward to 1995: Pixar’s stock hit $39 per share on the first day of the IPO, and Steve Jobs’ stake, a huge portion of the company’s value, was worth about $1.2 billion. With the success of the IPO, Pixar could assert a co-branding relationship with Disney rather than being just a contract studio. Jobs fought to make sure Pixar was every bit as valuable as Disney, which later resulted in a Disney takeover at a huge valuation.

 

Art Reflects Reality:

Jobs bought Pixar from Lucasfilm and became its majority stakeholder in 1986. Pixar was technology meeting art, perfect for Jobs, who wanted to live at the intersection of the humanities and technology. He dug into finance and strategy in the late 1980s to familiarize himself with the bean-counting side of business, and he spewed out all kinds of ideas, crazy and good, at Pixar meetings. He even tried to sell hardware and software via a digital-animation product called RenderMan, but it did not sell well. In the early 1990s, John Lasseter came up with Toy Story. Originally, Woody was a nasty character (who acted rather like Steve Jobs), but the team eventually changed the story so Woody was no longer mean, and after much difficulty with Disney the film was very successful. A Bug’s Life tells the story of an ant with all kinds of crazy and good ideas who gets in trouble with his colony and is expelled; he goes out, finds a solution to the colony’s grasshopper problem, and ends up saving it. It basically follows the life pattern of Steve Jobs, who was fired from Apple only to return triumphantly.

 

Rivalry Of The Ants & Breaking With Disney:

Antz, the DreamWorks film featuring Woody Allen’s voice, was not a huge success, but it was made to challenge the Pixar/Disney production A Bug’s Life. Katzenberg (DreamWorks) wanted to copy Pixar’s ant movie, so Hollywood had two ant movies being made in the same year. Katzenberg had had a falling out with Disney in the mid-90s after being responsible for productions like The Little Mermaid and Aladdin. Later, Finding Nemo became the most popular DVD and grossed $867 million, of which Pixar made $521 million under its arrangement with Disney. Pixar produced the films, and Disney was the distribution channel.

 

Build A Board That Cannot Operate Independently of the CEO:

During his transition back into the leadership of Apple, Steve Jobs recruited Larry Ellison and other board members who were all loyal to him. This allowed Jobs to take complete control of the company and gave him the breadth of control needed to execute the long list of changes required to fix Apple. Once the board was set, Jobs became CEO of Apple at a salary of $1. The next step was rebuilding the company. Instead of organizing Apple around divisions with independent bottom lines, as in the original product-line model, Jobs created a cohesive structure he could directly control from the top down. He could interact with smaller teams that were in constant dialogue with each other rather than locked in painful competition. Instead of a competitive, bureaucratic structure where teams fought each other, Apple became a heavily top-down organisation.

Do Not Chase Profits, Chase Value:

By 1996, Apple’s market share had fallen to 4% from a high of 16% in the late 1980s. Over the decade-plus of Jobs’ exile, Apple had expanded into every technology sector with a wide variety of products. John Sculley did not think high tech could be sold to mass markets. According to Jobs, in the 1990s Sculley brought in corrupt people who wanted to make money for themselves rather than create new ideas through Apple; Sculley’s drive for profits at the cost of market share reduced Apple’s value. Apple’s decline was due to its inability to innovate in any area: the Macintosh hardly improved after Jobs left. In one instance, Jobs was asked to autograph a late-1980s Macintosh keyboard, but first he insisted that the arrow keys be removed; Jobs hated the arrow keys and viewed them as an example of bad decision-making within Apple. Apple was almost sold to Sun and HP, and its stock fell to $14 in 1996. Gil Amelio, who became CEO of Apple in 1996, wanted to integrate Apple with Windows NT, which would have corrupted Apple further. Amelio did not like Jobs much and thought Jobs was trying the reality distortion field at every point of interaction. Amelio was probably right.

 

Do Not Force Other Businesses Into Your Closed System:

In 1985, Jobs loved Microsoft Excel, so he made Gates an offer: if Gates agreed to produce Excel exclusively for Apple for its first two years, Jobs would shut down his own team working on BASIC and license Gates’ BASIC instead. Gates accepted. The deal became a lever in future negotiations. When Jobs decided he wanted other companies producing software for Apple, he exercised a clause in the contract so that Microsoft’s applications would no longer be automatically bundled with every Macintosh sold: instead of getting $10 per application per Macintosh, Microsoft would have to sell its products separately.

Gates knew that Jobs played fast and loose with the truth, so he was not actually that upset; he simply turned around and started work on versions for IBM. Microsoft gave IBM priority, and Jobs’ decision to back out of the bundling deal proved another major mistake. When Gates and Jobs unveiled Excel, a reporter asked whether Microsoft would create a version for IBM. Gates’ answer was “in time.” Jobs’ response: “Yes, and in time we’ll all be dead.”

How To Save A Dying Tech Company – Fire The Board Or Resign:

In 1997, Apple was losing good people, so Jobs pushed to re-price the best people’s stock options to $13.29 per share, since Apple’s stock was so low that existing options were nearly worthless. This was not considered good corporate practice, but keeping quality people was essential to the company’s success. When the board said it would take two months to do a financial study, Jobs demanded their absolute support immediately: he would not return on Monday if the board did not agree. He needed to make thousands of decisions, and this was just one hurdle. Most of the board was subsequently happy to leave. Jobs said the problem with Apple’s products was simple: they sucked.

Merge Your Venture With A Giant That You Can Take Over:

NeXT was failing, and the idea of being bought by Apple in 1996 was a tantalizing prospect for Steve Jobs. He wanted to get back into Apple, while Larry Ellison of Oracle wanted to make money by buying Apple outright; Jobs, however, wanted the moral high ground of not profiting from his transition back in. In 1996, Jobs negotiated the purchase with Gil Amelio, starting from Apple buying NeXT at $12 per share, a roughly $500 million valuation. Amelio countered with $10 per share, valuing NeXT at about $400 million, and Jobs agreed, as long as he received his payout in cash.

Jobs would hold $100 million in cash and $35 million in Apple stock. Gil Amelio was unsure about giving Jobs a seat on Apple’s board because of the history of 1977–85. You could say Amelio was caught in Jobs’ reality distortion field, because he would later realise that Jobs was positioning himself to destroy Amelio as CEO of Apple. Jobs’ return to Apple was fortuitous: if you can merge with a major company, you are effectively being hired by that company. Bill Gates said Amelio was an idiot for bringing NeXT into Apple, and that Jobs was a salesman without an engineering understanding, an early example of the feathers Jobs ruffled circa 1997.

 

How To Save A Dying Tech Company – Make Products Not Profit, Fundamentally:

Do not race to the bottom on prices; get your users to form an emotional connection with the product. Amelio’s approach had been to build a cheap product based on sketches of bolder ideas. Jobs believed in digging into the depths of what a product should do: you need to understand the essence of a product in order to strip away the parts that are not fundamental. Can you get one part to do four times as much work? Design is not about surface; design is the fundamental soul of a human-made creation, and a good design can be ruined by bad factory production. Products should be pure and seamless. Do not let the engineers drive design; Apple had worked the other way. Jobs found Jonathan Ive to produce the core designs at Apple going forward.

There is an Apple office that Ive runs, built around models of future designs, used to see where the products are heading and to get a sense of the whole company on one desk. Apple has patented hundreds of devices. The modern Apple company was built around the assumption that design and product trump profits. Together, Steve Jobs and Jonathan Ive produced the iMac, iPod, iPhone, iPad, Power Mac G5, and iBook.

 

 

Skate Where The Puck’s Going, Not Where It’s Been:

“Skate where the puck’s going, not where it’s been.” – Wayne Gretzky. Jobs believed it was his goal to understand what customers want before they do. The iMac was about inspiring people with a beautiful translucent blue plastic, so you could see into the machine; the casing would showcase all the components. The simplicity of the plastic shell had to be perfected, and the team even studied jelly beans to understand how to make it attractive. Some people at Apple wanted a focus-group study to see whether the cost of the translucent casing was justified; Steve Jobs said no. The iMac should sell for about $1,200 as an all-in-one consumer appliance. The iMac did not include a floppy disk drive, which was ahead of its time. It was friendly, so much so that there was a handle on top so you could actually pick it up. Jobs almost started crying because the iMac had a tray instead of a slot drive. The iMac launched in May 1998, and in 2001 it was redesigned with a sunflower-style form.

 

The Loser Now Will Be Soon To Win:

Jobs believed Amelio was a bozo. Gil Amelio did not present or sell himself particularly well, and he famously bombed on stage at MacWorld; that presentation was very poor and unplanned. Once back inside Apple, Jobs was too honest: board member Ed Woolard asked Jobs what he thought about Amelio, and Jobs said Amelio was not in the right job, then added that he was the worst CEO ever. Famously, Amelio had explained to a journalist: “Apple is like a ship, that ship is loaded with treasure, but there’s a hole in the ship. And my job is to get everyone to row in the same direction.” The lack of logic in this statement spoke to Amelio’s deficiencies as a leader.

Ellison tried to call for drafting Steve Jobs as CEO of Apple. When Amelio confronted Jobs about a possible takeover, Jobs denied everything but refused to declare that he was not positioning himself for one. Jobs loved to dish out flattery to Amelio while busily turning the board against him amid Apple’s dire financial situation. People were leaving Apple, or thinking of leaving, which is never good when your people are your most important asset. Amelio was fired for incompetence, but once Jobs was offered the CEO job, he took it only on an interim basis because he was still running Pixar. After years in the wilderness, Jobs was back at the top of Apple. One of the first things he did there was a subtle but significant vindictive act: Jobs hated the Newton personal assistant, partly because it needed a stylus and partly because the Newton was one of the major innovations of John Sculley, the man who had kicked Jobs out of Apple in 1985. Jobs cancelled the Newton.

 

The Internet Is Made For Music:

Napster, Limewire, and other file-sharing services allowed the illegal downloading of music on a massive scale, and sales through traditional music-distribution platforms began a precipitous decline, dropping by 9% starting in 1998. Executives at the music companies were desperate to agree on a common standard for copyright protection: if the industry could agree on how music was encoded, it might get ahead of the peer-to-peer services. Sony and Universal came up with Pressplay, while EMI backed an alternative system; each was subscription-based, renting music rather than selling it, and the two competing services would not license each other’s songs. The interfaces were clunky, the services were terrible, and the record companies simply did not understand how to solve the problem. Warner and Sony wanted to close a deal with Jobs, largely because they did not know what else to do. Jobs was opposed to the theft of creative products, even though he had bootlegged Bob Dylan recordings in the 1970s. If people simply copied music, there would be no incentive to make new music beyond the passion of musicians.

According to Jobs, when theft is rampant, creative companies never get started; besides, stealing is wrong and hurts your own character. iTunes was the alternative to the brain-dead services, a legal alternative to P2P in which everyone wins: (a) users no longer steal, (b) record companies generate revenue, (c) artists get paid, and (d) Apple disrupts the music industry. Jobs faced a tough pitch with the record companies because of the pricing model, but he used the fact that Apple still had only 5% of the computer market to convince them that the deal would not have a major impact on their bottom line: even if iTunes proved destructive, the damage would be contained. And because Apple was a closed system, the record companies could use it as a means of controlling the MP3s.

Record companies got $0.77 of each $0.99 sale. People wanted to own music, not rent or subscribe to it, so the subscription model made less sense. Record companies had made a lot of money by having artists produce two or three good songs padded with ten fillers; the iTunes store would let users buy only the songs they liked, further upsetting the record companies. Steve Jobs’ response was that piracy had already deconstructed the album. He closed deals across the music industry, which was astounding. Jobs bridged the gap between technology and art.

 

Brand Yourself Differently:

Think Different, the new slogan, was not perfectly grammatical: strictly speaking it should be “think differently.” Steve Jobs explained in 1997 that Apple’s future was to think differently. The “craziness” of Apple’s customer base was that they had a sense of creativity and uniqueness that others did not. Jobs argued that Apple was distinctive as a brand, and they formulated a brand-image campaign to celebrate what creative people could do with their Apple computers. The Think Different campaign was about reminding Apple itself who it was: here’s to the crazy ones, who think differently. The television commercial was historic, as were the Think Different posters. Jobs believed in a renegade brand that people would choose because it made them feel proud and exclusive.

 

Create Complementary Product Offerings Without a Loss Leader:

Sales of the iPod would drive sales of the iMac, and vice versa. Apple got a triple bang for its advertising buck by invigorating Apple, the Mac, and the iPod at once. Steve Jobs came to dominate the market for music players by shifting all of his Mac advertising spending onto the iPod, so the iPod was advertised at roughly 100 times the spend of its closest competitor. The beautiful iPod cost $399; some people joked that iPod stood for “idiots price our devices.” The iPod sat at the intersection of technology and the arts, of software and music.

 

Don’t Be Afraid To Cannibalize Yourself Because If You Don’t Others Will:

When iTunes was released, Microsoft managers realized that they needed to create direct user value with an end-to-end service. Gates felt like an idiot once again; Microsoft wanted to move forward but had been caught flat-footed by Apple, so it tried to copy iTunes. When Apple made the iPod and iTunes compatible with other PCs, it meant PC users no longer had to buy Macs to use the iPod. Steve Jobs initially did not want to put iTunes on the PC, but the cannibalization of Mac sales was outweighed by the potential iPod sales. Once iPods went PC, Apple was on its way to being extremely lucrative; Jobs said that iTunes for Windows was the best application ever written for PCs. When Microsoft came up with the Zune, it was obvious that it did not care about the music or the product. Steve Jobs believed an iPhone might cannibalize Mac sales, but that did not deter him. When Sony, the inventor of the Walkman, tried to compete against Apple, it was held back by fear of cannibalization because it also owned a music division. In 2004, the iPod Mini was the next innovation, and it helped eliminate the portable-music-player competition: Apple’s market share in the category went from 35% to 75% in 18 months. The iPod Shuffle grew it further, because people like to be surprised; Jobs decided to get rid of the screen entirely, since you didn’t need to navigate, only to skip over the songs you had already heard.

 

Focus On What People Really Want…1,000 Songs:

Jobs could not include the first CD burners in the iMac because he hated trays. The mark of an innovative company is that it knows how to leapfrog when it finds itself behind on a new innovation. Napster exploded in growth, sales of blank CDs also increased massively in 1999, and Jobs worked hard to catch up. He wanted to make music management easy: you could literally drag songs and burn a CD. Jobs bought a company called SoundJam and, instead of a cluttered interface for browsing your songs, insisted on a simple search box. In 2001, iTunes was released free to all Mac users. The next step was to create a portable player with the same simple interface. Getting all the record companies on board with iTunes would be the complicated part; by the fall of 2000, Apple was working towards this goal.

Fadell and Rubinstein clashed over the iPod because Fadell was charismatic and wanted to claim control; he had already been shopping his idea of a portable, software-based music device, which later became the iPod, around other companies. They found a small company to help them with the MP3 technology. Steve Jobs wanted white on everything for the iPod, and the purity of the white headphones became iconic. Jobs pushed the idea of iconic advertising. Apple’s whole history was making software and hardware together, so the iPod made strategic sense. Gates said it was great; too bad it was only for Macs. By 2007 the iPod accounted for half of Apple’s revenues.

 

Steve Jobs Said that Google’s ‘Don’t Be Evil Mantra’ Is *Bullshit*:

Android’s touch-screen features were, as far as Steve Jobs was concerned, clearly stolen from the iPhone. Android had a grid of apps much like the iPhone; swipe to open and pinch to expand were all Apple ideas that Google was implementing. Google was engaged in grand theft in Jobs’ view. Jobs went to Google and shouted at everyone there; he wanted Android to stop stealing Apple’s ideas. The open-source approach was valuable because Google could offer its platform to multiple phone manufacturers, where Apple kept tighter control. Nonetheless, the Apple App Store remains much larger than Google’s to this day.

Get Yourself Into The Cloud & A Castle:

Apple’s MobileMe was a failure because it did not sync data properly. It was expensive, but iCloud was the future, and this was not a new idea: in 1997, Steve Jobs explained that at NeXT he kept all of his data on the server. The idea is that you won’t have to back up your computer, because all your stuff lives in iCloud; Jobs was talking about this as early as 1997. The concept that everything should simply work has been applied to cloud computing. Microsoft said its CloudPower would let individuals access their content wherever they are, but that opens the door to licensing agreements. In a final twist, the new Apple campus is under construction and will be completed in 2015. It is similar to Google’s HQ. Copied?

 

Don’t Fear Changes In Industry & Anticipate Competitive Market Disruption:

The digital camera industry was destroyed by cellphones, and Steve Jobs knew that to stay ahead of the wave Apple would have to cover the cellphone market as well. The iPhone was born out of a concern that Nokia et al. would eat Apple’s lunch by creating mobile phones that could easily play music, just as cellphone makers had crushed Kodak. Motorola was a stupid company to Jobs; the Rokr was a joke. Jobs realized that the iPod wheel was not going to dial phone numbers. He had been working on the iPad’s touch-screen system before the birth of the iPhone; the ability to process multiple simultaneous touches was Steve Jobs’ idea, and the goal was to transfer the trackpad onto the screen itself. Ive never demonstrated ideas in front of other people because he knew Jobs would shoot them down. The tablet’s development was put on hold and shifted to the iPhone screen; Jobs split the effort between multi-touch and wheel-based iPhone designs. The case could not be opened, and Apple made sure people could not access the iPhone’s internals. The iPhone was three products bundled into one: 1) an internet interface, 2) a mobile phone, and 3) touch controls. It was massively successful even though, at $500, it was the most expensive phone in the world. Ballmer said the iPhone sucked because business people want a keyboard. Apple sold 90 million phones within months.

 

Create An Inventory Management System & Build Stores That Work:

Everything you do incorrectly is in order to get it right; if something isn’t right, you can’t fix it later. Steve Jobs wanted to control the customer experience, which included the experience of an Apple store built from wood, stone, steel, and glass. In the mega-chains, the salesmen did not care about Apple because they had other products to sell. Jobs was impressed by the Gap’s store business and hired Drexler from the Gap to help build a prototype of the store. Tim Cook reduced key suppliers from 120 to 24 and forced many to move closer to Apple’s plants, saving Apple a great deal of money. Apple stores were strategically placed in locations like Covent Garden in London, or in New York. Sales are tabulated using Oracle technology every four minutes, so Apple runs a lean manufacturing line and production can respond quickly to market demand.

 

Converge Old Devices Into 1 New Device:

Is there room for something in the middle between the iPhone and the PC, Jobs asked in 2010? The iPad lets people bring those technologies together. The iPad did not sell as well as the iPhone at first, and the name was ridiculed as sounding like a feminine hygiene product. Gates conceded it was a nice reader but didn’t like the iPad; a further divergence in views was that Gates believed in a stylus, while Jobs said we already have ten styluses: our fingers. There were 800 emails in Steve Jobs’ inbox. The iPad had the limitation that it was built for consumption, not creation; it arguably mutes the user, turning you back into a passive observer. The question about the iPad was whether it should be closed. Google’s Android was an open platform, and the iPad was the clearest test of the closed-system model versus the open-system model. In the end, the iPad was the most successful consumer product launch in history, with 1 million sold in the first month. Jobs then set about changing the print industry, closing deals as he had with the music industry: Apple would take a 30% cut of subscriptions sold and would hold all of the purchase information, which it could use later on. The problem was that the publishing industry did not want its subscription base controlled by Apple, since Apple could then change the prices. Steve Jobs believed the paper textbook was an industry ripe for digital disruption and created digital versions of the products. The Chinese employees are paid $2.00 per day; it takes 5 days and 3,500 hands to produce one iPad at Foxconn in China.

Do Not Ignore Medical Diagnoses:

When Jobs was diagnosed with cancer, he did not rush to have surgery to remove the tumour found in his pancreas. Instead, he tried to see if other treatments would work. Why was he hesitant? Partly because he had difficulty with the idea of opening up his body. He underwent herbal remedies and psychic treatments instead, trying to cure himself in strange ways. But reality is unforgiving. Once again he was able to filter out the world and ignore what he did not want to confront. Jobs had always been rewarded for willing things into being, but by July 2004 the cancer had spread. He finally underwent surgery in 2004, though a less radical surgery than first recommended.

The cancer had spread to his liver; had doctors operated nine months earlier, they might have arrested it. When he had a liver transplant in 2009, arranged by registering in another state through multiple listing, the liver Jobs received came from a 25-year-old killed in a car accident. Steve Jobs lied about his condition throughout the last years of his life, calling it a hormone imbalance. The privacy rights of a CEO have to be weighed, but Jobs embodied his company more than most CEOs, so negative news about his health could move the stock.

Make Peace With Your Old Enemies:

Microsoft had stolen the interface developed by Apple, with its overlapping windows and the rest. In 1997, Jobs announced that the only way forward was to make a deal with Bill Gates and Microsoft. In 2007, Bill Gates and Steve Jobs sat down together to talk about technology. It is an epic discussion.

Follow Your Heart:

There is no reason not to follow your heart and seek meaning, because you will be dead one day. Don’t live someone else’s dream. Stay hungry, stay foolish. The Stanford University commencement address is considered one of the greatest ever given. Jobs did not believe people should be materialistic; they should seek to be valuable.

Steve Jobs Was A Brilliant Jerk

From the creator of Going Clear, Steve Jobs: The Man in the Machine is about the now-infamous career flaws of one of the most successful entrepreneurs in American history. It looks like a good rehashing of memories from 2012, when everyone you knew (plus your grandma) read Isaacson’s biography. I’m certain Kutcher and the scriptwriters of the disappointing Jobs film will have a front-row seat, since they evidently didn’t actually read the Isaacson biography… ’cause that film sucked badly.

Steve Jobs:
  • a) abandoned his own daughter and girlfriend,
  • b) cheated Wozniak out of a bonus at Atari,
  • c) verbally assaulted the LISA team and created intense competition between teams,
  • d) screamed at Macintosh developers regularly,
  • e) cried like a baby when the iMac CD tray was a tray not a slot,
  • f) fired employees with retroactive consequences to their salary,
  • g) parked his car in the handicap spot,
  • h) sped down the highway regularly,
  • i) discovered his Syrian father (who also abandoned him) was the owner of the restaurant chain he frequented regularly but never came by to say “hi”,
  • j) tried to instigate a coup against foolish management and lost…
  • k) cried whenever someone disagreed with him,
  • l) attacked creative ideas for being idiotic, then within a week appropriated them as his own,
  • m) called his co-workers idiots and bozos whenever they fell short of his goals,
  • n) his colleagues had to hide a disc-drive developer in the Macintosh supply closet (whenever Jobs visited) to prevent Jobs from discovering that a parallel disc-drive solution was being built; that parallel effort ultimately saved Jobs from disaster when his own solution failed,
  • o) he refused to donate to any charity ever,
  • p) built and painted an expensive factory at NeXT even as the product completely bombed,
  • q) refused to give shares to one of his earliest Apple colleagues even though the guy put in many hours into the project and begged Jobs for a small part of the equity,
  • r) made his step-mom answer early customer service calls to Apple without pay (laugh out loud)….
  • s) took the tv away from his step-dad who wanted to watch football in order to program Apple’s……
  • t) declared war on IBM as a means of galvanising his company,
  • u) claimed Microsoft was stealing Apple’s ideas when both actually stole from Xerox PARC,
  • v) tried to destroy Adobe and any organisation that expected fair treatment…

This list is not exhaustive. So what can we learn from it?

This is an analysis based on Steve Jobs by Walter Isaacson and other sources of research. Enjoy.

Running a Company from the Financial Perspective | Accounting Analysis

Accrual Accounting versus Cash Accounting

Accrual basis = recognition when revenue is earned or an expense is incurred.

Cash basis = recognition when the cash is received.

Before we dive into earnings management as a subtopic within business analysis and valuation, it is helpful to understand the difference between accrual and cash accounting. The cash basis is generally available only to companies with no more than $5 million in annual sales.

The accrual basis is used by larger companies because it matches revenue and expenses in the same reporting period, so that the true profitability of an organization can be discerned.

Cash flows are harder to manipulate. The big difference between the two bases is when transactions are recorded.

Cash basis: Revenue is recorded when cash is received from customers, and expenses are recorded when cash is paid to suppliers and employees.

Accrual basis: Revenue is recorded when earned and expenses are recorded when consumed.

Revenue recognition is delayed under the cash basis until customer payments arrive. Similarly, recognition of expenses is delayed under the cash basis until supplier invoices are paid.

Revenue recognition: a company sells $10K of green widgets to a customer in March, and the customer pays the invoice in April. Under the cash basis, the seller recognizes the sale in April, when the cash is received. Under the accrual basis, the seller recognizes the sale in March, when it issues the invoice.

Expense recognition: a company buys $500 of office supplies in May, which it pays for in June. Under the cash basis, the buyer recognizes the purchase in June, when it pays the bill. Under the accrual basis, the buyer recognizes the purchase in May, when it receives the supplier’s invoice.
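The timing difference in these two examples can be sketched in a few lines of Python. This is a minimal illustration with a hypothetical `record` helper and two toy ledgers, using the widget and office-supply amounts above:

```python
from collections import defaultdict

def record(ledger, month, amount):
    """Add an amount to the P&L bucket for the given month."""
    ledger[month] += amount

# One toy ledger per accounting basis, keyed by month name.
cash, accrual = defaultdict(int), defaultdict(int)

# Revenue: $10K of widgets invoiced in March, cash received in April.
record(accrual, "March", 10_000)   # accrual: recognize when earned (invoice issued)
record(cash, "April", 10_000)      # cash: recognize when payment arrives

# Expense: $500 of supplies received in May, paid in June.
record(accrual, "May", -500)       # accrual: recognize when incurred
record(cash, "June", -500)         # cash: recognize when paid

print(accrual["March"], cash["March"])  # 10000 0
```

Same transactions, different months: that gap in timing is exactly what the accrual adjustments (and any manipulation of them) live in.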

Creating Cookie Jars: deferring revenue that was genuinely earned, or taking additional expense by booking excessive reserves for bad debts (we’ll explore this in a future post).

Debt Covenants: keeping ratios such as debt/equity within certain ranges. Lenders have a capped upside, so lenders like covenants. What if you are near to violating your covenant? Then you might adjust your accounting to stay on the right side of the threshold.

Opacity of the Firm: capital markets and stakeholders. Competitive considerations: opaque firms will argue that they cannot divulge financial performance to the same degree as other firms because of their competitors.

You have to sit in awe of the most ingenious management invention of all: plausible deniability.

STEP 2: ACCOUNTING ANALYSIS:

How to adjust financial statements for distortions?

How do firms communicate through financial statements, and how do regulations and managerial discretion distort those statements? There are several steps to accounting analysis:

Step 1: Identify Principal Accounting Policies:

Key policies and estimates used to measure risks and critical success factors must be identified. IFRS requires firms to identify critical accounting estimates. For banks, for example, these include credit risk and interest rate risk. For airlines, depreciation is important because their biggest assets are the planes themselves; depreciation is therefore a critical accounting policy for airlines, and it is also where accounting manipulation can occur.

Step 2: Assess Accounting Flexibility

Accounting information is less likely to yield insights about a firm’s economics if managers have a high degree of flexibility in choosing policies and estimates. Under GAAP, flexibility is limited in some areas (look at how restrictive the R&D and marketing cost rules are) and extensive in others, such as credit risk provisioning. How much of that flexibility has management already used? Is the company being aggressive or conservative? A firm that is conservative now has the potential to be aggressive later.

Step 3: Evaluate Accounting Strategy

Flexibility in accounting choices allows managers to strategically communicate economic information, or to hide true performance. How does the firm’s accounting differ from competitors’? Are the accounting strategies changing regularly? Think of CGI’s accounting policy changes over the last decade. Have the firm’s assumptions proved realistic in the past?

Issues to consider include:

  • Norms for accounting policies with industry peers
  • Incentives for managers to manage earnings
  • Changes in policies and estimates and the rationale for doing so
  • Whether transactions are structured to achieve certain accounting objectives.

Step 4: Evaluate the Quality of Disclosure

Managers have considerable discretion in disclosing accounting information. Is the firm providing adequate information about its strategy and explaining the economics of its operations? Are its accounting policies adequately justified? Is the firm providing equally helpful disclosures in bad times? Firms that are more transparent are potentially far less likely to engage in earnings management.

Issues to consider include:

  • Whether disclosures seem adequate;
  • Adequacy of notes to the financial statements
  • Whether the management report section sufficiently explains and is consistent with current performance
  • Whether accounting standards restrict the appropriate measurement of key measures of success

Step 5: Identify Potential Red Flags

Unexplained transactions that boost profits. Here are a few examples.

  • Unusual increase in inventory or A/R in relation to sales
  • Increases in the gap between net profit and cash flows or tax profit
  • Use of R&D partnerships, SPEs or the sale of receivables to finance operations
  • Increasing Gap between Net Income and Cash Flow from Operations: firm may be fiddling with accruals.
  • Unexpected large asset write-offs (suddenly just write something off)
  • Large year-end adjustments
  • Qualified audit opinions or auditor changes
  • Related-party transactions (Valeant and Philidor)

Maybe we should list MORE!!!!…..

Red Flags in Accounting used for Earnings Management by (some) CEOs and CFOs Today

Note that earnings management is not illegal in some cases; in fact, believe it or not, it’s a strategy used by many companies. Just like a Prime Minister who announces a snap election, a CEO can engage in earnings management (the manipulation of financial statements) behind a walled garden that only he or she and their team is privy to. The following are POSSIBLE red flags; in reality it’s hard to tell, but here are things to look for with publicly traded companies:

Income smoothing: companies love steady trends in profits rather than wild swings (no kidding!). Income-smoothing techniques (declaring high provisions, or deferring income recognition in good times) lead to less volatile reported earnings. Items to look out for are a pattern of reporting unusual losses in good operating years and unusual gains in bad ones.

Achieving forecasts: Is there a pattern of always meeting analysts’ earnings forecasts, an absence of profit warnings, or interim financial statements consistently out of line with year-end statements? Is the company making changes in accounting policies that revise profits upwards in years when underlying earnings have fallen, and vice versa? Could be a red flag!

Profit enhancement: Current year earnings are boosted to enhance the short-term perception of performance which is what shareholders and analysts crave!

Accounting-based contracts: when accounting-based contracts such as loan covenants are in place, accounting policies may be chosen precisely to avoid triggering a breach of the covenants.

Gap between earnings and cash flows: Is there a large gap between earnings and cash flows? Is that gap increasing? If so, the accruals may be of poor quality.
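This particular screen is easy to mechanize. A minimal sketch, using made-up annual figures (in $M) for net income and cash flow from operations:

```python
# Hypothetical four-year series, in $M (illustrative numbers only).
net_income = [100, 110, 125, 140]
cfo        = [ 95, 100,  90,  70]

# Accruals gap per year: earnings not backed by operating cash.
gaps = [ni - cf for ni, cf in zip(net_income, cfo)]

# Red flag if the gap widens every single year.
widening = all(earlier < later for earlier, later in zip(gaps, gaps[1:]))
print(gaps, widening)  # [5, 10, 35, 70] True
```

Here profits grow smoothly while operating cash deteriorates, so the gap widens each year; that pattern is the classic accruals red flag described above.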

Reported income and taxable income: Is reported income vastly different from taxable income, with no explanation or disclosure? That’s a problem.

Ratios: Do obsolescence analyses reveal old inventories or receivables? Are gross margins declining while net margins increase, inventories or receivables growing faster than sales, or leverage ratios rising?
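The receivables-versus-sales check mentioned here can be sketched the same way. This is a toy example with hypothetical annual figures, not a full ratio analysis:

```python
def growth(series):
    """Year-over-year growth rates for a series of annual figures."""
    return [(later - earlier) / earlier for earlier, later in zip(series, series[1:])]

# Hypothetical annual figures, in $M.
sales       = [200, 220, 240]
receivables = [ 40,  55,  80]

# Flag each year where receivables grew faster than sales.
flags = [r > s for r, s in zip(growth(receivables), growth(sales))]
print(flags)  # [True, True] -> receivables outpacing sales both years
```

Receivables growing persistently faster than sales can mean loosened credit standards or aggressive revenue recognition; either way it merits investigation.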

Unusual financial statement trends: What is the relationship between revenue growth and earnings per share (EPS) growth? Is there a weird pattern of year-end transactions? What is the timing and recognition of exceptional items? What is the relationship between provisions for bad accounts and profits? Much of this is within a CEO’s discretion due to asymmetric information.

Accounting policies: Have there been recent changes in accounting policies, such as off-balance sheet financing, revenue recognition or expense capitalization? Furthermore, have the nature, purpose and effect of any changes been adequately explained?

Incentives for management: Are there incentives for managers to boost short-term profit to increase compensation (i.e. bonuses based on EPS and share option plans)?

Audit qualifications: Are any auditors’ adjustments outlined in an audit report significant?

Related-party transactions: Are these material, and to what extent are directors affected?

Manipulation of reserves: Has there been under-provisioning in poor years, over-provisioning in good years, a manipulation of reserves, aggressive capitalization of costs, overly optimistic asset lives, accelerated expenses and increased write-downs in good years, or exceptional gains timed to offset exceptional losses?

Revenue recognition: Has there been a premature recording of revenues, recognizing sales prior to physical movement of goods, recognizing service revenue from service contracts prior to service being performed, upfront recognition of sales that should be spread over multiple periods, percentage of completion estimates out of line with industry norms?

Transaction timing: On the revenue side, have deliveries been sped up near the year end? On the cost side, have discretionary expenditures, such as maintenance and R&D, been delayed to future periods?

Regulated industries: Is there a pattern of engaging in accounting practices whose principal purpose is to influence regulatory decisions (i.e. lowering reported profits where the perception of excessive profits could prompt unfavorable regulatory action)?

Internal accounting: In a multi-division company there may be incentives to shift profits to divisions (or subsidiaries in relatively low tax jurisdictions) to reduce the overall tax burden.

Commercial pressures: In the anticipation of mergers, takeover bids or IPOs, there could be pressure to create a favorable perception by, for example, lowering credit standards to temporarily boost sales OR pumping up the value of the company at the risk of harming long-term customer relationships.

Other: When a company has foreign operations and is re-translating overseas subsidiaries’ results, a functional currency is determined for each entity. However, has the company taken advantage of ambiguous situations or facts, manipulating the selection to generate favorable currency gains or minimize currency losses? Has a company allocated joint costs among long-term contracts to create the appearance that no contract has produced losses, thereby avoiding an immediate loss provision?

The existence of these potential red flags does not indicate anything wrong per se, but it should lead a prudent analyst to undertake diligent investigation to see whether they are justified by company-specific factors. If distortions do exist, the analyst should, to the extent possible, undo them to better evaluate the company’s financial performance in a historical and competitive context.

Step 6: Undo Accounting Distortions

  • Taxable income
  • Cash flow statement information
  • Management guidance: no one forces management to give guidance except management itself. Management guidance is when a C-suite manager provides insight into the company to investors or analysts. If you are close to the target, you would rather get that small bump than report a small loss: you want to cross the threshold of zero.

Elon Musk: Leaked Email in August 2016

So if you tweet the kind of things that provoke strong reactions (basically the standard musings I might have made as a teenager), you probably have no problem manipulating financial analysts! Expectation management is a tactic that Musk and other CEOs leverage when short-term performance lags in what is a long-term, Bezos-style play (Tesla). Elon Musk (a graduate of Queen’s University in Kingston, Canada, and whose mother is a Saskatchewanian) is of course a bat-shit crazy badass. In August of 2016, he was saying that Tesla’s Q4 expenditure would be huge in the run-up to a new production line, so he provided a small negative estimate of profitability to his own employees and then intentionally leaked the email to the press to get the word out to financial analysts. Leaks in politics = leaks in business.

Here’s the full text that Bloomberg published:

“I thought it was important to write you a note directly to let you know how critical this quarter is. The third quarter will be our last chance to show investors that Tesla can be at least slightly positive cash flow and profitable before the Model 3 reaches full production. Once we get to Q4, Model 3 capital expenditures force us into a negative position until Model 3 reaches full production. That won’t be until late next year.

We are on the razor’s edge of achieving a good Q3, but it requires building and delivering every car we possibly can, while simultaneously trimming any cost that isn’t critical, at least for the next 4.5 weeks. Right now, we are tracking to be a few percentage points negative on cash flow and GAAP profitability, but this is a small number, so I’m confident that we can rally hard and push the results into positive territory. It would be awesome to throw a pie in the face of all the naysayers on Wall Street who keep insisting that Tesla will always be a money-loser!

Even more important, we will need to raise additional cash in Q4 to complete the Model 3 vehicle factory and the Gigafactory. The simple reality of it is that we will be in a far better position to convince potential investors to bet on us if the headline is not “Tesla Loses Money Again”, but rather “Tesla Defies All Expectations and Achieves Profitability”. That would be amazing!

Thanks for all your effort. Looking forward to celebrating with you,

Elon”

“Gap in profitability”: he can’t afford to miss this target badly. In the end, he sold a large build-up of environmental credits so that Tesla could hit its target, answering the analysts who wanted to short the stock.

  • Dead giveaways that this was written for analysts? Um, the technical language, which employees without financial training might not dig.
  • Also, just being a total douche communicator because he probably doesn’t like analysts.

https://cleantechnica.com/2016/09/07/tesla-ceo-elon-musks-august-29-email-employees-calls-3rd-quarter-rally-profitability-full-email-text/

Research & Development GAAP versus IFRS

As a side note: under IFRS, R&D accounting is significantly more complex. Under US GAAP, R&D costs are expensed as they are incurred. Under IFRS, research costs are expensed, but IFRS has broad-based guidance requiring companies to capitalize development expenditures, including internal costs, when certain criteria are met. Such intangible assets are capitalized and amortized under IFRS but expensed under US GAAP. This difference means that under IFRS you need to distinguish research activities from development activities.

Research costs are incurred to plan an investigation or undertake research with the aim of gaining new scientific or technical knowledge. Example: searching for alternatives to concrete rail ties.

Development costs are incurred in applying research findings or knowledge to plan or design new or substantially improved products before the start of commercial production. Example: testing a new smartphone OS to replace the current OS.

Under IFRS, here is when you would start to capitalize the development phase of a project: when it is technically feasible to complete the intangible asset so that it will be available for sale. The firm’s intention to complete the intangible asset and use it is another trigger.
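The decision described above reduces to a simple branch. A rough sketch, where `is_development` and `criteria_met` are simplified stand-ins for the full IFRS capitalization test (technical feasibility, intention to complete, and so on), not the complete rule:

```python
def classify_rnd(cost, is_development, criteria_met):
    """Simplified IFRS treatment: research costs are always expensed;
    development costs are capitalized only once the capitalization
    criteria (technical feasibility, intention to complete, etc.) are met."""
    if is_development and criteria_met:
        return "capitalize"
    return "expense"

# Research into alternatives to concrete rail ties: always expensed.
print(classify_rnd(1_000_000, is_development=False, criteria_met=False))  # expense
# Testing a new phone OS that is technically feasible and will ship:
print(classify_rnd(2_000_000, is_development=True, criteria_met=True))    # capitalize
```

Under US GAAP, by contrast, both branches would simply return "expense", which is the whole point of the comparison above.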

Soviet Union to Russia: Understanding what Russia wants through an Academic Lens

Communism, Post-communism & Nationalism

The following are in depth research notes on Communism, Nationalism and Russia from the perspective of both Eastern and Western academic thinkers.

Politics, history, and psychology are complicated. When the Soviet Union collapsed, the territorial maps were redrawn, and many ethnic Russians became minority citizens of the newly formed countries. The following is an analysis of that story, its implications for nationalism studies today and in the future, and in some ways an answer to what Putin wants.

 

Facts & Figures                                                                                                       

List of previous questions:

Was the resurgence of nationalism in eastern Europe in the 1980s a cause or consequence of communist failure?

What role does nationalism play in post-communist states?

SOURCES: Ronald Suny, Valery Tishkov, Jeremy Smith, George Schöpflin, Ernest Gellner, Stephen White, Alexander Motyl

Case Examples: USSR (Poland, Latvia, Chechnya, Russia) or (secondary) Yugoslavia (Croatia, Serbia, Bosnia-Herzegovina, Kosovo, Slovenia, Montenegro)

  • Ronald Grigor Suny, The Soviet experiment : Russia, the USSR, and the successor states (Oxford, Oxford University Press, 1998), Chapter 12.
  • Jeremy Smith, The Fall of Soviet Communism (London, Palgrave, 2005), pp.16-20, 73-79.
  • Rogers Brubaker, “Nationhood and the National Question in the Soviet Union and Post-Soviet Eurasia: An Institutionalist Account”, Theory and Society, vol. 23, no. 1 (1994), pp. 47-48.
  • George Schöpflin, ‘Power, Ethnicity and Politics in Yugoslavia’, chap 23 in Nations, Identity, Power (London, Hurst, 2000).
  • Valery Tishkov, Ethnicity, Nationalism and Conflict in and after the Soviet Union: The Mind Aflame (London, SAGE, 1997), ch. 2.
  • Alexander J. Motyl, Sovietology, rationality, nationality: coming to grips with nationalism in the USSR (New York, Columbia University Press, 1990).
  • Ronald Grigor Suny The revenge of the past: nationalism, revolution, and the collapse of the Soviet Union (Stanford, Stanford University Press, 1993).
  • George Schöpflin, ‘Nationhood, Communism and State Legitimation’ and ‘Power, Ethnicity and Politics in Jugoslavia’, chap 12 in Nations, Identity, Power (London, Hurst, 2000).
  • Stephen White, Communism and its Collapse (2001).
  • *Henry E. Hale, The Foundations of Ethnic Politics: separatism of states and nations in Eurasia and the world
  • *Philip G. Roeder, Where Nation-States come from: institutional changes in the age of nationalism

 

  • Defining, Background, Foundations (Detailed Background)

What are the debates regarding the collapse of the USSR in 1991?

The first point is to ask whether the post-1991 nation-states of the former USSR were already defined as national quasi-states (to use Roeder’s term) within the USSR. Generally the answer is: YES.

Less clear is why YES. It seems to me there are in principle three major explanations:

  1. The national explanation – a sense of national identity had been created which was important. One can then argue whether this was a Soviet invention or not.
  2. The statist explanation – the individual republics had institutions and interests which gained an advantage over other institutions and interests. (Roeder, 2008)
  3. The International Relations (IR) explanation – the international community would only accept sovereignty for state-like entities.

Counter-factual of the USSR: why did it break down into nation-states? Why not into a multicultural/multinational power?

Thesis: primordial nationalism is only important when USSR prospects of collapse are high: the resources are utilized to reorganize political power when the USSR’s centre collapses.

International self-determination recognition used the primordialist approach so as to avoid precedent setting for any political movement.

Nationalism Studies was Re-invigorated with the Breakup of the Soviet Union (1991)

  • The Soviet Union was divided into territorially defined republics so had an ethnic colouration – ethnic nationalism defined the breakup of the Soviet Union.

USSR COLLAPSE: HISTORICAL NARRATIVE                                                  

The Soviet Union‘s collapse into independent nations began early in 1985.

After years of Soviet military buildup at the expense of domestic development, economic growth was at a standstill.

Failed attempts at reform, a stagnant economy, and war in Afghanistan led to a general feeling of discontent, especially in the Baltic republics and Eastern Europe.

Greater political and social freedoms, instituted by the last Soviet leader, Mikhail Gorbachev, created an atmosphere of open criticism of the Moscow regime.

Gorbachev ushered in the process that would lead to the dismantling of the Soviet administrative command economy through his programs of glasnost (political openness), uskoreniye (speed-up of economic development) and perestroika (political and economic restructuring) announced in 1986.

Gorbachev also made clear that he would not intervene in the affairs of the satellite states.

Additionally, the costs of superpower status—the military, space program, and subsidies to client states—were out of proportion to the Soviet economy. The new wave of industrialization based upon information technology had left the Soviet Union desperate for Western technology and credits in order to counter its increasing backwardness.

Unintended Consequences: Gorbachev’s efforts to streamline the Communist system offered promise, but ultimately proved uncontrollable and resulted in a cascade of events that eventually concluded with the dissolution of the Soviet Union. Initially intended as tools to bolster the Soviet economy, the policies of perestroika and glasnost soon led to unintended consequences.

In all, the very positive view of Soviet life, which had long been presented to the public by the official media, was being rapidly dismantled, and the negative aspects of life in the Soviet Union were brought into the spotlight[5]. This undermined the faith of the public in the Soviet system and eroded the Communist Party’s social power base, threatening the identity and integrity of the Soviet Union itself.

The dramatic drop of the price of oil in 1985 and 1986, and consequent lack of foreign exchange reserves in following years to purchase grain profoundly influenced actions of the Soviet leadership.[1]

POLITICAL: Several Soviet Socialist Republics began resisting central control, and increasing democratization led to a weakening of the central government.

Gradually, each of the Warsaw Pact nations saw their communist governments fall to popular elections and, in the case of Romania, a violent uprising. By 1991 the communist governments of Bulgaria, Czechoslovakia, East Germany, Hungary, Poland and Romania, all of which had been imposed after World War II, were brought down as revolution swept Eastern Europe.

The USSR’s trade gap progressively emptied the coffers of the union, leading to eventual bankruptcy.

The Soviet Union finally collapsed in 1991 when Boris Yeltsin seized power in the aftermath of a failed coup that had attempted to topple reform-minded Gorbachev.

To break Gorbachev’s opposition, Yeltsin decided to disband the USSR in accordance with the Treaty of the Union of 1922 and thereby remove Gorbachev and the Soviet government from power. The step was also enthusiastically supported by the governments of Ukraine and Belarus, which were parties of the Treaty of 1922 along with Russia.

But by using structural reforms to widen opportunities for leaders and popular movements in the union republics to gain influence, Gorbachev also made it possible for nationalist, orthodox communist, and populist forces to oppose his attempts to liberalize and revitalize Soviet communism. Although some of the new movements aspired to replace the Soviet system altogether with a liberal democratic one, others demanded independence for the national republics. Still others insisted on the restoration of the old Soviet ways. Ultimately, Gorbachev could not forge a compromise among these forces and the consequence was the collapse of the Soviet Union.

If the causes are above why did nation-state solution emerge WHY NOT multi-nationalism?

Two competing approaches to the causality of USSR collapse:

Answer MUST be conscious of TIMING: (PRE) = pre-collapse (POST) = post-collapse

(A) On the one hand, there are those, like (Motyl, 1990), who argue that communism was inevitably doomed as a container, freezer or prison house of nations – a repression of nationalism – and that nationalism brought communism to the brink in the 1980s. This paper argues that nationalism was a preexisting and competing ideology throughout Soviet history.

(B) Nationalism was a weak, insignificant phenomenon (Tishkov, 1997); it was the beneficiary of chess-game miscalculations by political agents and pragmatic decision-making. Nationalism needs to be stoked up by the leadership. This view looks at socially engineered states such as Ukraine, the Baltics and the Caucasus. Capitalism + agency defeated communism (cause); nationalism was the beneficiary (consequence).

(C) Self-determination was used to bring the USSR together, it was also used to tear the USSR apart. International factors must be explored.


Whether there was a nation narrative before or during the USSR, it only matters after! So what? (A) doesn’t matter until the collapse is imminent.

  • Communism as inevitably doomed, repressive/non-accommodative, and nationalism caused the collapse: the law of declining empires applies, a natural reaction against Russification.

 

 

Nationalism was the key beneficiary of collapse: but was it the cause?

Causality           View on Collapse               Inevitable?         Nationalism

Tishkov, 1997       top-down (elite error)         not inevitable      engineered
Suny, 1993/98       nationalism instrumental       not inevitable      engineered/primordial
Motyl, 1990         repressive communism           inevitable          primordial
Martin, 2001        affirmative-action empire      not inevitable      primordial
Gellner, 1983       “wrong address”, repressive    unknown             modernity/engineered?

STANDARD (A) NARRATIVE

  • Western Academic Perspective: the standard view of the break-up of communism and nationalism: national aspirations were suppressed; nations were denied their autonomy by the might of the centralizing force of Moscow and the Party. The Communists tried to create homo sovieticus.
  • Marxism-Leninism’s Ideological Competition: the Bolsheviks feared that nationalism would compete with Marxism-Leninism as an ideology. Lenin and his contemporaries had witnessed unrest in the Empire from the 1880s as the Tsar attempted Russification.

SOVIET POLICY as INCOHERENT

  • Soviet Man: the Soviets went back and forth on the idea of creating a monolithic new nation. Stalin, in particular, attempted to create “Soviet Man” through co-ordinated education, language and history; learning Russian was a means of social advancement. HOMO SOVIETICUS: Russian-language education and history as means of advancement, territorial integrity, cultural distinctions: republics given more autonomy.
  • Stalin crushes nationalism > changes the alphabets.
  • BUT republics always keep their territorial integrity, have their cultural institutions, and non-Russians promoted to power. Tokenistic – people dancing in cheesy national costumes – but significant.
  • Never a Soviet passport nationality (unlike Yugoslavia); the Soviet Union never overtly called for Russian nationalism. “The center never attempted to homogenize the multinational country and create a single nation-state. There was a Soviet people but no Soviet nation. No-one was permitted to choose ‘Soviet’ as their passport nationality.” (Suny, 1993)
  • Nationalism Under Cloak: Stalin was Georgian but power was Russian. Even if it wasn’t perfectly repressive, the Soviet Union masked nationalism under the cloak of communism: communism failed (perhaps inevitably), and so nationalism became the logical successor.
  • In the post-Communist world, there were both nationalist ethnic groups that preceded the Soviet Union – such as the Baltic states, acquired in 1945 – and others that were constructed, mutated and altered – the Central Asian states – during the 70 years of the communist post-nationalist experiment.

How did the USSR handle the nationalist question?

PRE-COLLAPSE) USSR HISTORICAL NARRATIVE

  • Czarist Russia was seriously threatened only by Polish nationalism; little else was significant. Czarist Russia in the 1880s attempted to Russify its empire. (Breuilly)
  • The Germans supported Poles and Ukrainians against Russian hegemony.
  • The Russian Revolution of 1905 was partly a reaction against Russification, provoking a confused response. The frustration of 1905 typifies the national fact which the socialists learned could not be rejected outright. In that particularly confused affair, both a socialist and a nationalist revolution failed. Only through pragmatic compromise – around mutual anti-imperialism – could a new political system emerge in 1917.
  • From the inception of Bolshevik Revolution, pragmatic decision-making accommodated self-determination as a means to “recruit ethnic support for the revolution, [but] not to provide a model for governing of a multiethnic state.”[1]
  • Lenin prevailed, and in 1922 the USSR was established on an ethno-territorial basis, with Russia, Belarus, Ukraine, Azerbaijan and Georgia among the founding republics. It stuck to the nation principle more than any other state.
  • Stalin had 15 non-Russian republics in the 1930s. There were 17 national regions, and the republics had a constitutional right to secession under the 1936 constitution. There was elaborate ethnography linked to educational entitlement. There was titular-nationality employment – in Ukraine, an affirmative-action system. Ukrainians rose in the Soviet government; Nikita Khrushchev, for example, rose through Ukraine. (Breuilly)
  • Affirmative Action Empire: the Soviet Union “was the world’s first Affirmative Action Empire.” (1, (Martin, 2001)) It was the multiethnic state’s character that forced a confrontation “with the rising tide of nationalism and respond by systematically promoting the national consciousness of its ethnic minorities and establishing for them many of the characteristic institutional forms of the nation-state.”
  • Nationalist in Form, Socialist in Content: in the mid-1920s Stalin accepted the special Central Committee conference on nationalities policy (Martin, 2001). Stalin believed in ‘nationalist in form, socialist in content’ (Martin, 2001) and treated the nationalities as real nations.
  • Ideological Goal of Communism: to redirect the ‘postal error’ of nationalism through homogenization and a steady process of integration. This is most apparent under Stalin, who “extended the state’s power over all aspects of public and social life; the Soviet leadership no longer was willing to tolerate the autonomy of art and culture that it had allowed in the 1920s” (Smith, 2005). “Artists were to be mobilized as ‘engineers of the soul,’ in Stalin’s words, to become one more tool in the construction of socialism.”[2] What emerged was a de facto empire under the guise of communism. (Suny, 1993)
  • Nations to be Frozen: assimilation was not viable in the short term, but the Bolsheviks believed “national consciousness was an unavoidable historic phase that all peoples must pass through on the way to internationalism.” (5, (Martin, 2001)) Nationalism is NOT ephemeral, as this paper has demonstrated, but it is wrong to assume it is forever. If primordialism can show that it existed before modernization, then it is a long-term reality; if modernization theory is correct, it may be a short-term step on the way to internationalism[3].
  • BUT the KEY INSTITUTIONS were the centralized planning, army and secret police.

After the Stalinist period: corruption, and the nomenklatura officially specifying who was allowed to have control in the system. Secession was made punishable by death. THIS IS ARGUABLY about totalitarianism. There was an appeal to Russian language and history in 1941. There was a fixity. Signs of decline were very clear by the 1980s. There was a shift to bilingualism in the USSR. There was no Russian communist party: only the non-Russian republics had their own communist parties. There was corrupt kinship in the republic areas. THEREFORE this lays the groundwork for nation-states. (Breuilly)

Gellner’s Wrong Address Thesis:

  • (Gellner, 1983) In typical Gellnerian fashion, he amusingly suggests the Wrong Address Theory that in the competition for the hearts and minds of the polity, the Marxist message “intended for classes…by some terrible postal error was delivered to nations”[4]. In other words, cultural conceptions have more appeal than the proletarian conception of identity.[5]
  • This (A) argument holds that there were significant cultural factors – language, symbols and traditions – inherent in the ethno-nations before the USSR. The question is: are they important? NOT UNTIL the collapse is imminent, says argument (B).

 

This Sleeping Beauty, Prison House or Freezer description of the Soviet Union has many supporters, particularly in Western capitalist academia (inevitability of collapse).

  • For the primordialists – the nation academics – the freezer metaphor makes sense.

 

The Bolsheviks SAVED the APPARATUS of the RUSSIAN EMPIRE, just as Yeltsin SAVED the Russians from DISINTEGRATION, as does the totalitarian Putin regime:

ONCE THE COMMUNIST WHEELS FALL OFF: NATIONALISM inevitably occurs.

“Nationalism, then, may be, and clearly often is, a potent weapon of regionally or ethnically based politicians who aspire to material largesse or political power. But nationalist behaviour is not just means to, and the nationalist ideal is not just a rationalization of, non-nationalist goals: the ideal can be an end in itself and can be the most rational means for pursuing that end. That is to say, nationalist behaviour may also be the rational response of bona fide nationalists – individuals with a sincere and strong belief in the nationalist ideal – to opportunities to pursue their goals.” (Motyl, 1990)

(Brubaker, 1994)

  • Brubaker said: “Those policies…were intended to do two things:

(1st): to harness, contain, channel, and control the potentially disruptive political expression of nationality by creating national-territorial administrative structures and by cultivating, co-opting and (when they threatened to get out of line) repressing national elites;

(2nd): to drain nationality of its content even while legitimating it as a form, and thereby to promote the long-term withering away of nationality as a vital component of social life.” (Brubaker, 1994, pg 49)

Tishkov says Brubaker wrong:

1) Brubaker overestimated the existing political architecture, including the strategy for promoting the slow death of nationalities (the 1940s ‘withering away of nationality’ versus the 1980s, when they realized they couldn’t do away with these things).

Brubaker also makes the common mistake of 2) overlooking the roles of momentum, improvised reactions by actors, the search for immediate responses to challenges (political opportunism), and power dispositions in the Soviet state. TISHKOV sees manipulation from the centre as a game of chess, and “constant struggles for power in the Kremlin” (36, Tishkov, 1997).

1) Nationalism as a cause of Communist failure: (Breuilly says no way)

  • Nationalism trumps Communism: explains conflict between China and Russia.
  • ‘Primordial’ explanation – the Soviet Union ‘froze’ the nationalisms, especially under Stalin, and they re-emerged after the collapse
  • Strong significance on national identities before the existence of the Soviet Union (ie. before the Bolshevik Revolution)
  • Ronald Suny: ideas of nationality are deeply embedded in nations’ understandings of their pasts
  • Jeremy Smith: the Soviet system used repression and ‘Russification’ to put a lid on nationalist movements… re-emerged once Gorbachev’s glasnost policies took effect (ie. emphasis on blaming Gorbachev)
  • Ethnic and territorial loyalties that were present in the former Soviet republics
  • Soviet Union as a ‘prison house of nationalities’ based on the repression of national identities that ‘burst out’ – a key factor in the collapse of the Soviet Union. 

Critique:

  • Nationalism was not a strong influence until after the Bolshevik Revolution and Lenin’s ‘national policy’ that defined the republics and autonomous regions within the new Soviet Union
  • The ‘national’ question was not significant in late Tsarist Russia in the late 19th century and early 20th
  • Majority of the republics were not ethnically homogenous – multi-ethnic composition for many years before the Soviet’s ‘national’ policies based along ethnic lines were introduced.
  • For most of the Soviet period, the nationality policies were successful in integrating people (Jeremy Smith, 2005)

 

  • (A) Primordial nations re-emerge, per those who think of nations as natural. The Soviet Union was a great FREEZER: the nations had recognition but no political power, BUT as the freezer breaks down, the nations re-emerge, with mutation.

(Tishkov, 1997) TISHKOV’s CARICATURE of (A) Arguments

  • USSR as an “empire-type polity whose history was marked by territorial expansion, colonial methods of rule and the cultural assimilation of ethnic groups by more dominant languages and cultures”
  • Crimes of mass deportation and repression, annexation and liquidation of sovereign state entities, and destruction of the environment all undermined ethnic groups.
  • USSR language policy: a Russian-language policy pursued through international Communist ideology, suppressing attempts to establish political and cultural autonomy for ethnic minorities unless these attempts were sanctioned by central or peripheral elites.
  • The Communist bureaucracy strictly regulated the daily lives of the citizenry, violating their rights and freedoms and ignoring the interests connected with ethnic culture and values.

Top-down social engineering / undervaluing nationalism’s contribution: Tishkov’s (1997) critique of (A) is basically that Western theorists are over-deterministic; the Soviet Union was NOT, according to Tishkov, the last empire of the late 20th century.

(A) Scholars emphasize the inevitability of national emergence because of the USSR’s repressiveness. BUT (Tishkov, 1997) does not think it was inevitable. Western tautology: the law of collapsing Empires.

  • Nationalism as socially engineered, weak & a beneficiary: collapse was not inevitable (Tishkov et al). Agential, accommodative, manipulation: riding the tiger, Marxist ideology, pragmatic decision-making, post-communist signposts turn to nationalism:

Answer MUST be conscious of TIMING: (PRE) = pre-collapse (POST) = post-collapse

  • But the “ancient hatreds” – primordial, deep-freeze – thesis hardly explains what happened in 1989-91.
  • (B) Instrumental nations are invented: quick-thinking politicians in non-Russian areas concluded that if they were to acquire power, nationalism was the basis.
  • (D) Reforms weaken the centre > crystallisations of power elsewhere. There have been very few cases of secession from the centre that end in war with the centre winning. A lot of the outcomes are unintended: Gorbachev’s political reforms had unintended consequences.
  • (E) Democratization weakens elite power > need for popular mobilization and external support. The claim for national independence and national self-determination is strengthened particularly if it is framed in democratic terms. Breuilly looked at what happened at the CENTRE of the USSR: what happened there reduced the capacity of the elite to prevent revolt and public unrest.
  • By the 1970s the centre couldn’t distribute as much; the economic system was inefficient. Regional barons (party bosses) gained more and more power. At the dissolution of the USSR there was a running off with the family silver: once regional leaders saw that central authority was undermined, they switched to nationalism to preserve their leadership (it occurs again and again – consider Nursultan Nazarbayev in Kazakhstan). NATIONALISM is INCIDENTAL. The break-up of the USSR allowed regional bosses to take over with nationalism as the new ideology.
  • THE CRUCIAL THING IS THE CENTRE VERSUS THE PERIPHERY: why were the leaders happy to go along, and why did they stop going along with the essential Moscow line? To steal the family silver. (Slezkine, 1994)
  • Nations exist because we say they exist. So this is about politics.
  • 1st School of Thought: (A) totalitarians at the top managing everything, with the politicians and public groups as constraints against (B): BUT (B) is what really matters.
  • 2nd School of Thought: (B) is reacting against society; the Russian public was very active in the Soviet Union, though there is no sociological data to back these claims… The USSR looked like a monolith but it wasn’t.

(Connor, 1984)

What is the relationship between nationalism and communism?

(Walker Connor, 1984): ethnonationalism.

The National Question in Marxist-Leninist Theory and Strategy (Princeton, Princeton University Press, 1984)

Connor reaches four broad conclusions:

  • That communist endorsement of the right of national self-determination, including the right of secession, was instrumental in the success of the Soviet, Yugoslav, Chinese and Vietnamese revolutions;
  • That the prescriptions laid down by Marx and Lenin for post-revolutionary nation-building and national integration are too disjointed and contradictory to form a coherent strategy. Connor points out the abrupt and sharp changes in policies such as language, education, culture, personnel recruitment and regional investment. COUNTER-argument: his examples are deviations from Marxist-Leninist orthodoxy rather than symptoms of doctrinal contradictions; some incoherencies could also be viewed as pragmatism.
  • That communist regimes have regularly deviated from the few clear guidelines that Marx and Lenin did lay down: reincorporating secessionists, gerrymandering national boundaries, deporting nationalities from their ethnic homelands, curtailing national-language education and national self-expression. THIS is natural given antipathy to ‘survivals of the past’, a commitment to rapid economic modernization and a vested interest in attracting majority support.
  • That nationalism is a continuing and growing problem throughout the communist world: this is less convincingly demonstrated in a variety of cases.

CONCLUSION: Marxism-Leninism is flawed for a failure to recognize that nationalism is a ‘permanently operating factor’ in modern history.

Jeremy R. Azrael: “From this perspective not only Marxism-Leninism but the many theories that posit a negative correlation between ethnocentrism and socioeconomic modernization are also ‘fallacious’. In fact, however, the pertinent data are considerably more ambiguous than Connor implies. While he introduces a useful corrective to an already widely questioned bit of conventional wisdom, he almost certainly overstates his case. Switzerland, after all, is not entirely a product of wishful thinking, and even Yugoslavia may turn out to be more than a passing illusion – even if its survival owes little, if anything, to Marxist-Leninist theory.”

(Slezkine, 1994)

The USSR as a Communal Apartment, or How a Socialist State Promoted Ethnic Particularism. Title shorthand: USSR & Ethnic Particularism

  • The Bolsheviks knew that the national idea was powerful.
  • The Great Transformation of 1928-1932 turned into the most extravagant celebration of ethnic diversity that any state had ever financed. It accepted ethno-territorial nationalities. Then came the Great Retreat in the mid-1930s, when Stalin changed the alphabets (for example).
  • “…the explanation that class was secondary to ethnicity and that support of nationalism in general (and not just Russian nationalism or ‘national liberation’ abroad) was a sacred principle of Marxism-Leninism.”
  • Stalin said that the “Finnish nation exists objectively; to not recognize it is ridiculous: they will force us to if we don’t.”
  • Soviet nationality policy allowed for the coexistence of 1) republican statehood and 2) passport nationality. The former assumed that territorial states made nations; the latter, that primordial nations might be entitled to their own states.
  • (Slezkine, 1994) “The USSR was an apartment where different rooms housed nationalities: but the tenants of those rooms barricaded their doors and started using the windows, while the befuddled residents of the enormous hall and kitchen stood in the centre wondering: should we recover our property? Knock down the walls? Cut off the gas?” The Russians decided to let GO.

 

Nationalism was one of the important elements in the state structures. But these collapses could only come about once the Communists withdrew from the satellite states. (Breuilly)

  • Instrumentalist nation academics: they saw that the Soviet system was collapsing and that Russia lacked the will to preserve the USSR. There was a need for a new basis of ideological power.

 

(Tishkov, 1997) -> political maneuvering, top-down nation-building, collapse not inevitable, (some) nationalisms engineered to manage Soviet politics:

  • Political Agency: taking a hard agency approach, Tishkov argues that it was political agency at various junctures that explains the sporadic nationalist policy changes, and that the collapse of the Soviet Union could have been avoided with proper distribution of graft to the political elites[6] of the 15 union republics accommodated into the USSR by 1956.
    • 1917: Marxist ideology’s success was only possible through pragmatic manipulation of some preexisting ethno-nationalist movements in the Russian Empire.
  • Nationalism was WEAK or unimportant until the 1980s: his emphasis on agency and on the private motivations of actors detracts from the contribution nationalism made prior to and during the early, middle and post-Communist stages of this complicated historical narrative.
  • NATIONALISM was used against bourgeois movements.

The LONG-TERM goal of the Soviet Union was to ‘demystify and discard’ nationalism.

  • Incoherent, Contradictory M-L Policies: the subsequent contradictory nationalist policy of socialist federalism, retrenchment of the dominant ethnie (Russian) and social engineering under Stalin was a complicated by-product of political agents responding to continued pragmatic concerns.
    • Central Asia: a large bloc of Islam > Stalin feared this > constructed separate republics in the Caucasus.
    • The Marxist revolution was meant to be international, but it occurred only in the Russian Empire: minority nationalities were given their republics to help in the revolution…

Social Engineering:

  • a) Unitary control was not a viable option and socialist federalism would territorialize ethnicity creating further identities and ethnic distortions.[7] The problem with any form of federalism is that it both garners necessary support from elite leadership of the satellite republics but hinders unitary integration to the dominant ethnie. (Tishkov, 1997)
    • 1st, the inventory of ethno-nations: Social engineering “meant inventing nations where necessary.” (30, Tishkov) Tishkov notes that the census of 1897 recorded 146 languages and dialects in the country.
    • Lenin said that the state proclaimed the right of self-determination for ‘formerly oppressed nations’
    • b) Federalism Produces Ethnic Particularism: Tishkov is therefore not a supporter of federalist systems, since they “promote ethnic particularism.”[8]
    • Long-Term v Short-Term Goals: Long-term goals of communism called for unity “within a single state”[9] but short-term political expediency called for Lenin’s socialist federalism and state driven ethnography.[10] (Tishkov, 1997)
    • Improvised Nationalism Policy: argues that “the ethnic policy of the Soviet Union was designed on an improvised basis partly to meet the serious challenges issuing from the regions and ethnic peripheries of the Russian empire, and partly to meet doctrinal aspirations.”[11] (Tishkov, 1997)

Territorialized Nationalism: when the USSR broke up there were long-existing territorial boundaries; these territorial republics were ethnically mixed, and ethnography in the USSR legitimized these groups.

  • The post-Communist consequence, for Tishkov, was that territorially entrenched republics under the USSR were then transformed into nation-states by their political elite. This leads Tishkov to conclude that the USSR created those nations rejecting the prison house or freezer description of the Soviet Union by western academics who see the collapse of communism as inevitable, particularly (Motyl, 1990).[12]

(Alter, 1985)

  • Russian Nation as New: the Russian nation in an ethnic sense was introduced to public discourse rather recently, as a logical ingredient of what official propaganda and academic language had labeled ‘the building of Soviet nations’.
  • Stalin never really denied that nations existed.
  • Stalin Russian Appeal: mobilized the ‘glory of Russia’, its ‘deep historical roots’, its ‘mystical soul’ as part of popular mobilization during World War II.
  • Later it began to reflect social changes within the Soviet Union, especially demographic patterns and the growing social mobility of non-Russian nationalities.
  • New Russian nationalists: clothed their hegemonic motives with emotional rhetoric about the impending extinction of the Russian people and the degradation of their traditions and culture (Tishkov, 1997)

 

  • Two-handed approach:
    • “pursued a harsh policy of repression and hyper-centralized power;
    • they carried out a policy of ethno-national state-building, accompanied by support for prestigious institutions and elites as a means to preserve the integrity of the state and exercise totalitarian rule.” (39, Tishkov).

Tishkov doesn’t believe in the triumph of nations: “It is rather a small layer of political and intellectual elites who set themselves up as the representatives of the nation and formulate national demands, with little or no pressure from or consultation with the mass of the people, in order to fulfil their own agendas.”

Tishkov is about political agency: individual actors affecting the consequences of nationalist movements. They didn’t have to TURN TO NATIONALISM: it wasn’t inevitable, it was simply easiest…

(COLLAPSE) USSR HISTORICAL NARRATIVE

  • As Gorbachev attempted to reform the centre, opening up first the economy and then the political sphere, disparate groups – including those claiming to represent national interests – started to assert themselves.
  • Soviet Union -> collapse -> nationalist states. The Baltics had a recent historical memory and were therefore first to secede from the USSR. Historically, each part of the USSR’s collapse has a unique, particularistic narrative: we CANNOT claim that it was all socially engineered and that nationalisms were weak if they existed at all. Tishkov’s belief that nations were meaningless instruments of state actors is grossly misleading.
  • The Baltic nations had a historical memory of independence from before 1945 and were therefore first to secede from the USSR. (<- Tishkov says they were socially engineered)
  • (POST) The Poles would have trouble believing their historical nationalism was a socially engineered project from Moscow. As we’ve seen, the USSR had to manage nationalisms from the beginning and would not have gained control of the Russian Empire without the consent and support of this competing ideology.

(Breuilly, Lecture) The Nationalist Resurgence and the Collapse of Communism

  • The Gorbachev era and reforms > instability > non-Russian republics as power arena. Gorbachev came to power much younger than the USSR usually allowed. He wanted the USSR to remain competitive against the West and sought greater levels of competition: the economic reforms failed. Political participation opened up in local and federal elections. There was a turn against established elites and a shifting of conflict into the republics.
  • Non-intervention policy > impact on the Warsaw Pact states > collapse late 1989-early 1990 > variations > state/nationhood combinations. Gorbachev sees the unraveling. Yugoslavia goes multi-party; the Berlin Wall falls in November; Czechoslovakia follows; in Romania in December, Ceausescu is executed.
  • There were major dissidents: their platforms were anti-communist BUT they saw themselves as genuinely national. The state structures were important; it was a statist system. The Soviet Union withdrew support from these incompetent regimes. Breuilly will not focus on non-USSR regimes.

(Breuilly, Lecture) The politics of collapse supports Tishkov

(1) Unintended Outcomes: the accidental-collapse argument. In the non-Russian republics there was a range of aims; independence was often an unintended outcome, and the key was Russian action.

  • Russia was less willing to concede independence. Republican structures meant that all of these states, the keen Baltics and Eurasia alike, were given independence. We should not confuse outcomes with preferences. (Breuilly, Lecture) We cannot say that independence was necessarily the desired outcome. Just as with the Warsaw Pact states, Russian action was the key to the non-Russian republics.
  • The failure of the August 1991 coup brought in Boris Yeltsin. Russia had decided it didn’t want the burden, and Russian action triggered the outcomes elsewhere. Although nationalism was important, it wasn’t the cause. (Breuilly, Lecture)
  • Liberalization can lead to self-destruction very easily…..

 

CRITIQUE of THIS (B) PERSPECTIVE:

  1. Simplistic top-down approach: it excludes the cultural approach, which would argue that nationalism was a more significant force beyond its political instrumentalism and that social engineering did not create the post-Communist states in every case or by itself. Underlying Tishkov’s emphasis on Soviet political agents is his neglect of nation and ethnic-group existence beyond the “construct[ed] realities that could correspond to political myths and intellectual exercises.”[13]
  2. Secondly, there is a more nuanced reality to post-Communist nation-state formation. It would be misleading to argue that these preexisting ethno-nationalist movements were not influenced by the Soviet Union – nationalist formations, particularly in the Central Asian cases, were greatly influenced by the USSR – but they were not created solely by top-down engineering. Tishkov undervalues the fact that the USSR was a diverse multi-ethnic and multi-national political society at its inception. Multilingual groups existed regardless of the subsequent Soviet social engineering that was legally entrenched in the political structure.
  3. CULTURAL FACTOR: the top-down approach to nationalism underplays significant cultural factors such as language, symbols and traditions inherent in the ethno-nations before the USSR. Tishkov is not completely wrong, since each part of the USSR’s collapse has a unique particularistic narrative; one cannot claim that it was all socially engineered, but it is partially true since the Soviet Union did shape the subsequent nation-states through the political decisions of its leadership, as Tishkov suggests. Smith’s critique: “Tishkov’s account of nationalism is deficient. In identifying elites as both the inventors and propagators of nationalism, he perhaps goes too far in dismissing the ‘nation’ as a meaningless category in academic discourse; however false its foundations, however much it is a tool of political elites, nationalism can be a powerful mobilizing force and the issue of the connection between elite scheming and mass mobilisations is inadequately dealt with.” (1545, Smith)
  4. Value-Laden Tishkov: NOTE THAT RUSSIAN ACADEMICS are more likely to claim that the Russians created the Baltics, Poland, Ukraine etc: because they were under the Russian Empire, then Soviet Union.

 

Causality debate: Nationalism was not the chief cause of the collapse but one of the primary beneficiaries of that collapse.

2) Nationalism as a consequence of Communist failure:

  • ‘Instrumental’ explanation – nations and nationalisms were invented in response to the collapse of the Soviet Union
  • John Breuilly: instrumental nations were invented through the collapse of the Soviet Union and the failure of the Communist regime – the republics responded to the collapse with the need to form new political institutions and adopt a new ideology to replace Communism (lecture).
  • the non-Russian republics didn’t necessarily want independence – often an unintended outcome (Breuilly)
  • The structural conditions facing the Soviet Union – a deteriorating economic situation, geopolitical weakening at the end of the Cold War – were the principal causes of the collapse of the Soviet Union, not nationalism.
  • Gorbachev’s political and economic reforms weakened the Soviet centre and allowed the republics more political and economic control over resources
  • Ronald Suny: as economic decline accelerated, people reacted by adopting the only other form of personal identity open to them: that of the nation.
  • Instrumental approach places emphasis on the impacts of Gorbachev’s reform policies in the late 1980s as a key explanation for the invention of the former Soviet nations – especially Jeremy Smith – blames nationalist resurgence on Gorbachev and that his reforms in the 1980s provoked violence and downfall of Soviet system
  • Alexander Motyl: Role of republican elites who led anti-Soviet oppositional discourse – powerful in their own republics… also several were imprisoned during the 1960s under Brezhnev and later released in the 1980s. Gorbachev actually reinforced oppositional elites by legitimizing their opposition to the central state – through ‘perestroika’, ie. official liberalization policy.

 

Critique:

  • why was the only ‘alternative’ identity nationalism? (critique of Suny)
  • why did these movements take on a ‘national’ character and not another type of opposition?
  • Motyl: importance of language and culture to non-Russian republics – role of symbols, memories and postwar independence (ie. brief periods of independence of most republics in the 1920s following the Russian Revolution)
  • downplays ethnic ties and kinships – prominent in republics, especially elite groups were ethnically based.

  • 3 BIG CASES & Belarus

What role does nationalism play in post-Communist states?

THESIS: Important to point out differences because whatever we try to say generally about communism and nationalism is only going to be right in some cases. Diversity proves a point: communism had not succeeded in homogenizing.

 

Engineered                  Primordial

Ukraine                       Ukraine

Azerbaijan

Armenia

Georgia

Baltics

 

*ROMANIA: Katherine Verdery (see: “Nationalism and National Sentiment in Post-socialist Romania”, Slavic Review, 52:2, Summer 1993).

  • (Verdery, 1993) socialism couldn’t expunge the national consciousness created in the 19th century
  • leaders of Communist parties held power in environment where all other organizational entities had been dissolved – nation was a natural option once the centre had collapsed
  • Shortage-alleviating strategies were common: when there’s a shortage of hair dye, only the Hungarians get served…
  • Post-communists: blaming minorities for the country’s ills rather than themselves, externalizing blame
  • Sociological: “Most East Europeans are used to thinking in secure moral dichotomies between black and white, good and evil.” US vs THEM and “Social Schizophrenia” of communism has to be replaced by “other ‘others’”
  • Soviet Union was the first state in history to be formed of political units based on nationalism. (Suny, 1998)

(POST-COLLAPSE) Nation-States EMERGE

  • Post-collapse, Russians are frustrated that the Baltic states claimed independence, because Russian imperialists feel that they engineered those states…
  • Post-Communism: Role of Russia as successor of Soviet empire; new states in the Caucasus and the Balkans since the 1990s; on-going ethnic conflicts and independence movements in post-Communist states (eg. South Ossetia-Georgia, Kosovo) – legacy.
  • Remaining Communist Political Elite.

 

*CAUCASUS:

South Caucasus comprises:

Georgia (including disputed, partially recognized Abkhazia and South Ossetia)

Armenia (restoration of independence)

Azerbaijan (including disputed Nagorno-Karabakh Republic)

 

*BALTICS: (most keen) quick to secede, but the Russian minorities there were deprivileged. There are questions about this ethno-nationalism. The EU gives these groups rights.

Latvia

Lithuania

Estonia

All had historical memories of independence. Not socially engineered.

 

*UKRAINE: divided between social engineering and pre-existing nationalism: a linguistic distinction. A civic national perspective rather than an ethnic one.

 

*BELARUS (and the Central Asian republics): they stay close to Russia. They don’t have the nationalist sentiments.

Close connection with Russia; Russia was less willing to give it up.

DON’T confuse outcomes with preferences in the case of Belarus. This was a process of Unintended outcomes: can’t read back from the final outcome to the role of nationalism.

 

*RUSSIA: “The dog that didn’t bark”: the imperial nationality was not interested in nationalism. There were specific features: a land-based empire, and a voluntaristic civic nationalism. There were ideas of restoring the USSR model, expanding power to the diaspora, acting as protector of Russians abroad. This was prevented by the August 1991 coup: Boris Yeltsin decided that Russia no longer wanted the burden. The crucial fact is the policy act in Moscow. Nor has there been a nationalist renaissance under Putin: there has been only ONE Chechnya. People thought Russian nationalism would be violent, but that hasn’t happened; there are more important issues.

Warsaw Pact Cases: Czechs/Slovaks (the only nationalist split); nationalism wasn’t key here: it was about state power and the issues of capitalism.

Yugoslavia serves to demonstrate the importance of political resources as a foil to the USSR. Relationship between the agency of actors: Milosevic -> Serbian national territory, not willing to give up control.

A LITTLE primordial a LITTLE modernist:

*Henry E. Hale, The Foundations of Ethnic Politics: separatism of states and nations in Eurasia and the world

(Hale, 2008)

  • “Uncertainty Reduction”: it is natural to support something that everyone can agree on within a new polity, i.e. national objectives. It gives people a map: it wasn’t an emotional primordial approach, nor one of interests (modernist); it was about having a signpost.
  • Secessionist republics tore one global superpower apart and plunged Tito’s Yugoslavia into homicidal chaos. BUT WHY?
  • While Lithuania spearheaded secession from the Soviet Union in 1990, neighboring Belarus remained loyal to the idea of integration.
  • While Slovenia and Croatia seized the chance, Montenegro and Serbia remained.
  • (Hale, 2000) wonders a) why some ethnic groups fight for secession and b) why some ethnic groups fight for multi-national states, finding that secession = affluent ethnic group + already autonomous + least assimilated.

Despite implicating ethnicity in everything from civil war to economic failure, researchers seldom consult psychological research when addressing the most basic question: What is ethnicity? The result is a radical scholarly divide generating contradictory recommendations for solving ethnic conflict. Research into how the human brain actually works demands a revision of existing schools of thought. At its foundation, ethnic identity is a cognitive uncertainty-reduction device with special capacity to exacerbate, but not cause, collective action problems. This produces a new general theory of ethnic conflict that can improve both understanding and practice. A deep study of separatism in the USSR and CIS demonstrates the theory’s potential, mobilizing evidence from elite interviews, three local languages, and mass surveys. The outcome is a significant reinterpretation of nationalism’s role in the USSR’s breakup, which turns out to have been a far more contingent event than commonly recognized. International relations in the CIS are similarly cast in new light.

*Philip G. (Roeder, 2007) Where Nation-States come from: institutional changes in the age of nationalism

  • “Institutional advantage” (Roeder, 2007): there was a map for negotiating a position, a constant playing on the national to achieve power. The federalist system PRODUCES quasi-national states. The republics each have one foot out the door already: they have the institutional advantage. It wasn’t just a matter of cognition; Roeder stresses the institutional. There was constant political negotiation with each other.

COMBINE these two books to see why the institutions of republican entities would come to dominate post-Soviet politics. So Breuilly and the primordialists have a middle ground here.

To date, the world can lay claim to little more than 190 sovereign independent entities recognized as nation-states, while by some estimates there may be up to eight hundred more nation-state projects underway and seven to eight thousand potential projects. Why do a few such endeavors come to fruition while most fail? Standard explanations have pointed to national awakenings, nationalist mobilizations, economic efficiency, military prowess, or intervention by the great powers. Where Nation-States Come From provides a compelling alternative account, one that incorporates an in-depth examination of the Russian Empire, the Soviet Union, and their successor states. Philip Roeder argues that almost all successful nation-state projects have been associated with a particular political institution prior to independence: the segment-state, a jurisdiction defined by both human and territorial boundaries. Independence represents an administrative upgrade of a segment-state. Before independence, segmental institutions shape politics on the periphery of an existing sovereign state. Leaders of segment-states are thus better positioned than other proponents of nation-state endeavors to forge locally hegemonic national identities. Before independence, segmental institutions also shape the politics between the periphery and center of existing states. Leaders of segment-states are hence also more able to challenge the status quo and to induce the leaders of the existing state to concede independence. Roeder clarifies the mechanisms that link such institutions to outcomes, and demonstrates that these relationships have prevailed around the world through most of the age of nationalism.

  • International Causality

 

  • (Hutchinson, Revision2) “What is the role of international factors? The post-1989 territorial split-up into ethnic categories: what might be the factors that explain the survival of these entities? Was it the international community’s desire to keep the patchwork, the reluctance to accept new sovereign states? They are more likely to support territoriality in Ukraine: it’s the role of international recognition.”

INTERNATIONAL RELATIONS:

Reform enabled the crystallization of centres in the regions. They never wanted a war of secession; if the centre resists, the centre usually wins those wars.

  • WESTERNERS love self-determination framed on democratic claims so those elites are central.
  • 1) Seeking “states” to recognize > Warsaw Pact, non-sovereign republics.
  • 2) The problem with other claims: once one claim is recognized there is a fear of creating a doctrine of secession rights. The way around this is to say that these states aren’t new at all. With Poland there is some difficulty in recognition; the same reasoning applied to the republics in Yugoslavia. A federalist system was halfway to secessionist movements. There have been problems in Bosnia-Herzegovina. Kosovo was part of the republic of Serbia. Russia has the difficulty that Chechnya didn’t fit the same structure as the non-Russian republics for federal-system transition to self-determination.
  • 3) Specific features, e.g. the reunification of Germany was ingenious: simply create Länder in East Germany and allow those Länder to vote themselves into West Germany. The German decision to recognize Croatia first changed, indeed destroyed, Yugoslavia; also émigré influence, and the EU and the Baltic states. The EU has played a crucial role.
  • You could argue the federalist system was a natural movement toward self-determination: Bosnia/Croatia and Serbia; Kosovo is a clear-cut case but was part of Serbia.

EMERGENCY KNOWLEDGE: Self-determination:

INTERNATIONAL RELATIONS: NATIONAL SELF-DETERMINATION AND STATE SOVEREIGNTY

INTERNATIONAL RELATIONS: A WORLD OF STATES

THE IR CONCEPT OF THE STATE

HISTORICAL JUSTIFICATIONS

FROM DYNASTIC TO POPULAR SOVEREIGNTY

THE CONCEPT OF NATIONAL SELF- DETERMINATION

THE AMBIVALENCE OF SELF-DETERMINATION

NATIONAL SELF-DETERMINATION VERSUS STATE SOVEREIGNTY

CONTEMPORARY ARGUMENTS AND ISSUES

INTERNATIONAL RELATIONS: A WORLD OF STATES

THE REALIST APPROACH

State as unitary actor > narratives and “games”. States act. Diplomatic: Britain does this, France does this; it is very rational-choice theory. There is narrative history OR sometimes descriptive game theory.

International “anarchy”: SUPRA-state authority does not exist. With these simple assumptions you can go on to other things.

State interests/preferences > State powers (how to measure) >

State reason (best use of power to realize interests)

The state wants to impose power. The state has interests and preferences, and it has powers; some states are more powerful than others. The economy, stability, support by the subjects; quality of leadership is more difficult to measure. Once a state has rationally gauged its powers, it can define the best use of them: the ends and the methods.

Order out of anarchy > diplomacy and war > international order

The order is plural, conflictual and consensual (consensual in the sense that states recognize they all have the same reasoning capacities: this should produce stability).

QUALIFICATIONS OF A “SOFT” APPROACH

Non-state actors: it is not only states that have power: international corporations, religious groups, illicit organizations; there is also the question of international organizations.

International norms: states are not simply self-interested actors: liberal democratic states behave rather differently. This leads to the idea of international society.

 

IR ASSUMPTIONS ABOUT THE STATE

The state as sovereign > indivisible power with nothing above or below > a modern world? Sovereignty is the key assumption: an indivisible source of power, backed up by the means of violence. No comparable authority exists above or below. THIS is a modern framework BUT it also doesn’t exist in parts of the world today.

The state as territorial > indivisible power with nothing beyond > modern?

The notion of a sharply defined state territory is arguably a product of modern history.

The distinction between state-society and state-state relations > reflected in distinctions between academic disciplines

Externally the state enters into discussions with other states.

Mirrored in nation/nation-state distinction

Nation (sovereignty) and Nation-state (state)

HISTORICAL JUSTIFICATIONS I                                                                                

THE STORY OF WESTPHALIA: the Treaty of Westphalia, 1648, a Europe-wide settlement of the disputes in question:

  • Formal equality of states: recognized the actual power of larger states.
  • Rejection of any authority above that of the state: Holy Roman Empire or Papacy.
  • State consent as the basis of legal obligation: treaties and diplomacy
  • Territorial integrity: supported the territorial integrity of the states making the agreement.

THERE IS A NON-INTERVENTION VALUE IN MOST STATES>

Non-intervention in the domestic affairs of other recognized states (1555 Peace of Augsburg: ‘Cuius regio, eius religio’, ‘whose realm, his religion’)

The Treaty of Westphalia recognized Lutheranism, Catholicism and Calvinism.

The religion of the people did not switch to the religion of a new prince.

Complications: three recognized religions, guarantors, non-territorial “states” (Holy Roman Empire) involved

 

HISTORICAL JUSTIFICATIONS II                                                                               

AFTER WESTPHALIA: French hegemony was officially given up. The powers then developed the balance-of-power ideal.

European diplomacy > Utrecht > balance of power > dynastic policy as fitting the IR model > but arguably a source of disorder

Dynastic wars, calculation, the deliberate removal of passion; Frederick the Great is anti-Machiavellian yet Machiavellian. Paul Schroeder: the consequence of all this egotistical calculation was a great deal of instability; the lack of overarching norms was a product of that time.

New state formation, 18th/19th centuries: the American Revolution + British North America, Greece, Bulgaria, the 1878 Treaty: certain nations accepting intervention or non-intervention.

New state formation after 1918: there is a new source of international legitimacy. (The state as a territorially bound unit: the 1884-85 Berlin Conference decided how African territories were defined.) Woodrow Wilson’s national self-determination was used to legitimate the breakup of the Habsburg Empire. Unlike before, there was the establishment of international organizations. There was a new order of nation-states. There were a number of bilateral minorities treaties which states had to sign and respect.

New state formation: after 1945 decolonization, the UN

New state formation after 1989: there was an attempt to square the legal order of sovereign states. Along with this shift to creating a series of explicitly recognized states, there was a whole body of political theory sustaining these states.

Political theory, international law and the sovereign state

SOME PROBLEMS WITH THE HISTORY

THE MODEL OF THE STATE                                                                                       

  1. Normative not descriptive > stateless zones and empires > adjustments to the IR model

It’s arguable that the concept is normative: are states aspiring to this model, or do they actually establish it? Much of the world is still stateless zones and empires, and there are adjustments to the IR model.

  2. The internal problem of sovereignty: the state as divided, not unitary > possible answer: “switching interests”. States cannot be called unitary; there are different interests. Can someone understand German foreign policy without understanding the shift from political democracy to an authoritarian regime? These interests can be constantly switched: the state’s external interests will change.
  3. The external problem of “sovereignty” > transnational economic and ideological power relationships. When one state

THE IR RESPONSE: THE NEED FOR MODELS. The world is complicated; at least this simplifies matters: the model works.

  • What alternatives are on offer?

 

FROM DYNASTIC TO POPULAR SOVEREIGNTY

SOVEREIGNTY AS NECESSITY: Hobbes and Locke on the necessary political power of the state; the social contract

SOCIAL CONTRACT: sovereignty is based on a social contract that people have.

EQUALITY AND DEMOCRACY: this implies that they are democratic and equal

INSTITUTIONAL SOLUTIONS – elected institutions and so forth. This adds to the impersonal nature of the modern state.

THE GROWTH OF MASS POLITICS –

LEGITIMATE STATES AND INTERNATIONAL RECOGNITION – this increases the notion of legitimate states. Once you have the idea of a world of states, you have legitimacy based on sovereignty; THEN how do you conceptualize popular sovereignty?

NEW COMPLICATIONS

THE CONCEPT OF NATIONAL SELF-DETERMINATION

The concept of self-determination: Kant. It concerns the rational moral autonomy of the human being. The concept was later collectivized and nationalized.

Collectivizing the concept: Herder, Fichte

Identifying the nation > before the people can choose it is necessary to choose the people

How do you identify the nation? Before the people can choose, you need to define the people and WHERE the right is exercised. How elections turn out is a question of the boundaries: they determine who exercises the democratic process.

The conceptual tension between “popular” and “national” sovereignty: how are the majority and minority constructed? The collectivity is not just the people; it is the nation, according to the academics. The idea of national self-determination can then conflict with popular sovereignty within a state.

 

NATIONAL AND POPULAR SOVEREIGNTY: AMBIGUITY

Who are the “people”?

Who is the “nation”?

States equal nations: sovereignty and non-intervention > separation/union as rare > no region has a “right” to self-determination. The UN says that nation = state. This makes separation/union rare, with dominant groups blocking any notion of change to state boundaries.

States do not equal nations > the right of separation/union > justifiable intervention > territorial change as potentially frequent

Nations can be very distinct from a given state; this can be used to justify intervention to support such rights. There is thus not just a tension with popular sovereignty.

 

SELF-DETERMINATION: THE HISTORIC RECORD

Before the 19th century: qualifying Westphalia > dynastic sovereignty

18th/19th century: justifying new states > revolution > unjust rule > identity

The right of revolution against unjust rule. New states are based on just rule grounded in the identity of the people. BUT this is an ambiguous term. The Declaration of 1789: Citizens & the Nation. Is the collective cultural French nation applied here?

Post-1918: justifying new states > Wilson’s Fourteen Points. He talks about national self-determination: is it to be based on democratic consent? There might be territorial difficulty. The rules were only applied to the LOSERS, and many thought the rules were not APPLIED by the winners in the Middle East. There were problematic issues of drawing boundaries: this was tremendously difficult. Turkish inhabitants in Greece, etc. The new danger: self-determination becomes a national purifying act.

CONSTANT AMBIVALENCE > justice (religious, political etc.) > democracy > identity

National self-determination as new power ideology

The dangers of “new state formation”

HISTORICAL RECORD CONTINUED.

Post-1945: justifying new states > anti-colonialism. The UN 1960 Resolution, the Declaration on the Granting of Independence to Colonial Countries and Peoples, confines the right of self-determination to “a territory which is geographically separate and is distinct ethnically and/or culturally from the country administering it.”

The USSR & US were anti-colonial. They also saw that ethno-national identity was highly problematic: under that resolution, the UNITED STATES would have been geographically separate BUT not ethnically or culturally distinct from the nation administering it.

Post-1989: justifying new states

State renunciation (Russia 1991) > federal units as “quasi-states”

The 1960 Resolution doesn’t really work for the quasi-states of the USSR: the Baltic states had been independent. The problem even applies in the case of Yugoslavia, at least to parts of Yugoslavia before it broke up.

Federal units: East Germany reconstituted its Länder before they merged with the West German portions.

Blurring “state”, “popular” and “national”

Erecting barriers against separation and union: the real problem is when there is no state: Croatia and Serbia, Bosnia and Kosovo, Russia and Chechnya. The IR view of the state is a NORMATIVE VIEW: it is not what states actually are; it’s what they should be.

IR view of state as norm rather than reality

CONTEMPORARY ARGUMENTS


Irredentism: the issue is that normative language is producing the world of fixed nation-states: it reifies them.

Arguments for secession: positive/negative; rights/identity. The positive argument is political and democratic: if a clear majority wants to secede, it is what they have decided.

The negative argument turns on what others have done to that state’s inhabitants: if the inhabitants are subject to violence. This second argument focuses on the particular inhabitants of a territory.

Arguments against secession: arbitrary, creating and intensifying conflict, recipe for instability

Opponents say that such a right would cause much trouble: countenancing it may bring about the genocide of ethnic groups to prevent such independence from ever occurring, and the international response would be problematic.

Once a region has a right of secession, what about a region within that region? There is always a minority within a minority: this produces a further set of justice claims, and may also create non-functionally small states.

From collective to individual rights: arguments for intervention > arguments against

The current record of intervention is not encouraging. States act from self-interest, which may have nothing to do with human rights. Some seek to get rid of sovereignty as a whole, removing the idea of states as hard units and moving towards notions of autonomy. Autonomy is a way to grant national recognition while not challenging state sovereignty.

From self-determination to autonomy > end of sovereignty argument? > Norms or “real”?

It is often the language in which arrangements are legitimized. One could object that this is all misleading fiction which actually justifies current state control. SOME might say that the IR model doesn't apply well and is an oversimplification.

From norms to "realism": can we still build on the sovereign state? The modern state has been very good at insisting on its sovereignty, even as an irrational actor.

Bangladesh is the only real example of successful secession. Secessionism usually reflects some kind of breakdown, and breakdown leads to disputes. The question is whether we confuse a final claim with the BARGAINING that goes into avoiding that claim. In the interwar years it was realistic to make claims in ethnic language, BUT that kind of language became illegitimate: claims had to be put in civic terms. People now frame arguments in terms of multiculturalism.

 

The changing languages of norms and "realism"

Is the sovereign nation-state still a “realistic” power unit or an acceptable normative concept?

The language is implicated in the very process itself.

Recent relevant publications

On the continuing importance of imperial states:

John Darwin, After Tamerlane: The Global History of Empire (2007); Herfried Münkler, Empires: The Logic of World Domination from Ancient Rome to the United States (2007)

On secessionism in contemporary Europe see the books by Hale and Roeder listed in last slide of my communism and nationalism lecture

As a general reader: John Baylis et al. (eds), The Globalization of World Politics: An Introduction to International Relations (4th ed., 2008)

Elie Kedourie, Nationalism (1960/93)

Chapter 2 SELF-DETERMINATION

  • Before Immanuel Kant (1724-1804) our knowledge was based on sensations and the memory of sensations. As a result you could argue that people were the prisoners of their sensations. Therefore, liberty and other virtues were difficult to prove.
  • Kant showed a way out of this predicament. He argued that morality and knowledge are separate things. The first is the outcome of obedience to a universal law which is to be found within ourselves, not in the world of appearances that is at the base of the second. Man is free if he obeys the laws of morality which he finds within himself and not in the external world. Only when the will of man is moved by such an inward law can it be really free, and only then can there be talk of good and evil, of morality and justice.
  • This inward law is denoted by Kant as the categorical imperative. This was a new and revolutionary formula because it was totally independent of nature and of external command. For Kant the natural world could not be the source of moral value, neither the will of God. Good will = free will = autonomous will.
  • Kant’s doctrine makes the individual the very centre, the sovereign of the universe. This, however, has consequences for the belief in the existence of God. In Kant’s theory God is reduced to an assumption which man makes in asserting his moral freedom. Man is no longer the creature of God; rather, the creation occurs the other way around.[14]
  • The logic of his doctrine was carried further for example by the theologian Friedrich Schleiermacher (1768-1834) for whom religion was only the spontaneous expression of the free will. Everything must contribute to the self-determining activity of the autonomous individual. Religion functioned as the perpetual quest for perfection, a perfect way of self-cultivation.
  • Kant’s doctrine influenced not only theology; it also had consequences for politics. Autonomy becomes the essential end of politics. A good man is an autonomous man, and for him to realize his autonomy, he must be free. Self-determination thus becomes the supreme political good: from now on it could be seen as the highest moral and political good. Kant did attempt to discuss politics in terms of his ethical doctrine, for example in his treatise on Perpetual Peace (1795).[15]
  • Kant’s ethical teachings expressed and propagated a new attitude to political and social questions. Several habits and attitudes were encouraged and fostered by the doctrine. They helped to make self-determination a dynamic doctrine.
  • According to Kedourie, Nationalism found the great source of its vitality in the doctrine of self-determination. Both the French Revolution and this Revolution of Ideas constituted nationalism.

James Mayall (1999), “Nationalism & International Society”

+The Domestication of National Self-Determination

-new sense of international legitimacy post 1918: equating popular sovereignty with end of Europe’s dynastic empires (later anti-colonialism)

-groups whose hopes for political self-determination were heightened after 1918 view the taming of national self-determination as a betrayal

-thus, these “new” self-proclaimed national groups challenge the state

-this indeterminacy is because boundaries of collective are not given by nature

-though existentially this may not be true (J.S. Mill) – nation as a group whose identity is forged by a particular interpretation of its own history

-when it is acknowledged that there is no external objective criteria to distinguish between legitimate and illegitimate collectives, turn to an open test of public opinion to solicit the wishes of individuals in respect to their collective identity (this was strategy of Versailles Conference)

+Weakness of establishing self-determination by plebiscite

  1. Assumes collective identity already exists
  2. Discounts agenda-setting (whoever controls the questions of the plebiscite can usually manipulate the outcome).

+At first, these weaknesses were not acknowledged by liberals after WWI; but they soon became obvious when redrawing the map of Europe, in respect to three situations:

  1. Problem of minorities: scrambled throughout Europe (virtually impossible to assign a territory that wouldn’t exclude at least some nationals)
  2. Question of territory: previously this was decided by conquest, which now was seen as politically incorrect, but then how do we determine which states are legitimate?
  3. State integrity v. security: winning powers were willing to use plebiscites to determine new states, but did not want to apply them to their own territories (e.g. the British were not willing to settle the Irish question by plebiscite)

-Kashmir dispute is a contemporary example:

-formula of partition allowed rulers to choose India or Pakistan, Hindu rulers at the time chose India, despite overwhelming majority of Muslims

-after an uprising which led to a de facto partition of the state, India was willing to conduct a plebiscite, presuming that Pakistan was not in a position to determine the outcome; yet since it was clear the vote would not favor India, it was never actually honored.

-this example shows that when using a plebiscite it is impossible to prevent the intrusion of the political and strategic interests of the major powers.

+From the League to the United Nations

-trouble with redrawing European map based on language of self-determination was that it created political problem of minorities and substituted national determination for the idea of an act of self-determination (Wilson aware of potential for tyranny)

-protection of national minorities under League proved ineffective

-thus the UN Charter was anxious to play down cultural nationalism in order to obtain the cooperation between states that was already collectively guaranteed.

+The Conventional Interpretation

-even after 1945, whenever nation came into conflict with state, it was the people who had to move (e.g. 10 million Germans uprooted after WWII, mass population transfers following the Indian partition)

-the refugee became a typical figure of the 20th century

-concept applied to removal of European powers from overseas territories

-post-colonial tests of public opinion to settle disputes over self-determination ruled out by regional organizations established in newly independent African and Asian states

-revision of artificial boundaries created by Europeans no longer central to many African political agendas following independence

-national self-determination is ironic: it conquered the world by legitimizing the state, yet it also attempts to freeze the political map by bringing an end to the territorial division of the world.

-national movements are mostly unsuccessful in overthrowing conventional interpretation of self determination

-what challenge does nationalism pose for the contemporary international order, and under what circumstances is it likely to be successful?

+Two Challenges:

-conventional interpretation (anti-colonial) of national self-determination is clearly a compromise

-thus remains vulnerable to those who feel underrepresented

-the main challenges are irredentism and, the main rationalist challenge, secession

-irredentism: any territorial claim made by one sovereign national state to land within another

-supported by historical/ethnic claims (e.g. Argentina claim to Falkland Islands, Moroccan claim to Mauritania, Spanish to Gibraltar)

-claims to land are combined with appeals to popular sentiment

-claims by national core to peripheral lands, used by government as a means of mobilization and to secure popular support

-secession: successful secession is very rare, but the term also applies to unsuccessful separatist rebellions against the state

-any attempt by a national minority to exercise its right to self-determination by breaking away to join another state or establish independent state of its own

-secession depends on group sentiment and loyalty; form of mass politics organized from below rather than above

-it seems likely that irredentist claims (except where supported by powerful secessionist sentiment) will be defeated when submitted for legal arbitration; thus they may not constitute a permanent threat to international order

-secession is a more standing challenge (based on ‘rationalist’ world in which self-determination is seen as a basic human right)

+The Preconditions for national success:

-territorial revision is rare, thus so are the circumstances that are conducive to it

-8,000 identifiable separate cultures, yet only 159 independent states.

-the three great waves of state creation associated with collapse of empires

-but there are no more empires left to collapse (at least formal ones)

+Three circumstances where secession has potential

  1. Regional patronage: if the two superpowers are reluctant to support secession and a stalemate occurs, there is an opportunity for a regional power (with its own interests) to assist the national movement (e.g. the creation of Bangladesh – the Americans supported Pakistan, the Soviets India; India was ultimately able to intervene and support secession)
  2. Superpower competition: ideological rivalry between US and Soviets led them to encourage ethnic separatism in order to weaken the other side or gain a tactical advantage; also used to obtain leverage in global diplomacy (e.g. American support of Kurdish rebellion in Iraq not motivated by American desire for an independent Kurdistan, but to weaken Iraq internally for their Iranian allies)
  3. Constitutional separatism: the possibility of secessionist demands being peacefully accommodated; the only examples are Norway's secession from Sweden in 1905 and the Irish Free State from the UK in 1921 – neither power was willing to preserve unity by forcefully suppressing the demands for separation, and neither wanted a civil war; there were also structural restraints – in each case a historical sense of identity predated the nationalist era and was generally accepted, and the contending parties, led by liberal nationalists, shared a belief in the parliamentary system.

-since the 1960s, western countries have seen an ethnic revival (Basque movement, IRA, Quebec)

-though separation will continue to be resisted, Mayall contends that if the demands persist they might become easier to accommodate in industrial societies.

8) Why did territorial rather than ethnic nationalism triumph after 1945: collapse of the USSR?

(Hutchinson, Revision2)

  • You can't say that territorial nationalism simply won; you have both in the USSR, where ethnic nationalism was obvious. Was territory the core focus, or was it the nature of the people, so that you have to redraw territory around the people (cf. African boundaries)?
  • When the USSR broke up there were long-existing territorial boundaries, but these territorial republics were ethnically mixed. The argument was that the driver of anti-USSR sentiment was ethnic: ethnic antagonism was the explosive. Look at the Caucasus: Azerbaijan, Armenia, and Georgia.

 

  • The definition of triumph: you could argue that, yes, it emerges as a territorial bloc. You would need to know about the USSR or other cases; you don't need to know all of them, just two or three specific cases.

 

  • Why is the question being posed in these terms? After empire, what emerges in the world is a proliferation of nation-state formation. Given the proliferation of nation-states after 1945, when the question asks why, what factors does one look to? The role of empire leaves open the idea of whether these collapses occurred because of internal pressures. For the metropolitan state, you can argue that you have to look at the reasons why the empire produced movements of resistance. It is also possible that the empire collapsed from military exhaustion or military defeat when there were no nationalist movements around.

 

  • YOU Might want to think about the TIME: which era will YOUR ANSWER FOCUS ON PRIMARILY?

CONCLUSIONS:

(A) (B) (C)

(A)

  • Gellner's incisive Wrong Address Theory was not a prediction of the collapse of communism.
  • Nationalism wasn't exactly suppressed under Communism (drawing from Katherine Verdery); it was always there. In fact, communism nurtured national consciousness, exacerbated ethnic tensions, and sowed the seeds for strongly nationalist citizens to emerge in many places.
  • Communism attempted to blanket over nationalisms (the Sleeping Beauty/Prison House/Freezer effect of communism), but pragmatic communist political leadership required mobilization via the people's ethnicity. There WAS something there before communism arrived: nationalism was present and was reorganized.
  • The post-Communist states were territorially designed by the Soviet Union therefore the SU did have a significant influence on nationalism. BUT Tishkov overemphasizes the top down explanatory model of agents of control.
  • No Russian communist party, no Soviet passport. NOT all republics wanted secession; the Baltics were different. The collapse was somewhat of an accident, but the cause was not nationalism.
  • COMMUNISM is a powerful centralizing political doctrine that competed viably against nationalism but could not overcome the simple appeal of the doctrine of nationalism.
  • Fear of demographic Russian decline
  • Counter-factually, nationalist territories would have emerged regardless of communism in Czarist Russia, along bourgeois nationalist lines.
  • WHY was Yugoslavian ethnic violence so high after communism's failure? Because these were longstanding grievances, whereas the grievances of the Russian empire were centred against a highly dominant Russian ethnie.
  • Linguistic unification as a primary, but still not necessary, contingency

 

(B)

  1. Nationalism was given force by the way in which communism disintegrated. See Tishkov: elites within the republics had previously had power, but not privilege. By declaring themselves nationalist and playing the popular politics game, they could now have both.
  2. Nationalism was NOT the cause of the break-up of Communism, but secondary and symptomatic. Gorbachev's reforms, his acquiescence, the economic situation, and the spillover effects of other revolutions were all more important.
  3. More nationalism, including the exclusive kind, was a consequence of the break-up. Discrimination – e.g. against gypsies – arose as leaders and peoples sought to take advantage of the new order.
  4. Verdery argues that the nation was a default setting after communism. I agree with this point.
  5. Federal states do reify nationalist groups: they are subcontainers in the state that serve dual contradictory purposes: protecting/empowering cultural groups and managing cultural groups (superficial forms of nationalism). Meanwhile the dominant ethnie attempts to Russify the populations.
  6. The post-Communist states were territorially designed by the Soviet Union therefore the SU did have a significant influence on nationalism. BUT Tishkov overemphasizes the top down explanatory model of agents of control.
  7. Russians felt they were shouldering the burden of the EMPIRE.
  8. (F) CAUSALITY: the USSR forgets the war; the economy collapsed.

 

(C)

EU entrance: states needed to pretend to be civic nationalisms; Russian dominance of the USSR is very obvious.

Yugoslavia has a lack of resources -> too small to survive.  

  • Counter-factual of the USSR: why did it break down into nation-states? Why not into a multicultural/multinational power?
  • Your Thesis: primordial nationalism is only important when USSR prospects of collapse are high: the resources are utilized to reorganize political power when the USSR’s centre collapses.
  • International self-determination recognition used primordialist approach so as to avoid precedent setting for any political movement.
  • IT IS THE STRUCTURE at the CENTRE that reduced the will to hold power in the regions.
  • Answer MUST be conscious of TIMING: (PRE) = pre-collapse (POST) = post-collapse
  • Whether there was a nation narrative before or during the USSR, it only matters after! SO what? (A) doesn't matter until the collapse is imminent.
  • Nationalism was an outcome of the republican system of government: Breuilly.

 

The more things change the more they stay the same: POWER POLITICS

Russian Empire > Bolshevik Revolution >

[1] Martin, Terry. The Affirmative Action Empire: Nations and Nationalism in the Soviet Union, 1923-1939. London: Cornell University Press, 2001, p. 2.

[2] Suny, Ronald Grigor. The Revenge of the Past: Nationalism, Revolution, and the Collapse of the Soviet Union. Stanford: Stanford University Press, 1993, p. 259.

[3] Martin, Terry. The Affirmative Action Empire: Nations and Nationalism in the Soviet Union, 1923-1939 (2001).

[4] Gellner, Ernest. Nations and Nationalism. Oxford: Blackwell, 1983, p. 129.

[5] Although Gellner may not have believed in the inevitability of the collapse of the Soviet Union, his description of the relationship between communism and nationalism seems to have gained further confirmation from that collapse.

[6] Tishkov, p. 294.

[7] Tishkov, p. 31.

[8] Smith, Jeremy. Review of Ethnicity, Nationalism and Conflict in and after the Soviet Union: The Mind Aflame by Valery Tishkov. Europe-Asia Studies, Vol. 49, No. 8 (Dec., 1997), p. 1544.

[9] Tishkov, Valery. Ethnicity, Nationalism and Conflict in and after the Soviet Union: The Mind Aflame. London: SAGE, 1997, Chapter 2, p. 30.

[10] Ibid., p. 30.

[11] Ibid., p. 30.

[12] Motyl, Alexander J. Sovietology, Rationality, Nationality: Coming to Grips with Nationalism in the USSR (New York: Columbia University Press, 1990), p. 47.

[13] Tishkov, p. 30.

[14] If the categorical imperative imposes on us the duty to promote the highest good, then it is necessary to assume the existence of God, a perfect being, since an imperfect being could not be the source of the highest good. Therefore it is morally necessary to assume the existence of God. Kedourie, Nationalism, p. 17.

[15] In this work he sets out the conditions for a stable, peaceful international order. The civil constitution of every state should be republican, because for Kant a republican state was one where the laws could be the expression of the autonomous will of the citizens. Only in such a situation could peace be guaranteed.

American Express Case: The Story of AmEx Canada


Key Takeaways: Ivey MBA, Howard Grosfield CEO of Amex Canada Article

  • Total Service Experience: replace cards easily overnight in the event of a lost card.

Recognize me: be valuable; engaged employees = engaged customers. Empower me: to pay the balance in full! Enable me: leverage technology to integrate service provisions.

  • Luxury AMEX card: differentiated from the Diner's Club Card (Visa). AMEX has high fees; carrying a rolling debt balance is very bad.
  • AMEX is more expensive for merchants; however, AMEX has better, wealthier customers. The merchant network is weaker (merchants are charged a higher fee), but the customers are better.
  • Centurion Services: Centurion members have access to professional assistance every minute of the day. It's the Concierge: a dedicated team of highly skilled professionals. Centurion web: a privileges platform.
  • Product Expansion
  • Branding / Positioning
  • Strategic Diversification
  • Distribution / Co – Branding
  • Product Innovation

1850 – Founded as express courier service

1891 – Launched travellers cheques

Don’t leave home without them

Early 1900’s – opened offices in Europe

1957 to 1978 – green, gold, and platinum cards

1987 – launched Optima Card

1991 – Boston Fee Party

1999 – Exclusive arrangement with Costco

1999 – Launch of Centurion “Black Card”

Travellers Cheques Advertisements

1981 – acquired Shearson Loeb Rhoades

1984 – acquired Lehman Brothers Kuhn Loeb

1984 – acquired IDS

1988 – acquired EF Hutton

1991 – wrote off $300 million on Optima credit card launch

1992 – spun off First Data

1993 – spun off retail brokerage arm

1994 – spun off Lehman Brothers

2005 – spun off Ameriprise

Push to capitalize on brand / expand co-branded cards beyond Costco – Starwood, JetBlue, Delta

2008: GFC affected all credit card issuers, forced to tighten up credit – less impact on AMEX

2010: Paid $300 M for internet payments processor for consumers without bank accounts (Revolution Money became Serve)

2012: Launched Bluebird with Walmart – a prepaid card positioned as a lower-cost alternative to chequing accounts and debit cards

Cost pressures: Airline mergers forced Amex to open airport Amex lounges versus giving cardholder access to airline lounges

Attack on high end customers from Barclays and JP Morgan Chase – Chase now leads card penetration among $125K plus households

2015: Costco switched credit cards to VISA, 10% of Amex’s 112 million cards were Costco

Question about value of Amex brand – 23% of $1 trillion in spending from co-branded cards

– “Partner” vs “Vendor”

  • 19 card options
  • Card Type: Personal vs Small Business
  • Card Benefits: Rewards, Concierge, Cash Back
  • Loyalty Programs: AeroplanPlus, Air Miles, SPG, Membership Rewards
  • Card Attractions: No Fee, First Year Waived, Welcome Bonus
  • Blue Sky Credit Cards
  • Response time when applying?

Card options

Drop in new customers in 2014 from 150K to 80K – half via referrals, others via traditional methods

Tested pop-ups in shipping containers in shopping mall parking lots / in malls (taking up eight parking spots) – local area marketing to drive traffic

Signed up more customers in two months than best Scotia branch in a year

Key is credibility of Scotia and single minded focus on customer acquisition – salespeople only in the pop up branch, videoconference customer to an advisor if needed

Now have nine pop up branches that shift location every 60 to 90 days

Now back to 150K; at roughly 8K sign-ups per pop-up, that works out to about 160 weekly, or 25 daily

Building A Stronger Exercise Culture

Drivers Versus Conductors

About 33% of preventative healthcare is concerned with physical activity, with the other 66% being food consumption mixed with other choices and genetic predispositions. Workers in most jobs appear to spend most of their time sitting down. Back in the mid-20th century, doctors would say to patients "whatever you do, do not exercise!", so there was a lot of confusion about the health benefits. Then there were studies conducted in the 1950s, led by Jerry Morris (https://en.wikipedia.org/wiki/Jerry_Morris), showing that conductors on double-decker buses (who had to walk up and down stairs) had a better quality of life than the drivers. The conductors had to climb the stairs thousands of times to check tickets on the upper deck. Cardiovascular activity is critical: it turns out exercise is helpful for dealing with heart disease.

Olympics Versus Citizen Wide Exercise

Building a national exercise program is a wiser allocation of funding than building an Olympic stadium, according to Simon Kuper. I agree. I love the Olympics, but I love average life expectancy past 90 years old much more (for fellow citizens of my own country and beyond). We need local facilities, even if not necessarily commercially viable, OVER the Olympic facilities gained through winning a host city bid, which rarely get used post-games (i.e. take a look at London's 2012 Olympic stadiums). Spending $9 billion on the Olympics is country brand signalling, cool, but those benefits are notoriously difficult to quantify, in financial terms or otherwise. Expanding the local facilities infrastructure to be all-weather in northern countries like Canada, Sweden and the UK is a worthy endeavor. Exercise facilities at work should also be subsidized, potentially by government. Expanding exercise opportunities comes with risk of course; first, what if people don't show up to use these facilities? It's kind of crazy that no one has successfully proposed a tax deduction for gym memberships. Being afraid of tax scams is hardly the major concern. There are steps to drive traffic for sure, but the culture of sedentary life is ingrained, a slow-moving epidemic we will never "see". I'm not saying do something foolish like Tennis Canada's board member who advocated that tennis domes be built in every town under 5,000 people; leave the details to others at this point. But if the federal government were to intervene in any healthcare area (thinking in chunky terms and being blindly cavalier about revenue spending right now), why not look at preventative healthcare via an exercise mandate with teeth?

Civil society in Canada is very weak. On average, people don't even leave their house if they don't have to. The health benefits of exercise are massive, and of course exercise improves happiness; you would also see improvements in actual performance in global competition because you have a healthier population. The subset of people who actually participate in the Olympics is very, very small, and it is usually upper-middle-class to wealthy people. Making it more accessible to train and compete in exercise and sport will boost the quality of life across the income spectrum. Exercise has to happen in a physical space: investments are underway, but the next generation needs to be obsessive about social exercise.

Thoughts on Elizabeth Warren’s This Fight is Our Fight

Great Book, Inspiring Author

Elizabeth Warren highlights some incredibly important points relating to economic inequality, poverty and beyond. I really love Elizabeth Warren's passion against poverty, and this book centres around that topic extensively. Better than Nickel and Dimed, a 90s classic, This Fight is Our Fight is kind of an invitation to swim against mighty economic currents, or at least to think about the downsides of capitalism, and perhaps the immutable reality that success begets success and failure (without learning) = more failure. The core problem that I see is that poverty, in its most extreme form, is life-limiting. While this topic is wildly more complicated than language can convey, it is near impossible to disagree with the idea that when equality of opportunity is reduced, the GDP of the entire planet is reduced. Everyone should get a fighting chance at success, however they personally define it. Certainly, the story of Elizabeth Warren's upbringing should have readers draw the conclusion that poverty restricts opportunity. At the same time, she is a direct example of someone succeeding despite, and perhaps motivated by, that poverty. Anger, for lack of a better term, is good. And her voice is a powerful and credible one in a network of ideas that she hopes to coordinate for her run in 2020. Poverty, after all, sucks.

A Background in Financial Poverty, Fighting Spirit Is Inspiring For Everyone

Elizabeth grew up wearing plastic bags for shoes, living in a house where the carpeting was broken apart and worn out. Her experience of poverty is shocking and tough to read, and she discusses it in depth. Some readers will recognize that level of poverty; however, most people probably did not wear plastic bags for shoes growing up. No matter what your own background, it's a pretty remarkable level of poverty that Elizabeth endured. Warren was born Elizabeth Herring; her mother worked at Walmart, which, Warren is quick to point out, is now a huge conglomerate worth billions of dollars. Her mother was paid a really low amount there. Elizabeth Warren made the mistake, in her own summation, of marrying early instead of going to school. However, she was able to get the education (through night school) to validate her natural or environmentally induced intelligence, and by not turning to drugs, being curious, and moving to where the action is, she was able to kick ass. So in effect, Elizabeth was actually very wealthy in spirit, and through economic justice that error of birth was corrected (maybe an interesting way to think about it?). Poverty made Elizabeth a fighter. Who knew? The government needs to be there for the right person, at the right time, at the right place; and also, it's still the individual who has to get up in the morning; no one will do that for you. It would be cool if she addressed how important she herself was to herself, in coordination with the support of others.

Economic Injustice Or The Way Things Are Right Now, This View Needs Clarifying

Warren's mother working at Wal-Mart is an interesting story; everyone knows someone who has been impacted by Wal-Mart. Wal-Mart is an excellent scapegoat for folks who are negatively affected by its success. And it is indeed a pretty fascinating story in American business; Sam Walton's Made in America literally inspired Trump's Make America Great Again hats. Success for Wal-Mart has meant driving lower prices, greater economies of scale, and consolidating mom & pop operations throughout the US. That's a mixed outcome for sure.

On the one hand, you have to admit that $4.97 kitchen utensils are kind of amazing for customers, who get the most value out of Wal-Mart; not even the CEO gets as much value as the customers do (in aggregate). CEOs typically only capture about 1% of the total revenue that they orchestrate. Also, considering how executive compensation works, you'd be insane to work at Wal-Mart if a similar role elsewhere paid more for the same amount of work. On the other hand, the actual low-function roles of stocking shelves, directing customers to the checkout, etc., are not paid so well; it's a frustrating reality! Salary capping would likely lead to creative ways of rewarding C-suite executives, so the reality is that success rewards the successful over time. Would customers like to pay $6.97 for utensils, or $4.97? Salaries are expensive, and the value that Wal-Mart brings is mostly in low prices, democratizing the utensils! Everyone can afford them.

While reality is more complex, the fact that Warren's mother was poorly paid is a kind of injustice; being paid little to work while others sit planning operations, increasing shareholder value over employee value, does (on the surface) make for a very compelling story. Why should someone whose parents paid for school, no doubt, get paid more than someone who didn't get the education needed to progress? I guess being intellectually free means releasing your mind from ideology, which means you can consider the reality of scarcity in economic terms. Thinking freely, you then have to ask: what are the available and future solutions to low-wage employment? Did Elizabeth Warren's mom consider moving to another town to get a better job? Yes, but she made other choices. Schooling? Not something she pursued. So, if the solutions Warren proposes hinder innovation and economic development, then those solutions have hidden costs. Why should bureaucrats decide which businesses thrive and which die? Does it have to be that businesses are bad and workers are good? Sometimes, this book reads that way. A thriving middle class creates the customers that Wal-Mart needs; the job creators aren't just the entrepreneurs but the middle-class people themselves, so it's important that Sam Walton and others not forget who the real job creator is (alongside the entrepreneur). Honestly, I was a bit surprised that Elizabeth's solutions are lacking in depth, perhaps because lawyers aren't in the creative problem-solving business, OR more probably because she needs to stay strategically vague on policy so that she can campaign in 2020 without giving away her negotiating positions upfront, too early.

Side Note: I can still enjoy her message without agreeing with everything, right? And I should be allowed to point out that it's lacking in certain areas? Well, political parties do not allow you to criticize the boss. I, however, am intellectually free.

Emotional Power, Maybe a Bit Much, Though?

If you don't at least tear up reading this book, you have no soul. But if you don't start getting concerned about the repetitiveness of the stories of poverty, no matter how gripping, you have no clue. I mean, if you want to run for president, where is your foreign policy, your policy on NAFTA, etc.? And so we have to ask a critical question: is Elizabeth Warren being political in her storytelling? Of course; she's running in 2020. Does this book detail policy objectives? Heck no! Watching Warren grill a Wells Fargo CEO or a captain of industry is cathartic and entertaining, but perhaps a little bit overdramatic. Just a bit. This book is an extension of those Senate hearings that showcase Warren's demonization of the big bad corporate bureaucracy, and the complacency of the upper middle class when it comes to how (some) companies* create value.

*Credit card companies, banks etc. provide a service, and some executives do not handle that relationship well, according to Warren, but it definitely takes two to tango. It's complicated.

Bill Gates Joke and How Averages Are Deceiving

Warren has a very funny quote: what happens when Bill Gates walks into Moe's Tavern? Congrats, on average the patrons of that bar just got $51 billion richer! The truth about averages is that they are misleading and can mislead in negative ways that aren't anticipated. You can say that on average the American standard of living is getting better, but real wages have been static. The truth is that many people aren't getting the benefits, according to the data Warren is looking at from 2015 backwards. Warren's point is that people are hurting a lot more than the statistics can measure, which is why she leans on anecdotal stories. It does sometimes sound like envy, actually, but that's fine. What Warren is missing is the perspective of business and value creation. Of the guy who invented the latest product, she would likely ask why he can't share most of his wealth with his employees…the truth is that most of the wealth of a new widget goes to the customers through its usage. Founders usually capture only 1 or 2% of the wealth created from the idea. You could say, yes, well, that wealth was not created in a bubble. True, but that entrepreneur did create and then capture that value…without him or her, there is no value…See…it's complicated.
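The joke is easy to verify: the mean jumps by billions while the median barely moves. A quick sketch in Python (the patrons' net-worth figures are invented for illustration):

```python
# Illustrative numbers only: ten bar patrons with modest net worths,
# then Bill Gates (taken here, per the joke, at $51B) walks in.
from statistics import mean, median

patrons = [40_000, 55_000, 32_000, 61_000, 48_000,
           75_000, 52_000, 39_000, 44_000, 58_000]

before_mean, before_median = mean(patrons), median(patrons)

patrons_with_gates = patrons + [51_000_000_000]
after_mean, after_median = mean(patrons_with_gates), median(patrons_with_gates)

# The mean explodes into the billions; the median moves by $2,000.
print(f"mean   before: {before_mean:,.0f}  after: {after_mean:,.0f}")
print(f"median before: {before_median:,.0f}  after: {after_median:,.0f}")
```

This is exactly why "average" statistics about living standards can hide a static middle: one outlier drags the mean, while the median tracks the typical person.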

Warren on FDR: She Should Spend More Energy Talking About the Benefits of Business (Small and Corporate)

Elizabeth Warren looks at Franklin Delano Roosevelt and sees an amazing president, elected to four terms. He was a leader who thought in terms of benefiting both finance and the broader society, and he is the model for Elizabeth Warren. I think that makes a lot of sense. However, it was an exceptional time in American history. In talking about FDR, she strongly implies that the economic prosperity that followed the New Deal can be attributed largely to Keynesian government spending, which is hard to know for sure. Why? Because there were many variables at play in the 1940s and 50s that led to American economic leadership on the global stage. In reality, it is more complicated than words can describe. There were entrepreneurs and American industry at work during the FDR era, which Warren appears to massively downplay. The Ford Motor Company was a huge, literal, engine of growth, for example. Also, think about the war and the industrial build-up.

Entrepreneurship Happens When People Are Motivated (Through Poverty and/or Opportunity), Warren Needs to Fix That Claim

Warren is partisan in the sense that she assumes, for example, that entrepreneurship happens when the economy is good. I think that is partly true, but entrepreneurship also increases when people are unemployed, and more people are unemployed when the economy is in the downward portion of the business cycle. New businesses appear more frequently when there aren't easy jobs with great pay to be had. When the going gets tough, the tough create businesses because they can't find an employer. The opposite is also possible: when there is money to be made, people switch to their own business ideas. But on balance, it is more likely that entrepreneurship happens when a person can't find a job; an immigrant who can't get a job, for example, is a budding entrepreneur, because desperation (within reason) is a great motivator. Warren is an academic, and the challenge with academics is that they aren't directly in touch with the world around them; they are more susceptible to confirmation bias, even though they are the most analytical people. And believe me, there's a big difference between intelligence and analysis. Complicating this further is the fact that Warren is building her 2020 campaign with this book, so she can't honestly be more balanced: she is trying to build a campaign around scapegoating the economic winners in America.

Deprioritizing Economics in Warren’s Political Preferences, It Should Be Addressed More Seriously

Warren channels these compelling emotional stories of poverty to support a politically based argument. She doesn't necessarily have solid solutions other than increasing regulation; it's a tightrope, because she wants to be president, so she spends most of the book on how abusive some businesses have been. Focusing on the abuses is cathartic and convenient, as she doesn't want to say what she would do as president just yet. Politics is about pulling people behind your bandwagon; it's persuasion, and finding enemies that we can all smack down together. If you de-prioritize how economics works (or doesn't work), then you will find Elizabeth Warren's arguments a breath of fresh air, unmitigated by economic reality. And of course, reality can change over time. However, finance and accounting are, for the most part, honest reflections of reality. Artificially manipulating industry usually makes the economy less efficient, unfortunately, increasing the cost of goods and services. At least that's what has happened in the past. The data can be manipulated to show the opposite, but generally we know that people are motivated by incentives that benefit them as individuals; sad but true. The problem is that Warren is not an economist; she's a commercial law professor, and that shows a little when she extensively highlights the stories of poverty with almost no mention of the fact that most people have to make a living without government support, in the private sector. There are excesses and transgressions, but most businesses are at war with their competition; it's about the bottom line. While I have witnessed financial poverty as well as poverty of mindset, poverty has to be fought with precision, not inexact redistribution of wealth.
When and where the government should show up to help people in need is probably the biggest challenge for people who will live through the 21st century. Warren is pretty simplistic or unsophisticated about the solutions needed to get the right services to the right people at the right time, or at least intentionally vague because she's running in 2020; she's negotiating with voters.

Warren Strongly Suggests Economic Inequality is a Zero-Sum Game, She Needs to Revise That View

In order for business to succeed, the poor have to fail, according to Warren's more aggressive passages. In reality, the disagreement is over the degree to which banks should be restricted in their practice of predatory lending. Banks still have a critical duty of resource allocation in the economy. It is a very nuanced and complex issue, one which really requires policy tests, A/B-testing regulation for banks; think like a scientist. Warren is, in effect, disagreeing with what I think is an empirical reality: the proper allocation of banking resources, with accountability for those who are mathematically inclined to advance their own interests while also advancing the community's interests, is not a zero-sum game. Unfortunately, Elizabeth Warren seems to think it is a zero-sum game, or has indications that that view plays well with her support base. If Apple creates another iPhone, does that wealth get distributed to the customers who use the product as well as to employees? I think so. Even if production is overseas? Yes.

Credit Card Companies, An Easy Scapegoat for Elizabeth Warren, This View Needs More Nuance

Credit card companies are providing a service but are misleading customers, according to Warren. She points this out because so much profit was being generated from credit card policies, policies that were distinctly not focused on making customers aware. She points out that it was almost impossible not to join that chorus of business people making so much money off of customers. You could not feasibly be an executive in a credit card company and argue against misleading customers about interest rates (the annual percentage rate), because you would be undermining your own ability to accrue revenue from customers. I would say, though, that it's hard to argue customers are completely oblivious to debt.

Warren Misses A Better Solution for Credit Card Problems, FacePalm!

But an obvious solution that Warren completely misses is requiring students in high schools throughout the US to learn about the time value of money. Individuals should be better informed as part of the solution, rather than simply increasing restrictions on what credit card companies say and do. The fact that students aren't learning how finance and accounting work, and are then allowed to hold credit cards, start businesses or work in government, is a bit baffling. Granted, teachers themselves might have difficulty teaching these concepts, since they are focused on calculus for the 35% of students who pursue science, technology, engineering and math. But how much more important are finance and accounting than calculus? Way more. Warren knows this, but this book is not about solutions, I suspect, as I've mentioned above…
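To illustrate what a time-value-of-money lesson would cover, here is a minimal sketch of how an unpaid credit card balance compounds. The balance and APR are assumed, illustrative numbers, not figures from the book:

```python
# A minimal sketch of the time value of money on a credit card balance.
# Assumptions: a $1,000 balance at 20% APR, compounded monthly,
# with no payments made during the year.
balance = 1_000.00
apr = 0.20                       # annual percentage rate (assumed)
monthly_rate = apr / 12

for month in range(12):
    balance *= 1 + monthly_rate  # interest compounds each month

# After a year the balance is roughly $1,219 -- the hidden "price" of the loan.
print(f"balance after 12 months: ${balance:,.2f}")
```

Ten minutes of this in a classroom would do more for consumer protection than most disclosure rules.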

British Petroleum, Another Easy Scapegoat for Elizabeth Warren, This View Is A Bit Biased

Elizabeth Warren makes an excellent point when it comes to the British Petroleum catastrophe of 2010 in the Gulf of Mexico. In that case, BP faced a raft of fines from the federal government. What's interesting about those fines ($7 billion in total) is that BP was able to expense them; in other words, the fines were tax-deductible against its total profit for the year. Remember that a tax deduction reduces the pool of money the government can tax: if I have $1,000 of profit and then buy a $200 car for my business, I can expense it, so that I have only $800 of profit for the government to tax at the 25% rate. That means I pay $800 x 25% = $200 rather than $1,000 x 25% = $250. So, in essence, BP had a huge tax-deductible amount in 2010. And BP is very powerful; they have connections in Washington and London.
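The deduction arithmetic from the paragraph above can be checked directly:

```python
# The worked example from the text: a $200 business expense against
# $1,000 of profit, taxed at a 25% rate.
profit = 1_000
expense = 200
tax_rate = 0.25

tax_without_deduction = profit * tax_rate             # tax on full profit
tax_with_deduction = (profit - expense) * tax_rate    # tax after expensing

savings = tax_without_deduction - tax_with_deduction  # value of the deduction
print(tax_without_deduction, tax_with_deduction, savings)  # 250.0 200.0 50.0
```

Scale the same logic up to a $7 billion fine and the tax saving is enormous, which is Warren's point.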

Money in Politics, Always A Bad Thing for Elizabeth Warren, That View Needs More Nuance

The ideal would be for the best ideas to win regardless of where they come from. However, Warren is saying that lobbyists in the House of Representatives and the Senate are gaining undue traction and shaping public policy with their commercial interests at the centre of decision-making. That seems likely, but she basically thinks that all lobbyists are a bad thing. Or at least her persuasion tactic is to convince her voting base that all lobbyists are evil. She is downplaying the benefit of having lobbyists explain the details and nuances of technical policy to decision-makers in order to reach the optimal decision for the economy. Of course, Warren might say the economy is rigged, so that's a complication. Lobbyists restrict the number of doctors in the market, thus increasing doctors' salaries, for example. We have to ask whether the actual structure of lobbying is the problem, and she is incorrectly attributing all lobbying as bad, or whether she believes the self-interest of a single organization is a problem even though an organization's self-interest will often benefit the broader economy as well as the organization itself. The problem, in her view, is that the lobbyists have run amok, and that might very well be the case; however, we can't generalize all lobbyists as bad and all consumers as good. Also, what's the solution?

This book is a fundraising vehicle for Elizabeth Warren, for sure!

Unions At the Negotiation Table, Sounds Good, Might Have Complications

Warren argues that union leadership should be at the table (at the Board of Directors level) alongside the corporate executives: say, 25% corporate representation versus some 75% union and community leadership representation. The reason she argues this is to ensure the interests of American workers are protected, so that even as wages push the cost of production up, corporations can't do anything about it. For example, the corporation would not be able to move manufacturing to places like Mexico at $0.75 USD per hour. Trump's position on manufacturing certainly overlaps with Warren's here, so that might be a nullified issue if she is the Democratic nominee.

Other Interesting Ideas from This Fight Is Our Fight

  • Walmart is being subsidized by taxpayers because employees collect food stamps.
  • Warren advocates boycotting companies like Nabisco that move their production to Mexico. (My thoughts: consumer coordination is pretty difficult in practice, i.e. the prisoner's dilemma.)
  • Astro turf campaigns versus grass roots campaigns….an Astro turf campaign is when a politician is backed by big donors to basically do whatever they want versus grass roots campaigns that raise funds from many small donors; (My thoughts: clearly financing campaigns is pretty daunting in the US…should be a way to fund the best ideas, not the best politicians).
  • The Brookings Institution is, in her telling, a bad-actor think tank.
  • “Corporate, corporations”; these are almost dirty words for Warren. Lady Justice can be bought by big business. (My thoughts: She's too perfect an academic to make the mistakes that sometimes happen in business, sometimes irresponsibly, but more often because mistakes are part of innovation. It is indeed heart-wrenching, and easy to point fingers from the sidelines, when things go terribly wrong, for example the GlaxoSmithKline heart-attack deaths due to Avandia.)

Warren’s Priority List (Probably):

  • Poverty should be avoidable through government support (my thoughts: hard to disagree, but how, to what degree, how precise? Do businesses help reduce poverty at all?);
  • Prices are something that should be artificially adjusted by governments to help the poorest people (my thoughts: there are really bad hidden costs to restricting businesses like fewer jobs, less dynamic economy, less creativity / innovation, human nature is not as malleable as Warren wants it to be, how do you curb the excesses of capitalism without punishing good businesses as well?);
  • Education is good but there also has to be stable jobs for people who are risk averse (my thoughts: yes, some people cannot survive in a competitive business world, so giving them easy jobs and good pay is a kind of social service, who pays for that though? Through tax revenue, there must be a better way, test out solutions!);
  • Businesses are self-interested and do not care much about their customers (my thoughts: I don’t think that’s really true, it only looks like that when a customer gets a raw deal, it’s easy to point out horror stories because they are memorable and heart wrenching; it doesn’t mean they are the reality for most people);
  • Hidden costs of higher taxes aren't as important as helping the poor directly (my thoughts: it's the job of the children of baby boomers to solve this problem; it's complicated and involves a more scientific way to deliver public services);
  • Great economic growth should be sacrificed because the benefits to the poorest are more important than rewarding those who struggled and then successfully created new businesses and new economic activity (my thoughts: hard to agree; if we focused all resources on the poorest people, we would be under-serving the people who create the economic activity that generates the tax revenue needed in the first place. It's complicated!).

 

Final Grade

A-

Universal Basic Income and the Policy Experiments of the Future

What if you were given a stipend from the government so you could live comfortably and chase that dream of becoming an ice sculptor or writing the next best seller? Would you sit toiling away at your desk? Or would you just watch Jeopardy? Is it possible that different people react differently to the same opportunity? Introducing the universal basic income experiment.

Thomas Paine was the first person to propose the concept in the modern-ish era. Ever since Paine argued that free citizens should have the “power to say no” to bad job opportunities, other academics and policy makers have floated a basic income. Typically, the trigger for advocating a universal basic income (UBI) is an economic downturn or a perceived adverse pattern relating to human productivity (perceived based on predictions about Artificial Intelligence, which in reality are hard to map against the economic benefits of the increased productivity AI is likely to create; predicting the future is kind of difficult). Curiously, there have been advocates for a UBI on both the political left and the right. The latest threat to human labour has been Artificial Intelligence and/or automation. Meanwhile, Thomas Friedman suggests that “[AI]’s going to be okay” in his latest book, “Thank You for Being Late“.

Background from the 20th Century

In the early 1970s, the Nixon administration looked into UBI: a $1,000 grant as the means by which citizens could help themselves. 8,500 Americans were tested under the program, and analysts began picking over the results. The results were mixed: it appeared that many were simply enjoying the income, and increased separation and divorce rates were a by-product, so the program was shut down. The initial plan had been to expand to two US states: start small, expand slowly, let the experiment play out.

The US also has a corporate benefits-package version of the idea: happiness and well-being equal increased productivity. Figuring this out in the corporate world has been an ongoing discussion.

Design Policies the Way You Design Services & Objects

Basic Income is something that has been tested in Nordic welfare states, too.

DemosHelsinki is an organization that asks the critical question: should we employ design thinking in government? First you have a challenge; then you develop ideas. Test: try those ideas, get feedback, then cycle back and revise the design in real time. Legislation by Design means designing policies through the design-thinking process. Finland increasingly wants to prototype laws that are dynamically derived. You need to make sure your laws treat people equally, since the people in the experiments are variables; according to the DemosHelsinki team, a special law is needed to accommodate experimentation within Finnish law. The welfare office runs the Basic Income experiment. Google “KELA social insurance” to learn more.

How To Figure Out if UBI is Workable In Any Case: Test Run this Policy!

Select 2,000 people, not students, and pay them 560 euros a month. Participants did not volunteer; give half these folks a Basic Income and A/B test like an advertiser would. Within 2 years the experiment is complete. In this model, participants can take a part-time job without losing the basic income. There is a profile of about 175,000 people against which to compare the two groups. How do these people behave? There are also a few UBI tests in Ontario, an ambitious plan to see if this policy could have legs generally…. Any partisan who has a problem with the scientific method is probably not qualified to serve; let's see the results, people!
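The A/B logic behind such an experiment can be sketched as a simple difference in means. All the figures below are invented for illustration; a real evaluation would add significance tests and control for covariates:

```python
# Toy sketch of the A/B comparison behind a basic-income trial:
# average months worked over the trial period, treatment vs control.
# All numbers are made up for illustration.
from statistics import mean

treatment = [6, 8, 5, 7, 9, 4, 6, 7]   # months worked, basic-income group (assumed)
control   = [5, 7, 4, 6, 8, 3, 5, 6]   # months worked, control group (assumed)

effect = mean(treatment) - mean(control)
print(f"estimated effect: {effect:+.2f} months worked")
```

The whole point of running the policy as an experiment is that `effect` becomes an empirical estimate rather than a partisan talking point.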

Counter-Arguments Against and For Universal Basic Income

A lot of people in Finland think it is not a good idea. The problem is the social security system: not everyone wants a flat income. What if you have children with special needs? The basic amount isn't enough. And of course, the larger challenge is: how much would it cost in terms of redistributing government revenue? It's a systemic shift; it would change the economy. It might save money, however…meanwhile, the upfront cost is expensive to fund, and politicians are replaced regularly, so there are those factors…further research required. (Further Research Required = FR-squared!)

  • Would a Basic Income improve general productivity?
  • It might disincentivize people from working hard. Is pain a motivator for innovation?
  • It might lead people to take on low-value jobs to cover the remaining gap?
  • The next global best seller might come out of the participating group…?
  • How does UBI affect the relationship between T and G, where T = tax revenue and G = government spending?
  • Who would realistically pay for UBI? Corporations? Governments?
  • What are the least obvious consequences of implementing a $20K UBI in Canada and the US, Europe, the UK? i.e. Would there be more X and less Y?
  • Why might UBI be appealing to both right- and left-wing advocates?
  • Is UBI more or less feasible in Kenya or other developing markets?
  • What are three possible contingencies relating to unemployment rate and productivity if UBI were implemented in Western countries?

The Scientific Method in Political Science

These notes are a combination of notes from Matt A and Estelle H. Enjoy.

Topic One: What is the scientific method?

  • Overview
  • Science as a body of knowledge versus science as a method of obtaining knowledge
  • The defining characteristics of the scientific method
  • The scientific method and common sense

The nature of scientific knowledge claims

Four Characteristics of the Scientific Method:

What are the hallmarks of the scientific method?

  • Empiricism: requires that every knowledge claim be based upon systematic observation; conclusions are verified by testing them against our experience.
  • Intersubjectivity: requires that knowledge claims be transmissible and replicable, so that other researchers can verify them.
  • Explanation: the goal of the scientific method. Generalized understanding by discovering patterns of relationships among phenomena; how variations are related.
  • Determinism: a working assumption of the scientific method. The assumption that behaviour has causes, with recurring regularities and patterns (causal influence). We must recognize that this assumption is not always warranted.

Assumptions:

Our senses (what we can actually see, touch, hear…) can give us the most accurate and reliable information about what is happening around us. Info gained through senses is the best way to guard against subjective bias, distortion.

Obtaining information systematically through our senses helps to guard against bias.

What is ‘Intersubjectivity’ and why is it so important?

Empiricism is no guarantee of objectivity.

It is safer to work on the assumption that complete objectivity is impossible: because we are humans studying human behaviour, our values may influence research.

Intersubjectivity provides the essential safeguard against bias by requiring that our knowledge claims be:

  • Transmissible
  • The steps followed to arrive at our conclusions must be spelled out in sufficient detail that another researcher could repeat our research. Public, detailed
  • Replicable
  • If that researcher does repeat our research, she will come up with similar results.

In practice, research is rarely duplicated: funding, professional incentives (tenure, difficult to publish)

Transmissibility and replicability enable others to evaluate our research and to determine whether our value commitments and preconceptions have affected our conclusions.

Explanation

The goal of the scientific method is explanation. A political phenomenon is explained by showing how it is related to something else.

If we wanted to explain why some regimes are less stable than others, we might relate variation in political instability to variation in economic circumstances: 

  • The higher the rate of inflation, the greater the political instability.

If we wanted to explain why some citizens are more involved in politics than others, we might relate variation in political involvement to variation in citizens’ material circumstances:

  • The more affluent citizens are, the more politically involved they will be.

Empirical research involves a search for recurring patterns in the way that phenomena are related to one another.

The aim is to generalize beyond a particular act or time or place—to see the particular as an example of some more general tendency.

Determinism

The search for these recurring regularities necessarily entails the assumption of determinism i.e. the assumption that there are recurring regularities in political behaviour.

Determinism is only an assumption. It cannot be ‘proved’.

The assumption of determinism is valid to the extent that research proceeding from this assumption produces knowledge claims that withstand rigorous empirical testing.

The scientific method versus common sense

In a sense, the scientific method is simply a more sophisticated version of the way we go about making sense of the world around us (systematic, conscious, planned, deliberate)

—except:

  • In everyday life, we often observe inaccurately—BUT users of the scientific method make systematic observations and establish criteria of relevance in advance.
  • We sometimes jump to conclusions on the basis of a handful of observations—BUT users of the scientific method avoid over-generalizing (premature generalization) by committing themselves in advance to a certain number of observations.
  • Once we’ve reached a conclusion, we tend to overlook contradictory evidence—BUT users of the scientific method avoid such selective observation by testing for plausible alternative interpretations. Commit themselves in advance to do so.
  • When confronted with contradictory evidence, we tend to explain it away by making some additional assumptions—so do users of the scientific method BUT they make further observations in order to test the revised explanation. Can modify theory, provided new observations are gathered for the modified hypothesis.

 

The nature of scientific knowledge claims

Knowledge claims based on the scientific method are never regarded as ‘true’ or ‘proven’, no matter how many times they have been tested.

To be considered ‘scientific’, a knowledge claim must be testable—and if it is testable, it must always be considered potentially falsifiable.

We can never test all the possible empirical implications of our knowledge claims. It is always possible that one day another researcher will turn up disconfirming evidence.

Topic 2: Concept Formation

Overview:

  • Role of Concepts in the Scientific Method
  • What are Concepts?
  • Nominal vs. Operational Definitions
  • Four Requirements of a Nominal Definition
  • Classification, Comparison and Quantification

Criteria for Evaluating Concepts

 

Role of concepts in the scientific method

Concept formation is the first step toward treating phenomena, not as unique and specific, but as instances of a more general class of phenomena. Starting point of scientific study. To describe it, create a concept.

-w/out concepts, no amount of description will lead to explanation

-seeing specific as an instance of something more general

 

Concepts serve two key functions:

  • tools for data-gathering (‘data containers’): concept is basically a descriptive word. Refers to something that is observable (directly or indirectly). Can specify attributes that indicate the presence of a concept like power.
  • essential building-blocks of theories: a set of interrelated propositions. Propositions tie concepts together by showing how they’re related.

 

What are Concepts? (Part 1)

  • A concept is a universal descriptive word that refers directly or indirectly to something that is observable. (Descriptive words can be universal or particular; we're interested in universal words that refer to classes of phenomena.) Empirical research is concerned with the particular and specific, but only as they are seen as examples of something else.

 

  • Universal versus particular descriptive words:
  • Universal descriptive words refer to a class of phenomena.
  • Particular descriptive words refer to a particular instance of that class. Collection of particulars (data) tells us nothing unless we have a way of sorting it.
  • Conceptualization enables us to see the particular as an example of something more general.
  • Conceptualization involves a process of generalization and abstraction. It is a creative act. Often begins with perception that seemingly disparate phenomena have something in common.

-involves replacing proper names (people, places) with concepts. Can then draw on a broader array of existing theory, research that would be more interesting.

  • Generalization—in classifying phenomena according to the properties that they have in common, we are necessarily ignoring those properties that are not shared. If there are too many exceptions, look for similarities among the exceptions that might reveal a problem with the theory.

-form concept -> generalize. But generalizing means losing detail. Tradeoff btwn generality & how many exceptions can be tolerated before theory is invalidated.

 

  • Abstraction—a concept is an abstraction that represents a class of phenomena by labeling them. Concepts do not actually exist—they are simply labels.

-abstract concepts grasp a generic similarity (like trees)

-a concept allows us to delineate aspects that are relevant to our research. A concept is an abstraction that represents a certain phenomenon: implies that concepts do not exist, and are only labels that we attach to the phenomenon. Are defined, given meaning.

-definition starts with a word (democracy, political culture)

 

Real definitions: don’t enter directly into empirical research

 

Nominal vs. Operational Definitions

  • Every concept must be given both a nominal definition and an operational definition.

 

  • A nominal definition describes the properties of the phenomenon that the concept is supposed to represent. Literally “names,” attributes

 

  • An operational definition identifies the specific indicators that will be used to represent the concept empirically. Indicate the extent of the presence of the concept. Literally spells out procedures/operations you have to perform to represent the concept empirically.

*When reading research, look to see how concepts are represented, look for flaws.

  • The nominal definition provides a basic standard against which to judge the operational definition—do the chosen indicators really correspond to the target concept?

 

  • A nominal definition is neither true nor false (though it may be more or less useful).

-very little agreement in poli sci on meaning & measurement. No need to define concept like age, but necessary for racism.

 

Four requirements of a nominal definition:

 

  1. Clarity—concepts must be clearly defined, otherwise intersubjectivity will be compromised. Explicit definition.
  2. Precision—concepts must be defined precisely—if concepts are to serve as ‘data containers’, it must be clear what is to be included (and what can be excluded). Nothing vague: the definition should denote the distinctive characteristics/properties of what is being defined. Provides criteria of relevance when it comes to setting up the operational definition.
  3. Non-circular—a definition should not be circular or tautologous e.g. defining ‘dependency’ as ‘a lack of autonomy’.

 

  4. Positive—the definition should state what properties the concept represents, not what properties it lacks (any concept will lack many properties besides the ones mentioned in the definition).

 

Classification, Comparison and Quantification

Concepts are used to describe political phenomena.

Concepts can provide a basis for:

 

  • Classification—sorting political phenomena into classes or categories, e.g. types of regimes. Classification is at the heart of all science.

-1. Exhaustive: every member of the population must fit into a category.

-2. Mutually exclusive: any case should fit into one category and one only.

Concepts can provide a basis for:

  • Comparison—ordering phenomena according to whether they represent more—or less—of the property e.g. political stability. How much.
  • Quantification—measuring how much of the property is present e.g. turnout to vote. Allows us to compare and to say how much more or less. Anything that can be counted allows for a quantitative concept. (few interesting quantitative concepts in empirical research)
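The two rules of classification above (exhaustive, mutually exclusive) can be checked mechanically. A minimal sketch in Python, using a hypothetical regime typology; the cut-points and scores below are invented for illustration, not a real coding scheme:

```python
# Hypothetical regime typology: an if/elif/else chain makes the scheme
# exhaustive (every score gets a category) and mutually exclusive
# (every score gets exactly one category).

def classify_regime(score):
    """Sort a case into exactly one regime category by a numeric score."""
    if score >= 6:
        return "democracy"
    elif score >= -5:
        return "anocracy"
    else:
        return "autocracy"

cases = [-9, -3, 0, 5, 7, 10]                # invented polity-style scores
labels = [classify_regime(c) for c in cases]

# Exhaustive: no case fell through without a category.
assert all(label is not None for label in labels)
print(labels)
```

Because the branches cover the whole number line and are tried in order, no case can be left unclassified or doubly classified, which is exactly what the two rules demand.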

 

Criteria for evaluating concepts:

How? Criteria correspond to functions (data containers and building blocks)

1 Empirical Import—it must be possible to link concepts to observable properties (otherwise concepts cannot serve as ‘data containers’). However, concepts do not all need a directly observable counterpart.

 

Concepts can be linked to observables in 3 ways:

 

  • directly—if the concept has a directly observable counterpart e.g. the Australian ballot. Directly observable concepts are rare in political science.
  • indirectly via an operational definition—we cannot observe ‘power’ directly, but we can observe behaviours that indicate the exercise of power. Infer presence from things that are observable (power, ideology)
  • via their relationship within a theory to concepts that are directly or indirectly observable, e.g. marginal utility. Such ‘theoretical concepts’ are rare in political science.

Such concepts gain empirical import because of their relation to other parts of the theory.

2 Systematic (or theoretical) Import

—it must be possible to relate concepts to other concepts (otherwise concepts cannot serve as the ‘building blocks’ of theories).

Goal is explanation. Want to construct concepts while thinking of how they might be related to other concepts.

Topic Three—Theories

 

  • Overview
  • What is a theory?
  • Inductive versus deductive model of theory-building
  • Five criteria for evaluating competing theories
  • Three functions of theories

 

What is a theory?

Goal = explanation: generalize beyond the particular and see it as part of a pattern, treating the particular as an example of something more general.

-explanation has two steps. Step 1, form concepts: identify a property that is shared in common. Step 2, form theories: tie concepts together by stating relationships between them.

  • Normative theory versus empirical theory
  • Theories tie concepts together by stating relationships between them. These statements are called ‘propositions’ if they have been derived deductively and ‘empirical generalizations’ if they have been arrived at inductively.
  • A theory consists of a set of propositions (or empirical generalizations) that are all logically related to one another. Explain something by showing how it is related to something else.
  • A theory explains political phenomena by showing that they are logically implied by the propositions (or empirical generalizations) that constitute the theory. A theory takes a common set of occurrences and tries to define a pattern; once the pattern is identified, different occurrences can be treated as repeated occurrences of the same pattern. This simplifies.

-there is a tradeoff between how far we simplify and how useful the theory remains.

-skeptical mindset, try to falsify theories.

 

Inductive versus deductive model of theory-building

Inductive model—starts with a set of observations and searches for recurring regularities in the way that phenomena are related to one another.

Deductive model—starts with a set of axioms and uses logic to derive propositions about how and why phenomena are related to one another.

 

Deductive theory-building

Deductive theory-building is a process of moving from abstract statements about general relationships to concrete statements about specific behaviours.

-theory comes first: data enters the process at the end. Develop the theory first, then collect data.

-begins with a set of axioms, want them to be defensible.

-from axioms, reason through a set of propositions all logically implied by the same set of assumptions

-a proposition asserts a relationship between two concepts

-a theory helps us to understand phenomena by showing that they are logically implied. It tells us how phenomena are related and that they are actually related.

-problem: logic is not enough -> need empirical verification.

-theories provide a logical base for expectations, predictions

-design research, choose tools, collect data. See if predictions hold. If so, theory somewhat validated.

-expectations stated in the form of hypotheses (as many as possible)

-a hypothesis states a relationship between variables

-variable is an empirical counterpart of a concept, closer to the world of observation, specific.

-any one test is likely to be flawed.

-deductive theory-building is more efficient, asking less of the data.

 

Inductive Theory-Building

-data

-statistical analysis, try to discover patterns. Data first then use it to develop theory.

-begin with a set of observations, discern a pattern, and assume that this pattern will hold more generally

-relying implicitly on assumption of determinism

-end up with empirical generalization, which is a statement of relationship that has been established by repeated systematic observation

-e.g. a regime was destabilized when inflation increased. Collect data on other countries; if the pattern holds, you have an empirical generalization.

-inductive theory ties several empirical generalizations together

-no logical basis, therefore more vulnerable to a few disconfirming instances

-less efficient, more complicated questions

 

-what is the proper interplay between theory and research? In practice, it is a blend of induction and deduction.

Generalization: always test a theory using observations other than those used in creating it. If the data do not support the theory, you can go back and modify it, provided you then collect new data to test the modified theory.

 

Five criteria for evaluating competing theories

 

 –Simplicity (or parsimony) — a simple theory has a higher degree of falsifiability because there are fewer restrictions on the conditions under which it is expected to hold. Use as few explanatory factors as possible. Why? A more complex theory is less generalizable and harder to falsify.

 –Internal consistency (logical soundness) — it should not be possible to derive contradictory implications from the same theory.

 –Testability — we should be able to derive expectations about reality that are concrete and specific enough for us to be able to make observations and determine whether the expectations are supported. Allows us to derive expectations about which we can make observations and see if theory holds. Concrete and specific enough.

 –Predictive accuracy — the expectations derived from the theory should be confirmed. Never consider a theory to be true. Instead, is it useful? Does it have predictive accuracy?

 –Generality — the theory should allow us to explain a variety of political phenomena across time and space. Explains a wide variety of events/behaviours in a variety of different places. Holds as widely as possible.

 

Why is there inevitably tension among these five criteria?

-different criteria can come into conflict (more generality means less predictive accuracy, more predictive accuracy is less parsimonious)

-there is always going to be a tradeoff: the ability to explain specific cases trades off against the ability to explain generally (forests vs. individual trees).

-in practice, you are pragmatic. Do what makes theory more useful.

-very rare to meet all criteria in poli sci

 

Three functions of theories (2nd way to evaluate)

-how well they perform the functions they are meant to perform

Explanation — our theory should be able to explain political phenomena by showing how and why they are related to other phenomena. Part of some larger pattern, explain why phenomena that interest us vary.

 

Organization of knowledge — our theory should be able to explain phenomena that cannot be explained by existing generalizations and show that those generalizations are all logically implied by our theory. Explain things that other theories cannot. Should be possible to show that existing generalizations are related to theory/one another.

 

Derivation of new hypotheses (the ‘heuristic function’) — our theory should enable us to predict phenomena beyond those that motivated the creation of the theory.

Suggest new knowledge/generate new hypotheses. Abstract propositions should enable us to generate lots of interesting hypotheses (beyond those that motivate the study)

Topic 4: Hypotheses and Variables

OVERVIEW

  • What is a variable?
  • Variables versus concepts
  • What is a hypothesis?
  • Independent vs. dependent variables
  • Formulating hypotheses
  • Common errors in formulating hypotheses
  • Why are hypotheses so important?

 

 

What is a Variable?

 

  • Concepts are abstractions that represent empirical phenomena. In order to move from the conceptual-theoretical level to the empirical-observational level, we have to find variables that correspond to our abstract concepts. Concepts are highly abstract and need an empirical counterpart -> variables

-empirical research always functions at two levels: the conceptual/theoretical and the empirical/observational. The hardest part is moving from the first to the second, and we must minimize the loss of meaning.

  • A variable is a concept’s empirical counterpart.

 

  • Any property that varies (i.e. takes on different values) can potentially be a variable.

 

  • Variables are empirically observable properties that take on different values. Some variables have many possible values (e.g. income). Other variables have only two ‘values’ (e.g. sex).

-variables require more specificity than concepts. They enable us to take a statement with abstract concepts and translate it into a corresponding statement with precise empirical reference.

-one concept may be represented by several different variables. This is desirable.

 

Variables vs. Concepts

 

Variables require more specificity than concepts.

One concept may be represented by several different variables.

 

What is a Hypothesis?

In order to test our theories, we have to convert our propositions into hypotheses.

A hypothesis is a conjectural statement of the relationship between two variables.

A hypothesis is logically implied by a proposition. It is more specific than a proposition and has clearer implications for testing: it states what we expect to observe when we make properly organized observations. A hypothesis is always a declarative statement and always states a relationship between variables.

 

Independent vs. Dependent Variables

 

Variables are classified according to the role that they play in our hypotheses

 

The dependent variable is the phenomenon that we want to explain.

 

The independent variable is the factor that is presumed to explain the dependent variable. Explanatory factor that we believe will explain variation in DV.

 

The dependent variable is ‘dependent’ because its values depend on the values taken by the independent variable.

 

The independent variable is ‘independent’ because its values are independent of any other variable included in our hypothesis.

 

Another way to think of the distinction is in terms of the antecedent (i.e. the independent variable) and the consequent (i.e. the dependent variable).

 

We predict from the independent variable to the dependent variable.

-the same variable can be dependent in one theory and independent in another.

 

Formulating Hypotheses I

 

Hypotheses can be arrived at either inductively (by examining a set of data for patterns) or deductively (by reasoning logically from a proposition). Which method we use depends on whether we are conducting exploratory research or explanatory research.

 

Hypotheses arrived at inductively are less powerful because they do not provide a logical basis for the hypothesized relationship (post hoc rationalization is no substitute for a priori theorizing).

 

Hypotheses can be stated in a variety of ways provided that (1) they state a relationship between two variables (2) they specify how the variables are related and (3) they carry clear implications for testing.

 

Like the concepts they represent, variables can classify, compare or quantify. This affects the way the hypothesis will be stated.

 

Formulating Hypotheses II

 

-When both variables are comparative or quantitative, state how the values of the DV (dependent variable) change when the IV (independent variable) changes.

-When the IV is comparative or quantitative and the DV is categorical, state which category of the DV is most likely to occur when the IV changes.

-When the IV is categorical and the DV is comparative or quantitative, state which category of the IV will result in more of the DV.

-When both the IV and the DV are categorical, state which category of the DV is most likely to occur with which category of the IV.

Common Errors in Formulating Hypotheses

Canadians tend not to trust their government.

Error #1–The statement contains only one variable; to be a hypothesis, it must be related to another variable. It is also not a general statement.

To make this into a hypothesis, ask yourself whether you want to explain why some people are less trusting than others (DV) or whether you want to predict the consequences of lower trust (IV):

The younger voters are, the less likely they are to trust the government. (DV)

The less people trust the government (IV), the less likely they are to participate in politics.

 

 

Turnout to vote is related to age.

Error #2 The statement fails to specify how the two variables are related—are younger people more likely to vote or less likely to vote?

The older people are, the more likely they are to vote.

 

Public sector workers are more likely to vote for social democratic parties.

Error #3 The hypothesis is incompletely specified (we don’t know with whom public sector workers are being compared). When the IV is categorical, the reference categories must always be made explicit.

 

Public sector workers are more likely to vote for social democratic parties than for neo-conservative parties.

Error #4 The hypothesis is improperly specified. This is the most common error in stating hypotheses. The comparison must always be made in terms of categories of the IV, not the DV. This is very important for hypothesis testing.

The hypothesis should state:

Public sector workers are more likely to vote for social democratic parties than private sector workers or the self-employed.

 

 

The turnout to vote should be higher among young Canadians.

Error #5 This is simply a normative statement. Hypotheses must never contain words like ‘should’, ‘ought’ or ‘better than’ because value statements cannot be tested empirically.

This does not mean that empirical research is not concerned with value questions.

 

To turn a value question into a testable hypothesis, you could focus on factors that encourage a higher turnout or you could focus on the possible consequences of low turnout:

The higher the turnout to vote, the more responsive the government will be.

 

 

Mexico has a more stable government than Nicaragua.

Error #6 The hypothesis contains proper names. A statement that contains proper names (i.e. names of countries, names of political actors, names of political parties, etc.) cannot be a hypothesis because its scope is limited to the named entities.

To make this into a hypothesis, you must replace the proper names with a variable. Ask yourself: why does Mexico have a more stable government?

The higher the level of economic development, the more stable a government will be.

 

 

The more politically involved people are, the more likely they are to participate in politics.

Error #7 The hypothesis is true by definition because the two variables are simply different names for the same property (i.e. it is a tautology)

Decide whether you want to explain variations in political participation (DV) or to predict the consequences of variations in political participation (IV).

The more involved people are in voluntary organizations, the more likely they are to participate in politics.

 

*This shows the importance of the nominal definition: the hypothesis could be non-circular if ‘politically involved’ meant emotional involvement and behavioural expectations rather than participation itself.

 

Why are Hypotheses so Important?

 

  • Hypotheses provide the indispensable bridge between theory and observation by incorporating the theory in near-testable form.

 

  • Hypotheses are essentially predictions of the form, if A, then B, that we set up to test the relationship between A and B.

 

  • Hypotheses enable us to derive specific empirical expectations (‘working hypotheses’) that can be tested against reality. Because they are logically implied by a proposition, they enable us to assess whether the proposition holds.

 

  • Hypotheses direct investigation. Without hypotheses, we would not know what to observe. To be useful, observations must count for or against some point of view.
  • Hypotheses provide an a priori rationale for relationships. If we have hypothesized that A and B are related, we can have much more confidence in the observed relationship than if we had just happened upon it.
  • Hypotheses may be affected by the researcher’s own values and predispositions, but they can be tested, and confirmed or disconfirmed, independently of any normative concerns that may have motivated them.
  • Even when hypotheses are disconfirmed, they are useful since they may suggest more fruitful lines for future inquiry—and without hypotheses, we cannot tell positive from negative evidence.

 

-successful hypothesis: do variables covary?

Test for other variables that might eliminate the relationship: control variables. Think about controls at the data-collection stage.

Topic 5: Control Variables

OVERVIEW

 

  • What are control variables?

 

  • Sources of spuriousness

 

  • Intervening variables
  • Conditional variables

 

What are control variables?

 

Testing a hypothesis involves showing that the IV and the DV vary together (‘covary’) in a consistent, patterned way e.g. showing that people who have higher levels of education do tend to have higher levels of political interest.

 

It is never enough to demonstrate an empirical association between the IV and the DV. We must always go on to look at other variables that might plausibly alter or even eliminate the observed relationship.

 

Control variables are variables whose effects are held constant (literally, ‘controlled for’) while we examine the relationship between the IV and the DV.

 

Sources of Spuriousness 

The mere fact that two variables are empirically associated does not mean that there is necessarily any causal connection between them

Think: pollution and literacy rates, number of firefighters and amount of fire damage, migration of storks and the birth rate in Sweden…

These are all (silly!) examples of spurious relationships. In each case, the observed relationship can be explained by the fact that the variables share a common cause

 

A source of spuriousness variable is a variable that causes both the IV and the DV. Remove the common cause and the observed relationship between the IV and the DV will weaken or disappear. If you overlook a source of spuriousness, you risk your research being completely wrong.

 

To identify a potential (SS) source of spuriousness, ask yourself (1) whether there is any variable that might be a cause of both the IV and the DV and (2) whether that variable acts directly on the DV as well as on the IV.

 

If the variable only acts directly on the IV, it is not a potential source of spuriousness. It is simply an antecedent. An antecedent is not a control variable.

 

Sources of Spuriousness II


SS → IV → DV

  • Example: The higher people’s income, the greater their interest in politics.
  • BUT it could be spurious: education could be a source of spuriousness:

Income → Interest in Politics

Education (source of spuriousness, causing both)

Education → Support for Feminism

Generation (source of spuriousness, causing both)

  • Some variables won’t have a spurious independent variable behind them, e.g. ethnicity, religion.

 

 

Intervening Variables I

 

Once we have eliminated potential sources of spuriousness, we must test for plausible intervening variables

 

Intervening variables are variables that mediate the relationship between the IV and the DV. An intervening variable provides an explanation of why the IV affects the DV

 

The intervening variable corresponds to the assumed causal mechanism. The DV is related to the IV because the IV affects the intervening variable and the intervening variable, in turn, affects the DV.

IV → intervening variable → DV

To identify plausible intervening variables, ask yourself why you think the IV would have a causal impact on the DV.

-can be more than one potential rationale. Intervening variable validates causal thinking.

 

Intervening Variables II:

  • Examples:
  • Women are more likely than men to favour an increase in social spending.
  • GENDER → RELIANCE ON THE WELFARE STATE → FAVOUR INCREASE IN SOCIAL SPENDING
  • The lower people’s income, the more politically alienated they will be.
  • PERSONAL INCOME → PERCEPTION OF SYSTEM RESPONSIVENESS → POLITICAL ALIENATION

 

Conditional variables I.

-the trickiest and most common type of control variable. What will happen to the relationship between the IV and the DV?

Once we have eliminated plausible sources of spuriousness and verified the assumed causal mechanism, we need to specify the conditions under which the hypothesized relationship holds.

 

Ideally, we want there to be as few conditions as possible because the aim is to come up with a generalization.

 

Conditional variables are variables that literally condition the relationship between the IV and the DV by affecting:

(1) the strength of the relationship between the IV and the DV (i.e. how well do values of the IV predict values of the DV?) and

(2) the form of the relationship between the IV and the DV (i.e. which values of the DV tend to be associated with which values of the IV?)

-the focus is always on the conditional variable’s effect on the hypothesized relationship between the IV and the DV, within every category of the conditional variable (e.g. if the conditional variable is religion, the categories might be Christian, Muslim, Atheist; or important, somewhat important, not important).

 

To identify plausible (CV) conditional variables, ask yourself whether there are some sorts of people who are likely to take a particular value on the DV regardless of their value on the IV.

Note: the focus is always on how the hypothesized relationship is affected by different values of the conditional variable.

 

There are basically three types of variables that typically condition relationships:

(1) variables that specify the relationship in terms of interest, knowledge or concern. Example:

Catholics are more likely to oppose abortion than Protestants.

If CV = attends church, then: religious affiliation → support for abortion.

If CV = does not attend church, then: religious affiliation does not → support for abortion.

(2) variables that specify the relationship in terms of place or time (where are they from?). Example:

The higher people’s incomes, the more likely they are to participate in politics

If CV = non-rural resident, then income → political participation

If CV = rural resident, then income does not → political participation

(3) variables that specify the relationship in terms of social background characteristics.

Examples (Social Background Characteristics):

The more religious people are, the more likely they are to oppose abortion.

If CV = male, then religiosity → views on abortion

If CV = female, then religiosity does not → views on abortion

 

Stages in Data Analysis:

Test hypothesis → test for spuriousness → if non-spurious, test for intervening variables → test for conditional variables.

 

Topic 6: Research Problems and the Research Process

OVERVIEW

 

  • What is a research problem?
  • Maximizing generality
  • Why is generality important?
  • Overview of the research process
  • Stages in data analysis

 

 

What is a research problem?

 

A properly formulated research problem should take the form of a question: how is concept A related to concept B?

Examples:

 

How is income inequality related to regime type?

 

How is moral traditionalism related to gender?

 

How is civic engagement related to social networks?

Maximizing Generality

 

Aim for an abstract and comprehensive formulation rather than a narrow and specific one.

 

Example: you want to explain support for the Parti Québécois.

A possible formulation of the research problem:

 

How is concern for the future of the French language related to support for the PQ?

A better formulation of the research problem:

 

How is cultural insecurity related to support for nationalist movements?

Why is Generality Important?

 

  • Goal of the empirical method is to come up with a generalization.

 

  • Greater contribution because findings will have implications beyond the particular puzzle that motivated the research.

 

  • Access to a more diverse theoretical and empirical literature in developing a tentative answer to the research question.

 

The Research Process

Find a puzzle or anomaly → formulate the research problem (how is A related to B?) → develop a hypothesis explaining how and why A and B are related → identify plausible sources of spuriousness, intervening variables and conditional variables → choose indicators to represent the IV, DV and control variables (‘operationalization’) → collect and analyze the data.

 

Stages in Data Analysis:

Test hypothesis → test for spuriousness → if non-spurious, test for intervening variables → test for conditional variables.

Topic 7: From concepts to indicators

Overview:

 

  • What is ‘operationalization’?

 

  • What are indicators?

 

  • Converting a proposition into a testable form

 

  • Key properties of an operational definition

 

An example: operationalizing ‘socio-economic status’

What is Operationalization?

 

Operationalization is the process of selecting observable phenomena to represent abstract concepts.

When we operationalize a concept we literally specify the operations that have to be performed in order to establish which category of the concept is present (classificatory concepts) or the extent to which the concept is present (comparative or quantitative concepts).

 

The end product of this process is the specification of a set of indicators.

What are indicators?

Indicators are observable properties that indicate which category of the concept is present or the extent to which the concept is present.

 

In order to test our theory, we examine whether our indicators are related in the way that our theory would predict.

 

The predicted relationship is stated in the form of a working hypothesis.

 

The working hypothesis is logically implied by one of the propositions that make up our theory. Because it is logically implied by the proposition, evidence about the validity of the working hypothesis can be taken as evidence about the validity of the proposition.

 

Converting a Proposition into a Testable Form I

Concept -> proposition -> concept

Variable -> hypothesis -> variable

Indicator -> working hypothesis -> indicator

 

 

 

Converting a Proposition into a Testable Form II

 

Just as it is possible to represent one concept by several different variables, so it is possible—and desirable—to represent one variable by several different indicators.

Concept → variables (2 or more) → indicators (2 or more each).

 

Key Properties of an Operational Definition

 

The operational definition specifies the indicators by setting out the procedures that have to be followed in order to represent the concept empirically.

 

A properly framed operational definition:

-adds precision to concepts

-makes propositions publicly testable

 

This ensures that our knowledge claims are transmissible and makes replication possible.

An Example: Operationalizing ‘Socio-Economic Status’

 

The first step in representing a concept empirically is to provide a nominal definition that sets out clearly and precisely what you mean by your concept:

 

Socio-Economic Status: ‘a person’s relative location in a hierarchy of material advantage’.

Socio-economic status:

  1. Income: earnings from employment, annual household income
  2. Wealth: value of assets, home ownership
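A minimal sketch of the operations such an operational definition might specify, combining the income and wealth indicators into one score. The caps and equal weighting below are hypothetical choices for illustration, not a standard SES scale:

```python
def ses_score(household_income, owns_home, asset_value):
    """Combine income and wealth indicators into one 0-1 SES score."""
    income_part = min(household_income / 100_000, 1.0)       # annual household income
    wealth_part = (min(asset_value / 500_000, 1.0)           # value of assets
                   + (1.0 if owns_home else 0.0)) / 2        # home ownership
    return round((income_part + wealth_part) / 2, 2)

# Two hypothetical respondents: a higher score means a higher relative
# location in the hierarchy of material advantage.
print(ses_score(80_000, True, 500_000))
print(ses_score(40_000, False, 0))
```

Writing the procedure down like this is what makes the definition ‘operational’: anyone can perform the same operations on the same data and obtain the same score, which is what makes knowledge claims transmissible and replication possible.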

Topic Eight: Questionnaire Design and Interviewing

Overview:

 

-The function of a questionnaire

-The importance of pilot work and pre-testing

-Open-ended versus close-ended questions

-Advantages and disadvantages of close-ended questions

-Advantages and disadvantages of open-ended questions

-Ordering the questions

-Common errors in question wording

-A checklist for identifying problems in the pre-test

 

It is important to know what makes good survey research.

-a survey is simply a formal way of asking people questions: attitudes, beliefs, background, opinions

-it follows a highly standardized, structured, thought-out sequence

 

The Function of a Questionnaire

-The function of a questionnaire is to enable us to represent our variables empirically.

-Respondents’ coded responses to our questions serve as our indicators.

-The first step in designing a questionnaire is to identify all of the variables that we want to represent (i.e. independent variables, dependent variables, control variables).

Do not pose hypothesis directly. One question cannot operationalize two variables.

-We must always keep in mind why we are asking a given question and what we propose to do with the answers.

-A question should never pose a hypothesis directly. We test our hypotheses by examining whether people’s answers to different questions go together in the way that our hypotheses predicted.

 

The Importance of Pilot Work

Second step: pilot work

Careful pilot work is essential in designing a good questionnaire. Background work to prepare surveys.

 

Pilot work can involve:

-lengthy unstructured interviews with people typical of those we want to study

-talks with key informants

-reading widely about the topic in newspapers, magazines and on-line in order to get a sense of the range of opinion.

 

The Importance of Pre-testing

Third step: draft a questionnaire

Fourth step: pretest questionnaire

Once a questionnaire has been drafted, it should be pre-tested using respondents who are as similar as possible to those we plan to survey

-ideally, people you test are typical of group you want to represent.

-purposive/judgmental sampling: use knowledge of the population to choose subjects

-pretest very important & often humbling

Pre-testing can help with:

  • identifying flawed questions
  • improving question wording
  • ordering questions
  • determining the length of time it takes to answer the questionnaire or interview the respondents
  • assessing whether responses are affected by characteristics of the interviewer
  • improving the wording of the survey introduction (who I am, what I’m doing, why I’m doing it; it should not reveal the hypotheses)

 

 

 

 

Open-Ended versus Close-Ended Questions

 

Surveys typically include a small number of open-ended questions and a larger number of close-ended questions.

In open-ended questions, only the wording of the question is fixed. The respondent is free to answer in his or her own words. The interviewer must record the answer word-for-word, w/out abbreviations.

 

In close-ended questions, the wording of both the question and the possible response categories is fixed. The respondent selects one answer from a list of pre-specified alternatives. (don’t read out “other”, but should be present in case they say something else)

 

Advantages of Close-Ended Questions

  • help to ensure comparability among respondents
  • ensure that responses are relevant. Allows comparison
  • leave little to the discretion of the interviewer. Respondent has control over classification of their answer.
  • take relatively little interviewing time: quick to ask & answer
  • easy to code, process, and analyze the responses
  • give respondents a useful checklist of possibilities
  • help people who are not very articulate to express an opinion

 

Disadvantages of Close-Ended Questions

  • may prompt people to answer even though they do not have an opinion (preferable not to offer “no opinion” but to have it on the questionnaire; there is a difference between “don’t know” and “no answer”)
  • may channel people’s thinking, producing responses that do not really reflect their opinion and thus bias the results
  • may overlook some important possible responses
  • may result in a loss of rapport with respondents: throw in an open-ended question to engage people
  • may lead to misunderstanding (if using terms that could be difficult, provide definitions for interviewers; don’t ad-lib)

The responses to close-ended questions must always be interpreted in light of the pre-set alternatives that were offered to respondents.

 

Advantages and Disadvantages of Open-Ended Questions

 

Advantages

Open-ended questions avoid the disadvantages of close-ended questions. They can also provide rich contextual material, often of an unexpected nature. (quotes can make report more interesting).

-avoid putting ideas in people’s heads

-can engage people

 

Disadvantages

Open-ended questions are easy to ask—but they are difficult to answer and still more difficult to analyze. Open-ended questions:

  • take up more interviewing time and impose a heavier burden on the interviewer
  • increase the possibility of interviewer bias if the interviewer ends up paraphrasing the responses
  • require more processing
  • increase the possibility of researcher bias since the responses have to be coded into categories for the purpose of analysis (must reduce to a set of numbers. Introduce risk of bias. Getting others to code for intersubjectivity is time consuming and expensive.)
  • the classification of responses may misrepresent the respondent’s opinion. Respondents have no control over how their responses are used.
  • transmissibility and hence replicability may be compromised by the coding operation
  • respondents may give answers that are irrelevant. Solution: use open-ended in pilot study, then create close ended with answers. Some amount of info lost, less likely to overlook important alternative.

 

-close-ended response categories must be mutually exclusive and exhaustive (cover every possibility).

-avoid multiple answers (which is closest, comes closest to point of view)

-can have open & close-ended versions of same question, spread out in survey. Always open first.

 

Ordering the Questions

 

Question sequence is just as important as question wording. The order in which questions are asked can affect the responses that are given:

  • make sure that open-ended and close-ended versions of the same question are widely separated and that the open-ended version is asked first. (sufficiently separated)
  • if two questions are asked about the same topic, make sure that the first question asked will not colour responses to the subsequent question. Change order or separate questions.
  • avoid posing sensitive questions too early in the questionnaire.
  • begin with non-threatening questions that engage the respondent’s interest and seem related to the stated purpose of the survey. Help create rapport.
  • ensure some variety in the format of the questions in order to hold the respondent’s attention.

-when reading over the questionnaire, try to think how you would react. It should not be intimidating and shouldn’t seem like a test

-have you unwittingly made your own views obvious and favoured a particular position?

-worded in a friendly, conversational way. Should seem natural.

-writing questions is likened to catching a particularly elusive fish.

-making assumptions that everyone understands the question the same way. The way you intended, assuming people have necessary information. Make questions unambiguous. Problem: people will express non-attitudes.

-if problems writing questions, often b/c not completely clear on topic concept. Importance of nominal definition.

 

Common Errors in Question Wording

‘Do you agree or disagree with the supposition that continued constitutional uncertainty will be detrimental to the Quebec economy?’

Error #1: the question uses language that may be unfamiliar to many respondents. The wording should be geared to the expected level of sophistication of the respondents.

‘Please tell me whether you strongly agree, somewhat agree, somewhat disagree or strongly disagree with the following statements:

People like me have no say in what the government does

 

The government doesn’t care what people like me think’

Error #2: the wording of the statements is vague (the federal government? the provincial government? the municipal government?) Questions must always be worded as clearly as possible. (time, place, lvl of govt)

 

‘It doesn’t matter which party is in power, there isn’t much governments can do these days about basic problems’

Error #3: this is a double-barreled question. A respondent could agree with one part of the question and disagree with the other.

 

‘In federal politics, do you usually think of yourself as being on the left, on the right, or in the center?’

Error #4: this question assumes that the respondent understands the terminology of left and right.

 

‘Would you favor or oppose extending the North American Free Trade Agreement to include other countries?”’

Error #5: this question assumes that respondents are competent to answer. It also doesn’t say which other countries. Solution: a filter question: ‘Do you happen to know what NAFTA is?’ People will want to answer even if they don’t know what it is (e.g. fictitious topics). Lack of information.

 

‘Should welfare benefits be based on any relationship of economic dependency where people are living together, such as elderly siblings living together or a parent and adult child living together or should welfare benefits only be available to those who are single or married and/or have children under the age of 18 years?’

Error #6 this question is too wordy. In a self-administered survey, a question should contain no more than 20 words. In a face-to-face or telephone survey, it must be possible to ask the question comfortably in a single breath.

 

‘Do you agree that gay marriages should be legally recognized in Canada?’

Error #7: this is a leading question that encourages respondents to agree. The problem could be avoided by adding ‘or disagree’. Especially important to avoid with sensitive topics.

 

‘Canada has an obligation to see that its less fortunate citizens are given a decent standard of living’.

Error #8: this question is leading because it uses emotionally-laden language e.g. ‘less fortunate’, ‘decent’. Can also be leading by identifying with prestigious person or institution like Supreme Court, or w/someone who is disliked.

 

How often have you read about politics in the newspaper during the last week?

Error #9: this question is susceptible to social desirability bias because it seems to assume that the respondent has read the newspaper at least once during the previous week. People answer through filter of what makes them look good. “Have you had time to read the newspaper in the last week?”

 

-don’t abbreviate

-no more than 1 question per line

-open-ended must have space to write

-clear instructions

-informed consent

-privacy/confidentiality

A Checklist for Identifying Problems in the Pre-Test

  • Did close-ended questions elicit a range of opinion or did most respondents choose the same response category?
  • Do the responses tell you what you need to know?
  • Did most respondents choose ‘agree’ (the question was too bland -> should protect nature) or did most respondents choose ‘disagree’ (the question was too strongly worded -> abortion is murder)?
  • Did respondents have problems understanding a question? Were there a lot of don’t knows? (if they don’t get it, ask it again and move on)
  • Did several respondents refuse to answer the same question?
  • Did open-ended questions elicit too many irrelevant answers? (can you code responses)
  • Did open-ended questions produce yes/no or very brief responses? Add a probe. (best probe is silence, pen poised to record)

 

Topic 9: Content Analysis

Overview:

 

What is content analysis?

What can we analyze?

What questions can we answer?

Selecting the communications

Substantive content analysis

Substantive content analysis: coding manifest content

Substantive content analysis: coding latent content

Structural content analysis

Strengths of content analysis

Weaknesses of content analysis

 

What is content analysis?

-involves the analysis of any form of communication

-communications form the basis for drawing inferences about causal relations

-Content analysis is ‘any technique for making inferences by systematically and objectively identifying specified characteristics of communications’. (Holsti)

-Systematically means that content is included or excluded according to consistently applied criteria.

-Objectively requires that the identification be based on explicit rules. The categories used for coding content must be defined clearly enough and precisely enough that another researcher could apply them to the same content and obtain the same results

(transmissibility+replicability=intersubjectivity).

 

What can we analyze?

 

Content analysis can be performed on virtually any form of communication (books, magazines, poems, songs, speeches, diplomatic exchanges, videos, paintings…) provided:

  • there is a physical record of the communication.
  • the researcher can obtain access to that record

A content analysis can focus on one or more of the following questions: ‘who says what, to whom, why, how, and with what effect?’ (Lasswell)

-who/why: inferences about sender of the communication, causes or antecedents. Why does it take the form that it does?

-with what effect: inferences about effects on person(s) who receives it

What questions can we answer?

Content analysis can be used to:

  • test hypotheses about the characteristics or attributes of the communications themselves (what? how?)
  • make inferences about the communicator and/or the causes or antecedents of the communication (who? why?)
  • make inferences about the effect of the communication on the recipient(s) (with what effect?)

 

Rules of Content analysis

i. specify rules for selecting the communications that will be analyzed

ii. specify the characteristics you will analyze (what aspects of content)

iii. formulate rules for identifying those characteristics when they appear

iv. apply the coding scheme to the selected communications

 

Selecting the communications

 

The first step is to define the universe of communications to be analyzed by defining criteria for inclusion.

 

Typical criteria include:

  • the type of communication
  • the location, frequency, minimum size or length of the communication
  • the distribution of the communication
  • the time period
  • the parties to the communication (if communication is two-way or multi-way)

 

If too many communications meet the specified criteria, a sampling plan must be specified in order to make a representative selection.

-if study is comparative, must choose comparable communications. Control in content analysis is the way communications are chosen (as similar as possible except one thing).

 

Type of Analysis (substantive vs structural)

Substantive content analysis

-In a substantive content analysis, the focus is on the substantive content of the communication—what has been said or written.

-A substantive content analysis is essentially a coding operation.

-The researcher codes—or classifies—the content of the selected communications according to a pre-defined conceptual framework

Examples:

  • coding newspaper editorials according to their ideological leaning
  • coding campaign coverage according to whether it deals with matters of style or substance

 

Substantive Content Analysis: Coding Manifest Content

-A substantive content analysis can involve coding manifest content and/or latent content

-Coding manifest content means coding the visible surface content i.e. the objectively identifiable characteristics of the communication

-list of words/phrases that are empirical counterparts to your concept (the hard part!)

-important to relate counts to some base -> a longer communication is more likely to use particular words

-Example: choosing certain words or phrases as indicators of the values of key concepts and then simply counting how often those words or phrases occur within each communication.

Advantages:

  1. Ease
  2. Replicability
  3. Reliability (consistency)

Intersubjectivity?

Disadvantages

  1. meaning depends on context
  2. loss of nuance and subtlety of meaning

-possible that word is being used in an unexpected way (irony, sarcasm)

-validity: are we really measuring what we think we’re measuring?
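Counting indicator words and relating them to a base can be sketched in a few lines. The coding scheme, indicator words, and sample text below are hypothetical illustrations, not a standard:

```python
import re

def manifest_code(text, scheme):
    """Count occurrences of each concept's indicator words,
    normalized by the total word count of the communication."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words)
    return {concept: sum(words.count(w) for w in indicators) / total
            for concept, indicators in scheme.items()}

# hypothetical coding scheme: words chosen as empirical counterparts of each concept
scheme = {"conflict": ["battle", "fight", "attack"],
          "cooperation": ["agreement", "partnership", "consensus"]}
editorial = "The attack drew a sharp fight, but an agreement ended the battle."
print(manifest_code(editorial, scheme))
```

Dividing by the total word count implements the point above: a longer communication is more likely to use any particular word, so raw counts must be related to a base.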

 

Substantive Content Analysis: Coding Latent Content

Coding latent content involves coding the underlying meaning. (tone of media, etc)

 

Example:

  • reading an entire newspaper editorial and making a judgment as to its overall ideological leaning
  • reading an entire newspaper story and making a judgment as to whether the person covered is presented in a positive, negative, or neutral light

Advantages

(1) less loss of meaning and thus higher validity.

Disadvantages

(1) requires the researcher to make judgments and infer meaning, thus increasing risk of bias.

(2) lower reliability -> differences in judgment

(3) lower transmissibility and hence replicability -> cannot communicate to a reader exactly how the judgment was made

-researcher is making judgments about meaning, which may be influenced by own values

Solution: take 1 hypothesis & test it different ways. More compelling, more experience w/ pros and cons of content analysis. Test hypothesis as many ways as possible.

-strive for high intercoder reliability (two people code the same content independently; aim for at least 90% agreement)

-use all 3 methods

 

Structural Content Analysis

 

A structural content analysis focuses on physical measurement of content (time, space).

 

Examples:

  • how much space does a newspaper accord a given issue (number of columns, number of paragraphs, etc.)?
  • how much prominence does a newspaper accord a given issue (size of headline, placement in the newspaper, presence of a photograph, etc.)?
  • how many minutes does a news broadcast give to stories about each political party?
  • Column inches, seconds of airtime, order of stories, pages, paragraphs, size of headline, photograph= measures of prominence

 

Measurements of space and time must always be related to the total size/length of the communication

-standardize: relative to size w/same paper, not compare headline size in 2 papers

Advantages

  1. reliability
  2. replicability -easy to explain methods

Disadvantages

  1. loss of nuance & subtlety of meaning

-less valid: can you really represent subtle nuanced ideas by counting/measuring?
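The standardization step can be made concrete: relate each measurement to the total size of the same communication. The airtime figures below are hypothetical:

```python
# seconds of airtime per party in one news broadcast (hypothetical figures)
airtime = {"Party A": 180, "Party B": 120, "Party C": 60}

total = sum(airtime.values())
shares = {party: seconds / total for party, seconds in airtime.items()}
print(shares)  # each share is relative to this broadcast, not compared across broadcasts
```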

 

Strengths of Content Analysis

-economy

-generalizability (external validity). Representative, more confidence.

-safety: the risk of missing something, running out of time, etc. does not exist here. You can always recode.

-ability to study historical events or political actors: asking people means you get the answers they think now, not what they thought then

-ability to study inaccessible political actors (e.g. Supreme Court justices)

-unobtrusive (non-reactive)

-reliability: highly reliable way of doing research, consistent results (structural, manifest)

-few ethical dilemmas. Communications already been produced, won’t harm or embarrass people.

 

Weaknesses of content analysis

-requires a physical record of communication

-need access to communications

-loss of meaning (low validity): are we measuring what we think we’re measuring?

-risky to infer motivations—political actors do not necessarily mean what they write or say. (Take into account purpose of communication if asking why)

-laborious and tedious

-subjective bias -> important elements of subjectivity (latent analysis: making judgements, inferences about meaning)

-> no one best way of doing content analysis. Do all 3.

 

 

Major Coding Categories

-warfare: a battle royal, political equivalent of heat seeking missiles, fighting a war on several fronts, a night of political skirmishes, took a torpedo in the boilers, master of the blindside attack

-general violence: a good old-fashioned free-for-all, one hell of a fight, assailants in the alley

-sports and games: contestants squared off, left on the mat, knockout blow

-theatre and showbiz: a dress rehearsal, got equal billing, put their figures in the spotlight

-natural phenomena: nothing earth-shattering, an avalanche of opinion

-other

 

Coding Statements

-descriptive: present the who, what, where, when, without any meaningful qualification or elaboration

-analytical: draw inferences or reach conclusions (typically about the causes of the behaviour or event) based on facts not directly observed

-evaluative: make judgments about how well the person being reported on performed

 

Topic 10: Measurement

Overview:

 

What is measurement?

 

Rules and levels of measurement

 

Nominal-level measurement

 

Ordinal-level measurement

 

Interval-level measurement

 

Ratio-level measurement

 

What is Measurement?

-foundation of statistics

Measurement is the process of assigning numerals to observations according to rules.

 

These numerals are referred to as the values of the variable we are measuring (not numbers but numerals: simply symbols or labels, whereas numbers have quantitative meaning).

 

Measurement can be qualitative or quantitative.

 

If we want to measure something, we have to make up a set of rules that specify how the numerals are to be assigned to our observations.

 

 

 

 

Rules and Levels of Measurement

 

-The rules determine the level, or quality, of measurement achieved. <- most important part of definition.

-The level of measurement determines what kinds of statistical tests can be performed on the resulting data.

-The level of measurement that can be achieved depends on:

  • the nature of the property being measured
  • the choice of data collection procedures

-The general rule is to aim for the highest possible level of measurement because higher levels of measurement enable us to perform more powerful and more varied tests.

-The rules can provide a basis for classifying, ordering or quantifying our observations.

-no hierarchical order, can substitute any numeral for any other numeral. All they indicate is that the categories are different.

 

4 Levels: NOIR

Nominal-level measurement

Ordinal-level measurement

Interval-level measurement

Ratio-level measurement

 

Nominal-level measurement

-Nominal-level measurement represents the lowest level of measurement, most primitive, least information

-Nominal measurement involves classifying a variable into two or more (predefined) categories and then sorting our observations into the appropriate category.

-The numerals simply serve to label the categories. They have no quantitative meaning. Words or symbols could perform the same function. There is no hierarchy among the categories and the categories cannot be related to one another numerically. The categories are interchangeable.

-classify

-Rule: do not assign the same numeral to different categories or different numerals to the same category. The categories must be exhaustive and mutually exclusive.

Ex) sex, religion, ethnic origin, language

 

Ordinal-Level Measurement

-Ordinal-level measurement involves classifying a variable into a set of ordered categories and then sorting our observations into the appropriate category according to whether they have more or less of the property being measured. Allows ordering and classifying. Notion of hierarchy.

-The categories stand in a hierarchical relationship to one another and the numerals serve to indicate the order of the categories. Numerals stand for relative amount of the property.

-classify, order

-more useful, direction of relation btwn variables

-With ordinal-level measurement, we can say only that one observation has more of the property than another. We cannot say how much more.

Ex) social class, strength of party loyalty, interest in politics

 

Interval-Level Measurement

-Interval-level measurement involves classifying a variable into a set of ordered categories that have an equal interval (fixed and known interval) between them and then sorting our observations into the appropriate category according to how much of the property they possess.

-There is a fixed and known interval (or distance) between each category and the numerals have quantitative meaning. They indicate how much of the property each observation has (actual amount).

-Classify, order, meaningful distances.

-With interval-level measurement, we can say not only that one observation has more of the property than another, we can also say how much more.

-BUT we cannot say that one observation has twice as much of the property as another observation. Zero is arbitrary.

Ex) Celsius and Fahrenheit scales of temperature

 

Ratio-Level Measurement (highest)

-The only difference between ratio-level measurement and interval-level measurement is the presence of a non-arbitrary zero point.

-A non-arbitrary zero point means that zero indicates the absence of the property being measured.

-Now we can say that one observation has twice as much of the property as another observation.

-Any property that can be represented by counting can be measured at the ratio level.

-classify, order, meaningful distance, non-arbitrary zero

Ex) income, years of schooling, gross national product, number of alliances, turnout to vote

 

-in poli sci, few things are above the ordinal level. Stretches credulity to believe that we could come up with equal units of collectivism or alienation.

-anything that can be measured at a higher lvl can be measured at a lower lvl

-always try to achieve highest lvl of measurement. Constrained by technique used to collect data.

Topic 11: Statistics: Describing Variables

Overview:

 

Descriptive versus inferential statistics

Univariate, bivariate and multivariate statistics

Univariate descriptive statistics

Describing a distribution

Measuring central tendency

Measuring dispersion

 

Descriptive versus Inferential Statistics

 

Descriptive statistics are used to describe characteristics of a population or a sample.

 

Inferential statistics are used to generalize from a sample to the population from which the sample was drawn. They are called ‘inferential’ because they involve using a sample to make inferences about the population.

 

Univariate, Bivariate and Multivariate Statistics

 

Univariate statistics are used when we want to describe (descriptive) or make inferences about (inferential) the values of a single variable.

 

Bivariate statistics are used when we want to describe (descriptive) or make inferences about (inferential) the relationship between the values of two variables.

 

Multivariate statistics are used when we want to describe (descriptive) or make inferences about (inferential) the relationship among the values of three or more variables.

-can all be descriptive or inferential

 

Univariate Descriptive Statistics

 

Data analysis begins by describing three characteristics of each variable under study:

  • the distribution : how many cases take each value?
  • the central tendency: which is the most typical value? best represents a typical case
  • the dispersion: how much do values vary? how spread out are cases across the possible categories? If there is much dispersion, measure of central tendency may be misleading.

 

-frequency value tells us how many cases take each of the possible values. Records the frequency with which each possible value occurs.

 

Describing a Distribution I

 

Knowing how the observations are distributed across the various possible values of the variable is important because many statistical procedures make assumptions about the distribution. If those assumptions are not met, the procedure is not appropriate.

 

A frequency distribution is simply a list of the number of observations in each category of the variable. It is called a frequency distribution because it displays the frequency with which each possible value occurs.


 

Describing a distribution:

Raw frequencies (how many cases took each of the different possible values)

-title should be informative, telling us the variable for which data are being presented; it should not interpret the table

-source: name source

-footnote

-totals are difficult to compare, translate into %

-gives a relative idea of what to expect in the rest of the population

-gives a consistent base to make comparisons

-never report % w/out also reporting total # of cases in survey. Makes data meaningful.

-no % with fewer than 20 cases: present the raw frequency

-if data come from a sample, round off percentages to the nearest whole number, should assume that there is error.

-round up for .6-.9, round down for .1-.4; with .5, round to the nearest even number.

-99, 100, and 101% are acceptable totals. Can add note saying that numbers may not add up to 100.

-present in form of graph or chart. Contains exact same info, but easier to visualize. More interpretable, more appealing. Pie-chart, line graph.

-tricks: truncated scale to make things look better/worse. Always check the scaling.

-need to check the distribution to make sure that it’s appropriate to use a particular statistic
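A frequency distribution with percentages can be produced as follows; the responses are invented for illustration. Note that Python's built-in `round` already rounds .5 to the nearest even number, matching the rounding rule above:

```python
from collections import Counter

# hypothetical responses to one close-ended question
responses = ["agree"] * 45 + ["disagree"] * 30 + ["neutral"] * 25

freq = Counter(responses)
n = len(responses)  # always report the total number of cases alongside the %
percentages = {category: round(100 * count / n) for category, count in freq.items()}
print(f"N = {n}:", percentages)
```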

 

Interval/ratio: not simply numerals, but numbers w/quantitative meanings. Can’t use bar or pie chart. To present distribution, must collapse lvls of variables into small groups.

-guidelines:

  1. at least 6, but no more than 20 intervals. Too few intervals loses too much information about the distribution; more than 20 defeats the purpose of creating class intervals and makes the data less readily accessible.
  2. intervals must all have the same width, encompassing the same number of values, to be comparable (can have a larger open-ended category at the end)
  3. intervals should not be too wide: we want to be able to consider every case within a given interval similar, so it makes sense to treat cases within the interval as the same
  4. intervals must be exhaustive and mutually exclusive.
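Collapsing interval-level values into equal-width class intervals might be sketched like this (the ages and interval width are made up):

```python
def to_intervals(values, low, width, k):
    """Sort interval-level values into k equal-width class intervals.
    Intervals are [low, low+width), [low+width, low+2*width), ...;
    they are exhaustive and mutually exclusive."""
    bins = [0] * k
    for v in values:
        i = min(int((v - low) // width), k - 1)  # clamp so the top value fits
        bins[i] += 1
    return bins

ages = [18, 22, 25, 31, 34, 40, 47, 52, 58, 63]
print(to_intervals(ages, low=18, width=8, k=6))  # six intervals of equal width
```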

 

Describing a distribution: interval lvl data

-create a line graph.

-the only points with any information are the dots; connect them to remind the reader that the original distribution was continuous.

 

 


 

Central Tendency versus Dispersion

 

A measure of central tendency indicates the most typical value, the one value that best represents the entire distribution

 

A measure of dispersion tells us just how typical that value really is by indicating the extent to which observations are concentrated in a few categories of the variable or spread out among all of the categories.

-dispersion matters for evaluating central tendency and for determining sample size. We don’t want only to describe variables; we want to see whether they covary in predicted ways.

-2 distributions could have similar central tendency, but be very different. Use more than one measure.

A measure of dispersion tells us how much the values of the variable vary. Knowing the amount of dispersion is important because:

  • the appropriate sample size is highly dependent on the amount of variation in the population. The greater the variation, the larger the sample will need to be.
  • we cannot measure covariation unless both variables do vary.

 

 

Measuring Central Tendency and Dispersion (Nominal-Level)

The mode is the most frequently occurring value—the category of the variable that contains the greatest number of cases. The only operation required is counting.

The proportion of cases that do not fall in the modal category tells us just how typical the modal value is. This is what Mannheim and Rich call the variation ratio.

-bimodal distribution: 2 are tied for most cases

V = f_nonmodal / N

(where f_nonmodal is the number of cases outside the modal category and N is the total number of cases)

-dispersion: what % of people were not in the modal category. The proportion who do not fall in the modal category tells us how typical the modal value is. Mannheim and Rich call this the variation ratio -> the lower the variation ratio, the more typical and meaningful the mode.

-in the case of bimodal or multimodal distributions, select one mode arbitrarily.
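The mode and variation ratio are simple to compute; the religion counts below are hypothetical:

```python
from collections import Counter

def mode_and_variation_ratio(cases):
    """Return the modal category and V = f_nonmodal / N for nominal data."""
    freq = Counter(cases)
    mode, f_modal = freq.most_common(1)[0]  # ties are broken arbitrarily
    n = len(cases)
    return mode, (n - f_modal) / n

religions = ["catholic"] * 50 + ["protestant"] * 30 + ["none"] * 20
print(mode_and_variation_ratio(religions))  # half the cases fall outside the mode
```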

 

Measuring Central Tendency and Dispersion (Ordinal-Level) I

Central Tendency:

-always present categories in order, natural order, should retain it

-central tendency based on order or relative position

 

The median is the value taken by the middle case in a distribution. It has the same number of cases above and below it. If even # of cases, take average of the two middle cases.

-cumulative frequency: eliminating raw frequency, tells # of cases that took that value or lower.

Dispersion:

 

The range simply indicates the highest and lowest values taken by the cases. Problem: could overstate variability. Range doesn’t tell us anything about how things are distributed btwn points.

The inter-quartile range is the range of values taken by the middle 50 percent of cases (inter-quartile because the endpoints lie one quartile above and one quartile below the median value).
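Both measures are available in Python's standard library; the values below are invented. Note that `statistics.quantiles` supports several conventions for computing quartile cut-points; the default ('exclusive') method is used here:

```python
import statistics

values = [1, 2, 2, 3, 3, 3, 4, 4, 5, 7]  # hypothetical ordinal scores

median = statistics.median(values)  # averages the two middle cases when N is even
q1, q2, q3 = statistics.quantiles(values, n=4)  # quartile cut-points
print("median:", median, "inter-quartile range:", (q1, q3))
```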

Measuring Central Tendency (Interval and Ratio-Level) I

 

The measure of central tendency for interval- and ratio-level data is the mean (or average value). Simply sum the values and divide by the number of cases:

 

Fall term grades: 70 75 78 82 85

GPA (or mean grade) = 78

 

-The mean is the preferred measure of central tendency because it takes into account the distance (or intervals) between cases. The fact that there are fixed and known intervals between values enables us to add and divide the values.

-The mean is sensitive to the presence of a small number of cases with extreme values:

When an interval-level distribution has a few cases with extreme values, the median should be used instead.

  • The mean is sensitive to the presence of a small number of cases with extreme values. Group #1: 26,000, 28,000, 29,000, 32,000, 32,000, 34,000, 36,000: mean = 31,000, median = 32,000
  • Group #2: 15,000, 18,000, 19,000, 22,000, 23,000, 25,000, 95,000: mean = 31,000, median = 22,000

-Because the mean is subject to distortion, the mean value should always be presented along with the appropriate measure of dispersion.

-problematic when a few values are extreme cases, because the mean takes account of how far each case is from the others.
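The distortion can be checked directly. The incomes below follow the example above, with group 1 taken as seven cases (32,000 appearing twice) so that both groups share a mean of $31,000:

```python
import statistics

group1 = [26_000, 28_000, 29_000, 32_000, 32_000, 34_000, 36_000]
group2 = [15_000, 18_000, 19_000, 22_000, 23_000, 25_000, 95_000]

for group in (group1, group2):
    # same mean, but the extreme $95,000 case drags group2's mean far above its median
    print(statistics.mean(group), statistics.median(group))
```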

 

Measuring Dispersion (Interval- and Ratio-level) II

 

The standard deviation is the appropriate measure of dispersion at the interval-level because it takes account of every value and the distance between values in determining the amount of variability.

 

The standard deviation will be zero if—and only if—each and every case has the same value as the mean. The more cases deviate from the mean, the larger the standard deviation will be.

 

We cannot use the standard deviation to compare the amount of dispersion in two distributions that use different units of measurement (e.g. dollars and years) because the standard deviation will reflect both the dispersion and the units of measurement.

 

s = square root of [Σ(Xi – X)²/N], where N = the number of cases, Xi = the value of each individual case, and X = the mean (see page 264).
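A minimal sketch of that calculation in Python (the function name is mine), using the fall-term grades from the earlier example:

```python
import math

def std_dev(values):
    """Population standard deviation: the square root of the
    average squared deviation from the mean."""
    n = len(values)
    m = sum(values) / n
    return math.sqrt(sum((x - m) ** 2 for x in values) / n)

grades = [70, 75, 78, 82, 85]   # fall-term grades; mean = 78
print(std_dev(grades))          # about 5.25

# Zero if, and only if, every case equals the mean:
print(std_dev([78, 78, 78]))    # 0.0
```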

 

Calculating Standardized Scores or Z-Values

 

-If we want to compare the relative position of two cases on the same variable or the relative values of the same case on two different variables like annual income and years of schooling, we can standardize the values by converting them into Z-scores.

 

The Z score allows us to compare scores that are based on very different units of measurement (for example, age measured in number of years and height measured in inches). -Z-scores tell us the exact number of standard deviation units any particular case lies above or below the mean:

 

Zi =  (Xi  – X)/S

 

where Xi is the value for each case, X is the mean value and S is the standard deviation.

 

Example: person1 has an annual income of $80,000 and person2 has an annual income of $30,000. The mean annual income in their community is $50,000 and the standard deviation is $20,000

 

Z1 = ($80,000 – $50,000)/$20,000 =  1.5

Z2 = ($30,000 – $50,000)/$20,000 =  – 1
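The same two Z-scores, checked in code (a sketch; the function name is mine):

```python
def z_score(x, mean, sd):
    """Number of standard deviation units x lies above (+) or below (-) the mean."""
    return (x - mean) / sd

# Community mean income $50,000, standard deviation $20,000:
print(z_score(80_000, 50_000, 20_000))  # 1.5
print(z_score(30_000, 50_000, 20_000))  # -1.0
```

Person 1 is 1.5 standard deviations above the community mean; person 2 is 1 standard deviation below it.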

Topic Twelve: Statistics — Estimating Sampling Error and Sample Size

Overview:

What is sampling error?

What are probability distributions?

Interpreting normal distributions

What is a sampling distribution?

The sampling distribution of the sample means

The central limit theorem

Estimating confidence intervals around a sample mean

Estimating sample size—means

Estimating confidence intervals around a sample proportion

Estimating sample size–proportions

 

What is sampling error?

 

No matter how carefully a sample is selected, there is always the possibility of sampling error (i.e. some discrepancy between our sample value and the true population value).

 

We cannot determine the amount of sampling error directly because we typically don’t know the true population value. But we can use inferential statistics to estimate the probable sampling error associated with any sample value. This involves the use of probability distributions.

 

What are probability distributions?

 

Estimating sampling error involves using probability distributions.

 

Probability distributions are theoretical distributions that indicate the likelihood, or the probability, of certain values occurring, given certain assumptions about the nature of the distribution.

 

By far the most important class of probability distributions takes the form of the normal distribution.

 

The normal distribution takes the form of a symmetrical bell-shaped curve. The mean, median and mode of normally distributed data coincide with the highest point of the curve (they have the same value). We can use the standard deviation to interpret the distribution.

 

Interpreting normal distributions I

 

The standard deviation is used to interpret data that are normally distributed.

 

IF data are normally distributed, 68.3% of the cases will fall within one standard deviation of the mean of the distribution, 95.5% of the cases will fall within 2 standard deviations of the mean, and 99.7% of the cases will fall within 3 standard deviations of the mean.

These proportions are equal to the proportion of the area under the curve between these values.

Interpreting normal distributions II

 

We can determine the proportion of cases falling within any number of standard deviations, integer or non-integer, from the mean e.g. 83.8% of cases will fall within 1.4 standard deviations of the mean

 

Since we use standard deviation units and not simply the original values to interpret the normal distribution, we transform the original values into standard deviation units or Z-scores.

 

Z= (xi – X)/s

 

Z-scores tell us the exact number of standard deviation units any particular case lies above or below the mean.

 

If our data are normally distributed, all we have to do to estimate the probability of any range of values occurring around the mean is to convert the data into Z-scores and consult the appropriate table.
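Instead of consulting a printed table, the area under the normal curve can be computed directly with the standard identity P(|Z| < z) = erf(z/√2) (a sketch, not part of the original notes):

```python
import math

def proportion_within(z):
    """Proportion of a normal distribution lying within z standard
    deviations of the mean: P(|Z| < z) = erf(z / sqrt(2))."""
    return math.erf(z / math.sqrt(2))

for z in (1.0, 2.0, 3.0, 1.4):
    print(z, round(proportion_within(z) * 100, 2))
# 68.27%, 95.45%, 99.73% and 83.85% respectively, matching the
# 68.3 / 95.5 / 99.7 rule and the 1.4-SD figure quoted above.
```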

 

What is a sampling distribution?

 

The sampling distribution is a theoretical probability distribution that in actual practice would never be calculated.

 

The sampling distribution of the sample means is the distribution that we would obtain if:

  • every conceivable sample of a certain size were drawn from the same population
  • the sample means were calculated for each sample and
  • the sample means were arranged in a frequency distribution.

 

Different cases would be included in different samples so the sample means would not all be identical (e.g. some samples would contain only the very rich and some samples would contain only the desperately poor). But:

  • most sample means would tend to cluster around the true population mean value and
  • this clustering around the true mean value would increase if the sample size were increased

The sampling distribution of the sample means

 

IF the sample size is sufficiently large (at least 30 cases), the sampling distribution of the sample means will be approximately normally distributed and the mean of the sampling distribution of the sample means will coincide with the true population mean.

-because the sampling distribution is normally distributed, we can use our knowledge of the normal curve to estimate how close our sample mean is likely to be to the true population mean

-the standard error of the mean is equal to the standard deviation of the population, divided by the square root of the sample size

 

The Central Limit Theorem

 

The sampling distribution is a theoretical distribution–in real life, we select only one sample. But the fact that sample means will be normally distributed enables us to evaluate the probable accuracy of our particular sample mean.

 

Provided that our sample (1) is randomly selected (every case has a known probability of inclusion and a non-zero probability of inclusion) and (2) has at least 30 cases, the central limit theorem tells us that we can use our knowledge of the area under the curve to estimate how probable it is that the true population mean will fall within any given range of values of our sample mean.

 

e.g. since we know that 95.5% of sample means will lie within 2 standard deviation units of the true population mean, we can be 95.5% confident that our sample mean will also lie within 2 standard deviations of the true population mean.
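The theorem is easy to check by simulation (a sketch, not from the notes): draw thousands of samples of 30 cases from a deliberately non-normal population and compare the spread of the sample means against σ/√N.

```python
import random
import statistics

random.seed(42)

# A uniform (flat, decidedly non-normal) "population" of 100,000 cases.
population = [random.uniform(0, 100) for _ in range(100_000)]
pop_sd = statistics.pstdev(population)

n = 30  # sample size: the "at least 30 cases" rule of thumb
sample_means = [statistics.mean(random.sample(population, n))
                for _ in range(5_000)]

# The sample means cluster around the true mean, and their spread
# approximates the standard error sigma / sqrt(n).
observed_se = statistics.pstdev(sample_means)
predicted_se = pop_sd / n ** 0.5
print(observed_se, predicted_se)   # both close to 5.3
```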

 

Estimating confidence Intervals around a Sample Mean I

 

Conventionally, we want to be 90% confident, 95% confident or 99% confident. The corresponding Z-values are 1.64, 1.96 and 2.57

 

i.e. we can be 90% confident that our sample mean will lie within 1.64 standard deviations of the population mean, 95% confident that it will lie within 1.96 standard deviations, and 99% confident that it will lie within 2.57 standard deviations. These ranges of values are called confidence intervals.

 

A confidence interval is a range of values, estimated on the basis of sample data, within which we can say, with a pre-specified degree of confidence that the true population value will lie.

-the higher the confidence level, the wider the confidence interval must become.

 

Confidence level: the likelihood that our sample is in fact representative of the larger population within the degree of accuracy we have specified.

 

The smaller the sampling error and the higher the level of confidence, the better a piece of research will be.

 

The size of the confidence interval will depend on how confident we want to be that the interval does contain the true unknown population mean. The more confident we want to be, the wider the confidence interval will have to be.

 

Estimating confidence intervals around a sample mean II

 

In order to determine what 1.96 standard deviations actually means in terms of our original measurement scale (e.g. dollars, years), we need to estimate the value of the standard deviation of the sampling distribution of the sample means.

 

The standard error of the mean is equal to the standard deviation of the population, divided by the square root of the sample size.

This makes sense intuitively:

  • the more variability there is in the population, the more variability there will be in the sample estimates.
  • as the sample size increases, the variability in the sample estimates should decrease because extreme values will have less of a distorting effect on the calculation of the sample mean.

Since we typically do not know the true population standard deviation, we use our best estimate i.e. the standard deviation from our particular sample.

 

Estimating confidence intervals around a sample mean III

 

We then simply multiply our estimate of the sampling error of the mean by the Z-value associated with our chosen confidence level (1.64, 1.96 or 2.57) and we have the familiar plus or minus term:

Confidence interval: X ± Zc.l. × SX, where SX is the estimated standard error of the mean

-we can then be confident, at the chosen level, that the true population mean lies between the two endpoints.
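Putting the pieces together, a sketch of the interval calculation (function name and survey figures are mine, for illustration):

```python
import math

def confidence_interval(sample_mean, sample_sd, n, z=1.96):
    """Confidence interval around a sample mean: mean +/- z * SE,
    where SE is estimated as the sample SD over sqrt(n).
    z = 1.64, 1.96 or 2.57 for 90%, 95% or 99% confidence."""
    se = sample_sd / math.sqrt(n)
    return sample_mean - z * se, sample_mean + z * se

# Hypothetical survey: mean income $50,000, SD $20,000, 400 cases.
low, high = confidence_interval(50_000, 20_000, 400)
print(low, high)   # roughly 48,040 to 51,960
```

Note how the SE of $1,000 (20,000/√400) becomes a plus-or-minus term of about $1,960 once multiplied by the 95% Z-value.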

 

Estimating sample size I

 

Exactly the same concepts are used to help determine sample size. The formula for calculating the sample size simply involves rearranging the terms:

E = (Zc.l. × S)/square root of N, which rearranges to N = (Zc.l. × S/E)²

where:

  • Zc.l. is the Z-value associated with the desired confidence level
  • S is the estimate of the population standard deviation
  • E is the amount of error we are willing to tolerate (i.e. the plus or minus term)

-sample size depends on: variability, how accurate you want to be, and how confident you want to be that you are that accurate

-what is not a factor in this calculation? Population size. What matters is how much variation there is.

-in practice, the calculation of sample size is also constrained by resources and by the variability within the population

 

Estimating sample size II

 

In other words, we need 3 pieces of information in order to calculate sample size:

 

  • the amount of variability or heterogeneity in the population on the characteristic that we want to estimate. We typically do not know this, so we have to use our best estimate based on e.g. prior studies or a pilot study
  • the amount of error we are willing to tolerate i.e. how wide do we want our confidence interval to be?
  • the confidence level–how confident do we want to be that our sample estimate is that accurate?

The population size does not affect the sample size unless the sample is going to constitute 5 percent or more of the population

 

Example: to estimate mean GPA to within ± 2 points with a 95% level of confidence and an estimated population standard deviation of 12 points: N = (1.96 × 12/2)² ≈ 138.3, so we need a sample of at least 139 cases.
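Plugging the GPA example into the rearranged formula, as a quick Python check (the function name is mine):

```python
import math

def sample_size_for_mean(z, sd, error):
    """Rearranging E = z*S/sqrt(N) gives N = (z*S/E)^2; round up,
    since we cannot sample a fraction of a case."""
    return math.ceil((z * sd / error) ** 2)

# Within +/- 2 GPA points, 95% confidence (z = 1.96), estimated SD 12:
print(sample_size_for_mean(1.96, 12, 2))   # 139
```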

 

Estimating confidence intervals around a sample proportion

 

The logic is exactly the same when we want to estimate a population proportion on the basis of a sample proportion.

 

This time we draw on our knowledge of the fact that the sampling distribution of the sample proportions will be normally distributed and we have to calculate the standard error of the proportion.

(see 12.15)

 

If we have no basis for estimating the sample proportion, we should use the value that assumes the maximum amount of variability.

The maximum possible value for the standard error of the sample proportion occurs when we assume a population proportion of .5
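The usual estimate of the standard error of a sample proportion is √(p(1 − p)/N) (stated here as an assumption; the notes defer the formula to slide 12.15). A quick sketch shows why p = .5 is the conservative choice:

```python
import math

def se_proportion(p, n):
    """Estimated standard error of a sample proportion."""
    return math.sqrt(p * (1 - p) / n)

n = 1_000
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    # p * (1 - p) peaks at p = .5, so .5 yields the widest margin.
    print(p, round(se_proportion(p, n), 4))
```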

TOPIC 13: Causal Thinking and Research Design

 

Overview:

Why is research design so important?

The nature of causal inferences

The classic experimental design

Internal validity

Extrinsic threats to internal validity

Intrinsic threat to internal validity

Threats to external validity

Variations on the classic experimental design

Quasi-experimental designs

 

-generalize causal inferences

-determine causal connection

-> our ability to do this hinges on how we design our research: the design must allow us to rule out rival plausible causal interpretations.

Why is research design so important?

 

Purpose: to impose controlled restrictions on our observations of the empirical world.

 

A good research design:

  • allows the researcher to draw causal inferences with confidence
  • defines the domain of generalizability of those inferences

The way we structure our data-gathering strongly affects the nature of the causal interpretations we can place on the results.

 

The research must be designed so that we can rule out plausible alternative interpretations of the observed relationships.

 

The nature of causal inferences

 

We can never be certain that one variable ‘causes’ another–but we can increase confidence in our causal inferences if we are able to:

-demonstrate co-variation

-eliminate sources of spuriousness

-establish time order

Covariation—show that the IV and DV vary together in a patterned, consistent way (if A, then B)

 

NonSpuriousness — rule out the possibility that the IV and DV only co-vary because they share a common cause

 

Time order — show that a change in the IV preceded a change in the DV

 

How can we get more confident?

-we don’t say what causes what; we assume there is some sort of causal influence involved

-a change in the value of one variable produces (or enhances) a change in the value of another

-the fundamental problem of causal inference: we can never observe a causal influence directly. What we can do is demonstrate covariation, demonstrate non-spuriousness, and establish time order.

-demonstrating covariation is at the heart of hypothesis testing. Time order: demonstrate that the IV occurred before the DV, cause before effect.

-a causal interpretation cannot come from the data itself. However, we can design research so that some outcomes are impossible and/or use statistical methods to analyze the data and rule out possibilities ex post facto. We can only do the latter if we thought of it at the research design stage.

 

The classic experimental design I

 

The classic experimental design consists of two groups: an experimental group and a control group.

 

These two groups are equivalent in every respect, except that the experimental group is exposed to the IV and the control group is not.

 

To assess the effect of differential exposure to the IV, the researcher measures the values of the DV in both groups, before and after the experimental group is exposed to the IV.

 

The first set of measurements is called the pre-test and the second set of measurements is called the post-test.

 

If the difference between the pre-test and post-test is larger in the experimental group, this is inferred to be the result of exposure to the IV.

 

Group:         Experimental           Control

Time 1:        Pre-test               Pre-test

Time 2:        Exposure to IV         (no exposure)

Time 3:        Post-test              Post-test

 

Why is the classic experimental design so powerful?

 

The classic experimental design has 3 essential components that enable us to meet the 3 requirements for demonstrating causality:

Comparison -> covariation

Manipulation -> time order

Control -> non-spuriousness

 

-able to study impact of IV free of all other conflicting inferences

-unfortunately, much of what we study is not amenable to this design

-even in non-experimental research, we try to mimic this design.

 

 

Internal Validity

-absolute basic requirement of a research design

A research design has internal validity when it enables us to infer with reasonable confidence that the IV does indeed have a causal influence on the DV. It must enable us to rule out plausible alternative causal interpretations.

 

To demonstrate internal validity, our research design must enable us to rule out other plausible causal interpretations of the observed co-variation between the IV and DV.

 

The factors that threaten internal validity can be classified into those that are extrinsic to the actual research and those that are intrinsic.

 

Extrinsic threats to internal validity

 

Extrinsic threats to internal validity typically arise from the way we select our cases.

 

They refer to selection biases that cause the experimental group and the control group to differ even before the experimental group is exposed to the IV.

 

If the two groups are not equivalent, then a possible explanation for any difference in the post-test results is that the two groups differed to begin with.

 

Intrinsic threat to internal validity I

 

Intrinsic threats to internal validity arise once the study is under way, from:

-changes in the cases being studied during the study period (history)

-flaws in the measurement procedure

-the reactive effects of being observed

 

There are six major intrinsic threats:

History—events may occur while the study is under way which affect values on the DV quite independently of exposure to the IV. The longer the study, the greater this threat.

Maturation—physiological and /or psychological processes may affect values on the DV quite independent of exposure to the IV

Mortality—selective dropping out from the study may cause the experimental group and the control group to differ on the post-test, quite independent of exposure to the IV.

Instrumentation—if our measuring instruments do not perform consistently, this unreliability may explain why cases differ before and after exposure to the IV.

The regression effect—if cases score atypically high or atypically low when they are pre-tested, it is likely that their scores will appear more typical when they are post-tested, quite apart from exposure to the IV.

Reactivity (‘test effect’)—the very fact of being pre-tested may cause people’s values to change, quite apart from exposure to the IV.

 

Countering extrinsic threats to internal validity I

 

Extrinsic threats are countered by ensuring that the experimental group and the control group are equivalent. (selection bias might cause groups to differ before exposure) There are 3 ways of ensuring equivalence:

Precision matching (also known as ‘pairwise matching’)—each case in the experimental group is literally matched with another case in the control group which has an identical combination of characteristics.

 

This method can be impractical because of the difficulty of finding matched pairs of cases.

Countering extrinsic threats to internal validity II

 

Frequency distribution matching—instead of matching cases on combinations of characteristics, the distribution of characteristics within each group is matched (i.e. the two groups should have the same proportion of men and women, the same average income level, the same ethno-linguistic composition, etc.)

This method is easier to achieve, don’t have to reject a lot of potential cases, but:

  • the effects of any one characteristic may be conditioned by the presence of other characteristics e.g. the effects of age may differ for men and women.
  • we can only match social background characteristics, but people who share the same social characteristic may differ in other ways.
  • we can never be confident that we have matched on all relevant characteristics.

 

Countering extrinsic threats to internal validity III

Randomization—cases are assigned to the experimental group and the control group in such a way that each case has an equal probability of being assigned to either group i.e. selection is left entirely to chance (a table of random numbers, the flip of a coin).

 

If the randomization is done properly, the two groups should be equivalent.

 

This method controls for numerous factors simultaneously without the researcher having to make decisions about which factors might have a confounding effect.

 

BUT randomization requires a large number of cases in order to work effectively.
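Randomization itself takes only a few lines of code; a minimal sketch (the function name and seed are illustrative, not from the notes):

```python
import random

def random_assign(cases, seed=None):
    """Shuffle the cases and split them evenly into an
    experimental group and a control group."""
    rng = random.Random(seed)
    shuffled = list(cases)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

experimental, control = random_assign(range(100), seed=1)
print(len(experimental), len(control))   # 50 50
```

With enough cases, chance alone balances the two groups on measured and unmeasured characteristics alike.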

 

Countering intrinsic threats to internal validity

 

The presence of a control group that is equivalent in every respect to the experimental group except that it is not exposed to the IV counters the intrinsic threats to internal validity:

History–both groups are exposed to the same events—so any difference in their post-test values must reflect differential exposure to the IV.

Maturation–both groups undergo the same maturational processes

Mortality–selective dropping out will affect both groups equally.

Instrumentation—both groups will be equally affected by random errors in measurement.

Regression effect—both groups will be equally susceptible.

Reactivity—if the pre-test does affect values on the post-test, this will be true of both groups.

 

-any difference must be because of the IV because everything else has been controlled for

-unambiguous basis for knowing that change in the IV occurred before change in the DV in time

-very strong internal validity, strong basis for inferring causal relations

problem: the causal relations may only apply to the cases that were studied -> weak external validity = weak basis for generalizing

 

Threats to external validity

 

External validity concerns the extent to which the research findings can be generalized beyond the particular cases that were studied.

 

There are 3 threats to external validity:

-unrepresentative cases (people who volunteer are not representative)

-the artificiality of the research setting (people do not react the same way in the real world)

-reactivity—the pre-test may sensitize participants to respond atypically to the IV

 

The classic design is strong on internal validity and weak on external validity.

 

The Solomon 3-control group design

(also known as the Solomon 4-group design)

 

-This design has stronger external validity because it enables the researcher to assess the reactive effects of the pre-test experience.

-It is similar to the classic experimental design but adds two more control groups. One of the added groups is exposed to the IV and the other is not; neither is pre-tested, but both are post-tested.

 

The post-test only control group design

The Solomon 3-control group design is stronger on external validity but:

  • often impractical
  • too costly

 

Another solution is to omit the pre-test altogether. This is only possible if we are very confident that the experimental group and the control group are really equivalent.

-this avoids the problem of testing, but the problems of unrepresentativeness and artificiality remain.

-in practice we cannot maximize both internal and external validity: the more generalizability, the less internal validity.

-which matters most? Internal validity: it provides an unequivocal basis for making causal inferences. However, we typically study things as they already are; we can’t manipulate countries, education, etc. in experiments. Because we study people already exposed to the IV, we must use designs that are weaker in internal validity.

 

Quasi-experimental designs I

 

Experimental designs provide the most unequivocal basis for inferring causal relationships—but political phenomena are typically not amenable to experimental manipulation.

 

Quasi-experimental designs attempt to use the logic of the experimental design in situations where the researcher cannot randomly assign observations to experimental and control groups or control exposure to the IV.

 

In this design, comparison and control are achieved statistically. Multivariate statistical analysis is the most common alternative to experimental methods of control.

 

Quasi-experimental designs II

 

-The ex post facto experiment is the most common type of quasi-experimental design. It attempts to approximate the post-test only control group design by using multivariate statistical methods: the researcher applies the logic of the experimental design after having collected the data, using cross-tabulations to compare groups and demonstrate covariation.

 

-The researcher collects data on the IV, the DV and any other variables that might plausibly alter or even eliminate any observed covariation between the IV and the DV.

 

-At the analysis stage, cases are assigned to groups depending on their values on the IV. Then the researcher compares each group’s values on the DV. Any difference is inferred to be the result of the fact that the groups differ on the IV.

-To demonstrate non-spuriousness, the cases are divided into groups based on their values on the plausible source of spuriousness variable and the researcher compares values on the IV and the DV (as above) within each group. If the IV and DV continue to covary within each group, the relationship is not spurious.

-when we examine categories, we are matching, with the same drawback: researchers must decide what the relevant variables and possible sources of spuriousness are.

-taking liberties w/notion of control, try to mimic logic of the control group

-demonstrate non-spuriousness, correlation, but can’t demonstrate time-order.

Topic Fourteen: Statistics — Cross-Tabulations and Statistical Significance

 

Overview

Demonstrating covariation

Creating a cross-tabulation (nominal-level relationship)

Interpreting a cross-tabulation

Statistical significance

Type I versus Type II error

Estimating the probability of Type I error

The Logic of the Chi Square Test

Calculating Chi Square

Using and Abusing the Chi Square Test

 

Demonstrating Covariation

Demonstrating covariation involves answering 3 questions:

 

  • Degree–how strong is the relationship between the IV and the DV? Strength of association. Descriptive statistics.

  • Form–which values of the DV are associated with which values of the IV? Positive or negative relationship. Descriptive statistics.

  • Statistical significance–if the data are taken from a sample, can the relationship be generalized to the population from which the sample was drawn? Could we have obtained this relationship by chance if there wasn’t one in the population? Inferential statistics.

 

The tests that are used to answer these questions will depend on the level of measurement of the IV and the DV. The higher the level of measurement, the more varied and the more powerful the tests that can be used.

-cases can be affected by frequency distribution.

 

Creating a Cross-Tabulation (nominal-level relationship) I

 

The first step in describing the relationship between two variables is to arrange the data so that we can get an initial visual impression of the relationship.

 

If both variables are measured at the nominal level, this involves arranging the data in the form of a contingency table or cross-tabulation.

 

-A cross-tabulation involves classifying cases according to their values on the IV and then cross-classifying them according to their values on the DV. The cells of the table display the number of cases having each possible combination of values on the IV and the DV.

-eliminate irrelevant categories (missing data) and categories that are useless for meaningful analysis (numbers too small). We can only do this at the nominal level.

-The single most common error in constructing a cross-tabulation is to percentage the wrong way.

-The cell percentages must be calculated in terms of the total number of cases in each category of the independent variable. If we are testing the hypothesis that women are less likely to vote for new right parties than men, we have to compare the % of women who voted Alliance with the % of men who voted Alliance.

-A cross-tabulation is interpreted by comparing categories of the independent variable in terms of the percentage distribution of the dependent variable i.e. we compare the % of women who voted Alliance with the % of men who voted Alliance.

-If the independent variable forms the columns of the table, the percentages are calculated by column and then the columns are compared i.e. percentage down and compare across columns.

-total in each column/row are marginal frequencies. Literally, on margin of table.

-reasons to % table: 1. If don’t have equal cases in diff IV categories, difficult to compare cell frequencies 2. Even if equal, easier to read out of 100 than other things.

-don’t use decimals, to avoid a false sense of precision in percentages

 

Interpreting a Cross-Tabulation (nominal-level relationship) II

  • check whether there are differences in the distribution of the DV for the different categories of the IV.
  • if there are differences, check whether they are consistent with the hypothesis.
  • if the percentage differences are consistent with the hypothesis, see how big they are. The larger the differences, the stronger the relationship.
  • if the data come from a sample, check how likely it is that differences this large could have occurred by chance (as a result of sampling error) i.e. how confident can we be that the relationship observed in the sample exists in the population at large?

 

Tests:

  • inter-ocular strike test: no substitute for eyeballing the table. If no difference, then there is no relation.
  • Are the differences consistent with the hypothesis? Is the gap the one predicted? (form)
  • If the percentages are in the hypothesized direction, how big are the differences? The bigger the difference, the more impact the IV is having. But percentages don’t have to be drastic to be meaningful.
  • Statistical significance

 

Statistical Significance

 

Statistical significance indicates how likely (or probable) it is that the relationship between two variables observed in a sample might have occurred by chance and might not exist in the population from which the sample was drawn.

 

This probability is termed the level of statistical significance. The lower the probability, the higher the level of statistical significance. Want a low probability (.05 or less is conventional. 5% chance they don’t generalize)

 

A test of statistical significance is an inferential statistic. Its purpose is to estimate how likely it is that the sample result occurred by chance and is not representative of the population.

 

Type I versus Type II Error

 

In making inference about the whole population based on the results of a sample, we risk making one of two types of error:

 

  • inferring that there is a relationship when none actually exists.

 

  • inferring that there is no relationship when there really is a relationship.

The risk of Type I error is always viewed as much more serious than Type II error:

  • the analogy of a court of law—just as we’d rather risk letting a guilty person go free than convicting an innocent one, so we’d rather risk missing a relationship than inferring one where none exists.
  • If our sample indicates that there is no relationship, we are usually ready to accept this verdict without worrying how confident we should be.

-much harder to calculate type II error

 

How to calculate type I error

-rely on a theoretical frequency distribution, which provides us with criteria for assessing the risk of error

-the theoretical distribution gives the likelihood of each possible degree of association in a sample if there were no relationship within the population

-the chi-square distribution is the appropriate one for nominal-level relationships, and can also be used at the ordinal level


-use knowledge of theoretical distribution to judge how confident we can be that results will hold in population

 

Estimating the Probability of Making a Type I Error

 

Estimating the probability of making a Type I error (i.e. determining the level of statistical significance) involves the use of a theoretical sampling distribution.

 

For nominal-level relationships, the appropriate sampling distribution is the Chi-square distribution. This distribution gives the likelihood of each possible degree of relationship occurring in a sample if there were no relationship in the population from which the sample was drawn.

 

We use this theoretical distribution to determine how likely it is that we would have found a relationship as strong as the one observed in our sample if there were really no relationship in the population.

 

The Logic of the Chi Square Test

  • set up a null hypothesis i.e. assume that there is no relationship in the population.
  • calculate the cell frequencies you would expect to observe if the null hypothesis were true.
  • compare the expected cell frequencies with the observed cell frequencies: the greater the differences, the bigger Chi Square will be, the lower the risk of Type I error, and the more confident we can be that there is a relationship in the population.
  • make a partial adjustment for sample size, since the absolute amount of difference between the expected and observed cell frequencies is also a function of sample size.
  • calculate the degrees of freedom: the more cells there are in a table, the greater the opportunity for the observed distribution to depart from the expected distribution.
  • consult the theoretical Chi Square distribution to determine the significance level (SPSS does this for you automatically).
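The logic above can be sketched in Python with scipy, which plays the role of the SPSS output mentioned in the notes; the gender-by-vote counts below are invented purely for illustration.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 4x2 table: rows = party voted for (DV), columns = gender (IV).
observed = np.array([
    [155, 160],   # Liberal
    [160, 301],   # Alliance
    [ 80,  70],   # NDP
    [120, 211],   # Other
])

# chi2_contingency computes the expected frequencies under the null hypothesis,
# the Chi Square statistic, the degrees of freedom, and the significance level.
chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(dof)  # (4 rows - 1) x (2 columns - 1) = 3
```

The smaller `p` is, the more confident we can be that the relationship observed in the sample also holds in the population.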

 

Calculating Expected Frequencies

-To obtain the expected cell frequency for a given cell, multiply the column marginal by the row marginal and divide by the total number of cases. For example:

-The expected cell frequency tells us how many women we would expect to vote Alliance if the vote distribution for women matched the vote distribution for the sample as a whole. (461/1357) x 100 = 34% of the sample voted Alliance—so we would expect 34% of women to vote Alliance.
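To check the arithmetic: the Alliance row total of 461 and N of 1,357 come from the notes, but the number of women (the column marginal) is not given, so 650 is assumed purely for illustration.

```python
# Expected frequency = (column marginal x row marginal) / N
alliance_total = 461   # row marginal (from the notes)
women_total = 650      # column marginal (assumed for illustration)
n = 1357

expected_women_alliance = women_total * alliance_total / n
alliance_share = alliance_total / n   # share of the whole sample voting Alliance

print(round(alliance_share * 100))        # 34 (i.e. 34% of the sample)
print(round(expected_women_alliance, 1))  # 34% of the 650 women = 220.8
```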

 

Calculating Chi Square

Chi Square = Σ [(fo – fe)² / fe]

Where: fo = the frequency observed in each cell.

fe = the frequency expected in each cell

Degrees of freedom = (number of columns – 1) × (number of rows – 1) = 1 × 3 = 3

 

Chi Square = 29.2       significance level = .001

 

i.e. there is less than one chance in 1,000 that we would have obtained a relationship like the one observed in our sample if there were really no relationship in the population.

-we square the differences to get rid of negative signs so that positive and negative differences don't cancel out

-we divide by the expected frequency for each cell to make a partial adjustment for sample size: other things being equal, the larger the sample, the larger the discrepancies will be

-the adjustment is only partial because larger samples are more reliable

 

-distributional freedom: we also adjust for differences in table size (differences between tables in the number of cells they have): the more cells in a table, the more chances there are to deviate from the random model, so we want to adjust for this
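The whole calculation can be sketched in pure Python on an invented 2×2 table:

```python
# Hypothetical 2x2 table: rows = DV categories, columns = IV categories.
observed = [[30, 10],
            [20, 40]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(observed):
    for j, fo in enumerate(row):
        fe = row_totals[i] * col_totals[j] / n  # expected frequency for the cell
        chi2 += (fo - fe) ** 2 / fe             # squared so the gaps don't cancel

df = (len(observed) - 1) * (len(observed[0]) - 1)  # (rows - 1)(columns - 1)
print(round(chi2, 2), df)  # 16.67 1
```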

 

Chi square:

-significant at the .001 level (a 1 in 1,000 chance)

-if the significance level is .06, we can talk about the relationship being borderline, approaching statistical significance

FOR EXAM: define statistical significance, name a nominal level test, describe the logic

 

Using and Abusing the Chi Square Test

 

  • Chi Square assumes that the researcher has hypothesized a relationship in advance.

 

  • Chi Square assumes that the sample was selected randomly. (non-zero chance of inclusion)

 

  • Chi Square assumes that no more than 25 percent of the cells have an expected frequency of less than five. This is more of an issue if the result appears to be significant; you must alert the reader to the problem.

 

  • the larger the number of cases, the larger Chi Square will be, since the adjustment for sample size is only partial. This is as it should be, since a larger sample reduces the risk of Type I error. BUT this means that Chi Square should NEVER be used to draw conclusions about the strength of the relationship between IV and DV (since trivial relationships will attain statistical significance if the sample is large enough). Nor can the size of Chi Square be compared from one table to another.

 

  • a non-significant Chi Square does NOT mean that our sample is unrepresentative. What it usually means is that the relationship we have observed is so weak that it could easily have occurred by chance.

Topic Fifteen: Statistics — Nominal-Level Measures of Association

Overview

What is a Measure of Association?

What are PRE-Based Measures of Association?

Calculating Lambda-a

Interpreting Lambda-a

Why Lambda-a can be misleading

 

What is a Measure of Association?

 

A measure of association (or correlation coefficient) is a single number that summarizes the degree of association between two variables.

 

There is a wide range of measures available for describing how strongly two variables are related. Some differ in their basic approach, but even when the basic approach is similar, measures may differ with respect to:

 

  • the type of data for which they are appropriate
  • their computational details

 

This means that different measures of association are not directly comparable. Never compare how strong different relationships are unless the same measure of association has been used.

 

What are PRE-Based Measures of Association?

 

The logic of proportional reduction in error (PRE) provides an intuitive approach to measuring association. It involves asking: how much does knowing the values of cases on the independent variable help us improve our ability to predict their values on the dependent variable?

 

If two variables are perfectly related, knowing a case’s value on the IV will enable us to predict its value on the DV with complete accuracy. Conversely, if two variables are completely unrelated, knowing the value of a case on the IV will be no help at all in predicting its value on the DV.

 

If two variables are partially related, knowing the value of a case on the IV will be some help in predicting its value on the DV. PRE-based measures enable us to summarize that improvement in predictive ability.

 

Calculating Lambda-a I

 

Lambda-a is a PRE-based measure of association that is appropriate when one or both variables are measured at the nominal level.

 

Lambda-a measures how much our predictive ability is improved by knowing the values of cases on the IV. It ranges in value from .00 (no improvement) to 1.00 (perfect predictability).

If you had to guess how any one person voted, your best guess would be the modal category (Liberal).

And if you had to make the same guess for every person, you would make the fewest errors if you always guessed the modal category.

-a single number that summarizes the degree of correlation between two variables

-there are many different measures of association, conceptualized in different ways; different measures of association cannot be compared

-widely used; employs a logic with a very direct, literal interpretation

-Lambda-a = asymmetric Lambda

-Lambda is an attractive measure of association because it is easily readable, but don't take it too literally: a relationship is not strong until around .50

 

Lambda-a = (Σfi – Fd) / (N – Fd)

 

Where: Σfi = the sum of the maximum frequencies within the subclasses or categories of the IV

Fd = the maximum frequency in the totals of the DV

N = the number of cases
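A sketch of the formula on an invented gender-by-vote cross-tab (all counts hypothetical):

```python
# Hypothetical crosstab: columns = IV (gender), rows = DV (vote).
table = {
    "women": {"Liberal": 40, "Alliance": 20, "NDP": 15},
    "men":   {"Liberal": 30, "Alliance": 45, "NDP": 10},
}

n = sum(sum(col.values()) for col in table.values())

# Sum of fi: the maximum frequency within each category of the IV.
sum_fi = sum(max(col.values()) for col in table.values())

# Fd: the maximum frequency in the totals of the DV (the modal row total).
row_totals = {}
for col in table.values():
    for party, f in col.items():
        row_totals[party] = row_totals.get(party, 0) + f
fd = max(row_totals.values())

lambda_a = (sum_fi - fd) / (n - fd)
print(round(lambda_a, 3))  # (85 - 70) / (160 - 70) = 0.167
```

Read literally: knowing a case's gender reduces our errors in predicting its vote by about 17 percent.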

 

Interpreting Lambda-a

 

The value of Lambdaa depends on which variable is used as the predictor variable—the column variable or the row variable.

 

Lambda-a is asymmetric Lambda (hence the subscript 'a'), meaning that it is used when we want to predict the values of one variable based on the values of a second variable. There is also symmetric Lambda, which is used when we want to summarize the degree of mutual predictability between two variables (how much does our predictive ability improve if we use each variable to predict the other?).

 

SPSS provides all three Lambdas—so be sure to choose the asymmetric Lambda that corresponds to your DV.

 

Why Lambda-a can be misleading

Lambda-a will always be zero if the modal category of the DV is the same for all categories of the IV.

-be skeptical if you get .00: when the modal category of the DV is the same within every category of the IV, Lambda will be .00 even if the variables are related, because the statistic no longer reflects the distribution of the variables

 

If the modal category of the DV is the same for all categories of the IV, then Cramer's V will be an appropriate measure to use for nominal-level relationships. Cramer's V is based on the logic of Chi Square (i.e. it is not a PRE-based measure). It adjusts Chi Square to minimize the effects of sample size and distributional freedom (the more cells in a table, the more opportunities there are to differ from the population) and to constrain the coefficient to range between .00 and 1.00.

-Cramer's V cannot be given a literal interpretation; it only gains meaning when compared across tables as an indication of relative strength of association

-Cramer's V and Lambda cannot be compared

-Cramer's V is not a PRE-based measure
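A sketch of Cramer's V, computed from Chi Square on an invented 2×2 table (the Chi Square step follows the formula given earlier):

```python
import math

# Hypothetical 2x2 table: rows = DV categories, columns = IV categories.
observed = [[30, 10],
            [20, 40]]
row_totals = [sum(r) for r in observed]
col_totals = [sum(c) for c in zip(*observed)]
n = sum(row_totals)

# Chi Square = sum of (fo - fe)^2 / fe over every cell.
chi2 = sum((fo - row_totals[i] * col_totals[j] / n) ** 2
           / (row_totals[i] * col_totals[j] / n)
           for i, r in enumerate(observed) for j, fo in enumerate(r))

# V adjusts Chi Square for sample size and table size, so it runs from 0 to 1.
k = min(len(observed), len(observed[0]))
v = math.sqrt(chi2 / (n * (k - 1)))
print(round(v, 3))  # 0.408
```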

 

-arrange the data to get an initial visual impression. You can create a rank-ordering (used with ordinal variables that have a large number of possible values and very few cases with the same value) or a cross-tabulation/contingency table (best with between 3 and 7 values).

Topic Sixteen: Statistics — Ordinal-Level Measures of Association

Overview

Creating a cross-tabulation

Measuring association at the ordinal level

The logic of PRE at the ordinal level

Calculating Gamma

Why Gamma can be misleading

Ordinal measures of association: Tau

Choosing a measure of association

 

Creating a Cross-Tabulation I

 

The first step in describing a relationship between two variables is to arrange the data so that you can get an initial visual impression of whether there is a relationship or not. With ordinal-level data, there are two methods for doing this:

  • rank orders are used when there are few cases having the same value (i.e. when there are few “ties”).
  • cross-tabulations are used when there are many ties and/or when both variables have only a small number of possible values.

When cross-tabulating ordinal variables, it is important that the values of both variables be listed in the same order (e.g. from low to high, from weak to strong, etc.).

 

The best general indication of a relationship in a cross-tabulation between two ordinal variables is a consistent increase in the percentages in one direction across the top row and in the opposite direction across the bottom row. Do the percentages in the top and bottom rows increase in opposite directions? If so, there is a relationship.

-always compare across rows

-focus on the gap between the endpoints, but not to the exclusion of what happens between them: you need to see a steady pattern of increase

 

 

Measuring Association at the Ordinal Level

 

Having checked that Chi Square is statistically significant (i.e. the significance level is .05 or less), the next step is to calculate a measure of association.

 

Measures of association at the ordinal level differ from measures of association at the nominal level in ranging from –1.00 to +1.00 (instead of .00 to 1.00).

 

A negative coefficient indicates that cases with high values on the IV tend to have low values on the DV (and vice versa). This indicates that there is a negative relationship between the IV and the DV.

 

A positive coefficient indicates that cases with high values on the IV also tend to have high values on the DV (and vice versa). This indicates that there is a positive relationship between the IV and the DV.

 

The Logic of PRE at the Ordinal Level I

 

-Gamma is an ordinal measure of association that uses the logic of proportional reduction in error.

-Association is still treated as a matter of predictability, but the nature of the predictions changes because we have ordered categories.

-With ordinal data, we are interested in measuring how much knowing the relative position (or ranking) of a pair of cases on the IV will help us to improve our ability to predict their relative position (or ranking) on the DV.

 

The Logic of PRE at the Ordinal Level II

 

There are 2 conditions under which the ranking of a pair of cases will be perfectly predictable:

  • if all the cases are ranked in exactly the same order on both variables (perfect agreement) i.e. cases that have low values on the IV all have low values on the DV, etc.
  • if all the cases are ranked in exactly the opposite order on both variables (perfect inversion) i.e. cases that have low values on the IV all have high values on the DV, etc.

 

In either case, we can predict the relative position of a pair of cases on the DV from their relative position on the IV with perfect accuracy.

 

The degree of predictability (or association) is a function of how close the rankings on the two variables are to either perfect agreement or perfect inversion. Both situations represent perfect association—the only difference lies in the direction of the association.

 

Calculating Gamma I

 

We use probabilistic logic to calculate and interpret Gamma.

-If two variables are in perfect agreement, the probability of drawing a positive pair (a pair of cases ranked in the same order on both variables) will be 100%.

-If two variables are in perfect inversion, the probability of drawing a negative pair (a pair of cases ranked in the opposite order on both variables) will be 100%.

-If two variables are totally unrelated, the probability of drawing a positive pair will equal the probability of drawing a negative pair.

 

In order to calculate the chance of drawing positive and negative pairs, we have to count the total number of positive and negative pairs.

 

To compute the number of positive pairs, begin with the cell in the upper leftmost corner and multiply it by the sum of the frequencies in all the cells below and to the right. Cells below will have higher values on the DV and cells to the right will have higher values on the IV. Repeat for every cell that has cells below and to the right.

 

To compute the number of negative pairs, begin with the cell in the upper rightmost corner and multiply it by the sum of the frequencies in all the cells below and to the left. Cells below will have higher values on the DV and cells to the left will have lower values on the IV. Repeat for every cell that has cells below and to the left.
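The pair-counting procedure can be sketched on an invented 2×2 ordinal cross-tab (rows run low to high on the DV from top to bottom, columns low to high on the IV from left to right):

```python
# Hypothetical ordinal crosstab.
table = [[30, 10],   # DV = low
         [15, 45]]   # DV = high

positive = negative = 0
nrows, ncols = len(table), len(table[0])
for i in range(nrows):
    for j in range(ncols):
        f = table[i][j]
        for i2 in range(i + 1, nrows):        # cells below: higher on the DV
            for j2 in range(ncols):
                if j2 > j:                    # to the right: higher on the IV
                    positive += f * table[i2][j2]
                elif j2 < j:                  # to the left: lower on the IV
                    negative += f * table[i2][j2]

# Gamma = (P - Q) / (P + Q), ignoring ties.
gamma = (positive - negative) / (positive + negative)
print(positive, negative, round(gamma, 2))  # 1350 150 0.8
```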

 

Interpreting Gamma

 

If positive pairs predominate, Gamma will be positive. If negative pairs predominate, Gamma will be negative.

 

Gamma is literally interpreted as indicating the probability of correctly predicting the order of a pair of cases on the DV once we know their order on the IV, ignoring ties. We are still using the logic of guessing.

 

The size of the coefficient indicates the strength of the relationship, while the sign (positive or negative) indicates the direction of the relationship: the closer the coefficient is to 1, the stronger the association.

 

Why Gamma can be misleading

In calculating Gamma, we ignore cases that have the same value on one or both variables (‘ties’). Cases that have the same value on one variable, but a different value on the other variable violate the notion of association. Ignoring these cases causes Gamma to overstate the degree of association.

 

Ordinal Measures of Association: Tau

 

Because Gamma can be inflated, it is preferable to use Tau. Tau does take into account cases that are tied on one variable, but not on the other (cases that are tied on both variables are consistent with the notion of association).

 

Like Gamma, Tau ranges in value from –1.00 to +1.00

 

Tau-b is used when both variables have the same number of values (i.e. the table is symmetrical, with an equal number of columns and rows).

 

Tau-c is used when one variable has more values than the other variable (i.e. the table is asymmetrical, with an unequal number of columns and rows).

 

[There is also a Tau-a, but it is not used with cross-tabulations since it assumes that there are no ties.]

-only use these if both variables are ordinal (exception: dichotomous variables can be treated as ordinal, or even interval)

 

Left-right self-placement x support for free enterprise:

Gamma = .37  Tau-b = .23
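For case-level data, scipy's kendalltau reports Tau-b (with the tie adjustment) by default; the respondent scores below are invented, so no particular coefficient value is implied.

```python
from scipy.stats import kendalltau

# Hypothetical ordinal scores (1 = low ... 3 = high) for ten respondents.
left_right      = [1, 1, 2, 2, 2, 3, 3, 3, 1, 2]
free_enterprise = [1, 2, 2, 2, 3, 3, 3, 2, 1, 3]

tau, p_value = kendalltau(left_right, free_enterprise)  # Tau-b by default
print(round(tau, 2))  # positive: higher left-right placement, more support
```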

 

Choosing a Measure of Association I

 

Gamma and Tau should only be used when both variables are measured at the ordinal level unless one or both variables is a dichotomy.

 

A dichotomous variable has only 2 categories (e.g. sex). As such, it satisfies the requirements for both interval-level measurement (there is only one interval which, by definition, is equal to itself) and ordinal-level measurement (the ordering is arbitrary, but neither ordering violates the mathematical requirements).

 

IV                    DV                   Measure of Association

 

nominal           nominal                  Lambda or Cramer's V

nominal           dichotomy              Lambda or Cramer's V

nominal           ordinal                    Lambda or Cramer's V

dichotomy       nominal                  Lambda or Cramer's V

ordinal            nominal                  Lambda or Cramer's V

 

ordinal            ordinal                    Gamma or Tau

dichotomy       ordinal                    Gamma or Tau

ordinal            dichotomy              Gamma or Tau

dichotomy       dichotomy              Gamma or Tau
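The table reduces to a simple rule of thumb, sketched here as a hypothetical helper (following the note that dichotomies can be treated as ordinal):

```python
def choose_measure(iv_level: str, dv_level: str) -> str:
    """Pick a measure of association; levels are 'nominal', 'ordinal' or 'dichotomy'."""
    # If either variable is truly nominal, ordinal measures are ruled out.
    if "nominal" in (iv_level, dv_level):
        return "Lambda or Cramer's V"
    # Ordinal and/or dichotomous variables can use the ordinal measures.
    return "Gamma or Tau"

print(choose_measure("nominal", "ordinal"))    # Lambda or Cramer's V
print(choose_measure("dichotomy", "ordinal"))  # Gamma or Tau
```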

 

Topic Seventeen: Statistics — Examining the Effects of Control Variables

Overview

 

How are controls introduced?

Interpreting control variables

Sources of Spuriousness

Intervening variables

Conditional variables

Replicated relationships

 

How are controls introduced?

 

It is never enough to demonstrate covariation. We must always go on to examine the effect of other variables (‘control variables’) that might plausibly alter or even eliminate the observed covariation.

 

In order to determine whether some third variable affects the observed relationship between the IV and DV, we must be able to hold the effects of that variable constant and then re-examine the relationship between the IV and the DV. Note: the focus is always on what happens to the IV – DV relationship.

 

With nominal variables or with ordinal variables that have only a small number of possible values, we use a physical control i.e. we divide our cases into groups based on their values on the control variable and then re-examine the original relationship separately for each of these groups, using a series of cross-tabulations.
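The physical-control procedure can be sketched as follows; the case records, variable names and categories are all invented:

```python
from collections import defaultdict

# Hypothetical cases: IV = religion, DV = vote, control variable = region.
cases = [
    {"religion": "Catholic",   "vote": "Liberal",  "region": "East"},
    {"religion": "Catholic",   "vote": "Liberal",  "region": "West"},
    {"religion": "Catholic",   "vote": "Alliance", "region": "West"},
    {"religion": "Protestant", "vote": "Liberal",  "region": "East"},
    {"religion": "Protestant", "vote": "Alliance", "region": "West"},
    {"religion": "Protestant", "vote": "Alliance", "region": "East"},
]

# Build one IV-by-DV cross-tab per category of the control variable.
control_tables = defaultdict(lambda: defaultdict(int))
for case in cases:
    control_tables[case["region"]][(case["religion"], case["vote"])] += 1

# Re-examine the religion-vote relationship separately within each region.
for region, crosstab in sorted(control_tables.items()):
    print(region, dict(crosstab))
```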

 

Interpreting control variables

 

When you do this, one of three things can happen to the original relationship:

  • it can stay more or less the same in every category of the control variable (replicated relationship).
  • it can weaken or disappear in every category of the control variable (spuriousness OR intervening variable): the gap gets smaller, the measure of association gets smaller, and Chi Square is no longer significant.
  • it can weaken in some categories and strengthen in others or even assume different forms in different categories of the control variable (conditional variable).

Note: there is no statistical technique for distinguishing between an intervening variable and a source of spuriousness. You have to decide on substantive grounds which interpretation makes the most sense. Usually, this is decided on the basis of time order (drawing a causal chart helps).

-data analysis is not just a mechanical process; it is a process of imparting meaning to data by interpreting them

 

Source of Spuriousness I

 

-The first priority must be to test for spuriousness i.e. we must ask whether there is some common factor that could cause both the IV and the DV.

 

-If the relationship between the IV and the DV is spurious, the relationship will weaken or disappear when we control for the source of spuriousness variable (remove the common cause and the observed covariation will weaken or disappear).

-If your relationship turns out to be spurious, you should make the source of spuriousness variable your new independent variable and then test the relationship between this variable and your dependent variable. You will then test for the effects of two plausible control variables.

 

If the original relationship weakens in every category of the control variable, but there is still some relationship in every category (i.e. the significance level is .05 or less and the measure of association is close to .20), you have a partial source of spuriousness. In this case, you do not need to change your hypothesis because there is still some covariation even controlling for the common cause.

 

If there is more than one plausible source of spuriousness, you must test for these additional possibilities.

 

Intervening Variable I

 

If your relationship is not spurious (or is only partially spurious), the next priority is to test for a plausible intervening variable.

 

Intervening variables are variables that mediate the relationship between the IV and the DV. An intervening variable provides an explanation of why the IV affects the DV.

 

The intervening variable corresponds to the assumed causal mechanism. The DV is related to the IV because the IV affects the intervening variable and the intervening variable, in turn, affects the DV.

To identify plausible intervening variables, ask yourself why you think the IV would have a causal impact on the DV.

 

Ex) The relationship has weakened in both categories of the control variable, but it has not disappeared. This indicates that ideology is a partial intervening variable (it only explains some of the observed relationship between religious affiliation and vote choice).

 

Conditional variables I

 

Once we have eliminated plausible sources of spuriousness and verified the assumed causal mechanism, we need to specify the conditions under which the hypothesized relationship holds.

 

Ideally, we want there to be as few conditions as possible because the aim is to come up with a generalization.

 

Conditional variables are variables that literally condition the relationship between the IV and the DV by affecting:

(1) the strength of the relationship between the IV and the DV (i.e. how well do values of the IV predict values of the DV?) and

(2) the form of the relationship between the IV and the DV (i.e. which values of the DV tend to be associated with which values of the IV?)

 

Conditional variables II

To identify plausible conditional variables, ask yourself whether there are some sorts of people who are likely to take a particular value on the DV regardless of their value on the IV.

Note: the focus is always on how the hypothesized relationship is affected by different values of the conditional variable

 

There are basically three types of variables that typically condition relationships:

(1) variables that specify the relationship in terms of interest, knowledge or concern.

(2) variables that specify the relationship in terms of place or time.

(3) variables that specify the relationship in terms of social background characteristics.

 

Replicated relationship II

 

What matters is what happens to the differences across the columns. Even though the cell percentages may change, the impact of the IV on the DV will be similar to the uncontrolled relationship if the gap across the columns in each control table remains more or less the same (and the measure of association indicates that the strength of the relationship is more or less similar)

 

Topic Eighteen: Validity and Reliability

Overview:

Validity versus reliability

Systematic versus random errors

Face validity

Criterion-related validity

Construct validity

Test-retest reliability

Parallel-forms reliability

Internal consistency

Sub-sample reliability

 

-central issue: how well do empirical indicators correspond to abstract concepts?

-can we build into our data collection a provision to collect the information we need to persuade others that our measures work?

Validity versus reliability

 

Validity—are we measuring what we think we are measuring? i.e. does our indicator really represent our target concept?

 

Reliability—does our measurement process assign values consistently? i.e. if we repeated our research, would we assign the same values to the same observations?

 

Validity and reliability are jeopardized by measurement errors.

 

Measurement errors are differences in the values assigned to observations that are attributable to flaws in the measurement process, i.e. they do not reflect authentic differences between observations in the property we want to measure.

 

Measurement errors can be either systematic or random.

Systematic versus Random Errors I

 

Systematic errors occur when our indicator is picking up some other property in addition to the property it is supposed to measure. This type of error affects our results systematically: it is constant, and its biasing effect is predictable once identified.

 

Random errors are chance fluctuations in the measurement results that do not reflect true differences in the property being measured. These errors occur as a matter of chance and affect each observation differently.

-can be due to a transient aspect of the case being measured

-can be due to the measurement situation (e.g. the interviewer has an off day)

-can be due to the measurement procedure itself varying from case to case

-can be due to vague or ambiguous instructions

-these errors are random because the amount of error varies from one case to another in unpredictable ways

Systematic versus random errors II

 

Random errors make our measures unreliable. If a measure is unreliable, it cannot be valid because at least some of the differences in the values assigned to observations will result from random measurement errors.

 

BUT a reliable measure is not necessarily valid. This is because reliability is only threatened by random error—whereas validity is threatened by both random error and systematic error.

 

Systematic errors are no threat to reliability precisely because they are systematic, i.e. they consistently affect our measurement results. We could, after the fact, introduce a control variable to deal with the bias from a systematic error.

 

Content Validity I—Face Validity

Content validity is concerned with the substance, or content, of what is being measured. It addresses directly the question: are we measuring what we think we are measuring?

 

Validity is the basic problem of social science.

 

To have content validity, a measure must be both appropriate and complete.

 

For example, if we wanted to measure the quality of public education in cities, simply counting the number of teachers in city schools would be an inappropriate measure.

 

Face validity involves the criterion of appropriateness: can knowledgeable people be persuaded that the measure is an appropriate indicator of the target concept? Ask experts. Some measures are based on such direct observation of the behaviour in question that there seems to be no reason to question their validity (e.g. checking compliance with a state law requiring a license to be displayed visibly). Even so, we shouldn't trust face validity alone.

 

Potential problems:

-the method relies on subjective judgment

-there are no replicable rules for evaluating the measure (we can't say how the experts reached their decision)

Intersubjectivity enhances confidence in the face validation approach.

 

Content validity II—sampling validity

Sampling validity involves the criterion of completeness—does our measure represent the full range of meaning on the target concept?

 

This approach assumes that every concept has a theoretical universe of content consisting of all the things that could possibly be observed about the property it represents. A valid measure is one that constitutes a representative sample of this universe of content.

 

Potential problems:

  • the method relies on subjective judgment
  • there are no replicable rules for evaluating the measure (nominal definitions are crucial for this reason)
  • it is difficult to specify the universe of content of abstract concepts
  • it is even harder to represent that content completely/adequately

 

Criterion-related validity I (pragmatic, empirical, predictive, concurrent)

 

Criterion-related validity assumes that an indicator is valid if there is an empirical correspondence between the results obtained using the indicator and the results obtained using another indicator of the same concept that is already known (or assumed) to be valid.

 

Ex: street light test: multiple indicators improve the chance of validity.

 

There are two types of criterion-related validity:

 

  • concurrent criterion-related validity simply involves comparing the results with those obtained using another indicator.
  • predictive criterion-related validity involves asking how well the indicator predicts a behavior that is known to reflect the concept being measured e.g. how well do LSAT scores predict performance in law school?

 

The emphasis in both cases is on the correlation between our indicator and the criterion (hence the alternative names: pragmatic validity and empirical validity).

Criterion-related validity II

 

This form of validation raises three questions:

  • why not use the criterion instead? In some cases, the criterion may be impractical or expensive to use. In other cases, we need to measure the property before we make use of the criterion (i.e. we want to measure aptitude for law school before we admit students).
  • how do we know the criterion is valid?
  • what if we lack a valid criterion? This is typically the case unless we are engaged in applied policy research.

 

Construct Validity I

 

Construct validity involves relating an indicator to an overall theoretical framework.

 

Based on our theoretical understanding of the concept we want to measure and on previous research, we postulate various relationships between that concept and other specified concepts. The indicator is valid to the extent that we observe the predicted relationships.

 

These relationships are in addition to the ones that are the focus of our research.

 

e.g. we want to test a theory about the relationship between political efficacy and political engagement. We might try to validate our indicators of efficacy by seeing whether they produce the relationship we would expect with indicators of education (i.e. the more education people have, the more efficacious they will feel).

 

This is known as external validation.

-different from criterion-related validity (which compares your measure with another measure of the same concept); here we compare our measure with a measure of a different concept

 

Construct validity II

 

The process of external validation is very much like testing a hypothesis. The problem is that, like any hypothesis, the predicted relationships may not hold. This could mean any one of three things:

  • Our indicator is not valid
  • the theoretical framework that generated the predicted relationships is flawed.
  • the indicators of the other concepts were not valid.

 

The solution is to conduct multiple tests. If most of the predicted relationships hold, we can be confident that our indicator is valid. If most of the predicted relationships fail to hold, we would have to conclude that our indicator is the problem.

 

Construct validity III

Convergent-discriminant validity (also known as the multi-trait multi-method matrix method) is a more sophisticated form of construct validity.

 

Convergent validity (also known as internal validity) means that different methods of measuring the same concept should produce similar results.

 

Discriminant validity means that two indicators should not correlate highly if they measure different properties, even if they involve similar methods of measurement.

 

Construct Validity IV

 

The convergent-discriminant approach requires indicators of at least two different concepts, each measured using at least two different methods. When these indicators are correlated, we should observe the following pattern:

                                    Concept A/Method 1     Concept B/Method 2

 

Concept A/Method 2     high correlation            low correlation

Concept B/Method 1     low correlation             high correlation

 

This approach is difficult to implement because we typically cannot use more than one method for measuring our concepts. However, this approach can be approximated by comparing alternative indicators of different concepts. (Concept A/Indicator 1, etc.)

 

NOTE: we cannot always be certain that our measures of the key concept are valid, and we should therefore always be careful about concluding that a measure is valid or invalid from any one test of validity.

 

Assessing Reliability (don’t need to know)

 

Assessing reliability is basically an empirical matter.

 

The best way to achieve high reliability is to be aware of the sources of unreliability and to guard against them.

 

There are four major ways of assessing reliability.

The test-retest method

 

The test-retest method corresponds most closely to the conceptual definition of reliability i.e. if we repeat the measurement process on the same cases, will we get the same results?

 

This method is intuitively appealing, but it has important drawbacks:

-it may not be feasible

-there is the risk of reactivity e.g. in a survey, respondents may consciously strive to appear consistent in their responses (over-estimate reliability); respondents may pay less attention the second time around (under-estimate reliability); the fact of being interviewed the first time may change responses the second time around (under-estimate reliability).

– real change may occur in the cases being measured between the first and the second measurement period (under-estimate reliability).

 

This approach is most appropriate with non-reactive methods of data collection, like content analysis.

The alternative forms (or parallel forms) method

 

The alternative forms (or parallel forms) method involves using two parallel forms of the measuring instrument on the same cases.

 

The advantages of this method are:

 

  • there is no reactivity problem because no case is measured twice using the same measuring instrument.
  • there is no time elapse between the measurements so there is no confounding effect from possible changes in the cases themselves.
  • feasibility

 

The disadvantages of this method are:

-difficulty of ensuring that the two forms are parallel.

-difficulty of coming up with two measuring instruments.

 

The split-half method

A variant of this method is the split-half method. It avoids the problem of having to come up with two parallel forms. The researcher comes up with a single measuring instrument with twice as many items as needed. Reliability is assessed by randomly dividing the items in half and comparing the results. If the randomization works properly, the two halves should be equivalent.
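A minimal sketch of the split-half idea in Python, using an invented pool of eight items and five respondents (the split is random, as the method requires):

```python
# Split-half reliability: randomly split the item pool in two, score each
# half, and correlate the two half-scores across respondents.
import random
import statistics

def pearson(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# responses[r][i] = respondent r's score (1-5) on item i (8 items, invented).
responses = [
    [5, 4, 5, 4, 5, 5, 4, 5],
    [2, 1, 2, 2, 1, 2, 1, 2],
    [4, 4, 3, 4, 4, 3, 4, 4],
    [1, 2, 1, 1, 2, 1, 2, 1],
    [3, 3, 4, 3, 3, 4, 3, 3],
]

rng = random.Random(0)        # fixed seed so the split is repeatable here
items = list(range(8))
rng.shuffle(items)
half_a, half_b = items[:4], items[4:]

score_a = [sum(r[i] for i in half_a) for r in responses]
score_b = [sum(r[i] for i in half_b) for r in responses]

split_half_r = pearson(score_a, score_b)
print(round(split_half_r, 3))
```

Because the invented respondents answer consistently across items, any random split yields a high correlation; with inconsistent items, different splits would disagree, which is the third disadvantage listed below.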

 

The disadvantages of this method are:

  • the difficulty of coming up with sufficient items.
  • making sure that the two halves really are equivalent (randomization will not ensure equivalence if the number of items involved is small).
  • different splits may lead to different assessments of reliability.

The internal consistency method

 

The most common approach to assessing internal consistency is the calculation of coefficient Alpha. This coefficient is based on the average correlation for every possible combination of items into two half-tests. Items that produce low correlations are deleted.

 

Possible values of coefficient Alpha range from 0 to 1. An Alpha of 0.8 is conventionally taken as denoting an acceptable level of reliability

 

This method shares the advantages of the alternative forms method while avoiding the problem of having to determine equivalence.
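In practice, coefficient Alpha is usually computed from the item and total-score variances rather than from actual split-halves. A hedged sketch with hypothetical responses:

```python
# Cronbach's Alpha via the standard variance formula:
# alpha = (k / (k - 1)) * (1 - sum(item variances) / total-score variance)
def cronbach_alpha(responses):
    """responses[r][i] = respondent r's score on item i."""
    k = len(responses[0])                       # number of items

    def variance(xs):                           # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [variance([r[i] for r in responses]) for i in range(k)]
    total_var = variance([sum(r) for r in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 1-5 responses: five respondents, four items.
responses = [
    [5, 4, 5, 4],
    [2, 1, 2, 2],
    [4, 4, 3, 4],
    [1, 2, 1, 1],
    [3, 3, 4, 3],
]
alpha = cronbach_alpha(responses)
print(round(alpha, 2))   # clears the conventional 0.8 threshold here
```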

The Subsample method

 

The subsample method is used in survey research. It involves dividing the sample randomly into several subsamples. If the subsamples are large enough, randomization should ensure that the subsamples are similar in composition. The same items are administered to each subsample and reliability is assessed by the similarity of responses across the subsamples.

 

The advantages of this method are:

  • there is no reactivity problem because no case is measured twice using the same measuring instrument.
  • there is no time elapse between the measurements so there is no confounding effect from possible changes in the cases themselves.
  • no need to come up with twice as many items as needed.

The disadvantages are:

  • a large sample size is required in order for randomization to produce equivalent subsamples.

 Topic 19: Scaling

Overview

What is scaling?

Five criteria for assessing scales

Likert scaling

Guttman Scaling

 

What is scaling?

 

Scaling involves rank-ordering individuals in terms of whether they possess more (or less) of the target property e.g. alienation, political interest, authoritarianism

We’re trying to assign a single representative value or score to a complex attitude or behaviour.

 

Ex: a college student might be judged on any of a myriad of possible dimensions.

 

The individual’s score on the scale is determined by his or her responses to a series of questions, each of which provides some indication of the individual’s relative alienation, political interest, etc.

 

Combining items to form a scale serves two important functions:

  • reduces measurement error and thus enhances reliability and validity. A single item may produce idiosyncratic results and/or capture only a limited aspect of the target property
  • simplifies data analysis

-a scale is a measuring instrument, so the properties of good measures still apply

 

Ex: ‘The Cubans are evil and cannot be trusted’: statements need to be more specific than this.

 

Five criteria for assessing scales

  • unidimensionality—the scale should measure one property and one property only
  • linearity and equal intervals—increasing scores should correspond to increasing amounts of the target property and the scores should be based on interchangeable units
  • reliability—the scale should assign values consistently
  • validity—the scale should measure the target property
  • reproducibility—knowing an individual’s total score should enable us to predict correctly which items s/he agreed with and which items s/he disagreed with

 

Likert scaling I

 

Likert’s primary concern was unidimensionality.

 

He eliminated the need for judges (as required by Thurstone’s method) by getting respondents in a pilot sample to place themselves on an attitude continuum running from “strongly agree” to “strongly disagree” on a series of statements relating to the attitude to be measured.

 

Likert scaling requires a pool of attitude statements, some indicating a favourable attitude and some indicating an unfavourable attitude—but none worded so blandly that almost everyone would agree or so extremely that almost everyone would disagree.

 

These statements are administered to a pilot sample of 100 or more respondents who are similar to those who will be participating in the survey proper. Each respondent is asked to indicate how strongly s/he agrees or disagrees with each statement.

 

Each respondent’s responses are scored. Scores typically range from 1 to 5 (more complex scoring schemes have been shown to possess no advantages). The researcher has to decide whether ‘1’ indicates a very favourable attitude or a very unfavourable attitude. It does not matter as long as the scoring is consistent.

 

If ‘5’ indicates a very favourable attitude, strongly agreeing with a favourable statement is scored ‘5’ and so is strongly disagreeing with an unfavourable statement.

 

Once the individual responses have been scored, a total score is computed for each respondent by simply adding up the scores for each statement (hence the alternative name of summated rating scale). If there are 20 statements, possible scores will range from 20 to 100.
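The scoring rule can be sketched as follows (hypothetical statements and responses; ‘5’ is assumed to mark the favourable end, so responses to unfavourable statements are reverse-coded before summing):

```python
# Likert (summated rating) scoring with reverse-coding.
def score_response(response, favourable):
    """response: 1 (strongly disagree) .. 5 (strongly agree)."""
    return response if favourable else 6 - response  # reverse-code

# (response, is_favourable_statement) for one respondent, five statements.
answers = [(5, True), (4, True), (1, False), (2, False), (5, True)]
total = sum(score_response(r, fav) for r, fav in answers)
print(total)   # possible range for 5 statements is 5..25
```

Strongly disagreeing (1) with an unfavourable statement scores 6 − 1 = 5, matching the scoring rule stated above.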

Likert scaling III

 

The next step is to perform an item analysis to determine which are the best items to retain in the final scale. The purpose of this analysis is to ensure unidimensionality. There are three different ways to do this:

  • correlate each statement with a reliable criterion that is known or assumed to reflect the target attitude and retain those statements that produce the highest correlations. Such external criteria are typically not available.
  • internal consistency method—for each statement, correlate the score with the respondent’s total score minus the score for that statement. Retain those statements that produce the highest correlations. Factor analysis (correlate every item with every other item and search for measures that intercorrelate highly) offers a more sophisticated way of ensuring internal consistency

 

-Both ways of ensuring internal consistency have been criticized for violating the assumptions underlying the statistical methods employed (i.e. using ratio-level methods with ordinal-level data)

  • index of item discrimination—retain those statements that best distinguish between respondents scoring in the top 25% and respondents scoring in the bottom 25%. If respondents with high scores and respondents with low scores respond similarly to a given statement, it cannot be measuring the same attitude as the statements as a whole.

 

-Once the statements have been selected, the scale is administered to respondents in the survey proper and their total scores are calculated. Scores are typically averaged in order to yield a scale that runs from 1 to 5 (purists use the median score since the level of measurement is only ordinal).
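The internal consistency method above can be sketched in code: each item is correlated with the respondent’s total minus that item, and low (or negative) correlations flag items for removal. Data are hypothetical, with one deliberately noisy item:

```python
# Corrected item-total correlation for Likert item analysis.
import statistics

def pearson(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# responses[r][i]: item 3 is deliberately noisy (doesn't track the others).
responses = [
    [5, 4, 5, 1],
    [2, 1, 2, 5],
    [4, 4, 3, 2],
    [1, 2, 1, 4],
    [3, 3, 4, 1],
]

k = len(responses[0])
corrected = []
for i in range(k):
    item = [r[i] for r in responses]
    rest = [sum(r) - r[i] for r in responses]   # total minus this item
    corrected.append(pearson(item, rest))

print([round(c, 2) for c in corrected])
```

The noisy item produces a negative corrected correlation and would be dropped; the other three correlate positively with the rest of the scale.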

 

Advantages of Likert scales:

  • reliability—respondents like the format and find it easier to answer when they can qualify their agreement or disagreement, so they respond consistently
  • ease of construction
  • unidimensionality—if the statements are internally consistent and/or discriminate among respondents, it is likely that they are all measuring the same attitude.

 

Disadvantages

-lack of reproducibility—the same total score (or average score) can be obtained in many different ways. Two respondents may have the same total score and yet have answered quite differently

-unidimensionality is no guarantee of validity.

-lack of equal intervals—this criticism is questionable since it is unrealistic to think that we could come up with equal ‘units’ of alienation, interest, authoritarianism, etc.

-measuring the same thing, but not necessarily the target property

 

Guttman scaling I

 

In Guttman scaling, the twin concerns are achieving unidimensionality and reproducibility. Reproducibility means that we can predict a respondent’s responses to individual scale items knowing only his or her total score

Specifically, Guttman scaling enables us to predict each respondent’s responses to individual items with no more than 10% error for the sample as a whole.

 

The items that comprise a Guttman scale have the properties of being ordinal and cumulative. (can rank order in terms of having more or less of the property)

 

The scale is like a ladder—if someone has reached a higher rung, we can be fairly sure that they have climbed the lower rungs as well. Similarly, if the respondent says ‘yes’ to an item that indicates more of the property being measured, we can be reasonably confident that s/he will also have said ‘yes’ to all of the items that indicate less of the property.

-aim for somewhat equal intervals, avoid a big leap.

 

Guttman scaling II

 

Creating a Guttman scale involves using scalogram analysis to test a set of items for scalability. Scalogram analysis enables us to see how far our items and people’s responses to them deviate from perfect reproducibility. Scalability is indicated by a coefficient of reproducibility of .90 or higher.

 

It involves arranging and re-arranging both the items and the respondents in a table. The items are ordered across the top of the table from most to least according to the number of ‘yes’ responses they received. Respondents are ordered down the side of the table from most to least according to how many ‘yes’ answers they gave. Software is available for this purpose.
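A simplified sketch of the reproducibility calculation (the items are assumed to be already ordered from easiest to hardest; responses are hypothetical, 1 = yes, 0 = no):

```python
# Guttman's coefficient of reproducibility: predict a perfect 'ladder' of
# yes-responses from each respondent's total score and count deviations.
responses = [
    [1, 1, 1, 1],   # perfect scale types
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 0, 1, 0],   # deviant: said yes to a 'harder' item, no to an easier one
]

n_items = len(responses[0])
errors = 0
for row in responses:
    total = sum(row)
    predicted = [1] * total + [0] * (n_items - total)  # ideal ladder pattern
    errors += sum(a != b for a, b in zip(row, predicted))

cr = 1 - errors / (len(responses) * n_items)
print(round(cr, 2))   # scalability requires CR >= 0.90
```

Here 2 errors out of 20 responses give CR = 0.90, right at the threshold mentioned above.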

Guttman scaling III

 

The aim is to achieve a triangular pattern: ‘yes’ responses fill the upper-left of the table and ‘no’ responses the lower-right.

Items that produce too many deviations from this pattern are dropped, and so are redundant items (i.e. items that do not lead to greater differentiation among respondents). Also dropped are items to which almost everyone said ‘yes’ (or almost everyone said ‘no’), to guard against inflated estimates of reproducibility.

 

If we have a large sample of respondents, we should randomly divide the sample into subsamples and repeat the scalogram analysis for each subsample to check for consistency.

 

Advantages of Guttman scaling:

  • while there is no guarantee of unidimensionality, it is likely that items that meet the test of scalability are measuring the same property.
  • reproducibility is high by definition.
  • produces short but highly effective scales.
  • can be used to scale behaviours and events (e.g. political participation, acts of international aggression) as well as attitudes.

Disadvantages

-may be impossible to achieve an acceptable level of reproducibility.

-items may scale in a pilot study but not in the survey proper. Not all areas of study will yield an acceptable Guttman scale.

Topic 20: Designing a sample

Overview

Probability versus non-probability sampling

Simple random samples

Systematic random samples

Proportionate stratified random samples

Disproportionate stratified random samples

Multi-stage random cluster samples

Convenience samples

Purposive samples

Quota samples

Probability versus non-probability sampling

 

In probability (or random) sampling, every member of the population has a known and non-zero probability of being included in the sample.

 

In non-probability (or non-random) sampling, there is no way of specifying the probability of inclusion and there is no assurance that every member of the population has at least some probability of inclusion.

 

Probability sampling has two crucial advantages:

-it avoids conscious or unconscious bias on the researcher’s part because the researcher has no say in deciding which cases get included

-it allows us to use inferential statistics to estimate the likelihood that our sample results differ from those we would have observed if we had studied the entire population.

Despite these advantages, non-probability sampling is used when:

  • the advantages of convenience and economy outweigh the risk of having an unrepresentative sample (e.g. research conducted on short notice).
  • no population list or surrogate population list is available (probability sampling requires access to a full population list).

 

Simple random samples

 

Simple random sampling is the most basic probability sampling design and forms the basis for more complex designs.

 

Simple random sampling gives every member of the population an equal probability of inclusion and gives every possible combination (of the desired sample size) of members of the population an equal probability of inclusion.

 

For a small population, a simple random sample can be drawn using the lottery method. For larger samples, a random number generator is used.

 

Disadvantages:

  • can produce extreme samples (e.g. only the rich, only the poor) because every possible combination of people has an equal probability of inclusion. This is improbable, but it is not impossible.
  • tedious and time-consuming unless a population list is available in an electronic format.

Systematic random samples I

 

Systematic random sampling involves dividing the total population size by the desired sample size to yield the sampling interval (which is conventionally denoted ‘k’). Then, beginning with a randomly selected person from among the first k people, the researcher selects every kth person. Example:

Population size = 10,000; desired sample size = 500; k = 10,000/500 = 20

The researcher would randomly select one person from among the first 20 (say, the 14th person) and then select every 20th person thereafter (14, 34, 54, 74, etc.).
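The worked example can be sketched in code (the population list here is just a stand-in for a real list of 10,000 members):

```python
# Systematic random sampling: k = N/n, random start in the first k,
# then every kth member thereafter.
import random

population = list(range(1, 10001))   # stand-in for a population list of 10,000
desired_n = 500
k = len(population) // desired_n     # sampling interval: 20

rng = random.Random()                # the start must be chosen at random
start = rng.randrange(k)             # index 0..k-1 within the first k people
sample = population[start::k]

print(len(sample))   # 500 members, each k apart on the list
```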

 

Provided the first person is selected randomly, every member of the population has an equal a priori probability of inclusion.

Systematic random samples II

 

Advantages:

  • less cumbersome than simple random sampling—only one random number is required and thereafter it is simply a matter of counting off every kth person.
  • reduces the risk of extreme samples since only combinations of people k people apart have an equal probability of inclusion.

Disadvantages

-can produce extreme samples if there is a cyclical order in the population list and this order coincides with the sampling interval.

-only feasible with small populations

 

Proportionate stratified random samples I

Proportionate stratified random sampling is used to ensure that key groups within the population are represented in the correct proportion. It provides a better solution to the problem of extreme samples.

 

Instead of sampling the entire population, the population is divided into homogeneous groups, or ‘strata’, and a series of samples is selected, one from each stratum. These samples are then combined to produce a representative sample of the population as a whole.

 

The number of people selected from each stratum is proportional to that stratum’s share of the population. Simple random sampling or systematic random sampling is used to select the samples from the strata and so there is no departure from the principle of randomness

The stratification variables must be:

  • relevant to the phenomenon to be explained i.e. people within strata should be similar with respect to the DV—and people in different strata should differ with respect to the DV.
  • operationalizable—this means that we require information about the value of each person in the population on the stratification variable(s) before conducting our study.

Advantages

  • avoids extreme samples for the characteristics that are used to stratify the population
  • increases the level of accuracy for a given total sample size OR achieves the same accuracy at a lower cost. This follows from the formula that is used to calculate the confidence interval

Stratification reduces variability (S)–and the less variability there is in the population being sampled, the smaller the error term (E) will be. Or, conversely, the less variability there is, the smaller the sample size (N) can be to achieve the same level of accuracy (E)
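Proportionate allocation itself is simple arithmetic: each stratum’s share of the sample equals its share of the population. A sketch with hypothetical strata:

```python
# Proportionate stratified allocation: sample n from each stratum in
# proportion to that stratum's share of the population.
population_strata = {"urban": 6000, "suburban": 3000, "rural": 1000}
total_n = 500

population_size = sum(population_strata.values())
allocation = {
    stratum: round(total_n * size / population_size)
    for stratum, size in population_strata.items()
}
print(allocation)
```

A simple or systematic random sample of the allocated size is then drawn within each stratum, so the principle of randomness is preserved.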

Disproportionate stratified random samples

 

Disproportionate stratified random sampling is the same as proportionate stratified random sampling except that the researcher deliberately over-samples some strata and/or under-samples others.

 

This is done for analytical reasons:

  • to facilitate statistical analysis by having an equal number of cases in the different categories of the IV.
  • to ensure sufficient cases for meaningful analysis where a stratum is small but substantively or theoretically important.

By definition, people belonging to some strata have a higher probability of inclusion. This is no problem when the sub-samples are being analysed separately or comparatively. However, if the sub-samples are combined into a single sample, corrective weights must be used to ensure proportionality.
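The corrective weight for each stratum is its population share divided by its sample share. A sketch with hypothetical figures, where a small stratum has been over-sampled to equal size for analysis:

```python
# Corrective weighting after disproportionate stratified sampling:
# weight = (stratum's population share) / (stratum's sample share)
population = {"majority": 9000, "minority": 1000}
sample     = {"majority": 250,  "minority": 250}   # equal n for analysis

pop_total = sum(population.values())
samp_total = sum(sample.values())
weights = {
    s: (population[s] / pop_total) / (sample[s] / samp_total)
    for s in population
}
print(weights)
```

Applying these weights, the 250 over-sampled minority cases count as 50 and the 250 majority cases as 450, restoring the 90/10 population proportions in the combined sample.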

 

Multi-stage random cluster samples I

 

All the methods described so far require a complete list of the population. Multi-stage cluster sampling is used when no population list is available (e.g. all university students in Canada, all eligible voters, all Catholics). Sampling proceeds in stages.

 

At the first stage, the researcher randomly selects groupings, or ‘clusters’, of population members (e.g. a university is a ‘cluster’ of university students). At the second stage, the researcher randomly selects people from within the selected ‘clusters’. So lists only have to be obtained and/or compiled for the selected clusters.

 

Depending on the population being sampled, several stages may be involved.

 

e.g. randomly selecting electoral districts, then randomly selecting polling divisions within the selected districts, and finally selecting eligible voters from the selected polling divisions.

 

Or randomly selecting school boards, then randomly selecting schools from within the selected school boards, then randomly selecting students from within the selected schools.

Advantages

  • obviates the need for a complete population list.
  • reduces costs in sampling a geographically scattered population by concentrating interviews within selected localities.

Disadvantages

  • increases the risk of sampling error because each stage has its associated risk of sampling error.

Accuracy can be increased by:

  • increasing the sample size—but there is a trade-off between increasing the number of clusters to be selected and increasing the number of cases to be selected from those clusters.
  • increasing accuracy by reducing variability—i.e. combine stratification with multistage random cluster sampling.

-so far: all of these designs avoid bias and enable the use of inferential statistics; they can be simple or complex.

 

Convenience samples

 

There are three different basic non-probability sampling designs. In increasing order of desirability, they are: convenience sampling, purposive sampling and quota sampling.

 

Convenience sampling is just what its name implies—the researcher selects whatever people happen to be conveniently available e.g. the first 100 people who agree to be interviewed, students in an introductory psychology class.

 

This method is easy and inexpensive—but it is likely to yield unrepresentative samples. It should only be used (if at all) for pilot studies or for pre-testing questions.

Purposive samples

 

Purposive (or judgmental) sampling offers a better approach. The researcher uses his or her judgement and knowledge of the target population to select the sample, purposively trying to obtain a sample that appears to be representative.

 

With this method, the probability of being included depends entirely upon the judgement of the researcher.

 

In the hands of a skilled researcher, this method has been known to yield surprisingly accurate sample estimates.

Quota samples I

 

Quota sampling is the most sophisticated method of drawing a non-probability sample. The goal is to select a sample that represents a microcosm of the target population.

 

Interviewers are given a quota of individuals to select, specified by attributes such as age, sex, ethnicity, education, and income. They are required to select individuals displaying various combinations of these characteristics in proportion to their share of the population.

Quota sampling II

 

This method is generally superior to convenience or purposive sampling, but it has several limitations:

  • it requires up-to-date and accurate information about the target population.
  • there is ample opportunity for bias—the only constraint is that interviewers fill their quotas. The selected individuals may display the requisite combination of characteristics, but that does not guarantee their representativeness.
  • the number of characteristics that can be taken into account in determining quotas is limited. Say there are four characteristics: sex (2 categories), religion (4 categories), ethnicity (3 categories), and education (4 categories). That means 2 x 4 x 3 x 4 = 96 different types of people, i.e. it becomes prohibitively expensive to track down people who meet the quota requirements.
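The quota arithmetic is just the product of the category counts per characteristic, as a quick check shows:

```python
# Number of distinct quota cells = product of categories per characteristic.
import math

categories = {"sex": 2, "religion": 4, "ethnicity": 3, "education": 4}
cells = math.prod(categories.values())
print(cells)   # 96 distinct types of people to track down
```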

 

TOPIC 21 Data Gathering Techniques

Overview:

Basic Ethical Principles

The meaning of informed consent

Why can the principle of informed consent be problematic?

The cost-benefit approach

 

Basic Ethical Principles:

  • There should be no deception involved in the research
  • There should be no harm (physical, psychological or emotional) done to participants.
  • Participation should be voluntary
  • Participation should be based on informed consent.

 

The Meaning of Informed Consent:

Informed consent can be defined as ‘the procedure in which individuals choose whether to participate in an investigation after being informed of facts that would be likely to influence their decision.’

 

This definition raises 4 issues:

– Competence – do participants have the mental or emotional capacity to provide consent?

– Voluntarism – are participants in a situation where they can exercise self-determination?

– Full information – do participants have the information they need to give informed consent?

– Comprehension – do participants understand the potential risk involved?

 

Why can the principle of informed consent be problematic?

How much information is needed for consent to be ‘informed’?

  • What if it is extremely important that participants not know the true purpose of the study?
  • The trade-off between ethical considerations and methodological considerations is often cast in terms of a conflict of rights.

Researchers must balance respect for human dignity, free and informed consent, protection of vulnerable people, privacy and confidentiality, justice and inclusiveness, and harms and benefits, always minimizing harm. Research may cause embarrassment, loss of trust in social relations, or lower self-esteem. There can also be cases of risk of physical harm: Rex Brynen, for example, had his interview records carried out in a diplomatic bag to protect the information.

 

The Cost-benefit Approach:

  • The cost-benefit approach involves weighing the potential contribution to knowledge and human welfare against the potential negative effects on the dignity and welfare of the participants.

This approach can be problematic:

  • The ethical issues involved can be subtle, ambiguous and debatable.
  • We are not necessarily weighing predictable costs and benefits but possible costs and benefits.
  • The process of balancing cost and benefits is necessarily subjective and value-laden.

 

Milgram’s obedience-to-authority study is the classic example: it was the research that triggered these ethical questions.

  • Emotional and psychological stress shapes people’s actions.
  • Milgram test: each time the ‘learner’ gave a wrong answer, the participant was instructed to administer a shock at an incrementally higher voltage.

 

TOPIC 22 Observational-Methods

What is observational research?

Some advantages of observational research

The trade-offs involved in observational research

Types of observational research

Other drawbacks of observational research

 

What is Observational Research?

  • It is the direct observation of political behaviour as it occurs in its natural setting. The researcher can study the behaviour as it occurs.
  • Observational research differs from other methods in that it melds data collection and theory generation. The researcher doesn’t come in with carefully formulated hypotheses.
  • Data collection and data analysis are not discrete stages. Instead, the researcher attempts to develop a generalized understanding of an unfolding process over an extended time period, through a blend of induction and deduction.

 

Some advantages of observational research

  • Flexibility – the researcher can modify the research design in the light of emerging theoretical understandings and/or changes in the situation being studied.
  • Feasibility – no elaborate preparations are necessary.
  • Low cost – observational research does not require expensive equipment or staff.
  • Depth of understanding – observational research enables the researcher to develop a comprehensive and nuanced understanding.
  • External validity – behaviour is studied in its natural setting (minimizing or eliminating artificiality).
  • Contextual understanding – the researcher is able to analyse the context in which behaviour occurs.
  • Immediacy – the researcher does not have to rely on participants’ recall.

The Trade-Offs involved in observational research

Ethical considerations, reactivity and access.

  • If people know they are being observed, their behaviour may be affected. They may even refuse permission. BUT if they are observed without their permission or under false pretences in order to avoid the reactivity problem and/or solve the access problem, the research becomes ethically problematic.

 

Types of observational research I

  • Covert participant observation is intended to solve both the reactivity problem and the access problem. The researcher is either a genuine participant in what is being observed or pretends to be a genuine participant.
  • The researcher’s true identity is unknown to the other participants. They perceive the researcher to be just another participant BUT:
  • This type of observational study raises significant ethical issues (lack of informed consent, deception, violation of privacy).
  • It does not necessarily solve the reactivity problem (the researcher’s own behaviour may affect the behaviour under study).
  • There is a risk of getting caught up in the assumed role.

 

Types of observational research II

  • Assuming the role of participant-as-observer is intended to resolve the ethical issue, but poses problem of reactivity.
  • The researcher participates fully in the behaviour under study, but makes it clear that he or she is also undertaking research.
  • The difficulty with this type of research is being accepted in this role. Access may be denied.

 

Types of observational research III

  • In the role of observer-as-participant, the researcher identifies him or herself as a researcher and makes no pretence of being a participant.
  • There are still the problems of access and reactivity, but there is less risk of getting caught up in the behaviour that is being observed.
  • Finally, there is the role of complete observer. The researcher observes the behaviour without becoming part of it in any way. Typically, the behaviour is observed in a setting that is regularly open to the public.
  • This role avoids ethical dilemmas and the problems of access and reactivity. The researcher is less likely to lose his or her scholarly perspective, but is also less likely to develop a full appreciation of the behaviour under study.

 

Other drawbacks of observational research:

  • Unreliability – there are ample opportunities for random error and we cannot be sure that another researcher observing the same behaviour would draw the same conclusions.
  • Lack of generalizability – because of the personal nature of the observations and the potential for biased ‘samples’
  • Low transmissibility and replicability.

 

Know the difference between inferential and descriptive statistics, with an example of each: descriptive statistics summarize the data at hand (e.g. a sample mean), while inferential statistics generalize from a sample to a population (e.g. a confidence interval).

Al Franken – Giant of the Senate Review

Al Franken is Funny:

I read his Big Fat Lies and the Liars Who Tell Them after stumbling on it at a retirement community library in Southwest Florida. Sometimes the jokes fall flat. He’s not hilariously funny, but he’s funny; he knows how to construct a setup and punchline. The Giant of the Senate, though, is hilarious. Reading his account of his time in comedy and politics is enjoyable. Here’s what I got out of it:

Al Franken Is Awesome, This Book is Inspirational:

The book lays out how laws are really passed or not passed. For example, the Republican obstruction during the Obama years was something anyone could see, but the depravity of the whole thing is made clear in this book. Delay, delay, delay; it’s rather frustrating. Franken can be a bit sanctimonious at times about American values, sometimes conflating US values with Democrat values, but that’s okay, he only went to Harvard i.e. not the smartest cookie. However, Franken’s stories about how difficult it is to pass legislation are very instructive. He mentions Saskatchewan, although he mispronounced it. He talks about Indigenous rights in Minnesota and Indigenous education. He does, however, recognize that the media is partly to blame for the free coverage Trump was able to garner throughout the primary and the general race. Overall, I’m really impressed and might start a podcast, I’m that impressed.

About SNL, John Belushi, the De-humanizer and the Show-Horse versus the Work Horse:

SNL writers believe that you have to write jokes for people who don’t know about politics and reward those who do with a few wink-wink jokes. When Bob Woodward wrote the book on John Belushi’s comedic career, it focused excessively on Belushi’s cocaine addiction, which Franken also had for a time (the SNL workload was grueling). Franken had a joke about kids downloading bestiality which was really well constructed and which I won’t reproduce here, but his opponents turned the joke into a serious statement about being pro-bestiality. Franken realized he had to be serious in the US Senate.

Fundraising is 80% of What a US Senator has to Do, and that’s Sad:

Franken describes all these ridiculous calls that he has to make, and it’s all about money. First of all, his sales pitch in the book is terrible: “Hi, you democrat, me democrat, me need money, you money have.” And sure, he gets money from comedian friends. Franken seems to have come into the US Senate as an original and left as a copy. Still, Franken is cool, don’t get me wrong.

Franken is a Partisan Thinker, and that’s Sad:

Franken doesn't realize what Jane Jacobs and Socrates knew: that ideology is a means of aggregating vote totals. It's a means to power, and politicians use ideology to bolster their support. Language is inexact and leads to hilarious word-based debates rather than data-based debates. Democrats and Republicans are ideological camps that simply prioritize different areas of political resource allocation. End of discussion, do I need to draw a diagram? Fuck! Franken says that the US Senate is not a discursive dialogue… sounds like it. But perhaps Franken is part of the problem? Franken says Democrats have better ideas, they're just not as good at explaining those more nuanced ideas… eye roll. He then says things like "if you believe in clean water, you're a Democrat." His persuasion is partisan and hammy.

Franken on Climate Change, part of the problem is language:

Franken is bound by his upbringing in an era with no internet. It's sad. Here's an example of the language problem. The climate change discussion is interesting: Republicans and Democrats are binary, their intellect deficient. One says the climate is always changing; Franken says man is causing climate change. Um, you're both oversimplifying! Nuanced politicians, are there any? Probably not. Why? Because they need to corral a spectrum of thought into a binary. Of course humanity is impacting Earth's climate; the very house you are sitting in is human-made. Do you really think humans are not affecting the planet's climate even at a 1% rate? The debate should really be about what to do, given that our species might be under major duress in 100 years' time if we continue to change the parts per million of CO2 in the atmosphere. Denying that CO2 is being produced at a greater rate by humans (who, unlike a volcano, have the brains to reduce or increase CO2 production) is not what Republicans are doing for the most part. It's about the economy, stupid, and about predicting future weather being, um, difficult at best.

Al Franken Is Thoughtful but Still Partisan, Did I Mention That?:

Franken thinks in terms of society. He sees the government as a positive contributor. He also believes that lawyers are over-represented in politics. You're told to pull yourself up by the bootstraps, but the truth is that you need a pair of boots first, and only the government can provide those boots. Are any of those statements disagreeable to you?

The Ted Cruz Chapter Is Hilarious:

That is all.

What Now for Al Franken?:

There is a line for comedians that you cannot cross, and that line is retroactive. Humour can be weird, but there are major risks when you make off-colour jokes or abuse your fame to force yourself on others. The lesson is clear as day: leave your sexual impulses at home. Franken resigned so that Democrats could take the upper hand against Roy Moore AND because, at the height of MeToo, due process did not matter in the court of public opinion.

If You Cannot Afford to Take Risks, You Should Not Be Involved In Politics

Politics is a blood sport. You can only survive transgressions if, first, the transgression is understandable, common or somehow justifiable AND you are really good at getting things done in public office and can truly lead with a dynamic vision. Otherwise, you're like a provincial court judge or other bureaucrat: any significant mistake will stand out because you have been trying to be perfect at all times.

Tocqueville’s Democracy in America – As a Framework for The Future

It's the most important work on American democracy and the US of the 1830s. Democracy in America is a very long book, though, at about 1,000 pages. The truth is that every American and every political scientist should read it.

Two ways to look at it:

  1. It's a historical artifact of 1830s America.
  2. It's a work of political science and sociology.

The French Revolution ruined the de Tocqueville family wealth. The author studied Voltaire, Rousseau and Pascal. After the July Revolution of 1830, Tocqueville took the oath to the new Bourbon monarchy. Tocqueville's ostensible purpose in visiting the US was to study prison reform. However, he really wanted to identify lessons from US democracy and its inclinations: what should we fear or hope for in this new democratic movement emerging in the US? The Trail of Tears occurred in the 1830s… as did the Nullification Crisis. There was also slavery; but Tocqueville observed a 'classless' society.

Funny Associations:

  • The Voluntary Association / Local Sovereignty
  • American Bible Society; Temperance Society;
  • The Lady’s Association for the Benefit of Gentle Women of Good Family Reduced In Fortune Below the State of Comfort To Which They Have Been Accustomed.
  • Voluntary Associations: don’t rely on the government to solve their problems.
  • Democracy at the local level then is far more robust. Tocqueville and his co-author won a cash prize for their research.
  • The federal government was very small; voluntary association was central and patriotism is evident.

  • The hierarchies of power could be crushed as long as we are all treated as free and equal… and meet up to talk about it.
  • Freedom and equality are mutually reinforcing. But then we asked:
  • Freedom and equality also seem to pull in different directions…
  • Locke wanted to separate powers; but it’s an institutional device.
  • How to combine popular rule with political wisdom?
  • “1835 Democracy in America”
  • America is a blank slate. Tocqueville thought France would become like America, but feared that democracy might revert back to monarchy.
  • Equality of conditions (equality of opportunity): it's a gradual spread of the concept.

Features of American Democracy:

I) Local government: localism. Local democracies are the cradle of civil society in townships. The institutions that put democracy within reach of all the people were not that expensive to build. The people are legislating and organizing. Alexis de Tocqueville told his readers to read Rousseau every day.

The township format itself is Aristotelian. The township exists by nature. There is the old Polis character described by Aristotle which Tocqueville believes is very important for a democratic society.

II) Civil Association: these voluntary groups are immensely powerful and energizing. Tocqueville calls the art of association the "mother science": uniting in associations to pursue common goals; civic association.

Robert Putnam is the modern champion of social capital. The decline in association is the Bowling Alone phenomenon. These are not natural ties; association is a learned activity, and civic society goes into decline as our isolation cripples our civic associations.

Are we in a couch potato crisis? Yes, in 2018!!

III) Spirit of Religion: America is primarily a puritan democracy, rooted in early Puritanism. Religion will not disappear with the decline of one faith; it's rather a shift in faith. We can't separate faith from the dignity of the individual. Tocqueville looked at religion purely for its social effects.

Increase the number of factions in order to prevent anyone from being the dominant one.

Democracy also carries a danger: majority opinion can enforce conformity, an early version of the worry about political correctness.

Mores of the State:

  • Compassion, restiveness,
  • Democracy has made us gentler: broadcast tv has made us indifferent to others in our group.
  • Bill Clinton “I feel your pain.”

Political Educator: Tocqueville saw the spread of equality as an almost divine, providential process.

  • Restiveness. We want to ask what kind of people we create.
  • What is democratic statecraft? A new political science, based on a novel history of human agency; as any reader knows, there is power in history.
  • It's like we are part of an immense process.
  • Certainly the pendulum has swung away from civil society in many ways. But generally online interactions are positive.

Leadership Under Duress: 2 Technologies Used by Officers of the Law – Gwent

These Welsh police officers showed real courage, professionalism and calm under true duress. In this video, the assailant refused to put his knives down after repeated requests from law enforcement. Follow the law and obey police officers (if you live in an advanced western democracy). This guy didn't. He even demanded that they not record the situation. Clueless! The police are an extension of the government that we as citizens pay for.

What's really interesting is the use of tasers and body cams; the police (who serve their community at great personal risk) managed, through quick action, to disarm this dangerous person. While some may argue that other professions are statistically riskier (Alaskan crab fishing), I would argue that the kind of risk needs to be included in any stat used to evaluate risk. Death by stabbing is a bit more horrendous than falling off a boat. And yes, police are compensated for their hard work and are aware of the risk, but this kind of courage is commendable. Unfortunately, taser guns do not completely incapacitate an assailant; hopefully improved technology will be available soon.

 

How Finance Is Used In Business Operations | Appreciating Depreciation!

Dollarama: Looking at their Industry leadership in Canada and Earnings Management 101

  1. competing on price
  2. cost leadership
  3. cheap product retailer

Dollarama IPOed in 2009 and became a Canadian darling, trading as TSX:DOL. Then in June 2014, the first quarter of their fiscal 2015 year (per their filing standard), they changed the useful life of their stores and additional fixed assets from 10 years to 15 years. So what? Well, this spreads depreciation over a longer lifetime. Assets stay on the balance sheet longer, which means Dollarama records a lower depreciation expense per year. Since depreciation expense reduces net income, the effect in Dollarama's case was:

  • they gained a $0.04 per share increase by, in effect, tweaking the depreciation rates of their stores.
  • David Milstead of the Globe and Mail said the reasoning behind this is that “Dollarama needed to keep the P/E at 21 versus 16.”

What Milstead is arguing is that Dollarama had to hit analyst expectations. Gross margin was down in the first quarter, so how was that going to happen?

You can imagine the C-suite discussion: "We want to maintain a premium valuation." What's a bit shocking is that the adjustment to the depreciation rate of their stores made up exactly the shortfall. Weird, right? Although Dollarama could legitimately set depreciation for their stores at 15 years, it is plausible deniability: the kind of activity that draws on similar behaviour in both corporate and political decision-making. Only by hitting expectations could Dollarama maintain its premium P/E of 21. In the long term, however, this C-suite decision would have a future impact on CAPEX. Gross margins were down and trending lower. They can't increase prices; they have nothing to divest; they have few receivables (i.e. beyond gift cards). They could delay paying suppliers, or they could fiddle with depreciation. And the future impact on depreciation? Net income goes down in future years, because those assets are still being depreciated!
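As a rough sketch of the mechanics (the asset base and share count below are hypothetical stand-ins, not Dollarama's actual figures), extending the useful life cuts the annual depreciation charge, and the savings flow straight into earnings per share:

```python
def annual_depreciation(cost, useful_life_years, salvage=0.0):
    """Straight-line depreciation expense per year."""
    return (cost - salvage) / useful_life_years

# Hypothetical figures, NOT Dollarama's actuals:
store_assets = 600e6        # assumed gross store assets
shares = 130e6              # assumed shares outstanding

dep_10yr = annual_depreciation(store_assets, 10)
dep_15yr = annual_depreciation(store_assets, 15)

# The depreciation saved flows (pre-tax) straight into earnings per share.
eps_boost = (dep_10yr - dep_15yr) / shares
print(f"Annual depreciation drops by ${(dep_10yr - dep_15yr)/1e6:.0f}M; "
      f"pre-tax EPS boost of roughly ${eps_boost:.2f}")
```

With real numbers and after tax, the same arithmetic is how a one-line accounting-estimate change can plug a few cents of an EPS shortfall.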

Depreciation as Form of Earnings Management

Recall that depreciation expresses how much of an asset's value is used up, matching the expense of an asset against the income that asset earns. Depreciation is also used for income tax purposes to write down assets, thus reducing the tax burden on a business.

Take straight-line depreciation on a tangible asset worth $500K with a 5-year useful life. Every accounting year, the firm expenses $100K, matched against the money the tangible asset creates each year. If you change the depreciation of that $500K asset from 5 years to 10 years, you expense $50K per year rather than $100K, which means the asset stays on the books longer.
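The arithmetic above can be written as a small schedule generator (a minimal sketch; the function name is my own):

```python
def straight_line_schedule(cost, useful_life_years, salvage=0.0):
    """Return (year, depreciation expense, end-of-year book value) rows."""
    expense = (cost - salvage) / useful_life_years
    rows, book = [], cost
    for year in range(1, useful_life_years + 1):
        book -= expense            # book value declines by the expense each year
        rows.append((year, expense, round(book, 2)))
    return rows

# $500K asset: a 5-year life expenses $100K/yr; stretching to 10 years halves it.
print(straight_line_schedule(500_000, 5)[0])    # (1, 100000.0, 400000.0)
print(straight_line_schedule(500_000, 10)[0])   # (1, 50000.0, 450000.0)
```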

Accumulated depreciation sits on the balance sheet as a reduction from the total gross amount of a company's long-term Property, Plant and Equipment (PP&E). In other words, if you keep PP&E on the balance sheet longer, it benefits earnings / net income. When an asset is retired or sold, the total accumulated depreciation associated with that asset is reversed, completely removing all record of the asset from the company's books.

Depreciation expense is a non-cash expense: the monthly charge on a company's income statement is made by a recurring depreciation entry. Depreciation expense is debited on the income statement and accumulated depreciation is credited on the balance sheet. Since depreciation is the non-cash charge that reduces earnings, lowering the depreciation expense over a given period raises reported earnings.

Off-Balance Sheet Assets: Leasing – Capital vs Operating

An operating lease, sometimes called a true lease, is off the balance sheet, appearing only on the Income Statement.

Capital Leases: the lessor transfers ownership of the asset to the lessee, who records the lease on the balance sheet and takes the depreciation for the asset.

Operating Leases: the lessor owns the asset and keeps all the costs and benefits (including depreciation) associated with it. The lessee merely rents the asset and pays a lease fee. The tangible asset stays off the lessee's balance sheet.

Knife-edge criteria: under US GAAP, a lease is a capital lease if any one of these four holds:

  • Length of lease extends to >= 75% of the asset's useful life
  • Ownership of title is transferred at the end of the lease
  • Bargain purchase clause at the end of the lease
  • PV(payments) at an "appropriate" discount rate >= 90% of fair value

Firms will often structure leases to 'just' avoid capital-lease treatment, i.e. many operating leases are capital leases in disguise.
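The four bright-line tests above can be sketched as a simple classifier (pre-ASC 842 US GAAP rules; the function name and example numbers are my own illustration):

```python
def is_capital_lease(lease_term, useful_life, ownership_transfers,
                     bargain_purchase_option, pv_payments, fair_value):
    """Capital lease under the old US GAAP bright-line tests if ANY one holds."""
    return (
        lease_term >= 0.75 * useful_life        # >= 75% of useful life
        or ownership_transfers                  # title passes at lease end
        or bargain_purchase_option              # bargain purchase clause
        or pv_payments >= 0.90 * fair_value     # PV >= 90% of fair value
    )

# A lease structured to 'just' miss every test stays off the balance sheet:
print(is_capital_lease(7.4, 10, False, False, 89.0, 100.0))   # False
# Nudge one input over a threshold and it flips to a capital lease:
print(is_capital_lease(7.5, 10, False, False, 89.0, 100.0))   # True
```

The knife-edge nature is exactly why firms negotiate terms like 74.9% of useful life or 89% of fair value.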

Operating Lease:

Debit                    Rent Expense

Credit                   Cash

Nothing appears on the balance sheet.

Capital Lease:

Debit                    Asset

Credit                   Liability

Plausible deniability: you have a plausible justification that masks (convincingly!) another objective; you create your own schedule, keep flexibility, and aim for the highest support and money. There is a gap between the true economic reality of a firm and what is presented in the financial statements: the role of financial statements in valuation does not rely solely on reported profit.

Operating Lease versus Capitalized Lease

Operating lease: the transaction appears only on the Income Statement (the term is under 75% of the asset's useful life). Ratios look better, e.g. Return on Assets, since the asset is not on the balance sheet, and there are fewer journal entries. ROA = Net Income / Average Total Assets. A capitalized lease hits both the Balance Sheet and the Income Statement.

  • Operating Lease: Debit Rent Expense, Credit Cash.
  • Capitalized Lease: the transaction appears on both the Balance Sheet and the Income Statement.
  • Debit Expense, Debit Liability, Credit Cash.

Who is Fooled?

Banks can observe this activity as well. They are not fooled.

Off Balance Sheet Finance

Debt Covenant: a target performance range set by lenders and investors that the company must maintain.

A standard coverage metric is EBIT/Interest. But with off-balance-sheet finance, a better measure adds lease payments back: EBIT/(Interest + Rent).

Moody's and the other rating agencies are not fooled either. For Air Canada, Moody's applied its standard adjustments on top of the standard metrics, adding roughly $8.3 billion to long-term debt by putting the present value of the leases onto the balance sheet.

Air Canada Case

You need to compute the present value of the leases. Air Canada's capital leases were already on the books; the operating leases were not. Investors trade on heuristics, so yes, investors can be FOOLED. Take the operating lease payments for the next five years and beyond, discount them at the cost of debt of 7.32%, and you can almost double the debt load. If you capitalize the operating leases, the firm looks totally screwed. Take all the debt to the end of 2004.
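A sketch of the adjustment (all figures below are hypothetical stand-ins, not Air Canada's actual disclosures; only the 7.32% cost of debt comes from the notes above):

```python
def pv_of_lease_payments(payments, rate):
    """Present value of a stream of year-end lease payments."""
    return sum(p / (1 + rate) ** t for t, p in enumerate(payments, start=1))

# Hypothetical airline figures, NOT Air Canada's actuals:
reported_debt = 4.0e9
lease_payments = [0.55e9] * 5 + [0.45e9] * 5   # assumed 10-year commitment stream
cost_of_debt = 0.0732                          # rate cited in the notes

lease_debt = pv_of_lease_payments(lease_payments, cost_of_debt)
print(f"PV of operating leases: ${lease_debt/1e9:.1f}B; "
      f"adjusted debt ${(reported_debt + lease_debt)/1e9:.1f}B "
      f"vs ${reported_debt/1e9:.1f}B reported")
```

With a commitment stream of that rough size, the capitalized leases alone approach the reported debt, which is the "almost double the debt load" effect described above.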

Intangible Asset (Off the Balance Sheet)

  • Intangible assets include intellectual property, brand equity and goodwill.
  • Intangible assets fall into two categories:

1) those that appear on the financial statement and;

2) those that do not appear on the financial statement.

R&D expenses are almost always expensed. There are some exceptions: for software firms, accounting standards allow the capitalization of development expenses after product feasibility has been demonstrated. This rule applies to internally generated intangibles.

If the pharmaceutical firm develops a patent, that patent is not recognized on the balance sheet.

If the pharmaceutical firm purchases patents from others, the value of the patent appears on the balance sheet. It is difficult to determine the value of intangibles: spending money on R&D or advertising does not guarantee a benefit to the firm. However, when assets are acquired, there is an explicit or implicit valuation for the acquired intangible asset.

Is R&D an Asset?

In the US, all R&D is expensed. If you adjust the statements to capitalize it, an R&D asset appears on the balance sheet, with the matching entries in shareholders' equity and deferred taxes.

What about Advertising: the Energizer Bunny didn't improve Duracell's sales

A simplified asset-side view of the balance sheet:

Goodwill              <- intangible

Net Assets

Valeant case: R&D is expensed in the US, but acquired R&D shows up in goodwill and intangibles.

What are the future benefits of its assets?

Goodwill: you can't sell it on its own.

Intangible assets are massive: you can put the intangibles in goodwill rather than in net assets. Valeant's ROA (return on assets) was a low 1.2% organically.

The telltale sign of distortion is in the ROE: firms in intangible-intensive industries will have ROEs much higher than firms in other industries, and also much higher than their cost of equity.

ROE = Net Income / Total Shareholders' Equity

ROA = Net Income / Total Assets

Take Intel: you have R&D of $10 billion, but your balance sheet shows under $5 billion of identifiable intangible assets. Capitalize Intel's R&D using a three-year straight-line amortization period: instead of expensing R&D as incurred, you capitalize it and amortize it over three years.

Intel is close to steady state which means the impact is more muted.
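Here's a minimal sketch of that adjustment, assuming a steady $10B annual spend and straight-line amortization beginning in the year the spend is incurred (the function and its conventions are my own illustration, not Intel's reporting):

```python
def capitalized_rnd(annual_rnd_spend, amort_years=3):
    """Capitalize R&D instead of expensing it as incurred.

    Takes a list of annual R&D spend ending with the current year.
    Returns (this year's amortization expense, unamortized R&D asset),
    amortizing each year's spend straight-line starting the year incurred.
    """
    recent = annual_rnd_spend[-amort_years:]
    amortization = sum(spend / amort_years for spend in recent)
    asset = 0.0
    for age, spend in enumerate(reversed(recent)):      # age 0 = current year
        asset += spend * (amort_years - age - 1) / amort_years
    return amortization, asset

# Steady-state spend of $10B/yr: amortization equals the spend, so the income
# statement barely moves, but a ~$10B R&D asset appears on the balance sheet.
amort, asset = capitalized_rnd([10e9, 10e9, 10e9])
print(amort / 1e9, asset / 1e9)   # 10.0 10.0
```

This is why the adjustment is muted for a steady-state firm like Intel: the expense line is unchanged, but the denominator of ROA grows by the new asset.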

You only grow Goodwill by acquisition.

  • 13 Billion Down
  • 2 Billion
  • ROA 8.4%

ROA inflates if the most important asset isn't in the denominator. Valeant was growing by acquisition. Synergies are not tangible; you can't sell them to another firm. Writing off $11 billion out of $13 billion means you have negative equity.

On Balance Sheet – Intangible Assets

Why are on-balance-sheet intangibles distorted?

In this case, the intangible assets are overstated on the balance sheet.

Two types of intangible assets:

  • Amortization period not appropriate (for finite lived intangibles): these are patents which expire after a set duration. Finite lived intangible assets are amortized over their useful life.
  • Impairment not taken (indefinite life intangibles): these are goodwill or trademarks. Indefinite duration assets are typically evaluated periodically for impairment.

Where Crucial

In addition to I.P.-intensive industries, this also matters for M&A-intensive firms/industries: HP and CGI, for example.

Telltale Signs
Others in the industry taking impairments, especially for goodwill, while this firm is not.

Firms hate to take goodwill impairments, as these are a tacit admission that the acquisition was a failure: the impairment reflects overpayment.

Remember that goodwill impairments are not tax deductible while fixed-asset impairments are! Therefore, with goodwill impairments, the full impact is felt in net income and correspondingly in shareholders' equity.

Hewlett Packard | Autonomy

The HP acquisition of Autonomy: get rid of goodwill?

You need to do an impairment test; for your information, there are four steps.

PV = FCF/(r-g)

The g will massively impact your PV, so in terms of the financials you can massively miscalculate the PV.
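A quick numerical illustration of that sensitivity (illustrative FCF and rates, not Valeant's or HP's actual impairment inputs):

```python
def gordon_pv(fcf, r, g):
    """Growing perpetuity: PV = FCF / (r - g); requires r > g."""
    if r <= g:
        raise ValueError("discount rate must exceed growth rate")
    return fcf / (r - g)

# Illustrative inputs: $100 of free cash flow, 10% discount rate.
# A one-point change in g swings the valuation by hundreds of dollars:
for g in (0.02, 0.03, 0.04):
    print(f"g = {g:.0%}: PV = {gordon_pv(100.0, 0.10, g):,.0f}")
```

Because the denominator is a small difference of two rates, a modest bump in the assumed growth rate can justify carrying goodwill at almost any value, which is why undisclosed inputs make impairment tests hard to audit.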

Valeant doesn't disclose the inputs into its goodwill impairment, so we get wildly different results. HP took a roughly $9 billion impairment in 2012 due to the Autonomy situation.

For Autonomy, HP paid a premium of 65% over the target's trading price, recording $6.9 billion of goodwill and $4.3 billion of other intangible assets. HP overpaid and had to write down $8.8 billion: $5.7 billion was written off goodwill, while $3.1 billion was written off the intangible assets.

On the income statement you see charges for both these impairments. On the balance sheet, the $8.8 billion of impairments comes entirely out of goodwill and other intangibles, with the full $8.8 billion impact on equity, as these impairments are non-deductible.

Flexibility of write-downs + poor profitability: therefore, it's possible there was a Big Bath, and Autonomy's founder suggests as much. The choice was to write it off over 10 years versus write it all down now. They chose to write it down NOW, indicating a Big Bath.

HP’s Big Bath

https://en.wikipedia.org/wiki/HP_Autonomy

Liability Distortions

RRSP and 401K are Defined Contribution Plans:

Remember: Conservative Accounting Versus Aggressive Accounting

  • Being conservative is Asset DOWN Income DOWN
  • Being aggressive Assets UP and Income UP

Deferred Revenue

Why do deferred revenues get distorted? Deferred revenues can get distorted when firms manipulate the criteria for revenue recognition. If a firm is aggressive, it may choose to recognize revenue prematurely, increasing revenue and thereby income, and lowering the deferred revenue liability.

Where it matters?

Deferred revenues matter in businesses with long operating cycles, where projects often span multiple years and there is a mismatch between the receipt of payments from customers and the provision of goods/services.

For example, MicroStrategy was accused of aggressive revenue recognition. Essentially, it had multi-period contracts for software services but recognized the revenue up front instead of deferring it and recognizing it over the life of the contract. For the year 1999, MicroStrategy admitted to overstating approximately $50 million in revenue.

Deferred Revenue

  • When firms get paid before they provide a good/service, they should not treat the payment as revenue.
  • Instead they should treat it as deferred revenue.
  • In the future year, they recognize revenue (cr.) and remove the deferred revenue (dr.)
  • Often firms are aggressive and prematurely recognize revenue.
  • Adjustments are similar to channel stuffing, with an increase in a liability (deferred revenue) instead of a decrease in an asset (accounts receivable).
  • MicroStrategy example: prematurely recognized $50 million of revenue (and $2 million of associated cost).
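The proper ratable treatment can be sketched like this (the $50M/5-year contract shape is an assumption for illustration; MicroStrategy's actual contract terms varied):

```python
def deferred_revenue_schedule(cash_received, contract_years):
    """Recognize revenue ratably; the unearned remainder stays a liability."""
    per_year = cash_received / contract_years
    rows, deferred = [], cash_received
    for year in range(1, contract_years + 1):
        deferred -= per_year                           # dr. Deferred Revenue
        rows.append((year, per_year, round(deferred, 2)))  # cr. Revenue
    return rows

# $50M collected up front on an assumed 5-year service contract:
schedule = deferred_revenue_schedule(50e6, 5)
print(schedule[0])   # (1, 10000000.0, 40000000.0)

# Booking the whole $50M in year 1 overstates revenue by what should
# still sit in the deferred-revenue liability:
overstatement = 50e6 - schedule[0][1]
print(f"Year-1 overstatement if booked up front: ${overstatement/1e6:.0f}M")
```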

Reserves

When you recognize the impact of the expenditures before the expenditures actually occur.

  • Warranty Expense Debit $5 million
  • Warranty Reserve Credit $5 million

Reserves can be distorted at either point: at origination, or at the time the expense is incurred. Expenses can be diverted to reserves instead of the income statement.

Where Might This Be Crucial?

In industries where warranties are crucial. Restructuring reserves are also crucial, as the 'turnaround' shown by new management can often be a fiction of the accounting treatments.

Telltale Signs:

An unexpected improvement in cost ratios on the income statement, along with a corresponding decline in reserves.

Pensions

Defined Contribution Plans:

  • You have assets and liabilities which are equal to each other.
  • Benefit plans, by contrast, tie you to a given company. For example:
  • GM: a $50 billion liability because of its benefit obligations.
  • Car companies: $2,500 per car went to cover post-retirement healthcare costs.
  • In a contribution plan, the risk is with the employee.
  • Contribution plans are portable.

https://en.wikipedia.org/wiki/Defined_contribution_plan

Defined Benefit Plans:

  • Risk is to the employer.
  • The present value of the future payments to be made from the pension plan is the PBO (projected benefit obligation). If the net assets of the pension exceed the PBO, the plan is adequately funded.
  • If the net assets are lower than the PBO, the pension is underfunded.
  • Unlike a contribution plan, a defined benefit plan generally is not portable from employer to employer.

https://en.wikipedia.org/wiki/Individual_Pension_Plan

Pension expense sits under COGS, SG&A or some other expense category. It has three components: a) service cost, b) interest cost and c) expected return on plan assets.

Distortion in Pension Accounting:

Some arise from the complexity of the standard, and some from manipulation.

Where might this be crucial? Pension related issues are important in labour intensive and unionized industries, usually more so in older legacy companies.

How old are my fixed assets? Compare accumulated depreciation to gross PP&E. Note that Apple outsources manufacturing to Foxconn and holds operating leases for its Apple Stores.

Discontinued Operation

When firms discontinue a line of business, the results are restated for past periods: usually the past 3 income statements and the last 2 balance sheets.

The goal is to prevent firms from disposing of entire lines of business in order to improve net income. Any gains and losses will be reported separately. The net assets from discontinued operations represent the assets less liabilities of the discontinued operations.

Changes in Accounting Principles

You could change accounting principles regularly in order to confuse Equity Researchers.

Cumulative Effect of Accounting with Pro-Forma Disclosure

  • Say a company changes from one method of depreciation to another.

Governments love depreciating assets;

  • New assets get depreciated well before new investment.
  • Citizens hate to see reinvestment in older infrastructure.
  • The government only wants to invest in older infrastructure when it really really needs to be fixed.
  • They have to issue debt or raise taxes to get more revenue.

Gross PP&E (Dr.)                             Acc. Depreciation (Cr.)

$11 billion                                  $4 billion

$10.7 billion                                $0.7 billion

Net PP&E (Dr.)

$7 billion

BA versus Lufthansa:

  • BA's long-haul planes depreciate at a lower rate because they have fewer takeoffs and landings.
  • Lufthansa has shorter turnaround times and smaller planes, and therefore higher rates of depreciation.

British Airways

  • Longer useful life, 18 to 25 years for their assets, so lower annual depreciation relative to gross PP&E.

Lufthansa

  • When you balance the policies together, you get about 12 years for their assets, with a 15% salvage value.
  • Lufthansa has smaller planes, more takeoffs and landings, and therefore much faster depreciation. Gross PP&E / Useful Life = Depreciation Expense.

Ratios on average gross PP&E (two successive years):

BA          .60                        .61

Luft        .63                        .64

British Airways T-accounts:

Gross PP&E (Dr.)                             Acc. Dep. (Cr.)

$23.2B                                       $12.6B

Net PP&E (Dr.)

$19.6B

So what if you were to use the same depreciation policy for both airlines? Remember that equity decreases when you depreciate faster…
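To see how the useful-life assumptions drive the expense, a small sketch (BA's salvage value is my assumption; the 12-year/15%-salvage Lufthansa policy and the 18-to-25-year BA life come from the notes above):

```python
def annual_dep_rate(useful_life_years, salvage_pct):
    """Straight-line depreciation as a fraction of gross PP&E per year."""
    return (1.0 - salvage_pct) / useful_life_years

# BA: ~18-25 year life (midpoint used); 10% salvage is an assumption.
ba_rate = annual_dep_rate(21.5, 0.10)
# Lufthansa: 12-year life, 15% salvage, per the notes.
lh_rate = annual_dep_rate(12.0, 0.15)

print(f"BA:        {ba_rate:.1%} of gross PP&E per year")
print(f"Lufthansa: {lh_rate:.1%} of gross PP&E per year")
# Under these assumptions Lufthansa expenses its fleet far faster than BA,
# so BA's reported net income and equity look relatively higher.
```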

How Mainstream Publications Overlook Their Own Weirdness and Just Blame Facebook…

[Disclaimer: this is a non-partisan publication]
And I am no Facebook apologist, but I thought it was worth raising awareness about the following:

[Transcript] Hey, I had to talk about this because I noticed, this morning, something really interesting, and I mean, more interesting than your standard cat video while you’re scrolling through Instagram. I was on Twitter and I clicked on a link to a really cool story called “Watch a Robot ‘Hen,’ Robot Chicken, with some chicks, flock of chicks.” And when you scroll to the bottom of this article, you’ll notice some moderately spooky or weird links from Outbrain and I think we need to look at Outbrain, but let me just show you on my phone what it looks like.

So, on my phone, I don't know if you can see here, but the link at the bottom… Where's my… Yeah, there's my finger. The link at the bottom, one of them says, "Justin Trudeau about to legalize something controversial." You click on that link, it takes you to this web page, which I will provide a link to in the video. You can see it right now probably. So I'm just voicing over what I see. Now, isn't it kind of interesting that this content is basically false or low-quality news? It's not from the CNN website. If you look at the top URL, it's not from CNN. It's from something called insiderentertainment.com, and Outbrain is promoting it. At the bottom of the page, you can see what it's really about. It's about bingo. Fair play. I know that Wired is a reputable publisher and I know that Outbrain is really reputable as well, and so they post this in order to draw traffic to commercial interests.

Now, imagine if this were actually true (which it obviously is not): Justin Trudeau has legalized gambling to cover costs. It's basically an attack on the current Liberal government in Canada. So this is Outbrain, directly on Wired magazine, a reputable technology publication which has probably seen hard times. Why are they seeing hard times? Facebook is eroding their revenue. YouTube is eroding their revenue. PewDiePie is getting 2 million hits per video and The Washington Post is only getting 1 million hits. This isn't fair. So, what do we need to do? We should be attacking Facebook as publications. We should be criticizing them in particular, and there are some very legitimate arguments regarding Facebook, but what's being overlooked is this hilarious Outbrain and Taboola redirection network.