Big Data

With the growth of computing power available to us, we are not only able to manipulate data in new ways but also to take vast amounts of it into account. Data collection is becoming ever more comprehensive, and the level of detail we can delve into keeps increasing.

I will start with some more traditional data – demographics. We have long been competent at handling a wide variety of data, from the point of collection through to manipulation. This is how we examine the development of nations and whether we are making progress. In the past couple of years, however, composite indexes have grown in prominence, aiming to tell us much more – notably the HDI becoming the IHDI, which accounts for inequality. We can develop data in this way because it is increasingly easy to collect.

Here we may differentiate, however, between developed and developing countries, as access to the internet and other communication media leads to ever more niche data being collected, allowing us to examine our lives at a micro level while still on a macro scale. The best available example of this is London: the BBC recently published images of London as data maps. This illustrates the extent to which every element of our lives is being turned into data in a form that may be analyzed and visualized.

[Data map: luminance of Flickr photo uploads around London tourist attractions]

This data map shows the luminance of photos of tourist attractions uploaded to the popular photo-sharing website Flickr. While not everyone uses the site, or shares their photos, we can still see what kind of tourist traffic certain areas get. The map shows not only popular destinations but also the routes likely taken between them; the above is only an excerpt of a London-wide map. This goes a level beyond infographics, and is a type of data that could not be gained by taking a census.

[Data map: daily commuting routes between the Home Counties and London]

The data map above shows the routes taken by daily commuters between the Home Counties and London. It gives an idea of the most popular travel routes and of where the typical commuting population resides. Furthermore, it shows how far people are willing to commute. A step beyond this would consider tube routes, and those travelling by car or bus (which could theoretically be captured through congestion-charge payments). A further step, taken here, is the data on exits from London train stations shown below:

[Data map: exits from London train stations]

This gives a more in-depth look at how and where people travel. The complexity of this data is made more tangible by the map; from here we could consider CO2 emissions, the cost of travel, and a whole host of other factors. This type of traffic data is exceptional in where it may be taken, giving us a level of insight not previously afforded by simple census data.

We can even develop a niche understanding of specific areas – in this example, Twitter traffic relating to the popularity of London football clubs:

[Data map: Twitter traffic relating to London football clubs]

The growth of big data is what I believe will take economics to the next level of understanding of human behavior and decision-making. Think of the amount of data social networks such as Facebook and Twitter hold about their users, or Google holds about popular searches and the search habits of people in a given country. Not only is the Internet providing a better image of our lives through data, but an increasing amount of data is also being collected from practical activity. Take the growing use of black boxes in cars, which record average speed, intensity of braking, time of driving, and so on.

This type of data has many applications, but I believe it will be most useful to microeconomics. Since this is the area where we most often have to rely on theory built on sometimes loose relationships, this depth of data provides the opportunity to delve further into our behavior. Take, for example, the classic work-or-shirk scenario, weighing leisure hours against potential pay-offs.

As an economic community, I feel there is a movement towards ensuring that a large variety of data is made publicly available. Of note, Christine Lagarde announced today in Washington D.C. that all of the IMF's data will be available for free online from the 1st of January, meaning there will be an even greater plethora of data to pick and choose from.


All images are subject to copyright – London: The Information Capital by James Cheshire and Oliver Uberti


Copyright Almog Adir © 2014 · All Rights Reserved · My Website

The Failings of Price Mechanism

The free market is defined by the allocation of resources through the functioning of the price mechanism, which takes into account the equilibrium between demand and supply schedules. In this essay I will argue that while the price mechanism may be efficient, there are cases where it fails, focusing specifically on monopolies and then on market failure where there are externalities.

The price mechanism does provide efficiency, through the price signaling created by demand and supply schedules. This brings us to a point of equilibrium that is Pareto optimal, as neither consumer nor producer can be made better off without the other being made worse off. Furthermore, at this equilibrium we also achieve allocative efficiency, suggesting the best possible allocation of resources. This is shown in the diagram below, at the equilibrium of Qe–Pe.


We may note that there is producer and consumer surplus; this defines our parameters of Pareto efficiency, since any move from the market-clearing equilibrium leaves either the producer or the consumer worse off. This analysis is limited, however, by not considering the type of market structure that is present.
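The market-clearing point and the two surpluses can be computed directly for linear schedules. The intercepts and slopes below are hypothetical, purely to illustrate the mechanics:

```python
# Hypothetical linear schedules (assumed values):
# demand P = a - b*Q, supply P = c + d*Q.
a, b = 100.0, 2.0
c, d = 10.0, 1.0

# Market clearing: a - b*Qe = c + d*Qe.
Qe = (a - c) / (b + d)
Pe = a - b * Qe

# Consumer and producer surplus are the triangles between each
# schedule and the price line at the equilibrium.
consumer_surplus = 0.5 * Qe * (a - Pe)
producer_surplus = 0.5 * Qe * (Pe - c)

print(Qe, Pe)                              # 30.0 40.0
print(consumer_surplus, producer_surplus)  # 900.0 450.0
```

Any move away from Qe shrinks the sum of the two triangles, which is the Pareto-efficiency argument above in numerical form.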

The monopoly market structure provides evidence of the possible inefficiency that may be incurred through the price mechanism. This is a consequence of monopolies being able to set their prices, profit-maximising at the point where MC=MR, as seen below:

The scenario above takes the market away from allocative efficiency and incurs a deadweight loss. This is a result of there being social welfare that is captured by neither the consumer nor the producer (shown below).


This prices out a range of consumers, so the deadweight loss represents transactions that would have been socially beneficial but do not occur. This point is not Pareto optimal, as the consumer is worse off, nor is it allocatively efficient. Allocative efficiency and Pareto optimality could be achieved if the monopoly were to move to the equilibrium of Qe–Pe, meaning the firm prices at the point where MC=AR, returning to market clearing.
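The gap between the monopoly outcome (MC=MR) and the competitive outcome (MC=AR) can be made concrete with a hypothetical linear demand curve and constant marginal cost – assumed numbers, not figures from the essay:

```python
# Demand: P = a - b*Q, so marginal revenue MR = a - 2b*Q; constant MC = m.
a, b, m = 100.0, 2.0, 20.0  # assumed values

# Competitive (market-clearing) output: price equals marginal cost, MC = AR.
Q_comp = (a - m) / b             # 40 units
# Monopoly output: MC = MR, with the price read off the demand curve.
Q_mono = (a - m) / (2 * b)       # 20 units
P_mono = a - b * Q_mono          # 60

# Deadweight loss: the triangle of forgone, socially beneficial trades.
dwl = 0.5 * (Q_comp - Q_mono) * (P_mono - m)
print(Q_comp, Q_mono, P_mono, dwl)  # 40.0 20.0 60.0 400.0
```

Halving output relative to market clearing is a general property of linear demand with constant marginal cost, not of monopoly in general.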

It is important to note that a monopoly may achieve dynamic efficiency, which is unlikely for a perfectly competitive firm as a consequence of having normal profit. The monopoly gains abnormal profit (as shown by the profit margin in graph 2) that can be used for research and development. Taking the example of pharmaceutical companies, it can be noted that this abnormal profit is required for new drugs to be developed, otherwise at the cost of innovation perfectly competitive firms would not be able to supply new or better drugs which may benefit society.

We must also consider inefficiency incurred through firms pursuing aggressive tactics, such as using abnormal profit to set up legal barriers, attempting to take over smaller firms, and abusing economies of scale to lower price in order to stop firms entering the market. This generates greater inefficiency by reducing consumers' welfare while also ensuring future inefficiency through barriers to entry. An example of this behaviour was the De Beers diamond cartel in the 1980s, which held a market share of up to 90% (Zimnisky, Paul). This was achieved by persuading independent mines to join the cartel; if they chose not to, De Beers would flood the market with diamonds, reducing their price and thereby pricing the mines out. De Beers also dictated price by stockpiling in order to limit supply (Zimnisky, Paul). This exemplifies firm behaviour through the price mechanism that yields inefficiency in terms of society's welfare.

The price mechanism may also fail to provide efficiency as a consequence of externalities: effects on a third party not involved in the original transaction and not accounted for in the price. The degree of inefficiency tends to correlate with the typology of the good in question. The manner in which market forces operate suggests the price mechanism is still operating efficiently, since the market clears and the externality is not directly shown; yet there can also be a deadweight loss in this scenario. The externality as an inefficiency may be shown by considering marginal private cost and marginal social cost: the market equilibrium is at the price and quantity corresponding to marginal private cost, not accounting for the social cost:


However, this can be countered by the efficiency stated through the Coase theorem, which suggests that if bargaining over an externality can occur, an efficient outcome will be reached, provided there are negligible transaction costs. This scenario is shown below:


Here the net social gain is the result of bargaining over the externality between the two parties. Take the conventional example of wind turbines and noise: the turbine company is willing to compensate people suffering from the noise because of the greater gain from operating the turbines. There is also the opportunity to be Pareto optimal at Q1. However, this theory is limited by its assumption of negligible transaction costs, which in reality tends not to hold.

Consequently, we may examine methods of dealing with externalities, such as taxation and compensation. A tax effectively raises the cost of the good or service in order to account for the cost of the externality. Depending on the elasticity of demand we may note a change in the quantity consumed; regardless, the price is "corrected". Shown below:


Here the red highlighted area represents the externality being addressed, with the size of the tax shown by the upward shift in price. Now at Q1 we operate at a Pareto-optimal point with the cost at C2, whereas at the original cost Q2 would have been consumed. When we consider compensation, the total compensation paid is up to the point where cost first meets marginal social cost, with the Pareto-optimal quantity at Q1; this makes up for the cost unaccounted for in the original transaction, as shown below:


Even though these measures are theoretically capable of dealing with an externality and yielding an efficient outcome, we must consider that the government carries them out. Government inefficiency may therefore compound the initial inefficiency of the price mechanism, owing to the difficulty of determining the degree of compensation or tax required to bring price to the point of market clearing.

Take, for example, the externality present in industrial fishing. While fishing occurs, we may assume the amount of fishing is such that supply meets demand and the market clears at equilibrium. However, this does not account for the negative effect on fish stocks: because the demand for fish exceeds their reproductive rate, stocks come under strain. Furthermore, fishing methods such as trawling may adversely affect other sea life. Quotas can prove inefficient in dealing with this problem, given the infrastructure and policing required to monitor each fishing boat. Moreover, quotas can lead to fishermen using the same methods and then dumping excess stock, which also harms the health of sea life. We would not consider tax, as it is not in the government's interest to raise the cost of living for the population; this leaves compensation. This might go towards subsidising fish farms, reducing the burden on natural fish stocks, or towards subsidising fishermen for the decrease in revenue if they were to fish by less efficient but more environmentally friendly means.

In conclusion, we may note that the price mechanism can at times be relied upon to provide efficiency. However, it may fail when we consider market structures such as monopolies, or the case of externalities, where there are inefficiencies unaccounted for in the market equilibrium. To refine this analysis of price-mechanism efficiency we would consider the typology of the goods or services in question, such as merit and normal goods, or common-access resources.

Almog Adir



Zimnisky, Paul. “A Diamond Market No Longer Controlled By De Beers.” Kitco Commentary. 06 June 2013. Web. 02 Nov. 2014.


Nurses, Death, and Terror on The Terraces

In this article I am going to look at the narrative side of econometric analysis, and what we gain from understanding bundles of data through an instrumental variable. There are various limitations to this type of analysis, in my opinion the most major being the requirement of a natural experiment, and all the factors that may go wrong alongside that. This is based on a lecture by Matt Dickson, a researcher currently looking at the raising of the school leaving age and its effects as seen in the Labour Force Survey.

Policing and Crime

This is based on the academic paper “Panic on the Streets of London: Police, Crime, and the July 2005 terror attacks”

The government has a specific interest in policing and crime from an economic perspective, owing to the associated costs and the effect on the general population. The statistics used for this analysis are the total number of recorded crimes per 1,000 of population, and the number of police officers per 1,000 of population, for a given year in the United Kingdom.

We can see that the crime rate and police rate increased alongside each other from the 1960s onwards, but from 1990 onwards both decline. This applies to violent crime, robbery, and so on. There is a positive correlation here, but correlation alone tells us nothing about causation: it does not show whether more police officers cause more crime, or vice versa.

The academic paper mentioned above exploits an exogenous shock in order to examine the correlation: the influx of police occurred because of the terrorist threat. This creates a natural experiment, as the increase in police had nothing to do with "conventional" crime.

The paper specifies a treatment group consisting of the London boroughs of Westminster, Kensington, Camden, Islington, and Tower Hamlets, while the control group is every other borough in London. There was hardly any difference in police deployment across these areas before the attacks; during the operation of increased policing that followed, there was effectively a police officer on every corner. Considering the crime rate, we see that before this period crime in the treatment areas was a little higher, but with the expanded police force the crime rate drops during and after the attack period for the treatment group. This shows a definite decrease in the crime rate, but it required a massive spike in the number of police per 1,000. The data allows us to estimate an approximate elasticity of crime with respect to policing: about 0.38, so a 10% increase in police numbers reduces crime by approximately 4%.
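The elasticity claim is simple arithmetic to check – a minimal sketch using the figures quoted above:

```python
# Elasticity of crime with respect to police numbers, as estimated in the paper.
elasticity = 0.38

# A 10% increase in police numbers...
police_change = 0.10

# ...implies roughly a 3.8% fall in crime (the "approximately 4%" above).
crime_change = -elasticity * police_change
print(round(crime_change * 100, 1))  # -3.8
```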

The critique we may employ here is that, by its very nature, the natural experiment may have had an unaccounted-for effect on the operation of crime. There is also no control comparison – a terrorist attack without a changed police response – making it difficult to take this statistical analysis at face value and clarifying the limitations of natural experiments.

However, the results are still interesting, allowing us to consider what kind of policing would be required to keep crime at a desired level, accepting that we cannot eradicate crime. The cost of developing such a police force can then be weighed against other methods that might be as effective at lower cost, such as the CCTV network developed throughout London.

Nurses Pay & Death

In healthcare, pay for nurses is based on equity: everyone does the same job, therefore everyone should have the same pay. However, this may have unintended consequences for the service nurses provide, and for the extent to which agency nurses are employed.

If we consider where there is greater use of agency nurses alongside the death rate from heart attacks, we see a concentration around London, the Home Counties, and a few northern cities. One might suggest that the quality of agency nurses is lower, hence the higher death rate. However, there may be reverse causation: hospitals with high death rates struggle to recruit and retain staff, making agency nurses a necessity. It is also only an assumption that agency nurses are more likely to give poor service.

We can break down this relationship by considering outside wages as a third variable, because they affect the use of agency nurses but not necessarily the quality of the hospital. Carol Propper and John Van Reenen outline the employment of agency nurses and the quality of healthcare in greater detail in the paper "Can Pay Regulation Kill? Panel Data Evidence on the Effect of Labour Markets on Hospital Performance".

School and Wages

How much is education worth?

Education has an impact on fiscal policy, welfare, labour, and crime. Increasing education could lead to a decrease in crime, among a variety of other relationships, which highlights the importance we place on education in economics.

Considering a distribution of hourly wages and years of schooling, we may note a positive relationship whereby hourly wage increases alongside years of schooling. Some problems we run into when considering education:

  • Education is not randomly distributed amongst the population.
  • People choose how long to remain in education.
  • There are unobserved and unmeasurable factors that lead to higher wages.
  • Ideally, we would be able to pick two random people and, from years of education alone, assign each a wage.

The natural experiment used here looks at the school leaving ages from 1949 to 1967:

The people who were forced to stay in school stayed just one more year after 15, moving them from the group leaving at 15 to the group leaving at 16. At that point around 60% had been leaving school at the earliest opportunity, and the extra year had a positive effect on wages.

RoSLA – Raising of the School Leaving Age

Comparing pre-RoSLA and post-RoSLA, the average age of leaving education increased, and about one in three people stayed in school longer. The policy change makes this an effective natural experiment, and we can consider the difference-in-differences:

A whole extra year of schooling meant a 6% increase in hourly wage. Why did it have this effect?
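The headline figure can be framed as a simple Wald-style estimator: the change in average (log) wages across the reform divided by the change in average schooling. The cohort averages below are hypothetical, chosen only to illustrate the calculation:

```python
# Hypothetical cohort averages around the reform (assumed values,
# not Dickson's actual data).
schooling_pre, schooling_post = 10.2, 10.7   # average years of schooling
log_wage_pre, log_wage_post = 2.30, 2.33     # average log hourly wage

# Wald estimator: change in outcome divided by change in treatment.
return_per_year = (log_wage_post - log_wage_pre) / (schooling_post - schooling_pre)
print(round(return_per_year, 2))  # 0.06, i.e. roughly a 6% wage gain per extra year
```

The reform serves as the instrument here: it shifts schooling for reasons unrelated to individual ability, which is what makes the ratio interpretable as a causal return.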

The school year breaks up the dates at which pupils could leave. Those who could not leave before the summer exams were far more likely to sit them, so a far greater proportion left with an academic qualification, and this effect is stronger for those who are post-RoSLA.

Dickson and Smith (2011) look in detail at the Labour Force Survey and the effect of raising the school leaving age, asking what would happen if we were to exploit another increase in the minimum leaving age. Moving the age bracket again should, in theory, yield higher qualifications and therefore greater wages. The question is to what extent this relationship will hold.

Education is always a difficult variable because of our inability to measure how effective it has been: even if everyone received the same education, individuals would gain different benefits from it, let alone the fact that there is a variety of programmes and courses that pupils may take.


Growth & Structural Reform

Production Possibility Frontier & Aggregate Supply:

There are many determinants of a shift in aggregate supply; such a shift means an increase in real output without an increase in the price level.


Some examples of the determinants are:

  • Education: Increased productivity, capabilities, and efficiency of the labour force
  • Innovation/R&D: The product may become more useful or easier to manufacture
  • Government Regulation or Subsidy: Encourages production, or deregulates an immobile market.
  • Transport/Infrastructure: With better transport, people could spend more time working rather than wasting it in traffic; developing infrastructure improves efficiency.


Analysis from Article:

Using Evidence from the Article Explain the Impact of Investment on the UK’s PPF?

Increasing the quality of university education and of teachers may make the workforce more efficient and productive. The article highlights the need to invest in human capital. This would lead to an outward shift of the PPF because there is greater potential for production; if this potential were realised, the aggregate supply curve would shift to the right.

The article mentions that the government should target investment towards equipment rather than property, increasing government investment in R&D and general innovation. The lack of innovation is identified through the low number of patents submitted. The issue with R&D and capital investment is the long pay-off period, whereas financial products pay off in the short term. If more money were invested in long-term research and development projects, the PPF would shift outward, as there is a higher possibility of products being manufactured with greater efficiency.

Finally, the article recognises that British infrastructure is considered "mediocre", being ranked 24th in the world. It relates this to government failure, the time it takes to get energy bills through, and the time it takes for projects to come to fruition. If there were to be a boost in infrastructure spending, there is the potential for an increase in productivity, leading to an outward shift in the PPF.

Evaluate the Argument That Structural Investments Alone Are Not Enough to Stimulate Growth?

There are many theories of and approaches to how best to stimulate growth; the article leans heavily towards the Salter cycle. This can be summarised as an increase in productivity and efficiency, resulting in reduced inputs of land, labour, and capital while achieving greater output. This is what the article highlights as structural investment, i.e. improving education, improving transport, and stimulating research and development. This does work to stimulate growth; however, it must be realised that humans can only ever be so efficient or productive, and that factors such as capital and land become scarcer in developed countries.

It is true that the government needs to stimulate development within Britain; it is unacceptable to continue supporting financial institutions that do not contribute to growth. Energy and energy efficiency are two factors integral to stimulating growth within an economy, simply because when energy is more abundant and cheaper, more of it is used. This is where the American government, unlike the British government, took a lead, effectively introducing shale fracking to slash gas prices and increase consumption. The British government has been slow to develop supporting infrastructure and R&D for the implementation of fracking, even though a recent geology report displayed the abundance of shale formations across Britain. This highlights the need for structural investment, but into sectors with optimistic prospects for the future.

The other methods of approaching growth stimulus can be equally effective. There is the classical approach of increasing free trade between countries, and the development of trade agreements to stimulate production, resulting in general economic growth. The article displays a graph of real GDP per person showing that Britain had the highest real GDP in 1870. This was a time when Britain had abundant trade from its colonies (without restriction, thanks to naval dominance and, to an extent, exploitation), and was expanding trade into the "new world".

Structural investment will assist Britain in re-modernising; however, it can be argued that it is best suited to developing economies that still have a greater abundance of land, labour, and emerging capital. One possible route is the development of military infrastructure, meaning the creation of more aircraft carriers and submarines. This has worked to an extent to stimulate American growth, with the announcement that two new aircraft carriers are to be developed and the roll-out of the successor to the F-22 Raptor.

Another possible approach to growth stimulation is to induce a state of semi-isolationism, which worked for East Asian economies in the 90s. The crash of the East Asian economies can be attributed to the liberalisation of markets, which exposed growth to speculation. A state of semi-isolation reduces the inefficiency of market speculation and makes a country more self-dependent, and this may be realised through structural investment. Overall, it can be recognised that structural investment is a necessity, but it must be coupled with a new economic approach to achieve not only growth but sustainable growth.



Oligopoly Vs. Monopolistic Competition Air Berlin Case Study

Case Study: Air Berlin

Theory of the Firm Analysis

Fixed Costs:

  • Airplanes
  • Labour (salaries)
  • Airport Facilities

Variable Costs:

  • Fuel
  • Landing Fees

Define Fixed Cost: A cost that does not vary with output

Define Variable Cost: A cost that varies with output

The airplane is a fixed cost: once it is paid for, the only variance in cost comes through its operation. The article mentions that the firm looks to sell 8 airplanes in the interest of reducing fixed costs, so as to reintroduce capital into the firm. An airplane is in most cases bought outright, as established by the firm's ability to sell it.

Fuel is a variable cost, both because the price of fuel supplied to airlines can change and because the more routes an airline runs, the more fuel it must purchase. The article identifies that jet fuel prices are "highly volatile", suggesting that this cost changes with output.

The second variable cost is landing fees: the more airplanes fly, the more landing fees are paid. The level of the fees themselves may also change, as identified in the article through the competition with Lufthansa for a central hub.
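The fixed/variable split above can be sketched as a simple cost function. The figures are purely illustrative assumptions, not Air Berlin's actual costs:

```python
def total_cost(flights, fixed_cost, fuel_per_flight, landing_fee):
    # Fixed costs (aircraft, salaries, airport facilities) are paid regardless
    # of output; fuel and landing fees scale with each flight operated.
    return fixed_cost + flights * (fuel_per_flight + landing_fee)

# Illustrative figures (assumed): €1m fixed, €8,000 fuel and €2,000 fees per flight.
print(total_cost(0, 1_000_000, 8_000, 2_000))    # 1000000: fixed cost alone
print(total_cost(100, 1_000_000, 8_000, 2_000))  # 2000000
```

At zero flights the firm still pays the full fixed cost, which is why selling aircraft is the lever the article says Air Berlin reaches for.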

Case for Oligopoly:

Main Factors:

  • Interdependency
  • Competition
  • Etihad (market power)
  • Sustained Losses
  • Abnormal Profit & Reserves

Interdependency is a clear signal of oligopolistic competition; it is identified in the article through the need to develop a new airport. To continue competing, both Air Berlin and Lufthansa must expand and open new routes, which is why another central airport is being developed in Berlin. Currently, Lufthansa have an advantage, as they have more docks for airplanes at "Tegel Airport".

There is clear evidence of competition with Lufthansa, as Air Berlin is stated to be the second-largest airline in Germany. This means that domestically the two big players are Lufthansa and Air Berlin, but the article goes on to mention the involvement of Etihad Airlines. The article also identifies that there is the intention of competition in the long run against alternative airlines, as Mehdorn states that “We want to strengthen our profile as an airline company… we need long-haul flights.”

The fact that the firm's losses are being sustained provides further evidence that Air Berlin is part of an oligopoly. Firstly, sustained losses show that the firm is not in perfect competition: a perfectly competitive firm would stop production the moment losses are made, as it is "easier" to stop producing than to leave the market altogether. Secondly, sustaining losses means that at some point in time the firm was making abnormal profit with which to build cash reserves; a firm operating with abnormal profit has a degree of monopoly power, yet Air Berlin still has its competitors.

Due to the competition in the domestic sector, mainly between Air Berlin and Lufthansa, it can be argued that a kinked demand curve is in play in the German airline sector (as displayed below).

The above graph shows that they are not competing on price; this is clearly identified in the article, which mentions only reducing costs, introducing more routes, greater output (the new airport), and airline alliances (i.e. the Oneworld Alliance). Lufthansa is shown to have a lower marginal cost due to economies of scale, being a considerably bigger airline than Air Berlin both globally and in Germany.

In the long run, Air Berlin may be able to cut its costs so that it returns to normal profit, and with the eventual opening of BER the firm will be able to expand effectively and possibly compete at the same level as Lufthansa. However, further delays in the opening of the new airport may mean that Air Berlin continues to eat into its cash reserves: as the article states, costs of approximately €5 million per month will slowly degrade the €100 million cash reserve. The article also states that Air Berlin is selling airplanes to reduce its fixed costs; while this is appropriate in the short run, in the long run it may mean the firm fails to fully utilise the facilities of the new airport.
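The two figures from the article imply a simple cash runway, sketched below:

```python
cash_reserve = 100_000_000   # €100 million in reserves, per the article
monthly_burn = 5_000_000     # ~€5 million in costs per month, per the article

# Months until reserves are exhausted if the burn rate holds.
runway_months = cash_reserve / monthly_burn
print(runway_months)  # 20.0, i.e. under two years at this burn rate
```

This is why further delays to BER matter: each month of delay consumes a fixed share of a finite runway.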

Case for Monopolistic Competition:

Define Monopolistic Competition: a market in which each firm holds a degree of monopoly power in the short run. In the long run, firms compete on price and other factors (e.g. branding, quality), gradually losing monopoly power as they differentiate less.

Main Factors:

  • There are Many Airlines and Consumers, and no Firm Has Total Control Over Market Price
  • Existence of Asymmetric Information
  • Independent Decision Making
  • Limited Barriers to Entry & Exit in Long Run

It is clear that there are many firms in the airline industry within Germany, and that consumers are spread across the different airlines. It can be argued that no firm has clear control over market price: airlines often compete on price for specific dates or around certain events, and since international airlines are involved, no single firm has complete control over market price.

There is clear evidence of independent decision-making by Air Berlin. The first independent decision was the move to a new airport to act as a central hub so the firm can expand; Lufthansa had taken no particular action to cause this move, and Air Berlin would likely have done it regardless of Lufthansa.

There is, to a degree, the ability to enter and exit the market freely; this is identified by Air Berlin's ability to sell airplanes, cut unprofitable routes, and as a result reduce marginal cost. There are many competitors in the airline industry: Air Berlin could sell all of its airplanes, docks, and facilities to an international airline, or even sell parts to Lufthansa. A clean exit is possible; however, the airline industry has substantial barriers to entry due to the cost of airplanes and facilities.


The diagram above shows Air Berlin's short-run loss. It can be noted that there is a shift in marginal revenue, and therefore average revenue, as the article states that Air Berlin is beginning to lose customers. There is also the issue of high fixed costs, which drives the average cost curve above average revenue, as the cost of the new airport facilities is considerable.

However, in the long run it can be argued that Air Berlin will return to a point of normal profit. There is the intention to sell eight airplanes to reduce those high fixed costs, and the mention of introducing long-haul flights and cutting unprofitable routes, reducing supply to meet demand. This is shown in the diagram below as the firm returns to a position of normal profit.