What is true when a market is allocatively efficient?

A focus on allocative efficiency and outcomes highlights the contribution of knowledge and records to achieving public value, in particular, via accountability to public trust, confidence and legitimacy.

From: Records Management and Knowledge Mobilisation, 2012

Achieving added value: efficiency, effectiveness and public value

Stephen Harries, in Records Management and Knowledge Mobilisation, 2012

Value for the organisation or for the public realm?

Allocative efficiency asks: are we doing the right things in the first place? It helps to determine whether resources should be allocated to one activity in preference to another; technical efficiency then follows on from this allocation. Allocative efficiency is concerned with spending limited resources in the areas that are best able to maximise public value and is the province of elected representatives and citizens; technical efficiency is concerned with making the most of resources allocated and is the province of managers. The purpose of government is to add public value; although a full discussion of all that entails is beyond the scope of this book, any particular function must still be capable of expressing how it adds to public value:

using a common framework to demonstrate added value that facilitates comparing and contrasting with other services and functions;

clarifying the kinds of value that are created and finding a balance between them;

discerning the impact which that added value has on the delivery of outcomes;

developing a suite of measures that is appropriate to the balance of values determined.

Effectiveness in the public sector is concerned with the extent to which activities achieve their purpose – why they exist, how they create value. Many organisations have policy statements along the lines of ‘effective records management is necessary to achieve compliance/improve decision-making/reduce costs’, but if we only consider effectiveness as the achievement of records management purposes, we are still caught in the self-referential cycle – that the purpose of records management is diligent records management – and by under-substantiated claims about benefits for organisational performance. Considering effectiveness leads to considering the impact, potential or actual, on delivering those wider outcomes which increase public value.


URL: https://www.sciencedirect.com/science/article/pii/B9781843346531500081

Handbook of Health Economics

Jeremiah Hurley, in Handbook of Health Economics, 2000

3.4.4. Moral hazard

Moral hazard refers to the tendency for insurance coverage to induce behavioral responses that raise the expected losses that are insured, because it increases either the likelihood of a loss or the size of a loss. Those with health insurance coverage may take less care to avoid illness or injury knowing that they will not have to bear the associated financial consequences. In general, this is probably not a large source of moral hazard in the health sector, as the financial consequences are only a portion of the total “costs” associated with illness or injury, which often include pain and suffering.

Of more importance in the health sector is moral hazard associated with the fact that once an insurable event occurs, because an insured individual does not have to pay for the full cost of treatment, the individual may incur higher total costs than in the absence of insurance. The increased expenditures associated with such moral hazard result from the behavioral responses of either patients or providers: patients, whose care is now subsidized, may (and would be expected to) demand a greater quantity of services; providers, knowing that patients do not bear the full cost of services, may increase the quantity of treatments recommended and/or the prices of those services.

Moral hazard has the potential to limit the range of insurance contracts that can be offered, decreasing allocative efficiency. To remain in business, an insurance organization has to set a premium based on ex post losses in the presence of insurance, but individuals may make their consumption decisions on the basis of ex ante expected losses. Individuals willing to purchase an insurance contract priced at ex ante losses find such contracts unavailable. Hence, moral hazard can lead to missing, or at least incomplete, markets for risk-bearing [Evans (1984)].
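
The arithmetic behind this market failure can be sketched with a few invented numbers (the probability, loss amounts, and risk premium below are illustrative assumptions, not figures from the text):

```python
# Hypothetical sketch: moral hazard can price insurance out of the market.
p = 0.25                # assumed probability of illness
loss_ex_ante = 1000.0   # expected treatment cost absent insurance (assumed)
loss_ex_post = 1800.0   # cost once insured, after the behavioral response (assumed)
risk_premium = 50.0     # extra a risk-averse buyer would pay to shed risk (assumed)

# The buyer values coverage at the ex ante expected loss plus a risk premium,
# but the insurer must break even on the larger ex post losses.
willingness_to_pay = p * loss_ex_ante + risk_premium
break_even_premium = p * loss_ex_post

market_exists = willingness_to_pay >= break_even_premium
print(willingness_to_pay, break_even_premium, market_exists)
```

Because the break-even premium (450 under these assumptions) exceeds what the buyer will pay (300), no contract is written: the market for this risk is missing, which is exactly Evans's point.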

A second type of allocative efficiency loss arises from the “excess” utilization generated by insurance, which creates an excess burden [Pauly (1968)]. The argument is as follows. Assume the health care market is as depicted in Figure 2. Price P0 equals the long-run (constant) marginal cost of care.39 In the absence of insurance, Q0 units of care will be consumed; under full insurance that provides first-dollar coverage, Q1 units will be consumed. For each unit of increased consumption under insurance (Q1 – Q0), the marginal cost exceeds the marginal benefit, generating an excess burden for the economy. Moral hazard can be eliminated by raising the price the consumer faces back to P0, but, of course, this completely eliminates insurance coverage.
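
The excess-burden triangle can be made concrete with a hypothetical linear demand curve (the intercept, slope, and marginal cost below are invented; Figure 2 itself specifies no numbers):

```python
# Pauly's excess burden under a linear demand curve P = a - b*Q
# and constant long-run marginal cost P0. Numbers are illustrative only.
a, b = 100.0, 1.0   # assumed demand intercept and slope
P0 = 40.0           # assumed long-run marginal cost (price without insurance)

Q0 = (a - P0) / b   # quantity demanded when the consumer faces P0
Q1 = a / b          # quantity demanded at a price of zero (first-dollar coverage)

# Each unit between Q0 and Q1 costs P0 to produce but yields a marginal
# benefit below P0 (read off the demand curve), so the welfare loss is
# the triangle between demand and marginal cost over that range.
excess_burden = 0.5 * (Q1 - Q0) * P0

print(Q0, Q1, excess_burden)
```

With these assumed numbers, insurance raises consumption from 60 to 100 units and generates an excess burden of 800; setting the consumer price back to P0 removes the triangle but also removes all coverage.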


Figure 2. Neo-classical analysis of moral hazard.

This analysis provides the foundation for the argument that optimal insurance coverage must balance the competing welfare consequences of insurance. On the one hand, insurance increases welfare by reducing risk for individuals, with (subject to some caveats) the welfare gain directly related to the extent of coverage. On the other hand, insurance creates a welfare burden through moral hazard. Hence, optimal insurance must balance these competing welfare effects by including patient cost-sharing provisions [Zeckhauser (1970)].
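
Zeckhauser's balancing act can be illustrated with a toy model in which c is the coinsurance rate (the share of the bill the patient pays); the functional forms and coefficients below are invented for illustration and carry no empirical content:

```python
# Toy trade-off: risk-spreading gains rise with coverage (falling c), while
# moral-hazard losses rise faster as the patient price approaches zero.
def welfare(c):
    risk_spreading_gain = 100.0 * (1.0 - c)    # assumed linear in coverage
    moral_hazard_loss = 90.0 * (1.0 - c) ** 2  # assumed quadratic in coverage
    return risk_spreading_gain - moral_hazard_loss

# Search a grid of coinsurance rates for the welfare-maximizing one.
best_c = max((i / 100 for i in range(101)), key=welfare)
print(best_c)
```

Under these assumptions the optimum is interior: neither full insurance (c = 0) nor no insurance (c = 1) maximizes welfare, which is the formal content of the claim that optimal contracts include some patient cost-sharing.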

The positive and normative basis of the analysis, however, remains controversial. From a positive perspective, the analysis assumes that health care is produced in a perfectly competitive market by profit-maximizing firms supplying care at a price equal to its long-run marginal cost. Health care, however, is dominated by highly regulated nonprofit and not-only-for-profit providers, so it is not clear that the supply curve represents the true opportunity cost of the resources used to produce the care provided.

Normatively, the analysis rests on a standard welfare interpretation of the demand curve. “Excess” or “inefficient” is defined solely with reference to the market demand derived from preferences backed by willingness-to-pay. Even if one accepts welfarism, as we saw above, informational problems may invalidate the assumption of consumer sovereignty, which in turn invalidates the normative interpretation of the demand curve. Because cost-sharing selectively reduces utilization on the basis of ability and willingness-to-pay rather than on the basis of need for health care, cost-sharing may reduce care that is effective and needed. Hence, from an extra-welfarist perspective, care between Q0 and Q1 is not necessarily wasteful or inefficient when viewed against the standards of need and health improvement (indeed, by these standards care to the left of Q0 may well be inefficient). In fact, studies of cost-sharing demonstrate that it reduces both necessary and unnecessary care [Lohr et al. (1986), Rice (1992), Stoddart et al. (1994)].

Rather than demand-side cost-sharing policies to address moral hazard, an alternative is to intervene on the supply side to selectively reduce ineffective or inappropriate utilization. Because of their informational advantage, providers are in the best position to judge what utilization cannot be expected to improve health. Such efforts range from instilling a culture of evidence-based practice, through regulatory initiatives associated with managed care (utilization review, pre-authorization programs, and practice guidelines), to designing funding models that attempt to align the incentives of providers with issues of efficiency. In fact, this is much of what health reform in the 1990s has been about.

This “standard” model of insurance may be seriously incomplete as a basis for policy prescriptions in the context of health care markets. Nothing about the model is specific to health care – simply by re-labeling the axes it would be just as suitable for analyzing the welfare improving effects of house insurance, automobile insurance, flight insurance, and so on. The sole effect of insurance in the model is to lower the price of a good that enters individual utility functions directly and that is produced and exchanged in competitive markets through arms-length relationships among well-informed buyers and sellers, all of which we know to be uncharacteristic of most of health care.40

Active, interventionist insurers have a much larger impact in the health care market than do insurers in other markets such as housing or automobiles. Under universal house insurance, the proportion of all housing transactions (purchases or renovations/repairs) covered by the house insurance contract is small; the same is true for automobile repair (though a higher proportion of automobile repairs is probably covered in some way by insurance). For both of these insured goods (and many others), a large proportion of purchases of the insured good happen in the absence of an insured loss, outside any insurance contract. In contrast, the vast majority of health care purchases occur only in the presence of an insurable loss (i.e., ill-health). Hence, insurers and insurance play a much more dominant role in the dynamics of health care markets.

Weisbrod (1991), for example, argues that the static analysis of the welfare loss associated with excess utilization induced by insurance is incomplete. He posits that, in a dynamic analysis, the level and extent of health care insurance and the development of health care technologies are endogenous. Because insurance coverage affects the expected returns to R&D investments in health technology, the spread of insurance is an important factor in explaining the post-war growth of technology. The development of new technologies, however, also affects the demand for health care insurance. Extensive insurance coverage combined with retrospective, cost-based reimbursement encouraged the development of costly technologies that offered minimal increases in quality. The combination may even encourage the development of technologies we would not collectively be willing to pay for, inducing potentially negative welfare effects. In contrast, prospective reimbursement, which dominates today in many countries, encourages the development of cost-reducing technologies that have minimal negative effects on aspects of quality that can easily be monitored by patients, but that may have negative effects (especially combined with behavioral incentives facing providers under prospective reimbursement) on aspects of quality not easily monitored by patients. Weisbrod's analysis is, by his own admission, more speculative than definitive, but a key message is simply that at present we do not have well-developed models with which to explore the behavioral and normative aspects of the dynamics between insurance, health care and technological development, though these issues are of crucial importance for the design of health care systems.

Evans (1983) argues that one cannot fruitfully understand the rationale for, or the welfare effect of, universal, first-dollar public insurance using the standard insurance model. The potential welfare implications of universal, first-dollar public insurance (with capital financed separately), such as that found in Canada, can be understood only by simultaneously considering asymmetry of information, the attendant agency relationship and potential for the supply-side to influence resource allocation; externalities; the dynamics between insurance, providers and technological development and diffusion; and broader social goals concerning income redistribution (generally favoring redistribution from the healthy wealthy to the sick poor). All of these potential effects of single-payer public insurance fall outside the standard insurance model, and can be understood only in light of the full nature of health care and health care markets.41

More generally these analyses highlight why analyzing each feature of health care in isolation provides only limited guidance to policy. Health care is a classic second best world in which one cannot be sure that prescriptions to fix one source of inefficiency, based on models that do not reflect the other distinctive features of health care, will in fact improve resource allocation. Jointly analyzing the features of health care, and the markets for health care and health care insurance in particular, can lead to policy prescriptions quite different than may be derived considering each in isolation.


URL: https://www.sciencedirect.com/science/article/pii/S1574006400801614

XE among US financial institutions: c.1991–2017

Roger Frantz, in The Beginnings of Behavioral Economics, 2020

English, Grosskopf, Hayes, and Yaisawarng (1993). Banks' asset size

The authors estimate X-efficiency and allocative efficiency among 442 US banks whose output and input data are taken from the Fed’s Functional Cost Analysis program, using a parametric technique. The average dollar value of all outputs, investment income plus loans, was $135 million: the banks are relatively small. The average level of X-efficiency was 0.75. Average X-efficiency among banks with state charters, national charters, unit branch banks, and branching banks was 0.74, 0.77, 0.75, and 0.76, respectively. Banks with total assets under $40 million, the smallest asset group, averaged 0.72; banks with total assets over $300 million, the largest asset group, averaged 0.86. The authors also show that banks were, on the whole, allocatively inefficient. Specifically, banks offered too many consumer and commercial loans and did not have enough investment income; they should have increased real estate loans at the expense of investment income, holding more real estate loans and fewer consumer loans. The result of this allocative inefficiency is that banks were not producing the revenue-maximizing output mix.


URL: https://www.sciencedirect.com/science/article/pii/B9780128152898000083

Efficiency in Health Care, Concepts of

D. Gyrd-Hansen, in Encyclopedia of Health Economics, 2014

Why Measuring Efficiency is Pertinent in the Context of Health Care

In theory the market for goods will automatically reach production and allocative efficiency if certain criteria are fulfilled. On the demand side, buyers in the market must be facing the full price of the good at the point of purchase and they must be able to make rational choices based on perfect and full information of the good. On the supply side, suppliers must be profit maximizers, there should be many competing suppliers, and there should not be factors deterring suppliers from moving easily in and out of the market.

In the market for health care services, these criteria are not fulfilled. First, there is a high degree of asymmetry of information, and those demanding health care services are not necessarily fully aware of which services they need, nor are they always able to judge the effectiveness of the services. Moreover, there is uncertainty regarding when the services are needed and how much they will cost. The economic uncertainty creates a market for health insurance, which means that the condition of the buyer facing the full price of the good is often not fulfilled. On the supply side, suppliers have been restricted from freely accessing the market in order to protect the less than perfectly informed patient/consumer. For example, doctors and other health care personnel have to be certified. Further, there has been a push for establishing nonprofit health care organizations on the market, again in order to protect the patient from profit-seeking suppliers.

Hence, on the supply side there are factors that undermine a competitive market and thus the mechanisms that would ensure health care services are produced at minimum cost. This means that production efficiency is not guaranteed. At the same time, consumers/patients are often not equipped to judge which health care services they require and are unlikely to face the full price at the time of purchase. This means that there is insufficient basis for ensuring allocative efficiency. Consequently, production efficiency and allocative efficiency are not guaranteed by market forces, and ensuring efficiency on the market for health care services is, therefore, an important issue for health care planners, politicians, and health economists.


URL: https://www.sciencedirect.com/science/article/pii/B9780123756787002029

X-efficiency. An intervening variable

Roger Frantz, in The Beginnings of Behavioral Economics, 2020

III Final comments on this chapter, and looking ahead to Chapter 7

Leibenstein believed that efficiency was important in economics but that the welfare loss from allocative inefficiency was very small, maybe .001 to .0001 of GNP. The other form of efficiency was of an unknown nature, so Leibenstein called it X-efficiency. It is an efficiency of the internal workings of the firm, and the corresponding losses have been estimated at about .04 of GNP. One reason why X-inefficiency is larger than allocative inefficiency is that the latter applies only to the output gap Qc − Qm, while X-inefficiency covers all outputs.
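
A back-of-the-envelope calculation (with invented prices and quantities) shows why the base over which each loss applies matters:

```python
# Why X-inefficiency losses can dwarf allocative (Harberger-triangle) losses:
# the triangle covers only the output gap Qc - Qm, while X-inefficiency
# raises cost on every unit actually produced. Numbers are illustrative only.
Pc, Pm = 10.0, 12.0    # assumed competitive vs. monopoly price
Qc, Qm = 100.0, 80.0   # assumed competitive vs. monopoly output

# Allocative (deadweight) loss: triangle over the lost output Qc - Qm.
allocative_loss = 0.5 * (Pm - Pc) * (Qc - Qm)

# X-inefficiency: suppose organizational slack raises unit cost by 5%
# on all Qm units produced.
x_loss = 0.05 * Pc * Qm

print(allocative_loss, x_loss)
```

Even though the assumed slack is only 5%, the X-inefficiency loss (40) is double the allocative loss (20), because it applies to the whole output base rather than only to the narrow gap Qc − Qm.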

X-efficiency theory was developed because allocative efficiency and, in general, neoclassical theory are not the be-all and end-all of economic behavior. They leave gaps between stimuli and responses, for example, between inputs and outputs. X-efficiency fills in the gaps to better understand the links between inputs and outputs, or output and costs. This makes X-efficiency theory an intervening variable. The assumptions of the theory are those which help explain the links between inputs and outputs: things such as effort discretion, incomplete contracts, and production functions.

Leibenstein used X-efficiency theory to explain the effect of group norms on effort, and game theory to explain productivity. He favored procedural rationality over substantive rationality and listed numerous procedures which lead to rational behavior. No experiments were done, which makes him a first-generation behavioral economist, an “old” behavioral guy.

The next several chapters are a summary of empirical research on X-efficiency theory. They cover all regions of the world and many different industries.


URL: https://www.sciencedirect.com/science/article/pii/B978012815289800006X

Standing on the edge: Lucas in the Chicago tradition

Peter Galbács, in The Friedman-Lucas Transition in Macroeconomics, 2020

Friedman as a domineering character of post-war Chicago economics

A belief in the functioning, pervasiveness, ubiquity, ineradicableness, and allocative efficiency of a free and individualistic market economy (Stigler, 1988, p. 164) and a bias towards market solutions, whilst neglecting some of the consequences of the resulting power structure (Samuels, 1976a, p. 382); favouring neoclassical economics and extending it to cover a diverse set of individual and social phenomena (Samuels, 1976b, pp. 3–4); a strong scepticism against Keynes’s ideas, bringing to the fore the relationship between macroeconomic fluctuations and money-supply dynamics and, at the same time, focusing on the money stock as the instrument to stabilize the economy or, as the other side of the same coin, regarding significant contractions as phenomena triggered by drops in the money supply; and adding a strong empiricist flavour to economics (Hammond, 2011, p. 38), meaning the placing of theory in an empirical context: Chicago economics was organized around dogmas Friedman very effectively advocated. On this showing the question remains open whether Friedman was ‘the’ founder or ‘the’ best-known adherent of the school. No doubt, for the public he became the face of Chicago economics (Ebenstein, 2015, pp. 122–123; Becker, 1991, p. 146). Even though all the assessments on Chicago economics can be regarded as attempts to answer the questions over the exact date of establishment, the names of the leader and the members, and the list of tenets, all commentators agree that Chicago economics has been a theory underpinned by a distinguishing methodology and, at the same time, a resulting economic policy stance or even a philosophy.1

A list is always easy to draw up, whilst it may be difficult for us to come to an agreement on its content. A component of Miller’s catalogue of features is Chicago and Friedman’s alleged equating of actual and ideal markets. It is highly unlikely that anyone would neglect drawing the distinction, even if this charge against all versions of neoclassical economics is voiced time and again. The problem stems from the positivist methodology, where Friedman (1953/2009) regards neoclassical models as ideal types, which, in spite of Weber’s objections, are easy to interpret as objects of desire or as ideals to strive for. This is exactly the misstep Wilber and Wisman (1975) take. If (1) our model is built upon the assumptions of perfect competition and market clearing, and (2) we conceive competition as beneficial to real societies, then it is easy to take a further step and jump to the conclusion that the model’s abstract environment and reality are identical. To complete this amalgamation, however, one needs a further statement. Thus Miller added a third claim to the previous two. Accordingly, (3) real markets are supposed to work along the rules of perfect competition. Advocating economic freedom and market mechanisms nevertheless does not lead to mixing up ‘there is’ and ‘there should be’. To be specific, (1) does not imply (2), and (2) does not imply (3) either. The charge is thus empty.

In the background of the amalgamation argument, however, there also stands an implicit methodological consideration, according to which the elements of reality omitted and distorted by the model of (1) are so negligible that their existence (or rather their almost non-existence) does not prevent us from intermingling. In the opposite case, the obvious presence of abstraction and idealization would certainly stop us from putting the sign of equality even when (1) and (3) are true. In other words, on the basis of some sober methodological considerations we would by no means regard model and reality as identical even if perfect competition known from theory ruled the actual markets.

This latter conclusion is important as it leaves open the question of the ontological status of perfect competition (it is only an assumption and limiting concept or a dominant and perceivable component of reality). Thus the charge of intermingling is rejected not on the ground of what Chicago economics and Friedman thought of the degree of freedom of real markets. Due to the presence of abstraction and idealization in theorizing these two settings cannot be regarded as identical in any sensible way. Moreover, the question of the ontological status cannot be answered without doubts. What does it mean that for Chicago economics ‘the power of business and personal wealth in the market is greatly exaggerated by critics’ (Samuels, 1976b, p. 9)? Is it that monopolies do exist, however, they are unimportant (Stigler, 1988, p. 164)? Or that their overall impact on consumers is rather beneficial (Van Horn & Mirowski, 2010, p. 198 and pp. 204–205)? A similarly contentious issue would be the neutrality of money, which holds in reality, but only in the long run. Instead, the argument underlining the nature of abstraction and idealization is placed on a methodological footing. This problem recurs in Chapter 4, where in addition to some brief remarks on the neoclassical founding fathers Lucas’s models are introduced as cases of an accurate separation of reality and theory. This short detour, however, is insignificant in terms of the main thread of the present discussion.

As it was Friedman who dominated the school in the 1950s (besides him and George Stigler, Aaron Director’s and Allen Wallis’s contributions stand out), everybody in his academic environment was eager to be absorbed in his ideas. Thanks to his intellectual power, at Chicago he usually encountered agreement (Reder, 1982, p. 32). Martin Bronfenbrenner (1962), by defining post-war Chicago economics with reference to Friedman’s economic policy suggestions (associating economic freedom with allocative efficiency, whilst neglecting the distributional effects of economic policies) and methodological recommendations (disregarding descriptive realism), also emphasizes Friedman’s impact. Samuelson (1991, p. 538) refers to the post-war developments as the ‘Friedman Chicago school’, contrasting it with the ‘Knightian Chicago school’ that was of fundamentally different nature in terms of its ideas on economic and social policy or political philosophy. All in all, Friedman’s dominance is a recurrent element in the narratives.

Miller (1962) placed a huge emphasis upon Friedman’s position whilst keeping away from the trap of characterizing Chicago economics as a Friedmanian one-man show. He applied the formulas ‘Friedman and others’ and ‘other economists in the school’ or ‘Friedman and other modern Chicagoans’ for description. At the same time, he could also avoid rushing into tedious battologies. And he is admittedly right: Friedman, left to his own devices, so to speak, would never have been able to transform general economic thinking. Stigler (1977, pp. 2–3) explained this fact along arguments taken from the sociology of science. Any science at any time consists of dogmas too stable for a single scholar to considerably change the doctrinal composition of her discipline. This explanation establishes a harmony between Friedman and the members of the school by emphasizing the integral role of the latter. In many respects, and in contrast with a professor-disciples relationship, this role is equal in rank. A school of scholars of equal academic status has the power to exert long-lasting effects on the body of science.

It is interesting to note that Stigler (1962, p. 71) as an insider disapproved of these proposed definitions of Chicago economics. It is especially the emphasis upon Friedman’s dominance that evoked his objection2 (‘[Friedman] has not been ignored at Chicago, but I believe that his influence on policy views has been greater elsewhere than here’). On the one hand, he had reservations about the label ‘Chicago economics’ obscuring some essential disagreements. As Chicago economists have always been divided along diverse and serious differences of thought even in one generation, let alone intergenerationally, for him it made no sense to talk about a uniform camp. And on the other hand, as he argued, the views attributed to Chicago economics failed to define Chicago economists. The vast majority of these ideas were shared by a lot of non-Chicagoan professionals. As a consequence, Stigler regarded these suggested definitions only as some precarious attempts to identify Chicago economics with Friedman’s framework—which, however, does not preclude that such definitions are true. More than a decade after Miller’s provocative and oft-debated paper Friedman was still widely believed to be the defining character of Chicago economics. For instance, Wilber and Wisman (1975) agreed to study the methodological principles of Chicago economics in terms of Friedman’s methodology.

Intriguing as it is, Friedman (1974, p. 11) himself explicitly approved of the earlier theory- and methodology-based definitions.3 Not only did he mention the extension of neoclassical theory or the quantity-theoretic interpretation of monetary policy as some elements of Chicago economics’ distinctive approach, but he also underlined the unity of theory and facts or empirical data. The latter, however, is not a distinctive feature of Chicago economics for the Chicago school of sociology also emphasized this integrity. Thus Friedman did not narrow the label ‘Chicago school’ to economics, but extended it to pragmatic philosophy, sociology, or even political science. In other words, Chicago economics seems to have been a constituent part of a comprehensive Chicago school of social sciences. All the characteristics that well describe Chicago economics (or sociology) can be related to a broader field of Chicagoan social scientific thinking.

Taking into account the huge emphasis upon Friedman’s contribution, the mature form of Chicago economics had not emerged before Friedman’s time (Emmett, 2008; Ebenstein, 2015, pp. 100–101). As far as the chronological order is concerned, the first ‘official’ reference to Chicago economics comes from Aaron Director’s 1947 preface to Henry Simons’ ‘Economic policy for a free society’, published in 1948 (Ebenstein, 2003, p. 166), where Director (1947/1948) nonetheless labelled Simons as the head of the school. Stigler (1988, pp. 148–150) also argued in favour of this genealogy: he dated the foundation to the post-war years and the wide professional acknowledgement of the school’s existence to the mid-1950s.4 Jacob Viner was in two minds about the birth of the school, whilst he also reported on its well-organized operation from 1946, not sorting himself amongst the members (Jacob Viner’s letter to Don Patinkin. November 24, 1969. George Stigler papers. Box 14. ‘The Chicago school of economics, 1979’ folder). Similarly, Emmett (2015) dates the formation period to the late 1940s and early 1950s.

The ultimate foundations of Chicago economics, however, are to be found earlier, and this fact raises the issue of continuity with the interwar period. As the history of modern Chicago economics started in the interwar years, it is by no means astounding that in his review on the pre-history Friedman (1974, pp. 14–15) denies all forms of conformity and a closed or uniform orthodoxy. For Friedman the Department of Economics was a leading centre of economic heterodoxy, where disputes formed an integral part of education. Rutherford (2010) describes the period prior to the modern school as the years of powerful heterodoxy. This heterodoxy was of course not independent of the general heterodoxy having characterized interwar economic thinking in America, which ceased to exist by the post-war period, and did so not only in Chicago. Reder (1982, p. 3) refuses to relate the adherents of heterodoxy to the members of the post-war Chicago school. On this account the history of Chicago economics was a shift from heterodoxy towards neoclassical orthodoxy. This shift did not lead to the extinction of heterodox views, though these ideas have lost their dominance over orthodox thinking.

Dwelling upon the nature of interrelations, in his reconstruction Emmett (2006a, 2015), emphasizing Frank H. Knight’s role, suggests a continuity interspersed with dramatic changes and breaks, and so does Reder (1982). Miller (1962, pp. 64–65) also accepts the idea of a special mix of continuity and breaches. However, some commentators disagree on the narratives about continuity and Knight’s alleged central role. Bronfenbrenner (1962, pp. 72–73) discusses two separate factions, and with reference to the interwar school he underlines the impact of Jacob Viner and the Knight-protégé Henry Simons. In terms of the evolution of the Department of Economics the idea of continuity can be emphasized only at the price of some serious distortions. To weaken the plausibility of the continuity-based narratives further, for Coats (1963) the appearance of the Knight–Viner–Simons gang also resulted in a break. Van Horn and Mirowski (2009, 2010), arguing against the ‘conventional’ narrative, regard the emergence of the Chicago school as an endeavour in neoliberal political philosophy and emphasize the role played by Henry Simons, Aaron Director and Hayek. As ‘the policy and scientific views interact’ (Stigler, 1979, p. 6), or as ‘[t]he relationship between a school’s scientific work and its policy position is surely reciprocal’ (Stigler, 1977, p. 8), probably both narratives are correct: Chicago economics can easily be identified ‘by its characteristic policy position’ (Stigler, 1962/1964?, p. 1). It is a fact of life that different facets of the same development may be dominated by different personalities. Reciprocity stands in the fact that policy views require scientific underpinning and any social scientific theory has policy implications. An accentuation of the political philosophy thread can at best uncover some facets the narratives focused on theoretical developments underplay.

A minor albeit illuminating detail: in Section 2 of his paper on Viner, Samuelson (1991) devotes most of the space to Knight, introducing Viner’s personality and role at the Department through Knight’s. Indeed, the post-war development of the school was directed by figures who had been members of the ‘Knight circle’ or ‘Knight affinity group’ back in the 1930s (Emmett, 2009, p. 146). This is why Henderson (1976) also underlines Knight’s contribution: he was founder and teacher in one person, surrounded by students.5 The only exception is Theodore William Schultz, who, responding to departmental initiatives, launched the workshop system that later proved so vital to the school’s success (see Section 2.1.2).

As far as the success of the modern school is concerned, the most important achievement of the interwar era was the establishment of the strong price-theoretic tradition (Van Overtveldt, 2007, pp. 76–81). On Emmett’s account the school was founded in the 1920s–30s, and it was of course Knight and Viner whose teaching laid the foundations of modern price theory for the school of the 1950s. Hammond (2010) agrees with this periodization. Price theory took its place in the curriculum as a framework necessary but not sufficient for understanding social actions and for analyzing individual behaviour taken in a broad sense: as a discipline of limited relevance6 (Emmett, 1998a, 2009, pp. 148–149).

URL: https://www.sciencedirect.com/science/article/pii/B9780128165652000026

Handbook of Health Economics

Pedro Pita Barros, Luigi Siciliani, in Handbook of Health Economics, 2011

3.2.3.2 Cost Efficiency

Non-profit and for-profit hospitals may also differ in terms of costs, technical and allocative efficiency, cost-containment effort, revenues and, obviously, profits. Moreover, such differences may be dampened or amplified depending on the degree of competition in the market and on the payment system (for example, DRG, fixed budget, or cost reimbursement).

In line with Eggleston et al. (2008), Shen et al. (2007) conduct a meta-analysis comparing the financial performance of for-profit, non-profit, and government hospitals in the US. The authors focus on four dimensions: costs, revenues, the profit margin (defined as revenues minus costs, divided by revenues), and efficiency, taking the hospital as a whole as the unit of analysis. The basic empirical approach is to regress total cost, revenues, and profit margins on a dummy variable for hospital ownership (and other control variables). Studies on efficiency involve a two-step procedure: first, efficiency is measured, either through parametric techniques (often stochastic frontiers) or non-parametric ones (such as Data Envelopment Analysis or Free Disposal Hull); a meta-analysis is then conducted to explore variations in results. Overall, the study finds little difference in cost among types of hospital. For-profit hospitals have higher revenues and profits than not-for-profit ones, but the difference is modest in economic terms. Government and not-for-profit hospitals do not differ significantly in terms of profits and revenues. For-profit hospitals tend to be more efficient than non-profit ones. In contrast, Rosko (2001), using a sample of 1,631 US hospitals over the period 1990–1996, finds that for-profit hospitals are more inefficient.
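The efficiency scores compared in such studies are ratios relative to a best-practice frontier: a hospital is fully efficient only if no peer produces more output per unit of input. A minimal single-input, single-output sketch in Python illustrates the idea (the data and the `technical_efficiency` helper are hypothetical; real studies use Data Envelopment Analysis or stochastic frontiers over many inputs and outputs):

```python
# Toy Farrell-style technical efficiency: each unit is scored against
# the best output-per-input ratio in the sample (the "frontier").

def technical_efficiency(units):
    """units: dict mapping name -> (input_used, output_produced).
    Returns dict mapping name -> efficiency score in (0, 1]."""
    ratios = {name: out / inp for name, (inp, out) in units.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

# Hypothetical hospitals: (staff-and-beds input index, patients treated).
sample = {
    "A": (100, 50),   # 0.50 output per unit of input
    "B": (80, 48),    # 0.60 -> defines the frontier
    "C": (120, 48),   # 0.40
}
scores = technical_efficiency(sample)
for name, s in sorted(scores.items()):
    print(name, round(s, 3))
```

Hospital B scores 1.0 (it is on the frontier), while A and C score below 1, the shortfall measuring how far each lies from best practice.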

Several recent studies test for differences in efficiency between public and private hospitals in different European countries, with mixed results. Barbetta et al. (2007) use a sample of 500 Italian hospitals, employing both Data Envelopment Analysis and (econometric) stochastic-frontier techniques to compare how public and private non-profit hospitals responded to the introduction of a DRG-based payment system in the Italian NHS. They focus on technical efficiency, output being measured by the number of patients treated and inpatient days, and input by personnel and the number of beds (as a proxy for capital). They find that non-profit hospitals were generally more efficient than public ones before the introduction of the DRG system (when the two differed in payment system), and that mean efficiency converged after its introduction. Herr (2008) uses a large sample of 1,500 German hospitals to test for differences in cost and technical efficiency. Technical efficiency is measured through a production frontier (as in Barbetta et al., 2007); cost efficiency is measured by regressing total hospital costs on output and input prices. The study employs stochastic-frontier analysis; since the data set is a panel, weaker distributional assumptions on the error terms are required to estimate the degree of efficiency. It finds that private and non-profit hospitals are less costly and more technically efficient than public hospitals. Farsi and Filippini (2008) also use stochastic-frontier techniques to test for differences in cost efficiency between public, for-profit, and non-profit hospitals in Switzerland, and find no significant differences by ownership. Marini et al. (2008) investigate the change of hospital status in England from “public hospital” to “Foundation Trust,” a status which confers more financial independence in the management of eventual surpluses, and less monitoring and control. Since the introduction of Foundation Trusts was phased, they use difference-in-differences methods, taking into account potential endogeneity due to the voluntary decision to become a Foundation Trust. The study finds that the new status had limited impact.
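The difference-in-differences logic used in such phased-reform settings compares the before/after change in the treated group with the same change in a control group, netting out common trends. A minimal sketch, with entirely hypothetical outcome data (not figures from any of the studies cited):

```python
# Difference-in-differences: effect = (change in treated) - (change in control).

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Each argument is a list of outcome values; returns the DiD estimate."""
    mean = lambda xs: sum(xs) / len(xs)
    treated_change = mean(treated_post) - mean(treated_pre)
    control_change = mean(control_post) - mean(control_pre)
    return treated_change - control_change

# Hypothetical average cost indices before/after a status change.
effect = did_estimate(
    treated_pre=[100, 102, 98],
    treated_post=[97, 99, 95],
    control_pre=[101, 99, 100],
    control_post=[100, 98, 99],
)
print(round(effect, 2))  # treated fell by 3, control by 1 -> estimate -2.0
```

The published studies estimate this in a regression framework with covariates and panel structure; the two-group, two-period comparison above is only the core identification idea.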

In most countries entry of public and private hospitals is highly regulated. Chakravarty et al. (2006) provide evidence that in the US for-profit hospitals are more likely to enter/exit the market in response to demand shocks compared to non-profit ones.

Gaynor and Vogt (2003) estimate a structural model of the Californian hospital industry and find that non-profit hospitals tend to have less price-elastic demand and lower prices. They suggest that non-profit hospitals behave in a similar fashion to for-profit ones, but crucially as if they had a lower marginal cost.

URL: https://www.sciencedirect.com/science/article/pii/B9780444535924000153

Handbook of The Economics of Innovation, Vol. 1

Wesley M. Cohen, in Handbook of the Economics of Innovation, 2010

1 Introduction

For much of the twentieth century, industrial organization economists examined the determinants of market structure and its effect on price competition and allocative efficiency, largely disregarding technological change. The writings of Joseph Schumpeter in the first half of the century pushed economists to appreciate the fundamental role of technological progress in economic growth and social welfare. Since that time, economists have increasingly appreciated the economic significance of technological progress, and it is now common to hear that a firm’s, an industry’s, or even a nation’s capacity to progress technologically underpins its long-run economic performance. Stimulated by Schumpeter’s writings and Solow’s (1957) subsequent “discovery” of the contribution of technological change to economic growth, industrial organization economists have conducted numerous empirical studies on the determinants of innovative activity and performance. In this chapter we review the empirical literature and highlight the questions addressed, the approaches adopted, and impediments to progress in the field.

Some of these impediments are ironically due to Schumpeter himself. In making his case for the importance of innovation broadly construed, Schumpeter rejected the antitrust orthodoxy of his day. He argued that the large firm operating in a concentrated market had become the locus of technological progress, and, therefore, an industrial organization of large monopolistic firms offered decisive welfare advantages.

Provoked by these claims, industrial organization economists (e.g., Mason, 1951) became preoccupied with the effects of firm size and market concentration on innovation and neglected other, perhaps more fundamental determinants of technological progress. This review will briefly examine this literature on the relationship between innovation and market structure and firm size. We will then, however, review more recent research that has both recast the “neo-Schumpeterian” preoccupations and moved beyond them to study the determinants of technical advance more broadly.

This review updates and draws heavily from the survey written by the author (Cohen, 1995), which in turn drew extensively from a prior survey written by Richard Levin and the author (Cohen and Levin, 1989). As in the prior surveys, we review the empirical literature on the characteristics of industries and firms that influence industrial innovation. In addition to the empirical literature, we also selectively review the case study and institutional literature that often provides richer, more subtle interpretations of the relationships among innovation, market structure, and industry and firm characteristics. Although the literature considered here is extensive, this survey examines only studies of innovation that fall under the rubric of industrial organization economics. Moreover, given a rapid growth in this literature since the 1995 review, the review of the more recent empirical literature will be selective. Section 2 reviews the “neo-Schumpeterian” literature that examines the effects of firm size and market concentration upon innovation. After a brief synopsis of the empirical findings of the Schumpeterian literature, Section 2.3 focuses on questions of interpretation and the identification of major gaps in the empirical literature. Section 3 discusses the more modest literature that considers the effect on innovation of firm characteristics other than size. Section 4 covers recent literature that considers three classes of factors that affect interindustry variation in innovative activity and performance: demand, appropriability, and technological opportunity conditions. We conclude in Section 5.

URL: https://www.sciencedirect.com/science/article/pii/S016972181001004X

Evaluating Efficiency of a Health Care System in the Developed World

B. Hollingsworth, in Encyclopedia of Health Economics, 2014

Abstract

This article discusses alternative means of measuring efficiency. The foundations of efficiency measurement are developed in terms of ‘technical’ and ‘allocative’ efficiency, using the two main techniques available: ‘data envelopment analysis’ and ‘stochastic frontier analysis’. These techniques rest on work in economics published over 50 years ago, but the method of using ratios to measure how inefficient production is relative to an ‘ideal’ of efficiency remains the same. The differences between the techniques are explored, with some words of caution on their practical application. Finally, some guidelines on how to use efficiency measurement are listed.

URL: https://www.sciencedirect.com/science/article/pii/B9780123756787002042

Pricing and User Fees

P. Dupas, in Encyclopedia of Health Economics, 2014

Abstract

This article is concerned with the issue of user fees (or user charges) for public health services. The implications of user fees for cost-effectiveness, allocative efficiency, equity, progressivity of public healthcare spending, and quality of service are discussed. Each of these is a desirable end in itself, and so each is an important factor in the optimal pricing decision; however, they are not always compatible with one another, and all have to be financed from a single, typically constrained, budget. Governments therefore have to trade off among them. The theory and empirical evidence on the effects of user fees on each factor are reviewed.

URL: https://www.sciencedirect.com/science/article/pii/B9780123756787001231

How do you know if a market is allocatively efficient?

A firm is allocatively efficient when its price is equal to its marginal costs (that is, P = MC) in a perfect market.
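The P = MC condition can be made concrete with a worked example: expand output as long as consumers value the next unit (the price they are willing to pay) at least as much as it costs to produce. The demand and cost functions below are hypothetical, chosen only to illustrate the condition:

```python
# Finding the allocatively efficient quantity where price (marginal
# benefit, read off a hypothetical linear inverse demand curve) equals
# marginal cost.

def inverse_demand(q):
    return 100 - 2 * q       # willingness to pay for the q-th unit

def marginal_cost(q):
    return 10 + q            # cost of producing the q-th unit

# Produce every unit whose marginal benefit covers its marginal cost.
q = 0
while inverse_demand(q + 1) >= marginal_cost(q + 1):
    q += 1
print(q, inverse_demand(q), marginal_cost(q))  # 30 40 40
```

At q = 30 the price (40) exactly equals marginal cost (40): producing a 31st unit would cost more than consumers value it, and stopping short of 30 would leave units worth more than they cost unproduced.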

Which of the following statements are true of allocative efficiency?

The marginal cost and marginal benefit of producing each unit of output are equal.

What is allocative efficiency quizlet?

A situation in which resources are allocated such that the last unit of output produced provides a marginal benefit to consumers equal to the marginal cost of producing it.

What does allocative efficiency depend on?

Allocative efficiency is based on the amount of production, while productive efficiency is based on the method of production.