
Spring 2008 | Volume 4 | Issue 1


Article

Rethinking the economist's evaluation toolkit in light of sustainability policy

Stefan Hajkowicz
CSIRO Sustainable Ecosystems, 306 Carmody Road, St. Lucia, QLD 4067 Australia (email: Stefan.Hajkowicz@csiro.au)

The dominant economic evaluation technique is benefit-cost analysis (BCA). However, sustainability policy must handle outcomes that cannot easily be quantified in monetary units. Multiple criteria analysis (MCA) is emerging as an alternative, or complementary, economic evaluation tool, yet the economics profession has been slow to adopt it. This paper first explores the role of MCA within the economist's evaluation toolkit alongside BCA, cost-effectiveness analysis (CEA), and cost-utility analysis (CUA), and then proposes a process for selecting an appropriate evaluation method. The choice of technique will depend on the extent to which environmental goods can be valued in monetary units. The paper argues that MCA has an expanded role to play alongside BCA (and the other methods) to ensure that sustainability policies are realized.

KEYWORDS: cost analysis, economic analysis, environmental economics, resource management, management tools, policy, evaluation

Citation: Hajkowicz, S. 2008. Rethinking the economist’s evaluation toolkit in light of sustainability policy. Sustainability: Science, Practice, & Policy 4(1):17-24. http://sspp.proquest.com/archives/vol4iss1/0709-021.hajkowicz.html.

Published online March 25, 2008


Introduction

Multiple criteria analysis (MCA) is an evaluation framework that ranks or scores the performance of decision options (e.g., policies, projects, locations) against multiple objectives measured in different units. Typically, the criteria are weighted by decision makers to reflect their relative importance.1 The MCA approach emerged within the field of operations research during World War II, with early applications in military planning (e.g., Eckenrode, 1965). MCA's theoretical foundations can be traced to multiattribute utility theory (MAUT), developed by Keeney & Raiffa (1993), and to the axioms of utility measurement first supplied by von Neumann & Morgenstern (1944).

In environmental and resource economics, MCA has generally been well received. Many researchers find it a useful supplement to conventional benefit-cost analysis (BCA) when intangible nonmarket goods are important (Eder et al. 1997; Heilman et al. 1997; Joubert et al. 1997; Prato, 1999; Fernandes et al. 1999; Dunning et al. 2000). MCA has hundreds of applications in natural resource management (for reviews see Romero & Rehman, 1987; Hayashi, 2000). However, not all resource economists are convinced. For example, Bennett (2005) refers to MCA as an "avoidance strategy" used to sidestep a rigorous and complete BCA.

Given that MCA application is becoming increasingly common, such criticisms are worth exploring. This paper examines MCA’s role in the economic appraisal of policy options in light of sustainability requirements.2 It argues that MCA is a valid and useful evaluation tool for sustainability appraisal when nonmarket impacts are important.

What is Multiple Criteria Analysis (MCA)?

The use of MCA to support public- and private-sector policy decisions has grown steadily since the 1970s. Two decades ago, Romero & Rehman (1987) reviewed 150 MCA applications in fisheries, forestry, water, and land resource management. More recently, Hayashi (2000) reviewed over 80 published studies in agriculture. In energy planning, Pohekar & Ramachandran (2004) identify more than 90 published MCA applications. Steuer & Na (2003) examine 265 applications of MCA in financial decision making. Today there are hundreds of MCA techniques (for a recent review see Figueira et al. 2005a), and Weistroffer et al. (2005) identify 81 MCA software packages, many of which are commercially available.

An MCA model can be represented with an evaluation matrix (X) of n options and m criteria. The evaluation matrix contains performance measures where xi,j is the raw performance score assigned to option i against criterion j. Typically, though not always, the relative importance of criteria is measured with a weights vector W where wj represents the importance of the jth criterion. Both W and X may contain qualitative (ordinal) or quantitative (cardinal) data. An evaluation matrix is often structured as follows:

                 Option i=1   Option i=2   ...   Option i=n
Criterion j=1    x1,1         x2,1         ...   xn,1
Criterion j=2    x1,2         x2,2         ...   xn,2
...              ...          ...          ...   ...
Criterion j=m    x1,m         x2,m         ...   xn,m

An MCA model always has at least two criteria and two options. If the purpose of the MCA is discrete choice, i.e., to select one or more options, an initial check can be made for strict dominance, that is, for options that are outperformed by another option on all criteria. If vi,j is the transformed performance score (where a higher value is better) of xi,j, option i can be considered strictly dominated by i' if:

        vi',j > vi,j   for all criteria j = 1, 2, …, m

The pretest for strict dominance can sometimes make the decision analysis elementary, negating the requirement for more advanced MCA models.
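The dominance pretest can be sketched in a few lines of code. A minimal illustration (the function name and data are hypothetical), treating an option as strictly dominated when some other option scores strictly higher on every transformed criterion, as defined above:

```python
# Pretest for strict dominance over transformed scores v[i][j], where a
# higher value is better (i indexes options, j indexes criteria).
# Option i is strictly dominated if some other option scores strictly
# higher on every criterion.

def strictly_dominated(v: list[list[float]]) -> list[bool]:
    """Return one flag per option: True if another option beats it on all criteria."""
    n, m = len(v), len(v[0])
    flags = []
    for i in range(n):
        dominated = any(
            all(v[k][j] > v[i][j] for j in range(m))
            for k in range(n) if k != i
        )
        flags.append(dominated)
    return flags
```

Running the pretest on a small matrix, e.g. `strictly_dominated([[0.2, 0.3], [0.5, 0.6], [0.5, 0.1]])`, flags only the first option; the undominated options still require a full MCA model to separate.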

Figure 1 The multiple criteria analysis decision making process (adapted from Hajkowicz, 2003).


The stages of MCA (Figure 1) include:

  1. Problem structuring: This crucial stage of MCA, typically requiring the bulk of the effort, involves the identification of criteria and decision options and obtaining performance measures (Janssen, 2001).
  2. Criteria weighting: This involves obtaining information from decision makers about the relative importance of criteria. Weights may be expressed at either an ordinal or cardinal measurement level.
  3. Criteria transforming: As the criteria are measured in different units, they must be transformed into commensurate units before aggregation in the ranking or scoring function.
  4. Option ranking and/or scoring: The weights and transformed performance measures are combined to determine the overall performance of each option relative to the other options.
  5. Sensitivity analysis and decision making: Varying the MCA methods, performance measures, and weights tests the sensitivity of the result. The decision maker(s) can then make a final choice.
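Stage 3 can be illustrated with the common linear (min-max) transformation, which rescales each criterion column to the unit interval; criteria where lower raw values are better (e.g., cost) are inverted. This is only one of several transformation functions used in practice, and the code is a hypothetical sketch:

```python
# Linear (min-max) transformation of one criterion column to [0, 1].
# Set minimize=True for criteria where lower raw scores are better.

def transform(column: list[float], minimize: bool = False) -> list[float]:
    lo, hi = min(column), max(column)
    if hi == lo:
        # A constant criterion cannot distinguish options; map it to zeros.
        return [0.0 for _ in column]
    scaled = [(x - lo) / (hi - lo) for x in column]
    return [1.0 - s for s in scaled] if minimize else scaled
```

For example, a cost column [10, 20, 30] becomes [1.0, 0.5, 0.0] with minimize=True, making it commensurate with benefit criteria measured in entirely different units.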

Among a wide variety of MCA algorithms available to attain a final ranking or scoring of the decision options, some of the more common are the Analytic Hierarchy Process (AHP) (Saaty, 1987), weighted summation (Figueira et al. 2005b), ELECTRE (Roy, 1968; Figueira et al. 2005b), PROMETHEE (Brans et al. 1986), and Compromise Programming (Zeleny, 1973; Abrishamchi et al. 2005).3 These are a few of the many different methods to “solve” an MCA problem. It has been shown that changing the method can alter the result, although the differences are typically minor (Gershon & Duckstein, 1983; Ozelkan & Duckstein, 1996; Eder et al. 1997; Raju et al. 2000). Choosing the best MCA method for a given task is a considerable challenge (Tecle, 1992). Consideration needs to be given to the measurement scale of evaluation data (ordinal or cardinal), the nature of criteria transformations, the presence of uncertain input data, the existence of intercriterion dependencies, the number of decision makers (individual, group, or society), and how decision makers would like to interact with the decision model.

Arguably, the most commonly applied MCA technique, possibly by virtue of its relative ease of computation, is linear-weighted summation (Howard, 1991; Zanakis et al. 1998). This approach determines overall performance scores for decision options (ui) by:

        ui = Σ(j=1…m) wj vi,j                                (1)

where:

        Σ(j=1…m) wj = 1

        0 ≤ vi,j ≤ 1
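Equation (1) translates directly into code. The sketch below (names are illustrative) normalizes the weights to satisfy the sum-to-one condition and assumes the performance scores have already been transformed to commensurate units:

```python
# Linear weighted summation: u_i = sum_j w_j * v_ij, with weights scaled so
# that sum(w) == 1 and v assumed to lie on a common scale such as [0, 1].

def weighted_summation(v: list[list[float]], w: list[float]) -> list[float]:
    total = sum(w)
    weights = [wj / total for wj in w]  # enforce the sum-to-one condition
    return [sum(wj * vij for wj, vij in zip(weights, row)) for row in v]
```

The option with the highest ui is the preferred one under this model.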

Alternative Economic Evaluation Frameworks

The appropriateness of MCA depends upon the suitability of other economic evaluation frameworks. Four main economic evaluation frameworks are available:

  • Benefit-cost analysis (BCA)
  • Cost-effectiveness analysis (CEA)
  • Cost-utility analysis (CUA)
  • Multiple criteria analysis (MCA)

A CEA can be performed when the benefits of the decision options are adequately measured by a single unit, e.g., tons of soil. Costs in CEA are still computed with standard discounted cash flow (DCF) analysis. The aim is to identify the option that (a) achieves a target outcome at least cost, or (b) maximizes the outcome measure subject to a cost constraint.

In CUA, the costs are still computed via standard DCF, but the benefits are measured by multiple attributes in different units. CUA emerged in the early 1980s in healthcare economics (Drummond et al. 1997). Today, Quality Adjusted Life Years (QALYs) are routinely calculated to measure the nonmarket benefits of patient treatment or healthcare programs. The attributes used to determine a QALY score are sensation, mobility, emotion, cognition, self-care, pain, and fertility. These are weighted and collapsed into a single numeraire (unit of value) using multiattribute utility theory. The CUA approach is now well established in healthcare economics and has emerging application in environmental and resource economics (Cullen et al. 2001).

Although the term “CUA” is not used by Ribaudo et al. (2001), they describe how such an approach was applied to select conservation contracts under the United States Conservation Reserve Program (CRP). The benefits of contracts were measured with a multiattributed environmental benefits index (EBI). Combining the cost of each option with the EBI enabled purchasing decisions. The BushTender program in Victoria (Australia) is predicated on a similar concept and makes purchasing decisions on the basis of a biodiversity benefits index (BBI) and contract cost (Stoneham et al. 2003).
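The purchasing logic behind programs such as the CRP and BushTender can be caricatured as ranking contracts by benefit index per dollar. The index values and contract costs below are invented for illustration; neither program's actual formula is reproduced here:

```python
# Rank conservation contracts by multiattribute benefit index per dollar
# of contract cost, highest ratio first. Illustrative data only.

def rank_by_benefit_per_dollar(
        contracts: list[tuple[str, float, float]]) -> list[tuple[str, float, float]]:
    """contracts: (name, benefit_index, cost)."""
    return sorted(contracts, key=lambda c: c[1] / c[2], reverse=True)
```

A contract with a modest index but a low price can outrank a high-index, expensive one, which is exactly the cost-utility tradeoff such purchasing schemes exploit.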

The process for choosing which of BCA, CEA, CUA, and MCA to apply depends largely on the valuation of benefits (Figure 2). If benefits are adequately measured in monetary units, then BCA provides an appropriate framework. If this is not the case, the analyst will need to contemplate nonmarket valuation (NMV), which will require attention to both reliability and cost effectiveness. If it is decided that NMV is not feasible or worthwhile, then CUA may be appropriate. If there is no monetary cost data, e.g., the options are strategic policy directions, then MCA can be used. It is also noted that MCA can be used with “cost” as one of the criteria.
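The selection process just described reduces to three yes/no questions, paraphrased here as a sketch (the argument names are mine, not the flowchart's labels):

```python
# Choose an evaluation framework following the selection logic in the text:
# monetized benefits -> BCA; no monetary cost data -> MCA; otherwise CEA
# when a single outcome unit exists, CUA when benefits are multiattributed.

def choose_framework(benefits_in_money: bool,
                     single_outcome_unit: bool,
                     cost_data_available: bool) -> str:
    if benefits_in_money:
        return "BCA"
    if not cost_data_available:
        return "MCA"
    return "CEA" if single_outcome_unit else "CUA"
```

The first question folds in the NMV feasibility check: benefits count as "in money" only if market valuation or a reliable, cost-effective NMV can make them so.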

Figure 2  Process for choosing whether to use BCA, CEA, CUA or MCA.


This article argues that all four frameworks are able to measure benefits adequately and are potentially applicable in different situations. None is inherently better or more robust; all rest on solid theoretical foundations. The key determinant of which to use is valuation.

The Limits to Valuation?

The valuation of environmental resources has attracted considerable attention over the past several decades (Adamowicz, 2004). The appropriateness of different valuation techniques, and the suitability of valuation itself, has been heavily debated. There are three main approaches to valuing environmental resources.

First, cost savings and avoidance (CSA) is a set of valuation techniques that is limited to market impacts. It includes measures of preventative and mitigatory expenditure (e.g., Spurgeon, 1998), lost production (e.g., Hajkowicz & Young, 2005), ameliorative expenditure (e.g., Abdalla et al. 1992) and asset damage repair costs (e.g., Tol, 1996) as a consequence of an environmental problem. These analyses typically ask: “How much is the environmental problem costing?” Or, conversely, “How much is being saved because of the presence of a well functioning environment?”

Second, revealed preference techniques estimate the price of a nonmarket good from a closely related proxy market good. Hedonic pricing (e.g., Pearson et al. 2002) and the travel cost method (e.g., Chen et al. 2004) are types of revealed preference techniques. They assess the premium being paid in a real market (e.g., property market) to access a nonmarket environmental good (e.g., scenic views).

Third, stated preference techniques are based on hypothetical questions that are posed to survey respondents. Contingent valuation asks survey respondents about their willingness to pay (WTP) for environmental goods or willingness to accept (WTA) compensation for the loss of environmental goods (e.g., Carson et al. 2003). Choice modelling asks respondents to select bundles of environmental goods at different costs and infers prices from their choices (e.g., van Bueren & Bennett, 2004).

If the analyst considers these methods feasible, accurate, and comprehensive, then the flowchart recommends using BCA (Figure 2). While all three approaches provide effective tools for policy analysis, this paper argues that valuation has limitations.

Both CSA and revealed pricing are methodologically strong, but are limited in scope. In contrast, stated preference has practically limitless scope, but is methodologically weaker (Strijker et al. 2000). This means that not all outcomes can be valued in all cases.

CSA and revealed preferences source data from real markets and thereby avoid the methodological difficulties associated with surveys. The drawback with these techniques lies not in their methodology, but in their scope. CSA limits valuation to market impacts, excluding nonmarket goods such as landscape aesthetics and biodiversity preservation. For revealed pricing, scope may be limited due to unavailability of a proxy market for many environmental goods.

Stated preference techniques can broaden the scope of valuation. However, this introduces significant methodological difficulties associated with consumer-preference surveys (Sagoff, 1988; Diamond & Hausman, 1994; McFadden, 1999; Ludwig 2000; Whittington, 2002). Despite advances over time, two problems persist in stated preference survey designs: (a) the marketplace is hypothetical, which creates uncertainties about real consumer behavior; and (b) the respondent is often unfamiliar with, or unaware of, the environmental good under question. Ludwig (2000) observes:

[P]eople are asked to place prices on things that are not ordinarily priced. For some commodities, we form an opinion about a suitable price from long experience in a market. If there is no such experience and no such market, there may be little consistency among responses and little validity in inferences drawn from the responses.

These methodological issues have hindered the use of stated preference valuations by policy makers. Adamowicz's (2004) comprehensive review of hundreds of valuation studies conducted since 1975 found that NMV results are seldom used in real policy decisions despite a vast number of academic studies. Greater use is made of valuations based on market prices, i.e., CSA approaches. This can be attributed either to a failure by policy makers to grasp the relevance of NMVs or to fundamental methodological problems in valuing highly intangible nonmarket goods.

Rather than attempting to express such intangibles in monetary units, the pathway to improved resource allocation may lie in alternative decision-making frameworks. In a review of valuation studies, Adamowicz (2004) concludes that:

The most significant advance in environmental valuation may be to move away from a focus on value and focus instead on choice behavior and data that generate information on choices. Advances in resource allocation are most likely to arise from better understanding of preferences and choice, rather than the generation of more value estimates and catalogues of these measures.

The fields of decision theory and MCA place the focus on choice behavior. They aim to provide tools and processes to help decision makers resolve tradeoffs in a transparent, auditable, and analytically robust manner. In MCA the emphasis is on decision making and value measurement is a means to that end.

Potential Pitfalls (and Solutions) in Using MCA

While nonmarket valuation involving stated preferences has methodological problems, MCA is not a panacea. Some of the common sources of error associated with MCA are:

  1. Incorrect problem structure: The selection of criteria and options to guide the MCA process (i.e., problem structuring) is typically the most crucial analytical task (Janssen, 2001). MCA failures can usually be traced back to poor problem definition. New research into MCA is developing improved means of selecting options and criteria. Mingers & Rosenhead (2004) review several "problem structuring methods" (PSMs). Scheubrein & Zionts (2006) developed an interactive computer model to assist with problem-structuring tasks.
  2. Poor performance data: If the performance measures populating the MCA matrix are inaccurate, the results will also be inaccurate. Sensitivity analysis can help determine the extent to which performance-data uncertainties actually matter (i.e., change the overall ranking) (Kangas et al. 2000; Hyde et al. 2004). Sometimes there is overreliance on qualitative performance measures such as expert-judgment scores. These can be used where there is no other data source, but Keeney & Raiffa (1993) consider quantitative performance criteria preferable. Saaty's (1987) AHP is an MCA technique designed to elicit expert judgments when quantitative data are unavailable.
  3. Inappropriate capturing of decision-maker preferences: The weighting task is complex and can be misunderstood by decision makers. A wide range of weighting methods are available to MCA analysts (Hajkowicz et al. 2000; Roberts & Goodwin, 2002). Edwards & Barron (1994) argue for the use of “swing weights” where decision makers take criterion ranges (minimum and maximum performance scores) into account when assigning weights.
  4. Incorrect application of additive utility: Often a linear additive model provides a reasonable utility function. However, there are some cases where criteria are noncompensatory, for instance when strong performance on one criterion (e.g., in-stream zinc concentrations) does not compensate for poor performance on another (e.g., in-stream arsenic concentrations). Debate about additive utility models is illustrated by the United Nations' Human Development Index (HDI).4 Sagar & Najam (1998) argue the HDI should be computed by a multiplicative utility function as opposed to the current additive form.
  5. Duplicate or overlapping criteria: This difficulty occurs when two or more criteria measure the same underlying attribute. Duplicates can sometimes be detected by searching for unusually high intercriterion correlations, which may indicate that two criteria are measuring the same underlying trend. There are MCA methods designed to overcome these problems (Brauers, 2004).
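The correlation check described in pitfall 5 can be sketched with a plain Pearson coefficient; the 0.95 threshold and the criterion names are illustrative choices, not a standard:

```python
from math import sqrt

def pearson(a: list[float], b: list[float]) -> float:
    """Pearson correlation of two equal-length lists of scores."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sqrt(sum((x - ma) ** 2 for x in a))
    sb = sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def suspect_duplicates(columns: dict[str, list[float]],
                       threshold: float = 0.95) -> list[tuple[str, str]]:
    """Flag criterion pairs whose scores across options correlate suspiciously."""
    names = list(columns)
    flagged = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(pearson(columns[names[i]], columns[names[j]])) >= threshold:
                flagged.append((names[i], names[j]))
    return flagged
```

A flagged pair still needs analyst judgment: two criteria can correlate coincidentally in a small option set while measuring genuinely distinct attributes.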

Some of these issues were highlighted in the Netherlands when the nation's highest administrative court (The Council of State) overruled an MCA that government agencies had used to select a hazardous waste site. The judge's verdict centered on inappropriate MCA methodology, including a failure to set appropriate weights, poor quality data in the evaluation matrix, and inappropriate transformation functions (Janssen, 2001). None of these represented a problem with the MCA technique itself, but rather how it was applied. As with any analytical tool, imperfect MCAs will result from real-world constraints like limited data and time. The field of MCA is evolving rapidly, with many new tools and software packages to help analysts and decision makers avoid methodological pitfalls (Brauers, 2004).

Comparing MCA and BCA

Several researchers have applied both MCA and BCA to the same natural resource management problem and then compared the results (Joubert et al. 1997; Strijker et al. 2000; Brauers, 2004; Brouwer & van Ek, 2004). These studies reach no clear conclusion that either approach is "better"; rather, both have strengths and weaknesses. Strijker et al. (2000) argue that alternatives to BCA are "next-best solutions," but are, nevertheless, required due to practical and methodological drawbacks with environmental valuation. They propose "minimizing the disadvantages of both methods" by using BCA results within the MCA. In a water-planning problem in Cape Town, South Africa, Joubert et al. (1997) take a similar position, suggesting that BCA and MCA are complementary tools.

The differences between BCA and MCA are summarized in Table 1. Arguably, the main difference is how the criteria weights are set and whose preferences are used. In BCA, weights are derived from the marketplace while in MCA weights are specified by decision makers. Also, when an MCA contains a cost criterion, it may be possible to compute the marginal rate of substitution for monetary (versus nonmonetary) outcomes. Some decision-support methods attempt to make this tradeoff explicit (Hammond et al. 1998).
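As an illustration of that marginal rate of substitution, assume linear min-max transforms and weighted summation (the setup of Equation 1, not a method the compared studies prescribe); then a one-unit raw improvement on criterion j trades off against cost at the rate (wj / rangej) / (wcost / rangecost). The numbers below are invented:

```python
# Implied dollar value of a one-unit raw improvement on criterion j when an
# MCA uses weighted summation with linear min-max transforms and includes a
# cost criterion. Follows from differentiating u_i with respect to each raw
# score: du/dx_j = w_j / range_j and du/dcost = -w_cost / range_cost.

def implied_dollar_value(w_j: float, range_j: float,
                         w_cost: float, range_cost: float) -> float:
    return (w_j / range_j) / (w_cost / range_cost)
```

With wj = 0.3 over a 10-unit criterion range and wcost = 0.2 over a $1,000 cost range, a one-unit gain on criterion j is implicitly worth $150 of extra cost, making the decision makers' tradeoff explicit in monetary terms.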

Table 1 The Differences Between MCA and BCA


Where nonmarket values prevent application of BCA, MCA has been shown to help decision makers learn and make transparent, auditable choices in what would otherwise be unstructured decisions (Prato, 1999; Hajkowicz et al. 2000; Hayashi, 2000; Robinson, 2000). MCA conforms to formal axioms of multiattribute utility theory (Keeney & Raiffa, 1993); Schultz (2001) argues that applying these axioms would have improved the rigor and internal consistency of the United States Environmental Protection Agency's Index of Watershed Indicators.

Conclusion

As with any evaluation tool, MCA has bounded scope for application and introduces methodological challenges of its own. The common obstacles, and potential sources of error, in MCA applications are choosing the criteria and options; avoiding redundant (duplicate) criteria; weighting criteria; transforming criteria; selecting decision makers; and obtaining reliable performance measures. If sufficient time, effort, and skill are devoted to these tasks, MCA provides a robust and informative evaluation of decision options.

The choice of whether to apply MCA or an alternative economic evaluation framework hinges upon the question of valuation. There are strong arguments, both practical and methodological, that valuation has limited scope. Many intangible nonmarket environmental goods are beyond the realm of monetary quantification. In these cases, the adoption of CEA, CUA, or MCA can provide a more robust and methodologically sound analysis.

The argument is not for MCA to replace BCA or environmental valuation. BCA and some valuation techniques have an established place in the economist’s toolkit and will continue to inform resource-allocation decisions. Rather, the toolkit needs diversification to handle the complexities of evaluation when intangible outcomes are important. Policy makers will be in a better position to achieve sustainability outcomes if MCA is made available alongside more conventional methods.

Acknowledgement

Gratitude is expressed to three anonymous reviewers who provided detailed, thoughtful, and helpful comments on the draft manuscript.


Notes

1 Criteria are defined here as the attributes (or indicators) used to measure performance against the decision makers’ objectives.

2 Options are defined here as the items (alternatives) being chosen, ranked, or scored by the decision maker.

3 See Figueira et al. (2005a) for a more detailed review of MCA algorithms.

4 The Human Development Index (HDI) has been published by the United Nations Development Program since 1990 for each country. Providing an alternative measure of whole-of-country performance to gross domestic product, it is defined by indicators of educational status, life expectancy, and a logarithmically adjusted measure of income.


References

Abdalla, C., Roach, B., & Epp, D. 1992. Valuing environmental quality changes using averting expenditures: an application to groundwater contamination. Land Economics 68(2):163–169.

Abrishamchi, A., Ebrahimian, A., Tajrishi, M., & Marino, M. 2005. Case study: application of multicriteria decision making to urban water supply. Journal of Water Resources Planning and Management 131(4):326–335.

Adamowicz, W. 2004. What’s it worth? An examination of historical trends and future directions in environmental valuation. The Australian Journal of Agricultural and Resource Economics 48(3):419–444.

Bennett, J. 2005. Australasian environmental economics: contributions, conflicts and cop-outs. The Australian Journal of Agricultural and Resource Economics 49(3):243–261.

Brans, J., Vincke, P., & Mareschal, B. 1986. How to select and how to rank projects: The PROMETHEE method. European Journal of Operational Research 24(2):228–238.

Brauers, W. 2004. Optimization Methods for a Stakeholder Society: A Revolution in Economic Thinking by Multi-Objective Optimization. Boston, MA: Kluwer.

Brouwer, R. & van Ek, R. 2004. Integrated ecological, economic and social impact assessment of alternative flood control policies in the Netherlands. Ecological Economics 50(1–2):1–21.

Carson, R., Mitchell, R., Hanemann, M., Kopp, R., Presser, S., & Ruud, P. 2003. Contingent valuation and lost passive use: damages from the Exxon Valdez oil spill. Environmental and Resource Economics 25(3):257–286.

Chen W., Hong, H., Liu, Y., Zhang, L., Hou, X., & Raymond, M. 2004. Recreation demand and economic value: An application of travel cost method for Xiamen Island. China Economic Review 15(4):398–406.

Cullen, R., Fairburn, G., & Hughey, K. 2001. Measuring the productivity of threatened species programs. Ecological Economics 39(1):53–66.

Diamond, P. & Hausman, J. 1994. Contingent valuation: is some number better than no number? Journal of Economic Perspectives 8(4):45–64.

Drummond, M., O’Brien, B., Stoddart, G., & Torrance, G. 1997. Methods for the Economic Evaluation of Healthcare Programmes. Oxford: Oxford Medical Publications.

Dunning, D., Ross, Q., & Merkhofer, M. 2000. Multiattribute utility analysis for addressing Section 316(b) of the Clean Water Act. Environmental Science and Policy 3(S1):7–14.

Eckenrode, R. 1965. Weighting multiple criteria. Management Science 12(3):180–192.

Eder, G., Duckstein, L., & Nachtnebel, H. 1997. Ranking water resource projects and evaluating criteria by multicriterion Q-analysis: an Austrian case study. Journal of Multi-Criteria Decision Analysis 6(5):259–271.

Edwards, W. & Barron, F. 1994. SMARTS and SMARTER: Improved simple methods for multiattribute utility measurement. Organisational Behaviour and Human Decision Processes 60(3):306–325.

Fernandes, L., Ridgley, M., & van't Hof, T. 1999. Multiple criteria analysis integrates economic, ecological and social objectives for coral reef managers. Coral Reefs 18(4):393–402.

Figueira, J., G. Salvatore, & M. Ehrgott (Eds.). 2005a. Multiple Criteria Decision Analysis: State of the Art Surveys. New York: Springer.

Figueira, J., Mousseau, V., & Roy, B. 2005b. ELECTRE methods. In J. Figueira, G. Salvatore, & M. Ehrgott (Eds.), Multiple Criteria Decision Analysis: State of the Art Surveys. pp. 133–162. New York: Springer.

Gershon, M. & Duckstein, L. 1983. Multiobjective approaches to river basin planning. Journal of the Water Resources Planning and Management 109(1):13–28.

Hajkowicz, S., McDonald, G., & Smith, P. 2000. An evaluation of multiple objective decision support weighting techniques in natural resource management. Journal of Environmental Planning and Management 43(4):505–518.

Hajkowicz, S. 2003. Allocating environmental funds amongst regions: Fairness, efficiency and transparency. In E. Beriatos, C. Brebbia, H. Coccossis, & A. Kungolos (Eds.), Sustainable Planning and Development. pp. 169–187. Wessex: WIT Press.

Hajkowicz, S. & Young, M. 2005. Costing yield loss from acidity, sodicity and dryland salinity to Australian agriculture. Land Degradation and Development 16(5):417–433.

Hammond, J., Keeney, R., & Raiffa, H. 1998. Smart Choices: A Practical Guide to Making Better Decisions. Boston, MA: Harvard Business School Press.

Hayashi, K. 2000. Multicriteria analysis for agricultural resource management: A critical survey and future perspectives. European Journal of Operational Research 122(2):486–500.

Heilman, P., Yakowitz, D., & Lane, L. 1997. Targeting farms to improve water quality. Applied Mathematics and Computation 83(2–3):173–194.

Howard, A. 1991. A critical look at multiple criteria decision making techniques with reference to forestry applications. Canadian Journal of Forest Research 21(4):1649–1659.

Hyde, K., Maier, H., & Colby, C. 2004. Reliability-based approach to multicriteria decision analysis for water resources. Journal of Water Resources Planning and Management 130(6):429–438.

Janssen, R. 2001. On the use of multi-criteria analysis in environmental impact assessment in The Netherlands. Journal of Multi-Criteria Decision Analysis 10(2):101–109.

Joubert, A., Leiman, A., de Klerk, H., Katau, S., & Aggenbach, J. 1997. Fynbos vegetation and the supply of water: a comparison of multi-criteria decision analysis and cost benefit analysis. Ecological Economics 22(2):123–140.

Kangas, J., Store, R., Leskinen, P., & Mehtätalo, L. 2000. Improving the quality of landscape ecological forest planning by utilizing advanced decision-support tools. Forest Ecology and Management 132(2–3):157–171.

Keeney, R. & Raiffa, H. 1993. Decisions with Multiple Objectives: Preferences and Value Tradeoffs. New York: Cambridge University Press.

Ludwig, D. 2000. Limitations of economic valuation of ecosystems. Ecosystems 3(1):31–35.

McFadden, D. 1999. Rationality for economists? Journal of Risk and Uncertainty 19(1-3):73–105.

Mingers, J. & Rosenhead, J. 2004. Problem structuring methods in action. European Journal of Operational Research 152(3):530–554.

Ozelkan, E. & Duckstein, L. 1996. Analysing water resources alternatives and handling criteria by multi criterion decision techniques. Journal of Environmental Management 48(1): 69–96.

Pearson, L., Tisdell, C., & Lisle, A. 2002. The impact of Noosa National Park on surrounding property values: An application of the hedonic price method. Economic Analysis and Policy 32(2):155–171.

Pohekar, S. & Ramachandran, M. 2004. Application of multi-criteria decision making to sustainable energy planning: A review. Renewable and Sustainable Energy Reviews 8:365–381.

Prato, T. 1999. Multiple attribute decision analysis for ecosystem management. Ecological Economics 30(2):207–222.

Raju, S., Duckstein, L., & Arondel, C. 2000. Multicriterion analysis for sustainable water resources planning: a case study in Spain. Water Resources Management 14(6):435–456.

Ribaudo, M., Hoag, D., Smith, M., & Heimlich, R. 2001. Environmental indices and the politics of the conservation reserve program. Ecological Indicators 1(1):11–20.

Roberts, R. & Goodwin, P. 2002. Weight approximations in multi-attribute decision models. Journal of Multi-Criteria Decision Analysis 11(6):291–303.

Robinson, J. 2000. Does MODSS offer an alternative to traditional approaches to natural resource management decision making? Australasian Journal of Environmental Management 7(3):170–180.

Romero, C. & Rehman, T. 1987. Natural resource management and the use of multiple criteria decision making techniques: a review. European Review of Agricultural Economics 14(1):61–89.

Roy, B. 1968. Classement et choix en présence de points de vue multiples (la méthode ELECTRE). Revue d'Information et de Recherche Opérationnelle 8:57–75.

Saaty, T. 1987. The analytic hierarchy process: What it is and how it is used. Mathematical Modelling 9(3–5):161–176.

Sagar, A. & Najam, A. 1998. The human development index: A critical review. Ecological Economics 25(3):249–264.

Sagoff, M. 1988. The Economy of the Earth: Philosophy, Law, and the Environment. New York: Cambridge University Press.

Scheubrein, R. & Zionts, S. 2006. A problem structuring front end for a multiple criteria decision support system. Computers & Operations Research 33(1):18–31.

Schultz, M. 2001. A critique of EPA’s index of watershed indicators. Journal of Environmental Management 62(4):429–442.

Spurgeon, J. 1998. The socio-economic costs and benefits of coastal habitat rehabilitation and creation. Marine Pollution Bulletin 37(8–12):373–382.

Steuer, R. & Na, P. 2003. Multiple criteria decision making combined with finance: A categorized bibliographic study. European Journal of Operational Research 150(3):496–515.

Stoneham, G., Chaudhri, V., Ha, A., & Strappazzon, L. 2003. Auctions for contracts: An empirical examination of Victoria's BushTender trial. The Australian Journal of Agricultural and Resource Economics 47(4):477–500.

Strijker D., Sijtsma, F., & Wiersma, D. 2000. Evaluation of nature conservation: An application to the Dutch Ecological Network. Environmental and Resource Economics 16(4):363–378.

Tecle, A. 1992. Selecting a multicriterion decision making technique for watershed resources management. Water Resources Bulletin 28(1):129–140.

Tol, R. 1996. The damage costs of climate change towards a dynamic representation. Ecological Economics 19(1):67–90.

van Bueren, M. & Bennett, J. 2004. Towards the development of a transferable set of value estimates for environmental attributes. The Australian Journal of Agricultural and Resource Economics 48(1):1–32.

von Neumann, J. & Morgenstern, O. 1944. Theory of Games and Economic Behaviour. Princeton, NJ: Princeton University Press.

Weistroffer, H., Smith, C., & Narula, S. 2005. MCDM software. In J. Figueira, G. Salvatore, & M. Ehrgott (Eds.), Multiple Criteria Decision Analysis: State of the Art Surveys. pp. 989–1018. New York: Springer.

Whittington, D. 2002. Improving the performance of contingent valuation studies in developing countries. Environmental and Resource Economics 22(1–2):323–367.

Zanakis, S., Solomon, A., Wishart, N., & Dublish, S. 1998. Multi-attribute decision making: A simulation comparison of selected methods. European Journal of Operational Research 107(3):507–529.

Zeleny, M. 1973. Compromise programming. In J. Cochrane & M. Zeleny (Eds.), Multiple Criteria Decision Making. pp. 262–301. Columbia, SC: University of South Carolina Press.

© 2008 Hajkowicz


 
Published by ProQuest

email: ejournal@csa.com
http://sspp.proquest.com