
Fall 2014 | Volume 10 | Issue 2


Evaluating effectiveness of open assessments on alternative biofuel sources

Vilma Sandström,1 Jouni T. Tuomisto,2 Sami Majaniemi,2 Teemu Rintala,2 & Mikko V. Pohjola2
1Department of Environmental Sciences, University of Helsinki, P.O. Box 65, Helsinki FI-00014 Finland
2National Institute for Health and Welfare, P.O. Box 95, Kuopio FI-70701 Finland

Biofuels have raised controversy regarding their environmental, social, and economic sustainability. The complexity of biofuel decisions and investments by both industry and society requires integration of scientific knowledge, public information, and values from a diversity of sources. Environmental assessments can identify multiple impacts of different options. Open and collaborative knowledge creation can support decisions in two ways: by building trust and credibility and by developing more robust understanding of key issues. Open assessment is a decision-support method that allows widespread participation in a transparent and freely accessible process. In this article, we evaluate two open assessment case studies concerning biodiesel production. The evaluation compiles the participants' views regarding the potential of the assessment process to influence decisions in terms of quality of content, applicability, efficiency, and openness. According to the evaluation, openness can be feasibly implemented and is much appreciated by participants. More experience using broad and active participation is needed for further development of methods and tools. However, the currently common practices of closed and disengaged processes limit decision making. In addition, suitable tools and practices, as well as the inclusion of participants with appropriate skills, are needed to facilitate open collaboration.

KEYWORDS: ecosystem resilience, decision making, green development, socioeconomic aspects, work, national planning

Citation: Sandström, V., Tuomisto, J., Majaniemi, S., Rintala, T., & Pohjola, M. 2014. Evaluating effectiveness of open assessments on alternative biofuel sources. Sustainability: Science, Practice, & Policy 10(2):29-40.

Published online May 16, 2014


Growing energy demand and greenhouse-gas emissions from fossil fuels have increased interest in the production of renewable energy. Biofuels have been the focus of particular attention—especially liquid or gaseous fuels for transport produced from biomass (European Parliament, Council of the European Union, 2009). Accordingly, biofuel production has become a rapidly growing industry as it is considered to aid in reducing greenhouse-gas (GHG) emissions from the transport sector, decrease dependence on fossil fuels, and contribute to the economic growth of developing countries (Ryan et al. 2005; Cassman & Liska, 2007; Mathews, 2008). To help reach GHG emission-reduction goals, the European Union (EU) has set a target to cover at least 10% of the transport sector’s energy demand with renewable energy resources by 2020 (European Parliament, Council of the European Union, 2009).

However, the sustainability of biofuels has been criticized, and whether they ultimately offer carbon savings depends on how they are produced (Fargione et al. 2008; Giampietro & Mayumi, 2009; Tilman et al. 2009; Gomiero et al. 2010). Converting native ecosystems to biofuel production can create so-called “biofuel-carbon debt” and release much more carbon dioxide (CO2) than the annual GHG reductions these biofuels provide by replacing fossil fuels (Fargione et al. 2008). In addition, the use of nitrogen fertilizers in crop production of commonly used biofuels, such as biodiesel from rapeseed and bioethanol from corn, can contribute as much or more to global warming as the combustion of fossil fuels (Crutzen et al. 2008). The production process itself can also require more energy input from fossil fuels than is created (Pimentel & Patzek, 2005). Furthermore, many other ethical and environmental issues must be addressed, such as conflict between biofuel production and global food security, use of limited water resources, tensions between outside land owners and indigenous communities, competition with grazing wild and domesticated animals, and possible threats to biodiversity and soil fertility (Giampietro & Mayumi, 2009; Tilman et al. 2009; Gomiero et al. 2010). Due to the many controversial aspects associated with biofuel production, its sustainability can vary significantly. In short, the adoption of biofuels may not be the relatively straightforward solution to climate change that it first appears to be.

Before making new investments in biofuel production and supply, industrial decision makers have to consider a wide range of scientific and nonscientific information that involves financial, environmental, and social aspects of the production chain. For example, GHG emissions, as well as other impacts and costs of biofuel production, can vary considerably across different raw materials and production sites. In addition, stakeholder and public perspectives at local, regional, and global levels pose significant constraints on biofuel-investment decisions.

Different kinds of assessments, which may apply models as well as decision-analytical methods such as multicriteria analysis, can generate knowledge for such decisions (see Jakeman et al. 1998; Zopounidis & Pardalos, 2010; Pohjola et al. 2012). Various participatory techniques can also accompany these procedures (see e.g., Pohjola & Tuomisto, 2011), but the effectiveness of the support they provide faces many possible limitations (Matthews et al. 2011; Pohjola & Tuomisto, 2011; Pohjola et al. 2012; 2013). Where the implications of decisions are complex and difficult to anticipate, multiple needs, interests, and perspectives must be accounted for (Figure 1).

Figure 1 Complex decisions need to take account of multiple needs, interests, and sources of knowledge. Reproduced from Tijhuis et al. 2012 with permission.


Public participation and stakeholder involvement in assessment and policy making are built on the ideas of democracy and participatory policy (Fiorino, 1990) and enhance acceptance, integrate local knowledge with scientific information, and produce more flexible and transparent decisions (van den Hove, 2000; Reed, 2008). However, Cashmore (2004), among others, claims that environmental assessment is often more about process and procedure than purpose and effects (see also Jay et al. 2007). Only a few approaches even explicitly consider assessment performance in terms of placing the results in their intended contexts (Pohjola et al. 2012; 2013). Correspondingly, although participation is increasingly appreciated as a part of environmental assessment and decision making, its implementation has often concentrated on process and access rather than outcomes (Doelle & Sinclair, 2006).

The open-assessment method was created at the National Institute for Health and Welfare in Finland to provide a means for more purposive and effective decision-making support in open collaboration (Pohjola & Tuomisto, 2011; Pohjola et al. 2012). It aims to support decision making through systematic analysis of different options and by providing a forum for all involved parties to collect and integrate knowledge and perspectives. As its name implies, open assessment differs from most common assessment approaches in terms of its degree of openness.

In principle, all interested parties are allowed to participate and contribute (Pohjola & Tuomisto, 2011). In addition, tight linkage between assessments and the use of their results is a necessity (Pohjola et al. 2012; Pohjola et al. 2013). Ideally, all parts of the process should be transparent and all content subject to open scientific criticism. The method emphasizes substantive content rather than participatory procedures just for the sake of participation. This means that the assessors seek all relevant views rather than a set of views expressed by a balanced representative group. Furthermore, arguments are evaluated based on relevance and how they hold up against criticism rather than how many participants support them. In this thinking, there is no need to discriminate against participants with vested interests as long as there is a large enough pool of participants and information sources. An assessor manages the potentially large number of contributions by defining clear discussion topics based on the purpose of the assessment and requiring that each contributor link her argument to one of the topics in a relevant way.

If the above-mentioned fundamental principles (for a more detailed list of principles, see Pohjola et al. 2011) are not violated, open assessments can address almost any topic and apply many methods for assessing impacts, risks, benefits, and so forth to analyze decision options. The open collaboration, however, needs to be facilitated by adequate tools and practices, as well as by assessment coordinators, who help connect the knowledge and views of different participants into shared descriptions of the issues being assessed. One essential means of avoiding divergence and vandalism1 in open assessment is structured discussion, which sorts out irrelevant, unreasoned, or repeated statements when participant contributions are compiled.2
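The idea of structured discussion, where every contribution must attach to a defined topic or to an existing argument, can be sketched as a simple argument tree. This is a hypothetical illustration in Python, not Opasnet's actual data model; all names and example statements are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Argument:
    """One contribution in a structured discussion. This schema is a
    hypothetical illustration, not Opasnet's actual data model."""
    author: str
    text: str
    stance: str                       # "attack" or "defend"
    replies: list = field(default_factory=list)

@dataclass
class Topic:
    statement: str                    # the claim under discussion
    arguments: list = field(default_factory=list)

    def add(self, argument, parent=None):
        # Every contribution must attach to the topic itself or to an
        # existing argument, which keeps the discussion on-topic.
        (parent.replies if parent is not None else self.arguments).append(argument)

topic = Topic("Jatropha is a feasible large-scale biodiesel feedstock")
a1 = Argument("stakeholder", "Yields are low on marginal land", "attack")
topic.add(a1)
topic.add(Argument("coordinator", "Field trials confirm low yields", "defend"), parent=a1)
```

Because arguments are weighed on relevance and resistance to criticism rather than on vote counts, a coordinator compiling such a tree can prune unreasoned or repeated statements without suppressing any substantive view.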

Open assessments are usually conducted in the Opasnet web-workspace,3 which is specially designed to host them (Pohjola et al. 2011). (Opasnet is discussed further in the next section.) Open assessment and Opasnet have been developed in several research projects involving both national and international collaboration, e.g., EU-funded projects with partners from more than 30 European countries.4 They are currently among the central assessment tools in the Department of Environmental Health of the National Institute for Health and Welfare in Finland (THL). Application examples include an EU-funded research project for developing healthy city-level climate policies in seven cities in Europe and China;5 a Water Guide for online modeling of health effects of microbiological hazards in drinking water in support of water-safety planning in waterworks according to EU legislation;6 and a governmental project for developing the use of environmental and health knowledge in municipal policy making in Finland.7 Currently, the English-language Opasnet contains in total 2,700 pages and more than 30 open assessments,8 while the Finnish-language Opasnet contains 1,500 pages and 50 open assessments. Most open assessments until now have addressed environmental and health issues, but the methods and tools are applicable to all practical knowledge support in open collaboration. Most of the open assessments conducted to date have been coordinated by THL, but they can be initiated, as well as coordinated, by any interested party. Opasnet resides on the open Internet and is available for public use.

In this article, we discuss two open assessment case studies that evaluated the feasibility of alternative biodiesel feedstocks: the Jatropha curcas oil plant (referred to as jatropha) and the oil extracted from waste byproducts from fish farming. These assessments were limited to examining biodiesel production (not bioalcohol or other forms of bioenergy). The studies were requested and financed by the Neste Oil Corporation and performed by THL in the open Opasnet workspace. The information produced was primarily collected and analyzed to support the decision-making processes of the primary user (Neste Oil), but at the same time it was also intended to be applicable in other societal decision-making situations.

This article aims to contribute to the development of improved assessment and policy-making methods, tools, and practices by evaluating the effectiveness of these two open-assessment cases in terms of their quality of content, applicability, efficiency, and openness (Pohjola & Tuomisto, 2011; Pohjola et al. 2013). By effectiveness, we mean the influence of the assessment on outcomes (i.e., changes in values, attitudes, and behavior in society) (Matthews et al. 2011), although in practice what can be evaluated actually reflects the likelihood of an assessment achieving the desired results (Hokkanen & Kojo, 2003). Open assessment has been developed as a means of overcoming the limitations of effectiveness in science-based decision support (cf. Matthews et al. 2011; Pohjola & Tuomisto, 2011; Pohjola et al. 2013). Therefore, evaluation of its application is important both for the further development of open assessment and Opasnet and for science-based decision support in general.

The research questions we sought to answer were as follows:

  • Was the open-assessment method in the Opasnet workspace a feasible means with which to conduct the assessments? Did it provide means for all participants to influence the assessments? Did the assessments influence the knowledge of the primary users and other participants?
  • Was the evaluation approach (described above) feasible for evaluating assessment effectiveness?

We first present the two open assessment case studies, discuss the theory behind our evaluation approach, and describe the implementation of the evaluation. Then we present and discuss the results of the evaluation and ultimately draw conclusions with respect to their implications to assessment theory and practice.

Material and Methods

Open Assessment Case Studies

Environmental health assessments are typically rather limited in openness (Pohjola et al. 2012), so a proof of concept for more open approaches was needed. Scientific rather than practical interests have motivated many previous open assessments conducted by THL. Therefore, the department was interested in testing open assessment in real-life situations with a clear customer need. The basic idea behind open assessments is to collect information and create the knowledge needed for decision making in broad collaboration (Figure 2). The information is organized as an assessment that predicts the impacts of different decision options on some outcomes of interest. Decisions, outcomes, and other issues are modeled as distinct parts that are, in practice, webpages in Opasnet, an open-web workspace designed specifically for conducting these assessments.

Users of Opasnet can collect, synthesize, describe, discuss, and distribute information using a wiki, upload data, and build and run computational models. The main interface between Opasnet and its users is the wiki, which is built on the MediaWiki platform. Many of its basic functions resemble tools available in Wikipedia, but many additional functionalities have also been developed. For example, users have access to modeling tools, and the pages are specifically structured to aid the organization of information. The wiki provides a forum for participants to collaborate on developing well-founded solutions to practical problems (see Figure 2). Opasnet resides on the open Internet, so anyone can read its contents and post comments on its pages. Editing the pages, however, requires logging into the system, though there are no constraints on who can create a user account.

Figure 2 In open assessment members of a society adopt different roles in relation to identifying needs, formulating assessments, making decisions, and taking actions. An assessment page in Opasnet plays a central role in collecting observations (orange arrows; here of an undesired event, a toxic liquid spill) and spreading information (green arrows) to and from members. Knowledge-based actions (blue arrow) are taken to clear up the spill. Reproduced from Pohjola & Tuomisto, 2011, with permission.


Both assessment cases were performed in Opasnet and each part of the assessments (e.g., cultivation of jatropha, amount of oil produced from the harvest, and social impacts) was described, discussed, and estimated on a separate page in the workspace. Both assessments were published in Opasnet in Finnish, but an English summary is also provided.9

The two biofuel assessments in question were requested and financed by Neste Oil Corporation (a Finnish refining and marketing company focusing on advanced, cleaner traffic fuels) and were performed by THL between June 2011 and February 2012. The primary aim of the assessments was to investigate the feasibility of two potentially useful alternative raw materials for biodiesel production, jatropha and waste fish oil. The focus was on the environmental, climate, and social impacts and on the acceptance of production by Finnish stakeholders. The production of biodiesel from waste fish oil was assumed to occur in Southeast Asia using local or regional raw materials. The location of biodiesel production from jatropha and the origin of the raw material were not geographically specified.

Neste Oil had previously assessed palm oil as an ecological and economical alternative to fossil fuels. To the company’s surprise, there was major outrage and a hostile campaign after Neste Oil started fuel production based on palm oil in Singapore in 2010 (Greenpeace, 2011). In the wake of this experience, Neste Oil had an interest in understanding how open, participatory assessments would work in practice and seeing whether such a process could explore social attitudes about potentially sensitive topics. Jatropha and waste fish oil had already been identified from economic and technical perspectives as potentially feasible raw materials for biodiesel production. As a Finland-based company, Neste Oil was primarily interested in the views of Finnish stakeholders. However, participants were invited to discuss the issue from either a global or local point of view. In addition to producing supporting information for Neste Oil and social decision makers, the assessments aimed to increase the awareness and knowledge of both stakeholders and the public on two new potential alternatives for biofuel sources.

The work started in June 2011. The topics and underlying motivation were clarified in discussions between THL and Neste Oil. The assessments started as exploratory, with no exact questions given, as described later. A key interest was to identify potential reasons for not using jatropha or waste fish oil in biofuel production. In the beginning, more attention was paid to jatropha, because the first information sources found were optimistic about jatropha cultivation on poor lands. The scope of the waste fish oil case was not clarified at the start, and the assessment team started out by studying ocean fishing. The ongoing depletion of ocean-fishery stocks (e.g., Myers & Worm, 2003) was seen as a major obstacle to using waste oil from the fish industry in large quantities, at least over long periods. Only later was the scope redirected by Neste Oil to Southeast Asian fish farming.

The assessments were performed by a group of assessment coordinators (four experts in environmental science, open assessment, and/or modeling). In addition, seven university or high school students worked as summer trainees, doing most of the practical work. The coordinators were not experts in jatropha, the fish industry, or fuel distillation, so they focused more on environmental issues and attitudes. Neste Oil contributed mostly to assessment scoping and was the primary user of the results. In addition, a group of stakeholders was invited to contribute to any part of the assessments. Because these were conducted as open assessments, it was possible for anyone else to participate.

Most of the assessment work was done between June and August 2011. The information sources included both scientific and nonscientific journal articles and webpages. The findings were summarized on wiki pages of the Opasnet assessment workspace (numerically when possible) and used as parts of the computational models built for the assessments. Uncertainties were described as probabilities handled by Monte Carlo simulation algorithms. The models, coded in the open-source R language, were built in the same web workspace as all the text content, and can be read and run by anyone directly from the webpages.
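The assessment models themselves were written in R, but the kind of Monte Carlo uncertainty propagation they perform can be illustrated with a minimal Python sketch. Every parameter range below is invented for illustration and is not a value from the assessments.

```python
import random

random.seed(1)  # reproducible draws
N = 10_000      # number of Monte Carlo iterations

def simulate_oil_yield():
    """One random draw of annual oil output (tonnes) for a hypothetical
    jatropha plantation; all parameter ranges here are illustrative only."""
    area_ha = random.uniform(800, 1200)            # plantation area (ha)
    seed_yield = random.triangular(0.5, 5.0, 2.0)  # seed yield (t/ha/yr)
    oil_content = random.uniform(0.25, 0.40)       # oil fraction of seeds
    return area_ha * seed_yield * oil_content

samples = sorted(simulate_oil_yield() for _ in range(N))
mean = sum(samples) / N
p05, p95 = samples[int(0.05 * N)], samples[int(0.95 * N)]
print(f"mean = {mean:.0f} t/yr, 90% interval = [{p05:.0f}, {p95:.0f}] t/yr")
```

Describing each uncertain input as a distribution rather than a point estimate lets an assessment report a credible interval for an outcome instead of a single number, which is what the Opasnet models expose to readers who run them from the webpages.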

In September 2011, THL and Neste Oil presented and discussed the draft results. Jatropha was not found to be a very productive plant unless cultivated on rich land, possibly leading to land-use competition with food crops. In contrast, waste oil from the fish industry turned out to be more promising, especially when Neste Oil wanted to focus on the large fish-farming industry in Southeast Asia. This stimulated a period of new data collection. Key findings are presented on the assessment page in Opasnet.10

Assessment coordinators contacted stakeholder groups in October and November 2011 and invited them to participate. Participation was intended 1) to inform stakeholders about the assessments and their results, 2) to help in collecting further information, and 3) to get a comprehensive picture of different stakeholders' views. Altogether, eighteen stakeholders (including three energy companies, a human rights organization, five environmental organizations, and ten researchers, research centers, or expert organizations—a detailed list is provided on the assessment page) were invited to comment on the existing assessment and introduce new perspectives. Feedback was received from six groups or individuals. The entire process took place on a wiki-based open website.

The stakeholders were asked to comment on the assessments and to argue whether from their point of view it was feasible to invest in jatropha or waste fish oil as feedstock for biodiesel production. All feedback was included in the assessments as formal argumentation and relevant page contents. Conclusions were updated when warranted. A few new aspects were raised (e.g., about the role of EU climate policies), but the main conclusions did not change due to the feedback.

The central conclusion about jatropha was that it might be useful in small-scale fuel-oil production—especially if the plant has other simultaneous uses such as prevention of erosion—but not on a large-scale industrial basis. The main conclusion about waste fish oil was that it is a promising source and seems available in sufficient quantities, at least in Southeast Asia. However, ecologically it has dual impacts and the balance is uncertain: it reduces the waste stream from the fish industry, but it may stimulate the primary process of fish farming and its potentially harmful impacts.

By February 2012, the assessments had reached sufficient maturity. The website and a small seminar were the final products for Neste Oil. Finally, all participants (Neste Oil, stakeholders, summer trainees, and coordinators) were asked to evaluate the final output and the assessment process. The evaluation is described in more detail below.

Assessment Effectiveness

While the assessments were conducted allowing open participation via an Internet-based workspace, we also adopted a novel approach to evaluate their effectiveness. This approach is based on the frameworks “properties of good assessment” and “dimensions of openness” recently developed in the EU-funded INTARESE and BENERIS projects (mentioned in the Introduction), and it considers assessment effectiveness in terms of quality of content, applicability, efficiency, and openness (Tuomisto & Pohjola, 2007; Pohjola & Tuomisto, 2011). The real interest in evaluating the effectiveness of assessments lies in the changes they provoke (Matthews et al. 2011), but because measuring such changes requires follow-up and post-hoc analysis, that kind of evaluation would provide little guidance to the assessments in question. Therefore, the evaluation approach adopted here focuses on identifying the assessments' potential to serve their explicated purposes (cf. Hokkanen & Kojo, 2003), thereby providing guidance to the design and execution phases of assessment. Simultaneously, it also creates a basis for possible post-hoc analyses of effectiveness that address the realization of that potential. It should be noted, however, that in the case-study assessments discussed here, the evaluation of effectiveness was done only after delivering the results, not as an intrinsic part of the assessment.

This study used two separate frameworks to evaluate the effectiveness and performance of the assessment. The first framework, called “dimensions of openness” (Pohjola & Tuomisto, 2011), was designed as a tool for characterizing the approaches and settings of supporting decision making by means of science-based analysis and participation. It considers the possibilities and constraints for assessors and participants to influence the decisions and resultant actions in terms of:

  • Scope of participation: Who is allowed to participate in the process?
  • Access to information: What information about the issue is made available to participants?
  • Timing of openness: When are participants invited or allowed to participate?
  • Scope of contribution: To which aspects of the issue are participants invited or allowed to contribute?
  • Impact of contribution: How much are participant contributions allowed to influence outcomes? In other words, how much weight is given to participant contributions?

In this study, the “dimensions of openness” framework was primarily applied to evaluate the effectiveness of participation by characterizing the possibilities of the invited stakeholders to influence the assessment and, through it, the decisions and actions of the primary user, Neste Oil. It should be noted, however, that the framework itself is not limited to considering only external participation, but encompasses all activities in assessment-policy interaction.

The second framework, called the “properties of good assessment” framework, was designed as a tool for evaluating and managing the performance of models and assessments, particularly in the context of environment and health (Tuomisto & Pohjola, 2007).11 It considers the potential of the processes and outputs of assessments to meet their explicated purposes and to influence the decision processes and consequential actions that they address in terms of their 1) quality of content, 2) applicability, and 3) efficiency (Table 1). The first version of the framework was published in 2007 (Tuomisto & Pohjola, 2007) and it has recently been updated. In this study, we applied a slightly simplified version (Table 1), particularly emphasizing the properties characterizing applicability, as the basis for the participant-evaluation questionnaires.

Table 1 A simplified version of the properties of good assessment applied as a basis for the evaluation questionnaire for the biofuel-assessment participants.


In Table 1, the description column provides a general explanation of the meaning of each category or property. The question column then attempts to explicate what the description intends by providing potential sample questions for evaluating a model or assessment in terms of that category or property. The category quality of content characterizes the information content in the assessment output. The properties under applicability characterize both the output and the process according to their capability of delivering the information content to the intended use. Attributes under efficiency capture how much output is delivered relative to the effort spent.

These two frameworks—dimensions of openness and properties of good assessment—overlap most apparently in relation to the properties under the applicability category. For example, have the needs of different participants been taken into account in scoping and question-setting of the assessment (relevance)? To what extent is it possible for different participants to contribute to assessment (availability)? Is assessment content comprehensible to all participants (usability)? Is it possible to participate to an acceptable degree (acceptability)? The focus in this study is, however, on evaluating the potential of participatory influence in assessment and the potential of the assessment to influence its primary users as well as other participants.

Evaluation of Assessment Effectiveness

After completing the assessments and delivering the results to the primary user, all participants were contacted again and asked to evaluate the assessments' performance by completing evaluation questionnaires based on the properties of good assessment and dimensions of openness frameworks described above.12 To get comprehensive feedback on the assessments, the questionnaires were sent to the representatives of the primary user (n=3), the invited stakeholders (n=19), and the summer trainees (n=7), as well as the assessment coordinators at THL (n=4).

Due to the different roles and perspectives that were adopted, the questionnaires were slightly modified for each participating group (see also Table 2). For example, Question 1, on the inclusion of stakeholder contributions, was targeted only to stakeholder representatives. Questions 2–6, addressing quality of content and applicability of the assessment, were targeted to all groups. Question 7, on assessment efficiency, was directed only to the primary users.13 Respondents were free to choose which questions they answered, and some respondents did not feel capable of or willing to answer every question. Both assessments (the jatropha and waste fish oil cases) were evaluated together, but the respondents were allowed to provide differing numerical values or comments for each assessment if needed. For each question, the respondents were asked to give an integer score from 1 to 5 (one meaning bad and five meaning good). In addition, textual comments could be added to accompany the numerical values.

Table 2 Characteristics of numerical data from the evaluation questionnaires.


We received in total twelve responses (38% response rate). Three of these responses were from primary users, four from stakeholders (one with only textual comments), two from summer trainees, and three from assessment coordinators. Although the number of responses was low, the group of respondents can be considered as representative of those participants who were actively involved in and contributed to the assessments. With such a small data set it was not, however, possible to meaningfully conduct proper statistical analyses. Therefore, this article presents only the characteristics, including number of respondents (n), range from lowest to highest score, as well as arithmetic mean (mean), of the numerical data for each question. In addition, the averages for questions 3–6 (applicability) and questions 1–7 (effectiveness) were calculated using all existing values and omitting missing values.14
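The summary characteristics described above (number of respondents, range, and arithmetic mean per question, with missing values omitted) can be computed as in the following sketch. The scores shown are invented for illustration, not the study's actual questionnaire data.

```python
# One list of 1-5 scores per question; None marks a respondent who
# skipped that question. All scores here are hypothetical.
responses = {
    "Q3": [4, 5, None, 3, 4],
    "Q4": [3, None, None, 4, 4],
}

def summarize(scores):
    """n, lowest-to-highest range, and arithmetic mean, computed from
    the existing values only (missing values are omitted)."""
    values = [s for s in scores if s is not None]
    return {
        "n": len(values),
        "range": (min(values), max(values)),
        "mean": sum(values) / len(values),
    }

characteristics = {q: summarize(s) for q, s in responses.items()}
```

Omitting missing values, rather than imputing them, keeps each question's mean interpretable as the average opinion of those who actually answered it, which matters when respondents skip questions they feel unqualified to judge.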

The characteristics of the numerical data can only be considered as providing some indication of possible variation between the evaluations of different properties and by different respondents. Accordingly, the focus of the evaluation is on the textual comments provided with the numerical responses. In the next section, these comments are scrutinized, along with informal communications during the assessments, the subjective experiences of the assessment coordinators, and the indications given by the numerical results, as an overall evaluation of assessment effectiveness in these two cases.

Results

The characteristics of the numerical evaluation results solicited from all participants provide some indication of possible variation between evaluations of different properties and by different respondents (see Table 2). On a general level, these data can be interpreted as indicating that the participants evaluated the assessments as at least moderately effective across all questions. External participants (users and stakeholders) were slightly more critical in their evaluations than internal participants (summer trainees and assessment coordinators) regarding some questions, especially applicability. The data set is, however, too small to test this observation statistically.15 These findings nonetheless complement the analysis of respondents' textual comments, discussed below in terms of openness, quality of content, applicability, and efficiency.

Openness

In theory, anyone could participate in the biofuel assessments. In practice, active participation was largely limited to the three representatives of the user (Neste Oil) who initiated and followed the assessments, the seven summer trainees who did most of the assessment work, the four coordinators who guided the assessment process, and the eighteen stakeholders who were invited to participate (of whom six actually contributed). The webpage statistics, however, also show some passive participation: beyond the active participants, some people were reading the assessment wiki pages without contributing to the content. (Unfortunately, the webpage statistics are too coarse to identify the readers.)

The webpages of the two assessments were downloaded a total of 9,403 times between June 2011 and June 2012, making the biofuel assessments among the most popular in Opasnet. A download was counted every time someone loaded a page for reading and every time someone saved contributions to a page. In the beginning, the coordinators and summer trainees accounted for most of the downloads, as implied by the 5,673 downloads during the active work period from June to August 2011. There are no detailed statistics on who was downloading the pages, but the temporal pattern implies significant activity from outside the contributor group during the commenting period of November–February (2,422 downloads). Download activity was clearly lower during the preceding updating period (730 downloads) and after the commenting period (578 downloads). In addition, interest has continued beyond June 2012: the assessment pages were downloaded 4,770 times between June 2012 and November 2013, when there were few downloads by the coordinators or summer trainees.

The extent of the downloads suggests clear activity related to the assessments and that the information produced was indeed recognized by a fairly large group of interested people. It merits recognizing, however, that biofuels are at present highly topical for a much larger group of people than we were able to reach. In a sense, the potential for informing people was larger than what was actually realized, perhaps because marketing of the results was not set as a high priority and the related activities were modest.

The assessments were performed on the Opasnet website, and all of the information on the assessment pages was available throughout the process, not only to all active participants but also to anyone interested. All contributions were added to the assessment pages, even when they had been provided via means of communication other than the Opasnet wiki. However, during the course of the assessments, discussions with the users (Neste Oil Corporation) revealed that they did not openly share all of the information on hand regarding the assessed topics or their business interests, which affected both the process and the output of the assessments. It is also difficult to estimate how much relevant knowledge possessed by the invited persons may have been left out of the assessments due to unwillingness to participate, technical difficulties in contributing, or other reasons.

In principle, it was possible to follow and contribute to the progress of the assessments continuously via the Opasnet workspace. The assessments developed rapidly between June and August 2011 as a result of the summer trainees' work, and then at a slower rate until February 2012, mainly through the efforts of the assessment coordinators. We contacted the stakeholders only after some months of work, when the assessments were already in a quite highly developed form, and most stakeholder contributions were given in reaction to our request at this stage. The users' contributions were mostly provided via meetings or discussions with the assessment coordinators regarding the initiation, intermediate checkpoint, and delivery of assessment results. Only a few spontaneous contributions were made by the stakeholders and users outside these somewhat formal contribution periods.

All participants, and in fact anyone, were allowed to comment and contribute to all parts of the assessment, except for the assessment questions discussed between the users and coordinators. All contributions were compiled in Opasnet and integrated into the assessment. Stakeholder comments were listed on corresponding assessment pages in the form of formal argumentation and relevant page contents and conclusions were updated accordingly when needed.16 Because most stakeholder participants had not worked with wikis, many contributions were provided by more conventional means (e.g., by e-mail or through discussion) and added to corresponding Opasnet pages by the assessment coordinators.

Altogether, all types of contributions were allowed to influence the assessment content. However, one stakeholder representative expressed slight dissatisfaction in the evaluation that her point regarding the use of fish waste in biogas production did not result in an update of the assessment scope, even though the participants agreed that the point was generally relevant to bioenergy production. In accordance with open-assessment practices, the discussion was nevertheless included in the assessment.

At this point, it is not easy to estimate what impact the assessments, and the contributions to them, have had on the users' decisions and actions. The assessments were not directly linked to any ongoing decision process. In addition, the evaluation comments indicate that the assessments confirmed rather than changed user understanding of the assessed topics. The influence on practical decisions and actions likely remained, at best, moderate.

Quality of Content

Participants in the open assessments considered the quality of the outputs to be relatively good. They particularly appreciated the inclusion of various aspects of the production chain in the assessments. Notably, the stakeholder comments had little impact on the output, as most of the information provided was already found in the assessments (an indication of their comprehensiveness). However, the assessment questions were formulated too vaguely, which made them difficult to answer accurately. We clarified the questions during the assessment and eventually addressed many issues outside the scope of the final questions. In addition, some participants criticized the reliability of the resultant information, as not all of the most up-to-date information was used.

Applicability

The participants evaluated applicability overall as relatively good: the mean scores for the questions 3–6 average and for all attributes under “applicability” were above three on the 1–5 scale, with some variation across questions and respondents (Table 2). With respect to relevance, the phrasing of the original research questions was too vague to provide users with the information they desired. This was partly because the approach was new to the users, and they did not succeed in expressing their needs very clearly at the beginning of the process. Eventually, after the research questions were specified, both assessments succeeded relatively well in fulfilling users’ needs in terms of describing the feasibility of jatropha and waste fish oil as biofuel feedstocks. In addition, both users and stakeholders appreciated the opportunity to become acquainted with the new open-assessment methodology.

Although the assessments were performed in the Opasnet environment and were freely accessible to anyone interested throughout the process, users did not evaluate them as having reached their target participants (potential contributors or users) very well. This was probably due to the limited active participation. However, as mentioned above, the webpage statistics indicate that the open assessments did reach many (passive) participants outside the active group.

Judgments about usability (one attribute under “applicability”) split participants’ opinions most clearly. While users and stakeholders mostly believed that the assessments had little or no influence on their understanding of the topics, those most involved in making the assessments (the summer trainees and coordinators) considered them very enlightening. This could be because users and active stakeholders were already quite familiar with the assessed topics, while the summer trainees and coordinators had not yet achieved that level of knowledge. In addition, some stakeholders found the Opasnet workspace difficult to use, which limited, or even prevented, their active participation.

The participants considered the acceptability of the assessments, and the way they were performed, to be good. The openness of the process was especially appreciated, as mentioned above. Among the questions on applicability, respondents agreed most closely on acceptability.

Efficiency

Efficiency (as explained in the evaluation of assessment effectiveness section in materials and methods and in Tables 1 and 2) was considered good by the three users who responded. Intriguingly, the work of a relatively small group of nonexperts, namely the seven summer trainees, resulted in high-quality assessments in a reasonably short time. However, as one user representative pointed out, efficiency also depends on how many active participants can be attracted. Due to the similarity of the two assessments, each benefited from the other, because many variables could be developed simultaneously. Even though the assessments are no longer active, their information remains in the Opasnet workspace and can be easily accessed and used by anyone (e.g., in assessments on related topics).

Discussion

The questionnaire results and other feedback indicate that the assessments worked out reasonably well in the eyes of the participants. Indeed, when considering the assessments only as processes of answer-seeking and information production, this is probably a correct interpretation. It also likely gives a fair characterization of the potential of the assessments, and the methods they applied, in delivering that information to users. However, as discussed above in relation to openness, in practice the assessments did not achieve much of their potential in terms of practical decision support.

If we consider the effects as changes in the knowledge of members of society and the decisions and actions influenced by this knowledge, it is hard to see that these assessments have changed, or will change, the world much, even if their contents were of good quality and their production efficient and credible. Based on the evaluation, we can point out two specific aspects responsible for the gap between potential and realization. First, the mere possibility for unlimited continuous participation did not result in broad and active collaboration. Second, the active and open involvement of the users is crucial for obtaining meaningful assessment outputs and creating a true linkage between the assessment and the use of its results. Both of these aspects are relevant for the development of assessment methods and tools, but particularly for the development of collaborative practices for the creation and use of knowledge. It is not enough that the methods and tools allow for openness and deep engagement if the assessment participants choose to act according to the traditional closed and disengaged models of participation, assessment, and decision making (cf. Pohjola et al. 2011; Pohjola & Tuomisto 2011).

The more-or-less positive evaluation in these cases of limited participation is not sufficient proof of the effectiveness of the open-assessment method or the usefulness of the Opasnet workspace. Experiences with broader collaboration and deeper engagement are needed. However, neither does the evaluation indicate that the method and the workspace are not functional. Despite some difficulties, the approach enabled the assessments to fulfill the objectives set for them, and the participants particularly appreciated the openness of the process. Although some participants faced difficulties in using the Opasnet workspace, the inclusion of all contributions worked well because of the technical assistance given by the coordinators. The transparency, and the possibility of criticizing all steps of the assessment process, also make open assessment worth considering. After all, the effectiveness of conventional assessments and models is usually low (Matthews et al. 2011; Pohjola et al. 2011; Pohjola et al. 2013). By means of open assessment and Opasnet, all the assessment information at least remains openly accessible on the Internet for possible subsequent use, increasing potential future effectiveness.

The open-assessment method and Opasnet workspace provided functional means and tools to work openly and transparently and to involve various stakeholders in the assessment process. However, the realization of broad, active, and continuous collaboration among a wide array of participants remained far from its potential in the two biofuel-assessment cases. This shortfall was mainly due to the relatively low active involvement of both users and stakeholders.

Evaluation Approach

The applied evaluation approach, based on the “dimensions of openness” and “properties of good assessment” frameworks, proved usable for evaluating assessment effectiveness. Despite the limitation that it emphasizes potential effectiveness rather than actual changes influenced by assessments, it provides a more comprehensive and meaningful characterization of assessment effectiveness than conventional approaches (cf. Matthews et al. 2011; Pohjola et al. 2011, 2013). It also helps to illuminate strengths, weaknesses, and points needing improvement, and supports conveying the assessment information to its intended use. However, this evaluation exercise was done only after the assessments were completed, so the capability of the approach to aid the design and execution of assessments was not sufficiently tested.

Conclusions

The possibility to participate in and influence decision-making processes regarding environmental and health issues is important for stakeholders and the public in general. In this article, we present and evaluate two environmental open-assessment case studies in which openness was implemented as a principal characteristic of the procedure. Based on the evaluation, we advance four conclusions.

First, the open-assessment method and the Opasnet workspace are feasible means for performing assessments and providing decision support, at least regarding topics with broad public interest and importance as in the two case studies described here. Experiences with broader and more active participation are needed to guide further development.

Second, participants in the two case studies rated their possibility to contribute to the assessments as “good.” In fact, open assessment and the Opasnet workspace provided all assessment participants a wider range of ways to contribute to assessments than they were willing, capable, or ready to use. Indeed, the variety of choices may have even prevented some participants from contributing.

Third, the assessment cases did not greatly influence the users and other participants, except for teaching the summer trainees and coordinators, who were most involved in conducting the assessments, about the assessed topics. This shortcoming was largely due to the weak link between the assessments and their use, in other words weak user involvement. More active engagement is needed to realize the assessments’ potential to effectively support practical decision making.

Finally, the evaluation approach applied in the two assessment cases was shown to be feasible for evaluating assessments and decision support in general. However, the small number of responses to the evaluation questionnaires did not allow for proper application of statistical analysis within the evaluation framework.

On a general level, functional methods and tools allowing for openness and effectiveness in assessment already exist, and stakeholders seem to appreciate an open approach. The currently common practices and attitudes, adapted to closed and disengaged processes, however, still limit open collaboration in practical science-based decision support. In addition, opportunities remain for improving the user-friendliness of Opasnet and other tools for collaborative assessment. Furthermore, sufficient skills and knowledge for coordination, particularly for compiling participant contributions into clear shared descriptions, are important for the success of open assessments.


Acknowledgements

The authors would like to thank the Neste Oil Corporation for initiating and participating in the biofuel assessments discussed here. In addition, we express our appreciation to the summer trainees at THL (Minttu Hämäläinen, Pauli Ordén, Tiia Sorjonen, Jaakko Örmälä, Matleena Tuomisto, Johannes Kröger, and Elina Hirvonen) who did most of the work in performing the assessments. Thanks also to the anonymous reviewers for their constructive comments.

Notes

1 Intentional disturbance of social processes taking place on the Internet is commonly referred to as “vandalism,” whether it comes in the form of disruption or undermining.

2 See

3 See

4 European Union Sixth Framework Programme projects INTARESE (Integrated Assessment of Health Risks of Environmental Stressors in Europe), HEIMTSA (Health and Environment Integrated Methodology and Toolbox for Scenario Assessment), and BENERIS (Benefit-Risk Assessment for Food: An Iterative Value-of-Information Approach).


6 Water Guide: for the model, but documentation exists in Finnish only in

7 Tekaisu Project:

8 See

9 See

10 See

11 See an updated version of the approach at http://en.opasnet.org/w/Properties_of_good_assessment.

12 See for questionnaire.

13 See for questionnaire.

14 The results of some statistical tests can be found at http://en.

15 See Analyses.

16 See for examples.

References

Cashmore, M. 2004. The role of science in environmental impact assessment: process and procedure versus purpose in the development of theory. Environmental Impact Assessment Review 24(4):403–426.

Cassman, K. & Liska, A. 2007. Food and fuel for all: realistic or foolish? Biofuels, Bioproducts, and Biorefining 1(1):18–23.

Crutzen, P., Mosier, A., Smith, K., & Winiwarter, W. 2008. N2O release from agro-biofuel production negates global warming reduction by replacing fossil fuels. Atmospheric Chemistry and Physics Discussions 7:11191–11205.

Doelle, M. & Sinclair, A. 2006. Time for a new approach to public participation in EA: promoting cooperation and consensus for sustainability. Environmental Impact Assessment Review 26(2):185–205.

European Parliament, Council of the European Union. 2009. Directive 2009/28/EC of the European Parliament and of the Council of 23 April 2009. Official Journal of the European Union L140:16–62.

Fargione, J., Hill, J., Tilman, D., Polasky, S., & Hawthorne, P. 2008. Land clearing and the biofuel carbon debt. Science 319(5867):1235–1238.

Fiorino, D. 1990. Citizen participation and environmental risk: a survey of institutional mechanisms. Science, Technology, and Human Values 15(2):226–243.

Giampietro, M. & Mayumi, K. 2009. The Biofuel Delusion: The Fallacy of Large Scale Agro-Biofuels Production. London: Earthscan.

Gomiero, T., Paoletti, M., & Pimentel, D. 2010. Biofuels: ethics and concern for the limits of human appropriation of ecosystem services. Journal of Agriculture and Environmental Ethics 23(5):403–434.

Greenpeace. 2011. Protest Against Neste Oil’s Palm Oil Diesel: The Huge Refinery in Singapore Accelerates Rain Forest Destruction. September 1, 2012.

Hokkanen, P. & Kojo, M. 2003. Ympäristövaikutusten Arviointimenettelyn Vaikutus Päätöksentekoon [How Environmental Impact Assessment Influences Decision-Making]. Helsinki: Edita Prima Oy (in Finnish).

Jakeman, A., A. Voinov, A. Rizzoli, & S. Chen (Eds.). 2008. Environmental Modelling, Software, and Decision Support. Amsterdam: Elsevier.

Jay, S., Jones, C., Slinn, P., & Wood, C. 2007. Environmental impact assessment: retrospect and prospect. Environmental Impact Assessment Review 27(4):287–300.

Mathews, J. 2008. Biofuels, climate change and industrial development: can the tropical South build 2000 biorefineries in the next decade? Biofuels, Bioproducts, and Biorefining 2(2):103–125.

Matthews, K., Rivington, M., Blackstock, K., McCrum, G., Buchan, K., & Miller, D. 2011. Raising the bar? The challenges of evaluating the outcomes of environmental modelling and software. Environmental Modelling & Software 26(3):247–257.

Myers, R. & Worm, B. 2003. Rapid worldwide depletion of predatory fish communities. Nature 423(6937):280–283.

Pimentel, D. & Patzek, T. 2005. Ethanol production using corn, switchgrass, and wood: biodiesel production using soybean and sunflower. Natural Resources Research 14(1):65–76.

Pohjola, M., Pohjola, P., Paavola, S., Bauters, M., & Tuomisto, J. 2011. Pragmatic knowledge services. Journal of Universal Computer Science 17(3):472–497.

Pohjola, M. & Tuomisto, J. 2011. Openness in participation, assessment, and policy making upon issues of environment and environmental health: a review of literature and recent project results. Environmental Health 10:58.

Pohjola, M., Leino, O., Kollanus, V., Tuomisto, J., Gunnlaugsdóttir, H., Holm, F., Kalogeras, N., Luteijn, J., Magnusson, S., Odekerken, G., Tijhuis, M., Ueland, Ø., White, B., & Verhagen, H. 2012. State of the art in benefit–risk analysis: environmental health. Food and Chemical Toxicology 50(1):40–55.

Pohjola, M., Tainio, M., Pohjola P., & Tuomisto, J. 2013. Process, output or outcomes? Perspectives to model and assessment performance. International Journal of Environmental Research and Public Health 10(7):2621–2642.

Reed, M. 2008. Stakeholder participation for environmental management: a literature review. Biological Conservation 141(10):2417–2431.

Ryan, L., Convery, F., & Ferreira, S. 2005. Stimulating the use of biofuels in the European Union: implications for climate change policy. Energy Policy 34(17):3184–3194.

Tijhuis, M., Pohjola, M., Gunnlaugsdóttir, H., Kalogeras, N., Leino, O., Luteijn, J., Magnússon, S., Odekerken, G., Poto, M., Tuomisto, J., Ueland, Ø., White, B., Holm, F., & Verhagen, H. 2012. Looking beyond borders: integrating best practices in benefit-risk analysis into the field of food and nutrition. Food and Chemical Toxicology 50(1):77–93.

Tilman, D., Socolow, R., Foley, J., Hill, J., Larson, E., Lynd, L., Pacala, S., Reilly, J., Searchinger, T., Somerville, C., & Williams, R. 2009. Beneficial biofuels: the food, energy, and environment trilemma. Science 325(5938):270–271.

Tuomisto, J. & M. Pohjola (Eds.). 2007. Open Risk Assessment: A New Way of Providing Information for Decision-Making. Kuopio: KTL–National Public Health Institute.

van den Hove, S. 2000. Participatory approaches to environmental policy-making: the European Commission Climate Policy Process as a case study. Ecological Economics 33(3):457–472.

Zopounidis, C. & P. Pardalos (Eds.). 2010. Handbook of Multicriteria Analysis (Applied Optimization). New York: Springer.

© 2014 Sandström et al. CC-BY Attribution 4.0 License.

Published by ProQuest