Christopher X J. Jensen
Associate Professor, Pratt Institute

Quantitative Sustainability and the practice of Life Cycle Analysis

Posted 01 Jun 2009

Pratt Institute, where my primary duties are to teach students about ecology and evolution, is undergoing a green revolution. In many ways this is not all that remarkable: many campuses are “greening” themselves and at least pitching the idea that they are becoming more sustainable. At Pratt, there’s something slightly different going on: we aren’t just trying to green our campus, we are trying to green our students. Because Pratt is a design school and most of our students are trained to make things (from consumer products to buildings to cityscapes and lots of things in between), we have the responsibility to not just change the way our campus impacts our environment, but also to reduce the impact of our students’ future designs. There’s a tremendous power in that mandate, and also the peril of doing it poorly.

As the only trained ecologist on campus, I get a lot more credit for being knowledgeable about sustainability than I should. Part of this is understandable: the science of “ecology” is commonly mistaken for “environmental science”, in part because the word ecology has been so rampantly co-opted and overused in both popular and academic arenas. The truth is that while I may know more than most on the campus about the way life sustains itself, that doesn’t necessarily translate to knowing the nuts and bolts of sustainable design. Many Pratt faculty (and some students) greatly exceed my knowledge in the more applied aspects of sustainability, although I am actively learning about how to apply ecological principles to human problems.

For those of us in the Math and Science Department with an interest in sustainability, I think it is fair to say that one of our chief concerns is maintaining a quantitative approach to sustainability. We may not know a lot about design or the specifics of particular materials used in design, but what makes all of us as scientists different from many of our colleagues is that we have been trained to make quantitative analyses. Most of Pratt’s traditional brilliance lies in qualitative analysis: making aesthetic judgments comparing the quality of multiple works. Although there are some pitfalls inherent to this kind of analysis, it is by and large the appropriate manner in which to judge creative works; I certainly do not want to see dry, constrained metrics used to pick award-winning designs.

And yet you can see the setup here for a culture conflict between those of us who are trained to judge quantitatively and those of us who are trained to judge qualitatively. In no arena is this conflict more likely to cause problems than in the area of sustainability. Why? Well, in a few words, because for a design to be truly labeled as “more sustainable” than another, it has to be judged quantitatively. Judging the sustainability of a designed object based on qualitative comparison is dubious at best, and often leads to the promotion of a design or practice that actually has a greater environmental impact than other alternatives. We call this “greenwashing”, and it is important to note that greenwashing does not need to involve intentional obfuscation; any claim of lowered impact or “sustainability” that is not based on quantitative analysis could be wrong, and therefore could be a form of greenwashing.

The problem is: how do we make a quantitative analysis of how much impact should be assigned to a design or practice? The answer is not as simple as turning the process over to those of us who are trained to make quantitative analyses. While a scientist might be more comfortable with the kind of thinking that goes into making this estimate, a designer certainly is more likely to understand all of the components and processes that go into making a particular designed object, and both of us need to rely on the highly-specific-yet-broad-ranging science used to estimate impacts. If the people who design things are to be empowered to consider the impacts of their designs, perhaps aided by those of us familiar with these kinds of quantitative comparisons, we need some kind of tool that allows us to integrate design components with scientific estimates of impact.

On Saturday, May 23rd, 2009, Pratt Institute’s Center for Sustainable Design Studies hosted a workshop on life cycle analysis (LCA) presented by Sustainable Minds. I attended the workshop and was inspired by the promise and potential in the LCA approach to designing sustainable products. For those of us who care about making quantitative claims about so-called “sustainable designs”, LCA is the answer.

Sustainable Minds (SM) is an internet startup company that aims to deliver software making LCA accessible to a broader base of designers and design educators. There’s a big gap between most designers and the expert practitioners of LCA; Sustainable Minds endeavors to bridge that gap through software that is user-friendly enough to engage everyday designers in the LCA process. Sustainable Minds’ first product, still in beta testing, is a web-implemented version of the Okala program. Presenting the workshop were Phillip White and Louise St. Pierre, two of the three designers responsible for creating Okala; also present were Sustainable Minds CEO Terry Swack and a couple of other SM staffers.

Okala is a multifaceted program, with both a software and curriculum component, and originally emerged as a project of the Ecodesign Section of the Industrial Designers Society of America. The software tool developed by Okala is a very comprehensive version of LCA wrapped in a user-friendly interface. As far as I can gather, Sustainable Minds has helped create a means of distributing the software tool developed by the Okala project.

I am a skeptic first and foremost, especially when it comes to companies promoting any product, but right from the outset I was very impressed with the integrity of Phillip and Louise’s approach. Okala is a program rooted in the sincere desire to promote sustainability, starting with their differentiation between “ecological design” and “sustainable design”. Ecological design exists as the intersection between two criteria, economic viability and environmental impact:

This image was reproduced from the Okala Design Guide, which can be found here.

In contrast, the Okala designers view sustainable design as also including a social justice component:

This image was reproduced from the Okala Design Guide, which can be found here.

I appreciate this more rigorous approach to sustainability, as I think it is a much more complete definition of what sustainability means. Given that many of our environmental problems are global, we cannot confine our solutions to a subset of the world’s population. Only once the entirety of the world’s population is pulled out of poverty will we be able to establish a truly sustainable global economy; poverty leads to instability, which leads to short-term, environmentally damaging exploitation of resources (among other problems).

The pitch for LCA as a means of assessing product impact was similarly convincing. Phillip and Louise used the following diagram to summarize the advantages of LCA:

This image was reproduced from the Okala Design Guide, which can be found here.

I am not at all familiar with most of the other impact assessment tools compared to LCA in this diagram, so I cannot vouch for the accuracy of this depiction. But assuming it is accurate, you can see that LCA provides the superior option for assessing impacts in two dimensions: 1. It is based on objective (i.e. quantitative) measures; and 2. It is comprehensive, assessing all impacts of a particular design. What’s perhaps most disturbing about this diagram is that there are so many assessment tools that are not quantitative in their approach. Many people still don’t seem to get that you cannot just “describe” the benefits of a sustainable design or practice.

What the Sustainable Minds (SM) implementation of Okala does is simultaneously simple and comprehensive. The designer creates a design “project” and then compares different iterations of that project called “concepts”. This comparison is completely relativistic, and this is a crucial point: there is no such thing as a “green product” because all products have negative impacts, therefore we can only compare the relative impact of different products. The question that the program allows you to answer is simple: which of my different concepts for this project has the lowest impact, and how do different design decisions affect the overall impact?

To imagine how this question is answered, you first need to understand the input that the program asks for. When assessing the impact of a designed product, you must decide what the “boundary” of your analysis will be. Will you include only the manufacture of the product, or are you also interested in its transport and delivery? Will packaging materials be included in your analysis? Will you consider the disposal impacts of all or some of the components of your product? Should the resources consumed during a lifetime of using this product be considered in this analysis? What makes the SM software powerful is that you have the option to draw a very large boundary around your design, allowing for the most inclusive and holistic analysis of impact. Once you have established the “system boundary” for this design, you need to create a “system bill of materials” (SBoM).

The SBoM is a list of everything within the boundaries of your analysis that goes into manufacturing, shipping, using, and disposing of each of your design concepts. At minimum you need to know what materials will go into making your product and what processes will be used to turn those materials into a finished product. You can also specify other sources of impact such as means of transport, resources used during the lifetime of the product, or disposal method. There’s a learning component to this process because it enables quantitative experimentation: you can change one facet of your design, creating two alternative concepts, and then compare the relative impacts of these concepts.
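To make the idea of an SBoM concrete, here is a minimal sketch in Python, not the actual Sustainable Minds interface, of how two concepts for the same project might be written down as lists of materials and processes within the same system boundary; every item name, quantity, and unit is a hypothetical placeholder rather than real product data.

```python
# A minimal sketch (not the Sustainable Minds interface) of a system bill of
# materials for two concepts of the same project. All item names, quantities,
# and units are hypothetical placeholders.

plastic_concept = [
    {"item": "HDPE, injection molded",   "amount_kg": 12.0},
    {"item": "corrugated cardboard box", "amount_kg": 1.5},
    {"item": "truck transport",          "tonne_km": 800.0},
    {"item": "landfill disposal",        "amount_kg": 13.5},
]

wood_concept = [
    {"item": "softwood lumber",          "amount_kg": 20.0},
    {"item": "aluminum sheet (slide)",   "amount_kg": 3.0},
    {"item": "corrugated cardboard box", "amount_kg": 1.5},
    {"item": "truck transport",          "tonne_km": 200.0},
    {"item": "landfill disposal",        "amount_kg": 23.0},
]

# Changing one facet of the design (a material, a transport distance, a
# disposal method) produces a new concept list that can be scored and
# compared against the reference concept.
```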

Once you have entered the entire SBoM, providing all the information necessary to assess your product within your prescribed system boundary, the SM program goes to work crunching the numbers for you. There is instant gratification in this process, as the calculations are made and updated instantaneously, but it’s important to slow down and understand what’s actually being done. First, we must understand that each material or process that goes into making a product can have multiple impacts. A given plastic may offgas multiple chemicals, require a particular amount of energy to produce, and deplete a particular petrochemical feedstock. We want to consider all of these impacts, so for each of the materials or processes that go into our analysis we need to characterize their impacts. These impacts fall into ten categories:

This image was reproduced from the Okala Design Guide, which can be found here.

These categories are a subset of all the Tool for the Reduction and Assessment of Chemical and Other Environmental Impacts (TRACI) impacts defined by the U.S. Environmental Protection Agency. Notably missing are a couple of the criteria listed under TRACI, “land use” and “water use”; I don’t know exactly why the entire list of TRACI impacts is not considered in the Okala calculations.

To characterize these impacts means to put them into a common currency within each impact category. For instance, a particular manufacturing process may produce methane and carbon dioxide, both of which contribute to global warming. But molecule for molecule, methane and carbon dioxide don’t produce the same amount of atmospheric forcing, so part of the characterization process is to put all the different chemicals that impact global warming into a common and equivalent unit. Once you have done that, you can sum all the global warming impacts of all the different materials and processes that go into your design concept, thus characterizing the global warming impact.
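As a worked illustration of characterization (not the actual Okala calculation), the snippet below converts hypothetical methane and carbon dioxide emissions into a single global warming figure using approximate 100-year global warming potentials; the emission amounts are invented for the example.

```python
# Characterization sketch: converting emissions of different greenhouse gases
# into a common unit (kg of CO2-equivalent). The factor for methane (~25) is
# an approximate 100-year global warming potential; the emission quantities
# are invented for illustration.

gwp = {"CO2": 1.0, "CH4": 25.0}           # kg CO2-eq per kg of gas emitted

emissions_kg = {"CO2": 40.0, "CH4": 0.8}  # hypothetical totals for one concept

global_warming_impact = sum(gwp[gas] * kg for gas, kg in emissions_kg.items())

print(f"Global warming impact: {global_warming_impact:.1f} kg CO2-eq")
# 40.0 * 1 + 0.8 * 25 = 60.0 kg CO2-eq
```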

LCA could stop at this step. In comparing product concepts, we would have a set of bar graphs representing the relative intensity of each concept for each impact. The problem with this sort of comparison is that it can be inconclusive: if one concept has high global warming impacts and low ecotoxicity and the alternative concept has low global warming impacts and high ecotoxicity, how do we decide which has the lower overall impact?

The way that the SM program solves this problem is to aggregate all of the impacts in each category into a single “Okala score”. It does so in two steps. First, the scores in each category are normalized. Without getting too deeply into the normalization process (which, to be honest, took me a while to grapple with), this scales impacts measured in totally different units into a single unit of impact (Okala “points”). This is done using baseline estimates of per capita impact within a particular geographical area (the U.S. for our purposes and in the case of this software) to normalize the impact estimate into a certain number of “impact points”. Once all impacts are normalized we can compare them because they now have the same units. However, simply adding them together is not meaningful; there’s no reason to believe that a certain number of points in one category is equivalent to that same number of points in another category. We need to weight the points in each category so that we can meaningfully sum them to give each design concept we have proposed a single “Okala score”. Hopefully that weighting system gives a valuable estimate of each concept’s impact.
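Here is a toy sketch of that normalize-then-weight step. The per capita baselines and category weights below are made-up placeholders standing in for the EPA and NIST values the software actually uses, so the resulting number illustrates the arithmetic, not a real Okala score.

```python
# Normalization and weighting sketch. Characterized impacts (each in its own
# unit) are divided by a hypothetical annual per capita baseline in the same
# unit, then multiplied by a hypothetical category weight and summed into a
# single aggregate score.

impacts = {                    # characterized impacts for one concept
    "global_warming": 60.0,    # kg CO2-eq
    "acidification": 0.9,      # kg SO2-eq
    "human_toxicity": 4.0,     # illustrative toxicity-equivalent unit
}

per_capita_baseline = {        # hypothetical annual per capita impacts
    "global_warming": 24000.0,
    "acidification": 100.0,
    "human_toxicity": 5000.0,
}

weights = {                    # hypothetical relative-importance weights
    "global_warming": 0.30,
    "acidification": 0.05,
    "human_toxicity": 0.12,
}

score_points = sum(
    weights[cat] * impacts[cat] / per_capita_baseline[cat] for cat in impacts
)

print(f"Aggregate score: {score_points * 1000:.3f} millipoints")
```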

Although there can be some scientific data brought to bear on the question of how to weight different impact categories, hopefully it’s clear that this begins to be the point at which science blends into policy. It’s valuable to turn all impact categories into a single impact score, but doing so involves setting a social agenda. For instance, we might compare the potential economic impacts of global warming with the economic costs of fighting and treating cancer caused by industrial toxins in an attempt to assign a relative importance to cancer impacts versus global warming impacts. But even that attempt to “quantify” the relative importance of these impacts is laden with value judgments that are completely constructed within a particular cultural value system: why, after all, should money be the way we assign value to a particular impact?

I find this part of the process fascinating, in particular because it is where science ends and society and culture begin. In good faith we should try to come up with a weighting system that best meets our socially-constructed goals. Since we want to meet these goals on a global scale, there needs to be compromise in defining our goals. Here’s the way that I would think of it: imagine for a second that we live in a world where LCA is used commonly and at some level (consumer choice, international regulation) there’s strong favoring of products with the lowest LCA impact scores. I know that is a long way from where we are, but that is where we want to be. But if you imagine being there, with everyone using the same LCA system with the same weighting system comparing different kinds of impact, it’s not a given that we attain our sustainability goals. Why? Because the weighting system has to be in line with our values, and it will probably take some tinkering to get this mathematical weighting process in line with our actual goals.

For instance, if we overvalue global warming in our LCA we might start drastically reducing that impact, but perhaps in doing so we might undervalue cancer impacts. We’d know that our weighting system was wrong if cancer rates failed to decrease in the manner we valued, at which point we would have to consider the tradeoffs involved in altering the way we weight different impact categories.

Maybe this all seems like a head-spinning detail, but the devil is in the details. At this workshop, some of the Sustainable Minds team members really took certification systems like LEED to task, and to understand why, you have to understand the meaning of the weighting process. Certification processes inherently construct their own weighting systems. In the extreme they do so by only including particular kinds of impact (for instance, carbon footprint), which is equivalent to setting the weight of other impacts to zero. More moderately, they may have their own idiosyncratic weighting system. Whatever the value systems inherent in different certification scoring systems, the fundamental problem is that each certification organization has its own definition of what “environmental impact” means. Maybe this diversity seems innocuous, but it is not. Because there are various certification systems overseen by various organizations with various systems of estimating impact, companies looking to “green” their products can game the system by “shopping for the best score”. If I am producing a product that produces few greenhouse gases but is laced with all sorts of toxic metals, I will look for the certification that is most closely linked to carbon footprint. My product can be stamped with a “green certification”, even though this certification could deceive the consumer into believing that the product is on the whole “green”. Clearly it is greenwashing to call your toxin-producing, low-carbon-footprint product “green”, but that’s what the certification label would say.

Essentially the question of how to weight impacts comes down to defining sustainability, which is the problematic but critical goal of any quantitative analysis of environmental impact. To its credit, the SM program tries to avoid biased weighting through two basic practices: 1. It includes a very comprehensive list of impact categories, which means that most impacts are considered in the weighting process; and 2. It uses characterization and normalization values set by the U.S. E.P.A. and weights set by the National Institute of Standards and Technology (NIST). Although I can’t be confident that these weights are the best match for my definition of sustainability, at least we can hope they are the result of a strong peer-reviewed process down at the E.P.A. and NIST.

During the workshop we got to take the web-based SM software for a spin in small groups of participating students, faculty, and staff. Our assignment was to rethink the way that a particular product, the Little Tikes Wave Climber, was designed based on its overall environmental impact. We first set up the original product as our “reference” concept. To do this we simply needed to enter data about the product into the SBoM. We were provided with this data, which included the amount and type of plastic used to make the product, the packaging material, and transport and disposal methods. We also had to specify the lifetime of the product and a meaningful unit of use: we were told the product would last ten years and that we should consider the “unit of use” to be one year of being played on.

Once you have entered all the data for your reference concept, the SM software displays the Okala score for that concept, showing a quick visualization of how different impact categories contribute to that score. For the Little Tikes Wave Climber, the top impact categories were global warming (presumably because of the material used and the transportation involved), acidification (because the processes involved in making this product produced a lot of sulfates?), and human toxicity (not surprising given the amount and kind of plastic involved). The score we generated, 559.56 millipoints per year of use, was still kind of useless because we didn’t have anything to compare it to. It’s important to remember that the only impact score we can really assign a clear absolute value to would be zero, which probably describes very few practices or designs. Comparing the score we generated to other design concepts is the only way to meaningfully understand its impact. Some may object to this relativistic approach, but it’s all we have. And I would point out that evolution works this way, selecting not the ideal design but the “best of the available designs”. As long as we are making progress in reducing impacts and monitoring our progress relative to chronic and impending environmental problems, we can use this score comparison to great effect.

Our group decided to compare several wood-based designs to the all-plastic wave climber. We arrived at wood after using the software to come to the conclusion that no other plastic on our list of possible materials had a lower impact than the HDPE from which the wave climber is constructed. Our first concept, which involved a lot of wood and a stainless steel slide, actually had a larger impact than the original plastic version. First learning experience: it turns out that the impacts of steel are pretty huge. Wood, on the other hand, has a pretty low impact profile. We figured we couldn’t build a fun and durable slide out of wood alone, so we incorporated an aluminum slide, which brought us way under the impact of the original plastic design. We also experimented with reduced packaging and transport, and learned that although these do reduce the impact, the biggest impact reduction overall resulted from the switch from plastic to wood. All the progressive parents out there rejoice!

The software has some strong visualization tools, including an overall comparison of all the design concepts. Here’s what our group’s summary looked like:

This image is from the Sustainable Minds website, which can be found here.

As you can see, there were some pretty dramatic differences in the impact of different designs, with our locally made and locally transported design having less than a quarter of the impact of the original Little Tikes design. What emerged from this short exercise was the value of using this software as a “design playground”. Although the process of actually assessing what materials and processes will go into your design involves doing some research and footwork, once you have that basic data you can really experiment with alternatives. I can’t call this sort of experimentation anything other than “playful”, analogous to other free-flowing creative processes, albeit with a score at the end.

The SM software seems well-thought-out and pedagogically sound and I am excited to see it implemented in Pratt’s courses in sustainable design. Its chief application will be in the field of industrial design, although I think that it should also be used to teach other kinds of designers the process of LCA. Even though an architect can’t directly use this software to assess the impact of her building design, she can use the conceptual understandings gleaned from consumer product LCA to scale the LCA process up to comparing building designs. Architects also have to learn how to assess the impact of the components they use to build their buildings (windows, fixtures, construction materials), and ask for LCA of these components.

One question that remains to be answered is how the SM implementation of Okala will be distributed to students. While a web-based subscription service makes a lot of sense because it allows users to access the most updated version of the software from anywhere, it does bring up the issue of cost. In order for Pratt and its students to make the SM software a regular part of their work, the subscription cost needs to be reasonable enough to justify maintaining the subscription along the career path from student to working professional. Part of making the cost reasonable has to be giving preferential pricing to students and academics. It will be interesting to see what the business model for this software will be once SM goes out of its beta stage. I hope it is an affordable application, because it would be a tragedy for this valuable tool to be underutilized due to its expense.

A Major Post, Center for Sustainable Design Studies, Greenwashing, Life Cycle Analysis, Pratt Institute, Quantitative Analysis, Sustainability

1 Comment to "Quantitative Sustainability and the practice of Life Cycle Analysis"

Terry Swack, CEO 4th June 2009 at 2:32 pm

Hi Christopher– great write-up, thanks. We enjoyed having you in the workshop. In response to your last paragraph, yes, we have pricing for professionals, educators and students, all of which are extremely reasonable for the very reason you mention! People can email subscriptions@sustainableminds.com to request a trial subscription and pricing info.
