One of the most difficult challenges that my non-major students face is gaining access to the scientific process. Although almost all of my students have been given some version of the “scientific method”, very few of them have any real sense of how to go about assessing the validity of claims that “sound scientific”. Of course there is always the peer-reviewed primary scientific literature to lean upon… if you actually have access to it. Not surprisingly, the library at Pratt Institute does not maintain electronic access subscriptions to a wealth of scientific journals. So, by and large, my students must rely on the internet to do research for my courses.
This can get ugly.
I am not one of those professors who believes that the internet is wholly evil. I do not fail students for even allowing the word “Wikipedia” to drift through their minds. I tend to be skeptical about all sources, so the last thing I want my students to believe is that a source is valuable if you can blow dust off the top of it and junk if it starts with “http”. But some of the sources my students come up with on their research projects can be utterly frightening, and I have discovered that, when asked, they generally do not have useful rules for deciding whether a source should be trusted. At best, students suggest that the sources they use should “cite sources”; at worst, I get easy-to-follow-but-useless rules like “if the webpage ends in .edu or .net, it is better than if it ends in .com”.
The challenge of the day is to make good judgments about internet sources. One Wikipedia page might be the very best source on a particular topic and another might be pure fabrication. There’s very little quality control on most sites — whether they are run by an academic institution, a corporation, or some Joe off the street — so it takes a critical eye to decide what is credible information. In a way, the ready availability of information of suspect quality on the internet has one major benefit: it has reinforced the idea that all information is suspect.
One of the beauties of science is that it offers some fairly simple rules for deciding whether or not ideas make sense. Good ideas should make testable predictions, and when those predictions have been validated through some form of testing, we maintain our confidence in the ideas that spawned them. The scientific process generally follows a clear path from question to hypotheses to predictions to tests, but most students do not see the steps along that path very clearly. They also have a hard time distinguishing between scientific hypotheses, which are a critical component of the scientific method, and actual scientific evidence in support of particular hypotheses.
Nowhere is this confusion more apparent than in evolutionary biology. This is in large part the fault of the “nature media”, which tosses around hypothetical explanations without clearly distinguishing between untested and tested hypotheses. If you search the internet for evolutionary explanations for particular organismal traits, you will find that almost all explanations present hypotheses without bothering to say whether these hypotheses have been tested.
I wanted to create an in-class activity that would provide students with a supervised opportunity to tackle some of these issues. How do I decide if an internet source is credible? What constitutes scientific evidence? How do I tell the difference between the valuable conjecture provided by untested hypotheses and those explanations that have been subjected to testing?
During the second week of my course in Ecology, we cover individual traits and behaviors with a particular emphasis on the process of evolution. Students learn about adaptation and how natural selection produces adaptive traits. Traditionally I have done a case study exercise in class where students make predictions about parental selection in coots, but I have found this activity to be a little too tangential and way too canned.
This year I decided to try a new activity that I am calling “Sourcing the Source of Natural Selection”. The goals of the activity are three-fold:
- Challenge students to explain how particular animal traits were produced by natural selection using the internet as their sole source of information.
- Provide students with some guidance on what internet sources should and should not be trusted.
- Empower students to distinguish between hypotheses and actual scientific evidence.
I reserved the computer lab for a 90-minute portion of my class and set up our Learning Management System to host five forums for five different groups of four to five students each. When students arrived at the computer lab, I gave them one of the following worksheets:
- Group A: Offset ears in the Great Horned Owl
- Group B: Bold and visible coloration in the striped skunk
- Group C: Schooling by Horse-eyed Jacks
- Group D: Stotting by Springbok Gazelles
- Group E: Counter-shading found in the Caribbean reef shark
Each group was given a particular trait to investigate and asked to find sources of information that explain how natural selection might have shaped the focal trait. I tried to pick a mixture of different traits that would yield plenty of hypotheses that may or may not have been thoroughly tested. I gave students about fifteen minutes to find as many sources as they could, posting them to their group’s forum. During this stage of the activity, the forum allowed students in a particular group to see what progress the group had made, but I also encouraged them to speak to each other directly in the computer lab (it is a little creepy how easily my students go into virtual mode and begin interacting exclusively through the computer when their group mates are sitting right next to them).
Once each group had accumulated a variety of sources, I asked the groups to scrutinize each other’s work in a rotational pattern. In scrutinizing the work of another group, I asked students to look at two issues:
- Is the source credible?
- Does the source provide “valuable conjecture” (i.e. a reasonable hypothesis) or “scientific evidence” (i.e. a conclusive test of the predictions of one or more hypotheses)?
Using the reply feature of the forum, students commented directly on the posts of the other group. What is so interesting about this process is that students are generally far better critics than they are creators. They will quite accurately notice weaknesses in the sources of other groups, even when their own sources contain the same weaknesses. Although it is a little risky to ask students to comment on each other’s work (because they can be either too critical or too unwilling to level criticism; I see both), I found that it worked well for this activity.
After spending about ten minutes scrutinizing the work of others, students returned to their groups to consider the comments that they received from other groups. I asked the students to process the experience by answering a few questions.
For the most part, students seemed to gain some understanding through this exercise, although some groups were challenged by the time constraints imposed by the activity. To make sure that the major concepts of the activity were internalized by most students, I went over some examples in the next session of class. All examples were pulled from the forum posts of my students, which gave me a chance to give some feedback without having to respond to every last post.
First, I started with sources that I flagged as “problematic”. One great example was a site with an article entitled “Skunk’s Strategy Not Just Black and White”. In some sense this was a pretty good find, because it actually described research that tested hypotheses about the function of skunk coloration in relation to predators. The problem with this “source” is that it sits within a wholly unreliable aggregator site. This gave me a chance to explain to students that some sites are designed entirely to aggregate content and catch search-engine queries, and that the motivation of such sites is not to provide reliable information but to generate advertising revenue by attracting traffic. Although this site could potentially lead to the real source of the featured press release, it is not a valid source to cite in a paper.
The second “problematic” site that I featured was one entitled “Great Horned Owl”. This site contained a lot of interesting information on owls, including information that might have suggested to students why owls have offset ears. But if one does even cursory research on the authority behind the site, one learns that it is maintained by a wildlife photographer. This is not to cast aspersions on wildlife photographers: this person might be quite well-read on the animals he photographs, but his site does not bother to cite any sources for his claims, which makes them unreliable.
The third site that I labeled as “problematic” had the funny title “Dragoo Institute for the Betterment of Skunks and Skunk Reputations”. This was one of my favorite student finds because it was, on its face, such an unreliable source. What is this “Dragoo Institute”, and on what authority does it make all these claims about skunks? Well, if you bother to do a little exploring by clicking on the “publications” tab of the site, you learn that J.W. Dragoo is a well-published expert on a great diversity of mammals, including skunks. Dr. Dragoo was even nice enough to post a number of PDFs of these publications. I love this example because it shows students that just a little more investigative effort can turn a questionable source into a goldmine of information.
Beyond looking at “problematic” sources, I also showed students some sources that I labeled as “okay”, including a weakly-cited Maryland Cooperative Extension Fact Sheet on Owls, a Fact Sheet on Fish Schooling from a reputable source that didn’t bother to provide any citations, and a well-cited and thorough Wikipedia page on shoaling and schooling. I then showed off the really great stuff that students found, including the original publication on skunks featured in the aggregator site press release discussed above, and a very thorough review article on counter-shading.
So did this activity work? I think that it helped students to think more critically about the sources they use, and in particular to understand the difference between a hypothesis and a test of that hypothesis. Just a brief search of the internet makes it pretty clear that there are a lot of “just so stories” out there about the evolutionary origin of animal traits, and that it is far more rare to find a thorough test of the predictions of such hypotheses. The ultimate test for how well this exercise worked will come when my students hand in their midterm papers and I scroll down to the bibliography to see what they decided was a valid source of scientific evidence.