“A monk weighing 170 lbs begins a fast to protest a war. His weight after t days is given by W = 170e^(-0.008t). When the war ends 20 days later, how much does the monk weigh? At what rate is the monk losing weight after 20 days (before any food is consumed)?” <– That’s an actual problem from our Calculus book, which I find very amusing. Though it doesn’t really fit Dan Meyer’s definition of pseudocontext, I just get a kick out of my mental picture of a monk sitting in a dark room, taking a break from protesting the war to scribble away on a notepad, trying to make predictions with an exponential model… There are so many word problems that force “real-life” situations into the convenient framework of whatever math topic is being presented in that section. I guess these are supposed to demonstrate to students how useful and relevant math is, but I think we all know that students just find them to be tricky and unyielding disguises for math that they generally already know how to do.
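(For reference, the monk problem just asks you to evaluate the model and its derivative at t = 20. A quick sketch in Python, using nothing beyond the formula given in the problem:)

```python
import math

def weight(t):
    """Monk's weight in lbs after t days of fasting: W = 170 e^(-0.008 t)."""
    return 170 * math.exp(-0.008 * t)

def weight_rate(t):
    """dW/dt = -0.008 * 170 * e^(-0.008 t), in lbs per day."""
    return -0.008 * 170 * math.exp(-0.008 * t)

print(round(weight(20), 1))       # about 144.9 lbs when the war ends
print(round(weight_rate(20), 2))  # losing about 1.16 lbs/day at that point
```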
There was one word problem that fit an exponential decay model to someone forgetting information, so I decided that instead of just doing the word problem, we would test the model by recreating the experiment. The day after we had a midterm exam, instead of handing back their corrected test, I put them in groups and gave them the following list of 50 three-letter syllables that I generated with a random number generator:
SOQ XAC DOB NEB BAR JYS ZYW GEK TUD ZEM GAK KUR BEN XOQ DUX BYR NIT WAP ZIJ HOG HIQ DUW CUD SAM BIM LIH JEV VEZ QEM GUL ZIQ SEQ JYV GUT XYM XAX BIQ DOJ ROM ZIV QEW JEH CYS ZEM FOM KEG DUC GYK WYQ POD
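(I made the list with a random number generator; here is a minimal sketch of one way to do it. The consonant-vowel-consonant pattern matches the list above, but the exact alphabet split and the `random_syllables` helper are my own assumptions, not the original generator:)

```python
import random

# Y appears as both a vowel and a consonant in the list above,
# so it goes in both pools here.
CONSONANTS = "BCDFGHJKLMNPQRSTVWXYZ"
VOWELS = "AEIOUY"

def random_syllables(n, seed=None):
    """Generate n random consonant-vowel-consonant nonsense syllables."""
    rng = random.Random(seed)
    return [rng.choice(CONSONANTS) + rng.choice(VOWELS) + rng.choice(CONSONANTS)
            for _ in range(n)]

print(" ".join(random_syllables(50, seed=42)))
```

Note that this doesn't filter out real words or repeats (the list above actually contains ZEM twice), so you may want to de-duplicate if that matters.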
I gave them 15 minutes to memorize as many as they could and then tested them by having them write down all that they remembered. Then, I handed out the midterms and we started going over them. About 5 minutes later, I had them write down as many of the syllables as they could again. Then, we went over a few problems on the midterm… then another memory test… then more midterm… then another memory test. They had absolutely no idea why we were doing this, so each time they groaned and complained. And they groaned even more when I opened class the next day with another trial. And then again two days after that… And then one last time a week and a half later. All without studying the list after the original 15 minutes.
Finally, I revealed the purpose of the whole experiment. We collected data and used GeoGebra to fit various models to their data. There were four different mathematical models to choose from that I found from various psychological studies (which I had loaded into a GeoGebra file with sliders so that they could move the various models around to fit their data). Each student picked the one that they thought fit their data best (a function to calculate how many words they would remember over time), took the derivative of that to calculate their “forgetting function” (a function that tells them how fast they are forgetting words at any given time), and then used both to calculate how many words they will remember in a few weeks and how fast they will be forgetting them at that point.
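(As a sketch of that “model → derivative → prediction” step, here is a minimal version using a simple exponential retention model. The model choice and the parameter values below are placeholders; each student’s actual model and constants came from their own data in GeoGebra:)

```python
import math

# Hypothetical fitted constants for one student:
# a = words remembered at t = 0, b = decay rate per hour.
a, b = 18.0, 0.005

def remembered(t):
    """Retention model: words remembered t hours after memorizing."""
    return a * math.exp(-b * t)

def forgetting_rate(t):
    """The 'forgetting function': derivative of the model, words per hour."""
    return -a * b * math.exp(-b * t)

two_weeks = 14 * 24  # hours
print(remembered(two_weeks))       # predicted words remembered in two weeks
print(forgetting_rate(two_weeks))  # how fast they're forgetting at that point
```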
We graphed all of their functions on the same axes (y-axis = number of words remembered, x-axis = time in hours) to analyze which model was best and how their memories compared to their classmates’. The results are below. The different colors correspond to the model that each student chose.
Now, the clean final result of that graph hides how messy the model fitting part was. Though some students’ data fit well, some didn’t at all, which was actually really nice. They really struggled trying to fit the models and hopefully realized that a lot of the models we deal with in cooked textbook problems aren’t as powerful as they purport to be. If I could do it again, I would have them use more mathematically sound ways of fitting the models than just eyeballing it (I hadn’t really considered this and realize now that, though it would be an investment in time, it would make the whole thing much better).
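(One “more mathematically sound” option that students could follow by hand: an ordinary least-squares fit of a linearized exponential model, ln y = ln a − bt, which is just a line of best fit through the logged data. The trial data below is made up for illustration, and other model families would need different linearizations:)

```python
import math

def fit_exponential(ts, ys):
    """Least-squares fit of y = a * e^(-b t) via the linearization
    ln y = ln a - b t. Requires all y values to be positive."""
    logs = [math.log(y) for y in ys]
    n = len(ts)
    mean_t = sum(ts) / n
    mean_l = sum(logs) / n
    slope = (sum((t - mean_t) * (l - mean_l) for t, l in zip(ts, logs))
             / sum((t - mean_t) ** 2 for t in ts))
    a = math.exp(mean_l - slope * mean_t)
    b = -slope
    return a, b

# Hypothetical trials: (hours since memorizing, words recalled).
ts = [0, 0.2, 1, 24, 72, 264]
ys = [21, 22, 19, 12, 9, 6]
a, b = fit_exponential(ts, ys)
print(f"fitted model: y = {a:.1f} * e^(-{b:.4f} t)")
```

One caveat worth discussing with students: this minimizes error in log space, which weights the small late-trial counts more heavily than a direct least-squares fit would.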
But besides doing some authentic math that was individually tailored to each student, my favorite part of the experiment was the follow-up meta-cognitive discussion. We ended up having a really great conversation on how best to memorize these random things, which then led to a great discussion about how to learn and study best (especially how you should go about studying math). We talked about how some people put the words in context by using a story, some people made patterns by grouping similar items together, and the ones that didn’t do very well talked about how they just tried to memorize these random, unconnected things by rote. Many also noticed that throughout the closely connected trials on the first day, their number memorized actually went up, so we talked about how assessment can actually help you learn something too (in addition, of course, to regular practice).
Make it Better.
I have one simple question this time: the thing that I really didn’t like about this experiment was that it was entirely teacher centered. They were in the dark about what was going on (for experimental purposes) until the day that we collected data, fit models and did some quick calculations. How can I make this more student-centered and add elements of inquiry? I have a few ideas, but I wanted to see what other people thought.
- Word document with list of random words
- Excel spreadsheet for collecting data
- GeoGebra file with various forgetting models, ready to drop data in