Sometimes I wonder about my sanity. Our school gets a 2 week spring break. I decided to teach my calculus classes on the last day before break. And I meet one of them during the last period. So yeah, sanity?
As you know, I’ve been working on Riemann Sums. After calculating them by hand [worksheet here], I had my kids enter this program in their graphing calculators.
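(If you don't have a TI handy: here's a minimal Python sketch of what I understand the program to be doing. The function name riemann and the parameter r are mine, not the program's.)

```python
# A rough Python stand-in for the TI program (my reconstruction, not the
# actual TI-BASIC): the Riemann sum of f on [a, b] with n rectangles.
# r = 0 samples left endpoints, r = 1 right endpoints, r = 0.5 midpoints.
def riemann(f, a, b, n, r=0):
    dx = (b - a) / n
    return sum(f(a + (i + r) * dx) for i in range(n)) * dx
```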
We of course talked about why the program actually gives you the Riemann Sum. I'm going to expect them to be able to answer a question on the assessment about it. All that calculator stuff was on Thursday.
They were coming to school on Friday with this program entered on their calculator.
I got home on Thursday at 9ish pm and, when weighing the options of “show Stand and Deliver” or “create a lesson plan,” I just couldn’t let go of the fact that if I waited until after spring break to capitalize on this program, the momentum would be lost. So lesson plan it was. And I whipped up the lesson plan [.doc here] below in about 90 or 100 minutes.
I told my kids that the fundamental question we are tackling is: what is the relationship between the number of rectangles in our Riemann Sum and the error relative to the true value? And then we talked about what they already know … which is that as the number of rectangles increases, the error decreases. So I modified our question: what is the MATHEMATICAL relationship between the number of rectangles in our Riemann Sum and the error relative to the true value?
We’re studying a semi-circle of radius 2.
And so they use the program and come up with a bunch of data.
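(In Python terms, generating that data might look like this sketch, reusing the riemann helper from above; true_area = 2π is the exact area of the semi-circle, and the names are again mine.)

```python
import math

def f(x):
    return math.sqrt(max(4 - x * x, 0.0))  # semi-circle of radius 2; guard float round-off

true_area = 2 * math.pi  # exact area: (1/2) * pi * 2^2

# (N, error) pairs for low N
data = [(n, true_area - riemann(f, -2, 2, n)) for n in range(1, 21)]
```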
And since they are interested in N and the error, they enter those values into lists in their calculator. Looking at a graph for low N values versus the error, they see this on their calculators:
So yes, they see that as N increases, the error goes down to zero. Mr. Shah’s eyes are wide open with awe.
That’s as far as we got in class on Friday. When we return from break we’re going to find a curve to fit this data. They’re going to try to fit this data to linear, exponential, logarithmic, and power functions using their calculators. It turns out that you can find a great power function to match this data. Seriously amazing. (The others don’t turn out well at all.)
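(If you want to replicate the calculator's power regression outside the TI, here's a hedged sketch using the data list from above. PwrReg is, as far as I know, a linear regression on (ln N, ln error), which numpy can do.)

```python
import numpy as np

ns = np.array([n for n, _ in data], dtype=float)
errs = np.array([e for _, e in data], dtype=float)

# Power fit: error ≈ a * N^p, done as a linear fit in log-log space,
# which is (I believe) what the TI's PwrReg does under the hood.
p, log_a = np.polyfit(np.log(ns), np.log(errs), 1)
a = np.exp(log_a)
print(f"error ≈ {a:.4f} * N^({p:.4f})")
```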
That’s only for low N (from 0 to 20). Will our power function hit the N=500 point?
YEAAAAAH! This is where I started to get excited when I was creating my lesson, because I figured the error function would be way off that far out in the data. But heck, it’s pretty awesome that it hits at N=500.
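(In the sketch, that extrapolation check is two lines, reusing a and p from the fit above.)

```python
n = 500
print(a * n ** p)                        # what the power fit predicts the error to be
print(true_area - riemann(f, -2, 2, n))  # the actual error at N = 500
```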
I was curious HOW much our error function helps us with our estimation. So the last part of the activity is having students use only 75 Riemann Sums to estimate the area of the semi-circle. The error is shown below.
Not bad. But we have found a function that models how much error using 75 rectangles will give us. So adding in that correction factor gives us a NEW estimate of the true area. And how far is this new estimate from the true value?
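(Sketched in the same Python terms as before; how much improvement you see depends on your fit.)

```python
n = 75
raw = riemann(f, -2, 2, n)     # plain Riemann sum with 75 rectangles
corrected = raw + a * n ** p   # add back the error the power fit predicts
print(abs(true_area - raw), abs(true_area - corrected))
```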
Um, using our error function as a correction factor gives us an answer that is five orders of magnitude better than the raw N=75 estimate.
I was really shocked and pleased by this! That’s where the lesson ends, partly because I was too tired to make more, and partly because I want to move on in the course. I have a number of questions still lingering in my head that I will be thinking about over break, including: why does the error take the form of a power function for this function (the semi-circle)?
Also, I still have a problem with the circularity of it all. I needed (a priori) the true area of the semi-circle to calculate the error for each N. Then we use these errors to find a curve to match the error associated with each N. This curve gives us a correction factor which gives us a better approximation to the true area. But if the point of all of this was to find the true area, we actually had to know it at the start to come up with the errors!
Some part of me wants to say that there is a good response to that. I suspect there is, and it deals with computing time. Like “say you want a good approximation. Start by assuming your true area is the Riemann Sum calculated for N=1000. Then use that to come up with the error curve. This error curve will then help you come up with a better approximation for the true area.” Or something like that. I don’t know, really. I’m just talking about nothing at the moment.
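(Here’s a quick sketch of that bootstrap idea, with the caveat that I haven’t thought hard about when it’s legitimate: treat the N=1000 sum as provisional truth, build the error curve against it, and use that to correct a cheap N=75 estimate. It reuses the riemann and f helpers and numpy from the sketches above.)

```python
provisional = riemann(f, -2, 2, 1000)  # stand-in for the unknown true area

ns = np.arange(1, 21, dtype=float)
errs = np.array([provisional - riemann(f, -2, 2, int(n)) for n in ns])

# same log-log power fit as before, but against the provisional area
p2, log_a2 = np.polyfit(np.log(ns), np.log(errs), 1)
corrected = riemann(f, -2, 2, 75) + np.exp(log_a2) * 75 ** p2  # no a priori area used
print(corrected)
```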
Uh…Sam….remember all of those “I stink at teaching math compared to all of you guys” thoughts you blogged about a while back? Re-read THIS post, think about the time and thought you put into this activity (even better yet, writing about it!) and erase those thoughts, man. I’ve never taught calculus, but if I had/did…I’m thinking I would be ripping out examples from the textbook on the board and hoping the kiddos were taking good notes. :) Keep it up, man.
That’s a really interesting lesson plan, and quite prescient as well, since applied mathematicians are all about describing error from numerical processes in really precise ways and then seeking ways to minimize error while also minimizing computational expense.
There are existing formulas for the error in the Trapezoid Rule and Simpson’s Rule that are usually taught alongside those rules in the calculus books. The error for the Trapezoid Rule is bounded by

$$|E_T| \le \frac{M(b-a)^3}{12n^2},$$

i.e., it’s a power function in n, where M is the maximum value of |f''(x)| on [a,b]. Simpson’s has the fifth power on (b-a), with n^4 in the denominator, and uses the fourth derivative. So perhaps there’s some similar error term out there for a basic Riemann sum, or perhaps that’s the sequel to your activity here!
And yes, you do need a shave. And a haircut. :)
I bet you could use the fact that it should be a power law to figure out how much to move your curve up and down. That is, if you didn’t know the true area, you would have a bunch of data of the form a * x^p + b, so you can wiggle the value of b around until the power law fits, and once you do that, then the value of b gives you the real area.
ooh, yeah! just to be clear: you’re saying that I should get this from graphing N versus the estimated area, not N versus the error… that way b would be the true area…
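(A sketch of that, assuming scipy is available and reusing the riemann and f helpers from the earlier sketches: fit area_estimate(N) ≈ a·N^p + b, and b should come out near the true area without us ever supplying it.)

```python
import numpy as np
from scipy.optimize import curve_fit

def model(n, a, p, b):
    return a * n ** p + b  # b plays the role of the unknown true area

ns = np.arange(1, 21, dtype=float)
areas = np.array([riemann(f, -2, 2, int(n)) for n in ns])

# a comes out negative here (the sums underestimate); the initial guess
# p0 may need nudging for the fit to converge
(a_fit, p_fit, b_fit), _ = curve_fit(model, ns, areas, p0=(-1.0, -1.0, areas[-1]))
print(b_fit)  # should land close to 2*pi
```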
Ok, officially inspired to learn to program my TI. (At some point.)
Being a bit rusty at both the calculus and the TI-coding, I’m almost following to the end but not 100%. It looks to me like your TI program is taking the Reimann sum of either the rectangles below the fn, or the rectangles above, but not the average of both. Have you tried making it average the two? Would that give similar results to adding the error function you’ve generated? Or is this error fn behaving much more awesomely than that?
(I love how Firefox’s spell check flags “Reimann”, but is okay with “awesomely”.)
In the program, letting R=0 gives LEFT handed rectangles and R=1 gives RIGHT handed rectangles.
I think that taking the average of both would definitely give a better estimate (that’s called the “trapezoidal rule”). I suspect the “error curve” (N vs. the error) for that would still be a power function.
There are other estimations that one can do: the midpoint rule (my program will give you that, setting R=0.5, I think), Simpson’s Rule, and others.
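(A quick sketch comparing those rules with the same riemann helper. One wrinkle: on the symmetric interval [-2, 2] the left and right sums coincide, so below I use the quarter-circle on [0, 2], whose true area is π, to make the four rules actually differ.)

```python
import math

n = 75
left = riemann(f, 0, 2, n, r=0)
right = riemann(f, 0, 2, n, r=1)
mid = riemann(f, 0, 2, n, r=0.5)
trap = (left + right) / 2  # the trapezoidal rule: average of left and right
for name, est in [("left", left), ("right", right), ("midpoint", mid), ("trapezoid", trap)]:
    print(name, abs(math.pi - est))
```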
I’m loving the TI screenshot + extreme closeup.
I found this blog pursuing a similar line of reasoning in Matlab. Please take a look at the page I set as my website and let me know what you think. The code is sloppy and I have since cleaned it up, but the graphs make their point.