Good Math Problems

Disp “Riemann Sums” — Programming the TI-83/84

I just finished teaching Riemann Sums, using the patented Shah Technique. I’ve always had my kids enter a program in their calculator which automatically does Left Handed and Right Handed Riemann Sums (actually, it can also do midpoint!). And last year we used this program to estimate how the number of rectangles was related to the error relative to the true area. (That came out of me just playing around.)

The program we enter is here:

(If you want to use this, this is what you need to know. If you want the Riemann Sum of 20 left handed rectangles of y=\sqrt{x} from [2, 14], you enter A=2, B=14, N=20, and R=0. If you want right handed rectangles, you enter R=1.)
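The TI-BASIC listing isn’t reproduced here, but here’s a Python sketch of what a program with those inputs plausibly computes. The variable names mirror the calculator’s (A, B, N, R, and a running sum S); this is my reading of how the R variable plausibly works, and the actual program may differ in its details.

```python
def riemann(f, a, b, n, r):
    """Riemann sum of f on [a, b] with n rectangles.

    r = 0 gives left-handed rectangles, r = 1 right-handed,
    and r = 0.5 midpoint -- an assumed reading of the program's
    R variable; the actual TI-BASIC may differ.
    """
    h = (b - a) / n              # width of each rectangle
    s = 0.0                      # the running sum (the program's S)
    for i in range(n):
        s += f(a + (i + r) * h)  # height sampled within rectangle i
    return s * h

# the example from the post: 20 left-handed rectangles of y = sqrt(x) on [2, 14]
left = riemann(lambda x: x ** 0.5, 2, 14, 20, 0)
right = riemann(lambda x: x ** 0.5, 2, 14, 20, 1)
```

Since \sqrt{x} is increasing, the left-handed sum undershoots the true area and the right-handed sum overshoots it.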

This year I decided to not go into the whole error thing like I did last year. This year I wanted students to really and more fully understand how the program worked. I always explained it, but I never really was convinced that they got it. Me up there lecturing how the program worked wasn’t really effective. So I whipped up this worksheet.

I tried to do less talking and have them do more thinking (in pairs). I felt like there were a number of students who had “OMG!” and “this is crazy” moments. Some were awed that the program worked and gave them the answers we had been calculating by hand. Some had an amazing moment when they figured out what the variable S stood for — and how it actually calculated the Riemann Sum. And my favorite was when a couple students figured out how the R variable worked — and why R=0 gave left handed rectangles, and R=1 gave right handed rectangles.

I really enjoyed this. I think the worksheet could be tweaked to be clearer, but it’s something I see myself doing again. Well, I guess I will be doing it again tomorrow with the other calculus class. But I mean: next year.


Part II of a self-inflicted challenge: The Line of Best Fit

Over a month ago, I challenged myself to explain where the line of best fit comes from — conceptually. I ended Part I with a question:

Our key question is now:

How are we going to be able to choose one line, out of all the possible lines I could draw, that seems like it fits the data well? (One line to rule them all…)

Another way to think of this question: is there a way to measure the “closeness” of the data to the line, so we can decide if Line A or Line B is a better fit for the data? And more importantly, is there an even better line (besides Line A or Line B) that fits the data?

And now, back to our show.

So right now we’re concerned with a measure of closeness. Can we come up with a measurement, a number, which represents how close the data is to a line? And the easy answer is: yes.

The difficulty is that we can come up with a lot of different measurements.

Measurement 1: Shortest Distance

We could measure the shortest distance from each point to the line and add all those distances up.

If I add the distance of all the dashed lines together, I get \approx 2.23+0.74+0.37+2.41+1.67=7.42.

Now let’s try a different line (but with the same points).

If I add the distance of all the dashed lines together, I get \approx 3.54+0.71+1.41+4.95+6.36=16.97.

It’s obvious that the smaller the total sum of those distances is, the “better” the line fits our data. I mean, if we had a bunch of data that fit perfectly on a line, then the sum of all those distances would be 0. And clearly with our two examples, the second line is a HORRIBLE line of best fit, while the first one seems fairly okay (but not great).

So we could use the sum of the perpendicular segments as our measurement. To find the line of best fit, we would say that we have to try out ALL possible lines (there are like, what, infinity of them? hey, you have study hall…) and find the one with the lowest sum. [1]
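As a sketch of that bookkeeping (the points and lines in the figures aren’t given numerically, so this is a generic version built on the standard point-to-line distance formula):

```python
import math

def perp_distance_sum(points, m, b):
    """Sum of the shortest (perpendicular) distances from each point
    to the line y = m*x + b.

    Uses the standard formula |m*x - y + b| / sqrt(m^2 + 1); the specific
    points and lines from the figures aren't reproduced here.
    """
    return sum(abs(m * x - y + b) / math.sqrt(m * m + 1) for (x, y) in points)
```

To find the line of best fit under this measurement, you’d feed every candidate (m, b) through this and keep the one with the smallest sum.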

But, DUM DUM DUM… there are OTHER measurements you could make.

Measurement 2: Horizontal Distance

We could measure the horizontal distance from each point to the line…

If I add the distance of all the solid lines together, I get 6+2+1+6.5+4.5=20.

And for a different scenario:

If I add the distance of all the solid lines together, I get 5+2+1+7+9=24.

So if we define “closeness” to be horizontal distance (instead of the closest distance) between a point and a line, then we have a different measurement.

And yet another…

Measurement 3: Vertical Distance

We could measure the vertical distance from each point to the line…

If I add the distance of all the solid lines together, I get 2.4+0.8+0.4+2.6+1.8=8.

And for a different scenario:

If I add the distance of all the solid lines together, I get 5+1+2+7+9=24.

So if we define “closeness” to be vertical distance (instead of the closest distance or the horizontal distance) between a point and a line, then we have a different measurement.
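These last two measurements are much easier to compute than the perpendicular one. A sketch (again with generic points, since the figures’ data isn’t given numerically):

```python
def vertical_distance_sum(points, m, b):
    # |y - (m*x + b)|: the gap between each measured y and the line's prediction
    return sum(abs(y - (m * x + b)) for (x, y) in points)

def horizontal_distance_sum(points, m, b):
    # |x - (y - b)/m|: solve y = m*x + b for x; needs a non-horizontal line (m != 0)
    return sum(abs(x - (y - b) / m) for (x, y) in points)
```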

And, in fact, we will see soon (probably in Part III) that there are actually two more measurements we can use.

So which measurement is the best?

You might say: soooo, sir, we have a ton of different measurements. Which one is the right one? The short answer: all of them. Why not? I mean, we wanted to have a measure which tells us how “good” or “bad” a line is when fitting the data, and we have done just that!

It is unsatisfying, but this is how mathematics is. We now have 3 different answers (and there can be more). Each measurement has benefits and drawbacks.

  • The benefit of the first measurement is that we are using the closest distance — and that feels (yes, I’m using feeling in math) like a really good thing. The downside is that calculating all those distances from the points to the line is exhausting and algebraically hard.
  • The benefit of the second measurement is that calculating the distance between a point and the line is relatively easy. The downside is that the horizontal distance doesn’t feel right.
  • The benefit of the third measurement is also that calculating the distance between a point and the line is relatively easy. It also is, conceptually, something deep. If the points are data that have been measured, and the line is a theoretical model for the data, then the distance is the “error” or “difference” between the measured value and the theoretical value. We are summing errors and saying that the line with the smallest sum (the least total error) is the best fit. The downside is that vertical distance feels better than the horizontal distance of the second measurement, but not as natural as the true closest distance of the first.

But yeah, you’re upset. You wanted there to be inherently one right answer. We — using our brains — have come up with some proposals. Each has merits. We’ll soon hone in on one type of measurement, talk about its merits, and see why everyone uses it so much that it has become the standard measurement to find The Line of Best Fit.

For now, relax. We’ve done something great. Say we gave two of your friends the set of points above and had each one hand draw the line of best fit. You can decide which one did a better job just by adding a bunch of little line segments together. In fact, you have three different ways of deciding, and you have a logical justification for each!

[1] Of course, if you’re a super argumentative student, you might ask: “what if there are two, or even more, lines that have the same lowest measurement?” Well, I love that question. It’s a wonderful question. And worth investigating. Just not right here, right now. And yes, believe it or not, we will check all infinity lines soon enough. It’s possible. Math gives us shortcuts.

Books! Books! Books!

I’ve been a bit incommunicado lately. Nothing bad has happened! I’m not working harder than normal! I don’t know why but I haven’t been moved to post anything. And you know my feeling about blogging — it can’t be a chore, so don’t force it.

That being said, I wanted to share something we’ve been diverted by in multivariable calculus recently:

I gotta say, I love this class and I love working with these kids. They remind me how when you find the right thing, exploration is captivating.

This is the “book overhang problem.” The question we dealt with was: can you stack books at the edge of a table so that the top book is off the table completely (meaning if you’re looking down on the stack of books, the top book doesn’t lie over the table at all)?

We haven’t yet found the optimal solution, but we’re going to be discussing our musings on Friday — what the best 3, 4, and 5 book configuration might be, and if we can generalize it.
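For what it’s worth, the classical one-book-per-level analysis of this puzzle (which our musings may or may not land on) says the book k levels down from the top can jut out 1/(2k) of a book length past the book beneath it, so the total overhang is half a harmonic sum. A sketch, in case you want to check conjectures against it:

```python
def harmonic_overhang(n):
    """Maximum overhang, in book lengths, of the classic harmonic stack
    of n books: book k from the top juts out 1/(2k) past the book below."""
    return sum(1.0 / (2 * k) for k in range(1, n + 1))

# the top book clears the table edge once the overhang exceeds 1 book length
n = 1
while harmonic_overhang(n) <= 1:
    n += 1
# n is now 4: four books suffice in the classic one-per-level stack
```

(Stacks that put more than one book per level can do even better, so this is a baseline, not necessarily the optimum.)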

Bric a Brac, Flotsam and Jetsam, This and That

It’s only Monday, but I’m wiped. For some reason, my kids were exhausted today also — zombies! This week is going to be rough, methinks. I have to come to school early every day, and I have to stay late (until 8:30pm tomorrow!) a few other days. But we endeavor, right?

In any case, I wanted to share a few things I did in my classes recently – a smorgasbord of this and that, bric-a-brac.

1. A while back, I had Edmund Harriss (@gelada on twitter) come speak to a few of my classes about what real mathematicians do. He had them play with infinite tilings of the plane, by actually having them do tilings! But with weird tiles (including the Penrose Tiles), which made it all the cooler.

Fun times. I liked having something out of the ordinary for my Calculus and Algebra II kids. I think they’ll remember him coming to visit more than how to find the solution to 1D quadratic inequalities or how to find the concavity of a function.

2. With another teacher and two students, I went to the Museum of Mathematics’ first Math Encounters lecture, The Geometry of Origami, from Science to Sculpture, given by MIT professor Eric Demaine. I have seen a few talks on origami and math (in person or on video), and this was the best. I’ve already signed up for the next two lectures.

3. I needed to prepare a review for my Algebra II kids for advanced quadratics topics. If we have a review at all, I usually just whip up 8 problems and give my kids the entire class period to work on them — from the “most difficult” to the “least difficult.” I have a set of solutions that I keep at the front of the room, so students can check their work. However, I decided to try to mix things up. I wanted to use Sue Van Hattum’s Risk game… it forces students to ask themselves: what do I know and how confident am I in what I know? (It’s meta-cognitive like that).

To set it up, I talked to my kids explicitly about how the purpose of the exercise was to review, but also to be hyper-conscious about what you actually know (versus what you think you know). I put kids in pre-chosen pairs. And each pair got a booklet of the Quadratics and Inequalities Review Game (each page was cut in half and stapled). Below are the first two questions from the game.

Each group started with 100 points to wager — they lost the points if they got the question wrong, and they gained the points if they got the question right.

Some possible game trajectories:

100 –> 150 –> 250 –> 490 etc.

100 –> 10 –> 15 –> 30 etc.

Anyway, what was great was that the game really got students engaged and talking. Each student tended to work on the problem individually, and then when they were done, they would compare with their partner.

(If you try this, you have to make sure that students know NOT to skip ahead… everyone is working on one problem at a time. Then you go over the problem, and THEN everyone starts the next problem.)

Since I don’t like review games with a time-pressure element, I also gave out a page of problems on older first quarter topics. Each of those questions answered correctly was worth 20 points.

I am definitely going to use this review activity again.

4. I used Maria Andersen’s Anti-Derivative Block game today (it’s like tic tac toe, where you need to get 4 in a row, and uses calculus). I didn’t teach my kids antiderivative tricks. I just told them what an antiderivative was and had them play the game. I’m currently trying to teach intuition regarding antiderivatives (many students have trouble reversing their thinking) and so I spend a day or two just working on this intuition.

5. For completing the square, another teacher in the department shared with me a great mnemonic that helps students remember what to do. She does a funny little thing recalling BOP IT. If you know BOP IT, then you know to say it in that BOP IT voice: Halve it! Square it! Add it!

Of course this comes AFTER they can explain to you why you’re halving it, squaring it, and adding it. They have to know WHY these are part of the completing the square process, but once they do…
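A quick worked example of the chant in action, on x^2 + 6x + 1:

```latex
x^2 + 6x + 1
  % Halve it!  6/2 = 3.   Square it!  3^2 = 9.   Add it (and subtract it)!
  = (x^2 + 6x + 9) - 9 + 1
  = (x + 3)^2 - 8
```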

6. Three of my multivariable calculus students — one with an iPhone, one with a BlackBerry, and one with a Droid — wanted to decide which one took the best picture. So each captured our triple integral lesson on their phone, and another teacher and I picked the best. The winner:

fnInt reprise

My last post was about how the TI-83/84 calculates integrals (how fnInt works), and how it messes up when you have large intervals.

I just came from my Multivariable Calculus class, where each student had done some thinking about it. One investigated the Gauss-Kronrod quadrature. A couple others played around with fnInt and came up with some bounds for when fnInt was good and when fnInt was bad for our function f(x)=e^{-x^2}.

What we did today was to start investigating fnInt in a different way. (Yeah, my goal was to start triple integrals today… but this was way more exciting in the moment…)

We looked at \int_1^{\infty} \frac{1}{x^2}dx and used fnInt to calculate it.

It turns out that fnInt goes crazy and fails to be a good estimator at a particular large interval.

So we continued looking at \frac{1}{x^3}, \frac{1}{x^4}, \frac{1}{x^5}, etc. We looked at where fnInt broke down.

This is what we found out:

The left column is the exponent in \frac{1}{x^n}. The right column is the last integer you can integrate to (using fnInt) without getting a terrible estimation of the area. (Recall we’re integrating from 1 onwards, not from 0.)

My kids are going to go home and see what they can make of this data. We hope we can use it to come up with a prediction for where fnInt will go awry for something like \frac{1}{x^{43}}. And maybe it’ll also work for non-integer exponents, like \frac{1}{x^{3.23}}. We’ll see.
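If you want to play along at home without a TI, here’s a crude stand-in. This is NOT fnInt’s actual algorithm (fnInt uses an adaptive Gauss-Kronrod scheme), but a fixed-sample midpoint rule shows the same flavor of breakdown once the interval dwarfs the region where the area actually lives:

```python
def midpoint_integral(f, a, b, n=20):
    # fixed-sample midpoint rule: any scheme that only looks at a fixed
    # number of sample points will miss the action near x = 1 once the
    # interval gets huge
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# true area of 1/x^2 from 1 to B is 1 - 1/B, which tends to 1
for B in (10, 10**3, 10**6, 10**9):
    print(B, midpoint_integral(lambda x: x ** -2, 1, B), 1 - 1 / B)
```

The estimate is decent for B = 10 and collapses toward 0 as B grows, because all 20 sample midpoints land far from x = 1.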

…Hopefully we’ll start on triple integrals soon, though…

TI-83/84 Question

Today in multivariable calculus, we were talking generally about \int_{-\infty}^{\infty} e^{-x^2} dx. Before we embark on evaluating this integral, I wanted kids to guesstimate using their calculators what the value is.

The calculator image showed:

They had a conjecture as to what was going wrong when we expanded the interval… if the calculator were doing a finite number of Riemann Sums, then the width of each rectangle would be large and the sampled height (especially near the hump at 0) would be small.

Okay I’m describing it terribly… maybe a terrible picture will help.

Good conjecture. Great conjecture, in fact. But I doubted that the TI-83/84 uses Riemann Sums to do fnInt.
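Still, their conjecture is easy to test in miniature. A fixed-sample midpoint rule (again, a toy model of their idea, not of fnInt itself) does exactly what they predicted once the interval dwarfs the hump:

```python
import math

def midpoint_sum(f, a, b, n=20):
    # n midpoint rectangles: a toy model of the students' conjecture,
    # not of fnInt's actual (Gauss-Kronrod) algorithm
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

f = lambda x: math.exp(-x * x)   # true integral over the whole line is sqrt(pi) ~ 1.772

small = midpoint_sum(f, -2, 2)        # samples land on the hump: reasonable answer
big = midpoint_sum(f, -1000, 1000)    # nearest sample is at x = +/-50: essentially 0
```

With the wide interval, every rectangle is 100 units across and every sampled height is vanishingly small, so the estimate collapses to zero.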

It was the end of class, so I sent my kids off with this one charge: investigate how the TI-83/84 calculates integrals, and see if you can’t explain why we’re getting funky answers for a large interval.

I figured I’d pose the question to you, if any of you are calculator savvy…

I wonder if part of it has to do with the fact that the calculator can only store so many (is it 15?) digits?

PS. My very limited research has led me to the fact that the calculator does something called Gauss-Kronrod quadrature, which is a lot of gobbledygook to me right now.

Part I of a self-inflicted challenge: The Line of Best Fit

Here’s my challenge, created by me, for me. I want to explain where the line of best fit comes from. Not just the algorithm to find it, but conceptually how it is found. My intended audience: students in Algebra II. Where does the derivation come from? Multivariable calculus.

So here we go.

Let’s say we have a set of 5 points: (1,1), (3,5), (4,5), (6,8), (8,8)

We want a “line of best fit.” It’s tricky because we don’t exactly know what that might mean, quite yet, but we do know that we want a line that will pass near a lot of the points. We want the line to “model” the points. So the line and the points should be close together. In other words, even without knowing what exactly a “line of best fit” is, we can say pretty certainly that it is not:

Instead, we know it probably looks like one of the following lines:

LINE A: y=1.1x


LINE B: y=0.9x+1

Of course it doesn’t have to be either of those lines… but we can be pretty sure it will look similar to one of them. You should notice the lines are slightly different. The y-intercepts are different and the slopes are different. But both actually lie fairly close to the points. So is Line A or Line B a better model for the data? And an even more important question: might there be another line that is an even better model for the data?

In other words, our key question is now:

How are we going to be able to choose one line, out of all the possible lines I could draw, that seems like it fits the data well? (One line to rule them all…)

Another way to think of this question: is there a way to measure the “closeness” of the data to the line, so we can decide if Line A or Line B is a better fit for the data? And more importantly, is there an even better line (besides Line A or Line B) that fits the data?

(Part II to come…)

UPDATE: Part II here