I was hunting for a book on my bookshelves when I got distracted and started browsing. In one book, I came across this great idea that I didn’t want to lose. So I thought I’d type it here in an attempt to remember.

One of the hard things about working with derivatives, for me, is that I can easily get caught up in the wonderful (to me, annoying to my kids) algebra. We have the chain rule, the product rule, the quotient rule, and strange and funky derivatives like the derivatives of the inverse trig functions. And I admit it. I *love* going overboard with these sorts of questions. There’s something really cool about being able to have an answer to a problem take up the length of a page. *It looks cool, darnit!* And when we get to this point in the curriculum, I often lose sight of the meaning of the derivative. **The process takes precedence.** And for weeks, we’re swimming (drowning?) in a sea of equations.

When I get to that point, I hope to remember to give my kids this problem:

Find the derivative of $\log(\log(\sin x))$. I’m confident that by the time I’m done with them, my kids will get $\frac{\cos x}{\sin x \,\log(\sin x)}$.
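For the record, the chain rule does all the work here, one factor per layer of the composition:

```latex
\frac{d}{dx}\,\log(\log(\sin x))
  = \frac{1}{\log(\sin x)} \cdot \frac{1}{\sin x} \cdot \cos x
  = \frac{\cos x}{\sin x \,\log(\sin x)}
```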

**But then I have to ask them to sketch a graph of** $\log(\log(\sin x))$.

This great setup is on pages 64 and 65 of Ian Stewart’s *Concepts of Modern Mathematics*. He continues, describing what happened when he gave this problem to his class:

This caused great consternation, because it revealed that the formula didn’t make any sense. For any value of $x$, $\sin x$ is at most equal to 1, so $\log(\sin x) \le 0$. Since logarithms of negative numbers cannot be defined, the value $\log(\log(\sin x))$ does not exist; the formula is a fraud.

On the other hand, the ‘derivative’ $\frac{\cos x}{\sin x \,\log(\sin x)}$ … does make sense for certain values of $x$ …

Some people might enjoy living in a world where one can take a function which does not exist, differentiate it, and end up with one that does exist. I am not one of them.

There’s a great moral here, about remembering that taking the derivative of a function *means* something. Yes, you can talk about composition of functions and domains and ranges and all that stuff, but that’s not the enduring understanding I would pull from this. It is: divorcing calculus from meaning and focusing on routine procedures is a dangerous road to travel — so one must always be vigilant.

It actually reminds me of one of my favorite calculus problems, one that can only be solved by stopping the focus on procedure and starting to *think.* I would never give this to my calculus kids, but for the very high-achieving AP Calculus BC kid, this might throw them for a loop (in a good way):

I first saw this problem in Loren C. Larson’s *Problem-Solving Through Problems* (pages 32-33). I don’t quite want to share the solution in case you want to try it yourself. After the jump, I’ll throw down the answer (but not solution) so you can see if you got it right.

Well, actually, $\log(\log(\sin(x)))$ does make sense over the complex numbers (with the usual caveat about multiples of $2\pi i$).

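The complex-numbers observation can be spot-checked numerically. A minimal sketch using Python’s stdlib `cmath` (which takes the principal branch of the complex log); the test point $x = 1$ is my arbitrary choice:

```python
import cmath

def f(x):
    # log(log(sin x)), with log taken as the principal branch of the complex log
    return cmath.log(cmath.log(cmath.sin(x)))

x = 1.0   # arbitrary test point where sin x lies in (0, 1)
h = 1e-6
# central finite difference of f along the real axis
numeric = (f(x + h) - f(x - h)) / (2 * h)
# the formal chain-rule answer: cos x / (sin x * log(sin x))
formula = cmath.cos(x) / (cmath.sin(x) * cmath.log(cmath.sin(x)))
print(numeric, formula)
```

The value of $f$ itself is complex (its imaginary part is the constant $\pi$ on this stretch), but the constant drops out of the difference quotient, and the two printed values agree.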

This is why I try to emphasize depth of understanding in addition to procedural fluency with my Algebra kids. If they don’t know WHY they’re supposed to do what they need to do, they might as well be monkeys at typewriters, trying to compose The Tempest in one draft.

- Elizabeth (aka @cheesemonkeysf on Twitter)

I was a bit suspicious of Stewart’s example when I read it: as we know when taking antiderivatives, the correct function to work with is not log(x) but log|x|. The function log|log|sin(x)|| makes perfect sense, and has the advertised derivative.
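This commenter’s claim about $\log|\log|\sin(x)||$ is also easy to spot-check with a real-valued finite difference (a sketch; the test point $x = 1$ is arbitrary, and any $x$ with $\sin x \ne 0$ and $|\sin x| \ne 1$ works):

```python
import math

def g(x):
    # log|log|sin x|| -- the absolute values keep every layer real
    return math.log(abs(math.log(abs(math.sin(x)))))

x = 1.0
h = 1e-6
numeric = (g(x + h) - g(x - h)) / (2 * h)
# the advertised derivative: cos x / (sin x * log|sin x|)
formula = math.cos(x) / (math.sin(x) * math.log(abs(math.sin(x))))
print(numeric, formula)
```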

More to the point, though, is that these immensely complicated formulas we ask students to differentiate have no use whatsoever. In my life as a physicist and mathematician, I can count on one hand the number of times I’ve had to use the quotient rule, and on two hands the number of times I have used the product rule. Seriously.

In the past couple of years I have reworked my calculus sequence with the goal of getting students to really understand differential equations (where all the real applications are) as smoothly as possible, and dumping all the things not needed to get there. The first time we used the product rule in the calculus sequence? Halfway through Tenenbaum and Pollard’s Ordinary Differential Equations.

My use of calculus these days is rather limited: usually for a simple optimization problem. The product rule is one of the few things I actually use. Of course, I never bother memorizing the quotient rule, since it is just the product rule and the chain rule, which are easier to apply directly most of the time.

I can’t remember the last time I used a differential equation—probably over 30 years ago. Different engineering professions call for different parts of calculus, so leaving things out because you don’t use them personally probably does a disservice to those who do need them.

I’m still a little pissed at all the math professors who taught me calculus, real analysis, complex analysis, and lots of other stuff but never mentioned Lagrangian multipliers, which I found out about as a professor from a student who had been better taught (he’d had an engineering education instead of a math education). Lagrangian multipliers are routinely useful, while most of the stuff I learned as an undergrad and as a math grad student has never been of use to me.


On a broader level, I worry that the standard curriculum emphasizes so much algebra acrobatics that it just turns off kids to any further math. I think kids form the viewpoint that studying math is about solving increasingly complicated algebra equations, and it doesn’t surprise me that many don’t choose to continue on that route.


ok, i know the most recent comments here are 3 years old, but i gotta ask anyway.

so i loved this, but i am a little curious…i’ve never seen “log(x)” to mean anything but “log base 10 of x.” so when reading through this problem, that was my assumption. but your solution only works when i read it as “log base e.” otherwise, my denominator has a [ln(10)]^2 in it.

to make matters worse, wolfram alpha does the same thing! but when i found a calculator that shows steps, i was able to figure out that that’s where my issue was.

http://symbolab.com/solver/derivative-calculator/%5Cfrac%7Bd%7D%7Bdx%7D%28%5Clog_%7Be%7D%28%5Clog_%7Be%7D%28%5Csin%28x%29%29%29%29

http://symbolab.com/solver/derivative-calculator/%5Cfrac%7Bd%7D%7Bdx%7D%28%5Clog_%7B10%7D%28%5Clog_%7B10%7D%28%5Csin%28x%29%29%29%29

thanks to anyone who can clear this up for me.

Hi! Actually, in high school, log(x) does tend to mean log base 10 of x. However, there is a convention in many branches of higher mathematics for log(x) to mean log base e of x. So you’ve hit upon a discrepancy that actually exists!

To a mathematician, “log” is not specific about the base. If you care what base is used, you must specify: ln, log_{10}, log_{2}, … . For many applications in math, the multiplicative factor you get by changing bases is irrelevant, and so the base is ignored.
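Concretely, that change-of-base bookkeeping is where the earlier commenter’s $[\ln(10)]^2$ comes from: writing $\log_{10} u = \ln u / \ln 10$ and applying the chain rule,

```latex
\frac{d}{dx}\,\log_{10}\!\big(\log_{10}(\sin x)\big)
  = \frac{\cos x}{\ln 10 \,\sin x \,\ln(\sin x)}
  = \frac{\cos x}{(\ln 10)^2 \,\sin x \,\log_{10}(\sin x)}
```

so reading every log as base 10 picks up exactly one factor of $\ln 10$ per layer.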

It was the designers of math libraries for programming languages who made the error of using the name log(x) instead of ln(x) as the fundamental function, rather than requiring log to have a base as an extra argument.