As previously noted, I probably won't do a proper review of Made to Stick, even though I should. However, I've found myself thinking about it a lot this week, and now I'll try and make you think about it, too.
Made to Stick was partially inspired by Gladwell's excellent The Tipping Point, and one aspect it shares with that book is its focus on practical applications. TTP was useful to marketers, inventors, salesmen, and political activists: anyone who wanted to change people's behavior, whether by making them buy Hush Puppies or embrace a new political party. Similarly, MTS focuses on how to present ideas to people in a way that will make an impact: if you apply the six principles the book describes, people will remember your message for years.
The reason I've been thinking about it, besides the fact that it's a good book, is that this week I've been attending a training course. It isn't the first training I've been through, nor the best or the worst class I've taken. However, since it's the first course I've taken since reading the book, I've found that I'm much more attuned to what is and isn't working, and I've been consciously thinking about ways the instructor could improve his presentation.
The authors describe a tendency called "The Curse of Knowledge." The people most likely to teach a subject or argue a point are those who already know the most about it; however, because they know so much, it's difficult for them to understand the perspective of someone who isn't familiar with it. Someone with a PhD in anthropology knows a great deal about the field, but he will make assumptions about what his students already know, presume that they are already interested in the topic, and otherwise orient his lecture in a manner that makes sense to other anthropologists but may be confusing and frustrating for incoming freshmen who know next to nothing about the subject.
I was thinking about this a lot yesterday. Most of our training takes place using a particular software program that the instructor and several students are extremely familiar with, most likely using it every day, but which I and several other students have hardly ever used before. The instructor will say something like, "And now we need to import the shape as a symbol," and will rapidly execute a series of maneuvers. Because he is an expert user, this really is one action to him: of course importing the shape as a symbol implies all these other steps. However, a new learner like me needs it broken down more patiently: "First click on the Actions menu, then select Import, then choose As Symbol from the submenu. On the next screen, highlight this radio button..." The instructor doesn't think that he is skipping information or going too fast, because he naturally views this operation through the lens of his own experience, not through the eyes of someone who hasn't used this program before.
(This post may come off as critical or upset, but I really don't mean it that way. I myself am guilty of these sins and more, and would be just as bad if I were, say, teaching a class on writing BREW applications in Visual Studio.)
One of the items MTS champions is "Concrete": to help your idea stick in people's minds, you should use specific examples, rather than talk about things generally. Once again, the Curse of Knowledge is at work here: experts in a field are inclined to be interested in abstractions and generalizations, but these don't make sense to someone who doesn't have the background of concrete knowledge to support those higher-level constructs. I realized that most software training classes I've taken have been guilty of being insufficiently concrete. Other programmers will instantly know what I am talking about: variables are given names like "foo", "bar", "myShape", "myFunction", and other purposefully meaningless names. In a sense this is deliberate, because the name really is unimportant as far as the language is concerned; you could give it any name you wanted to. In practice, though, you would never use a variable name like "foo" or "myShape" in a real program, because a real program has a purpose, and to make the program comprehensible to other programmers you need to select names that describe what something is or what it does.
When I'm in the process of taking a programming course, any given exercise will make perfect sense for me. I'll name a variable "myShape" and follow the steps in the description, and at least at that point, understand what it does. However, stepping through this doesn't give my mind many "hooks" into what I just did. Next week, when I'm presenting this material to the rest of my team, the odds that I'll recall any specific exercise are rather low, and as a result I may forget to describe some particular techniques I "learned" in the course. By contrast, if I were creating, say, a calculator application, or a thermometer, then that would give my mind more ways to remember things. "mercuryMeter" is much more memorable than "myShape", and I might recall the way I manipulated the meter, which would prompt me to remember the commands I used to make it grow and shrink.
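To make the contrast concrete (in a hypothetical Python sketch, since I haven't said what language or program the course actually uses), here's the same scaling operation written both ways: once with a throwaway exercise name, and once with names from an imagined thermometer app like the one described above.

```python
# The exercise version: technically correct, but "myShape" gives your
# memory nothing to hold on to.
def resize(myShape, factor):
    """Scale some shape by some factor. Which shape? Why? Who knows."""
    myShape["height"] = myShape["height"] * factor
    return myShape

# The same operation inside a concrete thermometer program: the names
# alone tell you what the code is for and why the height changes.
def update_mercury(mercuryMeter, temperature):
    """Grow or shrink the mercury column to match the current temperature."""
    mercuryMeter["height"] = temperature * mercuryMeter["pixels_per_degree"]
    return mercuryMeter

mercuryMeter = {"height": 0, "pixels_per_degree": 4}
update_mercury(mercuryMeter, 25)  # the column grows to 25 * 4 = 100 pixels
```

Both functions do the same thing, but a week later, "mercuryMeter" and "update_mercury" give you a story to reconstruct; "myShape" and "resize" do not.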
(Incidentally, THIS VERY POST is guilty of not being sufficiently concrete. I'm always a little uncomfortable writing about work, and you'll notice that I'm not saying exactly what the class is, what I'm learning, or what I hope to do with it. Sorry. If I were to include that information, odds are better that you would remember the post and realize why it's important to me.)
One of the best professors I ever had was Dr. Ken Goldman at Washington University (in St. Louis, natch). Not only was he an excellent teacher, but he did a great job of teaching computer science, which as I've mentioned above is frequently not taught in a way that makes sense for beginners. Looking back at his class, I can see that he nailed all six of the principles given in MTS. On the specific topic of concreteness, all of the labs for his course and the programs he gave in lectures were programs ABOUT something, often something memorable. I remember in particular an algorithm he designed for "the marriage problem", made even more specific by running on a data set that included Ken and Sally Goldman, Bill and Hillary Clinton, and so on. I still remember that lecture eight years later; what are the odds I would if the data set were, say, "Man 1", "Man 2", "Woman 1", "Woman 2", and so on? Again, programmers would tend to think in the latter sense, because the whole purpose of the program is that it's generic enough that it doesn't matter who the people are. While that's accurate, it isn't very sticky, and I'm grateful to Dr. Goldman for recognizing that.
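I don't remember exactly which algorithm Dr. Goldman presented, but "the marriage problem" usually refers to stable matching, so here's a sketch of the classic Gale-Shapley deferred-acceptance algorithm in Python, run on a memorable named data set instead of "Man 1" and "Woman 1" (the preference lists are invented for illustration):

```python
def stable_matching(men_prefs, women_prefs):
    """Gale-Shapley deferred acceptance: men propose in preference order,
    women tentatively accept and trade up if a better suitor arrives."""
    free_men = list(men_prefs)
    next_choice = {m: 0 for m in men_prefs}  # index of next woman to propose to
    engaged_to = {}                          # woman -> her current fiance
    # rank[w][m] = how highly w ranks m (lower number = more preferred)
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    while free_men:
        man = free_men.pop()
        woman = men_prefs[man][next_choice[man]]
        next_choice[man] += 1
        if woman not in engaged_to:
            engaged_to[woman] = man
        elif rank[woman][man] < rank[woman][engaged_to[woman]]:
            free_men.append(engaged_to[woman])  # the jilted suitor is free again
            engaged_to[woman] = man
        else:
            free_men.append(man)                # rejected; he'll try his next choice
    return {m: w for w, m in engaged_to.items()}

# A memorable data set beats "Man 1"/"Woman 1":
men = {"Ken":  ["Sally", "Hillary"],
       "Bill": ["Hillary", "Sally"]}
women = {"Sally":   ["Ken", "Bill"],
         "Hillary": ["Bill", "Ken"]}
stable_matching(men, women)  # pairs Ken with Sally and Bill with Hillary
```

The algorithm is, of course, completely indifferent to the names, which is exactly the point: the generic version is correct, and the named version is the one you remember eight years later.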
Plenty more I could say on this topic, but then I would run the risk of distracting from the main point. Wait a minute... I don't have a point! I'm free! Free!