
I’ve been thinking of ways to improve my self-learning techniques in mathematics. Specifically, how to best “learn” definitions without resorting to memorizing them.

I have thought of one approach so far, and I would like to get some feedback on its potential limitations and how this approach can be improved. If this approach is already known, a name for it would also be appreciated.

Before explaining my approach, I will assume that a mathematical definition can be expressed in the following format:

We say that X is Y if condition A, condition B, ..., and condition Z are satisfied.

where condition A, condition B, ..., and condition Z either evaluate to True or False depending on the nature of object X, and Y is the name given to object X if all these conditions are satisfied.
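In logical notation, the format reads $Y(X) \iff A(X) \wedge B(X) \wedge \cdots \wedge Z(X)$, where the "if" of a definition is conventionally read as "if and only if": $X$ earns the name $Y$ exactly when every condition evaluates to True.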

For example, in the context of sequences, the definition of a bounded sequence can be expressed as:

We say that a sequence $a_n$ (X) is bounded (Y) if there exists an $M \in \mathbb R$ such that for every $n \in \mathbb N$, $|a_n| \leq M$ (condition A)

where "sequence $a_n$" is the object X, "bounded" is Y, and "there exists a $M \in \mathbb R$ such that for every $n \in \mathbb N$, $|a_n| \leq M$" is condition A.

Then, my self-learning approach consists of the following two steps:

  1. Given a mathematical definition, identify the conditions A, B, ..., and Z in the definition.

For example, given a set $X$ and a collection $\mathcal F$ of subsets of $X$, we call $\mathcal F$ an algebra over $X$ if it satisfies the following three conditions:

(a) Closed under complements in $X$: if $A \in \mathcal F$, then $X \setminus A \in \mathcal F$.
(b) Contains the empty set: $\emptyset \in \mathcal F$.
(c) Closed under finite unions: if $A, B \in \mathcal F$, then $A \cup B \in \mathcal F$.

(a), (b), and (c) are the conditions we are looking for.

  2. Once the conditions have been identified, construct examples that satisfy different combinations of these conditions.

In the example above for an algebra $\mathcal F$ over $X$, we first construct a set $X$ and a corresponding collection of subsets $\mathcal F$ of $X$ that satisfy conditions (a), (b), and (c).

Then, we construct concrete examples that satisfy condition (a) but not conditions (b) and (c), examples that satisfy conditions (a) and (b) but not condition (c), examples that satisfy condition (b) but not conditions (a) and (c), and so on. That is, we construct examples that fail to be algebras over $X$ because they satisfy only some of the conditions rather than all of them. We can also construct examples that satisfy none of the conditions.
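To make this enumeration systematic for small examples, here is a minimal sketch in Python (my own illustration, not part of the approach itself; the helper names powerset and conditions are mine). It enumerates every nonempty collection $\mathcal F$ of subsets of a small set $X$, groups the collections by which of (a), (b), and (c) they satisfy, and prints one witness per pattern:

    from itertools import chain, combinations
    from collections import defaultdict

    def powerset(s):
        """All subsets of s, as frozensets."""
        s = list(s)
        return [frozenset(c) for c in
                chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

    def conditions(X, F):
        """Evaluate (a) closed under complements in X, (b) contains the
        empty set, (c) closed under pairwise (hence finite) unions."""
        a = all(X - A in F for A in F)
        b = frozenset() in F
        c = all(A | B in F for A in F for B in F)
        return a, b, c

    X = frozenset({1, 2})
    subsets = powerset(X)

    # Group every nonempty collection F of subsets of X by which of the
    # three conditions it satisfies, keeping one witness per pattern.
    witnesses = defaultdict(list)
    for r in range(1, len(subsets) + 1):
        for F in combinations(subsets, r):
            witnesses[conditions(X, frozenset(F))].append(F)

    for pattern in sorted(witnesses):
        example = [set(A) for A in witnesses[pattern][0]]
        print(pattern, example)

One instructive by-product: with $X = \{1, 2\}$, the pattern "(a) and (b) but not (c)" has no witness at all, since (a) and (b) force $\emptyset, X \in \mathcal F$ and the two singletons can only enter as a complementary pair, which already gives closure under unions; you need $|X| \geq 3$ (for example $\mathcal F = \{\emptyset, X, \{1\}, \{2,3\}, \{2\}, \{1,3\}\}$) to realize it. Discovering which combinations of conditions are actually realizable is itself part of the exercise.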

I believe this approach helps with internalizing/remembering a definition without the need to explicitly memorize it. However, I may be wrong.


2 Answers


I don't have a specific name for this, but YES! Do this. This is infinitely better than simply trying to memorize a definition!

Initially, you see why each part of the definition is there: if you have an example satisfying (a) and (c) but not (b), then you have a concrete example (or counterexample, I guess). Over time, you build up a bag of those counterexamples: "Oh, this is not an algebra, since it does not..."

Where I think you get the most out of this, then, is to combine this approach with thinking about the basic theorems related to the definition. What "basic" properties of an algebra can you deduce from the definition? Which of the parts (a), (b), and/or (c) are necessary for which steps of the proof, and for which other properties?
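For instance (standard first deductions, spelled out here for concreteness): from (a) and (b), $X = \emptyset^c \in \mathcal F$; and from (a) and (c), $\mathcal F$ is also closed under finite intersections, since $A \cap B = (A^c \cup B^c)^c$. Noticing exactly which conditions each deduction uses is what cements them.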

Working through the definitions and theorems together shows you how the various pieces fit together. Later on, you'll remember the counterexamples -- i.e. this thing is not quite an algebra because... -- and that will help you remember the part of the definition that you always seem to forget!

Even for simpler definitions, a single word can carry the weight. The word "nontrivial" is doing real work here: a set of vectors is said to be linearly independent provided that no nontrivial linear combination is the zero vector.

Without that word, of course, no set would ever be linearly independent, since I could choose all of the scalars to be zero! But I know that linear independence is a real and important thing. What did I forget? Oh, right: nontrivial!
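In symbols (the same definition, unpacked): $\{v_1, \dots, v_k\}$ is linearly independent provided that $c_1 v_1 + \cdots + c_k v_k = \mathbf{0}$ forces $c_1 = \cdots = c_k = 0$. The word "nontrivial" is what excludes the all-zero choice of scalars, which produces $\mathbf{0}$ for any set of vectors whatsoever.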

To use another linear algebra example: whenever I teach the Invertible Matrix Theorem (a long list of properties of a matrix, each equivalent to the matrix being invertible), I tell my students that they should be able to pick any two properties and connect them by a chain that they understand:

Suppose $A$ is an $n \times n$ matrix which has linearly independent columns. Connect this to the transformation given by $\mathbf{x} \mapsto A\mathbf{x}$ being onto.

I want them to be able to do something like:

  • Linearly independent columns means that $A\mathbf{x} = \mathbf{0}$ has only the trivial solution $\mathbf{x} = \mathbf{0}$, since $A\mathbf{x}$ is a linear combination of the columns of $A$.
  • This means that the system $A\mathbf{x} = \mathbf{0}$ has no free variables, since otherwise we would have more than one solution.
  • Therefore, $A$ has a pivot in each column.
  • Since $A$ is square, $A$ also has a pivot in each row.
  • The matrix equation $A\mathbf{x} = \mathbf{b}$ is then consistent for every choice of $\mathbf{b}$, since inconsistency can only arise from a row without a pivot.
  • Thus, for any $\mathbf{b}$ there is a choice of $\mathbf{x}$ such that $A\mathbf{x} = \mathbf{b}$, and so the mapping is onto.

Of course, there are lots of other ways to get there (especially once we can talk about column and null spaces and dimension), but that is not the point. I strongly encourage my students to work through these kinds of examples -- and your kind too: what if the columns were not linearly independent? Why would the mapping then fail to be onto?
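If you like to check such a chain numerically, here is a small sketch (my addition, assuming NumPy; the random matrix stands in for "a matrix with linearly independent columns", which a generic random square matrix is):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4

    # Independent columns: x -> Ax should be onto, so every b has a preimage.
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    x = np.linalg.solve(A, b)
    print(np.allclose(A @ x, b))  # True: a preimage of b exists

    # Break independence (copy a column): the map is no longer onto,
    # and even the least-squares "solution" misses a generic b.
    A[:, -1] = A[:, 0]
    x, _, rank, _ = np.linalg.lstsq(A, b, rcond=None)
    print(rank, np.linalg.norm(A @ x - b))  # rank < n, nonzero residual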

Finally -- and sorry, this has gone on a long time -- I would also encourage you, especially if you are self-learning, to doubt every theorem in the book you are using! Try to construct counterexamples. Spend time thinking about why they all fail to actually be counterexamples. What is it about the precision of the definition and the statement of the theorem that keeps your counterexample from "working"?

  • Wow, thank you very much for your comprehensive feedback! I really appreciate it. I will wait 1-2 days to allow other answers before accepting this one. Thank you again. – mhdadk

The approach you describe is effective for understanding a definition in isolation. A follow-on you can add to it is: "given that this thing did not meet the criterion, what could it do?" Many of the structures you encounter during your combinatorial approach have been studied and have interesting properties of their own.

The other thing I'd note is that your approach only goes in the direction of the more fundamental: it removes conditions but never adds one. Questions like "what if the operation were also commutative" or "what if this space had a norm" often lead to interesting places. Given that you are working with abstract algebra, I'd give the example of a monoid versus a group. If you start from monoids and only take things away from the definition, you may not reach the idea of a group (which requires adding a property) until much later. Groups in particular have some astonishingly nice properties, and I find them to be an interesting contrast to other structures.
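To make that contrast concrete: $(\mathbb N, +)$ with identity $0$ is a monoid but not a group, since $1$ has no additive inverse in $\mathbb N$; adding the inverse condition (and enlarging to $(\mathbb Z, +)$) is what produces a group.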

  • (+1) Very insightful. Thank you. – mhdadk
