I think a real problem in this area is the belief that there is “one true notation” and that everything is unambiguous and clearly defined.

Yes, conventions have emerged, and people tend to use the same sort of notation in a given context, but in the main the notation should be regarded as an aide-mémoire, something to guide you.

You say that you’re struggling because of “the math notations and zero explanation of it in the context.” Can you give us some examples? Maybe getting a start on it with a careful discussion of a few examples will unblock the difficulty you’re having.


> I think a real problem in this area is the belief that there is “one true notation” and that everything is unambiguous and clearly defined.

One main cause for this belief is that in programming there is one true notation (or rather, a separate one for each language) that is unambiguous and clearly defined.

I dislike maths notation as I find it lacks rigour.


Formulas would also be easier to read if they did not name all their variables and functions with a single character.

If programmers wrote code like that (even Fortran programmers use three characters), no one would be able to understand the code…


> I dislike maths notation as I find it lacks rigour.

I see this a lot from programmers, but in essence, you seem to be complaining that maths notation isn’t what you want it to be, but is instead something else that mathematicians (and physicists and engineers) find useful.


> there is one true noation (or rather, a separate one for each language) that is unambiguous and clearly defined.

This is such a disingenuous take. How many of the source code files you write are 100% self-contained and well defined? I’d bet not a single one of them is. You reference libraries, you depend on specific compiler/runtime/OS versions, you reference other files, etc. If you take a look at any of these scientific papers you call “badly defined”, did you really go through all of the referenced papers and check whether they defined the things you didn’t get? If not, then you can’t be sure that the paper uses undefined notation. If you argue that it is too much work to go through that many references, well, that is exactly what you would have to do to understand one of your program files.


One can look at the source code to a program, the libraries it uses, the compiler for the language, and the ISA spec for the machine language the compiler generates. You can know that there are no hidden unspecified quantities because programs can’t work without being specified.

When you get down to the microcode of the CPU that implements the ISA you might have an issue if it’s ill-specified. You might be talking about an ISA like RISC-V, though, specified at a level sufficient to go down to the gates. You might be talking about an ISA like 6502 where the gate-level implementations have been reverse-engineered.

You can take programming all the way down to Boolean logic if you need to, and the tools are readily available. They don’t rely on you “just knowing” something.


Came here to say the same thing harshly and laced with profanity. I guess I can back off a bit from that now.

I was filled with crushing disappointment when I learned mathematical notation is “shorthand” and there isn’t a formal grammar. Same goes for learning writers take “shortcuts” with the expectation the reader will “fill in the gaps”. Ostensibly this is so the writer can do “less writing” and the reader can do “less reading”.

There’s so much “pure” and “universal” about math, but the humans who write about it are too lazy to write about it in a rigorous manner.

I can’t write software w/ the expectation the computer “just knows” or that it will “fill in the gaps”. Sure, I can call libraries, write in a higher-level language to let the compiler make machine language for me, etc. I can inspect and understand the underlying implementations if I want to, though. Nothing relies on the machine “just knowing”.

It feels like the same goddamn laziness that plagues every other human endeavor outside of programming. People can’t be bothered to be exact about things because being exact is hard and people avoid hard work.

“We’ll have a face-to-face to discuss this; there’s too much here to put in an email.”


I’m glad I’m not the only person like this. I’ve never liked traditional math notation and found it about as useful as traditional musical notation, that is, hard to read for the layman, and kept that way for no other reason than “this is how people have been doing it for a long time”. Maybe I’m in the minority, but when I read a CS paper I mostly ignore the maths and then go to the source code or pseudocode to see how the algorithm was implemented.


What you’re looking at is calculus, specifically differentiation. This is pretty core to understanding physics, because so much of physics depends on the time-evolving state of things. That’s fundamentally what’s happening here.

The triangle, for example, is the upper-case Greek letter delta, which in calculus represents ‘change of’. You might have heard of ‘delta-T’ with respect to ‘change of time’.

In calculus, upper-case delta means ‘change over a finite time’ vs lower-case delta meaning ‘instantaneous change’. The practical upshot is that the lower-case is the instantaneous rate of change at an instant in time, whereas the upper-case is the change over a whole interval (e.g. the average rate of change per second from time = 0 seconds to time = 3 seconds).
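To put the same idea in code, here’s a rough Python sketch (the position function is made up; only the rate-of-change idea matters):

    # Position as a function of time; x(t) = t^2 is just a made-up example.
    def x(t):
        return t * t

    # Upper-case delta: change over a finite interval, here t = 0 to t = 3.
    t0, t1 = 0.0, 3.0
    average_rate = (x(t1) - x(t0)) / (t1 - t0)    # delta-x over delta-t = 3.0

    # Shrinking the interval makes the same quotient approximate the
    # instantaneous rate at t0.
    h = 1e-6
    instantaneous_rate = (x(t0 + h) - x(t0)) / h  # ~0.0, the rate at t = 0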

If you are trying to grok this, I would suggest an introductory calculus or pre-calculus resource. It doesn’t have to be a uni textbook – higher-level high school maths usually teaches this. In this particular case, the Khan Academy would be my recommendation because it is about the right level (we’re not talking esoteric higher-level university knowledge here) and it is eminently accessible. For example, this link may be a good starter in this instance:

https://www.youtube.com/watch?v=MeU-KzdCBps


You say “There’s a formula with a triangle …” without telling me where. That’s not real helpful, and you’re making me do the work to find out what you’re talking about. If you want assistance to get started, you need to be more explicit.

However, I have done that work, so I’ve looked, and in the second column of page 210 there’s a “formula with a triangle”:

t_c = 5 · 10^{-5} sqrt( V / Dt )

… where the “D” I’ve used is where the triangle appears in the formula.

But that can’t be it, because just two lines above it we have:

“For a pulse of width Dt, the critical time …”

So that’s stating that “Dt” is the width of the pulse, and should be thought of as a single term.

So maybe that’s the wrong formula, or maybe it was just a bad example. So trying to be more helpful, the “triangle” is a Greek capital delta and means different things in different places. However, it is often used to mean “a small change in”.

https://en.wikipedia.org/wiki/%CE%94T

FWIW … at a glance I can’t see where that result is derived, it appears simply to be stated without explanation. I might be wrong, I’ve not read the rest of the paper.


I feel you’re coming at this without appreciating your body of prior knowledge. Intended or not, your statement “But that can’t be it, because just two lines above it we have…” assumes a whole lot of knowledge.

You and I both know that it reads as one term, but someone unfamiliar with calculus but exposed to algebra is drilled to understand separate graphemes as separate items, because the algebraic ‘multiply’ is so often implied, e.g. 3x = 3 × x, two individual ‘things’.

I think there’s merit in explaining the concept of delta representing change, because it’s not obvious. For example, when I was taught the concept in school, my teacher explicitly started with doing a finite change with numbers, then representing it in terms of ‘x’ and ‘y’, then merged them into the delta symbol. That’s a substantial intuitive stepping stone and I think it’s pretty reasonable that someone may not find this immediately apparent.


I agree completely that I’m coming at this with a lot of background knowledge, but if I’m reading in an unfamiliar field and I see a symbol I don’t recognise, I look in the surrounding text to see if the symbol appears nearby. As I say, “Δt” appears immediately above … that’s a clue. As you say, it’s drilled in at school that everything is represented by a single glyph, and if these are juxtaposed then it means multiplication, and that is another thing to unlearn.

But I think the problem isn’t the specifics of the “Δ”, it’s the meta-problem of believing that symbols have a “one true meaning” instead of being defined by the scope.

I agree that explaining the delta notation would be helpful, but that’s like giving someone a fish, or making them a fire. They are fed for one day, or warm for one night, it’s the underlying misconceptions that need addressing so they can learn to fish and be fed, or set on fire and be warm, for the remainder of their life.


I absolutely agree with your comments regarding teaching the underlying approach to digesting a paper. You definitely raise good points, especially the ‘one true meaning’ comment. I should state that I’m not discounting the value of your point, especially given this clarification. However, when I reflect on my experience learning this, I learnt best via initial explanation, then worked example, then the customary warning of corner cases and here-be-dragons.

e: I also think, on reflection, that a significant part of your ability to grok a new paper per your comments is your comfort in approaching these concepts due to your familiarity. Think of learning a new language – once you have a feel for it, you’re likely more comfortable exploring new concepts within it, whereas when you’re faced with it from the start you probably feel very lost and apprehensive.

I feel that understanding calculus is a fairly fundamental step in the ‘language of maths’, teaching that symbols don’t necessarily represent numbers but can represent concepts (e.g. delta being change). This isn’t something you encounter until then, but once you do, you begin to understand the characters associated with integrals, matrices, etc. in a way that you may not have previously with algebra alone.


I think that this is indeed the formula in GP’s question. And indeed, sometimes math notation is obtuse like that. It looks like 2 terms, but the triangle goes together with the t as a single term. At other times it might be called “dt”, and despite looking like a multiplication of 2 variables (d and t, or triangle and t in this case), it’s just a single variable with a name made of 2 characters.

The important thing here is that “For a pulse of width Dt” is the definition of this variable, but this can be easily missed if you’re not used to this naming convention.
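In programming terms, the fix is to give that two-character symbol a single descriptive name so it can’t be misread as a product. A hypothetical Python rendering of the formula quoted above (the function and parameter names are mine, not the paper’s):

    import math

    def critical_time(V, pulse_width):
        # "Dt" (delta-t) is one variable, the pulse width -- not D times t.
        return 5e-5 * math.sqrt(V / pulse_width)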


> it’s just a single variable with a name made of 2 characters.

I have this same problem with programming, when I have to deal with code written by non-mathematicians. They tend to use all these stupid variables with more than one letter and that confuses the heck out of me.


Sorry, I didn’t mean to make you work for me, but it’s a PDF and I didn’t know how to explain the position better (maybe I should have told you it was the first formula on page X).

For you it was a D, for me it was a triangle, and I didn’t get the meaning of that Dt. Maybe it’s just too advanced a paper for my knowledge.


BTW … you say:

> Maybe it’s just too advanced a paper for my knowledge.

Maybe it is for now … the point being that if you start at the beginning, chip away at it, search for terms on the ‘net, read multiple times, try to work through it, and then ask people when you’re really stuck, that’s one way of making progress.

You can, instead, enroll in an on-line course, or night-school, and learn all this stuff from the ground up, but it will almost certainly take longer. Your knowledge would be better grounded and more secure, but learning how to read, investigate, search, work, then ask, is a far greater skill than “taking a course”.

Others have answered your specific question about the delta symbol, but there are deeper processes/problems/questions here:

Not all concepts or values are represented by a single glyph; sometimes there are multi-glyph “symbols”, such as “Δt” in your example.

When you see a symbol you don’t recognise, read the surrounding text. The symbol will almost always be referenced or described.

The notation isn’t universal. Often it’s an aid to your memory, to write in a succinct form the thing that has been described elsewhere.

In these senses, it’s very much a language more akin to natural languages than computer languages. The formulas are things used to express a meaning, not things to be executed.

Specific questions about specific notation can be answered more directly, but to really get along with mathematical notation you need to “read like math” and not “read like a novel”.

None of this is universally true; all of it is intended to give you a sense of how to make progress.


I’m just saying “D” because I can’t immediately type the symbol here and it was easier just to use that. Not least, I didn’t know if that was the formula you meant.

But as I say, immediately above the formula it says:

“For a pulse of width ∆t, the critical time …”

So that really is saying exactly what that cluster of symbols means. There will be things like this everywhere as you read stuff. Things are rarely completely undefined, but you are expected to be reading along.

And you need to work. I just typed this into DDG:

“What does ∆t mean?”

The very first hit is this:

https://en.wikipedia.org/wiki/Delta_%28letter%29

That gives you a lot of context for what the symbol means, and this is the sort of thing you’ll need to do. You need to stop, look at the thing you don’t understand, read around in the nearby text, then type a question (or two, or three) into a search engine.


The triangle, or “delta”, is used to indicate a tiny change in the following variable.

Let’s say you go on a journey, and the distance you’ve travelled so far is “x” and the time so far is “t”.

Then your average velocity since the beginning is x / t .

But, if you want to know your current velocity, that would be delta x divided by delta t .

The delta is usually used in a “limiting” sense – you can get a more accurate measurement of your velocity by measuring the change in x during a tiny time interval. The tinier the interval, the more accurate the estimate of current velocity.
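If it helps to see that limiting idea numerically, here’s a rough Python sketch (the position function is made up purely for illustration):

    # Distance travelled as a function of time (invented for illustration).
    def distance(t):
        return 3 * t + 0.5 * t ** 2

    t = 2.0
    for dt in (1.0, 0.1, 0.001):
        velocity = (distance(t + dt) - distance(t)) / dt  # delta x / delta t
        print(dt, velocity)  # 5.5, then 5.05, then 5.0005: closing in on 5.0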

What I’m talking about here is the first steps in learning differential calculus. You could look for that at khanacademy.org. You might also benefit by looking at their “precalculus” courses.

Just keep plugging away at it; the concepts take a while to seep in. Attaining mathematical maturity takes years.


I’ll use this as an example for the point I’m trying to make in my comment https://news.ycombinator.com/item?id=29341727

Please don’t take this the wrong way. It is not meant to be demeaning, and it is not meant to be gatekeeping (quite the contrary!). But: If you do not know what a derivative is, then learning that that symbol means derivative (assuming that it does, I have not actually looked at what you link to) will help you next to nothing. OK, you’ll have something to google, but if you don’t already have some idea what that is, there is no way you will get through the paper that way.

I hope you take this as motivation to take the time to properly learn the fundamentals of mathematics (such as for example calculus for the topic of derivatives).


Please don’t take this the wrong way, but if you’re going to comment on something, you should probably first read the thing on which you wish to comment!

The “Δ” in this case was not a derivative.

I agree that taking a course on calculus might be the best way to proceed, but in this case, calculus is not needed (at this stage).

FWIW, I largely agree with the comment you reference. People seem to think that all they need to do is “learn math notation” and then they will be able to read and understand the math, and then do their own. That’s not the case, the notation is almost an epiphenomenon, and not the thing itself.

Even so, when reading a paper, having some familiarity with the notation is needed, so it’s an understandable question.


Looks like you need to grind through an elementary calculus book. With the exercises. You may think you can build intuition by reading just the definitions, but half of the understanding is tacit, and you get it through the exercises.

If you’re trying to get into signal processing, it’ll involve calculus with complex numbers, and knowledge of that is often gained through plodding through proofs and exercises over and over.


For about $5 you can find an old (around 1960-1969) edition of the “CRC Handbook of Standard Mathematical Tables”. I’ve owned two of the 17th edition, published in 1969, because back then hand calculators didn’t exist and many of the functions used in mathematics had to be looked up in books, like what is the square root of 217. Engineers used these handbooks extensively back then.

Now, of course, you have the internet and it can tell you what the square root of 217 is. Consequently, the value of these used CRC handbooks is low and many are available on eBay for a few dollars. Pick up a cheap one and in it you will find many useless pages of tables covering square roots and trigonometry, but you will also find pages of formulas and explanations of mathematical terms and symbols.

Don’t pay too much for these books, because the internet and handheld calculators have pretty much removed the need for them, but that is how I first learned the meanings of many mathematical symbols and formulas.

You might also look for books of “mathematical formulas” in your local bookstores. Math is an old field, and the notations you are stumbling over have likely been used for 100 years, like the triangle you were wondering about. (Actually, the triangle is the upper-case Greek letter delta. Delta T refers to an amount of time, usually called an interval of time.)

Unfortunately, because math is an old subject it is a big subject. So big that no one person is expert in every part of math. The math covered in high school is kind of the starting point. All branches of mathematics basically start from there and spread out. If you feel you are rusty on your high school math, start there and look for a review book or study guide in those subjects, usually called Algebra 1 and Algebra 2. If you recall your Algebra 1 and 2, take a look at the books on pre-calculus. The normal progression is one year for each of the following courses in order, Algebra 1, Geometry, Algebra 2, Pre-Calculus, and Calculus. This is just the beginning of math proficiency, but by the time you get through Calculus you will be able to read the paper you referenced.

Is it really a year for each of those subjects? It can be done faster but math proficiency is a lot of work. Like learning to be a good golfer, it would be unusual to become a 10 handicap in less than 5 years of doing hours of golf every week.


As a starting point you can check out the notation appendices from my books:
https://minireference.com/static/excerpts/noBSmathphys_v5_pr…
https://minireference.com/static/excerpts/noBSLA_v2_preview….
You can also see this excerpt here on set notation https://minireference.com/static/excerpts/set_notation.pdf

That covers most of the basics, but I think your real question is how to learn all those concepts, not just the notation for them, which will require learning/reviewing relevant math topics. If you’re interested in post-high-school topics, I would highly recommend linear algebra, since it is a very versatile subject with lots of applications (more so than calculus).

As ColinWright pointed out, there is no one true notation and sometimes authors of textbooks will use slightly different notation for the same concepts, especially for more advanced topics. For basic stuff though, there is kind of a “most common” notation, that most books use and in fact there is a related ISO standard you can check out: https://people.engr.ncsu.edu/jwilson/files/mathsigns.pdf#pag…

Good luck on your math studies. There’s a lot of stuff to pick up, but most of it has “nice APIs” and will be fun to learn.


Naively, I would say the following:

1) Search YouTube for multiple videos by different people on the topic you want to learn. Watch them without expecting to understand them at first. There is a delayed effect. Each content creator will explain it slightly differently, and you will find that it will make sense once you’ve heard it explained several different times and ways.

I will read the chapter summary for a 1k-page math book repeatedly until I understand the big picture. Then I will repeatedly skim the chapters I least understand until I understand their big picture. I need to know the terms and concepts before I try to understand the formulas. I will do this until I get too confused to read more, then I will take a break for a few hours/days and start again.

2) You have to rewrite the formulas in your own language. At first you will use a lot of long descriptions, but quickly you will get tired and start to abbreviate. Eventually, you get to the point where you will prefer the terse math notation because it is just too tedious to write it out in longer words.

3) You might have to pause the current topic you are struggling with and learn the math that underlies it. This means a topic that should take 1 month to learn might actually take 1 year because you need to understand all that it is based on.

4) Try to find an applied implementation. For example, photogrammetry applies a lot of linear algebra. It is easier to learn linear algebra if you find an implementation of photogrammetry and try to rewrite it. This forces you to completely understand how the math works. You should read the parts of the math books that you need.


Maybe a problem is trying to learn it by reading it.

I was a college math major, and I admit that I might have flunked out had I been told to learn my math subjects by reading them from the textbooks without the support of the classroom environment. It may be that the books are “easy to read if a teacher is teaching them to you.”

Talking and writing math also helped me. Maybe it’s easier to learn a “language” if it’s a two way street and involves more of the senses.

Perhaps a substitute to reading the stuff straight from a book might be to find some good video lectures. Also, work the chapter problems, which will get your brain and hands involved in a more active way.

As others might have mentioned, there’s no strict formal math notation. It’s the opposite of a compiled programming language. In fact, math people who learn programming are first told: “The computer is stupid, it only understands exactly what you write.” In math, you’re expected to read past and gloss over the slight irregularities of the language and fill in gaps or react to sudden introduction of a new symbol or notational form by just rolling with it.


First, just to state the obvious, if you can accurately describe a notation in words, you can do an Internet search for it.

When that fails, math.stackexchange.com is a very active and helpful resource. You can ask what certain notation means, and upload a screenshot since it’s not always easy to describe math notation in words.

If you don’t want to wait for a human response, Detexify (https://detexify.kirelabs.org/classify.html) is an awesome site where you can hand draw math notation and it’ll tell you the LaTeX code for it. That often gives a better clue for what to search for.

For example you could draw an upside-down triangle, and see that one of the ways to express this in LaTeX is nabla. Then you can look up the Wikipedia article on the Nabla symbol. (Of course, in this case you could easily have just searched “math upside down triangle symbol”, and the first result is a Math StackExchange thread answering this.)


I learned it by asking peers in grad school what stuff meant. And working through the math myself (it was a slog at first) and then writing it out in LaTeX. When one is forced to learn something because one needs to take courses and to graduate, the human brain somehow figures out a way.

A lot of it is convention, so you do need a social approach – i.e. asking others in your field. For me it was my peers, but these days there’s Math Stack Exchange, Google, and math forums. Also, the first few chapters of an intro Real Analysis text are usually a good primer on the most common math notation.

When I started grad school I didn’t know many math social norms, like the unstated one that vectors (say x) are usually in column form by convention unless otherwise stated (in undergrad calc and physics, vectors were usually in row form). I spent a lot of time being stymied by why matrix and vector sizes were wrong and why x’ A x worked. Or that the dot product was x’x (in undergrad it was x.x). It sounds like I lacked preparation, but the reality was no one told me these things in undergrad. (I should also note that I was not a math major; the engineering curriculum didn’t expose me much to advanced math notation. Math majors will probably have a different experience.)
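For anyone who trips over the same thing, here’s what the convention looks like in code (a NumPy sketch; the shapes are the point, the numbers are arbitrary):

    import numpy as np

    A = np.array([[2.0, 0.0],
                  [0.0, 3.0]])
    x = np.array([[1.0],
                  [2.0]])       # a column vector by convention: shape (2, 1)

    quad = x.T @ A @ x          # x' A x: (1,2) @ (2,2) @ (2,1) -> a 1x1 result
    dot = x.T @ x               # x'x: the dot product under this convention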


I think the problem is that there is no authoritative text, that I know of, and as ColinWright says, the same ideas can be notated differently by different fields or sometimes by different authors in the same field (though often they converge if they are in the same community).

Wikipedia has been helpful sometimes but otherwise I have found reading a lot of papers on the same topic has been useful. However, this is kind of an “organic” and slow way of learning notation common to a specific field.


The Greek alphabet would like to thank all the scholars for the centuries of overloading and offer a “tee hee hee” to all of the students tormented by attendant ambiguities.

Tough love, kids.


There is no single authoritative source for mathematical notation. That said, there are a lot of common conventions. You could do worse than this NIST document if it’s just a notation question:

https://dlmf.nist.gov/front/introduction

Of course, if the real problem is that you need to learn some mathematical constructs, that is a different problem. The good news is that there’s a lot of material online, the bad news is that not all of it is good… I often like Khan Academy when it covers the topic.

I wish you luck!


If math were a programming language, all mathematicians would be fired for terrible naming conventions and horrible misuse of syntax freedom.

Honestly, most math formulas can be turned into something that looks like C/C++/C#/Java/JavaScript/TypeScript code and become infinitely more readable and understandable.
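To illustrate with a formula everyone has seen, take the sample variance, s^2 = (1 / (n - 1)) Σ (x_i - mean)^2. A sketch of what I mean (Python here rather than the C-family, but the naming is the point):

    def sample_variance(samples):
        # s^2 = (1 / (n - 1)) * sum over i of (x_i - mean)^2
        n = len(samples)
        mean = sum(samples) / n
        return sum((value - mean) ** 2 for value in samples) / (n - 1)

    print(sample_variance([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))  # 4.571...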

Sadly, TypeScript is one of the languages that is attempting to move back to idiocy by having generics named a single letter. Bastards.


I think a good first resource would be the book and lecture notes of an introductory university course treating the specific domain you are interested in, because lots of things in notation are domain-specific. There are lots of good open university lectures out there; if you’re not sure where to start, MIT OpenCourseWare is a good first guess for accessing materials.

As a side note, I have an MSc in Physics with a good dollop of maths involved, and I am quite clueless when looking at a new domain, so it’s not as if a university degree in a non-related subject would be of any help…


Do you mean all the introductory mathematics books you tried fail to properly explain the notation?

Or that the notation differs from book to book?

(In my case, I learned the notation via French math textbooks, and in the first day of college/uni we literally went back to “There is a set of things called natural numbers, and we call this set N, and there is this one thing called 0, and there is a notion of successor, and if you keep taking the successor it’s called ‘+’, and…” etc.)
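If it helps to see what that construction buys you, here’s a rough Python sketch of the same idea (purely illustrative, not how the course presented it):

    # Zero and a successor operation are all you get; '+' falls out of them.
    class Nat:
        def __init__(self, pred=None):
            self.pred = pred        # None marks zero

    zero = Nat()

    def succ(n):
        return Nat(n)

    def add(a, b):
        # a + 0 = a;  a + succ(b) = succ(a + b)
        return a if b.pred is None else succ(add(a, b.pred))

    three = add(succ(zero), succ(succ(zero)))   # 1 + 2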

But then, the French, Bourbaki-style of teaching math is veeeeeeeery strict on notations.


Practice, just like you learned programming.
“The Context” gives you the meaning for the notation, sadly. You have to kind of know it to understand the notation properly.


You can also get sufficiently angry and just write out linear algebra books and whatnot in Agda / Coq / Lean if it pisses you off so much. (I’ve done a bunch of exercises in Coq.)


I like the approach they took in Structure and Interpretation of Classical Mechanics, where the whole book is done in Scheme:

    ;; D is the derivative operator; Gamma lifts the coordinate path q into
    ;; the local tuple (t, q(t), Dq(t)) that the Lagrangian consumes.
    (define ((Lagrange-equations Lagrangian) q)
      (- (D (compose ((partial 2) Lagrangian) (Gamma q)))
         (compose ((partial 1) Lagrangian) (Gamma q))))


Mathematics is a lingo, and notations are mostly convention. Luckily people generally follow the same conventions, so my best advice if you want to learn about a specific topic is to work through the introductory texts! If you want to learn calculus, find an introductory college text. Statistics? There are traditional textbooks like Introduction to Statistical Learning. The introductory texts generally do explain notation, which then becomes assumed knowledge for more advanced texts or, as you seem to be wanting to read, academic papers. If those texts are still too difficult, then maybe move down to a high-school text first.

Think about it this way. A scientist, wanting to communicate his ideas with fellow academics, is not going to spend more than half the paper on pedantry, explaining notation which everyone in their field would understand. Else what is the purpose of creating the notations? They might as well write their formulas and algorithms COBOL-style!

Ultimately mathematics, like most human-invented languages, is highly tribal and has no fixed rules. And I believe we are much richer for it! Mathematicians constantly invent new syntax to express new ideas. If there were some formal reference they had to keep on hand every time they needed to write an equation, that would hamper their speed of thought and creativity. How would one even invent something new if you needed to get the syntax approved first!

TL;DR: Treat math notation like any other human language. Find some introductory texts on the subject matter you are interested in to be “inducted” into the tribe.


Well, the real fun is deciphering a lower-case xi – ξ – when written on the blackboard (or whiteboard), especially compared to a lower-case zeta – ζ (fortunately way less commonly used).

As all the others already told you, you don’t learn by reading alone.


You might be better off picking an area and trying to work out the notation relating to that area, e.g. vectors / matrices / calculus etc. As Colin says below, there are often multiple equivalent ways of representing things across different fields and timeframes. I seem to remember the maths I studied in Elec Eng looking different from, but equivalent to, the way it was represented in other disciplines.


I sometimes think math notation is a conspiracy against the clever but lazy.
Being able to pronounce the Greek alphabet is a start, as you can use your ear and literary mind once you have that, but when you encounter <...>, as in an unpronounceable symbol, the meaningless abstraction becomes a black box and destroys information for you.

Smart people often don’t know the difference between an elegant abstraction that conveys a concept and a black box shorthand for signalling pre-shared knowledge to others. It’s the difference between compressing ideas into essential relationships, and using an exclusive code word.

This fellow does a brilliant job of explaining the origin of a constant by taking you along the path of discovery with him, whereas many “teachers” would start with a definition like “Feigenbaum means 4.669”, which is the least meaningful aspect to someone who doesn’t know why. https://www.veritasium.com/videos/2020/1/29/this-equation-wi…

It wasn’t until decades after school that it clicked for me that a lot of concepts in math aren’t numbers at all, but refer to relationships and relative proportions and the interactions of different types of things, which are in effect just shapes, but ones we can’t draw simply, and so we can only specify them using notations with numbers. I think most brains have some low level of natural synesthesia, and the way we approach math in high school has been by imposing a three-legged race on anyone who tries it instead.

Pi is a great example, as it’s a proportion in a relationship between a regular line you can imagine and the circle made from it. There isn’t much else important about it other than that it applies to everything, and it’s among the first irrational numbers we found. You can speculate that a line is just a stick some ancients found on the ground, and so its unit is “1 stick” long, which makes it an integer, but when you rotate the stick around one end, the circular path it traces has a constant proportion to its length, because it’s the stick and there is nothing else acting on it. Amazingly, that proportion pops out of the single integer dimension and yields a whole new type of unique number that is no longer an integer. The least interesting or meaningful thing about pi is that it is 3.141 etc. High school math teaching conflates computation and reasoning, and invents gumption traps by going depth-first into ideas that make much more sense in their breadth-first contexts and relationships to other things, which also seems like a conspiracy to keep people ignorant.
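You can even watch that proportion settle without ever mentioning 3.141: here’s Archimedes’ polygon-doubling trick as a short Python sketch (my example, not from the video above):

    # Inscribe polygons in the circle a unit stick traces, doubling the number
    # of sides; perimeter / diameter settles on the same proportion every time.
    side, sides = 1.0, 6            # a hexagon's side equals the radius
    for _ in range(10):
        side = (2 - (4 - side * side) ** 0.5) ** 0.5
        sides *= 2
    print(sides * side / 2)         # -> 3.14159..., whatever the stick length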

Just yesterday I floated the idea of a book club salon for “Content, Methods, and Meaning”, where, starting from any level, each session 2-3 participants pick and learn the same chapter separately and do their best to give a 15-minute explanation of it to the rest of the group. It’s on the first year syllabus of a few universities, and it’s a breadth-first approach to a lot of the important foundational ideas.

The intent is that I think we only know anything as well as we can teach it, so the challenge is to learn by teaching, and you have to teach it to someone smart but without the background. Long comment, but keep at it; dumber people than you have got further with mere persistence.


Khan academy and Schaum’s Outlines are your friends.

Then some textbooks with exercises (e.g. Axler on lin alg).

The notation is usually an expression of a mental model, so just approaching via notation may cause some degree of confusion.


Related question: does anyone know of any websites/books that put mathematical notation and the computer code representing the same formula side by side? I find that seeing it in code helps me grasp it very quickly.


I’ve run into this problem as well and it’s put me off learning TLA+ and information theory, which bums me out. I assume there’s a Khan Academy class that would help but it’s hard to find.


Could it be that you are trying to read things that are a bit too advanced? Maybe look for some first year university lecture notes? In general, if you cannot follow something, try to find some other materials on the same subject, preferably more basic ones.


I hear this question asked quite often, particularly on HN. I think the question is quite backwards. There is little value alone in learning “math notation”, even ignoring what many people point out (there is no one “math notation”). “Math notation”, at best, translates into mathematical concepts. Words, if you will, but words with very specific meaning. Understanding those concepts is the crux of the matter! That is what takes effort – and the effort needed is that of learning mathematics. After that, one may still struggle with bad (or “original”, or “different”, or “overloaded”, or “idiotic”, or…) notation, of course, but there is little use in learning said notation(s) on their own.

I’ve been repeatedly called a gatekeeper for this stance here on HN, but really: notation is a red herring. To understand math written in “math notation”, you first have to understand the math at hand. After that, notation is less of an issue (even though it may still be present). Of course the same applies to other fields, but I suspect that the question crops up more often regarding mathematics because it has a level of precision not seen in any other field. Therefore a lot more precision tends to hide behind each symbol than the casual observer may be aware of.