The Great Courses, organized

A really useful resource for being introduced to new subjects is the Great Courses series from The Teaching Company. If you haven’t heard of them, they’re first- or second-year university-level lecture courses on hundreds of different subjects, with each lecture typically lasting 30 minutes and each course usually comprising a multiple of 12 lectures. A number of sample lectures are available on YouTube, and Audible has a landing page dedicated to the Great Courses.

One difficulty is that with dozens or hundreds of courses it can be hard to sort through the heap if you don’t already know exactly what you’re looking for. I ran into this when I was trying to see whether they had an Introduction to Sociology or an Understanding Sociology series (I’m in the boat Robin Hanson describes here, where as an outsider it’s not clear to me what the difference is between economics and sociology) and came up with nothing. Around the same time I became interested in how the various academic disciplines relate to each other, and Wikipedia was very helpful there.

So I put two and two together and began mapping the various Great Courses series as best I could onto the academic disciplines. It could never be done perfectly, since the disciplines have fuzzy boundaries, compete over the same subject matter (is anatomy more a part of biology or of medicine?), and the courses themselves don’t bother to follow clear boundaries. I found out as I went (working in stages) that there were many more courses than I had initially supposed, and I kept discovering areas with dozens of courses that I had hitherto overlooked.

I did my best, and the result is now a top-level page on this humble blog, found here. Every time I look at it I end up making small changes, and I’ll add to it as The Teaching Company produces new series, so it will be a living document. I hope others find it useful.

Posted in General

Book Review: Approaching Infinity

Approaching Infinity by Michael Huemer

“Infinity” is a concept that, if you’re not careful, can really bite you in the ass. In his latest book, Approaching Infinity, the philosopher Michael Huemer attempts to sharpen our idea of infinity to address two areas of concern.

One is the nature of infinite regresses. A famous example is the Regress of Causes: every event needs to be caused by a prior event, but that event needs a cause as well, and so on all the way down the line. The Thomists’ attempt to address this regress is to stipulate an uncaused “unmoved mover” that starts the whole process going. Note that this is a regress people thought they needed to solve; Huemer accordingly calls it a “vicious regress”. Other regresses are classified as “benign”, though, like the one that starts from the postulate that some proposition P is true. Then it’s true that P is true. It’s also true that it is true that P is true. And so on. Nobody really complains about that regress.

The other area is a number of famous “paradoxes of the infinite”. Examples include Zeno’s paradox, Thomson’s lamp, Galileo’s paradox, and Hilbert’s hotel (there are 17 paradoxes discussed in all). In each case, a paradox appears as soon as we assume an infinity is involved. Take Galileo’s paradox (please!): which are greater in extent, the natural numbers (1, 2, 3, 4, 5, …) or the perfect squares (1, 4, 9, 16, 25, …)? At first it seems like there should be more natural numbers, since any finite list of numbers from 1 to n will contain both perfect squares and non-squares like 7 or 18. But you can map every natural number to a square (just square the number!), so (1 ↔ 1), (2 ↔ 4), (3 ↔ 9), (4 ↔ 16), and so on. Since every spot on that ladder is filled, it looks as though there are just as many perfect squares as natural numbers. A paradox!

Huemer goes over two classical accounts of the infinite, Aristotle’s and Georg Cantor’s, and finds both wanting in various ways. There are multiple chapters on the philosophy of numbers, sets, and geometrical points that I think fairly present the usual Cantorian orthodoxy before poking holes in it. Even if you ultimately disagree with Huemer’s account, I think it’s a very readable and enjoyable introduction to issues in the philosophy of mathematics.

We then come to Huemer’s own account: extrinsic infinities are allowable, or at least possible, whereas intrinsic infinities are not. An extrinsic quantity is one that changes when you change the “size” of the object in question: things like size itself, volume, mass, and energy content. If you double the size of a block of wood, you double its mass. An intrinsic quantity, by contrast, is scale-invariant: things like temperature, speed, and color. If you bring one cup of boiling water together with another cup of boiling water, the temperature of the water does not change.

With this theory of the infinite in hand, Huemer is able to (mostly) resolve the 17 paradoxes and give an account of vicious and benign regresses. In the case of the Regress of Causes, an infinitude of causes going back in time is an extrinsic infinity, and so in principle non-problematic. The Thomist proposal of an “unmoved mover” who is infinitely powerful fails on this account, though, since such an entity would involve infinite intrinsic magnitudes.

If you’re interested in understanding infinite quantities, which you’ll have come across if you’ve done any work in the STEM fields, I wholeheartedly recommend Approaching Infinity.

Posted in Logic, Mathematics, Reviews

Book Review: The Theory That Would Not Die

The Theory That Would Not Die: How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy by Sharon Bertsch McGrayne

If you have a passing familiarity with statistics, you’ve probably come across the centuries-old debate between frequentists and Bayesians. Most users of statistics and probability today are probably all too happy to avoid the debate and “just use what works” (though that doesn’t prevent science from being really hard). The striking thing I learned from the book is just how heated and vicious the debate has gotten in the past (and present), so it’s no wonder that even if there is an Objectively True Outcome, people would want to avoid the debate just to be able to get some work done.
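For readers who have never seen it stated, the rule at the center of all this controversy fits on one line:

\displaystyle P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)}

where H is a hypothesis and D is the observed data. Roughly speaking, the fight is over the prior P(H): Bayesians treat it as a legitimate (indeed mandatory) input, while frequentists object that it smuggles subjectivity into the analysis.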

I won’t give a lengthy synopsis of the book, since Luke Muehlhauser already did a very thorough job some time ago. One of the big takeaways is that in a fairer world it would be called Laplacian statistics, with Thomas Bayes credited with having an early inkling. McGrayne suggests that English-speaking scientists grew up not knowing about Laplace, hence the preference for the Englishman Bayes. If that was ever true, in my experience it’s pretty outdated, since engineers and physicists (at the very least) learn about the Laplacian, Laplace’s equation, and the Laplace transform.

Two things I wanted from the book but didn’t get were the origin story and effects of Cox’s theorem, and how E. T. Jaynes fits into modern Bayesian thought, particularly in regard to his acclaimed posthumous book Probability Theory: The Logic of Science. There’s no mention of Cox’s theorem at all, and there are only two mentions of Jaynes, basically as a voice in the wilderness.

However, that was just my wishlist. My main complaint is that neophyte readers won’t come away with a deep understanding of what was actually being debated. I understand that it’s hard to explain statistical techniques without becoming a math-ridden textbook, but many of the points of contention just sound like arcana if you don’t have prior experience with statistics. If you’re somewhat interested in statistics and want to know what all the controversy is about, I’m not sure this book will clear things up for you. It’s mostly light reading of historical anecdotes.


Posted in Mathematics, Reviews, Science (general)

From a contradiction you can prove anything

Premise:
0. P ∧ ¬P
Theorem:
1. P (simplification from 0)
2. ¬P (simplification from 0)
3. P ∨ M (addition from 1)
4. M (disjunctive syllogism from 2 and 3) Q.E.D.

Or in words: P and not-P are both true. Therefore, P is true. Also, not-P is true. Since P is true, “P or some other statement M” is true. But since not-P is true, “P or M” must collapse into just M.

But we just made M up. It can be whatever we want. Let’s substitute in some actual sentences to see how this works:

P = It is raining
¬P = It is not raining
M = Earth is made out of cottage cheese

It is both raining and not raining. Therefore, it is raining. Also, it is not raining. Since it is raining, we can say it is true that “it is raining or Earth is made out of cottage cheese.” But it’s not raining, so for that disjunction to remain true, Earth must be made out of cottage cheese.

The logic is valid (and fairly straightforward), but the conclusion is categorically wrong. Note that this relies on the logician’s inclusive “or” as opposed to the exclusive “or”. The latter is often what we mean in everyday language, but the former is what “or” means in a deductive argument.
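If you’d like to see the derivation checked mechanically, here’s a minimal sketch of the same proof in Lean 4 (my own illustration of the standard “principle of explosion”):

```lean
-- From P ∧ ¬P, any proposition M follows. `Or.inl` plays the role of
-- addition, and the case split plus `absurd` plays disjunctive syllogism.
example (P M : Prop) (h : P ∧ ¬P) : M := by
  have hP  : P  := h.left            -- 1. simplification
  have hnP : ¬P := h.right           -- 2. simplification
  have hPM : P ∨ M := Or.inl hP      -- 3. addition
  cases hPM with
  | inl hP' => exact absurd hP' hnP  -- the P branch contradicts ¬P
  | inr hM  => exact hM              -- 4. so only M remains
```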

So if you’re ever examining your own beliefs and discover a contradiction, be warned!

Posted in Logic, Rationality

Huffman codes, language, and mythical creatures

Let’s say you’re a telegraph operator, and you can send bits of 0 or 1 down the line one at a time. Each bit has a cost associated with it; the longer the message the more you have to pay. To simplify things, let’s say the alphabet only consists of A, B, C, and D and that they occur with frequencies:

A 50%
B 20%
C 15%
D 15%

While these numbers are of course made up, it is true that different letters of the alphabet occur with different frequencies.

How then should we encode our messages? A naive method would be to use two bits for each letter, since we have four letters total and 2² = 4. Then we would have perhaps

A 00
B 01
C 10
D 11

But remember, it costs us a little to send each bit. Can we do better?

We can, using what’s called Huffman coding. The guiding idea is to give the most common letter (in this case A) the shortest code while keeping the whole code “prefix-free”: no code word is the beginning of any other, so we always know unambiguously when a letter ends. We’ll give A the code 0, so if we ever get a message that starts with 0, we immediately know it’s an A and can start afresh figuring out the next letter (in our previous naive encoding, if we received a 0 we wouldn’t know whether it was an A or a B before getting the next bit).

I won’t go into the algorithm for generating Huffman codes (see the Wikipedia page or an algorithms textbook for the full details; it’s not complicated, and there’s a small code sketch after the table below) and will just give the results:

A 0
B 10
C 110
D 111
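For the curious, here’s a minimal sketch in Python of the standard heap-based construction: repeatedly merge the two least frequent subtrees, prepending a distinguishing bit to every code inside each. (Ties can be broken differently, so the exact bit patterns may differ from the table above, but the expected message length comes out optimal either way.)

```python
import heapq

def huffman_codes(freqs):
    """Return a {symbol: code} dict for a {symbol: frequency} dict."""
    # Each heap entry is (weight, tiebreaker, {symbol: partial_code});
    # the unique integer tiebreaker keeps dicts from ever being compared.
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, codes1 = heapq.heappop(heap)  # lightest subtree
        w2, _, codes2 = heapq.heappop(heap)  # second-lightest subtree
        # Merge the two subtrees, prepending a bit that tells them apart.
        merged = {s: "0" + c for s, c in codes1.items()}
        merged.update({s: "1" + c for s, c in codes2.items()})
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

freqs = {"A": 0.50, "B": 0.20, "C": 0.15, "D": 0.15}
codes = huffman_codes(freqs)
print(codes)  # e.g. {'A': '0', 'B': '10', 'C': '110', 'D': '111'}
print(sum(f * len(codes[s]) for s, f in freqs.items()))  # 1.8 (up to float rounding)
```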

On the face of it this doesn’t look promising. Whereas before we only ever had to send two bits per letter, now we sometimes have to send three! But remember that the longer codes belong to the least frequent letters in our alphabet. Can we get an actual measure of how long a “typical” message will be? The answer is yes, by multiplying the frequency of each letter by its length in bits and summing:

\displaystyle 0.5 \times 1 \text{ bit} + 0.2 \times 2\text{ bits} + 2(0.15 \times 3\text{ bits}) = 1.8\text{ bits}

which is better than our naive encoding’s 2 bits per letter. Try decoding the following message (solution at the end of the post):

010101100111

The general philosophy behind Huffman coding is to make the common short and the uncommon long. If we were designing a language, we would want the most common words to be the shortest, and to some extent that’s how English operates! The most common words are almost all one syllable long. Just imagine if instead of “the” we used “Brobdingnagian” and instead of “a” we used “floccinaucinihilipilification.” It would take forever to say anything at all.

All this is to bring me to the observation that led me to write today’s post: why are the names of some mythical creatures so high up in English’s Huffman coding? Elf, dwarf, troll, ghost, wight… or if we allow ourselves two syllables dragon, ogre, werewolf, vampire… I think it says something colorful about the English-speaking peoples that certain mythical creatures are so common as to warrant short names!

And about other peoples, of course. The inspiration for all of this was seeing that the Japanese word for dragon is “ryuu” and thinking, “That’s an awfully short word for a made-up creature.”

Decoding solution: ABBCAD
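And if you’d rather check answers like that mechanically, here’s a minimal decoder sketch reusing the code table from above; the greedy matching works precisely because the code is prefix-free:

```python
def decode(bits, codes):
    """Decode a bit string using a prefix-free code table."""
    inverse = {code: sym for sym, code in codes.items()}
    out, buf = [], ""
    for bit in bits:
        buf += bit
        # Since no code word is a prefix of another, the first time the
        # buffer matches a code word we can emit that letter and reset.
        if buf in inverse:
            out.append(inverse[buf])
            buf = ""
    return "".join(out)

codes = {"A": "0", "B": "10", "C": "110", "D": "111"}
print(decode("010101100111", codes))  # ABBCAD
```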

Posted in Mathematics, Science (general)

Book Review: The Undercover Economist

The Undercover Economist by Tim Harford

I no longer enjoy popular physics books. Usually I quit partway through, thinking “I’d rather just be rereading my textbooks,” since I’m at the level where I’m aware of how much deeper and more precise the topic at hand really is (and how many caveats there are), in ways that just can’t be covered in a popular-level introduction. A friend of mine recently expressed a similar point:

I was on a 60-person camping trip a while ago and some random dudes started talking about infinity and up-arrow notation and Graham’s number. I had to get up from the campfire and leave.

Now when it comes to other fields where I don’t have the same background, I usually greatly enjoy popular-level introductions. For me, the marginal return of insights gained per word read is high in fields like biology, philosophy, and linguistics. But as I gain some semblance of mastery in any particular discipline, I’d more and more rather just be reading textbooks or even papers in the literature. If you will, one of my goals in life is to increase the number of things I have to walk away from the campfire over. Economics is starting to be that way for me.

I came across a quote from EconLog that I think characterizes this point very well, from the commenter “enronal”:

…I think economists are too apologetic about their tools, cost functions nothwithstanding [sic]. I started Econ 101 believing the price system and the economy were a fascist, corporate plot. Cost functions, along with supply and demand functions and their relatives, showed me I’d been seeing as through a glass darkly. Those relentlessly logical pictures taught me more in a few pages than all the trendy, lefty sociological tomes that till then had me believing I was a sharp intellectual. The hyper-mathematical graduate economics for math’s sake may be marginally relevant, but the basic tools of economics are nothing to apologize for.

The Undercover Economist is, I think, quite good as a popular econ introduction, and a number of professional economists have expressed the same opinion. In particular I really enjoyed the first chapter’s discussion of relative scarcity and marginal land, ideas David Ricardo developed in 1817. It alone is worth the price of admission, and the principles are quickly applied to coffee shops and the oil market.

Subsequent chapters deal with a variety of interesting and fundamental ideas in economics, including price discrimination and opportunity cost, how prices in a competitive market act as signals of information that lead to efficiency, the existence of positive and negative externalities and what if anything to do about them, and markets with incomplete information. There are chapters devoted to auction theory, why bad institutions trap countries like Cameroon in a perpetual state of poverty, why economists are so pro-free trade, and on how China became rapidly richer when it abandoned doctrinaire Maoism and began taking the leash off of its markets.

All good stuff, though I felt the lack of depth that a shortish popular intro forces. For example, there’s much discussion of how to structure medical incentive schemes to deal with the high cost of medical care, but little attention to why medical care is so expensive to begin with and what, if anything, “we” could do about it. I don’t blame these books for not covering everything in suitable detail; I’m just saying that for me it’s becoming more and more the case that I should just read the actual textbooks.

There is, however, room for what we might think of as a “popular-level textbook”: something that follows the same logic as an actual textbook but is lighter on the mathematical details. For econ I think David Friedman’s Hidden Order: The Economics of Everyday Life fits that bill, since it explores analytical tools like supply and demand diagrams much more thoroughly than The Undercover Economist does (I’m currently in the midst of reading Hidden Order). When friends of mine who are complete econ neophytes ask for book recommendations, my current advice is to start with the more narrative Undercover Economist and then chase it with the logic of Hidden Order.

Posted in Economics, Reviews

Year in review 2015

This year’s civilization metrics remain the same, with two small changes: I’ve adjusted the ATLAS integrated luminosity values from “recorded” to “delivered” (which is again distinct from “good for physics”), since CERN makes it easier to get at those numbers, and the EIA has changed the way it reports solar production, so I’ve taken that into account. The table is getting so wide that next year I’ll have to start dropping the early years, or maybe I’ll keep the first year and delete subsequent years so “how we started” is always visible. I have 366 of your Earth days to decide.

                                           2011     2012     2013     2014     2015
Supercomputer (PFLOPS)                    10.51    17.59    33.86    33.86    33.86
Known exoplanets                            716      854     1055     1855     2041
ATLAS integ. lumin. (fb⁻¹)                 5.46    28.54    28.54    28.54    32.89
GenBank base pairs (billions)             135.1    148.4    156.2    184.9    203.9
US peak solar production (thousand MWh)   229.2    527.1    987.5  2,857.8  4,089.6
Kardashev score                          0.7240   0.7248   0.7257   0.7264       NA
World population (billions)                 7.0      7.1      7.1      7.2      7.3

The most popular post I’ve done by far was on SpaceX’s launch rate. The good news on that front is the recent successful landing of the first stage of their Falcon 9 rocket.

I published a paper this year, and I currently have two more in the pipe, five by five. The published one is on a certain method in quantum mechanics for 1D systems, and the two in the pipeline treat 2D and 3D.

Last year I mentioned that I was juggling an iPod touch and my HTC One phone. That stupidity ceased when I bought an iPhone 6 Plus, and I got still more utility out of podcasts and audiobooks by buying some wireless headphones.

Posted in General