Programming
- Designing Go Libraries
- “The Best Programming Advice I Ever Got” with Rob Pike
Ken [Thompson] taught me that thinking before debugging is extremely important. If you dive into the bug, you tend to fix the local issue in the code, but if you think about the bug first, how the bug came to be, you often find and correct a higher-level problem in the code that will improve the design and prevent further bugs.
I recognize this is largely a matter of style. Some people insist on line-by-line tool-driven debugging for everything. But I now believe that thinking—without looking at the code—is the best debugging tool of all, because it leads to better software.
- I sped up serde_json strings by 20%
- Notes on structured concurrency, or: Go statement considered harmful
- ReDoS “vulnerabilities” and misaligned incentives
- You might want to use panics for error handling
- The post is (probably intentionally) misleadingly named: the author agrees that `Result`'s interface is preferable to exceptions, but posits that handling the error case by unwinding, instead of manually propagating it up the call stack, is a better implementation strategy.
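A minimal sketch of the idea as I read it (my own toy code, not the post's): deep calls signal failure by unwinding with a typed payload, and the public entry point catches the unwind and converts it back into a `Result`. This only works under the default `panic = "unwind"` profile, and the default panic hook will still print to stderr unless you replace it.

```rust
use std::panic;

#[derive(Debug)]
struct ParseError(String);

// Deep inside the call stack: instead of returning Result and threading `?`
// through every intermediate caller, signal failure by unwinding.
fn parse_digit(c: char) -> u32 {
    c.to_digit(10)
        .unwrap_or_else(|| panic::panic_any(ParseError(format!("not a digit: {c:?}"))))
}

fn sum_digits(s: &str) -> u32 {
    s.chars().map(parse_digit).sum()
}

// At the API boundary, convert the unwind back into a plain `Result`, so
// callers still get the interface the post agrees is preferable.
fn sum_digits_checked(s: &str) -> Result<u32, ParseError> {
    panic::catch_unwind(panic::AssertUnwindSafe(|| sum_digits(s))).map_err(|payload| {
        match payload.downcast::<ParseError>() {
            Ok(err) => *err,
            // Any other panic is a genuine bug: keep unwinding.
            Err(other) => panic::resume_unwind(other),
        }
    })
}

fn main() {
    println!("{:?}", sum_digits_checked("123")); // Ok(6)
    println!("{:?}", sum_digits_checked("1x3")); // Err(ParseError("not a digit: 'x'"))
}
```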
- Algorithms for Modern Hardware
- Only skimmed the instruction parallelism and memory sections; on my bucket list to come back and follow the code more closely some day.
- CPU Cache
- I’ve been meaning to read about CPU caches for some time; the post below on false sharing popping up on my HN feed finally pushed me to do it. My takeaways:
- Modern CPUs comprise multiple cores, which can run concurrently.
- Accessing RAM is slow, so there are various hardware caches that are physically closer to cores and store recently accessed data.
- Caches are arranged in levels: L1 is fastest to access, is usually per-core, and has the smallest size. L2 cache is larger but slower to access, and so on. Since some cache levels are per-core, there needs to be some synchronization mechanism (ensuring cache coherence).
- Within a cache, data is organized along cache lines (chunks of contiguous memory); when reading an address, the whole cache line is loaded. This line-level granularity is what makes false sharing possible (a sketch follows the link below).
- This is your brain on false sharing
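A toy demonstration of the effect (my own sketch, not code from either post; it assumes 64-byte cache lines, which is typical on x86 but not universal): two threads increment two independent counters, once with the counters on the same cache line and once with each padded onto its own line. The second run is usually noticeably faster, though the gap depends on the hardware.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::thread;
use std::time::Instant;

// Two counters that will very likely end up on the same cache line.
struct SameLine {
    a: AtomicU64,
    b: AtomicU64,
}

// Align each counter to a 64-byte boundary so the two live on separate lines.
// (64 bytes is an assumption about the line size, not something Rust guarantees.)
#[repr(align(64))]
struct Padded(AtomicU64);

struct SeparateLines {
    a: Padded,
    b: Padded,
}

const ITERS: u64 = 50_000_000;

fn bench(name: &str, a: &AtomicU64, b: &AtomicU64) {
    let start = Instant::now();
    thread::scope(|s| {
        // Each thread touches only its own counter; there is no logical sharing.
        s.spawn(|| {
            for _ in 0..ITERS {
                a.fetch_add(1, Ordering::Relaxed);
            }
        });
        s.spawn(|| {
            for _ in 0..ITERS {
                b.fetch_add(1, Ordering::Relaxed);
            }
        });
    });
    println!("{name}: {:?}", start.elapsed());
}

fn main() {
    let same = SameLine { a: AtomicU64::new(0), b: AtomicU64::new(0) };
    let separate = SeparateLines {
        a: Padded(AtomicU64::new(0)),
        b: Padded(AtomicU64::new(0)),
    };

    // Writes to `same.a` and `same.b` ping-pong ownership of one cache line
    // between the two cores, even though the data is logically independent.
    bench("same cache line", &same.a, &same.b);
    bench("separate lines ", &separate.a.0, &separate.b.0);
}
```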
- Rust Deep Dive: Borked Vtables and Barking Cats
- Unsafely accessing and manipulating the vtable of dyn trait objects in Rust (a rough sketch of the trick follows below). For fun, I then did the same thing in Go.
- Go Data Structures: Interfaces
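A rough, heavily hedged sketch of the Rust trick (the toy names are mine, and the exact code differs from the post's): a `&dyn Trait` is a fat pointer carrying a data pointer and a vtable pointer, and transmuting lets you pair one type's data with another type's vtable. Fat-pointer layout is not guaranteed, so this is undefined behavior by the book and strictly for illustration.

```rust
use std::mem::transmute;

trait Animal {
    fn speak(&self);
}

struct Cat;
struct Dog;

impl Animal for Cat {
    fn speak(&self) {
        println!("meow");
    }
}

impl Animal for Dog {
    fn speak(&self) {
        println!("woof");
    }
}

fn main() {
    let cat = Cat;
    let dog = Dog;
    let cat_obj: &dyn Animal = &cat;
    let dog_obj: &dyn Animal = &dog;

    unsafe {
        // Reinterpret each fat pointer as a (data pointer, vtable pointer) pair.
        // This relies on an unspecified layout that merely happens to hold today.
        let [cat_data, _cat_vtable]: [usize; 2] = transmute(cat_obj);
        let [_dog_data, dog_vtable]: [usize; 2] = transmute(dog_obj);

        // Graft the Dog vtable onto the Cat data: a cat that barks.
        let barking_cat: &dyn Animal = transmute([cat_data, dog_vtable]);
        barking_cat.speak(); // prints "woof"
    }
}
```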
- fast-check: how it works
- A deep dive into the internals of property-based testing frameworks; I really enjoyed the discussion of shrinking (which still feels like pure magic, even after learning how it’s implemented!)
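To make the shrinking idea concrete, here is a toy, hand-rolled version (nothing to do with fast-check's actual internals): once the random phase has found a failing input, greedily try "smaller" variants that still fail, and keep walking until no shrink candidate fails anymore.

```rust
// Deliberately buggy property: "every element is below 100".
fn property_holds(xs: &[u32]) -> bool {
    xs.iter().all(|&x| x < 100)
}

// Shrink candidates for a list: drop one element, or halve one element.
fn shrink_candidates(xs: &[u32]) -> Vec<Vec<u32>> {
    let mut out = Vec::new();
    for i in 0..xs.len() {
        let mut dropped = xs.to_vec();
        dropped.remove(i);
        out.push(dropped);

        let mut halved = xs.to_vec();
        halved[i] /= 2;
        if halved != xs {
            out.push(halved);
        }
    }
    out
}

// Greedily walk toward a locally minimal counterexample.
fn shrink(mut failing: Vec<u32>) -> Vec<u32> {
    loop {
        match shrink_candidates(&failing)
            .into_iter()
            .find(|c| !property_holds(c))
        {
            Some(smaller) => failing = smaller,
            None => return failing,
        }
    }
}

fn main() {
    // Pretend the random-generation phase produced this messy counterexample.
    let found = vec![3, 250, 17, 42, 8];
    // Shrinking reduces it to a single offending element: [125].
    println!("shrunk: {:?}", shrink(found));
}
```

Real frameworks use far richer candidate sets and interleave shrinking with generation; this only shows the basic greedy loop.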
Math and Science
- Quanta: Mathematicians Prove Hawking Wrong About the Most Extreme Black Holes
- A wonderful coincidence or an expected connection: why $\pi^2 \approx g$
- I’d always written off $\pi^2 \approx 9.87$ being close to $g = \pu{9.81 m s^-2}$ as a fluke of nature, but it is not: the meter was originally defined as the length of a pendulum with period $\pu{2 s}$, and so the pendulum formula $$ T = 2\pi \sqrt{\frac{L}{g}} $$ means that, under that definition, $\pi^2$ was exactly equal to $g$ expressed in $\pu{m s^-2}$.
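Plugging the defining values $T = \pu{2 s}$ and $L = \pu{1 m}$ into the formula makes the arithmetic explicit: $$ g = \frac{4\pi^2 L}{T^2} = \frac{4\pi^2 \cdot \pu{1 m}}{(\pu{2 s})^2} = \pi^2\ \pu{m s^-2} \approx \pu{9.87 m s^-2}. $$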
- Quanta: Grad Students Find Inevitable Patterns in Big Sets of Numbers