Blog posts on EPH — recent content from https://ericphanson.com/blog/

**[Why you might avoid `deepcopy` in Julia](https://ericphanson.com/blog/2024/why-you-might-avoid-deepcopy-in-julia/)** (Sun, 21 Jul 2024)

Why use `deepcopy`? In Julia, `copy` is a function which creates a shallow copy. For example:
```julia
julia> a = [1]  # vector with one element, namely 1
1-element Vector{Int64}:
 1

julia> b = [a]  # vector with one element, `a`
1-element Vector{Vector{Int64}}:
 [1]

julia> b2 = copy(b)  # new vector, also with one element which is `a`
1-element Vector{Vector{Int64}}:
 [1]

julia> push!(a, 2)  # mutate `a` so it contains 1 and 2
2-element Vector{Int64}:
 1
 2

julia> b  # since `b` contains `a`, we can see its (nested) contents have changed
1-element Vector{Vector{Int64}}:
 [1, 2]

julia> b2  # same for `b2`!
```

**[Learning algorithmic techniques: dynamic programming](https://ericphanson.com/blog/2019/learning-algorithmic-techniques-dynamic-programming/)** (Sun, 10 Nov 2019)

**Three nice techniques.** In the past months, I've found myself really appreciating some nice techniques for constructing algorithms. These are probably quite familiar to those with a computer science background, but are new to me. There are three in particular that I have in mind; I'll just highlight the first two here, and then discuss the third in detail.
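As a minimal generic illustration of the dynamic-programming idea itself (a sketch of my own, not code from the post): memoization caches the answers to overlapping subproblems so that each is solved only once.

```python
from functools import lru_cache

# Naive recursion recomputes the same subproblems exponentially often;
# caching each result makes the recursion linear in n.
@lru_cache(maxsize=None)
def fib(n: int) -> int:
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

result = fib(90)  # fast; the uncached version would effectively never finish
```

The same cache-the-subproblem pattern underlies table-based ("bottom-up") dynamic programming as well.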
The first such technique I learned about was solving the traveling salesman problem via mixed-integer programming, using lazy constraints.

**[When do we lose correlations under Markovian evolution?](https://ericphanson.com/blog/2019/when-do-we-lose-correlations-under-markovian-evolution/)** (Fri, 22 Mar 2019)

This post started as a talk for the CCIMI retreat, the slides of which are available here. But I added a lot of words to turn this into a blog post, so I encourage you to stay here instead!
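The post's animation depicts a discrete-time Ehrenfest model: particles in a divided box, with one particle jumping sides at each step. A rough simulation sketch (my own illustration with made-up names, not the post's code or animation):

```python
import random

def ehrenfest(n_particles: int = 20, n_steps: int = 1000, seed: int = 0) -> list[int]:
    """Track how many particles sit in the left half of the box.

    At each time step, one particle (chosen uniformly at random) jumps
    to the other side, so the left-count changes by exactly +/-1.
    """
    rng = random.Random(seed)
    left = n_particles  # start with all particles on the left
    history = [left]
    for _ in range(n_steps):
        # The jumping particle is on the left with probability left/n_particles.
        if rng.random() < left / n_particles:
            left -= 1
        else:
            left += 1
        history.append(left)
    return history

counts = ehrenfest()
# The chain drifts toward, and then fluctuates around, the balanced
# state of roughly n_particles/2 per side.
```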
Consider the following divided box with \(N=20\) particles (a discrete-time Ehrenfest model). At each time step, exactly one particle jumps from one side of the box to the other.

**[Another example of using type domain information in Julia](https://ericphanson.com/blog/2019/another-example-of-using-type-domain-information-in-julia/)** (Sat, 09 Mar 2019)

In a previous post, I discussed using type domain information to speed up generation of random density matrices with small dimension in Julia. There, we gave the Julia compiler knowledge of the dimension of the matrices at the time it generates code, instead of passing that dimension as a runtime variable, and saw significant runtime speedups as a consequence. This time, let's push this further by giving the compiler a whole vector of numbers instead of a single integer.

**[Carathéodory's theorem and the Holevo capacity](https://ericphanson.com/blog/2018/caratheodorys-theorem-and-the-holevo-capacity/)** (Tue, 18 Dec 2018)

Let \(\Lambda: \mathcal{B}(\mathcal{H})\to \mathcal{B}(\mathcal{H})\) be a quantum channel (a completely positive and trace-preserving map)
on a finite-dimensional Hilbert space \(\mathcal{H}\), and let \(d:= \dim \mathcal{H}\). The Holevo capacity of \(\Lambda\) is defined as \[ \chi(\Lambda) := \sup_{\{p_k, \rho_k\}} \left[ S\Big(\sum_k p_k \Lambda(\rho_k)\Big) - \sum_k p_k S( \Lambda(\rho_k)) \right] \] where the supremum is over ensembles of quantum states \(\rho_k\) and probability distributions \(\{p_k\}\). The HSW theorem (Holevo 1998; Schumacher and Westmoreland 1997) states that the product-state classical capacity of a quantum channel \(\Lambda\) is given by its Holevo capacity, thus providing the quantity with an operational meaning.

**[Fast small random density matrices](https://ericphanson.com/blog/2018/fast-small-random-density-matrices/)** (Sun, 23 Sep 2018)

Update (19 January 2019): I looked at this code again and realized I had made a few basic mistakes, such as constructing a statically sized normally-distributed random matrix via `SMatrix{d,d,Float64,d*d}(randn(Float64, d, d))`, which constructs a random matrix (allocating memory dynamically) and then converts it to a static `SMatrix`, instead of `randn(SMatrix{d,d,Float64})`, which directly constructs an `SMatrix` without dynamic memory allocation. I also hadn't written the function `randsimplexpt` in a very good way.

**[Website rewrite](https://ericphanson.com/blog/2018/website-rewrite/)** (Sat, 25 Aug 2018)

I decided to redo my website using Hugo (before, I was using Jekyll). Both are static site generators: you write in Markdown, a simple clean formatting language, and it generates HTML webpages following a consistent format. It's great because the generation only has to occur once after the Markdown is written, and nothing active has to happen for each user of the website (unlike a so-called dynamic website).
This reduces costs, increases reliability, and improves security (with a static site, the user needs very little access to the server hosting the webpages).

**[Locally maximizing the Rényi entropies](https://ericphanson.com/blog/2018/locally-maximizing-the-renyi-entropies/)** (Sat, 25 Aug 2018)

As I was rewriting my website, I found some visualizations I had stored on my old website to show a collaborator, and I figured it was worth writing a little to have a more proper place to put them; hence this post 😊.
Probability distributions on three letters consist of just three non-negative numbers which add up to 1, which we can see as a vector in \(\mathbb{R}^3\). The set of all such distributions forms a simplex, which looks like a 2D triangle lying in \(\mathbb{R}^3\).

**[How to make an index in LaTeX](https://ericphanson.com/blog/2018/how-to-make-an-index-in-latex/)** (Wed, 20 Jun 2018)

I'm sure there are many ways to make an index in LaTeX, but I was asked this question recently and thought I'd put my response here, which is based on what I did in my combinatorics notes.
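As background, the basic `imakeidx` workflow looks like this (a generic minimal sketch, not taken from the notes): terms are marked with `\index`, and the collected index is typeset at the end.

```latex
\documentclass{article}
\usepackage{imakeidx}
\makeindex % enable index collection; must come after loading imakeidx

\begin{document}
A \emph{group}\index{group} is a set with an associative binary
operation, an identity, and inverses\index{inverse}.

\printindex % typesets the collected index entries
\end{document}
```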
In the preamble, define a command `\defw` (short for "define word"), load the package `imakeidx`, and call `\makeindex`:

```latex
\usepackage{imakeidx}
\usepackage{xparse}
\NewDocumentCommand{\defw}{m o}{%
  {\emph{#1}}%
  \IfNoValueTF{#2}
    {\index{#1}}
    {\index{#2}}%
}
\makeindex
```

Then `\defw{foo}` emphasizes "foo" and indexes it under "foo", while the optional argument, as in `\defw{foo}[bar]`, indexes it under "bar" instead. Generate the index at the end of the file with `\printindex`, like one does with a bibliography.

**[arXiv-search (the sad goodbye-for-now post)](https://ericphanson.com/blog/2018/arxiv-search/)** (Mon, 14 May 2018)

Update (26 January 10): The source code is now available under an MIT license at https://github.com/ericphanson/arxiv-search.
For a few months, I and a few others were working on a project we called "arxiv-search", an attempt to search and sort all of the arXiv (~1 million papers). We were inspired by Andrej Karpathy's arxiv sanity preserver, which is an excellent tool for a limited set of papers (~50,000). Starting from that project, we ended up writing a new backend and frontend.

**[A synchronized dance of eigenvalues](https://ericphanson.com/blog/2017/a-synchronized-dance-of-eigenvalues/)** (Tue, 26 Sep 2017)

The motivation behind this post is to show off some nice gifs. But I thought maybe they aren't actually interesting without any context, so below I'll try to explain what the gifs are about.
**Brief mathematical introduction to Perron-Frobenius theory.** Consider the set of \(n\times n\) matrices \(M_n\), with complex entries. The algebraic structure (scalar multiplication by complex numbers, addition and multiplication of matrices) plays well with the operator norm, \[ \|A\| = \sup_{ v\in \mathbb{C}^n: \|v\|_2=1 } \|A v\|_2 \] and the involution \(*\).

**[Perceptron demo](https://ericphanson.com/blog/2017/perceptron-demo/)** (Wed, 19 Jul 2017)

A JavaScript demonstration of the perceptron algorithm, written for a reading group session. This was originally a separate webpage, but when I rewrote my website I decided to make it a blog post. This post can still be reached from /perceptron-demo, though.
**Perceptron.** The perceptron is a very simple binary classification online learning algorithm, dating from the 1950s. Such an algorithm tries to classify input (here, points in the plane) into one of two categories.

**[The arrow of time in RIS](https://ericphanson.com/blog/2017/the-arrow-of-time-in-ris/)** (Wed, 18 Jan 2017)

Contents:

- Repeated interaction systems (RIS)
- Introduction to arrow of time
- A more careful description of the forward and backward processes
- Hypothesis testing on the arrow of time
- Connection to Landauer's Principle

I gave an informal talk today on the arrow of time in repeated interaction systems and I thought I'd write about it here.
**Repeated interaction systems (RIS).** For a recent review of RIS, see arxiv/1305.2472. We also introduce them in my coauthors' and my work, Landauer's Principle in Repeated Interaction Systems.

**[The traveling salesman and 10 lines of Python](https://ericphanson.com/blog/2016/the-traveling-salesman-and-10-lines-of-python/)** (Tue, 25 Oct 2016)

Update (21 May 18): It turns out this post is one of the top hits on Google for "python travelling salesmen"! That means a lot of people who want to solve the travelling salesman problem in Python end up here. While I tried to do a good job explaining a simple algorithm for this, it was for a challenge to make a program in 10 lines of code or fewer.

**[Setting up SublimeText](https://ericphanson.com/blog/2016/setting-up-sublimetext/)** (Fri, 14 Oct 2016)

I'm setting up a new laptop and figured it was a good chance to document setting up SublimeText for LaTeX, as a quasi follow-up to my recent post about live LaTeXing. However, installing LaTeX can be hard, and configuring the SublimeText package LaTeXTools to work with LaTeX has the potential to be hard (it often just works, but if not, it can be confusing), and I won't write about those parts, because they are better documented elsewhere.

**[Live notetaking with LaTeX](https://ericphanson.com/blog/2016/live-notetaking-with-latex/)** (Sat, 24 Sep 2016)

A friend suggested I write a guide about live notetaking in LaTeX, since it's pretty useful and something I have a fair amount of experience with.
I had been making attempts at live-LaTeXing course notes for a year or two before I was able to make it through a semester-long class; the first course I fully LaTeX'd was Vojkan Jaksic's excellent Analysis 3 (introduction to metric spaces and topology) (PDF).

**[Landauer's Principle and the balance equation](https://ericphanson.com/blog/2016/landauers-principle-and-the-balance-equation/)** (Mon, 29 Feb 2016)

I've been working on my thesis (edit: this was my master's thesis)
over reading week, and I think I’ve finished my introduction to Landauer’s Principle. I ended up writing a pretty detailed derivation of the balance equation, and thus Landauer’s bound, so I thought it might be useful to post here.
Landauer's principle states that there is a minimal energetic cost for a state transformation \(\rho^\text{i}\to \rho^\text{f}\) on a system \(\mathcal{S}\) via the action of a thermal reservoir \(\mathcal{E}\) at temperature \((k_B\beta)^{-1}\).

**[Cantor's set and function](https://ericphanson.com/blog/2016/cantors-set-and-function/)** (Tue, 09 Feb 2016)

Contents:

- Cantor's Set
- Construction
- Properties
- Cantor's function

I wrote these notes in February 2016 for an Analysis 2 tutorial when I was a teaching assistant at McGill, and always intended to put them here eventually; before August 2018, though, I hadn't translated them to something web-friendly and had only posted a PDF (the web version has slightly improved wording in some parts).

**Cantor's Set.** Cantor's set is an interesting subset of \([0,1]\), with properties that help illuminate concepts in analysis.

**[Completeness I](https://ericphanson.com/blog/2016/completeness-i/)** (Fri, 01 Jan 2016)

Last semester, I helped a friend review McGill's Analysis 3 course by trying to provide a better feel for completeness; this post will be a slightly edited version of that. Originally, I wanted to write about both completeness and compactness, and their connections, but I ended up only getting to completeness, and actually not everything I wanted to talk about. So I'll call this Completeness I, leaving open the possibility for more of these in the future.

**[New blog](https://ericphanson.com/blog/2015/new-blog/)** (Mon, 14 Dec 2015)

I've made a new blog!
I started a “coffee blog” a month or so ago but only posted once (I think I’ll bring that post over here, too). I do like the idea though, and want to write about math and my research. So this should become a place for me to do that.
Part of my impetus is that journals and traditional publication methods don’t provide a pathway for discussing failed attempts.