> The fraction turned out to be approximately 69%, making the graphs neither common nor rare.
The wording kinda bothers me... Either 31% or 69% is exceedingly common.
Rare would be asymptotically few, or constant but smaller than e.g. 1 in 2^256.
I guess the article covers its working definition of common, ever so briefly:
> that if you randomly select a graph from a large bucket of possibilities, you can practically guarantee it to be an optimal expander.
So it's not a reliable property, either way.
The article reads as if written by someone who just learned about graphs; it focuses so much on the bet and so little on explaining Ramanujan expanders.
Not sure I agree.
It does a decent job of conveying the essential idea for a broader readership: perturb a graph through its adjacency matrix just enough to make the universality conjecture hold for the distribution of eigenvalues -> analytically establish that the perturbation was so small that the result carries back to the original adjacency matrix (I imagine this is an analytical estimate bounding the distance between distributions in terms of the perturbation) -> use the resulting distribution to study the probability of the second eigenvalue being concentrated around the Alon–Boppana bound.
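To pin down the quantities involved (standard definitions, not specific to the new paper): for a connected d-regular graph with adjacency eigenvalues d = lambda_1 >= lambda_2 >= ... >= lambda_n,

    \lambda_2 \;\ge\; 2\sqrt{d-1} - o(1) \qquad \text{(Alon--Boppana, as the number of vertices grows)}

    \max_{i \ge 2} |\lambda_i| \;\le\; 2\sqrt{d-1} \qquad \text{(the Ramanujan, i.e. optimal-expander, condition)}

so the question is how often a random d-regular graph achieves the smallest second eigenvalue that Alon–Boppana permits.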
I haven't had a chance to read the paper and don't work in graph theory but close enough to have enjoyed the article.
I agree with you, I work with graph algebra libraries and this article did a very nice job.
Ah, I knew I recognized these from somewhere! There was another paper released a while ago about Hamiltonian cycles in expander graphs: https://news.ycombinator.com/item?id=40609753
Also see a quite unrelated paper about a property of expander graphs: https://news.ycombinator.com/item?id=36856881
These are certainly popular objects!
69% looks surprisingly like the answer to this puzzle:
> The numbers 1–n are randomly placed into n boxes in a line. There are n people who are each able to look into half the boxes. While they are allowed to coordinate who looks into which boxes beforehand, they are taken out one at a time to choose which half of the boxes they will peek at. The goal is for the first person to find the number one, the second person to find the number two, and so on. If any of them fail to find their number, the whole group loses. What is the probability they lose if they use the optimal strategy?
I wonder if there's a connection to regular graphs here.
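If it is the same 69%: with the standard cycle-following strategy (person k starts at the box in position k, reads the number inside, then opens the box in that position, and so on), the group loses exactly when the underlying random permutation has a cycle longer than n/2, and that probability tends to ln 2 ≈ 0.693. A quick simulation sketch (parameters are illustrative, nothing here is from the article):

    import random

    def group_loses(n, rng):
        # boxes[i] = number placed in box i (0-indexed); a uniform random permutation.
        boxes = list(range(n))
        rng.shuffle(boxes)
        # Under the cycle-following strategy the group loses iff some cycle exceeds n // 2.
        seen = [False] * n
        for start in range(n):
            length, j = 0, start
            while not seen[j]:
                seen[j] = True
                j = boxes[j]
                length += 1
            if length > n // 2:
                return True
        return False

    rng = random.Random(0)
    n, trials = 100, 20_000
    print(sum(group_loses(n, rng) for _ in range(trials)) / trials)  # roughly 0.69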
> 69% looks surprisingly like the answer to this puzzle:
Haven't read the paper, but wonder if this is the same ln(2) that comes up as the constant in rate monotonic scheduling.
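For reference, that constant is the limit of the Liu–Layland utilization bound for n periodic tasks under rate-monotonic scheduling:

    U = \sum_{i=1}^{n} \frac{C_i}{T_i} \;\le\; n\left(2^{1/n} - 1\right), \qquad n\left(2^{1/n} - 1\right) \to \ln 2 \approx 0.693 \text{ as } n \to \infty

whether that is more than a numerical coincidence with the 69% here, I couldn't say.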
So you alternate so that you're always looking at as many new boxes as possible? Or are the people allowed to communicate?
The Quanta article talks about the connections to regular graphs.
Does that puzzle have a name, or a place to read more about it?
Problem 7 here: https://mathcontest.unm.edu/PastContests/2016-2017/2016-2017...
Or a Veritasium video here: https://www.youtube.com/watch?v=iSNsgj1OCLA
If you'll take a followup question: what is the best that can be achieved if you must decide which boxes to open before seeing what's inside any of them?
(That was how I understood the original description, and I was having a really hard time imagining a strategy other than "make sure there is an assignment of numbers to boxes that can satisfy the plan".)
Might be fruitful to apply this to p2p mesh networks. I suppose you should be able to build a model describing how the fraction of byzantine nodes affects the probability distribution of connectedness. Then you could figure out what algorithm parameters would put you within desired bounds for tolerated ratios of byzantine nodes.
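A rough Monte Carlo sketch of what such a model could look like, treating byzantine nodes as simply removed from a random d-regular overlay and asking whether the honest nodes stay connected (the graph model, parameters, and function name are illustrative assumptions, not anything from the article):

    import random
    import networkx as nx

    def p_honest_connected(n, d, f, trials=200, seed=0):
        """Estimate P(honest subgraph connected) when a fraction f of nodes is byzantine."""
        rng = random.Random(seed)
        ok = 0
        for _ in range(trials):
            g = nx.random_regular_graph(d, n, seed=rng.randrange(2**32))
            byzantine = set(rng.sample(range(n), int(f * n)))
            honest = g.subgraph(set(g.nodes) - byzantine)
            ok += int(honest.number_of_nodes() > 0 and nx.is_connected(honest))
        return ok / trials

    for f in (0.1, 0.3, 0.5):
        print(f, p_honest_connected(n=200, d=6, f=f))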
Byzantine has, I think, been misused. It’s the least number of good members you need to be successful, not the best number. I think there’s a reason parliamentary systems have a supermajority rule for making certain kinds of changes and a simple majority for others, and we should probably be doing the same when we model systems.
It is simple enough for an adversarial system to subvert some members via collusion and others via obstruction. Take something like Consul which can elect new members and remove old ones (often necessary in modern cloud architectures). What does 50.1% mean when the divisor can be changed?
And meshes are extremely weird because the whole point is to connect nodes that cannot mutually see each other. It is quite difficult to know for sure if you’re hallucinating the existence of a node two hops away from yourself. You’ve never seen each other, except maybe when the weather was just right one day months ago.
> Byzantine has I think been misused. It’s the least number of good members you need to be successful, not the best number.
Could you elaborate? It sounds like you are talking more about challenges of distributed consensus (elections, raft). What I have in mind is distributed peering algorithms for decentralized networks. No consensus, elections, or quorum required. You may wish to run consensus algos on top of such networks but that's one layer up, if you will.
Byzantine in the context of unpermissioned networks is often explained as the sybil problem, which maps to the issues you mention.
Applying OP to this setting wouldn't mitigate that but I'm thinking it can be used as a framework to model and reason about these systems. Perhaps even prove certain properties (assuming some form of sybil resistance mechanism, I guess).
Expander graphs are cool.
Consider the following computer sciency problem: construct an acyclic network of "sorting gates" (which take x and y as input and output min(x,y) and max(x,y)) so that it sorts n inputs.
A merge-sort-like algorithm had long been known that uses O(n (log n)^2) gates. It was an open problem for a while whether it could be done with O(n log n) gates (which would be the best possible). This was settled in the affirmative via a construction using expander graphs.
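For context, the O(n (log n)^2) construction is Batcher's odd-even mergesort, and the O(n log n) one is the AKS network, whose expander-based construction carries a famously enormous constant factor. A small sketch of the Batcher network (standard construction for n a power of two; illustrative code, not from the thread):

    def batcher_comparators(n):
        """Sorting gates (i, j), i < j, for Batcher's odd-even mergesort on n = 2^t wires."""
        comparators = []
        p = 1
        while p < n:
            k = p
            while k >= 1:
                for j in range(k % p, n - k, 2 * k):
                    for i in range(min(k, n - j - k)):
                        if (i + j) // (2 * p) == (i + j + k) // (2 * p):
                            comparators.append((i + j, i + j + k))
                k //= 2
            p *= 2
        return comparators

    def apply_network(comparators, values):
        # Each gate routes min(x, y) to the lower wire and max(x, y) to the upper one.
        v = list(values)
        for i, j in comparators:
            if v[i] > v[j]:
                v[i], v[j] = v[j], v[i]
        return v

    net = batcher_comparators(16)
    print(len(net))  # 63 gates for n = 16, roughly n (log n)^2 / 4
    print(apply_network(net, [5, 3, 9, 0, 7, 1, 8, 2, 6, 4, 15, 11, 13, 10, 14, 12]))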
Coming to a leetcode interview soon near you!
Asking a candidate to solve proofs in a typical SWE interview in 2025 tells you that the interviewer doesn't know how to hire and likely googled the answers before the interview themselves.
Unless the role is research scientist at an AI research company or a top hedge fund, the interviewer should be prepared to explain why they really need someone to know these proofs in their actual job.