Marc Andreessen, co-founder of Netscape and Andreessen Horowitz, on driven people:
I define drive as self-motivation—people who will walk right through brick walls, on their own power, without having to be asked, to achieve whatever goal is in front of them.
People with drive push and push and push and push and push until they succeed.
Winston Churchill after the evacuation of Dunkirk:
“We shall not flag or fail. We shall go on to the end, we shall fight in France, we shall fight on the seas and oceans, we shall fight with growing confidence and growing strength in the air, we shall defend our Island, whatever the cost may be, we shall fight on the beaches, we shall fight on the landing grounds, we shall fight in the fields and in the streets, we shall fight in the hills; we shall never surrender.”
Conceptually, the <ViewTransition> component is like a DOM fragment that transitions its children in its own isolate/snapshot. The API works by wrapping a DOM node or inner component:
The default is name="auto", which automatically assigns a view-transition-name to the inner DOM node. That way you can add a View Transition to a Component without otherwise controlling the styling of its DOM nodes.
Rob Pike in the “Things to Do” section of his February 2000 paper while working at Bell Labs:
Go back to thinking about and building systems. Narrowness is irrelevant; breadth is relevant: it’s the essence of system.
Work on how systems behave and work, not just how they compare. Concentrate on interfaces and architecture, not just engineering.
Be courageous. Try different things; experiment. Try to give a cool demo.
Get involved in the details.
Via X
Perhaps the best calls to arms in the history of systems research. Penned by one of the few true renaissance talents in the industry. If you haven't read it, you should.
A lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.
We already knew LLMs were spookily good at writing code. If you prompt them right, it turns out they can build you a full interactive application using HTML, CSS and JavaScript (and tools like React if you wire up some extra supporting build mechanisms)—often in a single prompt.
[…] the Chatbot Arena team introduced a whole new leaderboard for this feature, driven by users building the same interactive app twice with two different models and voting on the answer. Hard to come up with a more convincing argument that this feature is now a commodity that can be effectively implemented against all of the leading models.
Apple’s mlx-lm Python library supports running a wide range of MLX-compatible models on my Mac, with excellent performance. mlx-community on Hugging Face offers more than 1,000 models that have been converted to the necessary format.
Prince Canuma’s excellent, fast moving mlx-vlm project brings vision LLMs to Apple Silicon as well. I used that recently to run Qwen’s QvQ.
MLX is used by Exo, which is a very fast way to get started running models locally.
I get it. There are plenty of reasons to dislike this technology—the environmental impact, the (lack of) ethics of the training data, the lack of reliability, the negative applications, the potential impact on people’s jobs.
[…]
LLMs absolutely warrant criticism. We need to be talking through these problems, finding ways to mitigate them and helping people learn how to use these tools responsibly in ways where the positive applications outweigh the negative.
I think telling people that this whole field is environmentally catastrophic plagiarism machines that constantly make things up is doing those people a disservice, no matter how much truth that represents. There is genuine value to be had here, but getting to that value is unintuitive and needs guidance.
So we face a dilemma: how can we maintain the privacy benefits of local models while still accessing real-time information?
[…]
The problem is that this database is hosted on a server somewhere in the cloud. The host of the server can see our query and the response to that query, compromising our privacy.
A potential solution would be to download all the data locally. That way, we could query the database locally, keeping the query and response private. But this is impractical – Twitter alone generates 200TB of text per year. Your phone can’t store that much data.
What we want is a way for the server to process our query without ever seeing what we were asking for or what results we got back. Let’s see if we can find a way to do that by looking at how search works.
How modern search engines work:
Modern search engines convert both documents and queries into normalized vectors such that similar items end up close together. This conversion is called “embedding”. For instance:
"I love cats" → A = [0.7, 0.71] (normalized)
"I like felines" → B = [0.5, 0.866] (normalized)
"Buy cheap stocks" → C = [-0.8, 0.6] (normalized)
Figure 4: Visualization of the three normalized vectors in 2D space. Vectors A and B point in similar directions (high similarity), while vector C is in a different direction to A (low similarity).
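A minimal sketch of the normalization step, using the toy 2-D phrase vectors above (a real embedding model would output hundreds of dimensions produced by a neural network; the helper name is mine):

```python
import math

def normalize(v):
    """Rescale v to length 1, preserving its direction."""
    length = math.sqrt(sum(x * x for x in v))
    return [x / length for x in v]

# Toy 2-D "embeddings" from the text (the published values
# are rounded, so we re-normalize them exactly)
A = normalize([0.7, 0.71])    # "I love cats"
B = normalize([0.5, 0.866])   # "I like felines"
C = normalize([-0.8, 0.6])    # "Buy cheap stocks"

# After normalization every vector has length 1, so only
# its direction carries information
for v in (A, B, C):
    assert abs(math.sqrt(sum(x * x for x in v)) - 1.0) < 1e-9
```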
To find relevant documents, we look for vectors that point in similar directions – meaning they have a small angle between them. The cosine of this angle gives us a stable, efficient similarity measure ranging from -1 to 1:
cos(0°) = 1 (vectors point same direction)
cos(90°) = 0 (vectors are perpendicular)
cos(180°) = -1 (vectors point opposite directions)
The cosine similarity formula is shown below, where a and b are vectors with components a₁, a₂, etc. and b₁, b₂, etc., θ is the angle between them, · represents the dot product, and ||a|| denotes the length (magnitude) of vector a:
cos(θ) = (a·b) / (||a|| ||b||) = (a₁b₁ + a₂b₂ + ...) / (||a|| ||b||)
However, when vectors are normalized (meaning their length equals 1), the denominator becomes 1·1 = 1, leaving us with just the dot product:
cos(θ) = a·b = a₁b₁ + a₂b₂ + a₃b₃ + ...
This is why working with normalized vectors is so efficient – the similarity calculation reduces to simply multiplying corresponding elements and adding up the results.
Let’s compute the similarities between our example vectors:
A · B = 0.7×0.5 + 0.71×0.866 = 0.35 + 0.615 ≈ 0.965
A · C = 0.7×(-0.8) + 0.71×0.6 = -0.56 + 0.426 = -0.134
The first, high value (≈0.965) tells us vectors A and B point in similar directions – just as we’d expect for two phrases about cats! The second value (-0.134) shows that vectors A and C point in quite different directions, indicating unrelated topics.
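The arithmetic above can be checked in a few lines, and it also confirms that for (approximately) unit vectors the plain dot product agrees with the full cosine formula (function names are mine, for illustration):

```python
import math

A = [0.7, 0.71]     # "I love cats"
B = [0.5, 0.866]    # "I like felines"
C = [-0.8, 0.6]     # "Buy cheap stocks"

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine_similarity(a, b):
    # Full formula: (a·b) / (||a|| ||b||)
    return dot(a, b) / (math.hypot(*a) * math.hypot(*b))

print(round(dot(A, B), 3))    # ≈ 0.965 (similar phrases)
print(round(dot(A, C), 3))    # ≈ -0.134 (unrelated phrases)

# The shortcut and the full formula differ only by the
# rounding in the published vectors
assert abs(dot(A, B) - cosine_similarity(A, B)) < 0.01
```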
This seemingly simple calculation – just multiplications and additions – is the key to private search.
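To see how a server could evaluate that dot product without learning the query, here is a toy sketch using the Paillier cryptosystem, one well-known linearly (additively) homomorphic scheme: multiplying ciphertexts adds their plaintexts, and raising a ciphertext to a plaintext power scales its plaintext. The tiny primes, the fixed-point scale, and the helper names are all illustrative choices of mine, not EXO’s actual implementation; real deployments use 2048-bit keys plus many optimizations.

```python
import math
import random

def keygen(p=5003, q=4999):
    """Toy Paillier keypair. These primes are tiny and NOT secure."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)   # with g = n + 1, decryption needs lam^-1 mod n
    return (n, n + 1), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(1, n)            # random blinding factor
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m % n, n2) * pow(r, n, n2) % n2

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    m = (pow(c, lam, n * n) - 1) // n * mu % n
    return m - n if m > n // 2 else m     # map large residues to negatives

pub, priv = keygen()
SCALE = 1000                              # fixed-point encoding for floats

# Client side: encrypt each component of the query embedding.
query = [0.7, 0.71]                       # "I love cats"
enc_query = [encrypt(pub, round(x * SCALE)) for x in query]

def encrypted_dot(pub, enc_q, doc):
    """Server side: prod(E(q_i)^d_i) = E(sum(q_i * d_i)).
    The server sees only ciphertexts, never the query itself."""
    n2 = pub[0] * pub[0]
    c = encrypt(pub, 0)                   # E(0), the additive identity
    for ci, di in zip(enc_q, doc):
        c = c * pow(ci, di, n2) % n2
    return c

doc = [round(x * SCALE) for x in [0.5, 0.866]]   # "I like felines"
result = decrypt(pub, priv, encrypted_dot(pub, enc_query, doc))
print(result / SCALE**2)                  # the ≈0.965 similarity, computed blind
```

The server only ever multiplies and exponentiates opaque ciphertexts, yet the client decrypts exactly the similarity score it wanted – which is why a search reducible to dot products is such a good fit for this kind of encryption.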
Via X
Introducing EXO Private Search
Privacy-preserving web search for local LLMs.
Augments local LLMs with realtime search using Linearly Homomorphic Encryption:
– Live data from X, crypto, Wikipedia (more coming)
– 100,000x less data transfer than client sync
– <2s round-trip time…