• NVIDIA to Manufacture American-Made AI Supercomputers in US for First Time ↗

    NVIDIA:

    Together with leading manufacturing partners, the company has commissioned more than a million square feet of manufacturing space to build and test NVIDIA Blackwell chips in Arizona and AI supercomputers in Texas.

    NVIDIA Blackwell chips have started production at TSMC’s chip plants in Phoenix, Arizona. NVIDIA is building supercomputer manufacturing plants in Texas, with Foxconn in Houston and with Wistron in Dallas. Mass production at both plants is expected to ramp up in the next 12-15 months.

    The AI chip and supercomputer supply chain is complex and demands the most advanced manufacturing, packaging, assembly and test technologies. NVIDIA is partnering with Amkor and SPIL for packaging and testing operations in Arizona.

    Within the next four years, NVIDIA plans to produce up to half a trillion dollars of AI infrastructure in the United States through partnerships with TSMC, Foxconn, Wistron, Amkor and SPIL. These world-leading companies are deepening their partnership with NVIDIA, growing their businesses while expanding their global footprint and strengthening supply chain resilience.

    Previously: Apple will spend more than $500 billion in the U.S. over the next four years.

  • Visualizing all books of the world in ISBN-Space ↗

    phiresky:

    How could we effectively visualize 100,000,000 books or more at once? There’s lots of data to view: Titles, authors, which countries the books come from, which publishers, how old they are, how many libraries hold them, whether they are available digitally, etc.

    International Standard Book Numbers (ISBNs) are 13-digit numbers assigned to almost all published books. Since the first three digits are fixed (currently only 978- and 979-) and the last digit is a checksum, the total ISBN-13 space has only two billion slots. Here is my interactive visualization of that space.
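    The two-billion figure follows from the structure described above: two valid prefixes (978, 979) times nine free digits, with the thirteenth digit fully determined by the rest. As a minimal sketch of how that check digit works (the function name is my own, not from the post), the standard ISBN-13 rule weights the first twelve digits alternately by 1 and 3:

    ```python
    def isbn13_check_digit(first12: str) -> int:
        # ISBN-13 checksum: weight the 12 leading digits alternately
        # by 1 and 3, then take (10 - sum mod 10) mod 10.
        total = sum(int(d) * (1 if i % 2 == 0 else 3)
                    for i, d in enumerate(first12))
        return (10 - total % 10) % 10

    # Well-known example: ISBN 978-0-306-40615-7
    print(isbn13_check_digit("978030640615"))  # → 7
    ```

    Because the check digit is derived, only 2 × 10⁹ of the 10¹³ possible 13-digit strings are valid ISBNs.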

  • Turbocharging V8 with mutable heap numbers ↗

    Victor Gomes:

    At V8, we’re constantly striving to improve JavaScript performance. As part of this effort, we recently revisited the JetStream2 benchmark suite to eliminate performance cliffs. This post details a specific optimization we made that yielded a significant 2.5× improvement in the async-fs benchmark, contributing to a noticeable boost in the overall score. The optimization was inspired by the benchmark, but such patterns do appear in real-world code.