Block, the fintech company led by Twitter co-founder Jack Dorsey, is the first in North America to deploy the Nvidia DGX SuperPOD with DGX GB200 systems. The move marks a significant leap in the company’s AI infrastructure, enabling advanced AI research and development, particularly around open-source AI models.
The Nvidia DGX SuperPOD is a high-performance AI computing system built for complex AI tasks, such as training large language models (LLMs) and generative AI applications. Powered by Nvidia’s Grace Blackwell architecture, the system integrates multiple DGX units equipped with advanced GPUs, high-speed networking, and efficient liquid cooling, optimizing AI processes across industries like healthcare, robotics, and autonomous systems.
The DGX GB200, which plays a central role in this infrastructure, features Nvidia’s cutting-edge Blackwell GPUs. This allows Block to process large datasets and train multi-trillion-parameter AI models, helping businesses achieve faster insights, reduced downtime, and optimized operational costs.
Charlie Boyle, Vice President of DGX platforms at Nvidia, explains, “With the DGX GB200 systems, Block’s engineering and research teams can develop frontier open-source AI models that address complex, real-world challenges with state-of-the-art AI supercomputing.”
Block’s Expanding AI Focus: From Deepfake Detection to Generative AI
Block’s AI research has focused on projects like deepfake detection and generative speech models. With the Nvidia DGX SuperPOD, the company plans to expand its AI efforts, especially in generative AI applications. Dhanji R. Prasanna, Block’s Chief Technology Officer, explains, “The AI landscape is shifting dramatically. At Block, we are committed to not only applying AI to current challenges but also exploring and building openly to push the boundaries of AI.”
As part of a growing trend, fintech companies like Block are investing in AI infrastructure to remain competitive, positioning themselves alongside tech giants in driving innovation in financial services and beyond.
To optimize its AI models, Block has partnered with Lambda, an AI cloud provider, to test and refine its models before full deployment. Lambda’s 1-Click Clusters, featuring Nvidia H100 GPUs and InfiniBand networking, provide scalable access to GPU resources for machine learning engineers. These clusters are now upgraded with Nvidia’s Blackwell architecture, further enhancing Block’s AI capabilities.
Boyle emphasizes the need for robust AI infrastructure: “As AI models become more complex, businesses require infrastructure that can keep pace with innovation.”
Additionally, Block’s AI operations will be hosted at Equinix’s data center, ensuring low-latency connections for efficient training and deployment of AI models. Jon Lin, Chief Business Officer at Equinix, states, “Frontier AI models demand the latest AI chips, like Nvidia’s DGX SuperPOD, and the flexibility offered by Equinix’s cloud-adjacent platform allows companies like Block to scale their AI operations.”
Block’s Commitment to Open-Source AI Development
Alongside its investment in Nvidia’s AI infrastructure, Block recently launched “goose,” an open-source AI agent framework designed to connect large language models with real-world applications. Initially focused on software engineering tasks, the project is now exploring broader uses.
Prasanna reiterates Block’s commitment to open-source AI: “We believe AI should be accessible and beneficial to everyone in the financial ecosystem. This infrastructure investment reflects our dedication to an open-source approach, sharing our progress with the wider community.”