Elon Musk’s artificial intelligence company, xAI, is reportedly preparing a massive $20 billion fundraising round combining equity and debt, with Nvidia playing a key role in supplying and financing the required chips.
According to reports, the funding package will consist of $7–8 billion in new equity and up to $12 billion in debt, arranged through a special-purpose vehicle (SPV). This SPV will buy Nvidia GPUs and lease them back to xAI, allowing the company to scale its AI training operations without directly shouldering all the upfront costs. Nvidia itself is expected to contribute as much as $2 billion to the equity round.
The setup effectively ensures xAI’s access to Nvidia’s hardware — a valuable advantage amid the ongoing GPU shortage — while giving Nvidia a stake in one of the largest AI computing efforts in the U.S. The GPUs will power Colossus 2, xAI’s 100-megawatt data center in Memphis, which went live earlier this year. Musk reportedly aims to double the GPU count at the facility to 200,000 units.
Notably, the deal follows Musk’s public denial in September of earlier reports that xAI was raising $10 billion at a $200 billion valuation, which he dismissed as “fake news.”
Power supply remains a controversial aspect of xAI’s expansion. Because the Memphis grid infrastructure is still being built out, the company has been generating electricity on-site with methane gas turbines. Regulators and environmental groups have raised concerns that these turbines were installed and operated without full permits, a claim xAI has not publicly contested.
The Southern Environmental Law Center (SELC) reports that a second site in Memphis is already under consideration, potentially adding 40 to 90 turbines and generating up to 1.5 gigawatts — far exceeding the needs of Colossus 2’s initial phase. Internal documents suggest the broader plan is to bypass grid limitations by creating independent power sources, regardless of environmental objections.
If this $20 billion deal goes through, xAI will not only secure the GPU supply needed for massive AI model training, but also gain greater operational independence to run its infrastructure entirely on its own terms.