SoftBank’s aggressive push into artificial intelligence is beginning to unsettle investors, with concerns mounting over the scale, and the risk, of its deepening financial ties to OpenAI. What was once seen as a bold, future-defining bet is now being reassessed amid intensifying competition and growing geopolitical unease surrounding the use of AI in military operations.

At the centre of investor anxiety is SoftBank’s heavy capital exposure to OpenAI-linked ventures, including multi-billion-dollar infrastructure and deployment projects. While the strategy promises long-term dominance in AI, critics argue it is also highly leveraged and dependent on uncertain returns. Large-scale initiatives such as joint AI infrastructure programmes have already faced delays and funding questions, reinforcing doubts about execution timelines and profitability.

The unease is compounded by a rapidly shifting competitive landscape. Rivals such as Google and Anthropic are accelerating their own AI deployments, particularly in the enterprise and government sectors. The race is no longer just about innovation; it is about securing strategic clients, including national governments, where contracts are lucrative but politically sensitive.

That sensitivity has come sharply into focus with the growing militarisation of AI. Recent developments show OpenAI expanding into U.S. government and defence systems through cloud partnerships, a significant shift from its earlier, more cautious positioning. The move places the company at the heart of a high-stakes ecosystem where technology, ethics, and national security intersect.

At the same time, Anthropic’s clash with the Pentagon has highlighted deep divisions within the industry. The company resisted demands to remove safeguards limiting the use of its AI in surveillance and autonomous weapons, leading to its designation as a “supply chain risk” and exclusion from defence contracts. The fallout underscores a broader dilemma: whether AI firms should prioritise ethical constraints or commercial and strategic advantage.

These tensions are not abstract. Reports linking AI tools to military operations, including controversial incidents tied to conflict scenarios involving Iran, have amplified public and investor concern about how quickly the technology is being deployed in real-world combat environments. The possibility that commercial AI systems are already embedded in sensitive military decision-making has sparked debate about oversight, accountability, and long-term global stability.

For SoftBank, the implications are significant. Its investment thesis depends on OpenAI maintaining both technological leadership and regulatory acceptance. But as governments tighten scrutiny and public debate intensifies, the path to monetisation becomes less predictable.

In this evolving landscape, AI is no longer just a growth story; it is a geopolitical asset class. For investors, that shift introduces a new layer of risk, where returns are tied not only to innovation but to policy, ethics, and the uncertain trajectory of global conflict.