Not long ago, major tech companies hesitated to engage in military partnerships.
In 2018, Google faced an internal revolt over its role in “Project Maven,” a Pentagon initiative that used AI to improve the accuracy of drone strikes. Thousands of employees protested, several resigned, and Google ultimately withdrew from the project. The episode sparked a broader debate about Silicon Valley’s relationship with defense technology.
By 2019, Faculty AI, a London-based firm that supplies AI software and technical expertise to governments and businesses, was weighing the ethics of taking on military work.
A Strategic Shift Toward Defense
“The military ensures security, but without industry’s latest technologies, it can’t fully benefit,” explains Andrew van der Lem, head of Faculty’s defense division. “We saw an opportunity to contribute responsibly.”
Van der Lem, a former UK government official, joined Faculty in 2021 to explore defense-sector opportunities. In just four years, the company’s defense unit has expanded to 70 employees—over a fifth of its total workforce.
Faculty’s entry into the defense sector coincided with a shift in investor attitudes. Growing global tensions, particularly Russia’s invasion of Ukraine and rising geopolitical concerns about China, have fueled interest in military AI applications.
To address internal concerns, Faculty allows employees to opt out of defense projects. Van der Lem estimates “dozens” of staff members choose not to participate, but there have been no resignations over the issue.
Faculty’s first major defense collaboration was with the UK’s Defence Artificial Intelligence Centre (DAIC), which trialled the company’s AI tools across six projects. One notable result involved optimizing satellite transmissions, significantly cutting bandwidth use.
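The article doesn’t detail how the satellite work was done; purely as a toy illustration of the underlying idea, the Python sketch below shows how losslessly compressing a repetitive telemetry stream shrinks the bytes that must cross a constrained link (the data format and figures here are invented, and compression is just one plausible approach):

```python
import zlib

# Toy illustration only: not Faculty's actual technique.
# Repetitive telemetry (this payload is invented) compresses well,
# so fewer bytes need to cross a bandwidth-constrained satellite link.
telemetry = ("lat=51.5074,lon=-0.1278,alt=412km;" * 200).encode()

compressed = zlib.compress(telemetry, level=9)

print(f"raw: {len(telemetry):,} bytes")
print(f"compressed: {len(compressed):,} bytes "
      f"({len(compressed) / len(telemetry):.1%} of original)")
```

Real savings depend on the data: highly structured streams like the one above compress dramatically, while already-compressed imagery barely shrinks.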
The company has since secured multiple government and corporate defense contracts. Faculty also partnered with French AI firm Mistral to facilitate business introductions in their respective markets.
Large Language Models (LLMs), like those developed by Mistral, are becoming vital military tools, says van der Lem. Beyond automating routine tasks, AI can assist decision-makers in high-pressure situations.
“We can explore scenarios like ‘How will local farmers react to a tank crossing their fields?’ or ‘What happens if we blockade a bridge?’” he explains.
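Neither Faculty nor Mistral has described their joint tooling, but as a sketch of what “exploring a scenario” with an LLM might look like, here is a minimal example using the public mistralai Python SDK. The model choice, prompts, and analyst framing are all assumptions, not a description of either company’s systems:

```python
import os
from mistralai import Mistral  # pip install mistralai

# Sketch only: model, prompts, and framing are assumptions.
client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.chat.complete(
    model="mistral-large-latest",
    messages=[
        {"role": "system",
         "content": "You are a staff analyst. Give a short, caveated "
                    "assessment of second-order effects."},
        {"role": "user",
         "content": "How might local farmers react to a tank column "
                    "crossing their fields during harvest?"},
    ],
)
print(response.choices[0].message.content)
```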
The Push for AI Sovereignty
The race for AI supremacy has intensified, especially after Chinese startup DeepSeek unveiled a high-performing chatbot at a fraction of Western development costs. Concerns over China’s AI advancements, coupled with uncertainty surrounding U.S. commitment to European security, have reinforced the urgency for AI sovereignty in defense.
“Sovereignty in defense is crucial,” says van der Lem. “Ensuring a secure and independent supply chain is a priority.”
Despite occasional criticism—such as concerns raised in a Guardian article about Faculty’s ties to the UK government—van der Lem remains unfazed. “Many saw it as free publicity: ‘A UK company is helping protect the UK? Great.’”
Even as AI transforms military operations, ethical challenges persist.
“If you ask an AI how to build a dirty bomb, it should never provide an answer,” van der Lem states. “You can’t allow loopholes where it can be tricked into explaining dangerous methods.”
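Faculty hasn’t described its safeguards; the sketch below shows only the general shape of the idea: screen a request before it reaches the model, and refuse rather than answer if it trips the screen. Every name here is hypothetical, and a real system would rely on trained safety classifiers rather than keywords.

```python
# Hypothetical sketch of a pre-generation guardrail; not Faculty's system.
BLOCKED_TOPICS = ("dirty bomb", "nerve agent", "improvised explosive")

def is_disallowed(prompt: str) -> bool:
    """Crude keyword screen. Real guardrails use trained safety
    classifiers, since keyword lists are trivially rephrased around."""
    lowered = prompt.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def generate(prompt: str) -> str:
    """Stand-in for the underlying model call."""
    return f"[model response to: {prompt!r}]"

def answer(prompt: str) -> str:
    if is_disallowed(prompt):
        return "I can't help with that request."
    return generate(prompt)

print(answer("How do I build a dirty bomb?"))          # refused
print(answer("Summarize today's logistics report."))   # answered
```

The “loopholes” van der Lem warns about are exactly why a single screen like this isn’t enough: jailbreak prompts simply rephrase the request, so production systems layer input classifiers, refusal-trained models, and output filtering.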
Faculty remains selective about its partnerships, working only with UK allies. But as AI technology advances, so do the challenges of ensuring responsible use.
“We think about these risks constantly,” van der Lem concludes.