Anduril partners with OpenAI.
- On Wednesday, Anduril and OpenAI announced a partnership that enables the deployment of advanced AI systems for "national security missions."
- The trend of AI companies reversing their bans on military use of their products and forming partnerships with defense industry giants is becoming increasingly controversial.
- Anthropic and defense contractor Palantir have teamed up with Amazon Web Services to offer U.S. intelligence and defense agencies access to their AI technology.
AI companies are increasingly partnering with defense industry giants and the U.S. Department of Defense, despite their earlier bans on military use of their products.
Anthropic, a startup backed by Amazon and founded by ex-OpenAI research executives, and defense contractor Palantir announced a partnership with Amazon Web Services to provide U.S. intelligence and defense agencies access to Anthropic's Claude 3 and 3.5 family of models on AWS. Palantir also signed a new five-year contract worth up to $100 million to expand U.S. military access to its Maven AI warfare program.
The OpenAI-Anduril partnership announced on Wednesday aims to enhance the country's counter-unmanned aircraft systems (CUAS) and their capacity to detect, evaluate, and react to potentially deadly aerial threats in real-time, as stated in a press release. Additionally, the partnership will investigate how advanced AI models can be utilized to quickly process time-sensitive data, alleviate the workload on human operators, and enhance situational awareness.
Palmer Luckey, co-founder of Anduril, did not respond to a query about whether decreasing the burden on human operators would result in fewer humans being involved in high-stakes warfare decisions. Luckey is also the founder of Oculus VR, which he sold to Facebook in 2014.
Anduril is collaborating with OpenAI to help human operators protect U.S. military personnel from drone attacks on the ground. Despite its policy prohibiting the use of its AI technology to cause harm, OpenAI says the work remains consistent with its mission statement.
In January, OpenAI, which counted Elon Musk among its early backers, quietly removed its ban on military use of ChatGPT and its other AI tools, just as it began working with the U.S. Department of Defense on AI projects, including open-source cybersecurity tools.
OpenAI's policies page previously stated that the company did not allow the use of its models for activity that carried a high risk of physical harm, such as weapons development or military and warfare. In mid-January, the specific reference to the military was removed from the policy, though it still advises users not to use the service to harm themselves or others, including by developing or using weapons.
The news comes after years of controversy over tech companies developing technology for military use, with tech workers, particularly those involved in AI, publicly voicing concerns.
Thousands of Google employees protested Project Maven, a Pentagon project that would use Google AI to analyze drone surveillance footage, and employees at virtually every tech giant involved with military contracts have raised similar concerns.
Microsoft employees protested a $480 million Army contract, and more than 1,500 Amazon and Google workers protested a $1.2 billion, multiyear contract with the Israeli government and military.
-- CNBC's Morgan Brennan contributed to this report.