The US Space Force has temporarily banned the use of generative AI tools among employees, citing “data aggregation” risks, according to an internal notice reviewed by Reuters.
Air Force spokesperson Tanya Downsworth confirmed the temporary ban in a statement.
“A strategic pause on the use of generative AI and LLMs within the US Space Force has been implemented as we determine the best path forward to integrate these capabilities,” she stated.
Whilst AI has great productivity potential within workplaces, many businesses such as Samsung have banned the internal use of gen AI tools like ChatGPT.
GlobalData analyst Will Tyson explained to Verdict the risks chatbots could pose to businesses.
“Data entered into generative AI tools has the potential to be stored and used in training models,” Tyson stated. “Considering the sensitive nature of data that the US Space Force has access to, it has deemed it necessary to have protections in place.”
Tyson noted that the ban is only temporary and said the Space Force could still benefit from the technology in the future, provided stronger guardrails are put in place.
David Bicknell, GlobalData principal analyst, agreed with Tyson’s sentiments.
Bicknell described the Space Force’s decision as “sensible”, noting that space and terrestrial technologies have become increasingly interconnected. He added that many space-based services support military or aviation operations, making them an attractive target for cyber-attacks, particularly during periods of heightened geopolitical tension.
For any future adoption of AI tools, Bicknell stressed that security would need to be built into every level of satellite design “from the ground up”.

Rigorous identity and access management, alongside robust intrusion detection systems, would form the backbone of cyber-resilient spacecraft.