The US Navy has warned about the risks of using artificial intelligence (AI) and large language models (LLMs) in military operations. In a September 2023 memo, the Navy's acting Chief Information Officer, Jane Rathbun, said that commercial LLMs could pose an operational security risk because they save every prompt they are given, potentially exposing sensitive information. She also said that output from these tools must be verified and validated by humans.
What are LLMs?
LLMs are a type of AI that can generate text, translate languages, write many kinds of creative content, and answer questions in an informative way. They are trained on massive datasets of text and code and can be adapted to a wide variety of tasks.
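To make this concrete, here is a minimal sketch of text generation with an open-source model via the Hugging Face transformers library. The model choice (gpt2) and the prompt are illustrative only; the commercial LLMs the Navy memo addresses are far larger, hosted services.

```python
# Minimal text-generation sketch using the Hugging Face "transformers"
# library. Requires: pip install transformers torch
from transformers import pipeline

# Load a small open model; "gpt2" is just an example for illustration.
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt with statistically likely text.
result = generator("Large language models are", max_new_tokens=20)
print(result[0]["generated_text"])
```

Note that with a hosted commercial service, the equivalent call would send the prompt over the network to a third party, which is exactly the retention risk the memo describes.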
Why is the Navy concerned about LLMs?
The Navy is concerned that generative AI could be used to create deepfakes: videos or audio recordings manipulated to make it appear that someone said or did something they never actually said or did. Deepfakes could be used to spread disinformation or propaganda, or to damage a person's reputation.
The Navy is also concerned that LLMs could help attackers break into computer systems or steal sensitive data, for example by generating phishing emails that are more convincing, or malware that is harder to detect.
What is the Navy doing about it?
The Navy has discouraged its personnel from using commercial LLMs and says it will develop its own AI and LLM capabilities designed specifically for military use.
What can you do?
Even if you're not in the Navy, it's important to be aware of the potential risks of AI and LLMs. Here are a few things you can do to protect yourself: avoid entering sensitive or personal information into public LLMs (a minimal sketch of client-side scrubbing follows below), treat unexpected or unusually convincing video and audio with skepticism, and scrutinize unsolicited emails for signs of phishing.
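Since the core concern is that a prompt leaves your control the moment you submit it, one practical mitigation is to scrub sensitive details before anything is sent. The sketch below is a hypothetical, minimal example; the redact helper and its two patterns are assumptions for illustration, not a vetted filter.

```python
import re

# Illustrative patterns only; a real deployment would need a much
# broader rule set (names, locations, account numbers, etc.).
REDACTION_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US Social Security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
]

def redact(prompt: str) -> str:
    """Strip obviously sensitive tokens before a prompt leaves the machine."""
    for pattern, placeholder in REDACTION_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Contact jane.doe@example.mil, SSN 123-45-6789, about the schedule."))
# -> Contact [EMAIL], SSN [SSN], about the schedule.
```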
By taking these steps, you can help to protect yourself from the potential risks of AI and LLMs.