Navy Warns Against AI Risks, Discourages LLM Usage

6 min read · October 12, 2023 06:34

The US Navy raises the alarm on AI and large language models in military operations, highlighting operational security risks and calling for human verification and validation.

The US Navy has warned against the risks of using artificial intelligence (AI) and large language models (LLMs) in military operations. In a September 2023 memo, the Navy's acting Chief Information Officer, Jane Rathbun, said that LLMs can pose an operational security risk because they retain every prompt they are given, and that these tools must be verified and validated by humans.

What are LLMs?

LLMs are a type of AI that can generate text, translate languages, write many kinds of creative content, and answer questions in an informative way. They are trained on massive datasets of text and code and can be used for a variety of tasks (a short code sketch follows the list), including:

  • Generating realistic and creative text in many formats: poems, code, scripts, musical pieces, emails, letters, and more.
  • Translating languages accurately and fluently.
  • Answering questions in a comprehensive and informative way, even if they are open ended, challenging, or strange.
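
To make this concrete, here is a minimal sketch of generating text with a small, openly available model via the Hugging Face transformers library. The model and prompt are illustrative choices, not anything the Navy uses or endorses:

```python
# Minimal text-generation sketch using Hugging Face's `transformers`.
# Assumes: pip install transformers torch
from transformers import pipeline

# Load a small, publicly available model. A real application would pick a
# larger model and review its license and data-handling policies first.
generator = pipeline("text-generation", model="gpt2")

# Ask the model to continue a prompt. With a hosted commercial LLM, this
# prompt would be sent to (and possibly retained on) a third-party server,
# which is exactly the operational-security concern the Navy memo raises.
result = generator(
    "Write a short thank-you note to a colleague:",
    max_new_tokens=60,
    do_sample=True,
)

print(result[0]["generated_text"])
```

Running a model locally like this keeps the prompt on your own machine, which is one reason security-conscious organizations often prefer self-hosted models over commercial LLM services.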

Why is the Navy concerned about LLMs?

The Navy is concerned that generative AI tools, including LLMs, could be used to create deepfakes: videos or audio recordings manipulated to make it appear that someone said or did something they never actually said or did. Deepfakes could be used to spread disinformation or propaganda, or to damage someone's reputation.

The Navy is also concerned that LLMs could be used to hack into computer systems or steal sensitive data. For example, they could generate phishing emails that are more convincing, or help produce malware that is harder to detect.

What is the Navy doing about it?

The Navy has discouraged its personnel from using LLMs for any purpose and has said it will develop its own AI and LLM capabilities designed specifically for military use.

What can you do?

Even if you're not in the Navy, it's important to be aware of the potential risks of AI and LLMs. Here are a few things you can do to protect yourself:

  • Be critical of the information you see online. Don't believe everything you read.
  • Fact-check information before you share it. There are a number of reliable fact-checking websites and organizations that you can use.
  • Be careful about what information you share online. Don't share any information that you wouldn't want to fall into the wrong hands.
  • Use strong passwords (for example, long random ones like the sketch below generates) and enable two-factor authentication on all of your online accounts.
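
On the password point, here is a minimal sketch using Python's standard-library secrets module; the length and character set are illustrative defaults, and in practice a password manager will do this for you:

```python
# Generate a strong random password with Python's standard library.
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a cryptographically random password of the given length."""
    # `secrets` uses a cryptographically secure RNG, unlike the `random`
    # module, so it is appropriate for passwords and other secrets.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # prints a different 20-character password each run
```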

By taking these steps, you can help to protect yourself from the potential risks of AI and LLMs.
