Skip to main content

ChatGPT and Me: The So-What of Generative AI

By Mr J. Cottam

 

“The challenge with AI is not building the technology, but rather building the right team with the right skills to leverage that technology.”

– Andrew Ng, co-founder of Google Brain

“AI will transform every industry, but it will also require a transformation in the skills and mindset of our workforce. We need to invest in training and upskilling programs to ensure that everyone can participate in the AI revolution.”

– Reshma Saujani, founder and CEO of Girls Who Code

 

BLUF

Artificial intelligence will be the most consequential military development of the 21st century. It will become folded into every application we use. Access will be equally distributed. Military AI must be accountable, and its reasoning and actions should be explainable. And we as commanders must invest in having our teams become fluent in communicating with intelligent machines.

Intro

AI-powered large language models, better known as ‘generative AI’, are applications that use learning algorithms to create digital content. With recent advances in AI development, highlighted most significantly with the public release of ChatGPT by OpenAI, the potential applications of generative AI are becoming increasingly apparent.

ChatGPT has won public attention, and millions of users, thanks to its user-friendly text-box interface. It’s a brilliantly intuitive way for users to input requests and receive rapid results, bridging the gap between the cutting-edge of technology and the average non-technical user. The app can generate an endless range of text-based content, from historical summaries to recipe ideas to coding advice. Professor Erik Brynjolfsson of the Stanford Institute for Human-Centered AI describes the app as a “calculator for writing”, reducing the friction of content creation. In a military that runs on the written word, from minutes to briefs to orders, these generative models offer many potential applications.

Opportunities…

With the ability to generate text quickly and efficiently from vast bodies of data, ChatGPT has use cases across every aspect of the military. Generative models could supercharge the intelligence cycle, allowing rapid processing, analysis, and dissemination of large, complex data sets. They could provide rapid situation briefs for short-notice deployments, help generate new training manuals, provide real-time battlefield commentary, and build bespoke apps on demand. Decision superiority is supported by the ability to present intelligence to a commander, automatically, in a style uniquely adapted to their comprehension preference. They could also be a formidable tool for psychological operations, creating and distributing narratives that shape and influence the perceptions of enemy forces and the civilian population.

…and Risks

The promise of AI in general, and large-language models (LLMs) in particular, is clear. So, too, are the perils. The ability to automate code production will create great efficiencies in software development, and lower the bar to entry for app creation. It could also be used to create advanced, bespoke malware. Sufficiently motivated actors could create and distribute torrents of written material, from manifestos to tweets, running sophisticated INFOWAR campaigns from their bedrooms. The sheer volume of data that informs these models is impossible to thoroughly evaluate, and training an AI on biased data could lead to biased outputs. The intricate nature of AI makes it susceptible to exploitation, risking the spill of sensitive or classified information – an emerging industry of black-hat prompt engineers seeks to capitalize on exactly this. The use of complex LLMs in the military could also lead to a loss of transparency and accountability in decision-making, as engineers become increasingly unable to understand the inner workings of large models.

Black Box

The ‘black box problem’ refers to the difficulty of understanding the decision-making processes of complex AI models. Large language models such as ChatGPT comprise billions of ‘weights’, parameters that are altered during the training process. The model’s ability to continuously modify its weights based on data it is fed are what allows it to learn. And the more parameters the model has, the more difficult it becomes for human programmers to comprehend its workings. ChatGPT, for example, has 175 billion – a weight for every star in the Milky Way galaxy. Its successor, GPT-4, may have as many as 100 trillion – more than there are galaxies in the observable Universe. The power of AI increases in lockstep with its inexplicability to human engineers. This quandary has spurred research into ‘explainable AI’, models that are able to explain their own ‘thought’ processes.

Explain Yourself

The most ethical way for the military to be a responsible patron of AI development is by promoting models that have not only the ability to make informed decisions, but the ability to communicate their reasoning in a transparent manner. This is crucial in military applications, where our use of AI will unavoidably impact the lives of many, and will carry the potential for far-reaching consequences. By advocating for and adopting explainable AI, the military can ensure that the AI we use aligns with our own principles and values. This would shape and influence the wider development of AI, by providing the burgeoning market both a ready buyer, and an evolving body of research and good example. We already expect the members of our profession to be able to explain their actions in a way that aligns with our ethos and values. The same standards we hold for our service people ought also to be applied to our intelligent machines.

AI, Everywhere, All at Once

As AI becomes more capable and widespread, it’ll also become invisible. Arthur C. Clarke’s statement that “any sufficiently advanced technology is indistinguishable from magic” holds true; it could be paraphrased as “any sufficiently advanced technology is invisible.” Earlier applications of machines imitating the human experience, such as text-to-speech, optical character recognition, or predictive text, were interesting for a little while, then became uninteresting as they were folded into the productivity streams we already use. We no longer marvel at our iPhone’s ability to translate text in a photograph or read an email out loud; technology which would appear magical to a visitor from the 1950s is routine for us. The same thing will happen to machine intelligence. Every application we use, from Outlook to SAP to SitaWare, will be imbued with layers and layers of machine brainpower – and after a while, we won’t find it any more remarkable than Spotify’s recommendation engine. The platformization of this technology, meanwhile, will distribute it evenly to anyone with an internet connection; and with the growing number of LLMs that can operate locally, even an internet connection will eventually be optional, as the weights of AI models are inscribed directly onto silicon. So, if raw technological power won’t provide our ‘unfair advantage’, we must look to what isn’t evenly distributed; our human capital.

Communication

The military commander’s primary skill, and enduring challenge, is the logical clarification of thought – the ability to express intent in a way that instantiates the effect the commander desires. Whether through orders, or a brief, or a performance review, our ability to distill thoughts and feelings into actionable language has a material effect on the holistic performance of our organization. It will be a major factor when working with AI systems that don’t have the context, values, and worldview that a human soldier has simply by virtue of being human. Close enough will not be good enough. It will be crucial to provide sufficient information, including the objectives, necessary context, constraints, and desired endstate, to have the AI understand the desired result, understand its opportunities and constraints, and avoid unintended consequences. And should the output be outside of what the user intended, the AI should be able to show its logical chain so the user can identify where the prompt went wrong and how to avoid this mistake in the future. The combination of users well-trained in communication, interacting with explainable AI models that can provide feedback, will create a positive feedback loop that leads the organizational advantages of AI utilization squarely up and to the right.

Conclusion

The use of AI in military operations will require a renewed focus on communications skills. The commander must be able to articulate clear and precise instructions to the machine to ensure it performs as intended. The ability to communicate effectively is no longer just a soft skill, but a technical one, as natural language becomes the main method of interacting with advanced technology. The main takeaway is that, in the age of intelligent machines, the importance of training our servicepeople to express tasks, objectives, intent, and reasoning in a clear, concise, and precise manner remains paramount. Words mean things. Human capital remains our decisive resource.