The launch of ChatGPT in late 2022 sparked considerable hype around generative AI chatbots. ChatGPT is built on a large language model (LLM): an AI model trained on a large corpus of text, through which it has developed an understanding of language.
In threat intelligence, we often deal with human-readable information, which, unlike machine-readable formats (e.g., STIX), can be hard to integrate into automated workflows. Since LLMs can process human-readable information, they can be of great benefit to threat intelligence practitioners.
In this session, we will share insights from our journey implementing LLMs to optimise our operations. We will cover key natural language processing tasks across the intelligence cycle: summarisation to reduce noise in the collection phase, classification to drive tagging in MISP during the processing phase, and generative tasks for reporting in the dissemination phase. We will also show how to access and fine-tune open-source LLMs for your own tasks.
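To give a flavour of the classification use case, the sketch below shows one possible way to suggest MISP tags from a human-readable report snippet using an open-source model via the Hugging Face transformers zero-shot classification pipeline. The model name, report text, and candidate tags are illustrative assumptions, not our production setup.

```python
# Illustrative sketch only: suggest MISP taxonomy tags for a report snippet
# using zero-shot classification with an open-source model.
from transformers import pipeline

# Any open-source NLI-style model can be used here; bart-large-mnli is a
# common default for zero-shot classification.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

report_snippet = (
    "The actor delivered a malicious Excel attachment via spear-phishing "
    "emails targeting finance staff, then deployed a credential stealer."
)

# Hypothetical candidate labels mirroring MISP taxonomy / galaxy tags.
candidate_tags = [
    'misp-galaxy:mitre-attack-pattern="Phishing"',
    'malware_classification:malware-category="Ransomware"',
    'malware_classification:malware-category="Stealer"',
]

result = classifier(report_snippet, candidate_tags, multi_label=True)

# Keep only confident suggestions; an analyst reviews them before they are
# pushed to MISP (e.g., via PyMISP's tag() call).
for tag, score in zip(result["labels"], result["scores"]):
    if score > 0.8:
        print(f"Suggested tag: {tag} (score={score:.2f})")
```

In practice, the candidate tag list would be drawn from the taxonomies and galaxies enabled in your MISP instance, and suggestions would be reviewed by an analyst before being applied.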
Our session won’t teach you the best ChatGPT prompts for OSINT collection. Instead, you will learn how even small CTI teams can punch above their weight by leveraging LLMs to work more efficiently. No data science expertise required!