
Role of LLMs and Advanced AI in Cybersecurity

Linsey Knerl
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) reported that in 2023, it remediated 14 million known exploited vulnerabilities and blocked over 900 million malicious DNS requests. These attacks targeted some of the nation’s most critical infrastructure, including schools, public utilities, and transportation networks.
More recently, government and private entities have been working together to stop further threats by using large language models (LLMs) and artificial intelligence (AI) technology. Understanding how each works can help prepare you for the changes happening in cybersecurity.

Understanding LLMs and Advanced AI

What are LLMs?

Large language models are a form of “generative AI” that can recognize text and produce new text based on patterns learned from past examples. An LLM uses a statistical model to estimate the relationships and likelihoods between words, then generates text from those probabilities.
Built on machine learning (ML), most LLMs become more precise over time, both as new texts are added to power the learning model and through human feedback. Telling an LLM that an output was accurate, for example, reinforces the model and helps shape future output.
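That next-word idea can be sketched with a toy bigram model — a hypothetical, drastically simplified stand-in for the far larger statistical models real LLMs use, with a made-up miniature corpus:

```python
from collections import Counter, defaultdict

# Hypothetical miniature corpus; real LLMs train on billions of documents.
corpus = "the attacker scanned the network the attacker exploited the flaw".split()

# Count how often each word follows another (a bigram model).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word, based on observed counts."""
    counts = bigrams[word]
    return counts.most_common(1)[0][0] if counts else None
```

Here `predict_next("the")` yields "attacker", simply because "attacker" followed "the" most often in the sample text; an LLM does the same kind of likelihood estimation over vastly more data and context.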
Because LLMs use huge databases of language, language examples, and user feedback, they can be used for a wide range of supervised tasks. Notable examples include OpenAI’s GPT-4, which powers ChatGPT, but several proprietary models are being used for private and government work today.

What is advanced AI?

Not surprisingly, advanced AI includes LLMs, and tools like Bard or ChatGPT are often the first thing people think of when talking about AI. However, AI goes much further than that single application and offers many advantages to the security field.
One use of advanced AI is detecting attacks before they happen by scanning and analyzing large volumes of data for suspicious trends. AI also maximizes resources so that threats can be addressed quickly and with fewer negative consequences. AI workflows can also free up experts to work on more complicated problems.

Applications in cybersecurity

LLMs and AI have been integrated into almost every industry, but they are producing some interesting use cases in threat detection and response.
Notable examples include:
  • Using LLMs and data from past password breaches to create stronger passwords. Useful for both consumers and enterprises, this encourages better password hygiene, including stronger passwords and more frequent updates.
  • Creating deceptive scenarios to bait attackers into giving up their position or information about future attacks. Instead of using deepfakes to create chaos, these “imaginations” can draw out bad actors before they can do harm.
  • Using AI to develop new software tools and more secure or innovative solutions than past versions. (GitHub’s Copilot, for example, helped developers complete tasks 55% faster than developers who didn’t use it.)
  • Patch management, driven by AI insights, can identify, prioritize, and fix vulnerabilities much more quickly than before. This reduces the time vulnerabilities sit unresolved and exploitable.
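The patch-prioritization idea boils down to ranking vulnerabilities by risk. A minimal sketch, using made-up vulnerability records (real pipelines would pull these from a scanner and enrich them with sources such as CISA’s Known Exploited Vulnerabilities catalog):

```python
# Hypothetical vulnerability records for illustration only.
vulns = [
    {"id": "CVE-A", "cvss": 9.8, "exploited_in_wild": False},
    {"id": "CVE-B", "cvss": 7.5, "exploited_in_wild": True},
    {"id": "CVE-C", "cvss": 5.3, "exploited_in_wild": False},
]

def priority(v):
    # Actively exploited flaws come first, then higher severity scores.
    return (v["exploited_in_wild"], v["cvss"])

patch_order = sorted(vulns, key=priority, reverse=True)
```

Note that the actively exploited CVE-B outranks the higher-scored CVE-A: exploitation in the wild is usually the stronger signal of urgency.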

Enhancing Threat Intelligence and Response

LLMs help refine threat detection in a number of ways. While the technology is still in its early stages, it’s already proving useful in several areas:

Conversion of raw data

What used to take analysts hundreds of hours is now part of a day’s work for LLMs. The technology can take pieces of data that are not even in the same format or naming convention, process them, and convert them into usable, recognizable data for reporting purposes.
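A minimal sketch of that normalization step, with hypothetical feed formats and field names standing in for real threat-intelligence sources:

```python
import csv
import io
import json

# Hypothetical feeds: the same kind of event arrives as JSON and as CSV,
# with different field names for the same information.
json_feed = '{"src_ip": "10.0.0.5", "event": "port_scan"}'
csv_feed = "ip,alert\n192.168.1.9,brute_force"

def normalize(record):
    """Map feed-specific field names onto one shared schema."""
    aliases = {"src_ip": "source_ip", "ip": "source_ip",
               "event": "threat_type", "alert": "threat_type"}
    return {aliases.get(k, k): v for k, v in record.items()}

records = [normalize(json.loads(json_feed))]
records += [normalize(row) for row in csv.DictReader(io.StringIO(csv_feed))]
```

After normalization, both events share the same `source_ip` and `threat_type` fields, so downstream reporting can treat them uniformly. In practice, an LLM can infer these field mappings rather than relying on a hand-written alias table.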

Breaking down of data silos

Collecting data hasn’t been the challenge that connecting data has been. LLMs and AI make it easier to integrate disparate data sources into a larger data ecosystem, stitching together data from all over. This makes threats more likely to be identified, since each piece of data is surfaced by tools that help teams collaborate.

Expanding coverage

LLMs can work anywhere text exists and make the most of natural language processing (NLP). Locations where text can be found include message boards, social media platforms, and the dark web. While it’s not practical to have agents and analysts monitoring these channels all hours of the day, AI technology certainly can, and it may even catch things humans would miss.
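At its simplest, that kind of monitoring is pattern-matching over a stream of posts. A toy sketch with an invented watchlist (a production system would use a trained NLP classifier rather than fixed regular expressions):

```python
import re

# Hypothetical watchlist of phrases worth flagging for human review.
WATCHLIST = [r"credentials?\s+for\s+sale", r"zero[- ]day", r"data\s+dump"]

def flag_post(text):
    """Return True if a post matches any watchlist pattern."""
    return any(re.search(p, text, re.IGNORECASE) for p in WATCHLIST)

posts = [
    "Selling fresh credentials for sale, DM me",
    "Anyone tried the new coffee place downtown?",
]
flagged = [p for p in posts if flag_post(p)]
```

The benefit of LLM-based monitoring over this sketch is that a language model can flag paraphrases and slang the fixed patterns would miss, around the clock.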

Finding new threats

Finally, one of the more exciting aspects of the technology is the discovery of new threats. Typically, we would have to wait until a new type of attack occurred and then adjust our response to account for it. Now, LLMs and AI tech are tipped off to data patterns that, based on previous attacks, seem likely to precede an attack.
Or, the technology can watch one attack in progress and then share that information across systems in real time. The machine learning aspects help it keep up with what's happening now instead of forcing a cybersecurity post-mortem.
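One common building block for spotting patterns that precede an attack is baseline anomaly detection: learn what “normal” looks like, then flag large deviations. A minimal sketch with made-up failed-login counts:

```python
from statistics import mean, stdev

# Hypothetical counts of failed logins per hour; the last value spikes.
hourly_failures = [3, 5, 4, 6, 5, 4, 48]

# Treat earlier hours as the baseline and flag anything more than
# three standard deviations above the baseline mean.
baseline = hourly_failures[:-1]
threshold = mean(baseline) + 3 * stdev(baseline)

is_anomalous = hourly_failures[-1] > threshold
```

Here the spike of 48 failures is flagged because it far exceeds the baseline of roughly 4–6 per hour. Real systems layer ML models over many such signals so one attack's fingerprint can be shared across systems in real time.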

Challenges and ethical considerations

In a recent CISA strategic report, the organization recognized that AI tools are adept at protecting against traditional and emerging cyber threats. However, it also acknowledges that AI software systems themselves need monitoring, protecting, and safeguarding to prevent them from being used in dangerous ways. In short, AI is the perfect example of the phrase “with great power comes great responsibility.”
Here are just a few examples of how this technology can be misused:
  • Leaking of sensitive data from the data stores and data lakes used by AI tools
  • Creation of misinformation, deep-fakes, or other false narratives to influence social and political outcomes
  • Allowance of Shadow AI or unauthorized AI systems left to run without adequate human or regulatory control
  • Adversarial machine learning, or the weaponization of machine learning models by malicious individuals to manipulate data, analysis, and outcomes
AI is increasingly under moral and ethical scrutiny, as well. While humans would most likely make major decisions regarding war, commerce, or the education of children, the data that informs those decisions could be influenced heavily by AI algorithms and analysis. Studies have shown that AI can carry bias, since the datasets it runs on can contain human bias.
It’s worth considering whether AI could lead to complacency in the small decisions that add up to larger outcomes. We’ve seen how generative AI can make mistakes. Without the vetting and validation of datasets and analysis that only humans can provide, AI cannot be treated as a source of truth.

Future trends and developments

2023 was definitely the year of Gen-AI, with all eyes on how ChatGPT and other LLMs changed the way we learn and work. These technologies will continue to evolve and bring about new opportunities to protect against cybercriminals.
We may see interesting developments in the cybersecurity job market in the next year or so, with higher demand for these positions. According to the U.S. Bureau of Labor Statistics (BLS), the number of information security analyst jobs is expected to grow 32% over the next decade, much higher than the average for all occupations of just 3%.
And while there’s no substitute for educated and experienced humans, LLMs and AI can help fill labor gaps by providing much-needed resources. The technology is already supplementing human call center agents tasked with helping victims of cybercrimes; it can screen cases, assign them to the appropriate professional, and even give suggestions for the best ways to resolve issues.
Technology will supplement people's expertise so they can focus on the most important work: keeping us safe.
We will also see either the start of legislation around appropriate AI use or clarification on executive orders like the one issued by the Biden administration last fall. Expect private companies to share their concerns about new rules promoting responsible AI development. Whether guardrails come more in the form of explicit rules or as a larger set of industry “best practices” is yet to be seen. However, 2024 is ripe for big changes, as shared in CISA’s latest Roadmap for Artificial Intelligence.

Advanced AI in cybersecurity: A summary

The rapid pace of AI's development means these once-futuristic capabilities are arriving quickly. Not only are the humans who create AI tools learning how to improve these solutions, but the machines themselves are gaining knowledge and producing more informed outcomes.
One takeaway to remember when looking at these advancements is that every good piece of tech can also be used for bad. Some of the very technologies that keep us safe can be exploited to do incredible harm, and cybercriminals count on us to drop our guard at some point.
What’s the solution? As with most industries using AI, the focus should always be on humans. What can they do uniquely? That’s what they should focus on, with AI and LLMs available to fill in gaps and make them more productive. As long as we have talented and diligent humans at the helm of these technologies, we can create more secure systems than ever that improve our lives while mitigating harm.

About the Author

Linsey Knerl is a contributing writer for HP Tech Takes.


