15 Jun ChatGPT 3/3: AI for IT Security
The hype takes its course
The pace of advancement and proliferation of AI technology has now increased tremendously. Just in the few weeks since the last blog on this topic:
- further offshoots and improvements of ChatGPT have appeared, as well as new AI-supported software for image and video editing
- the satirical magazine “Der Postillon” released DeppGPT, a deliberately dim-witted parody of ChatGPT
- an AI-controlled drone of the USAF (allegedly?!) “eliminated” its own operator in a simulation in order to achieve a better mission result
- nearly 4000 jobs were replaced by AI in May in the USA alone (source)
And these are just examples. The fact is: AI technology is gaining increasing attention and relevance.
The fact that the use of AI also brings new dangers for IT security is something we already noted in the last blog on the subject of AI. After all, AI is not only a mirror of human knowledge, but increasingly also of human behavior and its associated strengths and weaknesses. And just as humanity is subject to basic ethical rules to a greater or lesser extent, globally applicable rules are also urgently needed for the use of AI.
But since this is beyond our sphere of influence, we would rather devote our attention to the potential of AI that is conducive to IT security.
The vision
In this context, we assume the following advantages of AI technology:
- Permanent access to all knowledge regarding:
  - Methods and tools for vulnerability detection and remediation
  - Methods and tools for assessing and tracking security risks
  - Strategies and measures to reduce security risks
  - Security architectures
- Application of all knowledge in near real time
- Availability and scalability
If this is too scientific for you, perhaps the following simple image will help:
AI is like a huge team of specialists that autonomously detects, evaluates and remediates vulnerabilities around the clock and everywhere in an IT environment. In other words, a self-healing system.
That sounds dreamy, doesn’t it?
Unfortunately, this is (still) a vision, far from reality. After all, AI – just like humans – can only develop its full potential if all the necessary mechanisms are available in their full functional scope and mesh seamlessly with each other. And we have not yet encountered even the rudiments of this ideal.
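Even as a vision, the detect-evaluate-remediate loop behind this “self-healing system” can be made tangible as a toy sketch. All rules, settings and findings below are invented for illustration; no real product works this simply:

```python
# Toy sketch of a "self-healing" loop: detect, evaluate, remediate.
# Every rule and setting here is a hypothetical illustration.

def detect(system_state):
    """Return the findings that violate the (toy) rule set."""
    findings = []
    if system_state.get("smbv1_enabled"):
        findings.append("smbv1_enabled")
    if system_state.get("patch_age_days", 0) > 30:
        findings.append("patches_outdated")
    return findings

def evaluate(finding):
    """Assign a severity to a finding."""
    severities = {"smbv1_enabled": "high", "patches_outdated": "medium"}
    return severities.get(finding, "low")

def remediate(finding, system_state):
    """Apply the (toy) fix for a finding and return the new state."""
    if finding == "smbv1_enabled":
        system_state["smbv1_enabled"] = False
    elif finding == "patches_outdated":
        system_state["patch_age_days"] = 0
    return system_state

def self_healing_pass(system_state):
    """One pass of the loop; the vision is that this runs around the clock."""
    for finding in detect(system_state):
        print(f"{finding}: severity={evaluate(finding)}, remediating")
        system_state = remediate(finding, system_state)
    return system_state

state = self_healing_pass({"smbv1_enabled": True, "patch_age_days": 45})
print(detect(state))  # no findings remain after remediation
```

The hard part is, of course, everything this sketch hides: where the rules come from, who keeps them correct, and what happens when a fix breaks something.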
What’s missing?
Mechanisms must be available to AI in the following areas:
Knowledge and Intelligence
One could certainly write a separate blog about the definition of intelligence. In the case of human intelligence, however, it may be considered settled that intelligence is a gift that enables cognition and thus generates knowledge. In the case of AI, it is rather the other way round:
The supposed intelligence results from the availability of knowledge. That knowledge is programmed in the form of rule sets and is therefore limited in its creativity (and, ultimately, just as error-prone as humans are). So for AI to act intelligently, it needs knowledge, and preferably unlimited knowledge.
The necessary knowledge is basically available and, thanks to the Internet, can also be retrieved centrally and quickly. However, AI needs the knowledge in the form of a structured and immediately usable database.
ChatGPT, for example, can “predict” the weather because it uses an interface to the database of a weather service. It does not derive a forecast itself from millions of weather data points; it uses the stored results of another computer, because the calculation formulas required for the forecast are not available to ChatGPT and would cost too much computing time. In this respect, ChatGPT relies on the immediately available knowledge of a source considered trustworthy and reproduces that knowledge.
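The delegation pattern described here can be sketched in a few lines. The “weather service” below is a stand-in stub, not a real API; the point is only that the answer is reproduced, not computed:

```python
# Hypothetical sketch of the delegation pattern: the language model does not
# compute a forecast itself, it calls a trusted external source and wraps
# that source's stored result in prose. The service below is a stub.

def weather_service_forecast(city):
    """Stand-in for an external service with its own models and data."""
    precomputed = {"Berlin": "18 °C, cloudy", "Hamburg": "16 °C, rain"}
    return precomputed.get(city, "no data")

def answer_weather_question(city):
    """The 'AI' merely reproduces the trusted source's answer."""
    return f"The forecast for {city} is: {weather_service_forecast(city)}"

print(answer_weather_question("Berlin"))
```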
This is similar to a person with a low IQ quoting Goethe or Einstein. Their peers consider this person intelligent only until the quotation is recognized as such. Incidentally, this is also why humans should not stop acquiring knowledge themselves, despite all the AI 😉.
Supervision/Monitoring
Monitoring is certainly one of the basic disciplines of any IT operation. There are numerous proven and sophisticated tools for monitoring security-relevant information based on rules and detecting deviations (see next section). These tools can be filled with the required rule sets via interfaces. Relevant/proven rule sets are available as knowledge (with the restrictions mentioned above).
Many years of consulting experience show that even seasoned IT personnel have a hard time with this discipline. This is because the rules have to be constantly updated to deliver their benefits.
For AI, this means permanently optimizing these rule sets both on the basis of experience (i.e., specific incidents) and on the basis of initiative (i.e., suspicion). Modern endpoint detection and response (EDR) tools are already successfully using AI. Our partner SpecterOps is also already tinkering with how AI can be used in Bloodhound Enterprise to monitor Active Directory more intelligently and ultimately more effectively.
As a basis for permanent optimization, AI-based tools use both experience (their own and that of customers) and the insights gained from evaluating the “crowd”, i.e. the data and behavior patterns collected on the monitored systems.
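As a toy illustration of such crowd-based optimization: an alert threshold can be re-derived from the behavior observed across the monitored systems, instead of staying at a hand-set constant. The rule, the metric and all numbers below are invented:

```python
# Hypothetical sketch: tune an alert threshold from "crowd" data rather
# than leaving it at a fixed, hand-maintained value.
from statistics import mean, stdev

def tune_threshold(observed_rates, sigmas=3.0):
    """Set the threshold to mean + N standard deviations of normal traffic."""
    return mean(observed_rates) + sigmas * stdev(observed_rates)

# Failed logins per minute seen across monitored systems during normal operation:
crowd_data = [2, 3, 1, 4, 2, 3, 2, 5, 3, 2]
threshold = tune_threshold(crowd_data)

def check(rate, threshold):
    """The monitoring rule itself: alert when the rate exceeds the threshold."""
    return "alert" if rate > threshold else "ok"

print(check(40, threshold))  # a clearly anomalous rate triggers an alert
print(check(4, threshold))   # a normal rate does not
```

As new behavior patterns arrive, `tune_threshold` would simply be re-run, which is exactly the “permanent optimization” the text describes, only in miniature.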
Qualification and detection
Detection (the determination of a control deviation) is an integral part of every monitoring tool. If the rule set that triggers the detection is trustworthy and relevant, qualified detection can be assumed.
IT personnel usually rely blindly on the set of rules implemented in the monitoring tool. Questioning/re-qualifying would require too much time, which would then be lacking for correcting the deviation. And surely every administrator has been faced with the decision to ignore a supposed false alarm of the monitoring tool for reasons of time, instead of laboriously correcting the underlying rule.
AI should do better in this respect. This requires:
- follow up on every alarm → throughput is not a problem for AI
- validate the data underlying the alarm (qualification) → ultimately, AI must reproduce the triggering rule set
- in the case of a false alarm, correct the rule set → if the previous points are implemented, this is trivial for AI
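These three steps can be sketched as follows. This is a toy illustration; the rule, the threshold and the notion of a “known-benign source” are invented for the example:

```python
# Hypothetical sketch of alarm qualification: every alarm is followed up,
# the triggering rule is reproduced against the recorded data, and a rule
# that fires on known-benign activity is corrected instead of the alarm
# being silently ignored.

def qualify_alarm(alarm, rule):
    # Step 1: this function is invoked for every alarm, not just convenient ones.
    # Step 2: reproduce the triggering rule against the data behind the alarm.
    if alarm["failed_logins"] <= rule["threshold"]:
        return "not_reproducible"          # the data never supported the alarm
    # Step 3: a rule firing on a known-benign source is a false alarm;
    # correct the rule instead of ignoring the alarm.
    if alarm["source"] in rule["known_benign"]:
        rule["ignore_sources"].add(alarm["source"])
        return "rule_corrected"
    return "confirmed"

rule = {"threshold": 10, "known_benign": {"backup-job"}, "ignore_sources": set()}

alarms = [
    {"source": "backup-job", "failed_logins": 25},   # noisy but benign batch job
    {"source": "10.0.0.99", "failed_logins": 120},   # genuine brute force
]
for alarm in alarms:
    print(alarm["source"], "->", qualify_alarm(alarm, rule))
```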
Auditing
Auditing is a special form of monitoring. While monitoring continuously tracks individual parameters of an IT system, an audit is carried out on demand and in a higher-level context. A complex concept, or a questionnaire derived from it, serves as the target state and is compared with the actual state in the course of the audit. Corrective measures, which require logic and creativity, are then derived from the identified deviations.
This is precisely why external service providers are used for audits. They transfer the target state into a questionnaire (i.e., a set of rules) and then – depending on the context – talk it through with the person responsible for the system and/or check it automatically using suitable tools.
Teal uses a whole set of selected and proven tools to audit the security of the Active Directory in order to obtain as complete a picture as possible of the current situation. This is followed by a customer-specific evaluation of the results, including the derivation of recommendations for action.
The AI could perform audits spontaneously, at any time and with different focuses. To do this, it would have to
- derive concrete “questions”/measurement points from a presented complex concept
- select and execute the tools required for effective measurement
- evaluate the actual state reported back
- recommend appropriate responses (see below)
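Such an audit pass can be made concrete as a toy sketch: a target state derived from a concept is compared with the reported actual state, and recommendations follow from the deviations. All settings and measures below are invented examples, not Teal’s actual toolset:

```python
# Hypothetical sketch of an audit pass: target state vs. actual state,
# with a recommendation derived for each deviation. All settings and
# recommendations are invented for illustration.

target_state = {
    "smb_signing_required": True,
    "ntlmv1_disabled": True,
    "admin_accounts_tiered": True,
}

recommendations = {
    "smb_signing_required": "Enforce SMB signing via group policy.",
    "ntlmv1_disabled": "Disable NTLMv1 and audit remaining usage.",
    "admin_accounts_tiered": "Introduce a tiering model for admin accounts.",
}

def audit(actual_state):
    """Compare actual vs. target and derive a recommendation per deviation."""
    findings = []
    for setting, required in target_state.items():
        if actual_state.get(setting) != required:
            findings.append((setting, recommendations[setting]))
    return findings

actual = {"smb_signing_required": True, "ntlmv1_disabled": False,
          "admin_accounts_tiered": False}
for setting, action in audit(actual):
    print(f"deviation: {setting} -> {action}")
```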
We are still a long way from such an AI, but at least we are already thinking about it 😉.
Reaction
If a deviation is detected, an appropriate response must be made. Since appropriateness is relative, rules must also apply to the response.
Many companies have – whether due to their own unfortunate experience or as a result of pressure – drawn up and communicated such rules for various threat scenarios. These rules cover aspects such as communication chains and responsibilities in addition to the pure approach.
As a consequence, these rules and procedure descriptions quickly become “paper monsters” that are far removed from practice and are therefore rarely followed exactly. And where no rules apply, experience shows that IT personnel react all too humanly, i.e., impulsively, in an unstructured way and thus inefficiently.
Such characteristics are alien to AI, which is why perhaps the greatest potential of this technology lies in an efficient reaction. At the same time, however, the reaction of an AI is also the greatest danger if it is “logical” but ultimately not useful or even destructive.
For AI to fully realize its efficiency advantage, its response must be both immediate and error-free. Yet every QA textbook explains what we have long since experienced in practice: There is no such thing as a 100% error-free system. And this can be due to the fact that not every error is clearly defined and recognized as such.
In practice, for example, it often happens that the AI is only allowed to react fully autonomously to harmless incidents, while the IT staff would rather decide for themselves when a business-critical service needs to be shut down immediately. And this (certainly justified) lack of trust in the rule set ultimately costs valuable response time.
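That split, autonomous response for harmless incidents and human escalation for critical services, could be sketched like this. Service names and actions are invented examples:

```python
# Hypothetical sketch of a response playbook: the AI acts autonomously on
# low-impact systems, but escalates business-critical services to a human
# decision. All service names and actions are invented.

CRITICAL_SERVICES = {"erp-db", "payment-gateway"}

def respond(incident):
    """Return the response decision for an incident."""
    if incident["service"] in CRITICAL_SERVICES:
        # The shutdown decision stays with a human, at the cost of
        # response time, as noted above.
        return ("escalate_to_human", incident["service"])
    return ("isolate_host", incident["service"])

print(respond({"service": "print-server", "severity": "medium"}))
print(respond({"service": "erp-db", "severity": "high"}))
```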
Conclusion
The following conclusions can be drawn from the above:
- AI is not only helpful for use in the area of IT security, it is urgently needed. It will therefore assume a leading role in securing IT environments in the near future.
- For AI to become an advantage, it needs clear, comprehensive and, as far as possible, error-free rules. The rules are created on the basis of human work and experience.
- Since rules are never 100% error-free or free of gaps, AI must be allowed to optimize the rule set itself. Otherwise AI is not intelligent, but merely an automaton. Only this “freedom of decision” makes an AI “intelligent”, but it also makes it unpredictable and, under certain circumstances, a risk in itself.
Finally, we have one more recommendation. AI, like cloud services, has enormous potential that companies can leverage. With both technologies, complex, modern and almost fully automated projects can be realized cost-effectively. However, our experience also shows that many companies need to reduce “legacy” costs in order to realize this potential.
A concrete example is the expensive production machine that was bought years ago and still works in principle, but could be more modern and safer. We often see scenarios where customers maintain the status quo for cost reasons. As a result, insecure protocols cannot be turned off, or outdated operating systems that no longer receive security updates remain in use. In the end, the status quo is not only riskier, but also more expensive. Would the AI choose the status quo in such a case?
Another example was mentioned above: An appropriate response to an anomaly must be defined. Who ultimately decides whether a critical system can be taken offline or whether this would cause far greater damage? IT often does not have complete knowledge of this and relies on additional information from application owners and management.
In general, we recommend thinking about what a modern IT operation should look like. Inventory, system hardening, updating outdated systems, but most of all authorization models, stable operations and clear responsibilities need to be modernized and reviewed.
If you would like us to advise you on this, please contact us 😊.