Raymond Kent, ASTC, Assoc. AIA, Senior Design Leader and Principal, DLR Group
Although artificial intelligence (AI) has been around for decades, it has become the new buzzword in everything from design, content creation, and business analytics to cybersecurity, with many companies diving headfirst into a morass of information, solutions, and strategies. There is little history or experience with the benefits and pitfalls of its use, and even experienced companies are struggling to determine what their strategy should be and how best to deploy it. The current buzzwords often confuse people about what exactly artificial intelligence is compared to data analytics. Artificial intelligence is, in essence, a system that learns from experience by replicating cognitive abilities and automating tasks autonomously; it can use data analysis to drive results, but it does not necessarily rely on data analysis to draw conclusions from a particular data set.
A recent example of how hype can be harmful is the use and integration of Internet of Things (IoT) devices, which can provide users with many advantages and opportunities if deployed thoughtfully. These systems can also wreak havoc when unskilled or uninformed users go astray and piece together systems from manufacturers of dubious origin, setting the stage for the often disastrous actions of bad actors or of well-intentioned employees whose errors leave virtual backdoors open. For a while, it seemed as if every technology manufacturer was putting an IoT label on its products to generate sales. The result was products that were insecure, exposed vulnerable backdoors, or broadcast data over public networks, creating opportunities for mischief-makers to cause problems. This is reminiscent of the 1983 movie War Games, in which characters played by Matthew Broderick and Ally Sheedy mistakenly access a U.S. military supercomputer thinking it is a video game and nearly trigger a nuclear war.
Although it is just a movie, it highlights some of the current challenges at the intersection of artificial intelligence and cybersecurity: no system is 100% secure, and every coin has two sides, good and bad. Artificial intelligence certainly has benefits, and those benefits should be explored with the advice and guidance of qualified participants, tailored to a company's unique circumstances, so that mission, vision, and actions can be aligned in the most cost-effective way. Unfortunately, because of the complex challenges of integrating and applying these strategies and tools, simply reading about them online or attending seminars is not enough, and a lack of qualified staff can undermine effectiveness. Many companies do not realize how large the attack surface of today's networks has become. Often there are hundreds or thousands of devices that need attention, including personal devices that may find themselves behind a firewall, opening vulnerabilities that can go unnoticed. Coupled with the movement of massive amounts of data, this creates a staggering number of attack vectors that exceed the ability of humans to manage alone.
The biggest advancement in artificial intelligence that most companies can take advantage of is the development of machine learning, much as the War Games supercomputer could rapidly test nuclear strike scenarios to determine the best outcome and either communicate the best course of action to a human or act on its own. Machine learning allows systems to test many events within given parameters at lightning speed to find the best possible outcome, while storing all of these scenarios and recalling them for comparative analysis against new variables, improving both response times and the accuracy of results. This capability, coupled with human training, even allows for better analysis of specific activity, reducing false positives and false negatives and giving security teams the best chance of making the final call.
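The workflow described above, a model that learns a baseline of normal behavior, scores events at machine speed, and flags only the outliers for a human analyst to judge, can be sketched in a few lines. This is a minimal illustrative example, not any vendor's actual product: the traffic feature, the threshold, and the numbers are all hypothetical.

```python
import random
import statistics

# Hypothetical per-event feature: bytes transferred in a network session.
# The "model" learns normal behavior from historical traffic, then scores
# new events far faster than an analyst could review them by hand.
random.seed(42)
normal_traffic = [random.gauss(500, 50) for _ in range(10_000)]  # training history

mean = statistics.fmean(normal_traffic)
stdev = statistics.stdev(normal_traffic)

def anomaly_score(bytes_transferred: float) -> float:
    """Distance from the learned baseline, in standard deviations."""
    return abs(bytes_transferred - mean) / stdev

def triage(event: float, threshold: float = 4.0) -> str:
    # A high threshold suppresses false positives; anything flagged
    # still goes to a human analyst, who makes the final call.
    return "flag for analyst" if anomaly_score(event) > threshold else "pass"

print(triage(510))    # a typical session
print(triage(5_000))  # an exfiltration-sized outlier
```

The design point mirrors the article's argument: the machine handles the volume, while the threshold and the final verdict remain tunable by, and accountable to, people.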
It's not all sunshine and roses, as bad actors are also harnessing the power of artificial intelligence to their advantage, at a pace humans cannot match. For example, hackers can use the same machine learning algorithms to locate specific data and train their attacks around expected warning signs. Hackers can also harness neural networks and deep learning to develop mutating zero-day threats that evade detection, especially when the threat is delivered through a transient device or through sophisticated phishing tools themselves built with artificial intelligence. Artificial intelligence can likewise overwhelm a system's defenses by probing it with a large number of potential exploits, much as the War Games supercomputer ran nuclear war simulations until a weakness was discovered and quickly exploited. If a dataset is exfiltrated undetected, even a company's own AI solutions can be compromised, often making it impossible to recover the correct dataset and opening a gaping hole in any defenses. Finally, bias and discrimination in AI decision-making create further vulnerabilities, which can be exploited from a variety of sources. These biases can also lead to false positives and to discriminatory practices against employees or customers, often with serious consequences.
Decisions about which AI tools to use, including custom tools versus commercially available ones, are more difficult than they seem. Some organizations have suspended the use of commercial products such as ChatGPT, DALL-E 2, Midjourney, and Google Bard over concerns about disclosure of private data and security issues related to unauthorized access to corporate databases. There is also risk in using AI tools to produce work products such as documents, drawings, and legal briefs, which could put the company in conflict and potentially expose it to litigation or other business disruption. Deepfakes are now a reality as well, using artificial intelligence to generate misinformation and vulnerabilities that, if misused or misunderstood, could infiltrate corporate networks or cause reputational damage to a company.
Entering this Wild West requires considering several key factors, and having the right voices in the decision-making process about which strategies to deploy, and more importantly why to deploy them, is critical to success. This is not a Jurassic Park "spared no expense" moment; it is a time to spend what you can afford, thoughtfully, on the best-quality tools. Pay attention not only to the quality of the dataset used to train a model, but also to the problem being solved, so that you choose the right model. Consider the hardware that will support the processes you are executing, the resources currently in place, and any extensions to the model-development process. As the technology evolves and new models gain importance, scalability will become critical. Regularly take stock of the AI tools your team has deployed and evaluate their relevance and effectiveness: did you get the results you wanted, or does the model need to be tweaked or abandoned entirely? Pay special attention to the security, privacy, and ethical implications of any solution you settle on; reducing bias and mitigating potential threats can save a company time and resources in the long run.
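One concrete way to "take stock" of a deployed detection tool is to audit its verdicts against analyst-confirmed ground truth for a review period and track precision and recall over time. The sketch below uses made-up labels purely for illustration; the review cadence and thresholds would be your own.

```python
# Hypothetical quarterly audit: compare the tool's verdicts against
# analyst-confirmed incidents (1 = malicious, 0 = benign).
predictions  = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]  # what the tool flagged
ground_truth = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]  # what analysts confirmed

tp = sum(p == 1 and t == 1 for p, t in zip(predictions, ground_truth))
fp = sum(p == 1 and t == 0 for p, t in zip(predictions, ground_truth))  # false alarms
fn = sum(p == 0 and t == 1 for p, t in zip(predictions, ground_truth))  # missed incidents

precision = tp / (tp + fp)  # of everything flagged, how much was real?
recall = tp / (tp + fn)     # of real incidents, how many were caught?

print(f"precision={precision:.2f} recall={recall:.2f}")
```

If either number drifts downward from one review to the next, that is the signal the paragraph above describes: the model needs to be retrained, tuned, or abandoned.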
Finally, don't forget about maintenance and operating budgets. Having a shiny new car is great, but if you can't change the oil or don't know how to drive it, it is just a shiny object in the driveway. Expect the system to require upkeep, and plan for the right people to manage it. Now, let's go play a game of chess.
About the Author:
Raymond Kent is an award-winning, internationally recognized technology consultant working in the architecture and engineering sectors. He regularly advises top clients across multiple industries on topics including artificial intelligence, IoT, sustainability, augmented and virtual reality, and more.