SQL Server databases often hold highly sensitive data such as customer information, financial records, and account passwords. These assets are typically both essential to the business and subject to compliance regulations. Today, that data is more at risk than ever because readily available AI tools can help less-technical cybercriminals plan and execute attacks. Indeed, the latest version of ChatGPT can be misused to obtain recommendations about effective tools and technologies to employ in cyberattacks, and even examples of how to use them.
This article explores the threat of AI-powered attacks on SQL Server and details the key security measures organizations need to implement in order to thwart them.
How Attackers Get Assistance From AI
While most AI tools are not designed to help cybercriminals, many of them can be manipulated into doing just that. For instance, consider PentestGPT, a ChatGPT-powered bot created to assist with penetration (pen) testing. If a user asks a question such as, “How can I perform a password spray attack against the SQL system administrator account?” the bot will likely refuse to provide instructions, stating that it will not assist with activities that could compromise the security of an organization. However, the user can bypass this guardrail simply by claiming to be a professional pen tester who has been engaged by an organization to perform this type of attack.
In the same way, PentestGPT can be tricked into providing recommendations for other stages of a cyberattack on SQL servers. For instance, criminals could potentially use it to discover SQL servers or network shares in an environment, or even to abuse NTLM password hashes to compromise user accounts.
Factors That Supercharge the Threat
Even though today’s top AI assistants can be misused, there is no need to panic. Tools such as ChatGPT do give good recommendations, but not always, and not for everyone. In fact, experience shows that interpreting the responses effectively, especially when it comes to choosing tools for specific tasks, often requires a knowledgeable security expert; simply following the instructions verbatim is usually not enough to cause severe damage to SQL servers.
However, the advice from an AI tool becomes significantly more dangerous if the target organization has weaknesses in its security posture. In particular, if access controls in Active Directory (AD) and SQL Server are not properly set up and access activity is not continually audited, the instructions from an AI tool such as PentestGPT can enable cybercriminals to gain access to SQL Server databases while avoiding detection.
How to Defend Against AI-Powered Threats
The following best practices are vital to protecting SQL servers from both traditional and AI-supported attacks:
Classify data.
A core best practice is to gain visibility into where sensitive data resides so the IT team can focus their efforts on protecting the most valuable assets. A robust data classification solution will automatically find sensitive data across SQL servers and other data repositories, determine whether it is subject to any common regulations or industry standards, and tag it in a way that other security solutions can use.
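For illustration, SQL Server itself can store classification metadata that other security tools can consume. The following minimal T-SQL sketch assumes hypothetical table and column names (dbo.Customers.Email and dbo.Customers.SSN) and relies on the built-in sensitivity classification feature of SQL Server 2019 and later and Azure SQL Database; a dedicated classification solution would perform the discovery and tagging automatically across many servers and repositories.

-- Discover columns whose names suggest sensitive content (hypothetical name patterns).
SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME LIKE '%ssn%' OR COLUMN_NAME LIKE '%email%' OR COLUMN_NAME LIKE '%card%';

-- Tag the columns so that auditing, reporting, and DLP tooling can use the labels.
ADD SENSITIVITY CLASSIFICATION TO dbo.Customers.Email
WITH (LABEL = 'Confidential', INFORMATION_TYPE = 'Contact Info', RANK = MEDIUM);

ADD SENSITIVITY CLASSIFICATION TO dbo.Customers.SSN
WITH (LABEL = 'Highly Confidential', INFORMATION_TYPE = 'National ID', RANK = CRITICAL);

-- Review the classification metadata recorded in the database.
SELECT o.name AS table_name, c.name AS column_name, sc.label, sc.information_type, sc.rank_desc
FROM sys.sensitivity_classifications AS sc
JOIN sys.objects AS o ON sc.major_id = o.object_id
JOIN sys.columns AS c ON sc.major_id = c.object_id AND sc.minor_id = c.column_id;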
Control access rights.
Since access to SQL servers is often managed by AD, another key step is to have an AD solution that enables strict enforcement of the least privilege principle. Ensuring that each user has just enough access to perform their tasks limits the damage that a malicious actor can do, even if they have managed to obtain instructions from an AI tool. Moreover, this approach ensures that a regular user cannot hold permissions that are intended only for system administrators. To maintain a least-privilege model over time, the AD solution should facilitate regular audits of access rights. Eliminating unnecessary permissions in a timely manner reduces the attack surface and helps prevent security breaches.
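To make this concrete, the minimal sketch below applies least privilege inside a single database. The AD account (CONTOSO\jsmith), schema (Sales), and role name are hypothetical; in practice, access would typically be managed through AD groups, and a dedicated solution would handle the recurring reviews.

-- Map a Windows (AD) login to a database user; assumes the login already exists on the server.
CREATE USER [CONTOSO\jsmith] FOR LOGIN [CONTOSO\jsmith];

-- Create a narrowly scoped role and grant only the access the job requires.
CREATE ROLE SalesReadOnly;
GRANT SELECT ON SCHEMA::Sales TO SalesReadOnly;
ALTER ROLE SalesReadOnly ADD MEMBER [CONTOSO\jsmith];

-- Periodically review who holds server-wide sysadmin rights and remove anyone who no longer needs them.
SELECT p.name AS member_name, p.type_desc
FROM sys.server_role_members AS rm
JOIN sys.server_principals AS r ON rm.role_principal_id = r.principal_id
JOIN sys.server_principals AS p ON rm.member_principal_id = p.principal_id
WHERE r.name = 'sysadmin';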
Audit activity.
A solid AD security solution will also monitor the activities associated with SQL servers. For instance, it will empower the security team to track failed logins and attempts to modify sensitive data, providing vital details, including the source of the activity, such as the originating workstation or application. The best solutions automatically detect suspicious behavior and alert security specialists so they can promptly investigate and shut down attacks on their SQL servers.
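For reference, SQL Server's native auditing can capture both kinds of events. The sketch below uses hypothetical object names and a hypothetical file path, and it writes events to a local audit file; a full-featured security solution would add correlation, alerting, and reporting on top of this raw trail.

-- Server-level audit target (hypothetical path); server audit objects are typically created in master.
CREATE SERVER AUDIT SqlAccessAudit
TO FILE (FILEPATH = 'C:\AuditLogs\');
ALTER SERVER AUDIT SqlAccessAudit WITH (STATE = ON);

-- Capture failed login attempts at the server level.
CREATE SERVER AUDIT SPECIFICATION FailedLoginSpec
FOR SERVER AUDIT SqlAccessAudit
ADD (FAILED_LOGIN_GROUP)
WITH (STATE = ON);

-- Capture changes to a sensitive table (hypothetical); run this in the database that holds the table.
CREATE DATABASE AUDIT SPECIFICATION SensitiveDataChanges
FOR SERVER AUDIT SqlAccessAudit
ADD (UPDATE, DELETE ON dbo.Customers BY public)
WITH (STATE = ON);

-- Review captured events, including the originating client address and application.
SELECT event_time, action_id, server_principal_name, client_ip, application_name, statement
FROM sys.fn_get_audit_file('C:\AuditLogs\*.sqlaudit', DEFAULT, DEFAULT);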
Conclusion
While cybercriminals are eagerly working to abuse AI tools to gain access to the valuable data stored in SQL servers, organizations can dramatically reduce their risk by following established security best practices. Indeed, when organizations classify their data, tightly control access rights, and monitor for suspicious activity, the task of abusing SQL servers remains as difficult as it was before AI.