A full 70% of banks in the US are spending more on their cybersecurity in 2025.
How do we know? We asked. As part of our latest Integris report, Understanding U.S. Banks' Annual IT Spend in 2025, we talked to nearly 1,000 bank executives across the US, and the results were pretty consistent across the board. A full 74% of them admitted they didn't think their cybersecurity spending was effective in 2024, and they want to do something about it.
With the cost of data breaches continuing to rise, it's not hard to see why. According to IBM's 2024 Cost of a Data Breach Report, financial services firms have more to lose than nearly any other business sector, with losses topping $6.08 million per breach last year, compared to the global average cost of $4.88 million.
As hackers continue to up the ante with ever more sophisticated attacks, it's tempting to pour big chunks of your IT budget into the latest shiny new cybersecurity tools. Those tools have their place, but in my experience, the road to cyber safety has never been paved with quick fixes. If you truly want your bank to have future-focused, compliant cybersecurity, you'll need to think holistically about your entire information technology portfolio, your IT infrastructure, and the way your staff and customers interact with it.
To demonstrate what that looks like for the average bank, I'm going to discuss what I think are the five biggest cybersecurity threats banks are facing in 2025, and how to meet them head-on.
Five Growing Cybersecurity Threats for Community Banks
Threat #1—Lack of a Written AI Acceptable Use Policy
Many banks are taking a "wait and see" approach to implementing new artificial intelligence tools, and that's fair enough. Yet that doesn't mean AI isn't being used in your bank every single day. Here are just a few of the ways AI could be encroaching on your systems:
- Smartphones enabled with Apple Intelligence or similar tools. Phones like these are capable of reading and understanding what they see in photos and videos. How easy would it be to take a photo over the shoulder of one of your developers while they're writing code, or of a service rep as they're working in your customers' accounts?
- Unauthorized use of free, large language model AI tools—whenever you're working in the free tier of an AI engine like ChatGPT, the data you put in inevitably belongs to whoever owns that tool. Yet the time-saving allure of these tools can't be denied. Who's to say your busy loan officer hasn't succumbed to the urge to use a free AI tool to craft a customer acceptance letter, or an account notice?
- Data-collecting AI tools used by your vendors—even if your own AI protocols are airtight, there's no telling how your customer data may be handled by any third-party providers you're working with. If you don't know how your data flows on their end, you need to start asking those questions now as part of your annual cybersecurity reviews.
The good news is that an AI acceptable use policy can help nip many of these risks in the bud. As part of your cybersecurity awareness program, ask your employees to sign the policy so they understand the data-handling issues that come with working with AI. The use of AI tools should also be a key question in your third-party vendor reviews every year, and clear rules about AI should be written into your vendor contracts.
If you’re interested in developing one of these policies at your bank, read our latest blog on how to roll out an AI-awareness campaign. You can start with our own free template. Download it here.
Threat #2—Lack of Security Permission Levels for Your Data
Even if you haven’t implemented artificial intelligence tools at your bank, it’s not too early to start thinking about how data flows through your organization. Who has permission to view what data, and when? How is that data grouped in your system? How well do your permission levels protect that data?
These answers are important now—especially as we hurtle toward our AI-enabled future. To make use of the new technology coming our way, start thinking about how you can group your data so that it can be safely crawled by private, AI-enabled search engines or tools.
Your permission levels should be well thought out now, to help prevent data leaks in general. But as more AI-enabled chat bots, business intelligence tools, and writing assistants become the norm, your data will be more vulnerable than ever. The time to tighten up those permission levels and identity controls is now.
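To make that concrete, here is a minimal sketch of what sensitivity-based permission levels might look like in practice. The sensitivity categories, role clearances, and the check_access() helper are hypothetical illustrations of the grouping exercise described above, not a recipe for any particular banking platform.

```python
# Minimal sketch of permission-level checks before data is exposed to a tool or user.
# The categories, roles, and check_access() are illustrative assumptions.
from enum import IntEnum

class Sensitivity(IntEnum):
    PUBLIC = 0          # marketing copy, published rate sheets
    INTERNAL = 1        # policies, procedures, internal reports
    CONFIDENTIAL = 2    # customer PII, account data
    RESTRICTED = 3      # credentials, audit findings, regulator correspondence

# Each role is cleared up to a maximum sensitivity level (assumed mapping).
ROLE_CLEARANCE = {
    "ai_search_index": Sensitivity.INTERNAL,   # a private AI search crawl stops here
    "teller": Sensitivity.CONFIDENTIAL,
    "loan_officer": Sensitivity.CONFIDENTIAL,
    "it_admin": Sensitivity.RESTRICTED,
}

def check_access(role: str, data_sensitivity: Sensitivity) -> bool:
    """Return True only if the role's clearance covers the data's sensitivity."""
    return ROLE_CLEARANCE.get(role, Sensitivity.PUBLIC) >= data_sensitivity

# Example: an AI-enabled search index should be denied customer account data.
print(check_access("ai_search_index", Sensitivity.CONFIDENTIAL))  # False
print(check_access("loan_officer", Sensitivity.CONFIDENTIAL))     # True
```

The point isn't the code itself; it's that your data has to be grouped and labeled before any tool, AI-enabled or otherwise, can be trusted to respect those boundaries.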
Threat #3—Failure to Fully Secure Your Endpoints
By now, most banks have some form of endpoint protection that monitors all the devices on their systems for unusual patterns of activity, and flags attempts at unauthorized access. Unfortunately, many IT directors don’t realize those endpoint protection systems may have significant blind spots.
For instance, standard endpoint detection tools do not extend into your cloud environment, such as Microsoft Azure or the widely used Microsoft 365 productivity suite. To cover this gap, you'll need a detection tool crafted specifically for M365, like our own Identity Threat Detection and Response (ITDR) tool. This tool independently verifies the identity of the people logging into your cloud environment, no matter where they're located or what device they're using to log in.
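As a rough illustration of the kind of signals such a tool weighs, here is a toy sign-in check written in Python. The user profiles, device records, and step-up outcomes are assumptions for the example; they don't describe how ITDR or any Microsoft service is actually implemented.

```python
# Toy illustration of identity-verification logic for cloud sign-ins.
# Profiles, device IDs, and outcomes are assumed for the example only.
from dataclasses import dataclass, field

@dataclass
class SignInAttempt:
    user: str
    device_id: str
    country: str
    passed_mfa: bool

@dataclass
class UserProfile:
    known_devices: set = field(default_factory=set)
    usual_countries: set = field(default_factory=set)

def evaluate_sign_in(attempt: SignInAttempt, profile: UserProfile) -> str:
    """Allow only when MFA, device, and location all check out; otherwise step up or block."""
    if not attempt.passed_mfa:
        return "block"
    if attempt.device_id not in profile.known_devices:
        return "require re-verification"   # unfamiliar device
    if attempt.country not in profile.usual_countries:
        return "require re-verification"   # unusual location
    return "allow"
```

In production, these decisions are driven by conditional access policies and behavioral baselines rather than hard-coded sets, but the underlying logic (verify the person, not just the password) is the same.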
We often see banks that have failed to invest in higher-level, fully managed endpoint detection packages. This can create an administrative nightmare for many bank IT departments, which are often deluged with a flurry of false positives and system alerts on a daily basis. As a result, needed patches, repairs, and remediations often go undone.
Managed detection and response (MDR) programs fix this problem by handling the alerts and the remediations automatically. Better yet, they'll provide the reporting you need for your annual regulatory reviews. As your organization gets larger and more complex, an extended detection and response (XDR) plan can provide this same managed capability across all your networks.
Threat #4—Failing to Have Employee Training and Procedural Policies to Prevent Deepfakes
Phishing and spoofing attempts have always been abundant, especially in the banking sector. But mercifully, they were usually easy to spot, with bad translations and poor word choices that made the fakes stand out. AI-assisted deepfakes, however, have changed all that.
The most famous example involves Arup, the British multinational design and engineering firm, which wired $25 million to fraudsters. How did this happen? Hackers sent a phishing email pretending to be a senior executive who needed a large transfer of the firm's cash to "complete a new acquisition." When the suspicious employee demanded verification, the criminals simply set up a video call, then used deepfake technology to mimic the voices and images of the company's senior leadership. Believing she had live authorization, the employee dutifully wired the money to numerous offshore bank accounts, never to be seen again.
Strategies for Detecting Deepfakes:
You’ll need to train your staff to look for tell-tale fraud patterns, such as inconsistencies between ID documents, stolen photos lifted from the internet, sudden or unusual account activity, or large fund transfers. Then, you must implement critical safeguards, such as:
- Liveness checks—which can listen in on conversations to detect if a deepfake voice is being used.
- Biometric authentication, such as fingerprints and eye scans. Look for tools that can sense whether it's a live person providing the scan, or a mimicked one.
- Monitoring tools to catch rapid transactions, high payment volumes to high-risk payees, frequent chargebacks, and more (see the sketch after this list).
- Security awareness training for employees, so they can work well with these verification and monitoring tools.
- Written security policies and procedures—so employees and customers alike understand how they’re being protected.
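Here is a minimal sketch, in Python, of the rule-based transaction monitoring mentioned above. The thresholds, the high-risk payee list, and the Transaction record are illustrative assumptions; a real monitoring platform would tune these rules and route alerts into your fraud-review workflow.

```python
# Minimal sketch of rule-based transaction monitoring with assumed thresholds.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Transaction:
    account_id: str
    payee: str
    amount: float
    timestamp: datetime

HIGH_RISK_PAYEES = {"offshore-shell-co"}     # assumed watchlist
RAPID_WINDOW = timedelta(minutes=10)         # assumed velocity window
RAPID_COUNT = 5                              # assumed velocity threshold
LARGE_AMOUNT = 50_000                        # assumed single-transfer threshold

def flag_transactions(history: list[Transaction]) -> list[str]:
    """Return alerts for large-amount, high-risk-payee, and velocity rules."""
    alerts = []
    history = sorted(history, key=lambda t: t.timestamp)
    for i, tx in enumerate(history):
        if tx.amount >= LARGE_AMOUNT:
            alerts.append(f"Large transfer of ${tx.amount:,.2f} from {tx.account_id}")
        if tx.payee in HIGH_RISK_PAYEES:
            alerts.append(f"Payment to high-risk payee {tx.payee} from {tx.account_id}")
        recent = [t for t in history[: i + 1]
                  if t.account_id == tx.account_id
                  and tx.timestamp - t.timestamp <= RAPID_WINDOW]
        if len(recent) >= RAPID_COUNT:
            alerts.append(f"{len(recent)} transactions within {RAPID_WINDOW} from {tx.account_id}")
    return alerts
```

Simple rules like these won't catch every deepfake-driven fraud on their own, but they give your trained employees a second signal to check before money leaves the building.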
Threat #5—Failure to Do System-Wide Data Risk Assessments
Assessing data risk is, arguably, the most difficult part of your bank’s risk assessment process. This is largely because data touches every part of your IT infrastructure. You’ll need to evaluate how data risk is expressed in every corner of your risk management program, including:
- Compliance-Based Risk Assessment—ensuring that your data handling practices are compliant with all relevant banking regulations, and requirements from your cyber risk insurer.
- Operational Risk Assessment—which evaluates the risks to your data that come from your daily operations, including data handling, storage and access controls.
- Technical Risk Assessment—including penetration testing and vulnerability scanning to find the technical issues in your software, network security, and the like.
- Strategic Risk Assessment—which looks at long-term data handling risks that could arise as your bank adopts new technologies, expands, or changes its data governance programs.
If they aren't already, data risk assessments should become part of your annual review ahead of your bank's cybersecurity audits. It's also important to monitor your data risk continuously. You can do this by tracking qualitative and quantitative markers—such as the number of remediations, time to detection, time to response, risk mitigation effectiveness, and more.
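As an illustration, here is a short Python sketch that computes a few of those markers from an incident log. The Incident record and its field names are assumptions for this example; in practice these figures would come out of your ticketing system or SIEM, not hand-built lists.

```python
# Short sketch of computing risk markers from an incident log (assumed record layout).
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

@dataclass
class Incident:
    occurred: datetime    # when the issue actually began
    detected: datetime    # when monitoring flagged it
    remediated: datetime  # when the fix was confirmed

def hours(delta) -> float:
    return delta.total_seconds() / 3600

def risk_markers(incidents: list[Incident]) -> dict:
    """Remediation count plus mean time to detection and to response, in hours."""
    return {
        "remediations": len(incidents),
        "mean_time_to_detection_hrs": mean(hours(i.detected - i.occurred) for i in incidents),
        "mean_time_to_response_hrs": mean(hours(i.remediated - i.detected) for i in incidents),
    }

# Example: one incident detected an hour after it began, remediated four hours later.
log = [Incident(datetime(2025, 3, 1, 9), datetime(2025, 3, 1, 10), datetime(2025, 3, 1, 14))]
print(risk_markers(log))  # {'remediations': 1, 'mean_time_to_detection_hrs': 1.0, 'mean_time_to_response_hrs': 4.0}
```

Tracking numbers like these over time turns "how risky is our data?" from a once-a-year guess into a trend you can actually manage.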
Like it or not, risk flows through your system, and it goes everywhere your data goes. Having a comprehensive approach to data risk management will help you keep that risk to a minimum.
Need Experts to Address Your Bank’s Cybersecurity Challenges? Integris Can Help.
Our Financial Institution Division (FID) at Integris has more than 200 employees providing critical IT services to community banks and credit unions across the country. We'd love to help you, too. Contact us now for a free consultation.