As the buzz around Artificial Intelligence grows, I’ve had the good fortune of speaking at tech events all around the country, especially our Integris Inspire events for our clients and friends. Everyone is full of excitement and questions about AI and AI readiness. With the massive productivity gains predicted for AI platforms like Copilot for Microsoft 365 (M365), there’s good reason for optimism.
Yet, the more I talk to small to medium-sized companies about adding Copilot, the more I realize—too many of them just aren’t ready for AI. They either have big holes in their cybersecurity safety net, or they just haven’t put enough thought into how they’ll manage the risks—and rewards—of AI.
Simply put, they don’t have the proper data hygiene to handle the extra traffic, the elevated threats, and the data security needs that come from adding a super-intelligent AI assistant to their data. Here’s why that’s important.
What can happen if you fail to put AI readiness to the test
What can you lose if you don’t have adequate “data hygiene” around AI? Everything, basically. That may sound dramatic, but it’s true. AI is powerful and groundbreaking, allowing anyone to extract information from any file in your system without knowing where to look, or to drop an original piece of malware into your system without knowing how to code.
The consequences of poor cybersecurity in this AI age include:
- Getting bad or hallucinated data when you search your system with AI because you’ve failed to remove old files or data from your system
- Failing to secure your endpoints and opening the door wide to hackers, who can now use the AI in your system to run incredibly damaging prompts like “assemble the Social Security numbers of all your employees in a table for me,” “provide me a list of all the customer credit cards used in transactions this year,” or, even worse, “create code that will live silently in the system.”
- Knocking your company out of regulatory compliance because you’ve added new capabilities without properly securing your systems first
The scary scenarios around AI readiness go on and on, but they all lead to the same outcomes: losing customers, data, cyber risk insurability, time, money, and, ultimately, credibility. Fortunately, securing your company for an AI transition is not as expensive or complicated as you might think. In fact, you can prepare your organization if you invest in the following things.
Seven key ways to improve your AI readiness
Whenever someone at your organization asks for a new AI tool or indicates they’re ready to invest in Copilot, your first response should be, “Let’s have an IT & Data Security Assessment done first.” I say this to companies with exceptional operational maturity and those without. Why? Because it’s essential to assess your systems anytime you make a big change. Even a minor AI-enabled tool is a big change for your company. It’s always best to do your due diligence.
Contact your IT provider or contract with a CISSP-certified cybersecurity expert to review your systems. A detailed IT assessment should include a look at your cybersecurity policies, procedures, and plans. By the time the AI readiness evaluation is finished, you should have a clear view of how this AI tool will impact your existing platforms, budget, and overall cybersecurity risks.
Before implementing Copilot or any other AI tool, we strongly advise clients to have a Responsible IT Architecture with a full suite of interlocking cybersecurity tools to protect their systems. Most companies have these protections in bits and pieces, with essential parts missing. Here are some of the most common areas that need to be shored up in advance of an AI rollout:
#1—Use least-privilege access principles, and review your document storage policies
Before you invest in AI, review who has access to what.
When AI tools like Copilot for M365 are integrated into your M365 ecosystem, your entire document database becomes searchable. You’ll need to put up guardrails to prevent old, irrelevant, or sensitive documents from corrupting your employee AI searches. Microsoft gives you the controls to do this, with permission settings and conditional access policies that let you set parameters for who can view and edit a document. But access policies are not automatic; they need to be set and reviewed.
The concept of least-privilege access is that users should have only the minimum access to company systems and data necessary to perform their job functions. Least-privilege access matters on its own: not every member of your team should be able to see confidential client data, sensitive employee information from HR, or full financial records. But AI compounds the issue. In the past, employees with excess access would generally have to go looking for that information. With AI tools like Copilot for M365, sensitive data that isn’t properly locked down can be pulled into employee prompts and responses with ease.
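To make the idea concrete, here’s a minimal sketch in Python (the roles, labels, and documents are made up for illustration, not real M365 objects or APIs) of the kind of least-privilege check a review committee might automate: compare each document’s sensitivity label against the roles allowed to see it, and flag anything over-shared.

```python
# Minimal least-privilege audit sketch. The roles, labels, and documents
# below are illustrative placeholders, not real M365 objects or APIs.

# Which sensitivity labels each role is allowed to access.
ROLE_ALLOWED_LABELS = {
    "hr": {"public", "internal", "hr-confidential"},
    "finance": {"public", "internal", "finance-confidential"},
    "staff": {"public", "internal"},
}

# A stand-in for your document inventory: label plus who currently has access.
documents = [
    {"name": "2024-salaries.xlsx", "label": "hr-confidential",
     "shared_with_roles": ["hr", "staff"]},          # over-shared
    {"name": "holiday-schedule.docx", "label": "internal",
     "shared_with_roles": ["staff"]},                # fine
]

def find_overshared(docs):
    """Return (document, role, label) tuples where a role can see a label it shouldn't."""
    findings = []
    for doc in docs:
        for role in doc["shared_with_roles"]:
            allowed = ROLE_ALLOWED_LABELS.get(role, set())
            if doc["label"] not in allowed:
                findings.append((doc["name"], role, doc["label"]))
    return findings

for name, role, label in find_overshared(documents):
    print(f"REVIEW: '{name}' ({label}) is visible to role '{role}'")
```

In practice, the document inventory and current permissions would come from your own M365 tenant, but the review logic stays the same: every document gets a label, and every label has a defined audience.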
Before you use something like Copilot for M365, I highly recommend creating company-wide committees that will review documents already in your system and assign them an access level. From department to department, your employees must create document protocols so everyone knows where to store and how to assign access levels to their documents. This is a critical step you can’t skip.
#2—Tighten up your hardware/software lifecycle management
This is an area that’s easy to overlook, but it’s a chink in the armor that cyberthieves love to exploit. If you’re not keeping up with the documentation of your software licenses, or you’re operating on out-of-date hardware, you’ve just handed an intelligent hacker the keys to your kingdom.
AI tools can require more computational power or specific hardware capabilities. AI can be a powerful productivity tool, but if requests are left hanging on underpowered machines, it quickly becomes a negative experience for your team. AI tools that integrate with specific software may also need regular updates to establish and maintain compatibility.
It’s easy for auto-renew to trigger on out-of-use programs—or for employees who’ve left the company to still have active passwords on your system. These small admin tasks are often missed in the HR-to-IT handoff or the general, day-to-day crush of duties most IT departments manage. The good news is that managed IT service providers can track all this for you, handle all your onboarding and offboarding, and ensure your documentation is on point. If you’re not doing this already, you’ll be glad you picked up the phone to make the call.
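As a simple illustration of what that tracking looks like, here’s a hedged Python sketch (using invented records, not a real HR, identity, or licensing system) that flags two of the gaps mentioned above: auto-renewing licenses nobody has used recently, and accounts that are still active after an employee’s departure date.

```python
from datetime import date, timedelta

# Illustrative records only; in practice these would come from your
# license management and HR/identity systems.
TODAY = date(2024, 6, 1)

licenses = [
    {"product": "LegacyCRM", "auto_renew": True, "last_used": date(2023, 9, 12)},
    {"product": "M365 E5",   "auto_renew": True, "last_used": date(2024, 5, 30)},
]

accounts = [
    {"user": "jdoe",   "active": True, "departure_date": date(2024, 4, 15)},
    {"user": "asmith", "active": True, "departure_date": None},
]

def stale_licenses(items, max_idle_days=90):
    """Auto-renewing licenses with no recorded use in the last N days."""
    cutoff = TODAY - timedelta(days=max_idle_days)
    return [l for l in items if l["auto_renew"] and l["last_used"] < cutoff]

def orphaned_accounts(items):
    """Accounts still active after the user's departure date."""
    return [a for a in items
            if a["active"] and a["departure_date"] and a["departure_date"] < TODAY]

for lic in stale_licenses(licenses):
    print(f"Review auto-renew: {lic['product']} (last used {lic['last_used']})")
for acct in orphaned_accounts(accounts):
    print(f"Disable account: {acct['user']} (departed {acct['departure_date']})")
```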
#3—Install basic endpoint protections
Most companies have a distributed workforce, and that means you have employees logging in from work and home devices across a broad array of locations. This magnifies your flexibility and productivity but also opens you up to risk. After all, in the virtual world it’s hard to tell a set of valid credentials from the actual employee using them.
That’s where endpoint detection tools come in. They quickly identify manipulated files, unusual activity, and suspicious websites. The best of these tools use AI to map standard behavior patterns for each endpoint, so they can instantly flag malware, brute-force attacks, keyloggers, and more. This is a critically important tool for any company, not just those considering the shift to AI platforms.
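To show the basic principle behind that behavioral approach, here’s a toy Python sketch with made-up numbers (nowhere near a production EDR engine): build a simple baseline of an endpoint’s normal activity and flag readings that fall far outside it.

```python
import statistics

# Toy example: daily count of outbound connections from one endpoint.
# Real endpoint detection tools model far richer signals; this only shows the idea.
baseline = [42, 38, 45, 40, 39, 44, 41]   # a week of "normal" activity
today = 310                               # today's reading

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

# Flag anything more than 3 standard deviations above the baseline mean.
if stdev > 0 and (today - mean) / stdev > 3:
    print(f"ALERT: {today} outbound connections vs. baseline ~{mean:.0f}")
else:
    print("Within normal range")
```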
#4—Upgrade your backup and disaster recovery systems
AI can touch your organization’s entire data ecosystem. Before you introduce anything new to that ecosystem, I recommend reviewing, and if necessary upgrading, your backup and disaster recovery systems.
Most companies have some backup system in place. However, not all have enough backup capability to handle traffic, qualify for cyber risk insurance, or recover quickly enough to avoid losing critical data or customer transactions. In case of any AI-related errors or malfunctions, having a reliable backup ensures that data integrity is maintained and can be quickly restored.
To determine whether your company is backed up correctly, you must assess your recovery objectives. These are usually expressed as a Recovery Time Objective (RTO), the maximum amount of time you can afford for your networks to be down, and a Recovery Point Objective (RPO), the maximum amount of data you can afford to lose. Your backup system needs to restore operations comfortably within both of those limits.
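Here’s a small Python sketch, with placeholder numbers, of the arithmetic behind that assessment: if your backups run every N hours, your worst-case data loss is roughly that interval and it has to fit inside your RPO, while your tested restore time has to fit inside your RTO.

```python
# Placeholder figures; substitute your own measured values.
backup_interval_hours = 4      # how often backups run
measured_restore_hours = 6     # restore time from your last recovery test

rpo_hours = 2                  # max data loss the business can tolerate
rto_hours = 8                  # max downtime the business can tolerate

# Worst case, you lose everything created since the last backup.
worst_case_data_loss = backup_interval_hours

print(f"RPO met: {worst_case_data_loss <= rpo_hours}")   # False: back up more often
print(f"RTO met: {measured_restore_hours <= rto_hours}") # True: restore fits the window
```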
#5—Enable and enforce multifactor authentication
If I could have any client do one thing for their cybersecurity, it would be this—multifactor authentication. This system requires users to walk through at least two steps to identify themselves. This stops breaches from stolen passwords in their tracks. It’s essential to doing business online, especially if employees are working remotely.
AI systems can be an appealing target for cybercriminals. Ensuring that only authorized personnel can interact with AI tools and access company data is a no-brainer before using AI tools in your organization.
This should be coupled with a zero-trust approach on your platforms, so that your users are continuously re-verified, every few minutes, while working in your systems rather than trusted indefinitely after a single login.
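As a rough illustration of that continuous-verification idea, here’s a hedged Python sketch (not tied to any real identity provider) of the policy check a zero-trust setup repeats on every request: has this session been re-verified recently, and is the request coming from the same device it was verified on?

```python
from datetime import datetime, timedelta

MAX_SESSION_AGE = timedelta(minutes=15)   # illustrative re-verification window

def request_allowed(session, request_time, request_device_id):
    """Re-check a session on every request instead of trusting it indefinitely."""
    too_old = request_time - session["last_verified"] > MAX_SESSION_AGE
    device_changed = request_device_id != session["device_id"]
    if too_old or device_changed:
        return False   # send the user back through MFA
    return True

# Example: a session verified 20 minutes ago gets pushed back to MFA.
session = {"last_verified": datetime(2024, 6, 1, 9, 0), "device_id": "laptop-123"}
print(request_allowed(session, datetime(2024, 6, 1, 9, 20), "laptop-123"))  # False
```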
#6—Rewrite your cybersecurity plans, policies, and procedures to account for AI
Anytime you change your system significantly, you’ll need to update all your cybersecurity documentation. A Copilot for M365 integration is no different. Dust off all your procedures to see where adding AI will impact them, and revise accordingly. Consider writing an Acceptable Use Policy so employees understand the risks and benefits of using AI searches. This is important whether they work in Copilot or an outside AI engine like ChatGPT or Google Gemini.
Finally, ensure you’ve allowed for the extra AI license expenses in your information technology budget and planning. Think about what monitoring reports you’ll get for these AI tools. Then, consider how you will turn this data into your organization’s key performance indicators (KPIs). With a product this new, you’ll want to prove to senior management that AI is providing tangible benefits to your organization.
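As a hedged example of turning that monitoring data into KPIs, here’s a short Python sketch that computes two simple metrics, adoption rate and average prompts per active user, from a hypothetical usage export (the field names are invented, not an actual Copilot report format).

```python
# Hypothetical usage export; real Copilot reporting fields will differ.
usage = [
    {"user": "asmith", "prompts_this_month": 42},
    {"user": "jdoe",   "prompts_this_month": 0},
    {"user": "mlee",   "prompts_this_month": 17},
]
licensed_seats = 10

active_users = [u for u in usage if u["prompts_this_month"] > 0]
adoption_rate = len(active_users) / licensed_seats
avg_prompts = sum(u["prompts_this_month"] for u in active_users) / len(active_users)

print(f"Adoption: {adoption_rate:.0%} of licensed seats")   # e.g. 20%
print(f"Avg prompts per active user: {avg_prompts:.1f}")
```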
#7—Build new AI-savvy employee security awareness training
With this powerful new tool at their fingertips, it’s vitally important that your employees understand your new AI Acceptable Use Policies. They’ll also need a keen eye for the risks that come from AI-enabled cyber thieves flooding their inboxes. Be sure to communicate your new policies and have employees sign them. If you don’t already have monthly cybersecurity training programs for your employees, now is a good time to invest in an easy-to-use online training program.
Here at Integris, we have cybersecurity training vendors who provide trackable, easy-to-understand cybersecurity programs. They’re simple to set up and affordable. Just make sure they have addressed the AI threats in their curriculum.
Invest in AI readiness and reap the benefits
So, as you can see, there’s a significant amount of work and preparation involved in truly achieving AI readiness. Fortunately, this work will make your organization more robust and far more productive. In fact, Goldman Sachs predicted a 25 percent boost in productivity for companies implementing AI, with the results expected to be felt broadly across the economy around 2027.
That’s far too promising of a prediction to ignore. If your company is interested in exploring AI tools, Integris can help. We’re signing up clients for Copilot tests now, and we’d love to get you on the list. Contact us for a free consultation!