Are you considering launching an AI tool like Microsoft Copilot for M365 or authorizing your employees to use third-party AI tools like video generators or business intelligence tools? If so, that’s great. But chances are, AI is being used in your company whether you know it or not, potentially exposing it to additional risk.
Recent surveys show that many employees are already quietly using tools like ChatGPT to lighten their workloads. Additionally, native AI engines like Apple Intelligence and Google Gemini will be available on smartphones as soon as the fall of 2024, and many other technology platforms are following suit by integrating AI engines into their products. Do you have an AI Acceptable Use Policy to protect yourself?
What Is an AI Acceptable Use Policy, and Why Is It So Important?
An AI Acceptable Use Policy sets boundaries for your staff regarding the appropriate and inappropriate use of AI. This policy is essential because it provides guidelines that employees can follow to protect critical data and avoid several common pitfalls associated with AI use, such as:
- Violating copyrights by using AI generation tools that don’t honor the rights of creators
- Uploading company information into external AI engines that store it against company protocol
- Sharing protected customer data with AI engines, potentially leaking critical information and violating compliance and privacy policies
- Creating AI-generated voice, photo, video, or prose that uses creators’ work or likeness without their authorization, violating ethical boundaries
- Circulating inaccurate, AI-generated information due to poor fact-checking and bad prompt engineering
- Practicing poor “data hygiene” when managing files, making it impossible for the company’s AI engines to search its databases accurately
Any of these scenarios could result in serious consequences for your company—causing data breaches, inefficiencies, compliance issues, and even lawsuits.
That’s why it’s so critical that you institute an AI Acceptable Use Policy at your company, no matter where you are on your AI implementation journey. You can’t afford not to.
Creating an AI Acceptable Use Policy: Start with an AI Task Force
It may seem like overkill to create a committee just to build a simple guidance document, but hear us out. To provide comprehensive guidance to your employees, you need a full, organization-wide view of how AI will affect your company and your systems. Bringing together key players will help you do that.
Specifically, your AI steering committee should include:
- Your company’s cybersecurity officers, or a representative from your managed service provider if that’s who handles your cybersecurity policy
- A representative from Human Resources who can help with education and training
- A corporate communications representative who can help get the word out about the new policy once it’s completed
- A member of your C-Suite, like your chief technology officer, CEO, or COO; this level of executive sponsorship will help move the project forward
- Your chief of information technology, who can weigh in on how the new AI protocols will alter the company’s written IT plans, policies, and procedures
How an AI Steering Committee Can Help Craft Your AI Acceptable Use Policy
Your AI steering committee will be critical to creating an AI Acceptable Use Policy that is thorough, well-considered, and well-suited to your organization’s needs. Together, they can do a lot.
First, they can review sample policies to create a starting point for the discussion about what should be in your AI policy. Then, they can survey the organization to find out what the potential use cases for AI are now and what they should be in the future. Once they have identified these use cases, they can evaluate the associated implementation risks and develop strategies to mitigate them.
One of their key duties will be to appoint an AI officer. This single point of contact will be important once the policy is released, serving as a critical liaison with employees, answering questions, and handling escalations as they occur.
With that done, the group can begin putting the policy down on paper. Their job will be to establish strict rules governing how AI is used and to create a path for maintaining those rules. In most cases, this will include protocols for approving new AI technologies before they are deployed in the company’s environment. They’ll also need to consider how AI-related breaches and escalations will be handled. Finally, they should plan training, both for new hires and current employees, as well as refresher training for any future revisions to the policy.
Launching an AI Acceptable Use Policy that Employees Embrace
It’s one thing to put your AI policy down on paper. It’s quite another to have a policy employees understand and embrace. If you want to have an AI policy that sticks, communication and training will be crucial.
Once a well-written policy is in place, the next challenge is creating a path to widespread adoption. We recommend these simple steps.
#1—Create a short, custom training video that covers the policy’s key points—
A short video directly from your CEO or designated AI officer can go a long way. Fancy production values are not necessary. Simply have someone speak to the camera about why this policy is required, what employees’ roles and responsibilities are, and whom they can contact with questions or concerns.
#2—Send out the policy and video to all employees. Make watching the video mandatory for new hires as part of their orientation—
Once employees have watched the video, they’ll be required to digitally sign the document and send it back. This signed document will become part of their Human Resources files.
#3—Incorporate AI Awareness into Your Ongoing Cybersecurity Awareness Training—
At Integris, we recommend that every company invest in cybersecurity awareness training, and we have products that help companies meet that need. These short monthly video courses can help your employees learn how to shut the front door on hackers. This reduces the likelihood of an attack while also meeting the requirements of cyber risk insurers and cybersecurity regulators.
If you have a cybersecurity awareness program, ensure that it regularly includes information about emerging AI-oriented risks and threats. Our packages will start incorporating that material in the fall of 2024. Stay tuned.
#4—Make your AI Acceptable Use Policy part of a standing company archive—
If the document is easily accessible, employees can refer to it whenever they have questions. A SharePoint portal with company news or a well-traveled file repository is ideal for this.
#5—Reconvene your AI committee at least once a year for a standard policy review—
After the rollout, most of the daily work will fall to your designated AI officer and, by default, your IT department. However, we recommend that your full committee conduct a review every year. This is a good chance to discuss any emerging AI threats and how they will affect your AI guidelines. It’s also a good time to assess how well your training is working and what can be done to improve education and compliance.
Are You Ready to Write an AI Acceptable Use Policy for Your Company?
If so, Integris would like to help. Simply click the link below, add your information, and download our comprehensive sample AI Acceptable Use Policy. It’s a great way to start the conversation about AI compliance at your company. We’re also signing up beta users for Microsoft 365 Copilot and would love to put you on the list. Contact us now for a free consultation.