Security Checklist
To embrace AI safely and responsibly, follow this checklist to protect university data.
Users of AI tools are accountable for data outcomes. When using consumer AI tools (e.g., ChatGPT, DALL-E, Anthropic’s Claude), exercise caution: the data you enter may become public. Major university software contracts (Microsoft, Google, Adobe, Zoom) prioritize data protection, but users should still adhere to data handling guidelines.
Protect UNC Charlotte’s confidential information and your own. Do not enter confidential or legally restricted data (e.g., personnel records or data protected by FERPA, HIPAA, PCI, etc.) or any data that Charlotte’s data classification policy identifies as Level 2 or Level 3 into an AI tool.
Assume all information shared will be made public. Treat everything you share through an AI tool as though it will become public. Do not share personal or sensitive information, and be mindful that the information you input into an AI tool may be retained by the vendor.
Follow all procurement guidelines when purchasing AI tools or adopting free ones. All AI purchases, including enhancements to existing licenses, must follow UNC Charlotte’s established purchasing procedures. This ensures that appropriate legal, privacy, and security reviews are conducted.
Use enterprise, campus-wide tools whenever possible. Favor solutions that have been reviewed and approved by the University. Unlike free consumer versions, approved enterprise tools require your campus login and typically do not use your data to train their systems.
Be alert for bias and inaccuracy. AI-generated responses can be biased, inaccurate, or inappropriate, or may contain unauthorized copyrighted material. We are each responsible for the content of our work product. Continually review and verify outputs generated by AI tools to ensure accuracy.