Guidelines for the Usage of AI-Powered Systems in a Professional Environment

AI-powered systems such as ChatGPT offer tremendous possibilities in professional and academic environments. These tools can reduce time spent on repetitive tasks, assist with organization, and streamline communication. Convenient as they may be, however, end users must still ensure that sensitive data remains protected and secure. This article provides an overview of guidelines to keep in mind when using AI-powered tools in a professional setting.

Data Protection 

When using AI-powered tools, it is important to ensure that any sensitive or confidential information remains safeguarded. Refrain from sharing personally identifiable information (PII) or any confidential university-related data with AI systems.
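
As an illustration only (a minimal sketch, not an approved university tool), simple pattern-based redaction can strip the most obvious PII from text before it is pasted into an AI system. The patterns and sample text below are hypothetical, and this approach cannot catch every form of sensitive data; names, for example, pass through untouched.

    import re

    # Illustrative PII patterns; examples only, not an exhaustive list.
    PII_PATTERNS = {
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def redact_pii(text: str) -> str:
        """Replace recognized PII with labeled placeholders."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[REDACTED {label}]", text)
        return text

    # The email address and phone number are redacted, but the name is not.
    prompt = "Jane Doe (jane.doe@csuci.edu, 805-555-1234) missed the exam."
    print(redact_pii(prompt))
    # -> Jane Doe ([REDACTED EMAIL], [REDACTED PHONE]) missed the exam.

Automated redaction of this kind is a stopgap, not a substitute for judgment: when in doubt, leave the data out.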
 
For further information about data protection, please refer to Sensitive Data Storage Best Practices. 


Video Conference Sessions 

Artificial Intelligence (AI) bots such as Otter.AI and Read.AI are increasingly prevalent in virtual meetings conducted via Teams, Zoom, and other conferencing applications. While these tools may add convenience, they can also pose considerable privacy and security challenges. As with recorded meetings, these bots send data to the cloud, where sensitive information could potentially be accessed. If you are in a meeting where a bot is present, do not hesitate to express your privacy concerns and request that recording be stopped; the host has the ability to disable these AI bots.


Ethical Risks 

While Artificial Intelligence (AI) undoubtedly brings tremendous capabilities, it also introduces significant ethical risks that warrant careful consideration. Here are some key areas of concern: 

  • Bias: AI systems learn from data available on the Internet, and their output may inadvertently reflect discriminatory or biased viewpoints present in that data.

  • Misinformation: AI systems have no built-in ability to fact-check their output and may present falsehoods confidently, propagating misinformation.

  • Threat Actors: The powerful capabilities of AI can be exploited by malicious actors to disseminate false information, orchestrate social engineering attacks, or create convincingly deceptive content. 

  • Privacy Concerns: Interactions with AI often involve the exchange of personal or sensitive information. It is crucial to weigh the privacy and security implications of providing such information to an AI system, as unauthorized access or data breaches could result in the exposure of sensitive data. 

  • Accountability Challenges: AI systems function as tools, and responsibility for their output ultimately falls on the user. This can make it difficult to assign responsibility when a system generates harmful or unethical content. As a safeguard, always review any content produced by AI before relying on it.


Vendor Due Diligence 

When engaging a third-party provider for AI services, it is important to understand the license agreement. IT Services should review the vendor's security policies, data-safeguarding measures, and adherence to pertinent regulations to ensure that the vendor's security practices align with CSUCI's established best practices.


Training and Awareness 

Employees and students should follow best practices when interacting with AI systems such as ChatGPT and uphold the University's information security standards. Any questions or concerns should be promptly directed to the Information Security Team at infosec@csuci.edu.