The resources on this website are designed to support the Boston College community’s safe, responsible, and ethical use of AI, in accordance with University guidelines on AI use.
Ethical Considerations
As with any new technology, there are risks. AI-generated content can be misused, for example to produce misleading information or deepfakes. Ethical concerns also arise around AI authorship.
As with many digital tools, it is crucial to be aware of AI's limitations, such as potential biases in algorithms and its inability to replace human insight and connection. Faculty should provide clear guidelines on when the use of AI in coursework is appropriate and when it is prohibited, and students should follow them. Students should also be aware of the appropriate and authorized use of AI in each class, as outlined by the instructor and in accordance with the University Academic Integrity Policies.
Legal Considerations
AI raises important and often intersecting legal issues. Understanding some of these legal implications is key to safe and responsible use of AI tools. Here are just three areas of legal compliance and risk to keep in mind as we collectively work to minimize exposure to potential legal pitfalls:
Data Privacy and Security
The universe of data that can get fed into AI systems is massive. In our University environment, particularly sensitive data include, for example, student records, employment records, health records, research data, financial information, and proprietary or otherwise confidential information. Sharing information with AI systems carries data privacy and security risks, including data breaches, legal and regulatory violations, and reputational harm. Users of AI tools across the University must understand their existing data privacy and security obligations, such as compliance with Boston College policies and procedures, the Family Educational Rights and Privacy Act (FERPA), and contractual obligations on various fronts. Be sure to visit the Data Security and Privacy page on this website for additional information specific to Boston College approved AI tools.
Intellectual Property
There are significant legal questions and continued uncertainty regarding AI and copyright infringement, as well as the ownership and eligibility of AI-generated works for IP protection. Authors, scholars, artists, and other copyright holders have filed numerous lawsuits against developers and deployers of AI systems, alleging copyright infringement and unauthorized use of their copyrighted works in connection with AI systems. As a practical matter for users of AI systems across the University, AI use must not violate the copyright and other IP rights of third parties. Additionally, authors, creators, and other content generators may face new challenges in protecting their own IP rights as AI use proliferates.
Evolving Legal Landscape
The AI legal and regulatory landscape is quickly evolving both in the U.S. and internationally. In the U.S., a growing number of states are developing their own laws governing the use of AI. There is no AI legislation at the U.S. federal level as of early 2025. However, a number of legal questions surrounding the use and development of AI tools are playing out in U.S. courts. The range of issues being litigated is vast: copyright and ownership; use of AI in academic coursework; academic integrity; data privacy; surveillance and tracking; bias and discrimination; and more. A number of public databases track these ongoing developments, including one maintained by The George Washington University.
Outside the U.S., many countries are developing and promulgating their own AI governance models and legislation. Notably, the European Union's EU AI Act became law in 2024 and serves for many as a model for harmonized rulemaking on AI. The EU AI Act takes a risk-based approach to AI systems and use cases, classifying them into four risk levels: minimal, limited, high, and prohibited. The first compliance requirements under the EU AI Act came into effect in February 2025, banning certain prohibited practices and imposing AI literacy requirements on providers and deployers of AI systems. Information about the EU AI Act is available from the European Union.
Why does this matter?
By understanding these and the many other legal implications of AI use, members of the Boston College community can harness the benefits of safe and responsible AI use while mitigating their own and the University's potential exposure to legal and regulatory risk in an ever-changing landscape. Boston College's Office of the General Counsel can help faculty and staff navigate legal questions they may have about AI use.
Data Security and Privacy
BC's currently supported GenAI tools (see Resources > AI Tools) offer data protection, which safeguards BC data and ensures that it is not used to train the model and is not released to other users or organizations.
Be aware that any information entered in publicly available AI tools may not only be processed, but also retained and used by the AI to give answers to others. This means if you enter any personal information about yourself or any confidential Boston College information, that information may be stored and potentially shared with or sold to others.
Important:
- Do not use your BC credentials (BC username, password, or any BC email address) to sign up for publicly available Generative AI tools. When you use your BC email address to sign up for online services, even if they are free, you may be putting your personal information and Boston College data at risk. Not all companies meet BC's security standards when it comes to protecting user data.
- 'Confidential' and 'Strictly Confidential' data, as defined by the Boston College Data Security Policy, should not be used in any online AI tool.
University Policies
Use of AI tools must comply with all existing and applicable University policies and procedures. These include the Data Security Policy; the Professional Standards and Business Conduct Policy; the Use of University Technological and Information Resources Policy; Academic Integrity Policies; the Student Code of Conduct; and the various applicable policies promulgated by Boston College’s constituent colleges, schools, and programs.
Faculty considering the use of Generative AI themselves, or its possible use by students in their classes, should refer to the Center for Teaching Excellence's resources.
Acquisition of new AI software or subscriptions, like any other software, is subject to the “GetTech” process.