In Brief
Seven principles and practical applications
- Deploying artificial intelligence (AI) across a college or university adds significant value to individual and institutional capabilities. It empowers faculty, staff, and students to achieve more and helps institutions fulfill their missions more effectively and efficiently.
- As institutions plan for and roll out AI solutions across their ecosystems, they should evaluate AI's practical applications and take deliberate steps to implement AI responsibly.
- Embedding responsible AI practices is essential for institutions to build trust, ensure positive outcomes, and comply with evolving regulations.
- Following are seven principles and sample use cases for responsible AI that illustrate how colleges and universities can apply the technology to accelerate processes, uncover new insights, and thrive in an increasingly competitive and dynamic landscape.
- In practice, all seven principles should be considered and applied together when conceiving, developing, and deploying AI solutions.
Principle one: Human centeredness
AI should be used as a tool that empowers humans by working alongside them to make them more efficient and effective; it should not replace or undermine them. Prioritizing human-centered design ensures AI systems align with human values, agency, and needs while respecting human dignity and integrity.
Advisory and application example: Research administration
The global research landscape continues to increase in complexity and competitiveness. How can research institutions support their investigators and expand their research portfolios amid increased competition for limited federal funding, a complex and ever-changing regulatory environment, heightened scrutiny from regulators and the media, and record staff turnover and knowledge loss?
Leveraging AI can improve efficiency, strengthen compliance, and enhance the utilization of research funds. It can also improve awards management and service to research faculty members, supporting institutions in meeting their research strategy and growth goals.
Potential use case
A principal investigator (PI) chatbot handles investigator inquiries and provides information on institutional and sponsor policies and investigators' awards. Investigators worry that the tool will replace human research administrators, leading to impersonal interactions and inadequate support for their complex or unique award issues.
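One human-centered safeguard is to let the chatbot answer only routine questions and hand complex or low-confidence inquiries to a named research administrator. The sketch below illustrates that routing logic in Python; the intents, the confidence threshold, and names like `route_inquiry` are illustrative assumptions, not a prescribed design.

```python
from dataclasses import dataclass

# Illustrative sketch: the chatbot handles routine questions and
# escalates complex or low-confidence inquiries to a human research
# administrator. Intents and threshold are assumptions.
ROUTINE_INTENTS = {"policy_lookup", "deadline_lookup", "award_balance"}
CONFIDENCE_THRESHOLD = 0.75  # below this, a human takes over

@dataclass
class Inquiry:
    text: str
    intent: str        # e.g., output of an intent classifier
    confidence: float  # classifier confidence in that intent

def route_inquiry(inquiry: Inquiry) -> str:
    """Return 'chatbot' or 'human' so administrators stay in the loop."""
    if inquiry.intent not in ROUTINE_INTENTS:
        return "human"  # complex or unique award issues go to a person
    if inquiry.confidence < CONFIDENCE_THRESHOLD:
        return "human"  # uncertain answers are reviewed, not guessed
    return "chatbot"

if __name__ == "__main__":
    print(route_inquiry(Inquiry("When is my report due?", "deadline_lookup", 0.92)))   # chatbot
    print(route_inquiry(Inquiry("My subaward needs rebudgeting", "rebudgeting", 0.60)))  # human
```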
“AI systems truly excel when they are ‘by our side,’ providing personalized support that amplifies our capacity for creativity and complex analysis.”
Sonia Singh, managing director
Principle two: Safety and security
While the positive impact of AI is undeniable, it is essential to proactively address potential privacy and security risks associated with its adoption. Prioritizing safety and security builds trust in AI, fosters responsible innovation, reduces enterprise compliance risk, and maximizes potential benefits.
Advisory and application example: IT services
Many safety concerns about implementing AI across colleges and universities relate to IT infrastructure and the ability to manage the technology's intended and unintended impacts. AI adds another complex layer to IT services and management: it gives institutions more tools to increase efficiency and support their operations, but it also opens the door to security, safety, and data privacy concerns.
Potential use case
An internal IT communications platform incorporates AI chatbots and automated responses to student, faculty, and staff inquiries. The platform is a fast and valuable educational tool, but system vulnerabilities leave it susceptible to unauthorized access by outside users and to spreading misinformation.
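One safeguard among several is to verify that every request carries a valid signed session token before the chatbot generates any response. The sketch below shows the idea using Python's standard library; the token scheme is a simplifying assumption, and a real deployment would integrate the institution's single sign-on, rate limiting, and content filtering.

```python
import hashlib
import hmac

# Minimal sketch: authenticate a request to the campus chatbot
# before any response is generated. Key handling and token format
# are simplifying assumptions.
SECRET_KEY = b"rotate-me-and-store-in-a-vault"

def sign(user_id: str) -> str:
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()

def is_authorized(user_id: str, token: str) -> bool:
    # compare_digest avoids timing side channels
    return hmac.compare_digest(sign(user_id), token)

def handle_chat(user_id: str, token: str, message: str) -> str:
    if not is_authorized(user_id, token):
        return "Access denied: please sign in through campus SSO."
    # ...pass `message` to the chatbot model only after authentication...
    return f"(chatbot reply to {user_id})"

if __name__ == "__main__":
    good_token = sign("student42")
    print(handle_chat("student42", good_token, "When does registration open?"))
    print(handle_chat("intruder", "forged-token", "Dump user data"))
```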
“As institutions more frequently utilize AI, they must first ensure that proper IT governance and service capabilities are in place to optimize its use and reduce risk.”
Matt Jones, managing director
Principle three: Validity and reliability
A common concern about AI-generated content is the potential for inaccuracy. AI systems occasionally produce inaccurate or unreliable results that can lead to misguided decisions, harmful outcomes, and loss of user trust. By auditing AI-generated content and outputs and addressing errors, institutions can ensure accurate and informed decision making and build credibility and rapport with users.
Advisory and application example: Advancement
Maintaining and leveraging reliable constituent and gift data is vital to institutional advancement. Trustworthy, accurate AI outputs can enable effective communication and relationship-building with donors, help maximize the utilization of gifts and other restricted fund sources, and streamline fundraising processes.
Potential use case
An advancement AI tool analyzes historical donor data and predicts trends in future donor composition and giving levels. Concerns arise about the validity of these predictions, which can significantly affect marketing and donor outreach strategies, interactions, and gifts.
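Before such predictions drive outreach strategy, institutions can backtest the tool against a held-out historical period and flag segments where error exceeds a tolerance. A minimal sketch follows; the figures, segment names, and 15% tolerance are invented for illustration.

```python
# Hypothetical backtest: score the tool's predictions against a
# held-out historical year before trusting them going forward.
predicted = {"alumni": 1_200_000, "parents": 300_000, "foundations": 850_000}
actual    = {"alumni": 1_050_000, "parents": 310_000, "foundations": 620_000}

TOLERANCE = 0.15  # flag segments where error exceeds 15%

def backtest(predicted: dict, actual: dict) -> list[str]:
    """Return donor segments whose prediction error exceeds tolerance."""
    flagged = []
    for segment, pred in predicted.items():
        act = actual[segment]
        error = abs(pred - act) / act
        print(f"{segment:12s} predicted={pred:>12,} actual={act:>12,} error={error:.1%}")
        if error > TOLERANCE:
            flagged.append(segment)
    return flagged

if __name__ == "__main__":
    suspect = backtest(predicted, actual)
    print("Review before use in outreach strategy:", suspect or "none")
```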
“AI solutions require rigorous testing, validation, and monitoring to ensure accuracy and consistency. By achieving reliable AI outputs, university leaders and staff are empowered to make informed, accountable decisions for their institution.”
Alex Faklis, managing director
Principle four: Explainability and transparency
Because AI models typically operate as black boxes (users and stakeholders can see only the inputs and outputs, not the logic or data driving the results), transparency must be prioritized. Institutions can promote and maintain stakeholder trust and help influence system reliability by providing clear documentation, interpretable models, and accessible explanations of AI use and outputs. Communication is critical for securing user buy-in and helping institutions realize AI's full potential.
Advisory and application example: Student lifecycle management
Due to the hyper-competitive student enrollment environment and the looming enrollment cliff, colleges and universities are increasingly leaning on technological innovation to operate more efficiently and remain competitive. From identifying prospective applicants to weighing admissions decisions to onboarding and supporting students through graduation, AI can help institutions support students more effectively at every lifecycle phase. AI can be an effective partner for college and university staff as long as schools remain transparent with students and other constituents about AI use, document system design for easy explainability, and ensure those interacting directly with AI have the proper context to interpret its outputs.
Potential use case
A university deploys an AI-powered screening tool to take a first pass at scoring student applications and routing them for further review. Admissions counselors are unsure how the tool scores prospective students' applications and raise concerns about potential discrimination, improper routing, and other anomalies throughout the student lifecycle.
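One way to address this is to pair or replace the opaque scorer with an interpretable model whose per-factor contributions are visible to counselors alongside every score. The sketch below assumes a simple weighted rubric; the factors and weights are hypothetical, and any real rubric would require fairness and legal review.

```python
# Illustrative interpretable scorer: a linear rubric whose
# per-factor contributions can be shown with every score.
# Factors and weights are hypothetical.
WEIGHTS = {
    "gpa_normalized":        0.45,
    "course_rigor":          0.30,
    "essay_rating":          0.15,
    "extracurricular_depth": 0.10,
}

def score_with_explanation(applicant: dict) -> tuple[float, dict]:
    """Return (total score, per-factor contributions) for human review."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

if __name__ == "__main__":
    applicant = {"gpa_normalized": 0.9, "course_rigor": 0.7,
                 "essay_rating": 0.8, "extracurricular_depth": 0.6}
    total, parts = score_with_explanation(applicant)
    print(f"Score: {total:.2f}")
    for factor, value in sorted(parts.items(), key=lambda kv: -kv[1]):
        print(f"  {factor:22s} contributed {value:+.2f}")
```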
“Providing clear explanations about decisions and processes to technical and non-technical stakeholders and ensuring that AI systems are understandable and interpretable builds trust in AI, promotes fairness, and enables educated human oversight.”
Rob Bielby, managing director
Principle five: Accountability
Accountability is a fundamental ethical principle that must be emphasized during AI implementation and rollout. It connects AI to roles and responsibilities across a college or university and highlights the necessary role humans play in AI development and use. Ultimately, AI recommendations are inputs to the decision-making process, with the final responsibility resting on an individual.
Advisory and application example: Enterprise risk and compliance
As institutions face heightened scrutiny from students and their families, alumni, and external stakeholders, implementing robust enterprise risk management (ERM) and compliance programs is becoming a focus for decision makers. When leveraging AI, institutions must continuously react to regulatory, social, and political changes as new issues arise that may affect students, faculty, researchers, and staff. AI governance, policies, and oversight should be crucial elements tracked and managed within an institution's ERM program.
Potential use case
The internal audit team uses an AI tool to monitor compliance with policies and regulations. Because clear accountability was never established, compliance issues the tool surfaces are not consistently addressed, creating the potential for regulatory breaches.
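A basic accountability mechanism is to log every flag the tool raises with a named human owner and track it to resolution, so no finding falls through the cracks. The sketch below illustrates one such structure; the categories and routing table are assumptions for illustration.

```python
from dataclasses import dataclass, field
from datetime import date

# Sketch: every AI-raised compliance flag is assigned a named
# accountable role and tracked until resolved. Routing is assumed.
OWNERS = {
    "export_control": "Director of Research Compliance",
    "ferpa":          "University Registrar",
    "procurement":    "Chief Procurement Officer",
}

@dataclass
class ComplianceFlag:
    category: str
    detail: str
    raised_on: date
    owner: str = field(init=False)
    resolved: bool = False

    def __post_init__(self):
        # Unmapped categories escalate to a default accountable party
        self.owner = OWNERS.get(self.category, "Chief Compliance Officer")

if __name__ == "__main__":
    flag = ComplianceFlag("ferpa", "Advising notes shared outside unit", date.today())
    print(f"{flag.category}: owned by {flag.owner}, resolved={flag.resolved}")
```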
“Leading institutions will design and implement governance structures and processes that assign clear roles and responsibilities for AI decision making, oversight, and resolution of unintended consequences.”
Anne Pifer, managing director
Principle six: Fairness and bias
AI systems can inadvertently perpetuate or amplify existing biases in data and societal structures, leading to unfair and discriminatory outcomes. Promoting fairness and non-discrimination is fundamental to building trust in AI.
Advisory and application example: Operations
Given increasing costs, financial pressures, and the war for talent, institutions are seeking ways to do more with less. AI offers multiple opportunities to improve operations, including automating processes to increase efficiency and free staff capacity for higher-value tasks. Sample areas of impact include procurement, payroll, budget reconciliations, and accounting.
Potential use case
An AI tool helps a university's e-procurement team complete initial vendor screenings. The tool shows a preference for larger, well-known vendors, potentially excluding smaller but equally capable vendors from consideration.
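A periodic disparity check can surface this kind of skew. The sketch below compares screening pass rates by vendor size and applies the four-fifths rule of thumb from disparate-impact analysis; the counts are illustrative, and vendor size stands in for whatever attribute the audit examines.

```python
# Minimal bias check, assuming the screening tool's pass/fail
# decisions and a vendor-size attribute are available for audit.
# The 0.80 threshold mirrors the "four-fifths" rule of thumb.
decisions = {
    # vendor size: (passed initial screening, total screened)
    "large": (45, 50),
    "small": (12, 40),
}

def selection_rates(decisions: dict) -> dict:
    return {group: passed / total for group, (passed, total) in decisions.items()}

def disparate_impact_ratio(decisions: dict) -> float:
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    for group, rate in selection_rates(decisions).items():
        print(f"{group} vendors pass rate: {rate:.0%}")
    ratio = disparate_impact_ratio(decisions)
    verdict = "(review for bias)" if ratio < 0.80 else "(within threshold)"
    print(f"Impact ratio: {ratio:.2f} {verdict}")
```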
“Identifying and mitigating biases throughout the entire AI life cycle leads to fair, impartial, and non-discriminatory solutions that foster more equitable outcomes for all.”
Kurt Dorschel, principal
Principle seven: Data privacy
Using personal data in AI systems raises significant privacy concerns, as misuse or breaches can cause harm and erode trust. Respecting data privacy and confidentiality obligations is not only an ethical imperative but also a legal requirement in most jurisdictions, and it is critical to maintaining trust.
Advisory and application example: Student data privacy
Federal privacy regulations, such as the Family Educational Rights and Privacy Act (FERPA), have heightened scrutiny and expectations regarding student data use and protection. While AI provides colleges and universities with new opportunities to increase efficiency in the face of rising costs, it also introduces new concerns about student data privacy, including unauthorized access, breaches, and other threats.
Potential use case
An AI-powered e-learning platform collects extensive student data, including personal and academic information, but lacks sufficient security measures to prevent data breaches and FERPA violations.
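A first line of defense is data minimization: stripping direct identifiers from student records before any AI component sees them. The sketch below shows one such step; the field names and redaction pattern are simplifying assumptions, and FERPA compliance also requires access controls, encryption, and retention policies.

```python
import re

# Illustrative data-minimization step: remove direct identifiers
# from a student record before it reaches any AI component.
# Field names and the regex are simplifying assumptions.
DIRECT_IDENTIFIERS = {"name", "email", "student_id", "ssn"}
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def minimize(record: dict) -> dict:
    """Drop identifier fields and scrub emails from free text."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    for k, v in cleaned.items():
        if isinstance(v, str):
            cleaned[k] = EMAIL_RE.sub("[REDACTED]", v)
    return cleaned

if __name__ == "__main__":
    record = {"name": "Jane Doe", "student_id": "A123456",
              "course": "BIO 101", "grade": "B+",
              "note": "Follow up at jdoe@university.edu about tutoring"}
    print(minimize(record))
    # {'course': 'BIO 101', 'grade': 'B+',
    #  'note': 'Follow up at [REDACTED] about tutoring'}
```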
“Safeguarding the privacy and confidentiality of individuals’ data is critical to protecting against misuse and building trust.”
Mark Cianca, principal