Responsible AI for higher education and research

In Brief


Seven principles and practical applications


  • Deploying artificial intelligence (AI) across a college or university adds significant value to individual and institutional capabilities. It empowers faculty, staff, and students to achieve more and allows institutions to deliver their missions more effectively and efficiently.
  • As institutions plan for and roll out AI solutions across their ecosystems, they should evaluate AI's practical applications and take deliberate steps to implement AI responsibly.
  • Enforcing responsible AI practices is essential for organizations to build trust, ensure positive outcomes, and comply with evolving regulations.
  • Following are seven principles and sample use cases for responsible AI that illustrate how colleges and universities can apply the technology to accelerate processes, uncover new insights, and thrive in an increasingly competitive and dynamic landscape.
  • In practice, all seven principles should be considered and applied together when conceiving, developing, and deploying AI solutions.

Principle one: Human centeredness

AI should be used as a tool to empower humans by working alongside them to make them more efficient and effective — it should not replace or undermine them. Prioritizing human-centered design ensures AI systems align with human values, agency, and needs, and respect human dignity and integrity.

Advisory and application example: Research administration

The global research landscape continues to increase in complexity and competitiveness. How can research institutions support their investigators and expand their research portfolios amid increased competition for limited federal funding dollars, a complex and ever-changing regulatory environment, heightened scrutiny from regulators and the media, and record staff turnover and knowledge flight?

Leveraging the power of AI can help enable efficiency, compliance, and enhanced utilization of research funds. It can also help improve awards management and customer service to research faculty members and support institutions in meeting their research strategy and growth goals.

Potential use case

A principal investigator (PI) chatbot handles investigator inquiries and provides information on institutional and sponsor policies and investigators’ award(s). Investigators complain that this tool will replace human research administrators, leading to impersonal interactions and a lack of proper support for managing their complex or unique award issues.

Considerations
  • How can we better facilitate identifying funding opportunities and increasing proposal win rates?
  • In what ways can we deliver high-quality service to faculty and their research teams?
  • How can we reduce administrative burden, freeing staff to review high-risk or complex awards or studies and investigators to focus on scientific outcomes?
  • What methods can we use to stay ahead of the regulatory environment and funding landscape?
  • Can we enhance compliance risk mitigation through automated checking?
  • What are the best practices to comply with strict sponsor deadlines more easily?
  • How is it possible to do the same amount of work, or more, with fewer resources?
  • What are effective ways to train new team members quickly while reducing the training burden on current staff?
Approach
  • Involve end-users and communities in AI system design to incorporate their experiences, preferences, and values, fostering inclusivity and relevance.
  • Develop AI systems that empower individuals with insightful tools that supplement their decision making and goal achievement, positioning technology as a supportive aid.
  • Prioritize human oversight in AI, particularly in critical areas like research administration, where ethical judgments and adherence to regulations are vital.
  • Design features that encourage human involvement in AI decision making and require review of AI findings to anticipate downstream implications.
  • Ensure research administrators remain essential in guiding investigators and their teams through policies and procedures, answering complex proposal or award inquiries (especially those involving unique sponsor demands), and managing high-risk or vital institutional awards.
  • Build user-friendly AI interfaces that are easy to use, accessible, and satisfying, and that account for users' diverse capabilities and preferences.
Outcomes
  • A chatbot that augments, rather than replaces, human research administration staff by handling routine inquiries and finding relevant policies and procedures to reference during proposal preparation or award management (a minimal routing sketch follows this list).
  • A structure that provides investigators with access to human support for complex matters beyond basic AI responses and provides white-glove service when needed.
  • Processes that ensure that the day-to-day needs of all investigators are met promptly, particularly when working against strict sponsor deadlines.
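
To make the escalation outcome concrete, here is a minimal Python sketch of the routing logic such a chatbot might use. All names, keywords, and policy snippets (route_inquiry, POLICY_FAQ, the escalation triggers) are hypothetical illustrations; a production tool would draw on the institution's actual knowledge base and ticketing workflow.

```python
from dataclasses import dataclass

# Hypothetical escalation triggers; a real deployment would tune these
# against the institution's own inquiry history.
ESCALATION_KEYWORDS = {"high-risk", "audit", "dispute", "waiver", "subaward"}

# Toy stand-in for an institutional policy knowledge base.
POLICY_FAQ = {
    "effort reporting": "Effort reports are certified each term; see the research policy manual.",
    "no-cost extension": "No-cost extensions are typically requested 30 days before the award end date.",
}

@dataclass
class Response:
    answer: str
    escalated_to_human: bool

def route_inquiry(text: str) -> Response:
    """Answer routine policy questions; route anything complex to a person."""
    lowered = text.lower()
    # Human oversight first: complex or high-risk topics always go to staff.
    if any(keyword in lowered for keyword in ESCALATION_KEYWORDS):
        return Response("Connecting you with a research administrator for personal support.", True)
    for topic, answer in POLICY_FAQ.items():
        if topic in lowered:
            return Response(answer, False)
    # Default to a human rather than guessing: the bot augments, not replaces.
    return Response("I couldn't find a policy match; routing you to a research administrator.", True)

print(route_inquiry("How does effort reporting work?"))
print(route_inquiry("I need a deadline waiver on my subaward."))
```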

“AI systems truly excel when they are ‘by our side,’ providing personalized support that amplifies our capacity for creativity and complex analysis.”

Sonia Singh, managing director


Principle two: Safety and security

While the positive impact of AI is undeniable, it is essential to proactively address potential privacy and security risks associated with its adoption. Prioritizing safety and security builds trust in AI, fosters responsible innovation, reduces enterprise compliance risk, and maximizes potential benefits.

Advisory and application example: IT services

Many safety concerns regarding the implementation of AI across colleges and universities relate to IT infrastructure and the ability to manage the technology's intended and unintended impacts. AI adds another complex layer to IT services and management: it gives institutions more tools to increase efficiency and support their operations, but it also opens the door to potential security, safety, and data privacy concerns.


Potential use case

An internal IT communications platform incorporates AI for chatbots and automated responses to student, faculty, and staff inquiries. The platform is a valuable, responsive educational tool, but system vulnerabilities leave it susceptible to unauthorized access by outside users and to the spread of misinformation.

Considerations
  • Are our technology strategies and related governance, policies, and procedures robust enough to account for AI impacts?
  • Are our existing systems operating optimally, or are updates needed before incorporating or utilizing AI functionality?
  • Where do technology gaps exist, and how could these be filled or supplemented with AI?
  • How do we ensure faculty, staff, and students know and are appropriately trained on new solutions?
Approach
  • Assess risks throughout the AI life cycle, accounting for technical and societal factors, including risks associated with third-party AI models.
  • Validate or establish cybersecurity protocols to defend against unexpected system failures and attacks.
  • Perform comprehensive testing under varied conditions to identify potential failures and unintended consequences.
  • Establish human oversight mechanisms to supervise and intervene in AI operations.
  • Develop and test containment protocols to limit the effects of malfunction or unintended AI behavior.
Outcomes
  • A platform with strict authentication mechanisms and continuous monitoring of suspicious activities to enhance security.
  • Implemented fail-safes to ensure the system can quickly isolate affected components and maintain overall integrity during a potential security breach (a minimal lockout-and-quarantine sketch follows this list).
  • Coordination with the institution’s other existing technology solutions and IT security policies and practices.
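
As an illustration of the authentication and fail-safe outcomes above, the following Python sketch shows one way to lock an account after repeated failed logins and quarantine a compromised component. Thresholds, user IDs, and component names are hypothetical; a real platform would integrate with the institution's identity provider and incident-response runbooks.

```python
import time
from collections import defaultdict

# Hypothetical thresholds; real values come from institutional security policy.
MAX_FAILED_LOGINS = 5
WINDOW_SECONDS = 300

failed_attempts: dict[str, list[float]] = defaultdict(list)
quarantined_components: set[str] = set()

def record_failed_login(user_id: str) -> bool:
    """Return True if the account should be locked pending human review."""
    now = time.time()
    # Keep only attempts inside the sliding window, then add this one.
    attempts = [t for t in failed_attempts[user_id] if now - t < WINDOW_SECONDS]
    attempts.append(now)
    failed_attempts[user_id] = attempts
    return len(attempts) >= MAX_FAILED_LOGINS

def quarantine(component: str) -> None:
    """Fail-safe: isolate an affected component so the rest of the platform stays up."""
    quarantined_components.add(component)

# Usage: five rapid failures trip the lock, and the affected service is isolated.
for _ in range(5):
    locked = record_failed_login("outside-user-123")
if locked:
    quarantine("chatbot-service")
print(quarantined_components)
```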

“As institutions more frequently utilize AI, they must first ensure that proper IT governance and service capabilities are in place to optimize its use and reduce risk.”

Matt Jones, managing director


Principle three: Validity and reliability

A common concern about AI-generated content is the potential for inaccuracy. AI systems occasionally produce inaccurate or unreliable results that can lead to misguided decisions, harmful outcomes, and loss of user trust. By auditing AI-generated content and outputs and addressing errors, institutions can ensure accurate and informed decision making and build credibility and rapport with users.

Advisory and application example: Advancement

Maintaining and leveraging reliable constituent and gift data is vital to institutional advancement. Trustworthy, accurate AI outputs can enable effective communication and relationship-building with donors, help maximize the utilization of gifts and other restricted fund sources, and streamline fundraising processes.

Potential use case

An advancement AI tool analyzes historical donor data to predict trends in donor composition and gift size. Concerns arise regarding the validity of these predictions, which can significantly affect marketing and donor outreach strategies, interactions, and gifts.

Considerations
  • How can we more efficiently and effectively engage with constituents?
  • How can we better identify prospective donors and estimate the magnitude of their donation?
  • How can we maximize donor outreach strategies or marketing campaigns, including leveraging AI to prompt outreach or draft communications?
  • Once funds are received, how can we more efficiently analyze them and ensure they are used appropriately and for maximum benefit?
Approach
  • Train and validate AI models with top-quality, diverse data (both quantitative and qualitative).
  • Assess AI models' accuracy and generalizability using testing and validation, including models sourced from third parties.
  • Monitor AI in practical settings to identify and manage performance degradation, concept drift, or unexpected biases.
  • Enhance error detection and management techniques and create ways to limit the impact of inaccurate or unreliable predictions.
Outcomes
  • An AI model trained with high-quality, reliable, and complete datasets.
  • Incorporated anomaly detection and error correction mechanisms to ensure the reliability of the data and analyses over time (a simple drift check is sketched below).
  • Predictions that are validated through A/B testing and feedback from advancement office reviews and donor interaction.
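
One lightweight way to implement the monitoring outcome above is a statistical drift check on the model's outputs. The Python sketch below, with entirely hypothetical gift-size figures, flags when recent predictions stray from the validation-era baseline; a production pipeline would use a formal test (e.g., Kolmogorov-Smirnov) and tie alerts to a revalidation workflow.

```python
import statistics

def drift_alert(baseline: list[float], recent: list[float], tolerance: float = 2.0) -> bool:
    """Flag concept drift when recent predictions shift away from the baseline.

    A simple mean-shift check measured in baseline standard deviations.
    """
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    shift = abs(statistics.mean(recent) - mu)
    return shift > tolerance * sigma

# Hypothetical predicted gift sizes (in thousands): the model's validation-era
# outputs vs. what it is producing in the field this quarter.
baseline_preds = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.0, 4.7]
recent_preds = [7.9, 8.3, 8.1, 7.6]

if drift_alert(baseline_preds, recent_preds):
    print("Prediction drift detected: route the model for revalidation before use.")
```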

“AI solutions require rigorous testing, validation, and monitoring to ensure accuracy and consistency. By achieving reliable AI outputs, university leaders and staff are empowered to make informed, accountable decisions for their institution.”

Alex Faklis, managing director


Principle four: Explainability and transparency

Because AI models typically operate as closed functions (i.e., users and stakeholders can see only the inputs and outputs, not the internal logic guiding the AI), transparency must be prioritized. Institutions can promote and maintain trust with their stakeholders and help influence system reliability by providing clear documentation, interpretable models, and accessible explanations for AI use and outputs. Communication is critical for securing user buy-in and helping institutions realize AI's full potential.

Advisory and application example: Student lifecycle management

Due to the hyper-competitive enrollment environment and the looming demographic enrollment cliff, colleges and universities are increasingly leaning on technological innovation to operate more efficiently and remain competitive. From identifying prospective applicants to admissions review to onboarding and support through graduation, AI can help institutions serve students more effectively during every lifecycle phase. AI can be an effective partner for college and university staff as long as schools remain transparent with students and other constituents about AI use, document system design for easy explainability, and ensure those interacting directly with AI have the proper context to interpret its outputs.

Potential use case

A university deploys an AI-powered screening tool to take a first pass at scoring student applications and routing them for further review. Admissions counselors are unsure how the tool screens prospective students’ applications and raise concerns about potential discrimination or improper categorization for routing and other anomalies throughout the student lifecycle.

Considerations
  • How can AI help identify prospective applicants and support or prompt interaction?
  • Can AI help match students with external scholarships that can be utilized/applied at our university?
  • How can our university better predict final enrollment numbers and demographics?
  • In what ways can we identify at-risk students and automatically suggest personalized academic support?
  • How can AI facilitate helpdesk or support service inquiries from students, answering common questions and directing them to appropriate resources?
  • Can AI assist students in outlining a graduation plan or analyzing graduation timelines based on historical and current credit data?
Approach
  • Disclose AI usage from the start and help stakeholders understand its role in service delivery.
  • Provide clear, concise explanations of AI decisions and retain a comprehensible rationale for each output.
  • Utilize easily explained models, such as decision trees or linear regression, balancing interpretability with performance.
  • Develop tools that explain AI decision making to all audiences, regardless of technical ability.
  • Provide stakeholders with accessible details on AI system design, data sourcing, model assumptions, and decision making.
  • Emphasize the human oversight element employed to train and refine AI and ultimately make determinations.
Outcomes
  • An AI model trained with high-quality and complete datasets that can be easily referenced.
  • An integrated explainability/reference feature that highlights (in the case of admissions) the application elements reviewed and the criteria the AI model used, for confirmation and refinement by the admissions office (a decision-tree example is sketched below).
  • Predictions that are continuously validated and refined through testing and feedback from relevant student lifecycle offices.
  • AI systems directly tied to enterprise data systems (ERPs, CRM platforms) where applicable, so relevant source data is easily traced and transparent.
  • Documentation on AI solution design approach and assumptions to be referenced by relevant stakeholders.
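
The "easily explained models" approach named above can be made concrete with a shallow decision tree. This sketch (assuming scikit-learn is installed; the features, scores, and labels are entirely synthetic) prints the exact rules the model applies, which is the kind of artifact an admissions office can inspect, question, and override.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical application features; a real tool would use audited,
# institutionally approved criteria.
feature_names = ["gpa", "essay_score", "recommendation_score"]
X = [
    [3.9, 8, 9], [3.4, 6, 7], [2.8, 5, 4], [3.7, 9, 8],
    [2.5, 4, 5], [3.1, 7, 6], [3.8, 8, 7], [2.9, 5, 6],
]
# 1 = route to full committee review, 0 = route to standard review queue.
# Note: the model only routes applications; humans make all decisions.
y = [1, 0, 0, 1, 0, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text yields the literal decision rules, so counselors can see
# exactly which application elements drive each routing suggestion.
print(export_text(tree, feature_names=feature_names))
```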

“Providing clear explanations about decisions and processes to technical and non-technical stakeholders and ensuring that AI systems are understandable and interpretable builds trust in AI, promotes fairness, and enables educated human oversight.”

Rob Bielby, managing director


Principle five: Accountability

Accountability is a fundamental ethical principle that must be emphasized during AI implementation and rollout. It connects AI to roles and responsibilities across a college or university and highlights the necessary role humans play in AI development and use. Ultimately, AI recommendations are inputs to the decision-making process, with the final responsibility resting on an individual.

Advisory and application example: Enterprise risk and compliance

As institutions face heightened scrutiny from students and their families, alumni, and external stakeholders, implementing robust enterprise risk management (ERM) and compliance programs is becoming a focus for decision makers. When leveraging AI, institutions must continuously react to regulatory, social, and political changes as new issues arise that may affect students, faculty, researchers, and staff. AI governance, policies, and oversight should be tracked and managed as a crucial element of an institution's ERM program.

Potential use case

The internal audit team uses an AI tool to monitor compliance with policies and regulations. Because clear accountability was not established, compliance issues that arise are not consistently addressed, leading to potential regulatory breaches.

Considerations
  • How do we mitigate risks to university financial, student, and employee data associated with AI insights?
  • Do we have the appropriate governance structures and policies for AI decision making and oversight?
  • How can we better collect and use data to inform decision making for strategic planning and risk management?
  • What actions can we take to identify the highest priority risks so that we can react appropriately?
  • How can we ensure our risk mitigation efforts are working most effectively?
  • What can we do to stay ahead of the changing regulatory environment and funding landscape?
  • Are there ways to enhance compliance risk mitigation through automated checking?
Approach
  • Assign roles and responsibilities to all AI life cycle stakeholders for each phase.
  • Promote accountability through dedicated monitoring, auditing, and third-party model reviews, ensuring compliance with ethical, legal, and enterprise standards.
  • Establish mechanisms to promote accountability, including consequences for non-compliance with principles and policies.
  • Establish guidelines for AI systems to comply with all relevant laws, including intellectual property, privacy, anti-discrimination, and other regulations.
  • Implement a regular, consistent cadence of proactive risk conversations and reviews, covering analysis of inaccuracies, misuse, and other issues, with the aim of ongoing learning and responsible supervision.
Outcomes
  • Formal coordination across the internal audit, compliance, and risk management teams for AI oversight and incorporation into institutional ERM programs.
  • Assignment of specific roles for monitoring and addressing compliance alerts related to AI systems (an ownership-mapping sketch follows this list).
  • Implementation of regular compliance audits and accountability reports for AI solutions to ensure all issues are tracked and resolved promptly, maintaining adherence to internal policies and external regulations.
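
A minimal sketch of the role-assignment outcome above: every AI compliance alert category maps to a named accountable owner, and each assignment is time-stamped in an audit trail. The categories and owner titles are hypothetical placeholders for an institution's actual governance structure.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical ownership map: every alert category has a named accountable owner.
ALERT_OWNERS = {
    "data_privacy": "compliance_officer",
    "model_drift": "ai_oversight_committee",
    "policy_violation": "internal_audit",
}

@dataclass
class ComplianceAlert:
    category: str
    detail: str
    owner: str = ""
    audit_trail: list[str] = field(default_factory=list)

def assign_and_log(alert: ComplianceAlert) -> ComplianceAlert:
    """Assign a named owner and record the assignment in the audit trail."""
    # Unmapped categories escalate to a default owner rather than going unowned.
    alert.owner = ALERT_OWNERS.get(alert.category, "chief_risk_officer")
    stamp = datetime.now(timezone.utc).isoformat()
    alert.audit_trail.append(f"{stamp}: assigned to {alert.owner}")
    return alert

alert = assign_and_log(ComplianceAlert("model_drift", "Audit tool flagged stale training data."))
print(alert.owner, alert.audit_trail)
```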

“Leading institutions will design and implement governance structures and processes that assign clear roles and responsibilities for AI decision making, oversight, and resolution of unintended consequences.”

Anne Pifer, managing director


Principle six: Fairness and bias

AI systems can inadvertently perpetuate or amplify existing biases in data and societal structures, leading to unfair and discriminatory outcomes. Promoting fairness and non-discrimination is fundamental to building trust in AI.

Advisory and application example: Operations

Given increasing costs, financial pressures, and the war for talent, institutions are seeking ways to do more with less. AI offers multiple opportunities for improving operations, including automated processes to increase efficiency and staff capacity for higher-value tasks. Sample areas of influence include procurement, payroll, budget reconciliations, and accounting.

Potential use case

An AI tool helps a university's e-procurement team complete an initial screening for vendors. The procurement tool shows a preference for larger, well-known vendors, potentially excluding smaller, equally capable vendors from consideration.

Considerations
  • How do we mitigate risks associated with AI insight into university financial, HR, vendor relationships, or other data?
  • Which historical costs can be offset, and how should resources be reallocated as we build multiyear forecasts?
  • How do we plan or prepare for AI implementation across departments and not just within specific processes?
  • Can AI help determine how academic, research, and administrative space can be used more effectively?
Approach
  • Perform bias audits and analyze model predictions to pinpoint and rectify unfairness and discrimination, including third-party model biases.
  • Implement strategies to effectively handle explicit and implicit biases and consider setting up a bias council for ongoing evaluation.
  • Apply bias mitigation techniques, such as data preprocessing, algorithm alterations, and fairness constraints.
  • Evaluate AI systems' performance across diverse demographic and socioeconomic groups.
  • Engage with potentially affected communities for feedback to consider varied perspectives and address biases.
Outcomes
  • Expanded vendor selection criteria that include diverse vendor profiles, ensuring smaller vendors are considered.
  • An updated AI model that is retrained using fairness-aware algorithms and a representative vendor dataset.
  • Regular audits and a documented, transparent selection process that help ensure fair opportunities for all vendors (a simple selection-rate audit is sketched below).
  • A policy enabling the e-procurement team to select the most appropriate vendor for their needs, regardless of size.
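
To illustrate what a bias audit might measure in the vendor-screening scenario, the sketch below computes selection rates by vendor size and the disparate-impact ratio (the "four-fifths rule" commonly used in fairness reviews). The outcome counts are invented for illustration.

```python
from collections import Counter

def selection_rates(records: list[tuple[str, bool]]) -> dict[str, float]:
    """Selection rate per group from (group, selected) records."""
    totals, selected = Counter(), Counter()
    for group, was_selected in records:
        totals[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(rates: dict[str, float]) -> float:
    """Ratio of lowest to highest group selection rate; < 0.8 is a red flag."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes by vendor size.
records = (
    [("large", True)] * 18 + [("large", False)] * 2
    + [("small", True)] * 6 + [("small", False)] * 14
)

rates = selection_rates(records)
print(rates, "disparate impact:", round(disparate_impact(rates), 2))
# large: 0.9, small: 0.3 -> ratio 0.33, well under 0.8: review and retrain before use.
```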

“Identifying and mitigating biases throughout the entire AI life cycle leads to fair, impartial, and non-discriminatory solutions that foster more equitable outcomes for all.”

Kurt Dorschel, principal


Principle seven: Data privacy

Using personal data in AI systems raises significant privacy concerns, as misuse or breaches can cause harm and erode trust. Respecting data privacy and confidentiality obligations is not only an ethical imperative but, in most jurisdictions, a legal requirement, and it is critical for maintaining trust.

Advisory and application example: Student data privacy

Federal privacy regulations, such as the Family Educational Rights and Privacy Act (FERPA), have heightened scrutiny and expectations regarding student data use and protection. While AI provides colleges and universities with new opportunities to increase efficiency in the face of rising costs, it also introduces new concerns about student data privacy, including unauthorized access, breaches, and other threats.

Potential use case

An AI-powered e-learning platform collects extensive student data, including personal and academic information, but lacks sufficient security measures to prevent potential data breaches and FERPA violations.

Considerations
  • Is it possible to match students with external scholarships?
  • How can AI facilitate student enrollment and onboarding by answering common questions and directing students to resources?
  • What role can AI play in assisting students with class registration? Can it understand the demand for a particular class or section?
  • Can AI identify at-risk students and suggest personalized support? Is it able to analyze planned courses, majors, or transcripts for graduation requirements?
  • Can we use AI for helpdesk or support service inquiries from students?
  • How can AI assist students in campus life involvement by directing them to clubs, housing, or other activities aligned with their interests?
Approach
  • Implement robust data security measures, including encryption, access controls, anonymization, and regular security audits, to protect data.
  • Collect and use the minimum amount of data necessary.
  • Obtain informed consent to use student data and communicate the purpose, scope, and implications of data collection.
  • Provide students with mechanisms to access, correct, or delete their information.
  • Review and update practices regularly to align with legal requirements and best practices.
  • Establish, validate, and test protocols and response plans for data breaches, including timely notification to affected students and authorities.
  • Promote responsible data usage and storage when training AI systems. Do not repurpose or store such data without explicit consent.
Outcomes
  • Installation of robust encryption and access-control mechanisms in the e-learning platform (see the data-minimization and encryption sketch below).
  • Updated security protocols to address emerging threats and vulnerabilities.
  • Clear and comprehensive information provided to students on data collection and usage practices when using the platform, including their rights to access, modify, and delete their data.
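
As a sketch of the data-minimization and encryption outcomes above, the snippet below drops fields a feature does not need and encrypts a sensitive identifier at rest. It assumes the third-party cryptography package is installed, the field names are hypothetical, and in production the key would live in a key-management service rather than in code.

```python
from cryptography.fernet import Fernet

# Data minimization: the only fields this hypothetical e-learning feature needs.
ALLOWED_FIELDS = {"student_id", "course_id", "quiz_score"}

def minimize(record: dict) -> dict:
    """Drop any field the feature does not strictly need."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

key = Fernet.generate_key()  # in production: fetched from a key management service
cipher = Fernet(key)

def encrypt_value(value: str) -> bytes:
    """Encrypt a sensitive field so it is protected at rest."""
    return cipher.encrypt(value.encode())

raw = {"student_id": "S123", "home_address": "...", "quiz_score": "92", "course_id": "BIO101"}
kept = minimize(raw)  # home_address never enters the system
kept["student_id"] = encrypt_value(kept["student_id"])  # protected at rest
print(kept.keys())
```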

“Safeguarding the privacy and confidentiality of individuals’ data is critical to protecting against misuse and building trust.”

Mark Cianca, principal

