Secure Your AI with CARE

Advancing cutting-edge research and interdisciplinary collaboration to ensure AI systems are designed and used in ways that maximize safety and societal benefit while minimizing risks

Our Vision & Mission

Our Vision

We envision a future where AI technologies are designed, implemented, and regulated in a manner that maximizes benefits while minimizing potential risks and harm.

By integrating diverse perspectives from academia, industry, and society, we aim to shape AI development and governance for the betterment of humanity.

Our Mission

The mission of CARE is to serve as a focal point for pioneering research in socially accountable and safe AI, while actively bridging AI development and societal concerns.

We aim to ensure that AI systems are designed and used in ways that respect human autonomy, promote fairness, and enhance human welfare.

Research Pillars

Our center focuses on seven key interdisciplinary research areas to ensure AI safety and responsible development across various domains

AI Safety + Science

Building unified red teaming/evaluation platforms for AI4Science models in collaboration with national labs and research centers, ensuring safe application in scientific domains.

AI Safety + Data/Computing

Ensuring secure, reliable, and efficient AI systems with robust data integrity, computational reliability, and resistance to adversarial threats.

AI Safety + Algorithms

Developing fundamental principles and methodologies to ensure AI systems operate safely and predictably with strong theoretical foundations and guarantees.

AI Safety + Medicine/Healthcare

Creating robust safety mechanisms for AI systems in healthcare settings, ensuring reliable, unbiased recommendations while maintaining patient privacy.

AI Safety + Economics

Proposing novel mechanism designs and risk assessments for AI in economic activities, ensuring sustainable and beneficial economic impacts.

AI Safety + Law/Policy

Exploring regulatory frameworks, participatory governance, and ethical guidelines to ensure responsible AI development and deployment.

AI Safety + Social Good

Leveraging AI to address critical social challenges while promoting equity, inclusivity, and accessibility, particularly for marginalized communities.

Our Interdisciplinary Approach

Bringing together experts from various fields to tackle the complex challenges of AI safety

Collaborative Excellence

The complexity of AI safety requires expertise from multiple disciplines. At CARE, we bring together leading researchers from computer science, law, philosophy, psychology, and engineering to study the technical and ethical challenges of developing safe and beneficial AI.

Our center facilitates collaboration across computer science, data science, the social sciences, law, and philosophy. For instance, cognitive psychologists provide insights into human values and ethics, while legal scholars analyze regulatory gaps and governance needs.

  • Developing new technical methods for AI safety and alignment
  • Creating theoretical frameworks to understand and predict AI behavior
  • Designing governance protocols for responsible AI deployment
  • Building unified evaluation platforms for AI models across domains
  • Establishing mathematical foundations and formal guarantees for AI safety

Community Building & Outreach

Fostering a diverse and inclusive community dedicated to AI safety

Cultivating Talent & Collaboration

The center provides extensive mentoring and professional development for students and early-career researchers in AI safety. We recruit a diverse cohort of PhD students, postdocs, and research staff and pair each with a senior mentor.

Visiting Scholar Program

Bringing junior scholars from minority-serving institutions to conduct collaborative research with our faculty

Diversity & Inclusion

Targeted outreach to engage women and underrepresented minorities in AI safety research

Annual Conference

Bringing together academics, industry researchers, policymakers, and advocacy groups

Industry Retreats

Facilitating discussions with industry partners on emerging AI safety problems

Our Partnerships

Collaborating with leading organizations to advance AI safety research

What Success Looks Like

Measuring our impact and setting benchmarks for progress in AI safety research

Success for our research center will be defined by several key outcomes that advance the field of AI safety and responsible AI development:

  • Significant advancements in the safety of foundation models across different domains including science, medicine, economics, and social applications
  • Development of scalable and transferable methodologies that can be widely adopted by the AI community
  • Creation of a unified platform for safe data collection, model training, and evaluation that becomes an industry standard
  • Establishment of a vibrant interdisciplinary research community that bridges technical and social aspects of AI safety
  • Demonstrable improvements in the safety, fairness, and truthfulness of foundation models, setting new standards for AI reliability
  • Training the next generation of AI safety researchers through our education and mentorship programs
  • Influencing policy and governance frameworks to ensure responsible AI deployment at scale

Through these achievements, we aim to position our center as a leader in AI safety research, attracting additional funding and support to ensure long-term sustainability and impact.

Get Involved with CARE

Join our community and contribute to the advancement of safe and responsible AI. Whether you're a researcher, student, industry professional, or policy advocate, there's a place for you in our community.