Advancing cutting-edge research and interdisciplinary collaboration to ensure AI systems are designed and used in ways that maximize safety and societal benefit while minimizing risks
We envision a future where AI technologies are designed, implemented, and regulated in a manner that maximizes benefits while minimizing potential risks and harm.
By integrating diverse perspectives from academia, industry, and society, we aim to shape AI development and governance for the betterment of humanity.
The mission of CARE is to serve as a focal point for pioneering research on socially accountable and safe AI, and to actively bridge AI research and societal concerns.
We aim to ensure that AI systems are designed and used in ways that respect human autonomy, promote fairness, and enhance human welfare.
Our center focuses on seven key interdisciplinary research areas to ensure AI safety and responsible development across various domains
Building unified red-teaming and evaluation platforms for AI4Science models in collaboration with national labs and research centers, ensuring their safe application in scientific domains.
Ensuring secure, reliable, and efficient AI systems with robust data integrity, computational reliability, and resistance to adversarial threats.
Developing fundamental principles and methodologies to ensure AI systems operate safely and predictably with strong theoretical foundations and guarantees.
Creating robust safety mechanisms for AI systems in healthcare settings, ensuring reliable, unbiased recommendations while maintaining patient privacy.
Proposing novel mechanism designs and risk assessments for AI in economic activities, ensuring sustainable and beneficial economic impacts.
Exploring regulatory frameworks, participatory governance, and ethical guidelines to ensure responsible AI development and deployment.
Leveraging AI to address critical social challenges while promoting equity, inclusivity, and accessibility, particularly for marginalized communities.
Bringing together experts from various fields to tackle the complex challenges of AI safety
The complexity of AI safety requires expertise from multiple disciplines. At CARE, we bring together leading researchers from computer science, law, philosophy, psychology, and engineering to study the technical and ethical challenges of developing safe and beneficial AI.
Our research center facilitates collaboration across computer science, data science, psychology, the social sciences, law, and philosophy. For instance, cognitive psychologists provide insight into human values and ethics, while legal scholars analyze regulatory gaps and governance needs.
Fostering a diverse and inclusive community dedicated to AI safety
The center provides extensive mentoring and professional development for students and early-career researchers in AI safety. We recruit a diverse cohort of PhD students, postdocs, and researchers and pair them with senior mentors.
Hosting junior scholars from minority-serving institutions to conduct collaborative research with our faculty
Targeted outreach to involve women and underrepresented minorities in AI safety research
Bringing together academics, industry researchers, policymakers, and advocacy groups
Facilitating discussions with industry partners on emerging AI safety problems
Measuring our impact and setting benchmarks for progress in AI safety research
Success for our research center will be defined by several key outcomes that advance the field of AI safety and responsible AI development.
Achieving these outcomes will position our center as a leader in AI safety research and attract the additional funding and support needed to ensure long-term sustainability and impact.
Join us and contribute to the advancement of safe and responsible AI. Whether you're a researcher, student, industry professional, or policy advocate, there's a place for you in our community.