CARE conducts cutting-edge research across seven key directions to ensure AI systems are designed and used safely, reliably, and beneficially for humanity.
Our interdisciplinary approach brings together experts across fields to address the complex challenges of building safe and responsible AI systems.
Developing safety protocols and evaluation methods for AI systems used in scientific research, with a focus on preventing misuse while enabling beneficial applications in chemistry, biology, and materials science.
Creating infrastructure for secure data collection, efficient model training, and thorough evaluation of AI systems, ensuring that computational resources are used responsibly and safely across the AI development pipeline.
Advancing the theoretical foundations of AI safety through novel algorithms, formal verification methods, and robust optimization techniques that provide guarantees about AI system behavior and performance.
Creating robust safety mechanisms for AI systems deployed in healthcare settings, ensuring they provide reliable, unbiased recommendations while maintaining patient privacy and adhering to medical ethics.
Analyzing the economic implications of widespread AI deployment, with a focus on developing sustainable economic models that promote innovation while protecting workers and creative professionals.
Developing policy frameworks and regulatory approaches that balance innovation with safety, addressing questions of liability, accountability, and governance for increasingly capable AI systems.
Harnessing AI for positive social impact while ensuring safety and equity, focusing on applications that address global challenges like climate change, public health, and education access.
Our researchers actively present their findings at top-tier academic conferences and publish in leading journals. Explore our recent publications below.