Blog

Experiments we’re interested in supporting

Posted September 9, 2022

Eventually, we want every safety and alignment researcher in the world to be able to test their solution ideas on our platform. This will yield not only benchmarks for comparison, but also a playground for interaction, where at least some acute safety failures and harmful interaction dynamics can be discovered in silico before reaching the real world...


Our principles

Posted September 9, 2022

At Encultured, we believe advanced AI technology could be used to make the world a safer, happier, and healthier place to live. However, we also realize that AI poses an existential risk to humanity if not developed with adequate safety precautions and attention to the geopolitical consequences of deployment plans. Given this, our goal is to develop products and services that help humanity collectively steer toward the benefits and away from the risks of advanced AI systems...